Archive for the 'XML' Category
Theoretical Dogma 
August 24th, 2005
There are a multitude of ways to get data to and from the client in an AJaX application.
A recent article by Jon Tirsen [1] outlines what he feels are the three most useful methods of returning data to AJaX client applications, and makes a point of dismissing the idea of sticking to standards and other theoretical dogmas. I highly recommend reading the article, but will list the three methods he discusses:
1) simple return
2) snippet return
3) behaviour return
and I will add what I feel is an important fourth:
4) XML return
I added the last one because it is a very important tool in the AJaX toolbox: XML data can be transformed on the client very quickly using XSL-T, which reduces server load, enables the inherent use of app-server data caching, and adheres to standard design principles. You can even use various tricks to reduce the size of your XML data so that very little data is actually transferred.
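As a rough illustration of the XML-return approach, a cross-browser client-side transform might look something like the sketch below. This assumes the XML and XSL documents have already been loaded, and the "results" element ID is made up for the example:

function transformAndInsert(xmlDoc, xslDoc) {
  // Minimal sketch, assuming xmlDoc and xslDoc are already-loaded DOM
  // documents and a hypothetical <div id="results"> exists in the page.
  var target = document.getElementById("results");
  if (window.XSLTProcessor) {
    // Mozilla / Firefox: transform to a document fragment
    var processor = new XSLTProcessor();
    processor.importStylesheet(xslDoc);
    var fragment = processor.transformToFragment(xmlDoc, document);
    target.innerHTML = "";
    target.appendChild(fragment);
  } else if (window.ActiveXObject) {
    // Internet Explorer (MSXML): transformNode returns an HTML string
    target.innerHTML = xmlDoc.transformNode(xslDoc);
  }
}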
All four of these options are good if used in the proper situation. I generally agree with Jon that there should not be any of this generic-interface (SOAP, SOA, WS-*) malarkey in AJaX applications. There may be some special cases where one puts high value on being able to re-purpose AJaX data and thus makes the client and server very loosely coupled, but for the most part AJaX services will not be available to the general public and exist primarily to support the user interface.
Furthermore, due to the constraints that JavaScript places on AJaX applications in terms of latency and usability, one has to engineer both the client and server interfaces for the best performance possible; this often means performing time-consuming operations on the server rather than the client and returning ready-to-process JavaScript snippets or behaviours as opposed to raw data / XML. This is not interoperable or standards-based (yet) but pays huge dividends in terms of application usability.
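For instance, a minimal (and hypothetical) response handler for the snippet and behaviour styles might look like the following; the mode flag and "panel" element ID are made up for illustration:

function handleReturn(xhr, mode) {
  // Hypothetical sketch: the server has already done the expensive work,
  // so the client just inserts or executes whatever comes back.
  if (xhr.readyState == 4 && xhr.status == 200) {
    if (mode == "snippet") {
      // snippet return: ready-made HTML goes straight into the page
      document.getElementById("panel").innerHTML = xhr.responseText;
    } else if (mode == "behaviour") {
      // behaviour return: the response is JavaScript, so execute it
      eval(xhr.responseText);
    }
  }
}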
That being said, there may be instances where you are dealing with the same data in both internal and external applications, and it might be helpful to be more standards-based. Another strong case for standards arises when you are building AJaX-based components for use by the development community in web applications. These types of developer components should be easy to integrate into internal systems and thus can benefit from being standards-based; that is why there are standards in the first place, after all.
In the end, one has to ask several questions to determine which method to use. Some of the important questions to ask about the application would be:
- is the data going to be used by several interfaces or systems inside or outside of your company (SOA-type situations)?
- is the application a one-off (Google Suggest)?
- how important is application latency?
- how important is browser compatibility?
- how much traffic and server load is expected?
- how much client processing can be done without compromising latency goals?
- how important is it to be standards-based?
- how difficult will it be for a new developer to extend / debug the application?
- how much raw data is being transferred between the server and client?
- how much formatted (HTML or JavaScript) data is being transferred?
and so on (it's late and I am tired :)). One thing is certain: the lines that define a traditional MVC architecture can get very blurry when dealing with AJaX.
What are other metrics that people have found useful when considering data access in an AJaX application?
[1] Designs for Remote Calls in AJaX
Posted in Web2.0, AJAX, JavaScript, XML, XSLT | No Comments »
Beyond Model-View-Controller 
July 18th, 2005
Bill Scott of Sabre / Rico LiveGrid fame (who is now on his way to Yahoo!) recently posted an excellent blog entry about Ajax and its relationship with the Model-View-Controller architecture pattern [1]. In particular he focuses on how it applies to the Rico LiveGrid.
At first glance, using Ajax to implement an MVC architecture seems like a good idea. Don't get me wrong: it is without a doubt an improvement over an MVC architecture in a "traditional" or pre-XMLHttpRequest application (though I am sure there are many MVC purists who would say Ajax is an abomination). The difference between Ajax and traditional web applications is that Ajax gives you the ability to choose what data to send and receive, as well as which parts of the user interface get updated. Anyone concerned about application latency should use Ajax to send small packets of data between the View and Model through the Control layer, improving application performance because no entire page refresh is required.
So Ajax can, in many cases, cut down on the amount of data flowing between the View and Model. Having said that, one can envision situations where the MVC architecture pattern is not necessarily the best solution. One of Bill's examples is sorting. To sort data in an Ajax grid control using MVC, some event causes a request to be sent to the server, where all the data is sorted; a small subset is returned and presented in the user interface. This is very nice if you have a very large amount of data and/or if the data on the server changes often, but it can also introduce considerable latency. If you can afford to get all your data into the browser (this is obviously not the case with Sabre), either because it is small or changes infrequently (like a contact list, say), then it can be very advantageous from a latency perspective to do data manipulation, such as sorting, in the browser. Some of this type of data can even be stored on the client machine in certain browsers [2]. And if you have an Ajax grid that deals with smaller data sets, you may want to pre-sort the data by each column to decrease the latency even further.
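As a sketch of what browser-side sorting might look like, assuming the whole (small) data set is already loaded as a JavaScript array; the contact data, "grid" element, and render function here are all hypothetical:

// All names below are made up for illustration.
var contacts = [
  { name: "Baker", city: "Vancouver" },
  { name: "Able",  city: "Toronto" }
];

function renderTable(rows) {
  // build the whole table as a string array and join it once,
  // then insert via innerHTML into a hypothetical <div id="grid">
  var html = ["<table>"];
  for (var i = 0; i < rows.length; i++) {
    html.push("<tr><td>" + rows[i].name + "</td><td>" +
              rows[i].city + "</td></tr>");
  }
  html.push("</table>");
  document.getElementById("grid").innerHTML = html.join("");
}

function sortBy(column) {
  contacts.sort(function (a, b) {
    if (a[column] < b[column]) return -1;
    if (a[column] > b[column]) return 1;
    return 0;
  });
  renderTable(contacts);  // redraw locally; no server round trip
}

The point is simply that once the data lives in the browser, a sort is a local array operation plus a redraw, with no request/response cycle at all.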
Given the power of today's web browsers, there are various ways one can envision to improve the latency of Ajax operations that deviate significantly from the MVC model. It may mean less clean code or departing from traditional architecture patterns, but it can result in a much better product.
[1] Model-View-Controller at Wikipedia
[2] MSDN Persisting User Data
Posted in Web2.0, AJAX, JavaScript, XML | 4 Comments »
A is for Asynchronous 
July 15th, 2005
There has been a flurry of activity over at Ajaxian [1] regarding the asynchronicity of Ajax and Nick Lothian's [2] two ideas for dealing with it. Nick's initial ideas were 1) locking the view and 2) sending view state data with all requests [3]. The first idea certainly applies in some special cases, but the second is closer to a general solution, I think.
One of the comments on Ajaxian from Matt pretty much sums it up, though, and goes something like "yes, well, asynch programming is nothing new; it is used in Swing apps [and many others] all the time". This was the first thought I had when I saw the headline in my RSS reader. So here are my thoughts.
Building on Nick's second idea, I think that rather than sending ALL the view state data to the server, it makes sense to "store" the view state on the client (i.e. leave it alone) and create a unique identifier that is sent with each request and returned with the response. This keeps the request / response less complicated and less bloated. On the client it is then a simple task to determine which data belongs to which request and perform the appropriate action. Furthermore, you can keep track of the timing of the requests. In the case of Nick's Ajax tree control, one may come across a situation where a response has not returned from the server after clicking on a tree node (it may take a long time because of many child nodes, say) and the user eagerly clicks on another node in the tree. If the second node-click request gets back before the first, the client has to decide which request has precedence. The client can look up the request timestamp and see that there was a previous TreeNodeClick event which is still waiting for a response. As I see it there are three main paths to choose from:
1) Let the events go at their own pace (if the requests don't change the same area of the view then who cares)
2) Cancel the second, quicker event (slow down tiger, let's see what is in this first node you clicked)
3) Cancel the first, slow event and move on (obviously if they clicked somewhere else they don't care about the first)
4) Keep track of all response data and queue the response from the quick server request to occur after the slow request returns (aye, there's the rub)
Ok, that's four. Given these four options one can make up rules to decide which route to take. For example, given two events such as TreeNodeClick and TreeClose, the latter should of course take precedence and have the other events cancelled. In the end it boils down to the idea that, depending on the situation, asynchronous data requests should be able to cancel, block or ignore each other; a rough sketch of this kind of request tracking is shown below.
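Here is a minimal sketch of the bookkeeping, assuming a hypothetical pending table keyed by request id and a server that echoes the id back with each response (all names are made up for illustration):

// Minimal sketch of request tracking (all names hypothetical): each
// request gets a unique id and a timestamp; the client then decides
// whether to apply or ignore each response as it arrives.
var pending = {};  // id -> { sent: timestamp, type: event name }
var nextId = 0;

function trackRequest(type) {
  var id = ++nextId;
  pending[id] = { sent: new Date().getTime(), type: type };
  return id;  // send this id along with the request
}

function handleResponse(id, applyFn) {
  if (!pending[id]) return;  // request was cancelled; drop the data
  delete pending[id];
  applyFn();                 // update the view for this request
}

// Example rule: a TreeClose cancels any still-pending TreeNodeClick,
// so late TreeNodeClick responses are simply ignored on arrival.
function cancelPending(type) {
  for (var id in pending) {
    if (pending[id].type == type) {
      delete pending[id];
    }
  }
}

A queueing policy (option 4) would keep the dropped responses around and replay them in order once the slow request returns, but the basic bookkeeping is the same.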
What I see as the hard part about the A in Ajax is understanding how the data from various requests may change the view on the client, and the dependencies between those requests. This can certainly be onerous for the developer, but the end result is a responsive and intuitive web-based user interface.
[1] Ben/Dion and Dion's/Ben's AJaX Mission
[2] BadMagicNumber
[3] BadMagicNumber - AJAX: Best Practice for Asynchronous JavaScript
Posted in Web2.0, AJAX, JavaScript, XML | 2 Comments »
JavaScript Benchmarking - Part 1 
July 10th, 2005
As the name suggests, this is Part 1 of a series of JavaScript benchmarking posts. The reason for these is to investigate the performance of various Ajax programming tasks. This first entry investigates how the XSL-T processors of Internet Explorer and Firefox / Mozilla (on Windows 2000) compare to each other, and how they compare to pure JavaScript code that arrives at the same end result.
So what I have done is load some XML and XSL for building a table structure in an HTML page. The transformation is timed, and an average and standard deviation are taken for each browser. I used the Msxml2.DOMDocument.3.0 object in Internet Explorer and the XSLTProcessor in Firefox. The XSL-T transformation speed is then also compared to a pure JavaScript implementation. The JavaScript implementation uses the fastest method for inserting HTML into a web page [1]: a string array stores all the rows of the table, then the array join method is called to return a string that is inserted into the DOM using innerHTML, just as the XML/XSL approach does.
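For reference, the timing harness for numbers like these could be as simple as the following sketch; this is illustrative, not the exact test code, and the xmlDoc/xslDoc/output names in the usage comment are hypothetical:

// Time an operation N times and compute the mean and standard deviation.
function benchmark(fn, runs) {
  var times = [];
  for (var i = 0; i < runs; i++) {
    var start = new Date().getTime();
    fn();
    times.push(new Date().getTime() - start);
  }
  var sum = 0, sumSq = 0;
  for (var j = 0; j < times.length; j++) {
    sum += times[j];
    sumSq += times[j] * times[j];
  }
  var mean = sum / runs;
  return { mean: mean, stdDev: Math.sqrt(sumSq / runs - mean * mean) };
}

// e.g. the IE XSL-T path, timed over 50 runs:
// benchmark(function () {
//   output.innerHTML = xmlDoc.transformNode(xslDoc);
// }, 50);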
The results are somewhat surprising and can be seen in the chart below (note: the y-axis should be in ms, not s).
One can see that the XSL-T processor in Firefox / Mozilla leaves much to be desired: it is no match for the JavaScript method, nor for either the XSL-T or the JavaScript method in Internet Explorer. On the other hand, the XSL-T and JavaScript methods in Internet Explorer are more or less the same, with a slight edge going to the XSL-T method.
It is curious just how much variance there is in the Firefox XSL-T data. I am not sure what is causing this; all measurements were repeated 50 times to get the statistics, and there was nothing significantly different about the system on which the tests were run.
So for the best cross-browser performance, going with pure JavaScript is not a bad choice when presenting large amounts of data to the user. Further tests will look at the performance of XSL-T and JavaScript for sorting data, object- and class-level CSS manipulation, and the recently released Google JavaScript XSL-T implementation [2].
These types of JavaScript speed issues are very important for companies like us [3] that make high-performance Ajax controls and web-based information systems.
[1] Quirksmode
[2] Google AJAXSLT
[3] eBusiness Applications
Posted in Web, AJAX, JavaScript, XML, XSLT | 3 Comments »