Beyond Model-View-Controller 
July 18th, 2005
Bill Scott of Sabre / Rico LiveGrid fame (who is now on his way to Yahoo!) recently posted an excellent blog entry about Ajax and its relationship to the Model-View-Controller architecture pattern [1]. In particular, he focuses on how it applies to the Rico LiveGrid.
At first glance, using Ajax to implement an MVC architecture seems like a good idea. Don't get me wrong: it is without a doubt an improvement over an MVC architecture in a "traditional" or pre-XMLHttpRequest application (though I am sure there are many MVC purists who would say Ajax is an abomination). The difference between Ajax and traditional web applications is that Ajax gives you the ability to choose what data to send and receive, as well as which parts of the user interface get updated. Anyone concerned about application latency should use Ajax to send small packets of data between the View and the Model through the Controller layer; this improves application performance because it does not require an entire page refresh.
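That small-packet exchange can be sketched roughly as follows. This is only an illustration — the names (fetchPage, buildQuery, onRows) are hypothetical, and it assumes a browser with native XMLHttpRequest support:

```javascript
// Build an application/x-www-form-urlencoded query string from a parameter map.
function buildQuery(params) {
  var pairs = [];
  for (var key in params) {
    pairs.push(encodeURIComponent(key) + "=" + encodeURIComponent(params[key]));
  }
  return pairs.join("&");
}

// Ask the server-side Model for just one page of rows and hand the response
// to a callback that updates only the affected part of the View -- no page refresh.
function fetchPage(url, params, onRows) {
  var xhr = new XMLHttpRequest(); // IE 5/6 would need new ActiveXObject("Microsoft.XMLHTTP")
  xhr.open("GET", url + "?" + buildQuery(params), true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      onRows(xhr.responseText); // e.g. XML or JSON describing only the visible rows
    }
  };
  xhr.send(null);
}
```

A grid asking for rows 20–29 would then send something like `rows?offset=20&count=10` — a few hundred bytes instead of a whole page.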
So Ajax can, in many cases, cut down on the amount of data flowing between the View and the Model. Having said that, one can envision situations where the MVC architecture pattern is not necessarily the best solution. One of Bill's examples is sorting. To sort data in an Ajax grid control using MVC, some event causes a request to be sent to the server, where all the data is sorted and a small subset is returned and presented in the user interface. This works very well if you have a very large amount of data and/or if the data on the server changes often, but it can also introduce considerable latency. If you can afford to get all your data into the browser (obviously not the case with Sabre), either because it is small or changes infrequently (like a contact list, say), then it can be very advantageous from a latency perspective to do data manipulation, such as sorting, in the browser. Some of this kind of data can even be stored on the client machine in certain browsers [2]. And if you have an Ajax grid that deals with smaller data sets, you may even want to pre-sort the data by each column to decrease the latency further.
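For a small data set that is already in the browser, sorting by column is just an in-memory array operation — no request to the server at all. A minimal sketch (the data and the sortByColumn name are hypothetical):

```javascript
// A small, fully client-side data set -- e.g. a contact list.
var contacts = [
  { name: "Carol", city: "Austin" },
  { name: "Alice", city: "Boston" },
  { name: "Bob",   city: "Denver" }
];

// Sort rows in place by the given column, ascending or descending.
function sortByColumn(rows, column, descending) {
  rows.sort(function (a, b) {
    if (a[column] < b[column]) return descending ? 1 : -1;
    if (a[column] > b[column]) return descending ? -1 : 1;
    return 0;
  });
  return rows;
}

sortByColumn(contacts, "name", false);
// contacts is now ordered Alice, Bob, Carol
```

Re-rendering the grid from the sorted array is then the only cost, which is typically far cheaper than a round trip to the server.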
Given the power of today's web browsers, there are various ways to improve the latency of Ajax operations that deviate significantly from the MVC model. It may mean writing less clean code or departing from traditional architecture patterns, but it can result in a much better product.
[1] Model-View-Controller at Wikipedia
[2] MSDN Persisting User Data
This entry was posted on Monday, July 18th, 2005 at 10:57 am and is filed under Web2.0, AJAX, JavaScript, XML. You can follow any responses to this entry through the RSS 2.0 feed. You can leave a response, or trackback from your own site.

July 19th, 2005 at 10:59 am
Dave,
Excellent points.
Yes, I am coming from the large data set world.
I have in previous places (nextjet) implemented client sorting for a table and it worked well. It added a little more to the client JS, but that was easily encapsulated.
One of the assumptions we often make in dealing with getting data through ajax is that the latency is low or that the server can fetch data quickly.
When I wrote the original data grid at Sabre, I emulated the Java Swing TableModel, so an initial fetch was made against the real server data and the 10,000 rows were then cached in the app server containing the table model. Subsequent Ajax calls fetched from the app server's memory. This caching smooths out some of the latency issues.
This actually leads to some very interesting discussions about how much to put on the client vs. the server. With things like SVG and Flash we started migrating more to the client. Ajax seems to lead us to push data back to the server, but rich DHTML pushes more GUI to the client.
Thanks for the excellent feedback.
Bill
July 25th, 2005 at 11:02 am
Hi Bill,
Thanks for the comments!
This does certainly lead to some very interesting discussions about how much to put on the client vs the server. One has to have a good knowledge of all the issues such as request latency, DHTML latency (particularly innerHTML calls), script size and data pre-loading.
I think that for web application development in particular it is also very important to consider, as I mentioned, the data persistence features of Internet Explorer (which may be on their way to Mozilla). This can make something like Backpack even more responsive. For example, when new tasks etc. are added, they can be added to the client data store and sent to the server at an appropriate time. The next time the application is accessed, the only data to transfer from the server to the client is any updates that are not already in the client data set. Of course this is not something that can be done with all of your Google emails, but at least it can store the last X emails you received and just download the newest ones when you reconnect.
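As a rough sketch of that pattern (every name here is hypothetical; in Internet Explorer the records could additionally be persisted between sessions via its userData behavior), edits go into a local store plus a pending queue that is flushed to the server whenever convenient:

```javascript
// A client-side data store that records edits locally and remembers
// which ones still need to be sent to the server.
function ClientStore() {
  this.records = {}; // local copy of the data set, keyed by id
  this.pending = []; // updates not yet sent to the server
}

ClientStore.prototype.add = function (id, record) {
  this.records[id] = record;
  this.pending.push({ id: id, record: record });
};

// Drain the pending queue; `send` stands in for an Ajax call to the server.
ClientStore.prototype.flush = function (send) {
  while (this.pending.length > 0) {
    send(this.pending.shift());
  }
};
```

On the next visit, the application would then only need to ask the server for records changed since the last flush, rather than reloading the whole data set.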
Well, I think that I am going astray now, but suffice it to say that all these issues need to be considered in the context of one's application, and there are certainly no steadfast rules at this point in time.
It is kind of funny actually, I was just putting together a Yahoo! search example for our Ajax ComboBox and Grid components when you put up your sample!
Anyhow, keep up the great work and I am sure we will be seeing amazing Ajax apps from Yahoo! very soon. Good luck!
July 26th, 2005 at 11:02 am
My biggest concern is dealing with large amounts of data, on the order of millions of records. I skim portions of clients' log files for SEO purposes, and have been struggling with a way to make ALL this data conveniently accessible. So, as you speculate, this Ajax OpenRico approach is heaven-sent for this purpose. I no longer need to choose which filtered subset of these millions of records I want to show the user. And I always show the latest data. So, how about for small tables?
I did some experimenting with disconnected result sets under ASP.NET, local manipulation of data, then re-uploading the data. That's part of ASP.NET's answer to massively scalable web apps. And now that I've generalized this Ajax/OpenRico technique, I'm never going back. Whereas millions of records become fast and manageable, dozens of records become blazingly fast. HTTP latency is hardly noticeable. Webserver and database speed is almost a non-issue. Page reloads appear to have been the biggest culprit. Local data manipulation or Ajax can eliminate page reloads. But my plan is to stay consistent with my implementation, whether it's millions of records or dozens. It's just that fast.