Archive for the 'Semantic Web' Category
More on Wink and Tag Search 
January 24th, 2006
I read an interesting post by Jeff Clavier the other day and have been wondering how an implicit search context, such as that used by Wink, could work for or against you. Incidentally, I also still get a JavaScript error on the Wink homepage when I try to click on the search box.
I have posted on various issues regarding tag-based search before, and there was good discussion on a recent(ish) post by Om Malik entitled People Power vs Google. The new problem I envision is that when you search for something that is syntactically the same as, but semantically different from, concepts that you or other people have tagged, the results will be skewed in the wrong direction. It is a very good idea on Wink's part to put Google search results on the same page.
Of course this problem can be overcome with a little work by the searcher, who can craft a more exact search string; however, one could then argue that if you have to craft a more exact search string to find things outside of your tagosphere, why bother, when plain Google searching (i.e. not using tags) in your area of interest will generally return the results you want anyway? The same is generally true of del.icio.us: it is often faster to go and search on Google than to find what you are looking for on del.icio.us.
It is interesting to think about the problem in terms of information theory. When you encode the western alphabet for transmission, using something like Morse code, you would usually want to devote as few bits as possible to letters like "e" and "s" because they occur so frequently. Tag-supported search is similar, in that it reduces the effort needed to find frequently accessed information (like reducing the number of bits that represent the letter "e") by leveraging the work that people have put into tagging pages. This can also backfire, of course, when you are looking for AJAX the football club rather than AJAX the wicked-awesome programming technique, but most of the pages you tag with AJAX relate to the technology. The user essentially has to climb out of this "context pit" created by their tagging habits by specifying "AJAX amsterdam" or "AJAX football". Really it all depends on your search habits.
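The Morse-code trade-off can be sketched in a few lines of JavaScript. The letter frequencies and code lengths below are made up for the illustration (they are not real English statistics):

```javascript
// Toy illustration: frequent symbols get shorter codes, so the
// average transmitted length drops -- the same trick tag search
// plays with frequently looked-up pages.
var frequencies = { e: 0.5, s: 0.3, q: 0.15, z: 0.05 };

// Fixed-length code: 2 bits covers all 4 symbols regardless of frequency.
var fixedBits = 2;

// Variable-length (Morse-style) code: common letters get short codes.
var codeLength = { e: 1, s: 2, q: 3, z: 3 };

function averageBits(freqs, lengthFor) {
  var total = 0;
  for (var letter in freqs) {
    total += freqs[letter] * lengthFor(letter);
  }
  return total;
}

var fixedAvg = averageBits(frequencies, function () { return fixedBits; });
var morseAvg = averageBits(frequencies, function (l) { return codeLength[l]; });
// morseAvg (~1.7 bits) beats fixedAvg (2 bits) because "e" dominates.
```

The catch, as with the AJAX football club, is that a code tuned to one distribution performs badly when the distribution changes.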
I am not sure we can prevent this problem when searching for obscure topics, or for terms that collide with more heavily tagged ones. While this might be a slightly larger problem with tag-based searching, it can also be a problem with Google - the main difference being that Google bases its results on actual HTML links between pages, which, in my opinion, should generally result in a more robust and less biased result set. Will this problem become even worse once we start using things like the Semantic Web?
Posted in Web2.0, Search, Tagging, Semantic Web | No Comments »
What Makes a Service Last? 
January 20th, 2006
I have been intently following Dion, as you do, over at the good old SOA Blog. One recent post is, as usual, more of the same commentary about Web 2.0 and SOA.
In his latest post Dion suggests that:
“Writing software from scratch will continue going away. It’s just too easy to wire things together now. Witness the growth of truly amazing mash-ups of which things like Retrievr and Meebo are only two tiny examples.”
This is a bit too far off the Web 2.0 global SOA deep end for me. Retrievr is admittedly an interesting mash-up, but is it really "truly amazing"? Is it something you need to use every day - something to write home to Mom about? I suppose it could be considered amazing relative to other available mash-ups, but in general mash-up quality and usefulness is relatively low. From what I can tell, the main reasons to provide APIs for your software are to either:
a) get more users and increase your valuation when selling your Web 2.0 company to Yahoo!
b) hope that Google likes your mash-up and hires you
c) gain the support of the increasingly trendy niche of hybrid “programmer / blogger / never the cool kid in school” types to help you achieve goals a) or b)
d) attract attention to attain status of trendy hybrid “programmer / blogger / never the cool kid in school”
(please leave any other ideas in comments below)
Flickr in itself is only marginally amazing, and it was written from scratch - shock horror!
If one even considers what a mash-up really is, one finds that we have always developed software by "wiring things together", have we not? I can imagine that with every level of programming language abstraction there is some journalist somewhere who heralds it as evidence of a new golden age of programming productivity. The only difference here is that programming languages - unlike mash-ups - can actually be useful!
The really amazing software that I find myself using is that which actually *enables* the mash-ups; for example, Google and eBay have great technology and are products/services that cannot simply be created by mash'ing up a few JSON-based JavaScript streams in a browser.
In his latest post, Dion even says:
“Maybe software developers should just go back to sprouting acronyms and delivering software that doesn’t do what people want.”
To me, he is trying to say that Web 2.0 lets people build good, usable software - this is sort of true, and I am a big believer in AJaX of course. However, I would like to know how many social networking, tagging, blogging, sharing //insert buzzword here// Web 2.0 applications we need!
The actual point I was thinking about when I gave this post a title is that I just don't understand why creating REST-based services is considered so open, easy, or robust. At least with Web Services and WSDL one can automatically build a C# or Java proxy for a service, and even have JavaScript emitted for use on the client - can you do the same for the del.icio.us REST API so easily? In fact I find it astounding that an API such as Flickr's, which is actually quite robust, does not even have a standard WSDL-based description of its bindings (admittedly some aspects of the API are not complicated enough to warrant SOAP-based services, but at least a binding description would be nice). My point being that machine-readable descriptions (WSDL or otherwise) of mash-up-enabling APIs seem to be few and far between, despite the fact that they are actually quite useful for generating proxies and the like.
Also, how will these supposedly simple services work with the Semantic Web? I am not sure the Semantic Web will be that easy in itself, so does that mean we should forego it and just settle for Web 2.0, or maybe 1.5? Well, yeah, maybe we should :S I guess I could be alone in thinking that the Semantic Web is what we should really be talking about, rather than mashing up Google with Craigslist (I know, Google + Craigslist is sooooooo 2005). The whole idea of an API for a service that one has to actually physically read makes me shudder - haven't people had enough of manually mapping inputs and outputs to services (whether they are REST or otherwise)? Maybe I should quit complaining and define a REST service description language (RSDL) that is a simple version of WSDL …
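To make the hand-mapping chore concrete, here is a hedged sketch of calling a del.icio.us-style REST endpoint without any generated proxy. The endpoint path and parameter names mimic the v1 API but should be treated as illustrative, not authoritative:

```javascript
// With a plain REST API you read the human-oriented docs and build the
// request by hand -- exactly the step a WSDL-generated proxy automates.
// Endpoint shape is an assumption based on the del.icio.us v1 API.
function buildPostsGetUrl(params) {
  var base = 'https://api.del.icio.us/v1/posts/get';
  var pairs = [];
  for (var name in params) {
    pairs.push(encodeURIComponent(name) + '=' + encodeURIComponent(params[name]));
  }
  return pairs.length ? base + '?' + pairs.join('&') : base;
}

// Every input parameter, and every field of the XML that comes back,
// must be mapped by hand from prose documentation.
var url = buildPostsGetUrl({ tag: 'ajax', dt: '2005-12-01' });
```

A machine-readable service description would let a tool emit this function (and the response mapping) for you.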
I suspect this drive to simplicity is going to lead us down a path we have been on before. As you make things simpler you also, generally, make them less valuable. I know that many take the KISS principle too literally and apply it to the nth degree. Sure, Google is pretty damn complex, but they also have billions of dollars in revenue - complex and valuable. On the other hand, look at Retrievr - simple and worthless. Choose your poison.
Posted in Web2.0, XML, Service Oriented Architecture, Semantic Web | No Comments »
Social Annotation 
December 28th, 2005
I have just read about a company currently in private beta called Diigo, which is in the business of social annotation (SA).
Apparently SA is a superset of social bookmarking or tagging, which is of course the pièce de résistance of 'Web 2.0'. The question is: can SA be an even better route to getting acquired by MAGY? Don't quote me on 'MAGY' though, since I am not sure what order those names should go in…
I had been thinking about SA for some time but did not have the time / resources to get anything together for public showing - this might be a good reason to do so. Of course, given my record with getting code up on my blog, I won't have a sample until this time next year. Anyhow, the possibilities for SA are much more attractive than social bookmarking in my mind. With social annotation (at least what I consider it to be) I can surf to any web page and place tagged sticky notes (private or public) in a browser-agnostic fashion; each note contains my comments and refers to a certain block in the web page DOM. Then I can go to some central place to view / organize my comments, and I can also subscribe to RSS feeds of other people's comments on those pages, or from particular people. The main problem that I have with Diigo (from the looks of their Flash demo) is that I need to install their toolbar - yuck!
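One possible shape for such an annotation, sketched as a JSON object. The field names and the feed path are my own invention for illustration, not Diigo's actual format:

```javascript
// A tagged sticky note anchored to a block of the target page's DOM.
// All field names here are hypothetical.
var annotation = {
  url: 'http://example.com/article.html',
  // An XPath (or CSS selector) pins the note to a block in the DOM.
  anchor: { type: 'xpath', expression: '/html/body/div[2]/p[3]' },
  note: 'This paragraph contradicts the spec.',
  tags: ['ajax', 'rest'],
  visibility: 'public',      // or 'private'
  author: 'dave',
  created: '2005-12-28T10:00:00Z'
};

// A central service could then aggregate notes per page and expose
// them as an RSS feed keyed by URL (or, similarly, by author).
function feedPath(ann) {
  return '/feeds/' + encodeURIComponent(ann.url) + '.rss';
}
```

The anchor field is what distinguishes annotation from plain bookmarking: the tag applies to a block of content, not the whole page.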
The useful part of these systems for end users is that they can tag particular bits of content on a page and find exactly what they were referring to with a tag. Then if you combine this idea with microformats and the Semantic Web you might really be cooking with something combustible like methane.
This brings us to the all-important (both dreaded and revered at the same time) question of 'monetization' - I guess I have to eat somehow, but that is why I have a day job.
In a perfect world I imagine the toolbar from Diigo being essentially a web toolbar (as opposed to browser-integrated) that floats over the current page and is inserted using a bookmarklet, in true AJaX fashion. The toolbar could carry relevant ads, and there could also be relevant advertisements on the notes themselves. But hey, who needs money when you have a few hundred thousand users and 'social tagging/sharing/annotation' hype to help you implement your 'Web 2.0' exit strategy.
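The bookmarklet approach could look something like this minimal sketch; the element id and styling are purely illustrative:

```javascript
// A javascript: URL that, when clicked from the bookmarks bar,
// injects a floating toolbar <div> into whatever page is open --
// no browser-integrated toolbar install required.
var bookmarklet =
  'javascript:(function(){' +
  'var t=document.createElement("div");' +
  't.id="sa-toolbar";' +                       // hypothetical id
  't.style.cssText="position:fixed;top:0;left:0;width:100%;' +
  'background:#ffc;z-index:9999;padding:4px";' +
  't.appendChild(document.createTextNode("Annotate this page"));' +
  'document.body.appendChild(t);' +
  '})();';
```

A real version would go on to load the annotation service's script and wire up the note-placing UI, but the injection mechanism is the whole trick.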
Posted in Web2.0, Search, Tagging, Semantic Web | No Comments »
Fuel for the Tag Embers 
December 23rd, 2005
Om Malik posted about the increasing interest in people power vs the power of Google [1]. I think that tags will lose out to automated clustering (such as Vivisimo) in the short term, but that doesn't mean we will not see more players like Wink trying to get a piece of the tagging pie. Don't get me wrong - I will give Wink a chance, and I do think that services like Wink have a place in the blogosphere today, but we already have the likes of Technorati and my new favourite, Google Blog Search.
The topic of tag utility has been covered quite a bit in the past by the likes of Tim Bray [2] and Stephen Green [3] (both good canucks) and I am sure it will be discussed well into the future! On the whole I have to agree with Tim, and Stephen brings up some very interesting points from his research that should be considered. I will discuss that in a moment.
But first, there are a few issues that I can see with the new emphasis on the old idea of tagging …
- People are lazy. Who wants to waste their time rating pages when Google does a _pretty good_ job on its own?
- People who are not lazy (like geeks, maybe) cause tagged content to be heavily skewed toward their interest group, and it therefore becomes inaccessible to the majority of people.
- There is lots of meta-data (some may even call it “tags”) available to search engines based on page content - so why do more work?
- If I tag a page as "interesting", that is only in the context of what I am thinking at that moment in time. Tags can have temporal/geographic/personal dependence, which is something that is not easy to manage today.
For example, a current topic that I am very interested in is the science (or maybe art?) of data binding - i.e. how to create a binding language that provides rich mechanisms for indirection, and how to express it using a declarative / mark-up approach. This is something that is quite difficult to find information about using Google or Yahoo!. Could tagging of content help me find some obscure piece of very relevant and useful information on this topic? If someone has found it before me and tagged it with the precise tags that I would use for the topic, then maybe. However, I'm not convinced [4], and it seems that John Battelle is not either [5].
Here is the thing: people need to look beyond the tag - it is a stop-gap that has been tried many times before (web page meta keywords, anyone?). The places where tags have had some success, as Stephen mentions, are instances where you have defined vocabularies or taxonomies. Content is tagged by domain experts and integrated into a taxonomy at great expense but with great reward (this seems to be a recurring theme to me). I am not sure that people using the web want to be constrained like this - yet it is the best way to get value from tagging, so that everyone "talks the same language".
This brings me to a point that I have brought up before [4]. Forget tags. Think semantics. Think Semantic Web [6]. The discussion should not be about the value of tags but about moving towards a richer Web. More on that soon.
References
[1] People Power vs Google - Om Malik, Dec 22, 2005
[2] Do Tags Work? - Tim Bray, Mar 4, 2005
[3] Tags, keywords, and inconsistency - Stephen Green, May 13, 2005
[4] More Tags - Dave Johnson, Dec 14, 2005
[5] Will Tagging Work - John Battelle, Dec 4, 2005
[6] Tagging Tags - Dave Johnson, Dec 1, 2005
Posted in Web2.0, Search, Tagging, Semantic Web | 1 Comment »
Structured Blogging 
December 16th, 2005
Paul Kedrosky chimed in on the recent introduction of Structured Blogging (SB). Paul suggests that laziness is going to prevent SB from taking off, and I would have to agree. Like many Web 2.0 concepts, it puts too much faith in the hands of the user - and aside from overzealous alpha-geeks, it will likely be too much work for users to actually use.
As time goes on I am certainly finding that just using a search engine is actually faster than using del.icio.us, and is less work to boot! Flickr is the one exception, where tagging is actually slightly more useful [1,2] - seeing as how search engines have a hard time indexing image content. This is my common conclusion from using many different online services. Sure, I sign up for all the great new Web 2.0 / AJAX services … I signed up for Writely, and they can use me in their stats of doubling their user base every X weeks, but I am never going to use it again; not because it is not cool and slightly useful, but because I am simply too lazy.
This subject also came up yesterday as I was reading the latest fire-stoking "Five somethings about Web 2.0 / AJAX" post [3] by Dion Hinchcliffe over on the Web 2.0 blog. Dion's number one reason that Web 2.0 matters is that it "seeks to ensure that we engage ourselves, participate and collaborate together". Again I can't help but think about how lazy most people are. Sure, the people who are actually interested in Web 2.0, tagging and the like make it seem really great, but most people cannot be bothered.
For Web 2.0 to get traction beyond the alpha-geeks I think it needs to empower developers and ask less of end-users.
References
[1] More Tags - Dave Johnson, Dec 14, 2005
[2] Tagging Tags - Dave Johnson, Dec 1, 2005
[3] Five Reasons Why Web 2.0 Matters - Dion Hinchcliffe, Dec 7, 2005
Posted in Web2.0, XML, Service Oriented Architecture, Semantic Web, Microformat | No Comments »
More Tags 
December 14th, 2005
I just stumbled upon a short post by John Battelle where he asks whether tags are going to work in the long run [1].
From my point of view the only good application of tags is for data that has no computer-readable meta-data - i.e. they are a stop-gap. Photos, movies, songs, even smells (one day), are the types of information that are hard to find using a search engine. Sooner or later we should be able to search for "sunset" and Flickr will return a picture like this. However, when it comes to web pages there is plenty of information for search engines to work with. Why use a limited set of usually homogeneous tags to define a web page on del.icio.us when you can likely find it just as fast, or faster, using a search engine instead?
Furthermore, I'm lazy: I don't like to think up new tags for resources that I find, and for the most part I end up tagging almost everything with my homogeneous tag set of XML, JavaScript, blog and AJAX … go figure. So in the end tagging is only slightly better than using the favourites in my web browser.
Having said that, there is one place that tagging might actually be useful, but only to a slightly larger degree, and that is with news. Having the del.icio.us RSS feed for AJAX is great since it is essentially a human aggregated feed for AJAX news. Still, in the future I anticipate that I will likely just ask Technorati or equivalent instead.
All in all, I have quickly fallen out of love with tags and the limited use they have [2].
As for the companies that are building businesses based on tagging - it seems to be a pretty good idea.
Update: found a great post about tags here.
[1] Will Tagging Work - John Battelle, Dec 4, 2005
[2] Tagging Tags - Dave Johnson, Dec 1, 2005
Posted in Web2.0, AJAX, Business, Tagging, Semantic Web | 2 Comments »
Tagging Tags 
December 1st, 2005
I found it quite interesting some months ago when somebody posted a comment on one of my photos on Flickr asking why I had tagged it with the word "photovoltaic". It appears that I have since taken down that photo, but just take a look at this one and most people can likely see the confusion.
I am sorry, but how can we expect a couple of words to describe everything about some picture to someone who doesn't know me or know anything about the photo? At best they could say something like
“this photo is tagged with barcelona, 2005 and photovoltaic. if I Google those I find the first result is a photovoltaic conference in Barcelona in 2005 so he was probably there. but what the hell do cargo containers have to do with anything”
But when I look at that photo I think
“oh yeah that was at the photovoltaic conference in Barcelona in 2005 where I gave my talk on photon recycling and we were living in London and Ian and Annabelle came from Vancouver to visit and I felt really horrible about all those CO2 emissions from their airplane and we went to that castle in Barcelona where there was a good view of the harbour and I thought that those shipping crates looked kind of cool so I snapped this photo - I wonder what relationship this photo has with the next and previous one other than time and group? oh shit did I leave the stove on? what are the implications of cargo containers on AJAX in Spain? “
Of course this sort of thing even happens when we are talking or reading other people's writing. Just the other day - and this is what actually spurred me to write this post - I posted a response to an AJAX question in a group on Google, and for some reason a really picky guy replied to my answer complaining about my saying "data transport encoding". He suggested that I meant to say "data transport formatting", because encoding _really_ means ASCII, UTF-16 etc. Yet dictionary.com says that encoding is "To format (electronic data) according to a standard format" - OK, so it's just data formatting, like he said. My point here is that if I had said "data encoding" and nothing more, you could think that I meant DVD encoding or Huffman coding or ASCII encoding or XML encoding. Only when you take into account the _entire_ context of a statement can you ascertain the _real_ meaning. You have to take into account that I had just been reading about phase modulation for wireless communication and so used the word encoding, or maybe I had a bad lunch, or maybe I was actually thinking of a completely different word but just wrote that one instead. Looking at the problem with writing is obviously a bit far-fetched, but no less interesting to think about. Incidentally, the poster also objected to my use of the term "array" when using it to refer to a group of objects - he insisted it was a data structure; I can certainly see his concern if he has just had his head in some code for a day.
And my point is what? My point is that tags, and even writing, are just not good enough. There is too much context to provide to give the tags their proper meaning. I may use the word "photovoltaic" to refer to the fact that a picture was taken while I was in a city attending a photovoltaic conference, but I may also use it to describe an actual picture of a PV panel, all at the same time.
Tags need tags.
What do other people think?
Posted in Web2.0, Tagging, Semantic Web | 1 Comment »
SOAP + WSDL in Mozilla 
September 12th, 2005
I sure am behind the times. I just found out about the SOAP and WSDL support in Mozilla / Gecko based browsers. This is very cool and I am not sure why more people are not using it - especially in AJaX circles.
The other interesting thing that I found was that you can extend the DOM in Mozilla to support Microsoft HTML Component files, or HTCs - these are used in Internet Explorer to implement things such as SOAP and WSDL support. So you can in fact have SOAP and WSDL support in Gecko with either the built-in objects or using HTCs.
OK, so why aren't more AJaX people using this built-in support for SOAP + WSDL in Mozilla? If you prefer to generate JSON on the server and pass that up, you are just crazy, since you could instead pass it up as XML embedded in SOAP and then use XSLT on the client to (very quickly) generate HTML or CSS or whatever from the XML.
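The client-side XSLT step might be sketched like this; the stylesheet and markup are illustrative, and the code only runs where Gecko's XSLTProcessor and DOMParser exist:

```javascript
// Receive XML (e.g. a SOAP body) and let Mozilla's built-in
// XSLTProcessor turn it into HTML on the client. The stylesheet
// below is a toy: it renders an <items> list as a <ul>.
var xslSource =
  '<xsl:stylesheet version="1.0" ' +
  'xmlns:xsl="http://www.w3.org/1999/XSL/Transform">' +
  '<xsl:template match="/items">' +
  '<ul><xsl:for-each select="item">' +
  '<li><xsl:value-of select="."/></li>' +
  '</xsl:for-each></ul>' +
  '</xsl:template></xsl:stylesheet>';

function renderXml(xmlText) {
  // Only Gecko-style environments expose these objects.
  if (typeof XSLTProcessor === 'undefined' ||
      typeof DOMParser === 'undefined') return null;
  var parser = new DOMParser();
  var xml = parser.parseFromString(xmlText, 'text/xml');
  var xsl = parser.parseFromString(xslSource, 'text/xml');
  var proc = new XSLTProcessor();
  proc.importStylesheet(xsl);
  // Returns a document fragment ready to append to the page.
  return proc.transformToFragment(xml, document);
}
```

The transform runs in native code, which is the "very quickly" part: no JavaScript loop ever touches the individual items.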
Posted in AJAX, XML, Service Oriented Architecture, XSLT, Semantic Web | 3 Comments »
Service Oriented Architecture: The 4th Dimension of the Rich Web 
September 7th, 2005
As usual, Bill Scott has recently shared with us some of his keen insight into what makes Web 2.0 tick. In his latest post he introduces and defines the three (rich) dimensions of Web 2.0 as visual, interaction and data [1].
Before the arrival of so many AJaXified applications, data was the bottleneck through which the other two dimensions had to be squeezed. Now, developers are free to work in any dimension almost completely disjoint from the others using CSS, DOM and XMLHTTP for visual, interaction and data respectively.
I say almost because the choices you make in any dimension can and do influence the others (AJaX string theory). AJaX developers generally insist on minimalist and tightly coupled data communication methods; the reason for this is simple - if you pass SOAP, or worse, WS-*-compliant messages between the server and client, you are going to have lots of extra data passed back and forth and will require more processing, both of which take time and reduce the usability of an application. Take Google for example: to get the best performance from their AJaX applications they generally return pure JavaScript or JSOR (JavaScript on the rocks). Doing this is great for a one-off customer-facing application, but when you want to share and open up data it becomes a lot of work to interoperate between Google, MSN and Amazon maps. In short, by making the data dimension more complicated to allow for, say, SOAP interoperability, we make the job of the DOM / JavaScript dimension that much more difficult due to the increased overhead. This trade-off in performance has to be considered.
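To put rough numbers on the overhead argument, here is the same small payload as bare JSON and as a hand-written SOAP envelope; the envelope is a sketch of the general shape, not output from any real toolkit:

```javascript
// Three map coordinates as the kind of minimalist payload AJaX
// developers favour...
var json = '{"points":[[49.2,-123.1],[49.3,-123.0],[49.1,-123.2]]}';

// ...and the same data wrapped SOAP-style (illustrative element
// names; a WS-* stack would add headers on top of this).
var soap =
  '<?xml version="1.0"?>' +
  '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">' +
  '<soap:Body><getPointsResponse>' +
  '<point lat="49.2" lng="-123.1"/>' +
  '<point lat="49.3" lng="-123.0"/>' +
  '<point lat="49.1" lng="-123.2"/>' +
  '</getPointsResponse></soap:Body>' +
  '</soap:Envelope>';

// Several times the bytes for identical data -- and every extra byte
// must be transferred and then parsed on the client.
var ratio = soap.length / json.length;
```

Whether that multiple matters depends on the application: for a one-off consumer map it is pure waste, while for cross-vendor interoperability it buys you a shared contract.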
So as Web Services, and all the standards that come under that umbrella, are currently moving towards implementing Service Oriented Architectures (SOA) and (maybe even more importantly) the Semantic Web, where is AJaX going? What CSS and DOM trade-offs are we willing to make for the sake of rich data? Sure, AJaX is young, but let's face it, everyone and their dog was using iFrames or XMLHTTP in the 90's. AJaX and Web 2.0 developers should think about looking to SOA for guidance if we truly want to see rich data at its best. Let's not get hung up on a Google map + housing listing "mashup" (not to say that I wasn't excited to see it) or worry so much about back-buttons. We need to be driving development of Internet technologies on the client as well as hacking around and pushing the boundaries of today's Web!
Where do we go from here? In my previous post I discussed the synergies between SOA and AJaX [2], and in light of that discussion I have been thinking about how to create a truly data-rich Internet application with AJaX. Most of my thoughts end up at the sad conclusion that we are at the mercy of the web browser vendors, most of which don't have WS-* or even SOAP processing built in (which Mozilla actually does have now). Alternatively, maybe we should be looking at building a 4th dimension into AJaX applications out of lightweight standards based on the SOA tenets (discoverability, reusability, abstract models, policies)?
[1] Richness: The Web in 3D - Bill Scott, August 30, 2005
[2] SOAJaX: Where does SOA Stop and AJaX Begin- Dave Johnson, September 02, 2005
Posted in Web2.0, AJAX, Service Oriented Architecture, Semantic Web | No Comments »
SOAJaX: Where SOA Stops and AJaX Begins 
September 2nd, 2005
There has recently been a maelstrom brewing over SOAJaX, with some people claiming there is no correlation whatsoever between the two [1], some comparing the software industry to the fashion industry [2], some making nice graphics outlining important implications for SOA designers [3,4], some trotting out the old "it's all semantics" argument [5], and some being completely inane [6].
As many before have noted, SOA and AJaX are both just ridiculous acronyms describing architectural paradigms that encompass entire families of web technologies - but let's try to look beyond that and answer the question of what exactly SOA and AJaX have to do with life, the universe and everything.
To start with, the "it's all semantics" argument is correct. If you look around you can find a different definition of SOA depending on the time of day [8]. So, as Dion Hinchcliffe discusses [7], I think a good place to start is looking at what exactly SOA and AJaX are.
To get the definition of SOA I went straight to the horse's mouth - OASIS. Since some smart people realized that SOA was completely ambiguous and didn't mean anything in the real world, a technical committee was created specifically to define a SOA reference model (they call this the SOA-RM TC). Hopefully, the result of all the hard work being done by the SOA-RM TC will be some guidelines to help define what components are required to actually call something a SOA. The work is not completely done, but there is a recent SOA-RM Technical Committee overview presentation [9] by Duane "cosmic genius" Nickull (I hope that some of the absurd smarts rub off on this Canadian technologist) of Adobe. From this presentation, there are at least five things that are required for something to be considered a SOA: a service that can be called through a prescribed interface, a service description declaring all relevant aspects of the service for would-be consumers, discoverability, abstract data and behavioural models, and finally a policy which imposes constraints on consumers of the service. Of course loose coupling is also a SOA hallmark.
OK. I did not see any mention of Flickr, CSS, XML or the "yellow fade" technique there. Things are looking grim.
Now let's consider what a RM for AJaX might look like. I am thinking that the important things for AJaX must be some degree of asynchronicity, JavaScript and XML? Let's knock off the last two first. Since SOA is quite technology agnostic, it cannot really have anything specifically to do with JavaScript or XML (although most implementations use the latter). However, we may be able to weave a connection around the thin thread that is the capital "A" in AJaX - of course the second "a" is part of the word JavaScript and so should not be capitalized, but that is another kettle of fish, as they say. Asynchronous. Both SOA and AJaX (for the sake of argument) use either a synchronous or an asynchronous communication pattern. So in the strictest sense AJaX can be a nice way to consume SOA services and provide a usable interface to them. That being said, if today's SOAs are defined using the likes of WS-*, then AJaX will never rise to the God-like status it is striving for, because you don't want the WS-* stack written in JavaScript. So AJaX can consume services based on a SOA if AJaX developers want to play in the same league, but today I doubt it. This is where the commonalities start and end - but hey, it's better than nothing.
Strictly speaking, AJaX is simply an important layer above a SOA, like any other web application framework today; they are, for the most part, separate and discrete entities. Their paths may cross at some point in an optical fibre in the middle of the Atlantic Ocean, but that is as close as they come. Before AJaX rose to super-stardom, developers would simply utilize a SOA from their Ruby on Rails or .NET or Java application running on the server and then convert the returned data to HTML and serve that up to the client. Now that AJaX has landed, people have the opportunity to bypass that server layer and go straight to the source - if they want to deal with SOAP, WS-* etc. they can do that. In general this is not the case. Developers are lazy by design (at least the good ones), and AJaX developers (the laziest of the bunch) have shunned XML and, in an effort to reduce the amount of JavaScript coding to be done, have come up with their own data formats (JSON, JavaScript on the rocks or JSOR, amongst other more obscure or proprietary ones). These formats were spawned outside of the standards world, in the wild west that is Web 2.0. Sure, you can have a system that follows the tenets of SOA and uses JSON as the data format of choice if you are building a Web 2.0 consumer-facing photo sharing website, but this might not be so helpful when trying to integrate supply chains.
Although SOA has not quite hit the fashion industry status that AJaX has, SOA is the bricks and mortar that our software systems of the near future will be built upon while AJaX is but the decoration nailed to the walls. It just so happens that in Web 2.0 the walls are generally quite thin and AJaX appears to, and does, blend into a bit of an ad hoc, loosely defined, SOA.
So what implications do SOA and AJaX have for each other? Dion mentioned in one of his articles [3] that AJaX would likely push SOA away from the WS-* way of doing things, but I contend that AJaX will not have as much influence on SOA as he suggests, because:
1) the SOA crowd is more established than the Web 2.0 cowboys, so it will take more than a few rogues to completely turn the tables
2) web browsers today have no support for discovery and policy binding, which are necessary for SOA, particularly in the enterprise
3) people have been developing web applications that consume services for many years - to think that, because of the re-introduction of asynchronous requests from the browser, developers will suddenly find that they need to access enterprise Web Services directly from the browser seems unfounded (and a security risk to boot)
If nothing else, AJaX is creating a new generation of developers that will at least think about rich clients and how they interact with SOAs - this is good. Also, the visual side of AJaX helps to put a pretty face to the SOA name (guilt by association) - this is also good.
To finish off on a positive note, I think the biggest implication that AJaX has for SOA is that AJaX (and Web 2.0 in general) represents a vast improvement in client applications in terms of usability, which opens up new, uncharted territory for data manipulation and visualization on the client. This new territory will likely increase the amount and variance of data that web application developers will want; thus, developers will increasingly be faced with situations in which the only logical choice will be to access data through a SOA and to buy into the SOA way of doing things. If SOA gets buy-in from the vocal and loveable AJaX crowd, it could be a real shot in the arm for SOA as well as for the implications that SOA has in store for the future. We are already seeing this trend with our AJaX-based components on various platforms, which apparently are "ready for prime-time, white collar, Fortune 1000 usage" [3], as can be seen by our customers such as Time Warner, BMW, Bank of America, Goldman Sachs, and Siemens, to name a few.
The question is: will AJaX stagnate as purely an extension of current web development techniques, will it mature into its own "light" SOA for client-side development, or, even better, will browser vendors decide to build WS-* into the browsers of the future so that AJaX can play ball with the big boys?
[1] On Atlas/AJaX and SOA - Nick Malik
[2] SOA, AJAX and REST: The Software Industry Devolves into the Fashion Industry - Dare Obasanjo
[3] State of Ajax: Progress, Challenges, and Implications for SOAs - Dion Hinchcliffe
[4] Ajax: User Interface Pattern or SOA Dissemination Engine? - Dion Hinchcliffe
[5] AJAX, SOA, and FWCAR - Mohair Sam
[6] New Specification for SOA using AJAX = JAXASS - Titus
[7] Beating a Dead Horse: What's a SOA Again? All About Service-Orientation - Dion Hinchcliffe
[8] Revisiting the definitive SOA definition - SearchWebServices.com
[9] An Introduction to the OASIS Reference Model for Service Oriented Architecture (SOA) - Duane Nickull
Posted in Web2.0, AJAX, Service Oriented Architecture, Semantic Web | 1 Comment »