Archive

Well, I think it's dawning on me that what Roy Fielding talks about (rather abstractly) [1] is what Henry Story neatly summarises and provides examples of [2] – REST SOA, with connected semantics. I'm one of those who can be accused of implementing REST not in the Roy Fielding sense of the word, but in the anything-that's-not-WS* "meaning". I've done request mapping, content negotiation, resource rendering in XML, JSON (and a bunch of others), GET, PUT, POST, HEAD etc etc etc, but never all together, and never in the true Spirit of Roy. But when you add the semantic web, you can really see that there's something good going on here – "easy" and ubiquitous web services.

Roy talks about representations, resources and connectedness, about agents or service consumers that deal with well-known media types and links, and nothing else – REST implies that a user agent is "thin", understands basic, well-known types and protocols, and renders a look and feel and a behaviour that reacts to what it is fed. As he says, it should work on the "follow your nose" principle (no need for WADL [3,4]).

For a browser this would mean that you point it at a URL, it displays the content suitably, and it receives and displays links with appropriate CRUD capabilities for the resource and the relations it is given. For example, given a book resource, render it using the .book CSS class, and create links to add it to a shopping cart, get a contents list, or add it to a favourites list. For a chapter in the book, there may be a link to print it, to relate it to a chapter in another similar book, or to annotate it and send it to a colleague. For a daemon or agent it might mean that it alters the time at which it performs an action against a resource, or what action it takes. The navigation and action controls aren't determined by business or display logic, but by the resource and its relations – the agent consuming the resource knows it has to display or follow a link, the CSS may have display capabilities based on the resource type or context, and the workflow steps will appear at the right time for the right user, under the right circumstances. Client logic deals solely with converting representations to appropriate media types and driving application state – using relations and verbs to make transitions with links.
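A minimal sketch of what such a thin-but-reactive agent might look like in Java, assuming a JAX-RS 2.0 client and a service that advertises its transitions as Link headers; the URI and the "cart"/"contents" rel names are purely illustrative:

```java
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.Entity;
import javax.ws.rs.core.Link;
import javax.ws.rs.core.Response;

public class BookAgent {
    public static void main(String[] args) {
        Client client = ClientBuilder.newClient();

        // The only upfront knowledge is an entry URI plus the media types
        // and link relations the agent understands.
        Response book = client.target("http://example.org/books/1234")
                              .request("application/xml")
                              .get();

        // "Follow your nose": act only on the links the server actually gave us.
        Link addToCart = book.getLink("cart");      // hypothetical rel names
        Link contents  = book.getLink("contents");

        if (addToCart != null) {
            // The agent never hard-codes the cart URI; it follows the link it was fed.
            Response result = client.target(addToCart).request().post(Entity.text("book 1234"));
            System.out.println("add-to-cart returned " + result.getStatus());
        }
        if (contents != null) {
            String toc = client.target(contents).request("application/xml").get(String.class);
            System.out.println(toc);
        }
    }
}
```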

But the thing that got me spinning, as I tried to understand the abstractedness, and as I looked into JAX-RS [5] and its various implementations (well, Spring* in particular TBH, which doesn't do JAX-RS in fact [6]), was that the connectedness and the follow-your-nose principle seemed absent. It's all very well and cosy (and arguably easy) to create some platform code that maps URIs to classes and methods and HTTP verbs, and then to output XML or JSON or not (think JSP), or perhaps even Atom, OData, RDF, N3 or TTL, but where's the linked connectedness – the things we talk about and take for granted in the Linked Open Semantic Data world ? And how does it know what links to create, how to generate them, and how they should be presented (if there's a human involved) ?
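For concreteness, the URI-to-class/method/verb mapping that JAX-RS gives you looks roughly like this (the Book class and lookup methods are stand-ins) – note that nothing here says anything about which links the representation should carry:

```java
import javax.ws.rs.Consumes;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Hypothetical resource class: JAX-RS maps the URI template, HTTP verb and
// media type onto methods, but the connectedness is left entirely to you.
@Path("/books/{id}")
public class BookResource {

    @GET
    @Produces({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
    public Book get(@PathParam("id") String id) {
        return findBook(id);            // however you load it
    }

    @POST
    @Consumes(MediaType.APPLICATION_XML)
    public void update(@PathParam("id") String id, Book book) {
        store(id, book);                // persistence elided
    }

    private Book findBook(String id) { return new Book(); }
    private void store(String id, Book book) { }
    static class Book { }
}
```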

Well, Henry switches that lightbulb on for me when he illustrates, from his foaf profile, all the foaf:knows relations [2]. In a RESTful world a service returns a foaf file, and a consumer that reads the foaf:knows elements can decide what to do based on that predicate – it can deduce that the resource represented is a Person and can create the links it chooses using what it knows about foaf:knows and the REST verbs – create/read/update/delete. It might allow addition of another foaf:knows with a PUT to the URI identifying the owner, an update to a mailing list so that all those foaf:knows objects are added, or automatically update a trust counter against a system resource, because if Henry foaf:knows TBL in this context, then TBL must be "good for it" :-). In addition, it only knows that a URI represents that Person, and the URI could be a hypermedia link in the form of a URL, an FTP or WebDAV link, or some other protocol. Finally, this "knows" concept is really an upfront agreement about what representations are being used for the state of the application (it knows an XML schema, or an Ontology, or perhaps even looks them up on the fly), but navigating through state is controlled by the interactions with the service (HTTP verbs) and the responses (status and agreed representation in the body content) received.
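A sketch of that deduction using Jena (any RDF API would do); the profile URL is illustrative, and the choice of affordances in the comments is application policy, not anything the library decides for you:

```java
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.rdf.model.RDFNode;
import org.apache.jena.rdf.model.Statement;
import org.apache.jena.rdf.model.StmtIterator;

public class FoafKnowsAgent {
    static final String FOAF = "http://xmlns.com/foaf/0.1/";

    public static void main(String[] args) {
        // Dereference a FOAF profile (URL illustrative) into an in-memory model.
        Model model = ModelFactory.createDefaultModel();
        model.read("http://bblfish.net/people/henry/card");

        Property knows = model.createProperty(FOAF, "knows");

        // Everything the agent decides to do hangs off the predicate it recognises.
        StmtIterator it = model.listStatements(null, knows, (RDFNode) null);
        while (it.hasNext()) {
            Statement s = it.nextStatement();
            // Subject and object are (by the ontology) foaf:Persons, so the agent
            // can choose its affordances: render a link to the object's URI, offer
            // "add another foaf:knows" via PUT/POST, bump a trust counter, and so on.
            System.out.println(s.getSubject() + " foaf:knows " + s.getObject());
        }
    }
}
```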

At first sight those RESTful libraries don't really need to know that much about the connectedness – they only need to map verbs and serve resources with those links embedded (RDF anyone?), using well-known vocabularies, classes, relations and constraints – i.e. ontologies. But what about workflow : I post an object or resource, I get a response with the ID of that resource, and I need a link that tells me where to go for the next state transition ?
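One way to answer that workflow question is in the response itself: return the new resource's URI plus a link to the next legal transition. A sketch, assuming JAX-RS 2.0 Link support and a hypothetical "payment" relation:

```java
import java.net.URI;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.UriInfo;

@Path("/orders")
public class OrderResource {

    @POST
    public Response create(Order order, @Context UriInfo uriInfo) {
        String id = save(order);        // persistence elided

        URI self = uriInfo.getAbsolutePathBuilder().path(id).build();
        URI next = uriInfo.getBaseUriBuilder().path("payments").path(id).build();

        // 201 Created with the new resource's URI in Location, and a Link header
        // pointing the client at the next state transition - the client only needs
        // to understand the "payment" relation, not the URI behind it.
        return Response.created(self)
                       .link(next, "payment")   // hypothetical rel
                       .build();
    }

    private String save(Order order) { return "42"; }
    static class Order { }
}
```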

So, lo-and-behold, we have semantic linked data and REST superadditively combined, in a loosely coupled web (or “cloud”, if you like that keyword) of semantic links, intelligent user agents that understand those links or their context, web resolvable URIs, and value-added interlinked services – in effect a “Web Service Bus”. [7] !!

Now

Point your People tool at the RESTful people+location web service and it “just works” to give you a social-network-mashup of connected people and interests (provenance, trust), and then

switch over to your Energy consumption application and it also just works (based on what it has chosen to do and the well-known ontologies and resources it understands) – see how big your carbon footprint is when you meet TBL next week at Geneva, whether you fly, drive or take the train – and maybe you'll be able to see who you can meet on the way and who else will be sitting beside you.

But you're not out of the woods yet; doing semantic RESTful web apps isn't a clear open space : your application still has to deal with authentication, input validation, long-lived database transaction control, multithreading, performance, and perhaps object-relational mapping. But JAX-RS/REST takes care of the object-message mapping (the interface-to-implementation layer), your client or agent is thin but intelligent, and your middle tier contains your business logic.

Your application will need to honour the request-response state machine, perhaps checking availability using OPTIONS, or ETags.
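A sketch of the ETag side of that, using JAX-RS's built-in precondition evaluation; the version field is a stand-in for whatever you actually derive your ETag from:

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.EntityTag;
import javax.ws.rs.core.Request;
import javax.ws.rs.core.Response;

@Path("/books/{id}")
public class ConditionalBookResource {

    @GET
    public Response get(@PathParam("id") String id, @Context Request request) {
        Book book = load(id);                          // persistence elided
        EntityTag etag = new EntityTag(book.version);  // ETag derived from resource state

        // If the client sent If-None-Match and the tag still matches, this returns
        // a builder preloaded with 304 Not Modified; otherwise serve the entity.
        Response.ResponseBuilder precondition = request.evaluatePreconditions(etag);
        if (precondition != null) {
            return precondition.build();
        }
        return Response.ok(book).tag(etag).build();
    }

    private Book load(String id) { Book b = new Book(); b.version = "v7"; return b; }
    static class Book { String version; }
}
```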

You’ll need to decide how to transform from your programming model of choice – OO perhaps – to Resource. Some of the object-to-RDF mapping within libraries like Empire [15], JenaBean [16], Sommer [22] (defunct?) or object-triple [17] may help. Perhaps this won’t be an issue for you if you can foist the RESTful resource and linkage proposition onto an object model and remain in the object world – but why waste processor and resource when you store data in RDF, convert to an Object on retrieval, process, convert to RDF/XML or JSON on the way out, then parse and walk it in a JSP before rendering as HTML ? As an OO programmer on the web you’re familiar with marshalling objects in and out of different serialisations – RDF/XML/JSON/HTML – but you do want and need to minimise those transitions. Perhaps for “Big Data” we should stay in the Resource world : persist to a fast native RDF triplestore or an HPC-based system on a MapReduce cluster or some such (CouchDB [20], Heart/HBase [21] ? perhaps BigData [18] or SHARD [19], AllegroGraph [23] ?), and talk to it with Prolog or the like – forget the Object paradigm and embrace Linked, Open World Resources, and do it with REST.
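If you do stay object-side, what those mapping libraries automate is at bottom this sort of thing, sketched here by hand with Jena; the Book class, its URI and the sample data are illustrative:

```java
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.vocabulary.DCTerms;

public class BookToRdf {

    static class Book {                                    // the object-world view
        String uri = "http://example.org/books/1234";
        String title = "REST in Practice";
    }

    public static void main(String[] args) {
        Book book = new Book();

        // Object -> RDF: one of the transitions you want to minimise.
        Model model = ModelFactory.createDefaultModel();
        Resource r = model.createResource(book.uri)
                          .addProperty(DCTerms.title, book.title);

        // Serialise as RDF/XML for one client, Turtle for another -
        // the model is the same, only the representation differs.
        model.write(System.out, "RDF/XML");
        model.write(System.out, "TURTLE");
    }
}
```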

You also have to be clear that REST suits what you want to do (other architectures haven’t just been demoted to history), what your services are – what you are interfacing with, what your domain objects are, what service operations are exposed when, what workflow you need to encompass [13], and how granular you need to be. A shopping cart application will need to save items to a shopping list, rather than save the items themselves (or the cart resource, probably), but it will also, behind the service, need to update a stock control or inventory – which isn’t exposed to your end user. So be clear about which service-level CRUD operations you need to expose to your user or “agent”, and which, if any, domain objects you need to directly manipulate.

But in the end, hopefully, you’ve still followed your enterprise principles and patterns, but you’ve adopted a long-lasting web-scale architecture, and if you’ve added the semantic vocabulary you’ve got the basis for successful evolution, a network effect, adaptable clients and agents, and a successful resolution to an important business case – that’s why you’re doing this, isn’t it, not because it’s cool ?

Update : April 24 – read Otavio’s paper on RESTfulGrounding [25], but also read Alowisheq, Millard and Tiropanis’ EXPRESS RESTful services paper [26]. RESTfulGrounding does for REST and WADL what OWL-S does for WSDL – it gets semantic descriptions into the syntactic descriptions that automated services might use to interact with a web service, and facilitates discovery, composition, monitoring and execution. EXPRESS takes a different approach: starting from an existing RESTful web service, it allows you to create an OWL description that can itself be RESTfully accessed to describe the service’s resources, relations and “parameters” (OWL DatatypeProperty and ObjectProperty). They describe an adaptation of Amazon S3 buckets and docs with EXPRESS and compare it with the SA-REST and OWL-S approaches.

I like EXPRESS more than RESTfulGrounding as the simplicity appeals : the way it in turn relies on REST to underpin access to and interaction with the service description, adheres to RESTful principles for message exchange – using TTL rather than XML – and to follow-your-nose, and the fact that this in turn means I don’t have to learn much if I want to make use of it. It does need the use of a code generator for stubs and URIs and a manual step to define which methods apply to which URIs, and doesn’t do much for discovery and composition – but they acknowledge this and intend to work on it – and a real implementation with these tools needs to be made available so that people like me can try it out. Is there one ?

I need to understand more about WADL [27,28] (why is it needed in the first place ?) and how I might go about actually building a set of services that need to be described and then discovered and composed to provide some useful value, but EXPRESS fits nicely into web-scale, lo-fi approaches that quickly gain traction and that might make use of a CPoA kind of approach for discovery and composition.

* You’ve got other choices :

Apache CXF – perhaps best if you come from the WS* camp or have a mixture [8]

GlassFish Jersey – seems to have good traction, with hooks into Spring et al [9]

RESTeasy – JBoss jax-rs implementation [10]

RESTlet – not sure about this, seems to have good support, taking a different approach apparently – e.g. RESTlet vs Servlet, but I need more info to do it justice [11]

PLAY Framework – has good REST support I understand from others. [12]

Clerezza – Apache incubator project with RDF, jax-rs, scala and “renderlet” support. Looks interesting from an RDF PoV, but maybe not so interesting from an OOD PoV [14]

Jena had an adaptor for NoSQL – CouchDB and so on [3] ? Or would it – what’s the benefit here ? Jena already has a super simple table schema, and RDF brings its own “schema-less” information model, so would it be any better than TDB – concurrency, locking, performance, multiple writers (see the TDB sketch below) ? A SimpleDB examination has been done with interesting results [1] – is CouchDB any better ? Is RDF suitable for NoSQL in the first place ? [4]
Would a hybrid approach like ATOMICS [5] help – denormalised tables, an HTTP interface – retain a central database and the SQL advantages and fill it with RDF ? Probably not; it’s really just MySQL underneath, so it’s still going to have the performance issues.
Neither does trying to mix up a triple or quad store on a NoSQL key-value repository seem right, even if it promises performance improvements. Prolog anyone ?
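For comparison, what TDB already gives you out of the box is a native, file-backed store behind the same Jena Model API, with transactions handling the locking and multiple-writer questions; a minimal sketch, with the directory path and data purely illustrative:

```java
import org.apache.jena.query.Dataset;
import org.apache.jena.query.ReadWrite;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.tdb.TDBFactory;
import org.apache.jena.vocabulary.DCTerms;

public class TdbSketch {
    public static void main(String[] args) {
        // A native, file-backed triple store - no extra NoSQL layer needed.
        Dataset dataset = TDBFactory.createDataset("/tmp/tdb-demo"); // path illustrative

        dataset.begin(ReadWrite.WRITE);   // TDB handles locking/concurrency via transactions
        try {
            Model model = dataset.getDefaultModel();
            model.createResource("http://example.org/books/1234")
                 .addProperty(DCTerms.title, "REST in Practice");
            dataset.commit();
        } finally {
            dataset.end();
        }
    }
}
```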
Update March 14 2011 : Am I changing my mind ?

An Inbox or IMAP store FOAF analyser. Saw DataHug [2] being trumpeted again recently and wonder if what they’re doing isn’t simply possible with either a FOAF analyser (with extensions for count/frequency, response times etc) for TBird or Outlook, say (backed by a persistent RDF store so it can be interfaced with SPARQL), or, if you want corporate cross-inbox analysis, doing it at the mail server level ?
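A rough sketch of what such an analyser might boil down to, assuming JavaMail for the IMAP side and Jena for the FOAF output; the host, credentials, WebID and the messageCount property are all invented for illustration:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import javax.mail.Address;
import javax.mail.Folder;
import javax.mail.Message;
import javax.mail.Session;
import javax.mail.Store;
import javax.mail.internet.InternetAddress;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.rdf.model.Resource;

public class InboxFoafAnalyser {
    static final String FOAF = "http://xmlns.com/foaf/0.1/";
    static final String EX   = "http://example.org/mailstats#"; // invented vocabulary

    public static void main(String[] args) throws Exception {
        // Count messages per correspondent in the IMAP inbox.
        Session session = Session.getInstance(new Properties());
        Store store = session.getStore("imaps");
        store.connect("imap.example.org", "me", "secret");   // illustrative credentials
        Folder inbox = store.getFolder("INBOX");
        inbox.open(Folder.READ_ONLY);

        Map<String, Integer> counts = new HashMap<>();
        for (Message msg : inbox.getMessages()) {
            Address[] from = msg.getFrom();
            if (from != null && from.length > 0 && from[0] instanceof InternetAddress) {
                counts.merge(((InternetAddress) from[0]).getAddress(), 1, Integer::sum);
            }
        }
        inbox.close(false);
        store.close();

        // Emit foaf:knows triples, weighted with a made-up frequency property, into a
        // model you could persist in an RDF store and query with SPARQL later.
        Model model = ModelFactory.createDefaultModel();
        Property knows = model.createProperty(FOAF, "knows");
        Property mbox  = model.createProperty(FOAF, "mbox");
        Property freq  = model.createProperty(EX, "messageCount");
        Resource me = model.createResource("http://example.org/me#i");

        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            Resource contact = model.createResource()
                    .addProperty(mbox, "mailto:" + e.getKey())
                    .addLiteral(freq, e.getValue());
            me.addProperty(knows, contact);
        }
        model.write(System.out, "TURTLE");
    }
}
```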