The Emissary Design Pattern and RIAs (Rich Internet Applications)

Here is a first draft of a new presentation. I gave it a couple of months ago just after TechEd and thought I would share it as I try to write up some of my thoughts on RIAs. I plan to rework this a bit more and present it again at TechEd Europe. The talk is titled: "The Emissary Design Pattern and RIAs (Rich Internet Applications)"

Abstract:

The Emissary design pattern was first described in 1999 in the old "Fiefdoms and Emissaries" talk. The concept of a "fiefdom" is very similar to what we today call a service in a Service Oriented Architecture. The fiefdom is a separate trust sphere and transactional boundary. An emissary is a prescriptive pattern for interacting with a service (or fiefdom) which leverages reference data and a deep understanding of the service to prepare requests for the service and maximize the chance those requests will comply with the service's requirements. An emissary may be richly interactive and anticipate the validation requirements of the service.

The emerging world of RIAs (Rich Internet Applications) is a fascinating blend of a classic smart client and a browser-based web application. In an RIA, client code runs in the browser but still must comply with the browser-enforced sandboxing and not cause harm to the host client machine. Navigation, naming, linking, and much more are being defined in a fashion drawing from both the web style and the client style. Many of the design issues with RIAs are under discussion today as support for these applications emerges.

This talk examines both the emissary design pattern and the nascent space of Rich Internet Applications. It shows how one can look to the workflow patterns in our parents' use of paper forms to understand the possibilities of implementing user-centric workflow as shared replicated data. The talk concludes with some preliminary concepts of a shared and declarative definition of the "paper form" model and its constraints and how these may someday be used in the automatic generation of emissary-based RIA clients.

I enjoyed reading your PowerPoint. I’ve been thinking similar thoughts for a while now, and it is good to see someone else’s detailed thinking on the subject.

The sound bite summary for my thinking is “sync down, actions up” or more precisely, “synchronize reference data to clients, send user actions to the service as messages.” User actions, once they are authenticated, authorized, and executed by the service, usually turn into new reference data that is then synchronized out to all interested parties. As you note, the same model can also be used between services, often in both directions.
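To make "sync down, actions up" concrete, here is a minimal in-memory sketch, with hypothetical `Service` and `Client` classes of my own invention (not from any real framework). The client never writes data directly; it sends an action message up, and the service's approved result flows back down as reference data.

```python
# "Sync down, actions up": the service owns the data; clients send actions
# up as messages and receive reference data down via synchronization.

class Service:
    def __init__(self):
        self.reference_data = {}   # authoritative state, owned by the service
        self.version = 0

    def handle_action(self, user, action, payload):
        # Authenticate/authorize, then execute; an approved action becomes
        # new reference data that later syncs out to interested parties.
        if action == "set":
            key, value = payload
            self.reference_data[key] = value
            self.version += 1
            return True
        return False

    def sync_down(self, since_version):
        # A real system would ship deltas; a full snapshot keeps this short.
        return self.version, dict(self.reference_data)

class Client:
    def __init__(self, service, user):
        self.service, self.user = service, user
        self.version, self.cache = 0, {}

    def act(self, action, payload):
        # Actions go up as messages...
        self.service.handle_action(self.user, action, payload)
        # ...and the resulting reference data comes back down via sync.
        self.version, self.cache = self.service.sync_down(self.version)

client = Client(Service(), user="alice")
client.act("set", ("order-42", "shipped"))
print(client.cache["order-42"])   # service-owned data, not a local write
```

Because the client's cache is only ever populated by `sync_down`, data ownership stays unambiguous: the service is the single writer, and clients are readers plus action-senders.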

This style requires that you be explicit about who owns and controls the data. If you don’t have clear data ownership rules, you will need hacks like merge conflict resolution to bail you out of your lack of foresight. The Sync Framework, ADO.NET, and SQL Server folks are more than happy to be enablers of this style with a big grab bag of such hacks. People typically get into this mess because of the belief that “sync down, sync up” is good enough.

I think the biggest (and most subtle) hurdle to implementing “sync down, actions up” is preserving user-centric authentication for user actions that occur on offline-able clients. Many implementations take shortcuts, including relying on sync to transfer the client-computed _result_ of user actions, and downgrading from user-centric to client-centric authentication and authorization. More sophisticated hacks try to validate user actions by reverse-engineering the actions from the resulting data sent from the client. Security shortcuts that would never fly in an online web client are the norm in offline-able clients. People are moving emissaries inside the trust sphere because it’s easier that way.

There isn’t a good RIA message queuing solution that securely flows user identity along with service calls. Solving this problem correctly requires queuing up user actions as signed messages that prove that this user identity performed this action, in a way that can be validated by the service once the queued message eventually arrives. This enables online web clients and offline-able clients to use exactly the same user-centric authorization code, with the same degree of confidence.
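A sketch of what queuing signed user actions might look like, using Python's standard `hmac` module and an invented key registry. HMAC with a shared per-user key keeps the example short, but note the hedge: a production design would more likely use asymmetric signatures, so the client never has to share its signing key with anyone, including the service.

```python
# Queue user actions as signed messages so the service can verify who
# performed each action whenever the queued message eventually arrives.
import hmac, hashlib, json

USER_KEYS = {"alice": b"alice-secret"}   # hypothetical per-user key registry

def sign_action(user, action, key):
    # Canonical JSON so client and service hash identical bytes.
    body = json.dumps({"user": user, "action": action}, sort_keys=True)
    sig = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}

def verify_action(message):
    # Returns the verified action, or None if the signature doesn't check out.
    body = json.loads(message["body"])
    key = USER_KEYS.get(body["user"])
    if key is None:
        return None
    expected = hmac.new(key, message["body"].encode(), hashlib.sha256).hexdigest()
    return body if hmac.compare_digest(expected, message["sig"]) else None

# Offline: the client queues signed actions; later, the service drains the
# queue and validates each message before executing it.
queue = [sign_action("alice", {"type": "approve", "order": 42},
                     USER_KEYS["alice"])]
for msg in queue:
    verified = verify_action(msg)
    if verified is not None:
        print(verified["user"], "performed", verified["action"]["type"])
```

The point of the signature is that the authorization check runs against the proven user identity at dequeue time, hours or days after the user acted, with the same code path an online client would hit.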

Another issue offline-able clients may need to deal with is treating local updates to data as speculative until the actions that correspond to those updates have been approved by the service. The service reserves the right to veto any client action. This veto needs to eventually compensate for (“undo”, possibly via sync) any related speculative data changes on the client. Allowing the user to take further actions based on speculative local data is a trade-off to be made on a case-by-case basis, keeping in mind the impact of a service veto early in the chain of offline actions.
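The speculative-update-with-veto idea above can be sketched as follows, with a hypothetical client of my own design that tags each local change with the pending action that produced it, so a service veto can compensate for exactly that change:

```python
# Speculative local updates: apply a change optimistically, remember how to
# undo it, and compensate if the service vetoes the corresponding action.

class SpeculativeClient:
    def __init__(self):
        self.data = {}       # locally visible data (confirmed + speculative)
        self.pending = {}    # action_id -> (key, prior_value) for compensation

    def apply_speculatively(self, action_id, key, value):
        # Record the pre-action value so a veto can restore it.
        self.pending[action_id] = (key, self.data.get(key))
        self.data[key] = value

    def on_service_verdict(self, action_id, approved):
        key, prior = self.pending.pop(action_id)
        if not approved:
            # Compensate ("undo"); in practice sync from the service would
            # also eventually restore the authoritative value.
            if prior is None:
                self.data.pop(key, None)
            else:
                self.data[key] = prior

c = SpeculativeClient()
c.apply_speculatively("a1", "balance", 90)
c.on_service_verdict("a1", approved=False)   # the service vetoes the action
print(c.data.get("balance"))                 # speculative change is undone
```

This only compensates the single vetoed change; chained actions built on top of vetoed data are the harder case-by-case trade-off the paragraph above describes.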

If the client and the service share the same business logic code, then the client can have a high degree of confidence that its speculative local updates are identical to the data that will eventually be synced back down from the service, assuming the corresponding action is approved. But just in case the client got it wrong, the system will eventually self-heal through synchronization of reference data.

I would love to know your thoughts on this. Sometimes I wonder if I’m unnecessarily complicating things since I don’t hear many people talking about solutions like this.
