Cometd – SitePen Blog
Providing Enterprise JavaScript support, training and development to web teams around the world.
(Feed last updated Sat, 17 Feb 2018)

Dojo WebSocket with AMD
Posted Mon, 05 Nov 2012 (https://www.sitepen.com/blog/2012/11/05/dojo-websocket-with-amd/)

Dojo has an API for Comet-style real-time communication based on the WebSocket API. WebSocket provides a bi-directional connection to servers that is ideal for pushing messages from a server to a client in real time. Dojo’s dojox/socket module provides access to this API with automated fallback to HTTP-based long-polling for browsers (or servers) that do not support the new WebSocket API. This allows you to start using this API with Dojo now.

The dojox/socket module is designed to be simple, lightweight, and protocol agnostic. In the past Dojo has provided protocol-specific modules like CometD and RestChannels, but there are numerous other Comet protocols out there, and dojox/socket provides the flexibility to work with virtually any of them through a simple foundational interface. The dojox/socket module simply passes strings over the HTTP or WebSocket connection, making it compatible with any system.

The simplest way to start a dojox/socket is to call it with a URL path:
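In an AMD setting this might look like the following sketch (the "/comet" path is illustrative):

```javascript
// Load dojox/socket via AMD and open a connection to the origin server
require(["dojox/socket"], function(Socket){
    var socket = Socket("/comet");
});
```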

The socket module will then connect to the origin server using WebSocket, or HTTP as a fallback. We can now listen for message events from the server:

socket.on("message", function(event){
var data = event.data;
// do something with the data from the server
});

Here we use the socket.on() event registration method (inspired by socket.io and Node.js’s event registration) to listen for “message” events and retrieve data when they occur. This method is also aliased to the deprecated Dojo-style socket.connect().

We can also use send() to send data to the server. If you have just started the connection, you should wait for the open event to ensure the connection is ready to send data:

socket.on("open", function(event){
socket.send("hi server");
});

Finally, we can listen for the connection being closed by the server or network by listening for the close event. And we can initiate the close of a connection from the client by calling socket.close().

The dojox/socket method can also be called with standard Dojo IO arguments to initiate the communication with the server. This makes it easy to provide any necessary headers for the requests. For example:
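One possible shape, assuming the same argument object accepted by Dojo's IO APIs (the header name here is purely illustrative):

```javascript
require(["dojox/socket"], function(Socket){
    var socket = Socket({
        url: "/comet",
        headers: { "X-Client-Version": "1.2" } // illustrative header
    });
});
```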

We can also provide alternate transports in the socket arguments object. This would allow us to use the get() method in dojo/io/script to connect to a server. However, a more robust solution is to use the dojox/io/xhrPlugins for cross-domain long-polling, which will work properly with dojox/socket.

Auto-Reconnect

In addition to dojox/socket, we have also added a dojox/socket/Reconnect module. This wraps a socket, adding auto-reconnection support. When a socket is closed by network or server problems, this module will automatically attempt to reconnect to the server on a periodic basis, with a back-off algorithm to minimize resource consumption. We can upgrade a socket to auto-reconnect by this simple code fragment:
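A sketch of that fragment in AMD form (assuming the Reconnect module wraps and returns the socket as described):

```javascript
require(["dojox/socket", "dojox/socket/Reconnect"], function(Socket, Reconnect){
    var socket = Socket("/comet");
    // On unexpected close, the wrapper periodically retries with back-off
    socket = Reconnect(socket);
});
```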

Using Dojo WebSocket with Object Stores

One of the other big enhancements in Dojo is the Dojo object store API (which supersedes the Dojo Data API), based on the HTML5 IndexedDB object store API. Dojo comes with several store wrappers, and the Observable wrapper provides notification events that work very well with Comet-driven updates. To use it, we first create a store and then wrap it with Observable:
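A minimal sketch using an in-memory store for illustration (any store implementing the object store API could be substituted):

```javascript
require(["dojo/store/Memory", "dojo/store/Observable"],
function(Memory, Observable){
    var store = new Observable(new Memory({ data: [] }));
});
```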

This store will now provide an observe() method on query results that widgets can use to react to changes in the data. We can notify the store of changes from the server by calling the notify() method on the store:
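For example, a hedged sketch that assumes the server sends JSON-serialized objects with an id property:

```javascript
socket.on("message", function(event){
    var object = JSON.parse(event.data);
    // object and id both present: an update notification
    store.notify(object, object.id);
});
```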

We can signal a new object by calling store.notify(object) with no id, a deleted object by passing undefined as the object along with the id, and a changed/updated object by including both.

Handling Long-Polling from your Server

Long-polling style connection emulation can require some care on the server side. For many applications, the server may have sufficient information from request cookies (or other ambient data) to determine what messages to send the browser. However, other applications may vary what information should be sent to the browser during the life of the application. Different topics may be subscribed to and unsubscribed from. In these situations, the server may need to correlate different HTTP requests with a single connection and its associated state. While there are numerous protocols, one could do this very easily by defining a unique connection id and adding it as a header for the socket (the headers are added to each request in the long-poll cycle). For example, we could do:
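A sketch of that idea; the header name and id scheme are made up for illustration:

```javascript
require(["dojox/socket"], function(Socket){
    // a throwaway unique-ish id for this logical connection
    var connectionId = Math.random().toString(36).slice(2);
    var socket = Socket({
        url: "/comet",
        headers: { "X-Connection-Id": connectionId } // sent on every long-poll request
    });
});
```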

In addition, dojox/socket includes a Pragma: long-poll header on the first request in a series of long-poll requests, to help a server ensure that connection setup and timeouts are handled properly.

We can easily use dojox/socket with other protocols as well:

CometD

To initiate a Comet connection with a CometD server, we can do a CometD handshake, connection, and subscription:
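One possible sketch, sending Bayeux JSON messages over a dojox/socket connection. The field values are illustrative, and a real client must read the clientId from the handshake response before connecting and subscribing:

```javascript
require(["dojox/socket"], function(Socket){
    var socket = Socket("/cometd");
    socket.on("open", function(){
        socket.send(JSON.stringify([{
            channel: "/meta/handshake",
            version: "1.0",
            supportedConnectionTypes: ["long-polling", "websocket"]
        }]));
    });
    socket.on("message", function(event){
        var messages = JSON.parse(event.data);
        // find the handshake reply here, then send /meta/connect and
        // /meta/subscribe messages that include the returned clientId
    });
});
```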

Conclusion

Dojo’s socket API is a simple, flexible module for connecting to a variety of servers and building powerful, efficient real-time applications without constraints. This adds to the array of awesome features in Dojo.

Now Supporting all Major Toolkits!
Posted Thu, 19 Jul 2012 (https://www.sitepen.com/blog/2012/07/19/now-supporting-all-major-toolkits/)

We have been providing JavaScript and Dojo support to freelancers, start-ups and Fortune 500 companies for nearly a decade. As we intently watch enterprise organizations everywhere begin to roll out AMD (read about why AMD matters) and the associated code improvements, we are thrilled with the industry’s direction toward toolkit interoperability! Why? Because our masterful engineering team, consisting of influential members of various open source communities, positions SitePen perfectly to offer full-on, front-end web development support to the world!

Getting right to the point, (The Official Point!), we are pleased to announce the expansion of SitePen Support to officially include more than fifteen popular open-source JavaScript toolkits!

Now supporting the following JavaScript toolkits:

Dojo

Persevere packages

dgrid

Curl.js

CometD

Twine

jQuery

Backbone

underscore

RequireJS

PhoneGap/Cordova

MooTools

jQueryUI

Wire

Socket.IO

Express

In addition to toolkits, we will continue to support your custom JavaScript source code, as well as key underlying technologies and formats, including JSON, HTML5, WebSockets, SVG/Canvas, Mobile Web, Server-Side JavaScript, AMD, Node.js and many more.

Our expertise with Dojo and advanced JavaScript is relevant for a wide range of desktop and mobile web application projects, and our approach to SitePen Support has always been flexible, with the priority being to improve our customers’ web apps. We strive to support our customers in every way possible and we continue to be Dojo experts. In addition, we’re now committed to providing your organization with the front-end development expertise that will optimize your application regardless of which toolkits and technologies your company is comfortable using. You have our word!

Dojo WebSocket
Posted Mon, 01 Nov 2010 (https://www.sitepen.com/blog/2010/10/31/dojo-websocket/)

NOTE: This post is out of date. Read our updated version of this post for more up-to-date information!

Dojo 1.6 introduces a new API for Comet-style real-time communication based on the WebSocket API. WebSocket provides a bi-directional connection to servers that is ideal for pushing messages from a server to a client in real time. Dojo’s new dojox.socket module provides access to this API with automated fallback to HTTP-based long-polling for browsers (or servers) that do not support the new WebSocket API. This allows you to start using this API with Dojo now.

The dojox.socket module is designed to be simple, lightweight, and protocol agnostic. In the past Dojo has provided protocol-specific modules like CometD and RestChannels, but there are numerous other Comet protocols out there, and dojox.socket provides the flexibility to work with virtually any of them through a simple foundational interface. The dojox.socket module simply passes strings over the HTTP or WebSocket connection, making it compatible with any system.

The simplest way to start a dojox.socket is to simply call it with a URL path:

var socket = dojox.socket("/comet");

We can now listen for message events from the server:

socket.on("message", function(event){
var data = event.data;
// do something with the data from the server
});

Here we use the socket.on() event registration method (inspired by socket.io and Node.js’s event registration) to listen for “message” events and retrieve data when they occur. This method is also aliased to the Dojo-style socket.connect().

We can also use send() to send data to the server. If you have just started the connection, you should wait for the open event to ensure the connection is ready to send data:

socket.on("open", function(event){
socket.send("hi server");
});

Finally, we can listen for the connection being closed by the server or network by listening for the “close” event. And we can initiate the close of a connection from the client by calling socket.close().

The dojox.socket method can also be called with standard Dojo IO arguments to initiate the communication with the server. This makes it easy to provide any necessary headers for the requests. For example:
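One possible shape in the pre-AMD style used throughout this post (the header is illustrative):

```javascript
var socket = dojox.socket({
    url: "/comet",
    headers: { "X-Client-Version": "1.2" } // illustrative header
});
```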

We can also provide alternate transports in the socket arguments object. This would allow us to use dojo.io.script.get to connect to a server. However, a more robust solution is to use the dojox.io.xhrPlugins for cross-domain long-polling, which will work properly with dojox.socket.

Auto-Reconnect

In addition to dojox.socket, we have also added a dojox.socket.Reconnect module. This wraps a socket, adding auto-reconnection support. When a socket is closed by network or server problems, this module will automatically attempt to reconnect to the server on a periodic basis, with a back-off algorithm to minimize resource consumption. We can upgrade a socket to auto-reconnect by this simple code fragment:

socket = dojox.socket.Reconnect(socket);

Using Dojo WebSocket with Object Stores

One of the other big enhancements in Dojo 1.6 is the new Dojo object store API (which supersedes the Dojo Data API), based on the HTML5 IndexedDB object store API. Dojo 1.6 comes with several store wrappers, and the Observable wrapper provides notification events that work very well with Comet-driven updates. To use it, we first create a store and then wrap it with Observable:

This store will now provide an observe() method on query results that widgets can use to react to changes in the data. We can notify the store of changes from the server by calling the notify() method on the store:

We can signal a new object by calling store.notify(object) with no id, a deleted object by passing undefined as the object along with the id, and a changed/updated object by including both.

Handling Long-Polling from your Server

Long-polling style connection emulation can require some care on the server side. For many applications, the server may have sufficient information from request cookies (or other ambient data) to determine what messages to send the browser. However, other applications may vary what information should be sent to the browser during the life of the application. Different topics may be subscribed to and unsubscribed from. In these situations, the server may need to correlate different HTTP requests with a single connection and its associated state. While there are numerous protocols, one could do this very easily by defining a unique connection id and adding it as a header for the socket (the headers are added to each request in the long-poll cycle). For example, we could do:

In addition, dojox.socket includes a Pragma: long-poll header on the first request in a series of long-poll requests, to help a server ensure that connection setup and timeouts are handled properly.

We can easily use dojox.socket with other protocols as well:

CometD

To initiate a Comet connection with a CometD server, we can do a CometD handshake, connection, and subscription:

Conclusion

Dojo’s socket API is a simple, flexible module for connecting to a variety of servers and building powerful, efficient real-time applications without constraints. This adds to the array of awesome new features and improvements in Dojo 1.6.

Facebook and FriendFeed’s Tornado is now Open Source
Posted Fri, 11 Sep 2009 (https://www.sitepen.com/blog/2009/09/11/facebook-and-friendfeeds-tornado-is-now-open-source/)

Orbited, cometD-python, and other Python Comet servers have new competition in the form of Facebook’s now open-source Tornado web server. Tornado was part of the technology acquired by Facebook when they purchased FriendFeed last month, and Facebook has decided to open it up under the Apache version 2 license.

Tornado supports long-polling and HTTP streaming, but also includes many of the web site building blocks found in frameworks like Django. This is a really exciting announcement as Facebook and Google (with their Wave product) have both made major announcements around Comet technologies, bringing real-time capabilities to the mainstream, under open source licenses.

Using REST Channels with cometD
Posted Mon, 15 Jun 2009 (https://www.sitepen.com/blog/2009/06/15/using-rest-channels-with-cometd/)

REST Channels provides a mechanism for receiving notifications of data changes and integrates Comet-style asynchronous server-sent messages with a RESTful data-oriented architecture. Dojo includes a REST Channels client module which integrates completely with Dojo’s JsonRestStore, allowing messages to be delivered through the Dojo Data API seamlessly to consuming widgets, with minimal effort. The REST Channels module will automatically connect to a REST Channels server, like Persevere (which offers REST Channels out of the box). However, existing infrastructure may necessitate the use of an alternate Comet server like Jetty’s cometD server. REST Channels can be used on top of another Comet protocol like Bayeux’s long-polling protocol, and with a little bit of reconfiguration, you can use Dojo’s REST Channels with a cometD server to achieve Comet-REST integration.

Receiving Data Notifications

REST Channels is designed to unite the concept of topics with resource locators. Therefore, REST Channels can leverage Bayeux’s topic names to indicate the resource being targeted. We can subscribe to Bayeux channels and delegate all the data notification messages to the restListener, which will dispatch the proper data events through the data stores:
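A hedged sketch: subscribe to a wildcard Bayeux channel and hand each incoming message to dojox.data.restListener. The channel name is illustrative, and the exact message shape restListener expects may differ:

```javascript
dojox.cometd.subscribe("/weather/**", function(message){
    // delegate the data notification to the stores watching this resource
    dojox.data.restListener(message);
});
```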

This could be used in conjunction with a JsonRestStore, and messages would be delegated to the store’s events:

weatherStore = new dojox.data.JsonRestStore({target:"/weather/"});

Now it is possible to publish data change notification messages such that clients will receive and interpret them properly. To publish a data notification message on Jetty’s cometD hub, we can call doPublish() with something like:

Now this message will be routed through the restListener and onSet events will be fired for any attributes that were updated in the local cache of the “/weather/lax” item. If the lax item was being displayed in a data-aware widget, it will automatically be updated. For example, the DataGrid supports data notification events, and if this item appears in a DataGrid, the corresponding row will automatically display the new value.

Subscribing to Resources

One of the key integration points that REST Channels provides between data access and notification routing is in auto-subscribing to resources when they are retrieved. The primary use case for REST Channels is in providing a real-time view of data, and therefore when data is retrieved from the server, REST Channels can automatically subscribe to the resource to receive all future notifications of changes to the object. With a true REST Channels implementation, subscription information is included in GET requests, so that a single request can be made to a server that both requests a resource and subscribes to it at the same time. If you are just using a cometD server, such an integration is not automatic. There are a couple of different approaches for subscribing to resources.

On the client-side, we can send subscription requests when we make GET requests. This can be done by intercepting the REST GET request handler, and creating subscriptions.

All GET requests will then trigger a subscription automatically. One could easily add some conditional logic that only subscribes on a subset of requests.

The biggest disadvantage of client-generated subscription requests is that they generate two requests for each resource access: one GET and one subscription request (as a POST). If the REST handler for a server can be modified to understand the subscription header that REST Channels adds to GET requests, the server can subscribe the client without requiring additional requests. The key to making this work is being able to connect cometD’s client connections with the request handler. One way to do this is to define a Bayeux handler that puts the current client object in the HTTP session, and then retrieve it from the session in the REST handler. The REST handler can then subscribe the client to the resource locator topic.

One tricky aspect of this approach is that it is possible for multiple client connections to exist in a single session. Each page will have its own cometD client connection, but will be in the same HTTP session. Therefore, the client connections need to be stored and accessed by their client id. In order for the REST handler to determine the correct client id for a client, you can have the Client-Id header set on the requests, and then this id can be used to find the correct Bayeux client connection. RestChannels automatically sets the Client-Id header, but the correct client id from the cometD handler must be assigned to the RestChannels clientId variable:
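A loudly hedged sketch of that assignment; both property locations below are assumptions about where these values live, not documented API:

```javascript
// After the handshake completes, copy the Bayeux client id into the
// variable RestChannels uses when it sets the Client-Id header.
// dojox.cometd.clientId and dojox.rpc.Client.clientId are assumptions.
dojox.rpc.Client.clientId = dojox.cometd.clientId;
```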

Together, these techniques combine a REST architecture and a cometD server for data notifications with Dojo’s REST capabilities in JsonRestStore, and provide complete integration with Dojo’s data event notification system.

Stocker: Advanced Dojo Made Easy
Posted Wed, 01 Apr 2009 (https://www.sitepen.com/blog/2009/04/01/stocker-advanced-dojo-made-easy/)

SitePen is excited to announce Stocker, which demonstrates some of the more advanced capabilities of Dojo, including the newly released DataChart, the DataGrid, Data Store, Comet, Persevere, and BorderContainer. SitePen is also offering a one-day workshop where you will learn how to create Stocker yourself, but I’m here to give you a sneak peek of what Stocker is and how it works.

Stocker uses these technologies to emulate a stock monitoring application. We’re using made-up data, but that’s actually more interesting. The Persevere server generates new stock items at certain intervals, and then pushes them to the browser with Comet. Then the Data Store updates its items and triggers an onSet notification. The DataGrid and DataChart are both connected to the same store, and are listening to that event. They then update their displays and show the stock items and their latest data.

Persevere

Persevere, which we chose to use inside the robust Jetty web server, allows us to quickly develop a data-driven web application that can directly interface with Dojo, without having to set up a relational database and create Ajax request handlers. It’s similar to Jaxer, in that it is a rich interactive server-side JavaScript environment (though Persevere uses Rhino), and is accessible via JSON-RPC, meaning Persevere is great for pure Ajax front-ends. SitePen’s Kris Zyp has a large number of articles explaining how Persevere works.

You can even include methods in the schema, which can be used for a variety of things. For example, a go() method in the schema can call the server-side method startUpdateStock(). You can access this method from the command line through this static class:

JS> Stock.go();

In the server-side code, we start with a set of simple objects on which we base the Stock data. An example of one such object looks like this:

{ symbol:"ANDT", name:"Anduct", price:3.13 }

Note: Stocker uses more properties than this, but it is edited for clarity.

When startUpdateStock() is fired the first time, these objects are read in, randomized, and then saved as a Stock instance. When saved, the instance is written to a table and persisted. Basically it works like this: think of Stock as your SQL table, and each instance as a row of data. The columns in the table are represented by the properties that you see in the object above, such as symbol or price. And all you have to do to write to this table is:

new Stock( { symbol:"ANDT", name:"Anduct", price:3.13 } );

And modifying the data is just as easy. We loop through all of the instances in the Stock class (or table) and change the properties of the object. Persevere even automatically commits the changes for you, so this is all you need to do:
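A server-side (Rhino) sketch of that loop; how the full instance array is obtained is an assumption, and the price formula is invented for illustration:

```javascript
var stocks = Stock.instances; // hypothetical accessor for all persisted instances
stocks.forEach(function(stock){
    // nudge the price up or down; Persevere commits the change automatically
    stock.price = Math.round(stock.price * (0.95 + Math.random() * 0.1) * 100) / 100;
});
```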

Notice we were able to use forEach on the instance array. We’re able to use all the functionality of Mozilla JavaScript 1.7, like array functions or getters and setters, and don’t have to worry about cross-browser issues when writing server-side JavaScript in Persevere.

Comet

In Stocker, we create six Stock instances (the number is arbitrary). Once they are created, the client side can fetch them and access their properties using dojox.data.PersevereStore. Of all the data stores available in Dojo, the PersevereStore is my favorite. You pretty much declare your store with a pointer to your class, do your fetch, and you’re done:
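A sketch of that declaration and fetch; the target path and query string are illustrative:

```javascript
var store = new dojox.data.PersevereStore({ target: "/Stock/" });
store.fetch({
    query: "[?symbol='*']",
    onComplete: function(items){
        // items holds the fetched Stock instances
    }
});
```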

PersevereStore provides you with a connection to the server via Comet. When we randomized our stock data on the Persevere server, this committed the change to the database. These changes are then pushed to the browser. You can listen for these changes in the onSet event of the PersevereStore:
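For example, using the Dojo Data notification API (the handler body is illustrative):

```javascript
dojo.connect(store, "onSet", function(item, attribute, oldValue, newValue){
    // e.g. refresh a display when attribute === "price"
});
```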

DataGrid

Now it’s time to start wiring up the user interface, and believe it or not, things get easier here. The DataGrid is largely the work of Bryan Forbes and Nathan Toone, and is a first class user interface component. We started with the HTML structure that matched the properties we wanted to display from our Stock instances:

Stock Name

Symbol

Price
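The headers above might come from a declarative grid sketch like the following (attribute values are illustrative):

```html
<table dojoType="dojox.grid.DataGrid" jsId="grid">
    <thead>
        <tr>
            <th field="name">Stock Name</th>
            <th field="symbol">Symbol</th>
            <th field="price">Price</th>
        </tr>
    </thead>
</table>
```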

You could also add a store and fetch attribute, but we opted to hook this up within the code:

dojo.addOnLoad(function(){
grid.setStore(store, "[?symbol='*']");
});

The DataGrid has a built in fetch which grabs the current stock instances from Persevere, and then updates the items when the store’s onSet event is fired.

We did some CSS styling so the Grid would match our Stocker theme, and we were done with the Grid!

DataChart

The DataChart is brand new for Dojo 1.3, was created by yours truly, and was built specifically for Stocker, as it was obvious that charting was missing the ease of implementation that DataGrid provides. DojoX has an amazing vector graphics charting system written mainly by Eugene Lazutkin. It has a rich API that allows for maximum customization. However, DojoX charts didn’t support data stores, and while heavy customization is good in some cases, it can also create a barrier to entry for new developers. DataChart provides that interface between the two, all while supporting Dojo Data. I did a full write-up of DataChart in a previous post. The intent was to make DataChart as easy to set up as the DataGrid:
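A hedged setup sketch in that spirit; the constructor options shown are illustrative rather than a definitive API listing:

```javascript
var chart = new dojox.charting.DataChart("chartNode", {
    displayRange: 8 // show a sliding window of recent values
});
// same store and query as the grid; chart the "price" field
chart.setStore(store, "[?symbol='*']", "price");
```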

There is also some custom code that places chart legends inline with their respective items in the Grid. There’s also some code that handles switching between some of the chart types (just to be fancy). But other than that, we were done with charts!

BorderContainer

To get the BorderContainer to work for you, you have to understand that it supports subsets of two different designs, headline or sidebar. If you want the sidebar pane to extend from the top of the layout to the bottom, you’d naturally use sidebar. If you want the header and footer to extend the entire width, you use headline. Within either layout you have five regions: top, bottom, left, right, and center. The center pane stretches to fit, so you set the widths or heights of all panes but that one. Each region can either be a ContentPane or another BorderContainer. Stocker is slightly more complicated than standard layouts allow, so we use a nested BorderContainer.

What follows is the structure of the Stocker layout. For readability, dojoType is abbreviated to type, BorderContainer to BC and ContentPane to CP. Stocker uses the headline design, which is the default, so it need not be set. Its center pane is another BorderContainer with a top and center.

Header

Sidebar

Grid

Chart

Footer
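The nested layout above might be sketched in markup like this (full widget names spelled out; sizes and ids omitted, and splitter placement follows the rule that the center pane never carries the splitter):

```html
<div dojoType="dijit.layout.BorderContainer" gutters="false">
    <div dojoType="dijit.layout.ContentPane" region="top">Header</div>
    <div dojoType="dijit.layout.ContentPane" region="left" splitter="true">Sidebar</div>
    <div dojoType="dijit.layout.BorderContainer" region="center" gutters="false">
        <div dojoType="dijit.layout.ContentPane" region="top" splitter="true">Grid</div>
        <div dojoType="dijit.layout.ContentPane" region="center">Chart</div>
    </div>
    <div dojoType="dijit.layout.ContentPane" region="bottom">Footer</div>
</div>
```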

All children of a BorderContainer have a region attribute, which is the location in the design. BorderContainers have two other attributes: liveSplitters and gutters. Setting gutters to true puts a margin around the container. The Stocker margins are handled in the CSS, so these are set to false. The liveSplitters param indicates that one or more of the children will be resizable. A resizable pane acts much like the framesets of yesteryear, which had dividers between them that you could grab and drag to resize the frames.

Therefore, child panes can have the attribute splitter=true, which allows the user to resize them. It’s not obvious which child panes should get this attribute, since a splitter is shared between two panes. The trick is that the center pane, since it’s always stretchy, never gets this attribute; it is always applied to the pane that shares the splitter with it.

The DataGrid and DataChart were inserted in their proper ContentPane containers, the buttons for chart switching were placed in the sidebar, and the header and footer content was implemented. We launched the Persevere server, opened Stocker in our browser and watched the updates!

Conclusion

Stocker was created to show the power of Dojo and some of its more advanced components, and how easily they can be implemented in your site or web application. We’ll also be talking about Stocker and Dojo in a number of upcoming conference talks this year. If you’re interested in learning more about Stocker and how to put it together, drop us a line!

SitePen has workshops planned in various locations around the world, and we’d love it if you could join us. You’ll learn everything you need to build Stocker and more at the following one-day Intro to Dojo, Charts, Grids, and Comet workshops:

A Tale of Two Panels
Posted Wed, 25 Feb 2009 (https://www.sitepen.com/blog/2009/02/25/a-tale-of-two-panels/)

Silicon Valley Web Builder has a series of monthly panels on topics of interest to web application developers. I had the opportunity to attend a pair of events recently, once as a speaker, once as an attendee, and the contrast between the two was intriguing. The first panel in November was focused on Comet, while the most recent panel was a comparison of Ajax toolkits.

As an attendee of the Comet panel, I found the discussion interesting, but it was a bit disheartening and negative. In retrospect, the negative tone reflects the pain and disappointment Comet engineers face in trying to come up with the perfect solution for low-latency data transit across the wire. Michael Carter was the lone optimist, describing the work he has done to date with Orbited, and what the HTML5 WebSocket promises to bring us in the near future. The other panelists were not as optimistic, having been burned by specifications not adopted and the ongoing frustrations with HTTP connection limits, proxy configurations, flaky internet connections, and more, all of which prevent many of the better approaches to Comet from being viable.

There was also disagreement around the name Comet, what it encompasses, and whether it should be called WebSocket, Comet, Reverse Ajax, Ajax Push, etc. Alex and I think of all of this as Comet, whereas some panelists believe that WebSocket is not part of the Comet moniker, which is silly. One of the main reasons that Ajax took off is that people just agreed on the name (though I personally hated it for months), whereas in the Comet space there’s disagreement on just about everything, from the name to protocols to techniques. In my opinion, a race to standardize and simplify are essential if this fledgling set of techniques is to become approachable to a wider range of developers. An out of the box implementation with Google App Engine wouldn’t hurt either!

I’m optimistic because the work of Kris Zyp, Michael Carter, Andrew Betts, Greg Wilkins, Alessandro Alinone, and others is getting us closer to having ideal Comet solutions, but we have a lot of work to do before Comet is as approachable as Ajax.

The second panel, by contrast, was focused on Ajax toolkits. Dojo was represented by yours truly, with great representation by committers to GWT (Fred Sauer), jQuery (Yehuda Katz), MooTools (Tom Occhino), and YUI (Adam Moore), with Michael Carter moderating the panel. In contrast with the Comet panel, we had a lot of optimism and laughs, the result of several years of seeing very impressive advances in what’s possible in the browser. It’s been a great adventure from pre-Ajax to where we are today, and it showed with the sheer enthusiasm by everyone on the panel. We had our differences of opinion and some very funny moments, but what I really took away from this panel is that we’re all learning from each other, have similar goals in helping developers create amazing user experiences, and that there is tremendous interest in the great work being done in creating Ajax toolkits.

Comet toolkit developers of today remind me of DHTML toolkit developers in the pre-Ajax days: a bunch of frustrated, grizzled engineers that wanted to do more, but didn’t yet have enough code, tools, browser support, and/or momentum to get where they needed to go. Comet techniques are widely adopted in applications like the London Paper, Meebo, Gmail Chat, and Facebook Chat, as well as behind corporate firewalls in advanced financial applications, but the approach has not yet hit the developer tipping point like Ajax.

]]>https://www.sitepen.com/blog/2009/02/25/a-tale-of-two-panels/feed/0The Tech of SitePen Supporthttps://www.sitepen.com/blog/2008/08/19/the-tech-of-sitepen-support/
https://www.sitepen.com/blog/2008/08/19/the-tech-of-sitepen-support/#commentsTue, 19 Aug 2008 07:01:12 +0000http://www.sitepen.com/blog/2008/08/19/the-tech-of-sitepen-support/SitePen’s Support service is built using a variety of interesting techniques and technologies. Read on to see how we built a system that treats the web browser as a real client tier and bridges the worlds of JavaScript, Python and PHP seamlessly to provide a great experience for our customers.

Starting with the User

Even though this article is about the technology used to implement SitePen’s Support service, it doesn’t make any sense to talk about technology without talking about the user experience. The technology is there to do something. But, what?

With the SitePen Support service we provide support for the Dojo Toolkit, DWR and Cometd open source projects. We’re out to provide customers with the help they need when they need it. At the highest level, we needed to:

Collect support requests from our customers

Act on them

Keep the customer informed

Follow along with the terms of our support contracts

Without those things, we wouldn’t have a service at all. In addition, there are other requirements for making it the kind of service we’d be proud of:

The customer user interface should be very responsive

The user interface should be less like a content-oriented web site and more like an application

People in a company should be able to work together easily (data should be shared)

Customers should be able to bring themselves up-to-date on what’s happening in their account at any time

Email is still a super convenient user interface

Note that our goals were all about providing a great support service, and not about creating software. If there was off-the-shelf software that would do all of the above for us, at the level of quality we expected, we would certainly have used it. But, there wasn’t. Which brings us to…

The Tech of SitePen Support

To realize those goals for the service, we employed a bunch of different tools and techniques:

Dojo runs the client-side

The client drives the whole interaction

The browser speaks JSON-RPC with the server for most operations

The server is built on Python’s WSGI standard

Client-driven apps have very little “obscurity” to try to hide behind, so we needed to be sure we followed best practices

Off-the-shelf help desk software (HelpSpot) handles part of the work for us

The Client-side Runs the App

A typical web app today looks something like this:

Typical modern web app model

Most interactions are decided by the server. The client makes a request, the server gathers data and uses some sort of template engine to format that data and present it. That cycle is repeated over and over again, with the server always deciding what comes next and how the next bit will be displayed. Many apps today add some Ajax to that (that little JavaScript box at the top of the diagram), but very often the formatting of data is handled by the server and the client just uses .innerHTML to drop the fresh content in place.

For our support application, the model looks more like:

One model for rich client web apps

With this setup, the server is not responsible for the presentation layer at all. The server sends up static HTML files, and the browser does all of the work in displaying the data to the user. This approach was the topic of my PyCon 2008 talk: Rich UI Webapps with TurboGears 2 and Dojo.

Dojo Runs the Client

The entire user interaction in the Support application is driven by the JavaScript client-side code. Dojo is a natural fit for this style of working, with its built-in module system, RPC support, dojo.data interfaces and powerful Dijits.

We set up a very simple “PageModule” system where we use Dojo’s dynamic loading to load a new JavaScript module from the server and then call “initPage” on the code in that module. That will load up any HTML it needs to display and initialize everything for the user. A simple call to spsupport.core.loadContent is all we need to do to move the user onto the next piece of functionality:
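The PageModule pattern described above can be sketched in a few lines. This is a framework-free illustration, not the actual SitePen code: the real application used Dojo’s dynamic loading to fetch modules from the server, and the registry and function names here (other than loadContent and initPage, which the post mentions) are invented for the sketch.

```javascript
// Registry of page modules. In the real app, Dojo's loader fetched each
// module from the server on demand; here they are registered up front.
var pageModules = {};

function registerPageModule(name, module) {
  pageModules[name] = module;
}

// loadContent resolves a module by name and hands control to its
// initPage() hook, which loads any HTML it needs and initializes
// everything for the user.
function loadContent(name) {
  var module = pageModules[name];
  if (!module || typeof module.initPage !== "function") {
    throw new Error("Unknown page module: " + name);
  }
  module.initPage();
  return module;
}

// Example module: each "page" encapsulates its own setup.
registerPageModule("dashboard", {
  initialized: false,
  initPage: function () {
    // In the real app: fetch HTML, instantiate Dijits, request data.
    this.initialized = true;
  }
});
```

The useful property is that moving the user to a new piece of functionality is a single call, and nothing is loaded or initialized until it is actually needed.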

With this kind of modular architecture, we could have a giant application in which everything gets loaded on demand. Combining this setup with Dojo’s build system gives us quite a bit of control over exactly when things load, allowing us to balance initial load time with interactive responsiveness. Any significant “single page” application will need this. The Support application is by no measure a “giant” application, but using good application design techniques like this allows us to add whatever features we need to the application without impacting its load time or responsiveness.

URL Dispatch: Not Just for Servers Anymore

The page content is decided by the JavaScript in the client, not by the server.

URL dispatch is one of the core features of a server-side web framework. It turns out that putting the client in charge moved some of the burden of URL dispatch to the client! When you hit the front page of the Support site, the JavaScript figures out where you really want to go:

/: send the user over to the support page on SitePen’s main site

/?login: give the user a chance to login

/?signup-(someplan): give the user a signup page with a plan selected

When working with server-side frameworks, you get used to URLs being divided up by slashes and everything after the ? denoting extra query parameters. With the client in control, the slashes tell the server what static file to serve up, and everything after the ? tells the client what to display. Given that we only have three different possibilities there, we didn’t have to get fancy with our URL dispatch. You certainly could write a client-side framework that has many of the same features you get from a server-side framework, if that’s what your application needs. The support application only needed to interpret a very small number of URLs.
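The three routes above are simple enough that the dispatch logic fits in one function. A minimal sketch, assuming the client inspects location.search; the function name and return shape are illustrative, not the actual SitePen code:

```javascript
// Client-side URL dispatch: everything after the "?" tells the client
// what to display. Takes the location.search string, e.g. "?signup-pro".
function dispatch(search) {
  var query = (search || "").replace(/^\?/, "");
  if (query === "login") {
    return { action: "login" };
  }
  var signup = /^signup-(.+)$/.exec(query);
  if (signup) {
    return { action: "signup", plan: signup[1] };
  }
  // "/" with no query: send the user to the support page on the main site.
  return { action: "redirect" };
}
```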

Full-stack Framework? Not Anymore!

Our support application doesn’t use server-side templates for the user interface and doesn’t really do URL dispatch. The “full-stack” web frameworks in use today (Rails, Django, TurboGears, CakePHP, Grails, to name a few) are basically defined by their URL dispatch, templates and database support. Given that we didn’t need two of the three of those, we could go a lot simpler on the server than a full-stack framework.

JSON-RPC

In the past, I’ve often used plain old HTTP requests returning JSON results as a convenient and simple mechanism for requesting data and actions on the server. In fact, that was the approach I took in my PyCon talk. For the Support project, we decided to use JSON-RPC instead because it’s a little bit cleaner. In Dojo, making a JSON-RPC request is just like making a function call that returns a Deferred. So, the syntax for using our server-side API was very straightforward. Even better, parameters and return values automatically came across as the correct types (strings, numbers, arrays, etc). The server-side is also simplified, because it does not need to do much in the way of URL dispatch (JSON-RPC POSTs to a single URL) and the server doesn’t need to worry about converting incoming values from strings.

wsgi_jsonrpc is quite easy to work with. We subclassed it to handle our authentication easily and added a neat bit where making a GET request to the JSON-RPC URL would return the service description. That little change made it easy to wire up Dojo on the client:

This small snippet will synchronously load the RPC service description. It works synchronously because the UI can’t do much until it can make RPC calls for data. Then, it pulls the function names out of the response to create the JsonService. From that point onward, we just make calls to spsupport.service.function_name whenever we need to call the server.
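The wiring step can be sketched as follows, in the spirit of dojo.rpc.JsonService: given a service description, build an object with one proxy function per RPC method, each POSTing a JSON-RPC request to the single service URL. The transport is pluggable so the sketch stays self-contained; the names here are assumptions for illustration, not the actual SitePen code, and in the real app each call returned a Dojo Deferred.

```javascript
// Build a JSON-RPC proxy object from a service description of the form
// { serviceURL: "...", methods: [{ name: "..." }, ...] }.
function makeJsonRpcService(description, transport) {
  var service = {};
  var id = 0;
  description.methods.forEach(function (method) {
    service[method.name] = function () {
      var request = {
        id: ++id,
        method: method.name,
        params: Array.prototype.slice.call(arguments)
      };
      // transport POSTs the request body to the single JSON-RPC URL
      // and returns the (eventually Deferred-wrapped) result.
      return transport(description.serviceURL, request);
    };
  });
  return service;
}
```

From that point on, calling the server really is just a function call: service.function_name(args) builds the JSON-RPC envelope and hands it to the transport.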

Each available RPC call is simply a Python function in a module that has some extra metadata attached to it. For example, the request_details function is used to look up a support request by ID and return the detailed information about the request:

The @returns decorator is used to mark the function as one that should be available via JSON-RPC, and also to make note of the return type. The @params decorator, combined with the return type listed in @returns, is used when generating the service description for the client. @auth tells our JSONRPCDispatcher subclass that this function requires authentication. Whenever the @auth decorator is present, the first parameter passed to the function is always the user object. The function itself can then perform additional checks. For example, request_details makes sure that the request is from the same organization as the user.

This design makes it very easy for us to write automated tests for the server side code. I’m personally a fan of test driven development in general, and in the next section we’ll see why automated tests are particularly important for this kind of application.

All Out in the Open

When you provide a rich user interface, particularly using Open Web technologies, you cannot count on security through obscurity. You have to assume that people will study your code and learn about all of your “hidden” URLs that make up sensitive APIs. You would never guess that Gmail doesn’t have a public API, given the number of add-ons people have made for it.

Never trust the client code and requests coming from the client. When creating new, authenticated APIs on the server, the first unit test I write is one that ensures that unauthorized users are given the boot. When working with WSGI and WebOb, writing tests that can run without a server is quite easy:

webob.Request.blank gives you a new Request object that is properly populated to look like a real request. You can then make changes from there to set up your test conditions. In the example above, I’m passing along bad authentication information. At the end, I assert that the result of sending bad authentication information is the expected 401 response.

As easy as testing with WebOb is, unit testing the JSON-RPC calls is even more straightforward. We just call the function directly, as we would any Python function that we are unit testing. This kind of application setup makes server-side testing a breeze.

Of course, there’s a lot more work required to ensure that you’re handling data securely than just authenticating the users. We confirm that the user is authorized to access the data they are trying to access (with unit tests, of course). Using SQLAlchemy, we are not vulnerable to SQL injection attacks. We also run all of the requests over SSL to make sure that customer data is not grabbed off of possibly insecure networks.

HelpSpot: the Extra Layer in our Stack

Once a user is logged in, they’re taken to the /dashboard/ page where they can review their requests and account information and create new requests. The support dashboard is built around everything I’ve discussed so far. Each “page” in the dashboard is a separate module loaded via the loadContent call, and those modules make JSON-RPC requests to retrieve and update data on the server.

When you first log in, you can see the recent support request activity.

All of the details of your current support plan are available on one screen.

The server-side software that we wrote is responsible for keeping track of support plans and gathering up support requests for a given organization so that they can be displayed together. The support requests themselves with their complete histories are all tracked by HelpSpot. Within SitePen, we use HelpSpot’s user interface to update requests, and HelpSpot manages all email interaction. This saved us a good deal of implementation work.

HelpSpot is written in PHP, so we can’t directly call its functions from our Python-based server. One reason we chose HelpSpot is that it offers a solid web API of its own. We make simple HTTP requests to HelpSpot and it returns JSON formatted data. All of those requests are between our support application and HelpSpot. There are some instances where we needed to look up or update data in bulk, and HelpSpot did not have APIs specifically for that. Luckily, HelpSpot’s database schema is nicely designed and easy to understand, so there are instances where we also collect data directly from HelpSpot’s database.

Putting it all together

Bringing new developers up to speed on our support project is simple, because we use zc.buildout. zc.buildout creates a sandbox on the developer’s system with all of the pieces they need to work on the project.

Once we’re ready to deploy, we use a Paver pavement file to describe how to package up the software. Our pavement runs the Dojo build system to combine and shrink the JavaScript files and then bundles everything up into an egg file. The pavement also includes a task that will upload the egg to the server. Using zc.buildout also works great at deployment time, because we just have to run “bin/buildout” in the server’s deployment directory.

Creating the Desired User Experience

I think that the tools and techniques we used in building our support application were nifty and different from how most people are building webapps today. Our approach was all driven by a desire to provide a great user experience that meets our four original goals:

Collect up support requests from our customers

Act on them

Keep the customer informed

Follow along with the terms of our support contracts

The right tools helped us to reach these goals without a giant development budget.

Next month, I’ll be writing about the processes and tools we use to manage the support service within SitePen and ensure that we’re always on top of our customers’ needs.

]]>https://www.sitepen.com/blog/2008/08/19/the-tech-of-sitepen-support/feed/3Client/Server Model on the Webhttps://www.sitepen.com/blog/2008/07/18/clientserver-model-on-the-web/
https://www.sitepen.com/blog/2008/07/18/clientserver-model-on-the-web/#commentsFri, 18 Jul 2008 14:48:55 +0000http://www.sitepen.com/blog/2008/07/18/clientserver-model-on-the-web/Prior to the popularity of the web, client/server applications often involved the creation of native applications which were deployed to clients. In this model, developers had a great deal of freedom in determining which parts of the entire client/server application would be in the client and which in the server. Consequently, very mature models for client/server development emerged, and often a well-designed, optimal distribution of processing and logic could be achieved. When the web took off, the client was no longer a viable application platform; it was really more of a document viewer. Consequently, the user interface logic existed almost entirely on the server. However, the web has matured substantially and has proven itself to be a reasonable application platform. We can once again start utilizing more efficient and well-structured client/server model design. There are certainly still technical issues, but we are now in a better position to build true client/server applications.

Traditional web application development has distributed the implementation of the user interface across the network, with much of the user interface logic and code executed on the server (thin client, fat server). This has several key problems:

Poor distribution of processing – With a large number of clients, doing all the processing on the server is inefficient.

High user response latency – Traditional web applications are not responsive enough. High quality user interaction is very sensitive to latency, and very fast response is essential.

Difficult programming model – Programming a user interface across client/server is simply difficult. When every interaction with a user must involve a request/response, user interface design with this model is complicated and error prone. The vast number of web frameworks for simplifying web development testifies to this inherent difficulty. Some have mitigated this difficulty to some degree.

Increased vector of attack – Unorganized mingling of user interface code with business code can increase security risks. If access rules are distributed across user interface code, as user interface code grows and evolves, new vectors of attack emerge. With mixed code, new user interface features can easily create new security holes.

Heavy state management on the servers – When client user interface state information is maintained by the server, this requires a significant increase in resource utilization as server side sessions must be maintained with potentially large object structures within them. Usually these resources can’t be released until a session times out, which is often 30 minutes after a user actually leaves the web site. This can reduce performance and scalability.

Offline Difficulties – Adding offline capabilities to a web application can be a tremendous project when user interface code is predominantly on the server. The user interface code must be ported to run on the client in offline situations.

Reduced opportunity for interoperability – When client/server communication is composed of transferring internal parts of the user interface to the browser, it can be very difficult to understand this communication and utilize it for other applications.

With the massive move of development to the web, developers have frequently complained of the idiosyncrasies of different browsers and demanded more standards compliance. However, the new major shift in development is towards interconnectivity of different services and mashups. We will once again feel the pain of programming against differing APIs. However, we won’t have browser vendors to point our fingers at. This will be the fault of web developers for creating web applications with proprietary communication techniques.

Much of the Ajax movement has been related to the move of user interface code to the client. The maturing of the browser platform and the availability of HTTP client capabilities in the XMLHttpRequest object have allowed much more comprehensive client side user interface implementations. However, with these newfound capabilities, it is important to understand how to build client/server applications.

So how do you decide what code should run on the client and what should run on the server? I have mentioned the problems with user interface code running on the server. Conversely, running business logic and/or data management on the client is simply not acceptable for security reasons. Therefore, quite simply, user interface code is best run on the browser, and application/business logic and data management is best run on the server side. We can take a valuable lesson from object oriented programming to guide this model. Good OO design involves creating objects that encapsulate most of their behavior and have a minimal surface area. It should be intuitive and easy to interact with a well designed object interface. Likewise, client and server interaction should be built on a well-designed interface. Designing for a modular reusable remote interface is often called service oriented architecture (SOA); data is communicated with a defined API, rather than incoherent chunks of user interface. A high quality client/server implementation should have a simple interface between the client and server. The client side user interface should encapsulate as much of the presentation and user interaction code as possible, and the server side code should encapsulate the security, behavior rules, and data interaction. Web applications should be cleanly divided into two basic elements, the user interface and the web service, with a strong emphasis on minimal surface area between them.

An excellent litmus test for a good client/server model is how easy it is to create a new user interface for the same application. A well designed client/server model should have clearly defined web services such that a new user interface could easily be designed without having to modify server side application logic. A new client could easily connect to the web services and utilize them. Communication should be primarily composed of data, not portions of user interface. The advantages of a clean client/server model where user interface logic and code are delegated to the browser include:

Scalability – It is quite easy to observe the significant scalability advantage of client side processing. The more clients that use an application, the more client machines that are available, whereas the server processing capabilities remain constant (until you buy more servers).

Immediate user response – Client side code can immediately react to user input, rather than waiting for network transfers.

Organized programming model – The user interface is properly segmented from application business logic. Such a model provides a cleaner approach to security. When all requests go through user interface code, data can flow through various interfaces before security checks take place. This can make security analysis more complicated, with complex flows to analyze. On the other hand, with a clean web service interface, there is a well-defined gateway for security to work on, and security analysis is more straightforward: holes can be quickly found and corrected.

Client side state management – Maintaining transient session state information on the client reduces the memory load on the server. This also allows clients to leverage more RESTful interaction which can further improve scalability and caching opportunities.

Offline applications – If much of the code for an application is already built to run on the client, creating an offline version of the application will almost certainly be easier.

Interoperability – By using structured data with minimal APIs for interaction, it is much easier to connect additional consumers and producers to interact with existing systems.

Difficulties with the Client Server Model on the Web

There are certainly difficulties with applying the client/server model to the web. Accessibility and search engine optimization can certainly be particular challenges with the web services approach. Handling these issues may suggest a hybrid approach to web applications, where some user interface generation is done on the server to create search-engine-accessible pages. However, having a central architectural approach based around a client/server model, with extensions for handling search engines, may be a more solid and future oriented technique for many complex web applications.

Our Efforts to Facilitate the Client Server Model

SitePen is certainly not alone in working to facilitate client/server architecture. However, since I am familiar with the projects we help create, I did want to mention our approaches to the client/server model:

DWR – From inception, DWR has provided an excellent framework for building client side user interfaces that can easily connect with server side business logic. DWR was years ahead of its time in establishing a framework that encouraged good client/server modeling. DWR has a solid structure for interacting with Java business logic objects. DWR has continued to progress, providing means for bi-directional Comet based communication (Reverse Ajax), and is adding more interoperability capabilities as well.

Dojo Toolkit – It should be obvious that building a good client-side user interface can benefit from a good toolkit, and Dojo has long provided just that. However, Dojo is more than just a library and set of widgets. Dojo provides real tools for building client/server applications. Dojo RPC provides tools for connecting to web services, and can even auto-generate services based on SMD definitions. Dojo Data provides a powerful API for interacting with a data model. Dojo has led the way with Comet technology, creating standards around browser based two-way communication. Recently we have built the JsonRestStore which allows one to connect to a REST web service and interact with it using the Dojo Data read and write API. This greatly simplifies the construction of user interfaces by simplifying the user interface-business logic interaction, and encouraging standards-based communication that can easily be used by others. Furthermore, Dojo provides comprehensive tools for robust data-driven applications; even templating can be done on the client instead of the server with Dojo’s DTL support.
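The core idea behind JsonRestStore can be sketched in a stripped-down form: a data store whose mutations are queued and flushed to a RESTful endpoint as POSTs and PUTs when saved. This is an illustration of the concept with invented names, not the dojox.data.JsonRestStore API itself, which implements the full dojo.data read/write interface; the HTTP function is pluggable to keep the sketch self-contained.

```javascript
// Minimal REST-backed store sketch. "target" is the service root,
// e.g. "/Customer/"; "http" is a function(method, url, body).
function RestStore(target, http) {
  this.target = target;
  this.http = http;
  this.dirty = [];   // queued changes, flushed on save()
}

// Creating an item queues a POST to the collection URL.
RestStore.prototype.newItem = function (attrs) {
  this.dirty.push({ method: "POST", url: this.target, body: attrs });
  return attrs;
};

// Changing an attribute queues a PUT to the item's own URL.
RestStore.prototype.setValue = function (item, name, value) {
  item[name] = value;
  this.dirty.push({ method: "PUT", url: this.target + item.id, body: item });
};

// save() flushes all queued changes as plain HTTP requests.
RestStore.prototype.save = function () {
  var http = this.http;
  this.dirty.forEach(function (change) {
    http(change.method, change.url, change.body);
  });
  this.dirty = [];
};
```

The point of this design is that the user interface code manipulates items through a uniform data API, while the communication that results is plain, standards-based REST that any other consumer could understand.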

Using standards-based client/server communication has facilitated integration with server frameworks like Zend, jabsorb, and Persevere, and interoperability with other frameworks will be coming soon.

Cometd – Cometd provides real-time duplex communication between clients and servers. However, the distinguishing characteristic of the Cometd project is the focus on not only achieving Comet-style duplex communication, but doing so with an interoperable standard protocol, Bayeux. Cometd uses a quintessential client/server approach. Any Cometd (Bayeux implementing) server can interact with any Cometd/Bayeux client. One can easily connect various different client implementations to a single server, by using the Bayeux standard.

Persevere – Persevere is a recently launched project, built with this service oriented client/server approach. Persevere is a web object database and application server with RESTful HTTP/JSON interfaces, allowing applications to quickly be built with a database backend that can be directly and securely accessed through Ajax. Persevere is focused on providing a comprehensive set of web services interaction capabilities through standard interoperable communication. Data can be accessed and modified with basic RESTful JSON interaction, clients can invoke methods on the server with simple JSON-RPC, and data can be queried with JSONQuery/JSONPath. With Dojo’s new REST data store, and SMD driven RPC services, Dojo clients can seamlessly build applications and interact with Persevere services using the Dojo APIs. Complex application logic can be added to the persisted objects in Persevere to facilitate building service oriented applications with a straightforward interface to the user interface code on the browser. Persevere is integrated with Rhino, so model and application logic can be written in JavaScript, providing a consistent language and environment for distributing client and server roles.

For more information about any of these open source projects, visit the SitePen Labs page.

Summary

As the web platform matures, as applications evolve to use more interactive and rich interfaces, and as web services increasingly interact, architecting web applications with an intelligent client/server model will become increasingly important. A properly designed client/server model will provide a foundation for modular, adaptable, and interoperable applications equipped for future growth.

]]>https://www.sitepen.com/blog/2008/07/18/clientserver-model-on-the-web/feed/11Comet and Javahttps://www.sitepen.com/blog/2008/05/22/comet-and-java/
https://www.sitepen.com/blog/2008/05/22/comet-and-java/#respondThu, 22 May 2008 13:04:29 +0000http://www.sitepen.com/blog/2008/05/22/comet-and-java/One of the difficulties implementing Comet on Java is the lack of any acknowledgement in the current Servlet spec (v2.5) that any HTTP connection may be anything other than short-lived. Unlike many of the other components in the JavaEE stack, servlets are ubiquitous, so we don't really have the option of using an alternative.

Servlet version 3.0 is in the works; several of the people who blog at Comet Daily are on the Servlet spec expert group and want to see this oversight fixed, but it will be a while before the spec is done, and even longer before we can rely on its support everywhere.

The passThreadBackToOS() call is where we need to hand-wave a bit – "this is not the thread you are looking for".

We need a way to get control back in several circumstances:

There is output (some Comet techniques require re-connection on output)

We've waited longer than 60 seconds (the 'why?' will have to wait for another post)

Something has gone wrong with the connection and we need to reset

The container wants to shut down

So we're going to have to register something to wake us up whenever we want to, but for now we're going to continue hand-waving.

This kind of code would be possible if we were to use full continuations – the kind that are available in RIFE. However, full continuations don’t mix well with Comet. The whole point of wanting to reduce thread usage is to reduce resource consumption, and full continuations are fairly heavy on resources. (Interestingly, continuations have been available at a JVM level for a long time, and there is talk of them being opened up at an API level before too long; however, I suspect they'll still be the wrong tool for the job.)

So given that we can't do passThreadBackToOS(), what are the options for Comet?

Just wait()

If you are only allowed to use the Servlet spec then you have no option but to call wait() and arrange for something to notify() us when it's time to do something.

Clearly this doesn't scale well. DWR has a nice way to gradually fall back to polling if the server gets overloaded, but for many systems this won't be needed. What are the options if we can assume some support from the Servlet engine?

Throw back to the start of Servlet.service()

The expense with full continuations is in storing the stack. What if we drop the stack storage? We pass control back to the web container using a special exception (this is how continuations were implemented in RIFE under the covers), and then just re-call doPost() at a later date.

This is how Jetty supports asynchronous servlets in version 6.0. You can read about Ajax Continuations, but note that they are different from full continuations because they're not saving the stack, just restarting the Servlet.service() method. Grizzly has a similar API to the Jetty one, however its implementation is closer to the idea below.

While this was the first available way to do async servlets, the general consensus now is that the implementation (using Exceptions to restart the method) is a little hacky, and better options are available.

Finish Servlet.service() and have something else signal end of the request

The most common pattern, and the one being adopted in Servlet Spec v3.0, is to have a ServletRequest.suspend() method. This says to the container: "When Servlet.service() ends, we're not done. Only end the request when something calls ServletRequest.complete() or ServletRequest.resume()."

Putting it all together

The tricky thing is that these three models do radically different things in order to wait. The stack essentially travels in three different directions: throwing unwinds it backwards, returning unwinds it forwards, and waiting freezes it.

It's tricky to come up with a single model that can accommodate all three approaches. If you are trying to unify them, you should take a look at the PollHandler class in DWR, and how it deals with the Sleeper and Alarm interfaces. There is also a Getting Started guide.