Entrepreneur, father, and interactive software developer.

I worked with Flash for some 11 years (4.8 of them at Adobe) because as a creative medium “it just worked”. Sure, the medium could be used incorrectly, but it was great: it had no rules, few limitations, constantly improved, had a great community, and it was ideal for creative graphics/application/gaming purposes. If you could imagine it, you could build it. It was fun. Until 2007, when new mobile web runtimes arrived without plugin APIs, and it didn’t work.

When I departed Sencha in late 2012, I had been following CreateJS:EaselJS closely after the launch of Atari Arcade. When I was approached to build a print design editor in HTML5, we explored CreateJS and many other libraries. For a month, we evaluated graphics libraries and built POCs of the many different aspects we needed within the app, answering questions like:

Fonts?

Graphics?

Image Manipulation?

Layering?

Rendering Control and Performance?

DOM <-> Canvas interaction?

Platform support?

CreateJS:EaselJS passed these tests very well, and we found that it meshed well with TypeScript, a new JS compiler we were looking to use. Nine months later we shipped WalmartStationery.com, and we have since shipped 8 versions across 2 holiday seasons for Harland Clarke’s print partner websites.

CreateJS:EaselJS adapts the Flash DisplayObject API to HTML5 Canvas in an elegant manner, yet enables lower level access to control rendering to HTML5 canvas and WebGL. addChild, removeChild, addEventListener, the .graphics API: they are all there and work great. Actually, the graphics API is so good that I wrote txtjs, a font and typesetting engine for CreateJS, which we now use for print quality text layout. A runtime’s ability to render many SVG path glyphs programmatically is a tall order, yet the API in CreateJS allowed us to ship with great performance and very little optimization. Provided you have a web runtime that supports Canvas, CreateJS “just works” and imagination is the limit again. Fun. Refreshingly fun.

Today, with the release of CreateJS 0.8.0, the library has reached a level of maturity and is ready for larger scale production use. In fact, we have already deployed CreateJS:EaselJS 0.8.0 into all our projects with no issues. I am looking forward to using this library more and more in the years ahead; it is great tech.

I wanted to highlight how we handle rendering on <canvas> with CreateJS/EaselJS, as this approach has drastically improved our app performance. First, let’s look at how things are typically done. There are two common approaches to rendering:

The constant frame rendering approach attempts to render at a frame rate, typically aligned with the browser’s requestAnimationFrame. The problem here is that you are usually rendering more than you need. If the canvas contents are idle (no visual change), every frame render is a drain on system performance. Essentially you are doing work with no real benefit.

The OnDemand rendering approach renders frames only as you need them. The problem here is that the frequency of rendering can exceed what you need during heavy use. Typically you have many elements that call render, so if you update many items at once, there will be a flood of rendering calls. Unlike constant frame rendering, with OnDemand rendering it is easy to create a hotspot where you render a ton, then nothing at all.

We tested both of these approaches when designing our print applications (walmartstationery/expressionery/iprint) and we fell into a middle ground… OnDemand Deferred Rendering. With this approach our code calls the renderer as we need to but actual rendering is deferred to align with a frame rate we set which happens to align with RequestAnimationFrame. It is a mix of both models in that things always render at a set frame rate but we choose when we render.

Here is how it works:

Create a “render” function to call when we want a render performed.

This call simply sets a “renderFrame” variable to true, regardless of its current value.

Even if every element calls render(), it simply sets “renderFrame” to true.

This is a very light operation.

Create the “real” rendering function to update the stage only when “renderFrame” is true, then reset “renderFrame” to false.

Create a Ticker event to call a realRenderer() at a frame rate aligned with RAF:

createjs.Ticker.timingMode = createjs.Ticker.RAF_SYNCHED;

createjs.Ticker.setFPS( 30 );

createjs.Ticker.addEventListener( "tick", realRenderer );

So we have a frame rate, but you must call render() for any real rendering to occur. If you never call render(), nothing happens and <canvas> is essentially an image. If you flood the render() call, it will only render once per frame. If render() is called after realRenderer() has run, you just get 2 frames properly spaced at the frame rate, then back to no rendering.

With this technique we found that performance was very high even on mobile devices, yet we never overloaded the system during heavy interactivity. Better still, when the user stopped interacting, the system would go idle and have time to recover. In many cases graphics performance was 3x that of using a constant framerate, and we never had a hotspot in our rendering.

It has been 24 months since I started working with TypeScript on our project at Harland Clarke. We were a very early adopter, but even then 0.8 showed great signs of improving our team workflow and enabling us to build and maintain a large JS codebase. At the time, we figured that the quality of the JS output was clean enough that if TypeScript were abandoned we could continue development from the raw JS. Since then we have migrated the codebase across every release of TS and have a team of 5 actively maintaining and advancing the application(s). Overall it has been a great experience and has taught me a great deal about JavaScript and team development.

Compilers are very smart, very fast, and very exact; humans, however, vary wildly in all these areas. Humans make mistakes. I make mistakes, every day. In a team environment my mistakes impact others, cause delays, and generally make a larger mess. When we started work at Harland Clarke we wanted to minimize our mistakes and be able to correct them quickly before the team was impacted. We chose TypeScript to minimize human mistakes, but over time we saw lots of additional value.

1) Compile-time errors > Runtime errors.

TypeScript will rather quickly show you complex errors at compile-time. This prevented invalid builds from entering version control and resulted in a higher quality codebase. Better still, these errors are very useful both in refactoring and in seeing deeper into a codebase during development. With TypeScript we compile and edit more, and test only when the build is valid.

2) You must be this tall to ride.

The compiler forced our team to a quality level before code moved into the team environment. Regardless of how senior or junior, everyone was accountable to the compiler. If your code did not validate, you learned why and fixed it. I know several developers improved the quality of code they wrote over time as a result. Some also found ways around the compiler but as we typed more of the codebase, more errors would surface for us to remove.

The type system in TypeScript is 100% optional. This is a great feature as you can “type-down” important areas of the codebase and all dependent code must conform yet leave other areas alone. We did this within our model classes and within our graphics api use ( CreateJS & Canvas ) and in these areas we were much stricter than others in terms of extension or use. Some of our more recent work was building a text/font engine extending CreateJS/EaselJS in TypeScript and we integrated this as an external library complete with its own type definition file, *.d.ts.

3) Validate all code at compile-time.

TSC takes all the code you feed it and validates it at compile-time. It makes sure that any access or use of other developers’ code is valid, down to the types you pass as arguments, the type of data/object returned, and every member variable. Over time this added lots of value in that every compilation is a full validation of the codebase. We know the app is valid across the whole codebase before we test.

4) Refactoring bliss

Refactoring in TypeScript is one of my favorite things. I tend to make all my changes at once and then run the compiler to see how the changes impact the codebase. This turns the compiler error output into a handy todo list with line/row numbers. Oftentimes you will discover dependencies you never knew you had. Change a method name, compile, and see everywhere it is used by line number. Change the data type passed and see all the incompatible calls. Delete a variable and see the impact. Upgrade the TypeScript compiler from 0.8.3 to 0.9.7 and see all the resulting errors.

5) External Libraries

External JS libraries can be integrated seamlessly into a TypeScript project even though they are written in JS. The key is support for definition files. A TypeScript definition file describes the interface of an external JS library and adds type support to restrict your use of the API via the compiler. Rather than being limited to TS libs, you can add definitions to structurally type your use of external libs like JQuery, VueJS, EaselJS and many, many others. If you need a definition file, head on over to DefinitelyTyped for the latest from over 2,216 external libraries. If you cannot find one that fits, writing a definition file is fairly simple and easy to learn.
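As a sketch, a minimal definition file for a hypothetical global library might look like the following; every name here is invented for illustration, not taken from any real library.

```typescript
// legacycharts.d.ts — describes the surface of an external JS library
// so the compiler can check our usage of it. No JS is generated from this.
declare module legacycharts {
  export var version: string;
  export interface ChartOptions {
    width?: number;
    height?: number;
  }
  export function draw(targetId: string, options?: ChartOptions): void;
}
```

With this file referenced, any call like legacycharts.draw(42) fails at compile-time instead of at runtime.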

Is TypeScript appropriate for my project?

The real advantage of TypeScript comes in the form of type and interface validation. Most projects will benefit from this; some will not. My rule of thumb is 2+ devs and 5+ classes; anything past that and you will benefit tremendously from using TypeScript. Compiling between changes is not for everyone, and for those coming from JavaScript it can seem like a downgrade initially. Compared to pressing Refresh in the browser, running a compiler feels antiquated. In general, the benefits to team development need to outweigh the individual compiler burden, and this is often a tough sell on small projects. Many of the devs on our team (4 of 5) suggested we just use JS at the start, yet after we had lots of code, all 5 of 5 were thankful we used TypeScript. It will slow down your development workflow, but it will improve code quality and reduce the testing cycle (no invalid builds are tested). This is a trade-off you will have to make, but after 2 years, I feel strongly that slow and steady wins the race.

There are 100 different ways to structure a TypeScript project, but after doing this 100 times, my ways are rather set in stone. I prefer a single build.ts file in the root of a project with internal references (/// <reference path='../tools/requirejs/require.d.ts'/>) to the other project files in dependency order. By compiling one file, all files are processed and output like so:

// compile the project
tsc build.ts

//or with AMD module support
tsc --module amd build.ts
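A build.ts in this style is nothing but reference comments in dependency order; these paths are invented for illustration, not the actual project layout.

```typescript
/// <reference path='../tools/requirejs/require.d.ts'/>
/// <reference path='util/StringUtil.ts'/>
/// <reference path='model/Design.ts'/>
/// <reference path='view/EditorView.ts'/>
/// <reference path='Main.ts'/>
```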

I find this structure is easy to walk into and get working. It is simple, and adding additional workflow above it is easy with grunt or gulp. Within our latest project we used gulp for compilation.

Having used anything for 2 years, you run into a few things you would like to see changed. Here is my list.

[NOTE: Since added as union types.] Type cascade – Rather than only providing one type, I would love to hand the compiler a set of appropriate types like so:

var foo:number|string = 123;

var foo:Solid|Liquid = new Liquid();

TypeScript Configuration Metadata – It is hard to denote exactly how you want referenced files handled within a single compiler pass. While there is a notation for referencing files (adding an external file into the compiler within a .ts file), there is no output notation other than the import/export module syntax. Ideally one could add output values to the references to control compiler output. I prefer to have a single build.ts file in the root of the project and compile through it, thus the need for options for packaging, module syntax, concatenation, consolidating the __extends calls (subclass wiring), and optionally suppressing file output entirely (build.ts always results in an empty .js file). [NOTE: It turns out any file named *.d.ts will not be output by the compiler, but any references within it will. This makes a single build.d.ts file an ideal entry point into the compiler.]

Better ES6 support

Import/Export

Destructuring assignments

Macros – I have found that you can extend the TypeScript language with Sweet.js. Being able to define new language-level extensions to the compiler would be a great feature.

Summary

I feel lucky to have started using TypeScript when it first came out and to have learned a ton about building larger scale JavaScript projects. My consulting clients at Harland Clarke have benefitted greatly and hold a maintainable, high quality codebase as a result. During our first 12 months of development, we saved at least 3 months by not testing invalid builds. Saving 9 dev-months (3 devs initially) of time is a non-trivial benefit, and it says nothing about the maintainability or quality benefits TypeScript provided. Our later projects have benefitted the same, yet we are rarely building from scratch, and typed library reuse becomes even more beneficial. At first TypeScript holds you back by adding a compilation step, but it accelerates team development by keeping each developer's build valid and improves code quality.

TypeScript is easily one of the best technologies to emerge from Microsoft in a long time. Having seen its impact on the quality of our team output, I would strongly recommend using it. Everyone makes mistakes, but having a quality compiler highlight those errors before they impact your project is a big win. TypeScript will continue to be my co-pilot for large JavaScript applications.

Having participated in the Salesforce “Hackathon” and watched the controversy unfold during judging and after, I want to share my opinion.

The event at Dreamforce was NOT a “hackathon” by definition, it was something new and something very different. The entries were more serious, the teams working on solutions were professional, and overall the stakes were higher in terms of raw prize money and visibility at Dreamforce. Along these lines, Salesforce deserves credit for creating a new type of event and should be given the license to make mistakes and innovate, rather than being judged as a classic “Hackathon”.

I feel that the “hackathon” should change into a year-long developer event to create new solutions against Salesforce/Heroku and steer clear of the short-term “hackathon” distinction. Here are my thoughts on next year's event, given that plans are afoot:

Dreamforce “Appathon” – A year long developer contest to create the best salesforce/heroku app, judged at Dreamforce.

12 months to develop, start now.

Due week prior to Dreamforce.

NDA developer access to new APIs.

Award top 25 teams a mini-booth at Dreamforce to present solutions.

Attendee Voting: Dreamforce attendees vote for top 10 solutions.

Judging: Judging panel determines top 5 and winner.

If the goal is to create new apps that integrate deeply with Salesforce/Heroku and to provide visibility to what is created then shifting towards an “Appathon” seems more appropriate.

This morning I received an email that zip.ly was rejected as a finalist in the Salesforce Hackathon, thus ending a 3 week sprint of software development. Attempting to assemble a product in 3 weeks is an act of insanity, but with consulting turned off for the year, a good domain name, and the desire to work on something new, our team jumped in head first. The experience was well worth it, as I found myself learning more in these 3 weeks than I had all year. It was very rewarding and yielded some unexpected fruit in terms of skills and a foundation to build services on.

Special thanks to the team at SalesForce who executed the Hackathon. It was nearly perfect; the only things I would change are adding a RedBull soda fountain onsite next year and forging a partnership with Mel's Diner around the corner to keep it open all night for Hackathon attendees. As someone who has run developer events, I can say the execution at Dreamforce was world class.

I learned a metric ton in building zip.ly and wanted to share the list of technology, what worked great, and what did not. As with all technology decisions, perspective matters, and over the course of development my opinion shifted on several technologies in use.

Server-side I used Python/Flask/Jinja/SQLAlchemy/PostgreSQL, and this was by far the best decision. The server was easy to change, and the mature ecosystem surrounding Python is a great asset. I quickly found key libraries for handling imaging and PDF generation in PIL, ReportLab’s pdfgen, pyPdf, and xhtml2pdf. If you need to generate any type of PDF document, between these 4 you can get the job done fast. The hardest part was not generating the files themselves but working around not having a file system on Heroku. I found several key workarounds using Python’s StringIO and BytesIO that proved to be excellent solutions for avoiding reading and writing to disk.

For persisting files, I leaned on Amazon Web Services via the Python Boto library. It is a great library and makes working with AWS easy. Once you get keys and permissions set, you can control AWS like a boss with very little code; writing a PDF to an S3 bucket takes only a few lines of Boto.

The trick with Boto is that keys are optionally pulled from environment variables, which aligned well with Heroku.

Jinja templating was also a standout, as I switched to using “extends” templating late and it saved us. I was quickly able to reuse assets from a library of markup and drastically change our UI late. Looking back, I would not have finished the app had we not done this. Jinja just works.

SQLAlchemy is amazing. It is easily one of the best DB/ORM abstractions I have ever used. Yes, I have used Hibernate, but SQLAlchemy is an ORM you can control, and it avoids the dark mysteries of schema migration hell. Rather than having things happen automatically, SQLAlchemy makes you tell it what to do. For example:

In zip.ly I did the db.drop_all() and db.create_all() as part of updating the app. It was trivial to generate schema changes or a whole new schema from our model classes. I will be using this library for years to come… A+.

About 1 week into development, the $99 entrance fee was refunded by SalesForce, so I decided to spend those funds on an SSL certificate for zip.ly. I purchased the cert from RapidSSL and learned the ins and outs of generating .CSR, .KEY, and .PEM files and applying them successfully to a domain on Heroku. With the SSL cert in place, we could integrate SalesForce.com OAUTH and thus call the SalesForce REST API.

In the process of integrating SalesForce OAUTH2, I struggled with several libraries but quickly fell back to using Requests in Python. In short order I had OAUTH fully working and was able to call the SalesForce REST API for identity and adding contacts. The REST API at SalesForce is a real gem to work with given its simplicity. Once you get the AuthToken, you can call any API by setting the “Authorization: Bearer” header with the AuthToken and adding a JSON payload. This makes deeper SalesForce integration easy.

Client-side was a frustratingly bumpy road. While I had great ambitions of a Single Page Application, late in development things came unglued, so we fell back to server-side page rendering. As part of the learning on the app, we started using AngularJS, which is an amazing library when you know what you are doing. Unfortunately, while integrating AngularJS with our JSON API, I started to see errors that I did not understand, and with the Hackathon deadline looming 36 hours away, the only choice was to rip it out. We will return to AngularJS after learning far more.

The “B” word… “Bootstrap”. We picked Bootstrap early, and I regret the decision. Bootstrap is brittle: you must accept things the “Bootstrap way” or you will struggle. For some projects it is great and will get you started fast, but it is hard technology to integrate when you know exactly what you want. The CSS is filled with !important, so you quickly descend into !important hell and are covered in some sticky goo that you need solvent to remove. I will not go back to Bootstrap unless they restructure and remove the !important mess within the framework. Again, it is great for some things, but for detail work it can get hellish. I am tempted to move towards Topcoat or Pure when we revisit UI selection.

EaselJS, I LOVE YOU! If you want to rock the canvas in an app, look no further than EaselJS. I had to rebuild the signing aspect of the app late to support multiple signatures and embedded text, and had it fully working inside a 5-hour session onsite at the hackathon. I am now extremely biased towards EaselJS, as when pressed for time, this library was predictable and made it easy to get things pixel perfect. I even had time to spare on this element and decided to add layouts for device orientation to the signature panel. Canvas also pairs really well with FormData in JavaScript, as I was able to snap images of the signature panel, iterate over the stored data, and upload via XHR. Here is a snippet:
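A sketch of that capture-and-upload flow; the endpoint, field name, and helper function are my own stand-ins for illustration, not the production code.

```typescript
// Pull the base64 payload out of a canvas data URL,
// e.g. "data:image/png;base64,AAAA" becomes "AAAA".
function dataUrlToBase64(dataUrl: string): string {
  return dataUrl.split(",")[1];
}

// Browser-side: snapshot the signature canvas and POST it via XHR.
function uploadSignature(canvas: HTMLCanvasElement, url: string): void {
  var form = new FormData();
  form.append("signature", dataUrlToBase64(canvas.toDataURL("image/png")));
  var xhr = new XMLHttpRequest();
  xhr.open("POST", url);
  xhr.send(form);
}
```

Server-side, the base64 field decodes straight into image bytes, which pairs nicely with the no-filesystem workarounds mentioned earlier.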

When it was all said and done, the big discovery in writing zip.ly is the HTML/Markdown/Images to PDF developer workflow. It allowed me to manipulate the PDF generation late in development and template it for end users all while generating PDF rapidly server-side. Our team will be investing more to make zip.ly both a Heroku Add-on and a small business centric native app for document signing on the go. This feels like the right pivot to make in the wake of this 3 week development sprint but we will see what time does to that decision.

I had a great time participating in the SalesForce $1M Hackathon and I walked away having learned a ton and met some great people. I would encourage you to take a risk and build something outside your comfort zone, and see what happens. You will learn a ton and you will be surprised by the results.

Since December 2012, I have used TypeScript as my primary language while working on a large scale enterprise project due to ship next month. I want to share the details on how we are using TypeScript as a team and our workflow that has made our project a success.

TypeScript?

TypeScript is an open source language and compiler written by Microsoft that runs on NodeJS. The language is based on the evolving ES6 spec but adds support for types and interfaces, and it generates JavaScript (ES3 or ES5 dialects, based on a flag). TypeScript’s compiler is written in TypeScript and runs on any compliant JavaScript runtime; by default it is distributed as an npm package on NodeJS. At the end of the day, JavaScript is generating JavaScript. For more information see wikipedia or typescriptlang.org.

Evaluation

In November 2012, we selected technologies, and our initial evaluation of TypeScript proved surprisingly beneficial. While we evaluated Haxe, Dart, and CoffeeScript, we quickly homed in on TypeScript given that it is an ES6+ to JavaScript compiler. We wanted all code to be JavaScript, but we wanted to inject structure into our development process and be able to lean on the compiler for validation and richer errors. Really, our choice boiled down to either using JavaScript or TypeScript. From there we wrote several small scale prototypes and quickly exposed the following:

Validation – TypeScript enabled us to validate code usage cross-modules at compile-time. In assigning types to variables and to method arguments, we were able to effectively validate all call/get/set across every module, every build. If a property was set to type bbox.controls.Image, nothing would satisfy the compiler but an Image instance or a subclass.

Early errors – We would get very detailed errors from the TypeScript compiler and with the addition of types and interfaces, the errors got even more specific.

Zero calorie types – TypeScript’s types and interfaces evaporate at compile time leaving no trace of the language while generating clean JavaScript.

ES6 Today – TypeScript is based on ES6 with the addition of types and interfaces. It let us write source in a modern dialect of JavaScript, yet output compatible ES3 or ES5 with a compiler flag. With support for a modern class syntax (constructor, subclassing, interfaces, export, modules) it made code organization painless.

Build process – One of the first checkins and tests was to create a team build process with Ant. With a properly configured environment any developer could sync with SCM and build a working local server with all code packaged for development or production use. The build process integrates Less, RequireJS, Uglify2, TypeScript, template processing, and server generation.

Usage

In the months that followed our evaluation we settled into a team workflow with TypeScript that really benefited our project. The build process is at the center of development and our daily work. Every day looked like so:

Update from SCM

Run ‘ant all’ ==> Full build + start local server

Run ‘ant dev’ or ‘ant ts’ ==> Incremental build

Error at build or Test in Browser

Rinse & Repeat

I really enjoyed this model; over time we spent far less time testing in the browser than on prior JavaScript projects. As the build would catch syntax/interface/type/usage errors, we only browser-tested when the build validated and worked. Once we all got better at decoding the TypeScript compiler's error output, we gained a level of productivity that I have not encountered before in web development. It sounds odd to spend 5 seconds compiling, but in the end it removed an entire class of useless testing of non-working code from our development schedule. This aspect alone saved us 2 months of time while dramatically improving the output code quality.

When we began development, the quality and quantity of available TypeScript definition files was not ideal. Definitions allow you to strongly type the interface of external JavaScript libraries, so your code must conform to the definition at compile-time. Since starting this project, definitions have done a complete 180, with contributions to DefinitelyTyped for just about any major or minor JavaScript library in existence. We utilized the definition files for Require, JQuery, Backbone, Bootstrap, Underscore, and EaselJS within the build process. To add a definition file, you simply add reference statements like the following to your main .ts file:
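The statements are one-line triple-slash comments; a set like this covers the libraries named above, though the paths here are illustrative.

```typescript
/// <reference path='defs/require.d.ts'/>
/// <reference path='defs/jquery.d.ts'/>
/// <reference path='defs/backbone.d.ts'/>
/// <reference path='defs/bootstrap.d.ts'/>
/// <reference path='defs/underscore.d.ts'/>
/// <reference path='defs/easeljs.d.ts'/>
```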

The references above are also how you add any external libraries to TypeScript. In a way, definitions, interfaces, and TypeScript classes all operate in the same way. In order to simplify our build process, we unified all these calls in a single init.ts file which, when fed to the compiler, loads in all TypeScript needed by the application. Even classes that are intended to be loaded as modules are denoted here so that they are compiled to external module files. Note the use of the export class syntax in such an external module; this tells the compiler to keep the file external as a module, and the “--module” compiler flag formats modules to conform to either AMD or CommonJS format.

Another elegant item is TypeScript's module system for code organization. Each class and variable is exported onto a path off the browser's window object, making it global. We utilized the namespace ‘bbox’ for all classes and variables, and with typing support you can quickly build a very well defined namespace for your application. Our modules start in ‘bbox.controls’; at runtime you can find a class like Bounds exported to bbox.controls.Bounds. Considering we have 60+ classes, we used the module system to its limit and it never missed a beat. You are free to export and type individual variables as well as whole classes, so you are not locked into everything being a class. In many ways a typed, exported variable works great for singletons within a module, as it limits the type written and the location accessed. With the additions of modules and types, the way you work with JavaScript simply changes, and it becomes a blurry line what is considered bad practice. With a first class compiler that can detect overwriting an object with an array, or with a very specific class type, you begin to work differently as TypeScript induces its own working model as a language. It isn't JavaScript, yet it is.
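A sketch of the pattern: the module path and class name come from the post, but the class body is invented for illustration.

```typescript
// Everything inside module bbox.controls is exported onto a global
// `bbox` object, so the class is reachable as bbox.controls.Bounds.
module bbox.controls {
  export class Bounds {
    constructor(public x: number, public y: number,
                public width: number, public height: number) {}

    // Simple hit test against the rectangle.
    contains(px: number, py: number): boolean {
      return px >= this.x && px <= this.x + this.width &&
             py >= this.y && py <= this.y + this.height;
    }
  }
}
```

Because the compiler knows the full path, any misuse of bbox.controls.Bounds anywhere in the codebase is a compile-time error.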

Over the course of development I found myself changing development strategies in terms of refactoring. I started to trust the compiler’s behavior to the point where I would intentionally change types, interfaces, and naming to break things in order to expose code affected. I would then correct all lines from the compiler output and refactor as needed. In many ways this has allowed me to work effectively within a larger codebase.

As for development environment, I went with Sublime Text 2 plus TypeScript syntax highlighting, while the other developers on the team opted for JetBrains. I found Sublime to be a great environment for TypeScript; even without code completion, I had zero complaints. Given that the compiler can provide incremental compilation and richer IDE integration, I think it will only be a matter of time before we see far more advanced TypeScript tooling in the Sublime/Edge/JetBrains offerings.

While I loved working with TypeScript (and will continue to do so), there is one escape hatch that every developer using it should understand. There are times when you will butt heads with the compiler, and it will block your attempts to call a method or variable because typing information is unavailable. When this shows up, we found that associative array syntax would unblock the issue until we could fix things up. For example, foo['myProperty'] or foo['myMethod']() will let you access myProperty or myMethod on foo regardless of typing information. I know it sounds odd, but keep associative array syntax in your back pocket; you will need it at some point.

Summary

TypeScript has been a joy to work with over the past 7 months. Having had experience with types/interfaces in ActionScript/ES4, I took to TypeScript very rapidly as it supported the structure I needed, yet maintained the elegant flexibility of JavaScript. In many ways I know our choice in using the compiler really moved our project forward in both delivery date and in code quality. I cannot say I have ever been a Microsoft fan but TypeScript has ‘softened’ me, it is easily one of the best web technologies to arrive in the past 3 years. I am looking forward to working on more projects with it and evolving with the language/compiler ongoing.

This week I will be onsite at Build in San Francisco learning about generics in the 0.9 build and next month I will be speaking on TypeScript at both Senchacon and 360Stack.

I have been exploring Websocket (RFC 6455) recently and wanted to share my thoughts on this HTML5 addition to web technologies. Websocket provides an API and an HTTP upgrade mechanism to take a standard HTTP request and upgrade it to a bi-directional socket connection. The standard includes a simple JavaScript API to instantiate a new socket, listen for open, message, and close events, and send messages programmatically.

Port 80/443 – The first barrier with any network communication is firewall traversal. Port 80 (HTTP) and port 443 (HTTPS) are typically open both ways in firewalls. Although Websockets can operate on custom ports, they default to port 80/443 depending on the protocol selected. The key is that HTTP has always used TCP/IP sockets to communicate. When you make a URL GET/POST/HEAD request, you make a TCP/IP socket connection to the target server and send a header. The server responds and closes the connection, unless it is using HTTP/1.1 persistent connections, which keep the socket open across multiple requests.
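The upgrade itself is just headers on a normal HTTP request. This exchange is the sample handshake from RFC 6455:

```http
GET /chat HTTP/1.1
Host: server.example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

The Accept value is a SHA-1 hash of the client key plus a fixed GUID, proving the server actually speaks Websocket; after the 101 response, the same TCP connection carries framed messages in both directions.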

2 Way – Websockets allows the server or client to send messages at any time without requiring a request. After the initial HTTP handshake, you can send data to the server and the server can send data to the client. The key is that a request does not imply the server will respond and thus there are 2 independent channels of messages.

I tested Websocket using the Play (Java) and Tornado (Python) frameworks and found it trivial to get up and working quickly. Both frameworks provide a simple means of defining a route to serve the Websocket handshake and expose handlers for socket events. Given the symmetry of the Websocket, the client and server APIs are very similar: in both frameworks you implement a Websocket class and override its open, message, and close callbacks.

There are some interesting service possibilities with Websockets, allowing clients to query a services layer over a persistent socket connection. The hard part with Websockets is scalable server architectures, where messages must traverse multiple servers between multiple clients, although this is a very solvable problem with messaging libraries like ZeroMQ and server-to-server socket connections.

Try Websockets out! Here is a full Websockets server written in Tornado Framework in Python. It is a single file and should be fairly easy to run once you install the Tornado libraries into Python locally. Instructions for installation are located within the file itself.