
Introducing Atmosphere, a new framework for building portable Comet-based applications. Yes, portable, which means it can run on Tomcat, Jetty, Grizzly/GlassFish or any web server that supports Servlet 2.5 … and without the need to learn all those private APIs floating around…

Currently, writing a portable Comet application is impossible: JBossWeb has AIO, Tomcat has a different AIO API, Jetty has its Continuation API and pre-Servlet 3.0 API support, Grizzly has its Comet Framework and Grizzlet API, etc. So frameworks like DWR, ICEFaces and Bindows all added native support and an abstraction layer in order to work with the different Comet APIs. Worse, if your application uses those APIs directly, then you are stuck with one web server. Not bad if you are using Grizzly Comet, but if you are using a competitor, then you can never meet the Grizzly!

The current Servlet EG is working on a proposal to add support for Comet in the upcoming Servlet 3.0 specification, but it may take ages before the planet fully supports the spec. And the proposal will contain only a small subset of the features some containers already support, like asynchronous I/O (Tomcat, Grizzly), a container-managed thread pool for concurrently handling push operations, filters for push operations, etc. Needless to say, with Atmosphere, frameworks will no longer have to care about native implementations, but can instead build on top of Atmosphere. Protocols like Bayeux will come for free, and will run on all web servers by using their native APIs under the hood.

So I’m launching Atmosphere, hoping to close the gap and simplify the creation of Comet-based applications, building on the experience/feedback I have gathered over the last two years with the Grizzly Comet Framework. Atmosphere is a POJO-based framework using Inversion of Control (IoC), trying to bring Ajax Push/Comet to the masses! Atmosphere builds on top of Jersey and the Grizzly Comet code. Now, I have to be honest: the project is just starting (I got into some trouble internally since I leaked the information :-)) and it might take a couple of months before I can support all web servers. What I’m targeting is to evolve the Grizzlet concept and make the programming model really easy. So far, what I have looks like this:
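As a rough illustration of that programming model (a GET suspends the response, a POST pushes data to every suspended response), here is a minimal, self-contained sketch. Everything in it is hypothetical: the class and method names are my own, and in-memory buffers stand in for real HTTP connections, so the actual Atmosphere API may look quite different.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Hypothetical sketch of the programming model: GET suspends, POST pushes.
// StringBuilders stand in for the response streams of suspended connections.
public class ChatSketch {

    // Each suspended GET is represented by a buffer we can later push to.
    private final List<StringBuilder> suspended = new CopyOnWriteArrayList<>();

    // GET: "suspend" the response, i.e. keep the connection open for pushes.
    public StringBuilder onGet() {
        StringBuilder connection = new StringBuilder();
        suspended.add(connection);
        return connection;
    }

    // POST: blindly write ("push") the data to every suspended response.
    public void onPost(String message) {
        for (StringBuilder connection : suspended) {
            connection.append(message);
        }
    }
}
```

Calling onPost("hello") after two onGet() calls appends "hello" to both suspended buffers, which is the blind write the Chat example performs.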

The example above is of course quite simple, but it demonstrates the goal I have with Atmosphere: make it easy for anybody to write Comet applications. The above is a ridiculously simple Chat application which suspends the request when a GET is sent, pushes data on POST, and just blindly writes the “pushed” data.

So, in the upcoming weeks I will start giving more examples and, more important, will push the code to the repository. I’m unfortunately distracted by other projects I’m working on, so this project might start slowly, but my goal is to build a strong community like I did with the Grizzly project, so the project evolves faster and is open to anybody…. Interested? Just sign up to the Atmosphere mailing list. I have plenty of work for anybody interested in participating!


An application server can get into really bad shape when a rogue application/component gets deployed into it. How do you prevent that situation using GlassFish Prelude? With the help of the bear, yes, you can keep those rogue animals in check…

Just in time for the upcoming v3 Prelude release, Alexey and I have added a feature that supports web application isolation. You can isolate rogue components/applications by allocating them a subset of the available threads or heap memory. I’ve already described the feature in the context of Ajax-based applications deployed in GlassFish, but this time you can apply the same technique to shield applications from others’ bad behavior. OK, but under which circumstances would you want to do that? Well, there are several situations where you don’t want your application to be affected by other deployed applications:

Delayed response: when GlassFish is under load, you want to make sure your application will never be delayed by other applications that are doing expensive calculations

All-thread deadlocks: an application using JDBC might eventually eat up a significant number of GlassFish’s WorkerThreads because of a slow remote database. Those threads might all end up in a deadlocked state, waiting for a response from the remote database. Worse, every thread can end up blocked, leaving no thread available for servicing incoming requests

Let’s recap, from a previous blog, how it usually works in GlassFish when a request comes in. When requests come in, the Grizzly HTTP module, on top of which Prelude is built, puts them into a queue (see below)

When a WorkerThread becomes available, it gets a request from the queue and executes it. When no thread is available, requests wait in the queue to be processed (in red below)
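The queue-and-worker mechanism just described can be sketched in a few lines. The names here are illustrative, not Grizzly’s actual classes; a real container also handles timeouts, keep-alive and I/O events.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative sketch of the Grizzly dispatch loop: requests are queued,
// and each WorkerThread takes the next one as soon as it is free.
public class WorkerPoolSketch {

    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();

    public WorkerPoolSketch(int workers) {
        for (int i = 0; i < workers; i++) {
            Thread t = new Thread(() -> {
                try {
                    while (true) {
                        // Blocks while no request is available; if all workers
                        // are busy, requests pile up in the queue instead
                        // (the "red" state in the diagram).
                        queue.take().run();
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            t.setDaemon(true);
            t.start();
        }
    }

    // Called by the HTTP front end: enqueue at the tail (plain FIFO).
    public void offer(Runnable request) {
        queue.offer(request);
    }
}
```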

Now, the normal behavior is to place the request at the end of the queue so every connection (or user) is serviced equally/fairly. Independently of how the request is executed, an application that needs to update its content in real time (or very fast) might face a situation where its request is placed at the end of the queue, delaying the response from milliseconds to seconds. Hence, the usability of the application might suffer significantly if the server comes under load and the queue grows very large, or if a rogue/slow application has already reserved the majority of the threads.

One solution is to isolate your application from the rogue applications. How? By examining incoming requests and assigning them to priority queues. Being able to prioritize requests can significantly improve the usability of an application and prevent rogue applications from affecting its environment. Why? Because with resource isolation, you can make sure that specific requests will always be executed first and never sit in the shared queue, by either being placed at the head of the queue or by being handled by a dedicated queue:

As an example, requests taking the form /myApp/realTime might always get executed before requests to /rogueApp/
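That “head of the queue” behavior can be sketched with a priority queue that sorts a configured URI prefix ahead of everything else, while an arrival sequence number keeps FIFO fairness among requests of the same priority. The class name and the hard-coded prefix are illustrative only:

```java
import java.util.Comparator;
import java.util.concurrent.PriorityBlockingQueue;

// Illustrative sketch: prioritized URIs sort ahead of all other requests;
// within the same priority class, arrival order (seq) preserves fairness.
public class PriorityQueueSketch {

    record Request(String uri, long seq) {}

    private long seq;
    private final PriorityBlockingQueue<Request> queue =
        new PriorityBlockingQueue<>(16,
            Comparator.comparingInt(
                    (Request r) -> r.uri().startsWith("/myApp/realTime") ? 0 : 1)
                .thenComparingLong(Request::seq));

    // Enqueue an incoming request; synchronized so seq stays monotonic.
    public synchronized void offer(String uri) {
        queue.offer(new Request(uri, seq++));
    }

    // A WorkerThread picks the highest-priority request, not the oldest one.
    public Request next() {
        return queue.poll();
    }
}
```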

Want to try it? Then download GlassFish v3 Prelude and do the following (you can use the admin-gui if you don’t want to edit the file manually):

In the example above, Grizzly will reserve 50% of the threads for requests taking the form /yourApp/requestURI1, 30% for /yourApp/requestURI2 and the remaining threads for all other incoming requests. Technically, this means three queues will be created and Grizzly will dispatch requests to them based on the request URI
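The dedicated-queue variant of that rule can be sketched by partitioning one pool of worker threads per URI prefix. This is not GlassFish’s actual configuration mechanism or Grizzly’s class names, just a standalone illustration of the 50/30/20 split:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative sketch: each URI prefix gets its own queue/thread pool,
// so a rogue application can only exhaust its own share of the threads.
public class UriDispatcherSketch {

    private final Map<String, ExecutorService> pools = new LinkedHashMap<>();
    private final ExecutorService defaultPool;

    public UriDispatcherSketch(int totalThreads) {
        // 50% for /yourApp/requestURI1, 30% for /yourApp/requestURI2,
        // the remaining 20% for everything else.
        pools.put("/yourApp/requestURI1",
                Executors.newFixedThreadPool(totalThreads * 50 / 100));
        pools.put("/yourApp/requestURI2",
                Executors.newFixedThreadPool(totalThreads * 30 / 100));
        defaultPool = Executors.newFixedThreadPool(totalThreads * 20 / 100);
    }

    // Pick the pool whose prefix matches the incoming request URI.
    public ExecutorService poolFor(String requestUri) {
        for (Map.Entry<String, ExecutorService> e : pools.entrySet()) {
            if (requestUri.startsWith(e.getKey())) {
                return e.getValue();
            }
        }
        return defaultPool;
    }
}
```

With this in place, a flood of /rogueApp/ requests fills only the default pool’s queue, and /yourApp/requestURI1 requests keep their reserved workers.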

In conclusion, being able to isolate rogue applications/components based on policy rules (here, request-based rules) can significantly improve the performance of your application. Have doubts? Just try it :-)