A blog mainly about Java

O Application Servers, Where Art Thou?

With Java EE 6 and GlassFish v3 out, it is time to take a little break and look at the application server world. J2EE 1.2 was created in 1999, 10 years ago. The application server market at the time was completely different from the one today: there were Weblogic and Websphere, a few other proprietary application servers (not following J2EE), and no open source application server. Today, it is a completely different story.

The application server world has changed so much in the last 10 years, but people still have in mind the heavyweight server that takes a huge amount of RAM and disk space and takes ages to start (making it completely useless for agile developers who need to test and code quickly and often). So let’s focus on some application servers (Geronimo, GlassFish, JBoss, Jetty, JOnAS, Resin, Tomcat, Weblogic and Websphere) and check some parameters.

The benchmark

Disclaimer: this is not a real benchmark!

In this little test I’m just concerned with the usability of an application server for a developer. The idea is to download it, install it, start it and take some measurements (size of the download, ease of installation, size on disk, startup time, size in RAM…). That’s all. No deployment of an application, no fancy tricks to gain performance. Because some of these application servers are resource consuming, I’m doing all my tests on a Windows XP virtual machine (running on VirtualBox 3.1). It is a fresh install of Windows XP SP3 with 1 Gb of RAM and not much software installed (I did have to install Avast, though). So when I boot there are 27 processes running and using 230 Mb of RAM (leaving 770 Mb free). Virtualizing can be slower, so keep in mind that startup times can be a bit faster than what I’m giving you here. I use JDK 1.6.0_17 (when it’s not bundled with the server). No optimization at all is made (I haven’t tweaked the JVM or any application server parameter; everything comes out of the box).

To calculate the startup time, I don’t do any fancy rocket science either. I just start the server a few times, check how long it takes and keep the best startup time. Also remember that some servers do not load any container at startup, making them very fast to start, while others load everything (making them slower). Again, I have to point out that I’m not deploying an application, so it’s really starting up the server from a fresh install. To calculate the memory footprint, I just use the Windows task manager and check the size of the java.exe process.
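If you want to script the timing instead of eyeballing the console, something like the little class below could do the job: it polls the server’s HTTP port until a connection is accepted and reports the elapsed time. This is just an illustration, not the method I actually used for the numbers in this post, and the port (8080) is an assumption you’d adjust per server.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Rough startup timer: poll a TCP port until the server accepts a
// connection, then report the elapsed time in milliseconds.
public class StartupTimer {

    // Returns elapsed ms until the port accepts a connection, or -1 on timeout.
    public static long waitForPort(String host, int port, long timeoutMs) {
        long start = System.nanoTime();
        long deadline = start + timeoutMs * 1000000L;
        while (System.nanoTime() < deadline) {
            try (Socket s = new Socket()) {
                s.connect(new InetSocketAddress(host, port), 250);
                return (System.nanoTime() - start) / 1000000L;
            } catch (IOException retry) {
                // Server not up yet; wait a bit and try again.
                try {
                    Thread.sleep(100);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return -1;
                }
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        // Assumes the app server listens on localhost:8080 (adjust as needed).
        long ms = waitForPort("localhost", 8080, 120000);
        System.out.println(ms < 0 ? "timed out" : "server up after " + ms + " ms");
    }
}
```

Start the script right after launching the server’s start command; the best of a few runs gives a number comparable to the ones below.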

Disclaimer bis: some of these servers were completely unknown to me, so I might not have been completely accurate. Feel free to leave a comment.

Disclaimer ter: and again, this is not a real benchmark, so don’t use it to sell any product to your customer ;o) I just wanted to give an overview of most application servers and some numbers.

Conclusion

There is no real conclusion you can draw from these tests, just some hints and numbers to show you that things are moving. Comparing Websphere with Tomcat is like comparing apples with bananas. Of course Tomcat is way faster and lighter than Websphere, but it doesn’t do the same thing. Some application servers are minimalist and only ship a servlet container; others come with integrated portals and so on. So, of course, the startup times are different.

All that to say that the figures in the table below have to be read with respect to what each application server does (remember that Tomcat and Jetty are just servlet containers). In red the least efficient, in orange the second least efficient, in green the most efficient, in blue the second most efficient.

App server    | What you get          | Size on drive | Startup time | Process size
Geronimo 2.x  | Java EE 5             | 101 Mb        | 17.6 s       | 118.8 Mb
GlassFish 2.x | Java EE 5             | 128 Mb        | 12 s         | 96.3 Mb
GlassFish 3.x | Java EE 6             | 90.4 Mb       | 5.6 s        | 76.2 Mb
JBoss 5.x     | Java EE 5             | 151 Mb        | 1 min 17 s   | 242.7 Mb
JBoss 6.x     | Java EE 6             | 180 Mb        | 59.7 s       | 225.2 Mb
Jetty 6.x     | Servlet 2.5           | 69.3 Mb       | 7.1 s        | 26.8 Mb
Jetty 7.x     | Servlet 3.0           | 2.8 Mb        | 1240 ms      | 26.9 Mb
JOnAS 5.x     | Java EE 5             | 148 Mb        | 15.3 s       | 73.6 Mb
Resin 3.x     | Servlet 2.5           | 26.6 Mb       | 1752 ms      | 75.1 Mb
Resin 4.x     | Java EE 6 Web Profile | 28.2 Mb       | 1640 ms      | 49 Mb
Tomcat 6.x    | Servlet 2.5           | 9.8 Mb        | 591 ms       | 23.3 Mb
Weblogic 10.x | Java EE 5             | 1.2 Gb        | 9 s          | 126.2 Mb
Websphere 7.x | Java EE 5             | 1.16 Gb       | 47.2 s       | 133 Mb

Java EE 5

If you only concentrate on the Java EE 5 implementations, you get the sub-table below:

App server    | Size on drive | Startup time | Process size
Geronimo 2.x  | 101 Mb        | 17.6 s       | 118.8 Mb
GlassFish 2.x | 128 Mb        | 12 s         | 96.3 Mb
JBoss 5.x     | 151 Mb        | 1 min 17 s   | 242.7 Mb
JOnAS 5.x     | 148 Mb        | 15.3 s       | 73.6 Mb
Weblogic 10.x | 1.2 Gb        | 9 s          | 126.2 Mb
Websphere 7.x | 1.16 Gb       | 47.2 s       | 133 Mb

When you read this table there are things you would not have expected. For example, JBoss is the slowest server to start up and Weblogic is the fastest. JBoss is also the one with the biggest memory footprint. Websphere has the second least efficient score in every category, while GlassFish has the second most efficient scores. JOnAS, on the other hand, has the smallest memory footprint.

Again, I haven’t deployed any application, used the administration console extensively, created clusters or done any fancy performance tests. But I wanted to show you that a full Java EE 5 server can start in 9 seconds or take only 73 Mb of RAM. With Java EE 6 and its Web Profile coming, application servers have to become more modular and carry less weight. GlassFish v3 is a good example as it starts up in only 5.6 seconds, and Resin 4 in 1640 ms. Application servers today are not what they were when they were created 10 years ago. Things have changed: Java EE 6 has become lighter, and so have the application servers.

PS: if I have omitted an application server that you (and I) think is important, let me know and I’ll give it a try and add it to the list. If you have any information to report, feel free to leave a comment.

I would be curious to know if the measurements remain alike when you deploy a simple test application (servlet or Java EE, respectively) that uses some basic features (stateful session bean, JMS, JPA). I think deploying an application would reveal more interesting details.

Thank you Antonio for these data, but I’m with Tasha here. Without deploying a business application (as opposed to the admin console), I think it’s hard to interpret anything and to conclude. Moreover, I’m not even sure the startup times of “empty” app servers are comparable. Some of them have an admin console, some others don’t, and some even “lazy load” it (e.g. WebLogic 10.x). So I’m very impatient to read your next posts 🙂

@pascal To a degree, I think Antonio’s tests do deliver an important point. Pretty much every Java EE application server vendor talks about modularity. These tests give some insight into each vendor’s implementation (given Antonio’s admitted constraints), and how the modularity benefits the end user versus benefiting the vendor (a product’s internal architecture). I also completely agree with your point that including deployed applications is more relevant.

I agree with Alexis: it would have been a little more accurate to compare, on the one hand, only the Web Profile (for Java EE 6 containers) or their equivalents (Tomcat, Resin 3, …) and, on the other hand, complete Java EE containers.

Well Antonio, you should be happy. At least that means you’ll be able to sell your book at every Devoxx for the next 5 years ;).

More seriously, I’m a bit surprised about JBoss. I guess that’s because they turn on too many services by default (even though you haven’t taken the “all” profile).
It would be interesting to add a column for the catalog prices into your table (for the commercial servers of course).

Thanks. If you are going for non-Java EE containers, you may (depending on time/ambition) also look at the Grizzly-only web container. I need to check on the status of their servlet container, but that might be the closest approximation to Jetty and Tomcat.

Yes, that’s what I’ve said, but I’ll be more precise. App servers come in different flavours (load everything at startup or load on demand). I’ll write another post where I’ll deploy a web app and an enterprise app and take some measurements.

What if the web container is not required for an app? What if it is a JRuby app? EJB batch application? Agreed that this is an apples and oranges comparison, but the fact that GlassFish doesn’t *require* a web container to start speaks to the benefits of the GlassFish v3 architecture. I agree with other threads that deploying an app would yield more relevant results. I’d also like to see a JRuby app deployed across all appservers (using the native container on GlassFish v3).

Your comparison reminds me of the old days when JBoss was sized for developers. That was in the early months of our century (2000, 2001): WAS took about 4 minutes to start and JBoss only 20 seconds. Times have changed, and JBoss’s start time is now one of the worst!

I think you should add another piece of information to your comparison: does the app server start the JVM in client or server mode? If it starts in server mode, switching to client mode in the start script could improve the start time.

I think the memory footprint you give is not meaningful without knowing the JVM parameters, such as (for the Sun JVM, for example):
-Xms
-Xmx
-XX:PermSize=
-XX:MaxPermSize=
On the other side, the startup time (and the memory footprint) depends on what you load at startup:
autoload mechanism
load-on-demand mechanism
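A quick way to see this effect (illustrative code, nothing from Antonio’s setup): print what the JVM itself reports about its heap, then run the same program with different -Xms/-Xmx values and compare.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Shows that the heap limits set with -Xms/-Xmx (and the permanent
// generation flags on pre-Java-8 Sun JVMs) drive the process size
// at least as much as the server code itself does.
public class HeapReport {
    public static void main(String[] args) {
        MemoryMXBean mx = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mx.getHeapMemoryUsage();
        System.out.printf("heap init:      %d Mb%n", heap.getInit() / (1024 * 1024));
        System.out.printf("heap committed: %d Mb%n", heap.getCommitted() / (1024 * 1024));
        System.out.printf("heap max:       %d Mb%n", heap.getMax() / (1024 * 1024));
        // Run once with -Xms64m -Xmx64m and once with -Xms512m -Xmx512m:
        // the committed figure changes with the flags even though the
        // application is identical, which is roughly what the Windows
        // task manager measurement picks up.
    }
}
```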

I don’t really get it…
What’s the point of doing such a benchmark???
Who cares about a 1 min boot or 0.00005 s for a “server” which is, in the end, supposed to be running for months…
This answer will not be acceptable: “because the developer needs to start/stop his app/server often when doing tests”.

The answer you rejected would be an excellent one. I don’t know exactly where, but an article was recently written (to illustrate the relevance of JRebel, actually, I think) that shows the high cost of app server restarts during development.

As you noticed, these servers are “supposed to be running for months”; actually, it is a quality flaw not to handle the case where they don’t run that long.

On the other hand, one only has to look at the comments to understand that this benchmark is of interest to a lot of people.

I agree with Seb that production application servers are not chosen by companies with the developer’s reboot time in mind. But, as Benoit says, I usually work with customers who have two different platforms. For example, Windows and Glassfish in development, and Linux and Websphere in production. That’s a really common pattern. Then, what you have to do is integrate really often so you come across incompatibility issues as soon as possible.

I have seen clients where this pattern was the beginning of the end 🙂 .

Once you have proven that a cheap Tomcat or Glassfish fits your application’s needs, someone may ask why a more expensive big Java EE server is used in production. The operations team then needs strong arguments to justify the investment 🙂 .

It’s not that big Java EE servers are useless. It’s just that if you don’t use their specific features, it may be worth looking at a cheaper and simpler solution.

I understand your point, but dev teams influence middleware selection, and dev teams like fast restarts to gain productivity.

I have been working for more than 7 years with WebSphere, and I remember we criticized the middleware because our client didn’t buy enough RAM for our desktops and the WebSphere IDE (WSAD, then RAD) was really slow.

Were these criticisms fair to the middleware? I don’t think so, because the root problem was that the infra guys didn’t align desktop power with the WebSphere IDE prerequisites. These guys didn’t admit that choosing a big J2EE container also causes big costs on the testing and dev environments.
At the end of the day, many dev teams had a bad experience with WebSphere and influenced their organizations to choose another product.

@Benoît,

The restart duration is not very important for me as long as it is shorter than something like 5 minutes because, in a clustered architecture, nodes are restarted one after another and the service is not interrupted.

I currently have 3-node clusters in production for a high-volume web site. When we deploy a new version of the application, we stop one node, replace the war, restart, test with a wget and, once it is OK, go to the next node. We always have at least 2 nodes to serve the requests.
OK, it is Tomcat, so the middleware start time is negligible in comparison to the application start time 😉 .
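For the curious, the “test with a wget” step of that rolling restart could be sketched in Java like this; the URL and timeouts are just examples, not our real setup.

```java
import java.net.HttpURLConnection;
import java.net.URL;

// Smoke test for a restarted node: only put it back into rotation
// once its health URL answers with HTTP 200 (the Java equivalent
// of the wget check described above).
public class NodeCheck {

    public static boolean isUp(String url) {
        try {
            HttpURLConnection conn =
                (HttpURLConnection) new URL(url).openConnection();
            conn.setConnectTimeout(2000);
            conn.setReadTimeout(2000);
            return conn.getResponseCode() == 200;
        } catch (Exception e) {
            // Connection refused, timeout, bad status: node is not ready.
            return false;
        }
    }

    public static void main(String[] args) {
        // Hypothetical health URL; pass your own as the first argument.
        String node = args.length > 0 ? args[0] : "http://node1:8080/ping";
        System.out.println(node + (isUp(node) ? " is up" : " is down"));
    }
}
```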

I agree with your argument for the nominal case of a critical project with HA. But if you look at department-sized projects, a cluster won’t be used (essentially because of the IT administration cost). Then restart time is important.

On the other hand, on very critical 99.999% HA projects, a cluster is not enough to ensure HA. Remember a few years ago (2003 or 2004), when 75% of all the phone numbers of the French phone operator ByTel were migrated to pass through a single big clustered HLR.

For a reason that I am not aware of, one of the HLRs went down; all the traffic was then redirected to the other node of the cluster, and the same thing happened to the second HLR, which went down too.

For one day, no phone call was possible for 75% of ByTel’s clients.

The conditions are not those of a Java EE project, but it’s proof that clustering is not a miracle solution for critical projects.

I must admit that I have faced a few tricky problems with domino effects when restarting cluster nodes during peak hours; a very uncomfortable situation 🙂 .

Having a load balancer that can progressively add load to a restarted node is very valuable. Unfortunately, Apache mod_proxy_balancer doesn’t offer such a high-level feature even if the low-level features are there (primarily on-the-fly modification of the load factor) 😦 .

Another use case (for a short restart time in production): in the previous versions of the WAS Deployment Manager (I don’t know about the latest version), whose aim was to upgrade a webapp or EAR across a cluster of WAS nodes, the restart of all the nodes was launched without waiting for each node to be up to date.

And I’m not even talking about the re-routing management at the load balancer or the session affinity management. If you really want HA, you have an overhead with a great cost (in terms of IT and development). And sometimes, for less critical applications, it may be acceptable not to manage it and simply have a short restart time for the app server.

I personally think that all the best practices, the technologies and the hardware have a cost and must be seen globally in a project budget, and must be presented to the client, with their ROI.

And my opinion is still that the restart time of the app server, even in production, is one of the elements that must be taken into account.

It is rarely the main aspect, and it may be considered irrelevant in a project context, but it must be considered.

> “If I have omitted an application server that you (and I) think is important, let me know and I’ll give it a try and add it to the list”.

G-WAN v3.9 is currently being presented at the Oracle OpenWorld Expo in San Francisco. While v3.9 is not published yet, v3.3 already supports Java and could be tested here (v3.9 adds support for C#, a reverse proxy, etc.).