Antonio's Blog
https://antoniogoncalves.org
A blog mainly about Java
Java EE vs Spring Testing
https://antoniogoncalves.org/2018/01/16/java-ee-vs-spring-testing/
Tue, 16 Jan 2018 15:01:54 +0000

I’ve recently posted a Tweet about my day-to-day life. This Tweet said: “I’ve reached a point where I can test Spring code in a couple of minutes, and Java EE code in a couple of hours :o(“

I was a bit surprised to read some reactions. In fact, some people asked me to explain this Tweet a bit more… so here I am.

Who Am I? What Am I Doing?

First of all, it’s important to recap who I am and where I come from. I started working for BEA Systems in 1998; that’s when I discovered J2EE 1.2. In 1999 I was working for a big BEA Systems customer (think of a famous English airways company ;o) setting up their on-line ticketing. Yes, in 1999 I was doing micro-services (EJBs), with clusters of dozens of services talking to each other over a highly optimized binary protocol (no, not gRPC, but its ancestor RMI/IIOP), aggregating logs, setting up clusters, failover, sticky sessions, scalability and so forth. All that with J2EE (and yes, it was in 1999).

When I left BEA Systems I worked on and off with Java EE and Spring. I then wrote a few Java EE books, got involved in the JCP, gave a few talks mainly on Java EE, while I kept working at my customers on Java EE and Spring.

With the entire “Java EE 8 drama” (i.e. the specification wasn’t moving anywhere), the MicroProfile, the Java EE Guardians, EE4J… I decided to give myself a “Java EE break”. Since last year I’ve been working with Spring Boot and Angular (thanks to JHipster), two technologies I wasn’t very familiar with, and I have to say, I’m enjoying it!

So let me rephrase this: I’ve mostly worked with Java EE, but I haven’t touched a Java EE project for one year; instead, I’ve been working with Spring Boot. That’s why, when I got back to Arquillian recently, it was a bit painful.

Testing

Let’s be plain here: when your code is executed in a managed environment (Java EE, Spring…), unit tests are (mostly) useless! I have lived this for so long in my projects that, as a joke, I launched the No Mock Movement back in 2012. Today, in the world of containers and micro-services, this is even more true. What is the percentage of your unit tests compared to integration tests? And yes, by unit tests I mean “testing in isolation”: no in-memory database, no in-memory web server, no in-memory message broker… just pure unit tests.

Unit tests in a managed environment are useless. Integration tests are useful, therefore, they should be easy to write.

Testing with Java EE

So, what do you do when you find out that unit tests in a managed environment are useless? You write integration tests. And in Java EE, you can choose between Arquillian (my post from 2012), the Payara APIs, the TomEE APIs… only proprietary APIs, actually. So when I read this Tweet I was a bit surprised:

“Proprietary specless”. Interesting. In 2013 I wrote a post on a Java EE 8 Wishlist (basically, about what I wanted to see happening in Java EE 8). Despite the fact that not much from this wishlist has happened, this is what I wrote:

Integration tests should be easier to write. Arquillian already does a fantastic job for integration testing, but we could go further. If we manage to have a single container, and therefore a single container API, then it would be easier to write tests that start a container, lookup for components, check available services, invoke code, check that interceptors have been applied, that Servlets have been constructed, that a Singleton has been initialized at startup by the container…

There is no Java EE specification for managing a container through APIs. If you want to bootstrap the container to do some integration testing, you need to go proprietary (Arquillian, TomEE APIs, Payara APIs…)… like Spring.

I wrote Arquillian tests for years (talked about it, wrote about it). From the Tweets I received, it looks like it is easier to use TomEE or Payara. I have never used those so I can’t tell. But again, this shows that testing in Java EE is not standard.

Testing with Spring

In 2002, when Rod Johnson wrote his famous Expert One-on-One J2EE Design and Development book, chapter 3 was titled “Testing J2EE Applications”. Yes, in 2002, someone was already looking at how to test a J2EE application. And this same person created Spring, with testing in mind. If you look for the word “cluster” in the Java EE 7 specs (2013), you will only find it once. If today, in 2018, you look for the word “test” in all the Java EE 8 specs, you will not find it very often (once you exclude “test” in TCK references and “test” in if statements).

I actually committed a couple of integration tests using Spring Boot recently. There is not much to say when you look at the code. I have a Spring Boot application with a main class that looks like this:
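A minimal sketch of such a main class (the class and package names here are illustrative, not the ones from my actual project):

```java
package org.agoncal.sample;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// One annotation enables component scanning and auto-configuration
@SpringBootApplication
public class SampleApplication {

    public static void main(String[] args) {
        // Bootstraps the Spring container: no packaging or deployment step needed
        SpringApplication.run(SampleApplication.class, args);
    }
}
```

An integration test then just annotates a JUnit class with @RunWith(SpringRunner.class) and @SpringBootTest, and Spring Boot bootstraps the same application context around it.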

I won’t go into too much technical detail, but if you compare with the code I wrote for my Arquillian post in 2012, testing with Spring Boot is way simpler. With Arquillian you need to package your classes into an archive with all their dependencies (introspecting the pom.xml), and then deploy this archive into a container. As I said to a friend back at Devoxx BE: “Arquillian will be great once we get rid of ShrinkWrap”. What this means is, I don’t want to have to package code to test it anymore (BTW, I love ShrinkWrap ;o) With Spring, behind the scenes it’s more or less the same (no packaging, but classpath scanning and bootstrapping the Spring container), but it’s way simpler.
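For comparison, the Arquillian boilerplate I am referring to looks roughly like this (a hedged sketch; the repository class name is illustrative):

```java
import javax.inject.Inject;

import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.asset.EmptyAsset;
import org.jboss.shrinkwrap.api.spec.JavaArchive;
import org.junit.Test;
import org.junit.runner.RunWith;

import static org.junit.Assert.assertNotNull;

@RunWith(Arquillian.class)
public class BookRepositoryIT {

    // You must package the classes under test and their dependencies into an
    // archive before Arquillian can deploy it to the container
    @Deployment
    public static JavaArchive createDeployment() {
        return ShrinkWrap.create(JavaArchive.class)
                .addClass(BookRepository.class)
                .addAsManifestResource(EmptyAsset.INSTANCE, "beans.xml");
    }

    @Inject
    private BookRepository bookRepository;

    @Test
    public void shouldFindBooks() {
        assertNotNull(bookRepository.findAll());
    }
}
```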

Java EE is Dead

No, it’s not a Java EE Game Over for me (more on this blog in a few weeks ;o) but I have to say that I am enjoying my new development life around the JHipster microservice architecture (Spring Cloud + Angular + Netflix OSS). So I’ll stick to it for now, and I’ll keep a close eye on the MicroProfile. And I really hope that EE4J / MicroProfile will make sure to have a specified API to control the container (à la WildFly Swarm), so integration testing will be easy and portable.

Conclusion

Integration tests are important in a managed environment. Even if they are difficult to maintain, they should be as easy to write as possible. Integration tests in Java EE are complex, not portable, still rely on packaging (when using Arquillian), and on a running container (even if Arquillian can use a managed container). Spring was created with testing in mind, so integration tests were already easier to write years ago. Today, with Spring Boot, it is easier than ever.

Coming back to the topic of this post: I’ve worked with Spring Boot for more than a year now, so I can write a Spring Boot test in a couple of minutes. Recently I had to use Arquillian. Because I hadn’t used it for a year, and due to the packaging and deployment complexity, it took me a couple of hours.

Configuring the Asciidoctor Maven Plugin
https://antoniogoncalves.org/2017/08/22/configuring-the-asciidoctor-maven-plugin/

AsciiDoc is a great way to write technical documentation. It is text based, can be committed and versioned in your VCS with your project, has a rich syntax, has a huge ecosystem, integrates with several tools (such as PlantUML, which I love) and, if there is still something missing, you can use extensions or create your own. And if you use the asciidoctor-maven-plugin to automatically generate all your documentation, you end up customizing it.

In this post I explain a few configurations that I use in the asciidoctor-maven-plugin.

Including remote code in your documentation

If, like me, you code and write documentation about your code with Asciidoctor, you might use the source code block with syntax highlighting. So, you copy/paste your code into an AsciiDoc [source] block, and you can even give it a code highlighter (here, coderay), a language (Java), a title and so on. Then, you realize that you don’t want to copy/paste code, you just want to point at the code itself. So, if the Java file Book.java is sitting on your drive, you use the include:: directive and pass it the relative or absolute path. If you don’t want to include the whole file, you can include just a portion of it by using tags (e.g. here the tag is called snippets).

== Using include with a local file
[source,java]
----
include::/Users/agoncal/Code/src/main/java/Book.java[tag=snippets]
----

This is much better than copy/paste, as you can change your code and it will get updated in your documentation once you regenerate it. Then, you realize that you don’t want to point at a file on your disk, but at a remote file on GitHub. Asciidoctor allows you to put a URI in the include directive:

== Using include with a remote file
[source,java]
----
include::https://raw.githubusercontent.com/agoncal/agoncal-book-javaee7/master/src/main/java/org/agoncal/Book.java[tag=snippets]
----

This will not work, because you need to allow Asciidoctor to access remote URIs by passing the <allow-uri-read/> attribute. For security reasons you can’t just add this attribute to the document itself (with a :allow-uri-read:); instead, you pass it in the pom.xml:
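In the asciidoctor-maven-plugin, attributes go in the plugin configuration; a sketch of the relevant fragment (plugin version omitted on purpose):

```xml
<plugin>
  <groupId>org.asciidoctor</groupId>
  <artifactId>asciidoctor-maven-plugin</artifactId>
  <configuration>
    <attributes>
      <!-- lets the include:: directive resolve remote URIs -->
      <allow-uri-read/>
    </attributes>
  </configuration>
</plugin>
```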

Including Diagrams

If, like me, you use PlantUML, you can either create your diagrams in separate .puml files and include them in your document, or embed them straight into your .adoc file. Asciidoctor has a nice integration with PlantUML. You can add a diagram straight into the document:

Notice the data-uri attribute. You can either set it in the pom.xml or straight into the document itself with :data-uri:. This allows you to embed the image into the HTML itself using the <img src="data:image/png;base64,..."> syntax.

But before using these extensions, you need to copy the Ruby files (here, under ${project.basedir}/src/main/asciidoc-extensions) and configure them in the pom.xml as follows (notice the allow-uri-read attribute that allows reading from external URIs, for Gist in this example):

Installing Gems

When you use extensions, you sometimes hit an error because a Gem is not installed. For example, if you need some Lorem Ipsum in your document, there is an extension for generating it: lorem-block-macro. But if you use it as explained above, you will get an exception because this extension needs the Middleman Gem. You can install this Gem in different ways… or let good old Maven do it.

We got the Gem into our Maven repository, but now we need to install it. For that we use the Maven Gem plugin and set it up in the Maven initialize phase. A simple mvn initialize will install all the needed Gems for Middleman in your target/rubygems directory:
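The setup might look like the following sketch (plugin version and directory layout are assumptions based on the usual gem-maven-plugin examples; adapt to your build):

```xml
<plugin>
  <groupId>de.saumya.mojo</groupId>
  <artifactId>gem-maven-plugin</artifactId>
  <version>1.1.5</version>
  <executions>
    <execution>
      <!-- install the Gems during the initialize phase -->
      <phase>initialize</phase>
      <goals>
        <goal>initialize</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <!-- Gems end up under target/rubygems -->
    <gemHome>${project.build.directory}/rubygems</gemHome>
    <gemPath>${project.build.directory}/rubygems</gemPath>
  </configuration>
</plugin>
```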

Then, it’s just a matter of using the Lorem Ipsum extension straight into your document:

== Lorem Extension (lorem-block-macro.rb)
lorem::sentences[5]

Conclusion

Because I am mostly Maven centric, I like to automatically generate documentation from Maven, so I use the asciidoctor-maven-plugin extensively. But I have to say, sometimes I get lost in the documentation trying to find what I need. The Asciidoctor examples don’t always give the Maven configuration, so you have to spend some time experimenting (check the asciidoctor-maven-examples).

Talks I Gave at Conferences and Meetups
https://antoniogoncalves.org/2017/07/03/talks-i-gave-at-conferences-and-meetups/
Mon, 03 Jul 2017 14:00:34 +0000

Here is the list of talks I gave at conferences and meetups (and I try to keep this list up to date):

With my friend Sebastien Pertus, we decided to create a 3-hour university session on “how an Angular front-end can communicate with an Enterprise Java MicroProfile back-end”. We spent a few months organizing the slides and the code, giving the talk at several conferences and JUGs… and here are the videos and slides, finally public.

In this talk, I play my own role (a back-end guy using Enterprise Java MicroProfile and exposing a JAX-RS API) and Sebastien plays the role of a front-end developer (TypeScript, Angular, consuming a REST API). We divided the talk into two parts:

The one thing I hate the most is wasting time on administrative tasks. When you have a company, you deal with customers, invoices, taxes, contracts, etc., and you end up spending a lot of time doing admin instead of your real work. What do you do? Well, you don’t have a choice. At the beginning it’s so scary that you print paper, sign it with a pen, scan it, send it via email, or worse, via post, and do it again and again… until you go: “Ok, let’s automate these boring tasks with code!”

In this blog post I show you how to generate a PDF with iText and how to send it via DocuSign so your partners can sign it electronically. All through Java code and APIs. This means you don’t have to touch Microsoft Word to create a document, nor log into DocuSign to send it to someone to be signed.

Generating a PDF with iText

PDF has been around for a long time (1990) and iText was created in 1999 to allow developers to generate PDF files using a Java API (iText was later ported to other languages). Today it has a dual license, and the latest version is iText 7 (announced in May 2016 and quite different from iText 5).

Maven Dependencies

First, let’s set up our Maven dependencies. iText 7 is divided into several Maven artifacts, so it just depends on what you need. Here is what I’ve used (I could have avoided kernel, as most artifacts depend on it):
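The dependency block looked something like this (the artifact split is the point here; the version number is illustrative):

```xml
<dependency>
  <groupId>com.itextpdf</groupId>
  <artifactId>kernel</artifactId>
  <version>7.0.2</version>
</dependency>
<dependency>
  <groupId>com.itextpdf</groupId>
  <artifactId>layout</artifactId>
  <version>7.0.2</version>
</dependency>
<dependency>
  <groupId>com.itextpdf</groupId>
  <artifactId>io</artifactId>
  <version>7.0.2</version>
</dependency>
```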

Synchronize PDF while Developing

Before coding with iText, let me give you a piece of advice. One annoying thing when generating a PDF is that it takes time to get it the way you want. So, you change your code, generate the PDF, open the file (this usually opens Adobe Reader), look at it, close the PDF, and so on. I couldn’t find any way for Acrobat Reader to automatically reload the PDF once the file changes, so I looked around and discovered Skim (there are other solutions, but this one worked for me). Skim automatically synchronizes the PDF when the file changes.

I installed it, tweaked it a bit, and set the automatic reload (open Skim -> Preferences -> Synchro). Now I’m ready to code my PDF document.

Three pages to Generate

The contract I want to generate is three pages long. As you can see in the thumbnails below, these three pages have a few images, text, lists, tables, a footer, A4 portrait, A4 landscape… All this text and graphics is doable with the iText APIs.

I won’t go through all the code that generates these 3 pages (just have a look at the GitHub repo), but I want to highlight something important: the zone where the customer needs to sign:

If you look at the iText code below, this zone is modeled as a table. Basically, it is just a 3×2 table, where the labels name, date and signature are displayed on the left, with empty cells on the right. That’s all. There is no extra data to tell the customer where to sign: that is actually done with the DocuSign APIs.
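A hedged sketch of what such a 3×2 table looks like with the iText 7 layout API (the Document instance comes from the surrounding page-generation code):

```java
import com.itextpdf.layout.Document;
import com.itextpdf.layout.element.Cell;
import com.itextpdf.layout.element.Paragraph;
import com.itextpdf.layout.element.Table;

public class SignatureZone {

    // Adds a two-column table: labels on the left, empty cells on the
    // right that the customer will fill in through DocuSign
    static void addSignatureZone(Document document) {
        Table table = new Table(2);
        table.addCell(new Cell().add(new Paragraph("Name")));
        table.addCell(new Cell());
        table.addCell(new Cell().add(new Paragraph("Date")));
        table.addCell(new Cell());
        table.addCell(new Cell().add(new Paragraph("Signature")));
        table.addCell(new Cell());
        document.add(table);
    }
}
```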

What I usually do is go to the DocuSign web interface, upload a PDF, add what they call “tags” (basically, the zones on the PDF where the customer signs) along with the name and email address of the person signing the document, and click send. This sends an email to the customer, who opens it and signs the document electronically. When I need to send the same document to several people, I can upload a PDF that will act as a template. That saves a bit of time, but still, there’s a lot of time wasted.

I discovered recently that DocuSign has a set of APIs and even an SDK for Java. It allows me to send a document, but also to add tags to my generated PDF. So let’s see how to use DocuSign from Java code.

Creating a Developer Sandbox

The first thing you need to do to start using the DocuSign APIs is to create a developer’s sandbox (this way you are sure your documents are not legally signed while developing). Enter your name and email address, you then receive a validation email, log on and generate an integrator key.

The Integrator Key identifies your application and is also used as your client id during OAuth token requests. It looks like this:

4765b81-3558-4289-bf9d-e40977653c4

Maven Dependencies

The DocuSign SDK is quite easy in terms of dependencies: only one is needed.
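Something along these lines (the version number is illustrative):

```xml
<dependency>
  <groupId>com.docusign</groupId>
  <artifactId>docusign-esign-java</artifactId>
  <version>2.0.2</version>
</dependency>
```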

Adding Tags to the PDF

I won’t go through all the Java code used to send a PDF. Check the code; I hope it is easy to read. Basically, we Base64 encode the PDF file to put it inside a DocuSign envelope, add the recipients, and send the envelope. The interesting part is how to add tags in the needed zones.
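The Base64 encoding part is plain JDK code; a small sketch (the helper name is mine, not from the DocuSign SDK):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Base64;

public class EnvelopeDocuments {

    // DocuSign expects the raw document bytes Base64-encoded inside the envelope
    static String encodePdf(Path pdfFile) throws IOException {
        byte[] pdfBytes = Files.readAllBytes(pdfFile);
        return Base64.getEncoder().encodeToString(pdfBytes);
    }
}
```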

As you can see above, the DocuSign tags are the placeholders that will hold the information about the customer (name) and his/her signature (date of signature, and the signature itself). These tags cannot be added to the PDF itself with iText; you need the DocuSign API.

Below, the getTabFullName method sets the FullName tag at an X/Y position on the PDF (on page 1). Look at the code and you will see getTabDateSigned and getTabSignHere methods. Based on the same logic, these two methods add the date of the signature and the signature itself.
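A hedged sketch of a getTabFullName-like method with the DocuSign eSign Java SDK (the ids and coordinates are illustrative):

```java
import com.docusign.esign.model.FullName;

public class SignatureTabs {

    // Positions the FullName tag on page 1 of the first document;
    // the SDK expects all these values as Strings
    static FullName getTabFullName() {
        FullName fullName = new FullName();
        fullName.setDocumentId("1");
        fullName.setPageNumber("1");
        fullName.setRecipientId("1");
        fullName.setXPosition("100");
        fullName.setYPosition("700");
        return fullName;
    }
}
```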

Conclusion

Here I just gave you a simple example, but I use this technique for more complex things, such as resource bundles (so I can swap languages in the contract) or more advanced DocuSign tags. Basically, with the iText and DocuSign APIs, the sky is the limit ;o)

Administrative tasks are boring and they take a lot of our precious time. Generating PDFs with code is fun, and using the DocuSign APIs to legally sign documents is also fun.

In this blog post I’ll show you how to use the JJWT library to issue and verify JSON Web Tokens with JAX-RS endpoints. The idea is to allow an invocation when no token is needed, but also to be able to reject an invocation when a JWT token is explicitly required.

Let’s say we have a REST endpoint with several methods: methods that can be invoked directly, and methods that can be invoked only if the caller is authenticated. There are several ways to authenticate, authorize or encrypt REST endpoint invocations; some complex, some easier. Here I will use JWT, or JSON Web Token. The idea is that when authorization is needed, the caller needs to get a JWT token and then pass it around. I won’t go into too much detail on JSON Web Tokens as you can find plenty of resources. I just want to show you some code so you see how easy it is to set up with JAX-RS.

Use Case

In this example we have two REST Endpoints:

EchoEndpoint: this is just an echo endpoint with two methods: one accessible by everyone (echo), another one accessible only if you pass a valid JSON Web Token (echoWithJWTToken), meaning you identified yourself first using UserEndpoint

UserEndpoint: this endpoint returns information about the users of the application (the User JPA entity) but, more importantly, has a method to authenticate (authenticateUser) using login/password. Once authenticated, you get a JSON Web Token (and can then pass it around)

Securing an Invocation

Below is the code of the EchoEndpoint. As you can see, this basic JAX-RS endpoint has two GET methods, both returning a String:

one on /echo accessible by everyone

one on /echo/jwt, only accessible if the client passes a token. How do we check that the token is needed? Using the JWTTokenNeeded name binding and the JWTTokenNeededFilter (see below)

Filter Checking the JSON Web Token

The magic hides behind JWTTokenNeeded. Well, not really: it hides behind the JWTTokenNeededFilter. JWTTokenNeeded is just a JAX-RS name binding (think of it as a CDI interceptor binding), so it’s just an annotation that binds to a filter.

The filter itself is the one doing all the work. It implements ContainerRequestFilter and therefore allows us to check the request headers. Basically, when the EchoEndpoint::echoWithJWTToken method is invoked, the runtime intercepts the invocation and does the following:

Gets the HTTP Authorization header from the request and checks for the JSON Web Token (the Bearer string)

As you can see, the JJWT library is very simple: it checks that the token is valid in only one line. Validation is made against a key. Here I just use a String to make the example easy to understand, but it could be something safer, like a keystore.
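A condensed sketch of such a filter (the hard-coded key is for illustration only; a real application should load it from a safer place):

```java
import java.security.Key;

import javax.annotation.Priority;
import javax.crypto.spec.SecretKeySpec;
import javax.ws.rs.Priorities;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.core.HttpHeaders;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.Provider;

import io.jsonwebtoken.Jwts;

@Provider
@JWTTokenNeeded
@Priority(Priorities.AUTHENTICATION)
public class JWTTokenNeededFilter implements ContainerRequestFilter {

    // Illustrative String-based key, as in the example described above
    private final Key key = new SecretKeySpec("simpleSharedSecret".getBytes(), "HmacSHA512");

    @Override
    public void filter(ContainerRequestContext requestContext) {
        // 1) Get the HTTP Authorization header and strip the "Bearer " prefix
        String authorizationHeader = requestContext.getHeaderString(HttpHeaders.AUTHORIZATION);
        try {
            String token = authorizationHeader.substring("Bearer".length()).trim();
            // 2) The one-liner: JJWT throws if the signature or expiry is invalid
            Jwts.parser().setSigningKey(key).parseClaimsJws(token);
        } catch (Exception e) {
            // Missing or invalid token: reject the invocation with a 401
            requestContext.abortWith(Response.status(Response.Status.UNAUTHORIZED).build());
        }
    }
}
```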

Issuing a JSON Web Token

Ok, now we have a filter that checks that the token is passed in the HTTP header. But how is this token issued? The user needs to log in, invoking an HTTP POST and passing a login and password (here, the login and password are passed in clear text for the sake of simplicity, but this part should use HTTPS). Once authenticated, JJWT is used to create a token based on the user’s login and the secret key (the same key used in JWTTokenNeededFilter).

Notice that the JSON Web Token sets a few claims (in the issueToken method): a subject (the principal’s login), an issuer (the one who issued the token), an issued date, a signing algorithm and, very important, an expiration date for the token. Now the client is authenticated, and it has a token that it needs to pass to be able to invoke the Echo endpoint again.
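A sketch of what such an issueToken method looks like with JJWT (the issuer URI and expiry window are illustrative; the key is the same illustrative String-based key as in JWTTokenNeededFilter):

```java
import java.security.Key;
import java.util.Date;

import javax.crypto.spec.SecretKeySpec;

import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.SignatureAlgorithm;

public class TokenIssuer {

    // Illustrative key: in real code, share it safely with the filter
    private static final Key KEY = new SecretKeySpec("simpleSharedSecret".getBytes(), "HmacSHA512");

    static String issueToken(String login) {
        Date now = new Date();
        Date expiry = new Date(now.getTime() + 15L * 60 * 1000); // valid for 15 minutes
        return Jwts.builder()
                .setSubject(login)                      // the principal's login
                .setIssuer("https://example.org/users") // who issued the token
                .setIssuedAt(now)
                .setExpiration(expiry)                  // tokens must expire
                .signWith(SignatureAlgorithm.HS512, KEY)
                .compact();
    }
}
```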

Conclusion

In this blog I wanted to show you how easy it is to issue and validate a JSON Web Token with JAX-RS. Here I’m using the external JJWT library, as this is not standard in JAX-RS. I find JJWT easy to use, but you can find other libraries that do more or less the same (jose4j, Nimbus or Java JWT). I didn’t use any security for authentication (security is complex and not very portable in Java EE), so the login/password are not encrypted and no realm is set up.

Application Servers have changed a lot: consuming less memory, being faster at startup time… Now it’s time to change the way we package our applications: from Ears, to Wars, and now to executable Jars.

This is what I explained in the “Just Enough App Server” talk I gave at a few conferences lately. So if you want to know more about application servers today, you can watch my talk in English or in French, download the slides, or play with the code using WildFly Swarm.

Talk in English

Talk in French

Talk in Portuguese

Slides

Quicky on WildFly Swarm

“Micro Profile in Enterprise Java” Announced!
https://antoniogoncalves.org/2016/06/27/micro-profile-in-enterprise-java-announced/
Mon, 27 Jun 2016 17:44:30 +0000

The developers’ world is a mixture of evolutions and reinventing the wheel. When I was doing EJBs 1.0 back in 1998, I was doing Micro Services. But I had to wait until 2014 for someone to give it a name, and until 2016 to see it officially arrive in Enterprise Java. So here we are: a Micro Profile has been announced for Enterprise Java.

The initial Micro Profile is made of JAX-RS, CDI and JSON-P. And because it’s built on top of Java SE, we also have access to JAXB and JAXP when dealing with XML instead of JSON. And that’s it! No more specifications for now! We wanted to make it micro. Bean Validation and/or Concurrency could come along in the next version of the Micro Profile.

The Micro Profile Didn’t Come out of the Blue

As I said, the idea of Micro Services in Java EE comes from its genesis: the EJB component model was a micro component: just embed the needed business code, as small as you can, package it as a unit in a single jar, link it with other components/EJBs via RMI/IIOP, and the container will look after the rest. The EJB model was micro… not the container ;o)

But since then, Java EE containers have evolved. They have become lighter, smaller, faster… so it was time to have a Micro Profile: micro components running in a micro container.

And remember that Java EE is modular by design: Java EE is just a bunch of specifications, each evolving at their own pace. Just use the ones you need.

What’s next?

The Micro Profile has just been announced; there is still a lot of work to be done. First of all, it has to be implemented by the vendors (I’m sure we can expect a WildFly, WebSphere Liberty, Payara or TomEE Micro Profile very soon). Then it will have to be standardized by the JCP; being just a profile, this can be quick. And then more functionality and APIs can be added to this profile. We might think of a bootstrap API allowing the container to boot and execute the deployed Micro Services. And I’m sure a few extra specifications, APIs and patterns will be added in future versions.

To build the future of the Micro Profile, make sure to answer the survey and tell us if you are more interested in Startup time, Disk space, Memory, Uber-jar, Metrics, Circuit Breakers, Bulkheads, Reactive or Client-side load balancing.

So it’s up to you now. Help us optimize Enterprise Java for a microservices architecture by joining the MicroProfile Google Group.

Conclusion

If you already have Java EE expertise, if you know JAX-RS and CDI by heart, and if you need micro services in your architecture, jump into the Micro Profile. You will be able to quickly develop Micro Services re-using your Java EE expertise.

I’ll be writing a few technical blogs on how to develop Micro Services in Java EE. Stay tuned.

Thanks for your comments. I’ve added Undertow to the test (the “Servlet-Only Distribution” on the WildFly download page) and corrected some mistakes but, more importantly, I’ve changed the memory benchmark: I now take a memory usage measure at startup, then perform a GC and wait a bit for the memory to stabilize. This gives a nice min and max memory usage.

Damn, I’ve been waiting so long to publish this blog (which is the successor of the same post on Java EE 6 app servers). The idea is to do some “benchmarking” (basically startup time, disk and memory usage) on application servers implementing Java EE 7. Java EE 7 came out in May 2013 and we now have many application servers that have passed the TCK. So let’s see where we are in terms of Java EE 7 implementations and production support.

State of Java EE Application Servers

Before diving into each implementation, let’s quickly see what happened in the application server land in these last 3 years.

Geronimo is no longer supported and will not run the Java EE 7 race. Not sure about JOnAS, but it hasn’t been updated since. GlassFish is still developed by Oracle and still is the Java EE reference implementation, but no longer has production support; you will have to look into Payara, which is the supported version of GlassFish. Instead, Oracle is focusing on WebLogic, which has Java EE 7 production support. JBoss 7 EAP is still in Beta, but WildFly (the community edition of JBoss AS) has been supporting Java EE 7 since version 8. WebSphere now has a Liberty and a Classic version. JEUS, from TmaxSoft, and uCosminexus, from Hitachi, are still running the Java EE 7 race but are not as well known as the other app servers.

The main editors also have a Java EE 7 Web Profile implementation (GlassFish, Payara, WebSphere) and some only implement the Web Profile, such as Resin, Siwpas or TomEE. Last but not least, the Servlet containers also run the Java EE 7 race as they implement Servlet 3.1: Jetty, Tomcat or Undertow.

The Benchmark

Disclaimer: this is not a real benchmark!

The idea of this benchmark is to download a Java EE 7 application server, install it, start it, launch the admin console if any, and take some measures: size of download, ease of installation, size on disk once installed, startup time, memory usage… That’s all. I do not deploy any application, and I don’t do fancy twists to gain performance; I’m just concerned about the usability of an application server for a developer in 2016. I’m doing all my tests on a Retina Mac OS X (16 GB of RAM, SSD), no Docker! I use JDK 1.8.0_66 (when it’s not bundled with the server). No optimization at all is made (I haven’t tweaked the JVM or any application server parameter… everything comes out of the box).

To calculate the startup time, I don’t do any fancy rocket science either. I just start the server a few times, check the logs to see how long it takes, and use the best startup time. Also remember that some servers do not load any container at startup, making them very fast to start. That’s why I trigger the web admin console (when there is one), so I’m sure at least one web application is deployed. To calculate the memory footprint, I use JConsole and take two measures: memory used at server startup, and memory used after a few seconds (once the GC calms down a bit).

GlassFish 4.x

GlassFish 4 is the open source reference implementation for Java EE 7. Being the reference implementation, GlassFish 4.x implements both Java EE 7 and the Web Profile 7, that’s why you have two different bundles you can download. Oracle has dropped commercial support for GlassFish and Payara is now the supported version.

JBoss 7 EAP

Unfortunately JBoss 7 EAP is still not final at the time of writing this blog, so there is no commercial support at Red Hat yet. JBoss 7 EAP is based on WildFly, which is the community edition and evolves at a faster pace.

WildFly

WildFly is the community name for the JBoss application server. It has been supporting Java EE 7 since version 8.x. Both WildFly and EAP support the Full and the Web Profile, but there are no separate bundles, only one.

*WebLogic doesn’t show milliseconds in the logs, so I had to change the starting script to add a few date +%s%3N calls.

Websphere and WebSphere Liberty 8.5.x

Today IBM has two versions of WebSphere 8.5: Classic and Liberty, both implementing Java EE 7. As for the Classic version, you need to get lost in the IBM website maze, and ask for help on Twitter, to understand that you can’t install it on Mac OS X. So I just concentrated on WebSphere Liberty, which is a completely different beast: download, unzip, run. There is no administration console out of the box, though.

Java EE Web Profile 7 Application Servers

If GlassFish, Payara and WebSphere have two different distributions (Full and Web Profile), some application servers only implement the Web Profile. That’s the case for TomEE and Resin.

TomEE 7.x

Apache TomEE is the perfect success story for the Web Profile. TomEE is no more than Tomcat + OpenWebBeans + OpenEJB + OpenJPA + MyFaces + other bits. It really shows that Java EE is a jigsaw puzzle where you can take open standards, bundle them together and become a certified Web Profile application server. This TomEE version is bundled with Tomcat 8.0.29.

(*) To be able to log on to the admin console, you need to change the $TOMCAT_HOME/conf/tomcat-users.xml configuration file and add the manager-gui role to a user.

Resin 5.x

Resin was one of the first servlet containers (such as Jetty or Tomcat) to move to the Web Profile. Again, it’s another success story that confirmed that profiles in Java EE were needed. Resin is very CDI centric and based on the Caucho implementation called CanDI.

Servlet 3.1 Servers

You can see Java EE 7 application servers implementing everything (the Full profile), a bit (Web profile) or just the web server portion (Servlet 3.1). That’s the case of Tomcat, Jetty or Undertow. Unfortunately I do not benchmark Undertow because it doesn’t have a standalone installation: you need to start it up in your code.
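Bootstrapping Undertow from code is only a few lines; a minimal sketch of such an embedded server:

```java
import io.undertow.Undertow;
import io.undertow.util.Headers;

public class HelloUndertow {

    public static void main(String[] args) {
        // Undertow is embedded: build a server, attach a handler, start it
        Undertow server = Undertow.builder()
                .addHttpListener(8080, "localhost")
                .setHandler(exchange -> {
                    exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain");
                    exchange.getResponseSender().send("Hello from Undertow");
                })
                .build();
        server.start();
    }
}
```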

Summary

As a summary I will show you two graphs with startup time and memory consumption so it’s easier to compare.

Startup time

When you are a developer, startup time matters (even when you move your app server to the Cloud). Thanks to tools like JRebel, or hot deployment in our IDEs, we tend to restart our app servers less and less. But still, it’s important to have fast startup times. Something to stress is that most of the app servers bootstrap a minimum set of services and then lazy load services when needed.

From left to right we have Full Profile, Web Profile and Servlet containers. Of course, Servlet containers start faster. But it’s interesting to notice that most app servers boot in less than 3 seconds, and half in less than 2 seconds (remember that these measures are taken on a Mac OS X, 16 GB RAM, SSD). WebLogic is by far the slowest (> 8 seconds) and WildFly the fastest of the Java EE Full Profile app servers. TomEE 7 (Web Profile) is slightly slower than Tomcat (Servlet).

Memory Footprint

Memory consumption is also important. In this era of Micro Services, we want fast startup times and low resource consumption. Without any JVM or server tweaks, most app servers use less than 150 MB of RAM. Here I take a measure at startup, perform a GC, and wait a few seconds so memory stabilizes. So you get a min and a max memory usage.

Conclusion

First of all, as you can see there is no Websphere Classic. Despite being a big player, it is difficult to find, difficult to install, and has no Mac OS X support. I hope one day I’ll be able to add it to the benchmark.

What did I want to show in this blog? Well, that application servers have changed. Most app servers take up few resources, are modular, and have nice tricks to start up fast (like lazy loading services).

And why did I want to show this? Just to show that the “Tomcat is light, Java EE app servers are not” line is outdated. Use the app server that fits your needs. You need Servlets? Go for Jetty or Tomcat. You need some JAX-RS with CDI and JPA? Choose TomEE, WildFly or WebSphere Liberty. You need the full monty? Use WildFly, WebLogic or GlassFish. You need production support? Use JBoss EAP (you’ll have to wait a few extra months). You need to create Micro Services? Wait for my next blog ;o)