Sunday, December 28, 2014

We saw that JACC offered a clear, standardized hook into what's often seen as a completely opaque and container-specific process, but that it also had a number of disadvantages. Furthermore, we provided a partial (non-working) implementation of a JACC provider to illustrate the idea.

In this part of the article we'll revisit JACC by taking a closer look at some of the mentioned disadvantages and dive a little deeper into the concept of role mapping. In part II we'll be looking at the first element of a more complete implementation of the JACC provider that was shown before.

To refresh our memory, the following were the disadvantages that we previously discovered:

Arcane & verbose API

No portable way to see what the groups/roles are in a collection of Principals

No portable way to use the container's role to group mapper

No default implementation of a JACC provider active or even available

Mixing Java SE and EE permissions (which protect against totally different things) when security manager is used

JACC provider has to be installed for the entire AS; can not be registered from or for a single application

As it later on appeared though, there's a little more to say about a few of these items.

Role mapping

While it's indeed the case that there's no portable way to get to either the groups or the container's role to group mapper, it appeared there was something called the primary use case for which JACC was originally conceived.

For this primary use case the idea was that a custom JACC provider would be coupled with a (custom) authentication module that only provided a caller principal (which contains the user name). That JACC provider would then contact an (external) authorization system to fetch authorization data based on this single caller principal. This authorization data can then be a collection of roles or anything that the JACC provider can either locally map to roles, or something to which it can map the permissions that a PolicyConfiguration initially collects. For this use case it's indeed not necessary to have portable access to groups or a role to groups mapper.

Building on this primary use case, it also appears that JASPIC auth modules do in fact have a means to put a specific implementation of a caller principal into the subject. JASPIC being JASPIC, with its bare minimum of TCK tests, this of course doesn't work on all containers, and there's still a gap where the container is allowed to "map" that principal (whatever that means), but the basic idea is there. A JACC provider that knows about the auth module being used can then unambiguously pick out the caller principal from the set of principals in a subject. All of this would be so much simpler though if the caller principal was simply standardized in the first place, but alas.

To illustrate the basic process for a custom JACC provider according to this primary use case:
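The basic process can be sketched in code as follows. This is a minimal, runnable illustration, not the original diagram; ExternalAuthorizationSystem and all other names here are made up for the sketch:

```java
import java.security.*;
import java.util.*;

// Hypothetical interface to the external authorization system (illustrative name).
interface ExternalAuthorizationSystem {
    Set<String> getRolesForCaller(String callerName);
}

// Sketch of a custom JACC provider for the primary use case: pick out the caller
// principal, fetch its roles from the external system, then check the requested
// permission against the permissions the PolicyConfiguration collected per role.
class PrimaryUseCasePolicy extends Policy {

    private final ExternalAuthorizationSystem authorizationSystem;
    private final Map<String, PermissionCollection> rolePermissions;

    PrimaryUseCasePolicy(ExternalAuthorizationSystem authorizationSystem,
                         Map<String, PermissionCollection> rolePermissions) {
        this.authorizationSystem = authorizationSystem;
        this.rolePermissions = rolePermissions;
    }

    @Override
    public boolean implies(ProtectionDomain domain, Permission permission) {
        // The auth module only put a (single) caller principal into the subject.
        for (Principal caller : domain.getPrincipals()) {
            // Ask the external system which roles this caller has.
            for (String role : authorizationSystem.getRolesForCaller(caller.getName())) {
                // Check the requested permission against the per-role permissions.
                PermissionCollection permissions = rolePermissions.get(role);
                if (permissions != null && permissions.implies(permission)) {
                    return true;
                }
            }
        }
        return false;
    }
}
```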

In the second use case the role mapper and possibly knowledge of which principals represent groups is needed, but since this JACC provider is the one that ships with a Java EE container it's arguably "allowed" to use proprietary techniques.

Do note that the mapping technique shown maps a subject's groups to roles, and uses that to check permissions. While this may conceptually be the most straightforward approach, it's not the only way.

Groups to permission mapping

An alternative approach is to remap the roles-to-permission collection to a groups-to-permission collection using the information from the role mapper. This is what both GlassFish and WebLogic implicitly do when they write out their granted.policy file.

The following is an illustration of this process.
Suppose we have a role to permissions map as shown in the following table:

Role-to-permissions

  Role      Permission
  "admin"   [WebResourcePermission("/protected/*", "GET")]

This means a user that's in the logical application role "admin" is allowed to do a GET request for resources in the /protected folder.
Now suppose the role mapper gave us the following role to group mapping:

Role-to-groups

  Role      Groups
  "admin"   ["admin-group", "adm"]

This means the logical application role "admin" is mapped to the groups "admin-group" and "adm".
What we can now do is first reverse the last mapping into a group-to-roles map as shown in the following table:

Group-to-roles

  Group           Roles
  "admin-group"   ["admin"]
  "adm"           ["admin"]

Subsequently we can then iterate over this new map and look up the permissions associated with each role in the existing role to permissions map to create our target group to permissions map. This is shown in the table below:

Group-to-permissions

  Group           Permissions
  "admin-group"   [WebResourcePermission("/protected/*", "GET")]
  "adm"           [WebResourcePermission("/protected/*", "GET")]

Finally, consider a current subject with principals as shown in the next table:

Subject's principals

  Type                                 Name
  com.somevendor.CallerPrincipalImpl   "someuser"
  com.somevendor.GroupPrincipalImpl    "admin-group"
  com.somevendor.GroupPrincipalImpl    "architect-group"

Given the above shown group to permissions map and subject's principals, a JACC provider can now iterate over the group principals that belong to this subject and via the map check each such group against the permissions for that group. Note that the JACC provider does have to know that com.somevendor.GroupPrincipalImpl is the principal type that represents groups.
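The worked example above can be expressed in a few lines of code. The following is an illustrative sketch (using stdlib RuntimePermission as a stand-in for WebResourcePermission so it runs outside a container; all names are made up):

```java
import java.security.*;
import java.util.*;

class GroupPermissionMapping {

    // Reverse a role-to-groups map into a group-to-roles map.
    static Map<String, List<String>> reverse(Map<String, List<String>> roleToGroups) {
        Map<String, List<String>> groupToRoles = new HashMap<>();
        roleToGroups.forEach((role, groups) ->
            groups.forEach(group ->
                groupToRoles.computeIfAbsent(group, g -> new ArrayList<>()).add(role)));
        return groupToRoles;
    }

    // Compose group-to-roles with role-to-permissions into group-to-permissions.
    static Map<String, Permissions> toGroupPermissions(
            Map<String, List<String>> groupToRoles,
            Map<String, ? extends PermissionCollection> rolePermissions) {

        Map<String, Permissions> groupPermissions = new HashMap<>();
        groupToRoles.forEach((group, roles) -> {
            Permissions permissions = groupPermissions.computeIfAbsent(group, g -> new Permissions());
            for (String role : roles) {
                PermissionCollection forRole = rolePermissions.get(role);
                if (forRole != null) {
                    // Copy every permission granted to the role over to the group.
                    Collections.list(forRole.elements()).forEach(permissions::add);
                }
            }
        });
        return groupPermissions;
    }

    // The provider's check: does any group of the subject imply the requested permission?
    static boolean implies(Collection<String> subjectGroups,
                           Map<String, Permissions> groupPermissions,
                           Permission requested) {
        return subjectGroups.stream()
            .map(groupPermissions::get)
            .anyMatch(p -> p != null && p.implies(requested));
    }
}
```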

Principal to permission mapping

Yet another alternative approach is to remap the roles-to-permission collection to a principals-to-permission collection, again using the information from the role mapper. This is what both Geronimo and GlassFish's optional SimplePolicyProvider do.

Principal to permission mapping basically works like group to permission mapping, except that the JACC provider doesn't need to have knowledge of the principals involved. For the JACC provider those principals are pretty much opaque then, and it doesn't matter if they represent groups, callers, or something else entirely. All the JACC provider does is compare (using equals() or implies()) principals in the map against those in the subject.
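The remapping can be sketched as follows. This is an approximation, not the original fragment, using the variable names the article describes (rolePermissions, principalRoleMapping, principalPermissions); everything else is made up:

```java
import java.security.*;
import java.util.*;

class PrincipalPermissionMapping {

    // Remap the role-to-permissions collection to a principal-to-permissions
    // collection, using the principal-to-roles information from the role mapper.
    // The principals stay opaque: they are only ever used as map keys here.
    static Map<Principal, Permissions> remap(
            Map<String, ? extends PermissionCollection> rolePermissions,
            Map<Principal, List<String>> principalRoleMapping) {

        Map<Principal, Permissions> principalPermissions = new HashMap<>();

        principalRoleMapping.forEach((principal, roles) -> {
            Permissions permissions =
                principalPermissions.computeIfAbsent(principal, p -> new Permissions());
            for (String role : roles) {
                PermissionCollection forRole = rolePermissions.get(role);
                if (forRole != null) {
                    // Copy every permission granted to the role over to the principal.
                    Collections.list(forRole.elements()).forEach(permissions::add);
                }
            }
        });

        return principalPermissions;
    }
}
```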

In the code fragment above rolePermissions is the map the provider created before the mapping, principalRoleMapping is the mapping from the role mapper and principalPermissions is the final map that's used for access decisions.

Default JACC provider

Several full Java EE implementations do not ship with an activated JACC provider, which makes it extremely troublesome for portable Java EE applications to just make use of JACC for things like asking whether a user will be allowed to access, say, a URL.

As it appears, Java EE implementations are actually required to ship with an activated JACC provider and are even required to use it for access decisions. However, there's no TCK test for this, so just as we saw with JASPIC, vendors take different approaches in the absence of such a test. In the end it doesn't matter so much what the spec says, as it's the TCK that has the final word on compatibility certification. In this case the TCK clearly says it's NOT required, while as mentioned the spec says it is. Why both JASPIC and JACC have historically been tested so little is still not entirely clear, but I have it on good authority (no pun ;)) that the situation is going to be improved.

So while this is theoretically not a spec issue, it is still very much a practical issue. I looked at 6 Java EE implementations and found the following:

JACC default providers

  Server            JACC provider present   JACC provider activated   Vendor discourages activating JACC
  JBoss EAP 6.3     V                       V                         X
  GlassFish 4.1     V                       V                         X
  Geronimo 3.0.1    V                       V                         X
  WebLogic 12.1.3   V                       X                         V
  JEUS 8 preview    V                       X                         V
  WebSphere 8.5     X                       X                         - (no provider present, so nothing to discourage)

As can be seen, only half of the servers investigated have JACC actually enabled. WebLogic 12.1.3 and the JEUS 8 preview both ship with a JACC policy provider, but it has to be enabled explicitly. Both WebLogic and JEUS 8 somewhat advise against using JACC in their documentation. TMaxSoft warns in its JEUS 7 security manual (there isn't one for JEUS 8 yet) that the default JACC provider is mainly for testing and advises against using it in real production scenarios.

WebSphere does not even ship with any default JACC policy provider, at least not that I could find. There's only a Tivoli Access Manager client, for which you have to install a separate external authorization server.

I haven't yet investigated Interstage AS, Cosminexus and WebOTX, but I hope to be able to look at them at a later stage.

Conclusion

Given the historical background of JACC it's a little more understandable why access to the role mapper was never standardized. Still, it is something that's needed for use cases other than the historical primary one, so after all this time it would still be welcome to have. Another huge disadvantage of JACC, the fact that it's simply not always there in Java EE, appeared to be yet another case of incomplete TCK coverage.

Tuesday, November 25, 2014

One of the new specs that will debut in Java EE 8 will be MVC 1.0, a second MVC framework alongside the existing MVC framework JSF.

A lot has been written about this. Discussions have mostly been about the why, whether it isn't introduced too late in the game, and what the advantages (if any) above JSF exactly are. Among the advantages that were initially mentioned were the ability to have different templating engines, have better performance and the ability to be stateless. Discussions have furthermore also been about the name of this new framework.

This name can be somewhat confusing. Namely, the term MVC to contrast with JSF is perhaps technically not entirely accurate, as both are MVC frameworks. The flavor of MVC intended to be implemented by MVC 1.0 is actually "action-based MVC", most well known among Java developers as "MVC the way Spring MVC implements it". The flavor of MVC that JSF implements is "Component-based MVC". Alternative terms for this are MVC-push and MVC-pull.

One can argue that JSF since 2.0 has been moving to a more hybrid model; view parameters, the PreRenderView event and view actions have been key elements of this, but the best practice of having a single backing bean back a single view and things like injectable request parameters and eager request scoped beans have been contributing to this as well. The discussion of component-based MVC vs action-based MVC is therefore a little less black and white than it may initially seem, but of course at its core JSF clearly remains a component-based MVC framework.

When people took a closer look at the advantages mentioned above, it quickly became clear they weren't quite specific to action-based MVC. JSF most definitely supports additional templating engines; there's a specific plug-in mechanism for that called the VDL (View Declaration Language). Stacked up against action-based MVC frameworks, JSF actually performs rather well, and of course JSF can be used stateless.

So the official motivation for introducing a second MVC framework in Java EE is largely not about a specific advantage that MVC 1.0 will bring to the table, but first and foremost about having a "different" approach. Depending on one's use case, either one of the approaches can be better, or suit one's mental model (perhaps based on experience) better, but very few claims are made about which approach is actually better.

Here we're also not going to investigate which approach is better, but will take a closer look at two actual code examples where the same functionality is implemented by both MVC 1.0 and JSF. Since MVC 1.0 is still in its early stages I took code examples from Spring MVC instead. It's expected that MVC 1.0 will be rather close to Spring MVC, not as to the actual APIs and plumbing used, but with regard to the overall approach and idea.

As I'm not a Spring MVC user myself, I took the examples from a Reddit discussion about this very topic. They are shown and discussed below:

CRUD

The first example is about a typical CRUD use case. The Spring controller is given first, followed by a backing bean in JSF.
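The original listings aren't reproduced here; the following reconstructions are based on the discussion that follows, so treat class, service and message names as approximations. First a sketch of the Spring MVC controller:

```java
@Named
@RequestMapping("/appointments")
public class AppointmentsController {

    @Inject
    private AppointmentService appointmentService;

    // Explicitly mapped to a URL; instantiates the Appointment, pushes it into
    // the model, and explicitly selects the view to render.
    @RequestMapping(value = "/new", method = RequestMethod.GET)
    public String getNewForm(Model model) {
        model.addAttribute("appointment", new Appointment());
        return "appointments/new";
    }

    // Post back: explicit validation check, and an explicit view selection
    // when validation fails.
    @RequestMapping(method = RequestMethod.POST)
    public String add(@Valid Appointment appointment, BindingResult result,
                      RedirectAttributes redirectAttributes) {
        if (result.hasErrors()) {
            return "appointments/new";
        }
        appointmentService.save(appointment);
        redirectAttributes.addFlashAttribute("message", "Appointment added");
        return "redirect:/appointments";
    }
}
```

And a sketch of the JSF backing bean (the flash message utility shown is OmniFaces' Messages, an assumption on my part):

```java
@Named
@RequestScoped
public class NewAppointmentBacking {

    @Inject
    private AppointmentService appointmentService;

    // Instantiated via the instance field initializer and made available to the
    // view via the getter; no URL mapping, the view pulls the bean in.
    private Appointment appointment = new Appointment();

    public Appointment getAppointment() {
        return appointment;
    }

    // Action method invoked by the view on post back. No explicit validation
    // check: when validation fails, JSF stays on the same view by default and
    // this method is simply not invoked.
    public String add() {
        appointmentService.save(appointment);
        Messages.addFlashGlobalInfo("Appointment added");
        return "/appointments?faces-redirect=true";
    }
}
```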

As can be seen from the two code examples, there are at a first glance quite a number of similarities. However there are also a number of fundamental differences that are perhaps not immediately obvious.

Starting with the similarities, both versions are @Named and have the same service injected via the same @Inject annotation. When a URL is requested (via a GET) then in both versions there's a new Appointment instantiated. In the Spring version this happens in getNewForm(), in the JSF version this happens via the instance field initializer. Both versions subsequently make this instance available to the view. In the Spring MVC version this happens by setting it as an attribute of the model object that's passed in, while in the JSF version this happens via a getter.

The view typically contains a form where a user is supposed to edit various properties of the Appointment shown above. When this form is posted back to the server, in both versions an add() method is called where the (edited) Appointment instance is saved via the service that was previously injected and a flash message is set.

Finally both versions return an outcome that redirects the user to a new page (PRG pattern). Spring MVC uses the syntax "redirect:/appointments" for this, while JSF uses "/appointments?faces-redirect=true" to express the same thing.

Despite the large number of similarities as observed above, there is a big fundamental difference between the two; the class shown for Spring MVC represents a controller. It's mapped directly to a URL and it's pretty much the first thing that is invoked. All of the above runs without having determined what the view will be. Values computed here will be stored in a contextual object and a view is selected. We can think of this storing as pushing values (the view didn't ask for them, since it's not even selected at this point). Hence the alternative name "MVC push" for this approach.

The class shown for the JSF example is NOT a controller. In JSF the controller is provided by the framework. It selects a view based on the incoming URL and the outcome of a ResourceHandler. This will cause a view to execute, and as part of that execution a (backing) bean at some point will be pulled in. Only after this pull has been done will the logic of the class in question start executing. Because of this the alternative name for this approach is "MVC pull".

Over to the concrete differences; in the Spring MVC sample instantiating the Appointment had to be explicitly mapped to a URL and the view to be rendered afterwards is explicitly defined. In the JSF version, both URL and view are defaulted; it's the view from which the bean is pulled. A backing bean can override the default view to be rendered by using the aforementioned view action. This gives it some of the "feel" of a controller, but doesn't change the fundamental fact that the backing bean had to be pulled into scope by the initial view first (things like @Eager in OmniFaces do blur the lines further by instantiating beans before a view pulls them in).

The post back case shows something similar. In the Spring version the add() method is explicitly mapped to a URL, while in the JSF version it corresponds to an action method of the view that pulled the bean in.

There's another difference with respect to validation. In the Spring MVC example there's an explicit check to see if validation has failed and an explicit selection of a view to display errors. In this case that view is the same one again ("appointments/new"), but it's still provided explicitly. In the JSF example there's no explicit check. Instead, the code relies on the default of staying on the same view and not invoking the action method. In effect, the exact same thing happens in both cases but the mindset to get there is different.

Dynamically loading images

The second example is about a case where a list of images is rendered first and where subsequently the content of those images is dynamically provided by the beans in question. The Spring code is again given first, followed by the JSF code.
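Again, the original listings aren't reproduced here; the following is a reconstruction from the discussion that follows, with class and method names matching those mentioned in the text and everything else an approximation. The Spring MVC controller (using the ModelAndView alternative) and a JSP-style view:

```java
@Controller
public class ThumbnailsController {

    @Inject
    private ImageDAO imageDAO;

    // Initial GET: put the list of thumbnails in the model and select the view.
    @RequestMapping(value = "/photos", method = RequestMethod.GET)
    public ModelAndView photos() {
        return new ModelAndView("photos", "thumbnails", imageDAO.getThumbnails());
    }

    // Each rendered image link points at this explicitly mapped URL.
    @RequestMapping(value = "/photos/thumbnail/{id}", method = RequestMethod.GET)
    @ResponseBody
    public byte[] thumbnail(@PathVariable Long id) {
        return imageDAO.getThumbnail(id);
    }
}
```

```xml
<c:forEach items="${thumbnails}" var="thumbnail">
    <img src="/photos/thumbnail/${thumbnail.id}" />
</c:forEach>
```

The JSF backing bean (using the producer alternative) and its Facelets view:

```java
@Model
public class ThumbnailsBacking {

    @Inject
    private ImageDAO imageDAO;

    // Alternative to a field + getter: a producer that makes the list available
    // under the EL name "thumbnails" when that name is first resolved.
    @Produces @Named("thumbnails")
    public List<Thumbnail> getThumbnails() {
        return imageDAO.getThumbnails();
    }

    // Not mapped to a URL; the view references this method directly via EL and
    // the component generates the URL.
    public byte[] thumbnail(Long id) {
        return imageDAO.getThumbnail(id);
    }
}
```

```xml
<ui:repeat value="#{thumbnails}" var="thumbnail">
    <o:graphicImage value="#{thumbnailsBacking.thumbnail(thumbnail.id)}" />
</ui:repeat>
```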

Starting with the similarities again, we see that the markup for both views is fairly similar in structure. Both have an iteration tag that takes values from an input list called thumbnails and during each round of the iteration the ID of each individual thumbnail is used to render an image link.

Both the classes for Spring MVC and JSF call getThumbnails() on the injected DAO for the initial GET request, and both have a nearly identical thumbnail() method where getThumbnail(id) is called on the DAO in response to each request for a dynamic image that was rendered before.

Both versions also show that each framework has an alternative way to do what they do. In the Spring MVC example we see that instead of having a Model passed-in and returning a String based outcome, there's an alternative version that uses a ModelAndView instance, where the outcome is set on this object.

In the JSF version we see that instead of having an instance field + getter, there's an alternative version based on a producer. In that variant the data is made available under the EL name "thumbnails", just as in the Spring MVC version.

On to the differences, we see that the Spring MVC version is again using explicit URLs. The otherwise identical thumbnail() method has an extra annotation for specifying the URL to which it's mapped. This very URL is the one that's used in the img tag in the view. JSF on the other hand doesn't ask to map the method to a URL. Instead, there's an EL expression used to point directly to the method that delivers the image content. The component (o:graphicImage here) then generates the URL.

While the producer method that we showed in the JSF example (getThumbnails()) looked like JSF was declaratively pushing a value, it's in fact still a pull. The method will not be called, and therefore a value not produced, until the EL variable "thumbnails" is resolved for the first time.

Another difference is that the view in the JSF example contains two components (ui:repeat and o:graphicImage) that adhere to JSF's component model, and that the view uses a templating language (Facelets) that is part of the JSF spec itself. Spring MVC (of course) doesn't specify a component model, and while it could theoretically come with its own templating language it doesn't have that one either. Instead, Spring MVC relies on external templating systems, e.g. JSP or Thymeleaf.

Finally, a remarkable difference is that the two very similar classes ThumbnailsController and ThumbnailsBacking are annotated with @Controller and @Model respectively: two completely opposite responsibilities of the MVC pattern. Indeed, in JSF everything that's referenced by the view (via EL expressions) is officially called the model. ThumbnailsBacking is, from JSF's point of view, the model. In practice the lines are a bit more blurred, and the backing bean is more akin to a plumbing component that sits between the model, view and controller.

Conclusion

We haven't gone in-depth into what it means to have a component model and what advantages that has, nor have we discussed in any detail what a RESTful architecture brings to the table. In passing we mentioned the concept of state, but did not look at that either. Instead, we mainly focused on code examples for two different use cases and compared and contrasted these. In that comparison we tried as much as possible to refrain from any judgement about which approach is better, component-based MVC or action-oriented MVC (as I'm one of the authors of the JSF utility library OmniFaces and a member of the JSF EG, such a judgement would always be biased of course).

We saw that while the code examples at first glance have remarkable similarities there are in fact deep fundamental differences between the two approaches. It's an open question whether the future is with either one of those two, with a hybrid approach of them, or with both living next to each other. Java EE 8 at least will opt for that last option and will have both a component based MVC framework and an action-oriented one.

Monday, November 24, 2014

After a poll regarding the future dependencies of OmniFaces 2.0 and two release candidates we're proud to announce that today we've finally released OmniFaces 2.0.

OmniFaces 2.0 is a direct continuation of OmniFaces 1.x, but has started to build on newer dependencies. We also took the opportunity to do a little refactoring here and there (specifically noticeable in the Events class).

The easiest way to use OmniFaces is via Maven by adding the following to pom.xml:
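Assuming the coordinates as published to Maven Central, that would be:

```xml
<dependency>
    <groupId>org.omnifaces</groupId>
    <artifactId>omnifaces</artifactId>
    <version>2.0</version>
</dependency>
```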

A detailed description of the biggest items of this release can be found on the blog of BalusC.

One particular new feature not mentioned there is a new capability that has been added to <o:validateBean>: class level bean validation. While JSF core and OmniFaces both have had a validateBean for some time, one thing it curiously did not do, despite its name, was actually validate a bean. Instead, those existing versions just controlled various aspects of bean validation. Bean validation itself was then only applied to individual properties of a bean, namely those that were bound to input components.

With OmniFaces 2.0 it's now possible to specify that a bean should be validated at the class level. The following gives an example of this:
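A sketch of such usage (bean and property names are illustrative, and the exact <o:validateBean> attributes may differ from this approximation):

```xml
<h:inputText value="#{backingBean.product.item}" />
<h:inputText value="#{backingBean.product.order}" />
<o:validateBean value="#{backingBean.product}" validationGroups="com.example.ClassLevelGroup" />
```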

Using the existing bean validation integration of JSF, only product.item and product.order can be validated, since these are the properties that are directly bound to an input component. Using <o:validateBean> the product itself can be validated as well, and this will happen at the right place in the JSF lifecycle, meaning the "process validations" phase. True to the way JSF works, if validation fails the actual model will not be updated. In order to prevent this update, class level bean validation is performed on a copy of the actual product (with a plug-in structure to choose between multiple ways of copying the model object).

Thursday, November 20, 2014

After an intense debugging session following the release of OmniFaces 2.0, we have decided to release one more release candidate; OmniFaces 2.0 RC2.

For RC2 we mostly focused on TomEE 2.0 compatibility. Even though TomEE 2.0 is only available in a SNAPSHOT release, we're happy to see that it passed almost all of our tests and was able to run our showcase application just fine. The only place where it failed was with the viewParamValidationFailed page, but this appeared to be an issue in MyFaces and unrelated to TomEE itself.

To repeat from the RC1 announcement: OmniFaces 2.0 is the first release that will depend on JSF 2.2 and CDI 1.1 from Java EE 7. Our Servlet dependency is now Servlet 3.0 from Java EE 6 (used to be 2.5, although we optionally used 3.0 features before). The minimal Java SE version is now Java 7.

Sunday, November 16, 2014

Authentication is a topic that comes up often for web applications. The Java EE spec supports authentication for those via the Servlet and JASPIC specs, but doesn't say too much about how to authenticate for JAX-RS.

Luckily JAX-RS is simply layered on top of Servlets, and one can therefore just use JASPIC's authentication modules for the Servlet Container Profile. There's thus not really a need for a separate REST profile, as there is for SOAP web services.

While using the same basic technologies as authentication modules for web applications, the requirements for modules that are to be used for JAX-RS are a bit different.

JAX-RS is often used to implement an API that is used by scripts. Such scripts typically do not engage in an authentication dialog with the server, i.e. it's rare for an API to redirect to a form asking for credentials, let alone ask to log in with a social provider.

An even more fundamental difference is that in web apps it's commonplace to establish a session for, among other things, authentication purposes. While possible to do this for JAX-RS as well, it's not exactly a best practice; RESTful APIs are supposed to be fully stateless.

To prevent the need for an arbitrary authentication dialog with the server, it's typical for scripts to send their credentials upfront with a request. BASIC authentication can be used for this, which does initiate a dialog, albeit a standardized one. Another option is to provide a token as either a request parameter or as an HTTP header. It should go without saying that in both these cases all communication should be done exclusively via HTTPS.

Preventing a session from being created can be done in several ways as well. One way is to store the authentication data in an encrypted cookie instead of in the HTTP session. While this surely works, it does feel somewhat weird to "blindly" accept the authenticated identity from what the client provides. If the encryption is strong enough it *should* be okayish, but still. Another method is to quite simply authenticate again with each request. This however has its own problem, namely the potential for bad performance. An in-memory user store will likely be very fast to authenticate against, but anything involving an external system like a database or LDAP server probably is not.

The performance problem of authenticating with each request can be mitigated though by using an authentication cache. The question is then whether this isn't really the same as creating a session?

While both an (http) session and a cache consume memory at the server, a major difference between the two is that a session is a store for all kinds of data, which includes state, but a cache is only about data locality. A cache is thus by definition never the primary source of data.

What this means is that we can throw data away from a cache at arbitrary times, and the client won't know the difference, except that its next request may be somewhat slower. We can't really do that with session data. Setting a hard limit on the size of a cache is thus a lot easier than it is for a session, and it's not mandatory to replicate a cache across a cluster.

Still, as with many things it's a trade-off: having zero data stored at the server, but having a cookie sent along with each request that needs to be decrypted every time (which for strong encryption can be computationally expensive), or having some data at the server (in a very manageable way), but without the uneasiness of directly accepting an authenticated state from the client.

Here we'll be giving an example for a general stateless auth module that uses header based token authentication and authenticates with each request. This is combined with an application level component that processes the token and maintains a cache. The auth module is implemented using JASPIC, the Java EE standard SPI for authentication. The example uses a utility library that I'm incubating called OmniSecurity. This library is not a security framework itself, but provides several convenience utilities for the existing Java EE security APIs. (like OmniFaces does for JSF and Guava does for Java)

One caveat is that the example assumes CDI is available in an authentication module. In practice this is the case when running on JBoss, but not when running on most other servers. Another caveat is that OmniSecurity is not yet stable or complete. We're working towards an 1.0 version, but the current version 0.6-ALPHA is as the name implies just an alpha version.

A server auth module (SAM) is not entirely unlike a servlet filter, albeit one that is called before every other filter. Just as a servlet filter, it's called with an HttpServletRequest and HttpServletResponse, is capable of including and forwarding to resources, and can wrap both the request and the response. A key difference is that it also receives an object via which it can pass a username and optionally a series of roles to the container. These will then become the authenticated identity, i.e. the username that is passed to the container here will be what HttpServletRequest.getUserPrincipal().getName() returns. Furthermore, a server auth module doesn't control the continuation of the filter chain by calling or not calling FilterChain.doFilter(), but by returning a status code.
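A sketch of such a module, loosely along the lines of OmniSecurity's TokenAuthModule. The HttpServerAuthModule base class and the notifyContainerAboutLogin and doNothing methods are mentioned in the text; isProtected, responseNotFound, Beans.getReference and the header name are my assumptions and may differ in the actual library:

```java
public class TokenAuthModule extends HttpServerAuthModule {

    @Override
    public AuthStatus validateHttpRequest(HttpServletRequest request, HttpServletResponse response,
            HttpMsgContext httpMsgContext) throws AuthException {

        String token = getToken(request);
        if (token != null) {
            // The actual token check is delegated to an application level component.
            TokenIdentityStore identityStore = Beans.getReference(TokenIdentityStore.class);
            if (identityStore.authenticate(token)) {
                // Pass username and roles to the container; this returns SUCCESS.
                return httpMsgContext.notifyContainerAboutLogin(
                    identityStore.getUserName(), identityStore.getApplicationRoles());
            }
        }

        if (httpMsgContext.isProtected()) {
            // Don't reveal that a protected resource exists.
            return httpMsgContext.responseNotFound();
        }

        // Public resource and no authentication happened: "do nothing".
        return httpMsgContext.doNothing();
    }

    private String getToken(HttpServletRequest request) {
        // Assumption: the token is sent via a custom header.
        return request.getHeader("X-Auth-Token");
    }
}
```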

In the example above the authentication module extracts a token from the request. If one is present, it obtains a reference to a TokenIdentityStore, which does the actual authentication of the token and provides a username and roles if the token is valid. It's not strictly necessary to have this separation and the authentication module could just as well contain all required code directly. However, just like the separation of responsibilities in MVC, it's typical in authentication to have a separation between the mechanism and the repository. The first contains the code that does interaction with the environment (aka the authentication dialog, aka authentication messaging), while the latter doesn't know anything about an environment and only keeps a collection of users and roles that are accessed via some set of credentials (e.g. username/password, keys, tokens, etc).

If the token is found to be valid, the authentication module retrieves the username and roles from the identity store and passes these to the container. Whenever an authentication module does this, it's supposed to return the status "SUCCESS". By using the HttpMsgContext this requirement is largely made invisible; the code just returns whatever HttpMsgContext.notifyContainerAboutLogin returns.

If authentication did not happen for whatever reason, what follows depends on whether the resource (URL) that was accessed is protected (requires an authenticated user) or public (does not). In the first situation we always return a 404 to the client. This is a general security precaution; according to HTTP we should actually return a 403 here, but if we did, users could attempt to guess what the protected resources are. For applications where it's already clear what all the protected resources are, it would make more sense to indeed return that 403. If the resource is a public one, the code "does nothing". Since authentication modules in Java EE need to return something, and there's no status code that indicates nothing should happen, doing nothing in fact requires a tiny bit of work. Luckily this work is largely abstracted by HttpMsgContext.doNothing().

Note that the TokenAuthModule as shown above is already implemented in the OmniSecurity library and can be used as is. The TokenIdentityStore however has to be implemented by user code. An example of an implementation is shown below:
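A sketch of such an implementation; the TokenIdentityStore method names, the UserService and the injected Infinispan Cache are assumptions based on the description that follows:

```java
@RequestScoped
public class CachedTokenIdentityStore implements TokenIdentityStore {

    @Inject
    private UserService userService;

    @Inject
    private Cache<String, User> usersCache; // Infinispan cache

    private User user;

    @Override
    public boolean authenticate(String token) {
        // Check the cache first; a cache is never the primary source of data,
        // so a missing entry just means a slower lookup via the service.
        user = usersCache.get(token);
        if (user == null) {
            user = userService.getUserByLoginToken(token);
            if (user != null) {
                usersCache.put(token, user);
            }
        }
        return user != null;
    }

    @Override
    public String getUserName() {
        return user == null ? null : user.getUserName();
    }

    @Override
    public List<String> getApplicationRoles() {
        return user == null ? Collections.<String>emptyList() : user.getRoles();
    }
}
```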

This TokenIdentityStore implementation is injected with both a service to obtain users from and a cache instance (Infinispan was used here). The code simply checks if a User instance associated with the token is already in the cache, and if it's not, gets it from the service and puts it in the cache. The User instance is subsequently used to provide a user name and roles.

Installing the authentication module can be done during startup of the container via a Servlet context listener as follows:
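A sketch of such a listener, assuming OmniSecurity offers a Jaspic utility with a registerServerAuthModule method (the exact registration API may differ in the actual library):

```java
@WebListener
public class SamRegistrationListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // Register the SAM for this application only; under the hood this
        // would wrap the standard AuthConfigFactory registration.
        Jaspic.registerServerAuthModule(new TokenAuthModule(), sce.getServletContext());
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // No-op; the registration goes away with the context.
    }
}
```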

As shown in this article, adding an authentication module for JAX-RS that's fully stateless and doesn't store an authenticated state on the client is relatively straightforward using Java EE authentication modules. Big caveats are that the most straightforward approach uses CDI, which is not always available in authentication modules (in WildFly it is available), and that the example uses the OmniSecurity library to simplify some of JASPIC's arcane native APIs, while OmniSecurity is still only in alpha status.

Saturday, November 8, 2014

We are happy to announce that we have just released OmniFaces 2.0 release candidate 1.

OmniFaces 2.0 is the first release that will depend on JSF 2.2 and CDI 1.1 from Java EE 7. Our Servlet dependency is now Servlet 3.0 from Java EE 6 (used to be 2.5, although we optionally used 3.0 features before). The minimal Java SE version is now Java 7.

Friday, October 31, 2014

When we normally talk about the Java EE cycle time, we talk about the time it takes between major revisions of the spec. E.g. the time between Java EE 6 and Java EE 7. While this is indeed the leading cycle time, there are two additional cycles that are of major importance:

The time it takes for vendors to release an initial product that implements the new spec revision

The time it takes vendors to stabilize their product (which incidentally is closely tied to the actual user adoption rate)

In this article we'll take a somewhat closer look at the time it takes vendors to release their initial product. But first let's take a quick look at the time between spec releases. The following table lists the Java EE version history and the delta time between versions:

Java EE delta times between releases

Version | Start date   | Release date   | Days since last release  | Days spent on spec
1.2     | -            | 12 Dec, 1999   | -                        | -
1.3     | 18 Feb, 2000 | 24 Sep, 2001   | 653 (1 year, 9 months)   | 584 (1 year, 7 months)
1.4     | 22 Oct, 2001 | 12 Nov, 2003   | 779 (2 years, 1 month)   | 751 (2 years)
5       | 10 May, 2004 | 11 May, 2006   | 911 (2 years, 6 months)  | 731 (2 years)
6       | 16 Jul, 2007 | 10 Dec, 2009   | 1310 (3 years, 7 months) | 878 (2 years, 4 months)
7       | 14 Mar, 2011 | 28 May, 2013   | 1266 (3 years, 5 months) | 806 (2 years, 2 months)
8       | 17 Aug, 2014 | ~May, 2017 (*) | 1461 (4 years) (*)       | 1015 (2 years, 9 months) (*)

(* estimated)

As can be seen, the time between releases steadily increased, but seemed to have stabilized at approximately three and a half years. The original plan was to release Java EE 8 at the same pace, meaning we would have expected it around the end of 2016, but this was later changed to H1 2017. "H1" more often than not means the last month of H1 (and certainly not the first three months, otherwise "Q1" would have been used). This means around May 2017 is a likely release date, pushing the time to a solid 4 years.

It may be worth emphasizing that the time between releases is not fully spent on Java EE. With respect to spec work, there is typically what one may call The Big Void™ between releases: a period of time in which no spec work is being done at all. This void starts right after the spec is released and the various EGs are disbanded. The time is used differently by everyone, but typically it goes to implementation work, and to cleaning up and refactoring code, project structures, tests and other artifacts.

After some time (~1 year for Java EE 6, ~5 months for Java EE 7) initial discussions start where just some ideas are pitched and the landscape is explored. After that it still takes some time until the work really kicks off for the front runners (~1 year and 5 months for Java EE 6, ~1 year and 3 months for Java EE 7).

Those numbers are however for the front runners; a bunch of sub-specs of Java EE start even later than this, and some of them even finish well before the release date of the main umbrella spec. So while the time between releases seems like a long time, it's important to realize that by far not all of this time is actually spent on the various specifications. As can be seen in the table above, the time actually spent on the specification has been fairly stable at around 2 years. 1.3 was a bit below that and 6 a bit above it, but it's all fairly close to these two years. What has been increasing is the time taken up by The Void (or uptake, as some others call it); from less than a month between 1.3 and 1.4 to well over a year between 5 and 6, and 6 and 7.

As mentioned previously, finalizing the spec is only one aspect of the entire process. With the exception of GlassFish, the reference implementation (RI) that is released together with the new spec revision, the implementation cycle of Java EE starts right after a spec release.

A small complication in tracking Java EE server products is that several of these products are variations of each other, or just different versions taken from the same code line. E.g. WASCE is (was) an intermediate release of Geronimo. JBoss AS 6 is obviously just an earlier version of JBoss AS 7, which is itself an earlier version of JBoss EAP 6 (although JBoss markets it as a separate product). NetWeaver is said to be a version of TomEE, etc.

Also complicating the certification and first-version story is that a number of vendors chose to have beta or technical preview versions certified. On one occasion a vendor even certified a snapshot version. Obviously those versions are not intended for any practical (production) use. It's perhaps somewhat questionable that servers which, in the eyes of their own vendors, are very far from the stability required by their customers can be certified at all.

The following two tables show how long it took the Java EE 6 Full- and Web Profile to be implemented for each server.

As we can see here, excluding GlassFish and the tech preview of JEUS, it took 1 year and 6 months for the first production ready (according to the vendor!) Java EE 6 full profile server to appear on the market, while most other servers appeared after around two and a half years.

Do note that "production ready according to the vendor" is a state that can not easily be quantified with respect to quality. What some vendor calls 1.0 Final, may correspond to what another vendor calls 0.5 Beta. From the above table it doesn't mean that say WebLogic 12.1.1 (production ready according to its vendor) is either more or less stable than e.g. JEUS 7 Tech Preview 1 (not production ready according to its vendor).

The Java EE 7 spec was released at 28 May, 2013, which is 522 days (1 year, 5 months) ago at the time of writing. So let's take a look at what the current situation is with respect to available Java EE 7 servers:

With quite a few entries present already we can see that those largely follow the same pattern as the Java EE 6 implementation cycle. (I'll be updating the table above as new certified servers come in)

GlassFish is by definition the first release, while JEUS is again the second one with a developer preview (a pattern that goes all the way back to J2EE 1.2). There's unfortunately no information available on when exactly JEUS 8 developer preview was released, but a blog posting about it was published at 26 Aug, 2013, so I took that date.

For JBoss the situation for Java EE 7 compared to EE 6 is not really that much different either. WildFly 8 was released after 259 days (the plan was 167 days), which is not that different from JBoss AS 6 which was released after 322 days. One big difference here though is that AS 6 was only certified for the web profile, while in fact practically implementing the full profile. The similarities don't end there, as just as with Java EE 6 the eventual production version (JBoss EAP 6) wasn't based on JBoss AS 6.x, but on the major new version JBoss AS 7. JBoss EAP 7 is not based on JBoss WildFly 8.x, but on JBoss WildFly 10.0. Where JBoss EAP 6 took 2.5 years to be released, JBoss EAP 7 took a little longer, but not that much; nearly exactly 3 years.

Hitachi AS was the first implementation of Java EE 7 that is commercially supported by its own vendor, but outside Japan Hitachi AS is not that well known. For the rest of the world IBM's Liberty was the first one, relatively speaking closely followed by Oracle's flagship server WebLogic.

This means that currently (mid 2016) all three big vendors have a supported Java EE 7 product available.

If history is anything to go by, we may see one or two additional Java EE 7 implementations in a few months, while after a little more than a year from now most servers should be available in a Java EE 7 flavor. At the moment of writing it looks like Web Profile implementation TomEE 7.0 indeed isn't that far away. It took TomEE 2 years and 4 months for Java EE 6. For Java EE 7 that will be at least 3 years, but hopefully not that much longer.

Sunday, September 14, 2014

In the Java EE platform programmers have a way to reference values in beans via textual expressions. These textual expressions are then compiled by the implementation (via the Expression Language, AKA EL spec) to instances of ValueExpression.

E.g. the following EL expression can be used to refer to the named bean "foo" and its property "bar":

#{foo.bar}

Expressions can be chains of arbitrary length, and can include method calls as well. E.g.:

#{foo.bar(1).kaz.zak(test)}

An important aspect of these expressions is that they are highly contextual, specifically where it concerns the top level variables. These consist of the object that starts the chain ("foo" here) and any EL variables used as method arguments ("test" here). Because of this, it's not an uncommon requirement to want to resolve the expression while it's still in context, in order to obtain the so-called final base and the final property/method, the latter including the resolved and bound parameters.

For any of the five methods, the ELResolver.getValue() method is used to resolve all properties up to but excluding the last one. This provides the base object.

There is no mention here that ELResolver.invoke is used as well if any of the intermediate nodes in the chain is a method invocation (like bar(1) in #{foo.bar(1).kaz.zak(test)}).

The fact that there's a ValueReference that only supports properties, and no corresponding MethodReference, is extra curious, since method invocations in chains and the ValueReference type were both introduced in EL 2.2.

So is there any hope of getting the final base and method if a ValueExpression happens to be pointing to a method? There appears to be a way, but it's a little tricky. The trick in question consists of using a special tracing ELResolver and taking advantage of the fact that some methods on ValueExpression are specified to resolve the expression "up to but excluding the last [node]". Using this we can use the following approach:

Instantiate an EL context which contains the special tracing EL resolver

Call a method on the ValueExpression that resolves the chain until the next to last node (e.g. getType()) using the special EL context

In the tracing EL resolver count each intermediate call, so when getType() returns the length of the chain is known

Call a method on the ValueExpression that resolves the entire chain (e.g. getValue()) using the same special EL context instance

When the EL resolver reaches the next to last node (determined by counting intermediate calls again), wrap the return value from ELResolver.getValue or ELResolver.invoke

If either ELResolver.getValue or ELResolver.invoke is called again later with our special wrapped type, we know this is the final node and can collect all details that we need; the base, property or method name and the resolved method parameters (if any). All of these are simply passed to us by the EL implementation

The return value wrapping of the next to last node (at call count N) may need some extra explanation. After all, why not just wait till we're called the Nth + 1 time? The issue is that this Nth + 1 call may be for resolving variables that are passed as parameters into the final node if this final node is a method invocation. The number of such parameters is unknown and each parameter can consist of a chain of arbitrary length.

E.g. consider the following expression:

#{foo.bar.kaz(test.a.b.c(x.r), bean.x.y.z(o).p)}

In such a case the first pass of the above given approach will count the calls up until the point of resolving "bar", which is thus at call count N. If "kaz" was a simple property, our EL resolver would be asked to resolve [return value of "bar"]."kaz" at call count N + 1. However, since "kaz" is not a simple property but a complex method invocation with EL variables, the next call after N will be for resolving the base of the first EL variable used in the method invocation ("test" here).

One may also wonder why we do not "simply" get the textual EL representation of an EL expression, chop off the last node using simple string manipulation and resolve that. The reason is twofold. First, it may work for very simple expressions (like #{a.b.c}), but doesn't work in general for complex ones (e.g. #{empty foo? a.b.c : x.y.z}). A second issue is that a given ValueExpression instance all too often contains state (like an embedded VariableMapper instance), which is lost when we just get the EL string from a ValueExpression and evaluate that.

class InspectorElResolver extends ELResolverWrapper {

    private int passOneCallCount;
    private int passTwoCallCount;

    private Object lastBase;
    private Object lastProperty; // Method name in case VE referenced a method, otherwise property name
    private Object[] lastParams; // Actual parameters supplied to a method (if any)

    private boolean subchainResolving;

    // Marker holder via which we can track our last base. This should become
    // the last base in a next iteration. This is needed because if the very last property is a
    // method node with a variable, we can't track resolving that variable anymore since it will
    // not have been processed by the getType() call of the first pass.
    // E.g. a.b.c(var.foo())
    private FinalBaseHolder finalBaseHolder;

    private InspectorPass pass = InspectorPass.PASS1_FIND_NEXT_TO_LAST_NODE;

    public InspectorElResolver(ELResolver elResolver) {
        super(elResolver);
    }

    @Override
    public Object getValue(ELContext context, Object base, Object property) {
        if (base instanceof FinalBaseHolder) {
            // If we get called with a FinalBaseHolder, which was set in the next to last node,
            // we know we're done and can set the base and property as the final ones.
            lastBase = ((FinalBaseHolder) base).getBase();
            lastProperty = property;

            context.setPropertyResolved(true);
            return ValueExpressionType.PROPERTY;
        }

        checkSubchainStarted(base);

        if (subchainResolving) {
            return super.getValue(context, base, property);
        }

        recordCall(base, property);

        return wrapOutcomeIfNeeded(super.getValue(context, base, property));
    }

    @Override
    public Object invoke(ELContext context, Object base, Object method, Class<?>[] paramTypes, Object[] params) {
        if (base instanceof FinalBaseHolder) {
            // If we get called with a FinalBaseHolder, which was set in the next to last node,
            // we know we're done and can set the base, method and params as the final ones.
            lastBase = ((FinalBaseHolder) base).getBase();
            lastProperty = method;
            lastParams = params;

            context.setPropertyResolved(true);
            return ValueExpressionType.METHOD;
        }

        checkSubchainStarted(base);

        if (subchainResolving) {
            return super.invoke(context, base, method, paramTypes, params);
        }

        recordCall(base, method);

        return wrapOutcomeIfNeeded(super.invoke(context, base, method, paramTypes, params));
    }

    @Override
    public Class<?> getType(ELContext context, Object base, Object property) {

        // getType is only called on the last element in the chain (if the EL
        // implementation actually calls this, which might not be the case if the
        // value expression references a method)
        //
        // We thus do know the size of the chain now, and the "lastBase" and "lastProperty"
        // that were set *before* this call are the next to last now.
        //
        // Alternatively, this method is NOT called by the EL implementation, but then
        // "lastBase" and "lastProperty" are still the next to last.
        //
        // Independent of what the EL implementation does, "passOneCallCount" should thus represent
        // the total size of the call chain minus 1. We use this in pass two to capture the
        // final base, property/method and optionally parameters.

        context.setPropertyResolved(true);

        // Special value to signal that getType() has actually been called (this value is
        // not used by the algorithm now, but may be useful when debugging)
        return InspectorElContext.class;
    }

    private boolean isAtNextToLastNode() {
        return passTwoCallCount == passOneCallCount;
    }

    private void checkSubchainStarted(Object base) {
        if (pass == InspectorPass.PASS2_FIND_FINAL_NODE && base == null && isAtNextToLastNode()) {
            // If "base" is null it means a new chain is being resolved.
            // The main expression chain likely has ended with a method that has one or more EL variables
            // as parameters that now need to be resolved.
            // E.g. a.b().c.d(var1)
            subchainResolving = true;
        }
    }

    private void recordCall(Object base, Object property) {
        switch (pass) {
            case PASS1_FIND_NEXT_TO_LAST_NODE:
                // In the first "find next to last" pass, we'll be collecting the next to last element
                // in an expression.
                // E.g. given the expression a.b().c.d, we'll end up with the base returned by b() and "c" as
                // the last property.
                passOneCallCount++;
                lastBase = base;
                lastProperty = property;
                break;

            case PASS2_FIND_FINAL_NODE:
                // In the second "find final node" pass, we'll be collecting the final node
                // in an expression. We need to take care that we're not actually calling / invoking
                // that last element as it may have a side-effect that the user doesn't want to happen
                // twice (like storing something in a DB etc).
                passTwoCallCount++;

                if (passTwoCallCount == passOneCallCount) {
                    // We're at the same call count as the first phase ended with.
                    // If the chain has resolved the same, we should be dealing with the same base and property now.
                    if (base != lastBase || property != lastProperty) {
                        throw new IllegalStateException(
                            "First and second pass of resolver at call #" + passTwoCallCount +
                            " resolved to different base or property.");
                    }
                }
                break;
        }
    }

    private Object wrapOutcomeIfNeeded(Object outcome) {
        if (pass == InspectorPass.PASS2_FIND_FINAL_NODE && finalBaseHolder == null && isAtNextToLastNode()) {
            // We're at the second pass and at the next to last node in the expression chain.
            // "outcome" which we have just resolved should thus represent our final base.

            // Wrap our final base in a special class that we can recognize when the EL implementation
            // invokes this resolver later again with it.
            finalBaseHolder = new FinalBaseHolder(outcome);
            return finalBaseHolder;
        }

        return outcome;
    }

    public InspectorPass getPass() {
        return pass;
    }

    public void setPass(InspectorPass pass) {
        this.pass = pass;
    }

    public Object getBase() {
        return lastBase;
    }

    public Object getProperty() {
        return lastProperty;
    }

    public Object[] getParams() {
        return lastParams;
    }
}

As seen, the support for ValueExpressions that point to methods is not optimal in the current EL specification. With some effort we can work around this, but arguably such functionality should be present in the specification itself.

Thursday, August 7, 2014

A common task when developing Java EE applications is that of converting data. In JSF we convert objects to a string representation for rendering inside an (HTML) response, and convert the string back to an object after a postback. In JPA we convert objects from and to types known by our database, in JAX-RS we convert request parameter strings into objects, and so on.

So given the pervasiveness of this task, is there any common converter type or mechanism in the Java EE platform?

Unfortunately it appears such a common converter type is not there. While rather similar in nature, many specs in Java EE define their very own converter type. Below we take a look at the various converter types that are currently in use by the platform.

JSF

One of the earlier converter types in the Java EE platform is contributed by JSF. This converter type is able to convert from String to Object and the other way around. Because it pre-dates Java SE 5 it doesn't use a generic type parameter. While its name and methods are very general, the signatures of both methods take two JSF-specific types. These specific types however are rarely if ever needed for the actual conversion, but are typically used to provide feedback to the user after validation has failed.
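For reference, the interface can be sketched as follows. The two JSF-specific types are stubbed here so the fragment is self-contained; in the real API they are javax.faces.context.FacesContext and javax.faces.component.UIComponent, and the interface itself is javax.faces.convert.Converter:

```java
// Stand-ins for the two JSF specific types that appear in the signatures.
class FacesContext { }
class UIComponent { }

// Paraphrased from javax.faces.convert.Converter. Note the absence of a
// generic type parameter; the interface pre-dates Java SE 5.
interface Converter {
    Object getAsObject(FacesContext context, UIComponent component, String value);
    String getAsString(FacesContext context, UIComponent component, Object value);
}

// Example: a converter for Integer. The two JSF parameters are typically only
// needed for reporting conversion errors, not for the conversion itself.
class IntegerConverter implements Converter {
    public Object getAsObject(FacesContext context, UIComponent component, String value) {
        return value == null || value.isEmpty() ? null : Integer.valueOf(value);
    }
    public String getAsString(FacesContext context, UIComponent component, Object value) {
        return value == null ? "" : value.toString();
    }
}
```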

JAX-RS

JAX-RS too defines its very own converter type: ParamConverter. Just like the JSF Converter it's able to convert from a String to any Java Object, but this time there is a generic type parameter in the interface. There's also a method defined to convert the Object back into a String, but this one is curiously reserved for future use.
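A sketch of the shape of this interface, paraphrased from javax.ws.rs.ext.ParamConverter so the fragment is self-contained, together with a trivial implementation:

```java
// Paraphrased from javax.ws.rs.ext.ParamConverter. Unlike the JSF converter
// there is a generic type parameter; the toString() direction is the one the
// article notes as reserved for future use.
interface ParamConverter<T> {
    T fromString(String value);
    String toString(T value);
}

// Example implementation converting request parameter strings to Boolean.
class BooleanParamConverter implements ParamConverter<Boolean> {
    public Boolean fromString(String value) {
        return Boolean.valueOf(value);
    }
    public String toString(Boolean value) {
        return value == null ? null : value.toString();
    }
}
```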

JPA

One of the most flexible converters in terms of its interface is the JPA converter AttributeConverter. This one is able to convert between any two types in both directions, as denoted by 2 generic type parameters. The naming of the converter methods is very specific though.
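The shape of the interface, paraphrased from javax.persistence.AttributeConverter so the fragment is self-contained, along with the classic Boolean-to-"Y"/"N" example:

```java
// Paraphrased from javax.persistence.AttributeConverter: two generic type
// parameters, conversion in both directions, and rather specific method names.
interface AttributeConverter<X, Y> {
    Y convertToDatabaseColumn(X attribute);
    X convertToEntityAttribute(Y dbData);
}

// Example: store a Boolean as "Y"/"N" in the database.
class BooleanYNConverter implements AttributeConverter<Boolean, String> {
    public String convertToDatabaseColumn(Boolean attribute) {
        return Boolean.TRUE.equals(attribute) ? "Y" : "N";
    }
    public Boolean convertToEntityAttribute(String dbData) {
        return "Y".equals(dbData);
    }
}
```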

WebSocket

WebSocket has its own converters as well. Architecturally they are a bit different. In contrast with the converters shown above, WebSocket defines separate interfaces for each direction of the conversion, whereas the other specs just put two methods in the same type. WebSocket also defines separate interfaces for each of the several supported target types, whereas the other converters support either String or an Object/generic type parameter.

The two supported target types are String and ByteBuffer, with each having a variant where the converter doesn't provide the converted value via a return value, but writes it to a Writer instance that's passed into the converter method as an extra parameter.

Another thing that sets the WebSocket converters apart from the other Java EE converters is that instances have an init and destroy method and are guaranteed to be used by one thread at a time only.
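A simplified sketch of the two directions for the String target type. The interface names here are stand-ins: the real nested interfaces are javax.websocket.Encoder.Text and javax.websocket.Decoder.Text, and the init() and destroy() lifecycle methods are omitted for brevity:

```java
// Simplified stand-ins for javax.websocket.Encoder.Text and Decoder.Text:
// one interface per conversion direction.
interface TextEncoder<T> {
    String encode(T object);
}
interface TextDecoder<T> {
    T decode(String s);
    boolean willDecode(String s);
}

// Example pair converting between Long and its text representation.
class LongEncoder implements TextEncoder<Long> {
    public String encode(Long object) { return object.toString(); }
}
class LongDecoder implements TextDecoder<Long> {
    public Long decode(String s) { return Long.valueOf(s); }
    public boolean willDecode(String s) {
        try { Long.parseLong(s); return true; }
        catch (NumberFormatException e) { return false; }
    }
}
```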

PropertyEditor (Java SE)

Java SE actually has a universal converter API as well, namely the PropertyEditor. This API converts Objects from and to String, just as the JSF converter. As demonstrated before this type of converter is often used in Java EE code as well.

A PropertyEditor converter is almost always registered globally and inherently stateful. You first set a source value on an instance and then call another method to get the converted value. Remarkable for this converter type is that it contains lots of unrelated methods, including a method specific for painting in an AWT environment: paintValue(Graphics gfx, Rectangle box). This highly unfocused set of functionality makes the PropertyEditor a less than ideal converter for general usage, but in most cases the nonsense methods can simply be ignored and the ubiquitous availability in Java SE is of course a big plus.
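The stateful set-then-get usage can be illustrated with a small example; this one uses only Java SE, extending the convenience base class PropertyEditorSupport and ignoring the AWT related methods:

```java
import java.beans.PropertyEditorSupport;

// A PropertyEditor is inherently stateful: setAsText() stores the converted
// value on the instance, and getValue() retrieves it afterwards.
class IntegerPropertyEditor extends PropertyEditorSupport {
    @Override
    public void setAsText(String text) {
        setValue(Integer.valueOf(text));
    }

    @Override
    public String getAsText() {
        Object value = getValue();
        return value == null ? "" : value.toString();
    }
}
```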

Other

There are some specs that use a more implicit notion of conversion and could take advantage of a platform conversion API if there happened to be one. This includes remote EJB and JMS. Both are capable of transferring objects in binary form using what is essentially also a kind of conversion API: Java SE serialization. Finally, JAXB has a number of converters as well, but they are built in and only defined for a fixed number of types.

Conclusion

We've seen that there are quite a number of APIs available in Java EE as well as Java SE that deal with conversion. The APIs we looked at differ somewhat in capabilities, and use different terminology for what are essentially similar concepts. The platform as a whole would certainly benefit from having a single unified conversion API; this could eventually somewhat reduce the size of individual specs, make it easier to have a library of converters available, and would surely give the Java EE platform a more consistent feel.

An interesting aspect is the "Community driven improvements", which means it's basically up to the community what will be done exactly. In practice this mostly boils down to issues that have been entered into the JSF issue tracker. It's remarkable how many community-filed issues JSF has compared to several other Java EE specs; clearly JSF has always been a spec that's very much community driven. At ZEEF.com we're more than happy to take advantage of this opportunity and contribute whatever we can to JSF 2.3.

Taking a look at this existing issue tracker we see there are quite a lot of ideas indeed. So what should the community driven improvements focus on? Improving JSF's core strengths further, adding more features, incorporating ideas of other frameworks, clarifying/fixing edge cases, performance? All in all there's quite a lot that can be done, but there's as always only a limited amount of resources available so choices have to be made.

One thing that JSF has been working towards is pushing away functionality that became available in the larger Java EE platform, therefore positioning itself more as the default MVC framework in Java EE and less as an out of the box standalone MVC framework for Tomcat et al. Examples are ditching its own managed bean model, its own DI system, and its own expression language. Pushing away these concerns means more of the available resources can be spent on things that are truly unique to JSF.

Important key areas of JSF for which there are currently more than a few issues in the tracker are the following:

Components

Component iteration

State

AJAX

In this article I'll look at the issues related to components. The other key areas will be investigated in follow-up articles.

Components

While JSF is about more than just components, and it's certainly not idiomatic JSF to have a page consisting solely of components, arguably the component model is still one of JSF's most defining features. Historically components were curiously tedious to create in JSF, but in current versions creating a basic component is pretty straightforward.

The simplification efforts should however not stop here, as there's still more to be done. As shown in the above reference, there's e.g. still the required family override, which for most simple use cases doesn't make much sense to provide. This is captured by a number of existing issues in the tracker.

A more profound task is to increase the usage of annotations for components in order to make a more "component centric" programming model possible. This means that the programmer works more from the point of view of a component, and treats the component as a more "active thing" instead of something that's passively defined in XML files and assembled by the framework.

For this at least the component's attributes should be declared via annotations, making it no longer "required" to do a tedious registration of those in a taglib.xml file. Note that this registration is currently not technically required, but without it tools like a Facelet editor won't be able to do any autocompletion, so in practice people mostly define them anyway.

Besides simply mimicking the limited expressiveness for declaring attributes that's now available in taglib.xml files, some additional features would be really welcome. E.g. the ability to declare whether an attribute is required, its range of valid values and more advanced things like declaring that an attribute is an "output variable" (like the "var" attribute of a data table).
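The example itself is not reproduced here; the following is a hypothetical reconstruction. Both @Attribute and ComponentAttribute are proposed types from this article, not existing JSF APIs, and the component and field names are invented for illustration:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical annotation as proposed in the article; not part of the JSF spec.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface Attribute { }

// Hypothetical wrapper type that could transparently be backed by a deferred
// value expression.
class ComponentAttribute {
    private final Object value;
    ComponentAttribute(Object value) { this.value = value; }
    Object getValue() { return value; }
}

// Sketch of a component in the proposed "component centric" style: four
// instance variables, of which three are attributes marked with @Attribute.
class ExampleComponent {
    @Attribute
    ComponentAttribute value; // may be backed by a deferred value expression

    @Attribute
    String dots; // accepts literals or immediate expressions only

    @Attribute
    String var; // an "output variable" style attribute

    int renderCount; // plain internal state; not a component attribute
}
```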

In the above example there are 4 instance variables, of which 3 are component attributes and marked with @Attribute. These
last 3 could be recognized by tooling to perform auto-completion in e.g. tags associated with this component. Constraints on the attributes could be expressed via Bean Validation, which can then partially be processed by tooling as well.

The value attribute in the example above has ComponentAttribute as its type, which could be a relatively simple wrapper around a component's existing attributes collection (obtainable via getAttributes()). The reason this should not directly be a String is that it can now be transparently backed by a deferred value expression (a binding that is lazily resolved when its value is obtained). Types like ComponentAttribute shouldn't be required when the component designer only wants to accept literals or immediate expressions. We see this happening for the dots and var attributes.

Finally, the example does away with declaring an explicit name for the component. In a fully annotation centric workflow a component name (which is typically used to refer to it in XML files) doesn't have as much use. A default name (e.g. the fully qualified class name, which is what we always use in OmniFaces for components anyway) would probably be best.

Creating components is one thing, but the ease with which existing components can be customized is just as important, or perhaps even more important. With all the moving parts that components had in the past this was never really simple. With components themselves being simplified, customizing existing ones could be simplified as well, but here too there's more to be done. For instance, oftentimes a user only knows a component by its tag and sees this as the entry point to override something. Internally however there's still the component name, the component class, the renderer name and the renderer class. Any of these can be problematic, but particularly the renderer class can be difficult to obtain.

E.g. suppose the user did get as far as finding out that <h:outputText> is the tag for component javax.faces.component.html.HtmlOutputText. This however uses a renderer named javax.faces.Text as shown by the following code fragment:
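The fragment referred to boils down to the component coupling itself to the renderer name in its constructor. The following is paraphrased from the standard HtmlOutputText component, with a minimal stub in place of UIComponentBase so the sketch is self-contained:

```java
// Minimal stub standing in for javax.faces.component.UIComponentBase,
// just enough to show where the renderer type comes from.
class UIComponentStub {
    private String rendererType;
    void setRendererType(String rendererType) { this.rendererType = rendererType; }
    String getRendererType() { return rendererType; }
}

// Paraphrased from the standard HtmlOutputText component: the component type
// is "javax.faces.HtmlOutputText", but the constructor couples the component
// to a renderer named "javax.faces.Text".
class HtmlOutputText extends UIComponentStub {
    static final String COMPONENT_TYPE = "javax.faces.HtmlOutputText";

    HtmlOutputText() {
        setRendererType("javax.faces.Text");
    }
}
```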

How does the user find out which renderer is associated with javax.faces.Text? And why is the component name javax.faces.HtmlOutputText as opposed to its fully qualified class name javax.faces.component.html.HtmlOutputText? To make matters somewhat worse, when we want to override the renderer of a component but keep its existing tag, we also have to find out the render-kit-id. (It's a question whether the advantages that all these indirection names offer really outweigh the extra complexity users have to deal with.)

For creating components we can if we want ignore these things, but if we customize an existing component we often can't. Tooling may help us to discover those names, but in absence of such tools and/or to reduce our dependencies on them JSF could optionally just let us specify more visible things like the tag name instead.

Although strictly speaking not part of the component model itself, one other issue is the ease with which a set of components can be grouped together. There's the composite component for that, but this has as a side-effect that a new component is created which has the set of components as its children. This doesn't work for those situations where the group of components is to be put inside a parent component that does something directly based on its children (like h:panelGrid).

There's the Facelet tag for this, but it still has the somewhat old fashioned requirement of XML registrations. JSF could simplify this by giving a Facelet tag the same conveniences as were introduced for composite components. Another option might be the introduction of some kind of visit hint, via which things like a panel grid could be requested to look at the children of some component instead of that component itself. This could be handy to give composite components some of the power for which a Facelet tag is needed now.

Finally there's an assortment of other issues on the tracker that aim to simplify working with components or make the model more powerful. For instance, there's still some confusion about the encodeBegin(), encodeChildren() and encodeEnd() methods vs the newer encodeAll() in UIComponent. Also, dynamic manipulation of the component tree (fairly typical in various other frameworks that have a component or element tree) is still not entirely clear. As it appears, such modification is safe to do during the preRenderView event, but this fact is not immediately obvious and the current 2.2 spec doesn't mention it. Furthermore, even if it's clear to someone that manipulation has to happen during this event, the code to register for this event and handle it is still a bit tedious (see previous link).

Something annotation based may again be used to simplify matters, e.g.:
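No such annotation exists today; the following is a purely hypothetical illustration, with @ListenTo and its usage invented for the purpose:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical annotation: declaratively subscribe a component method to a
// lifecycle event, instead of manually registering a system event listener.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface ListenTo {
    String value();
}

class MyComponent {
    @ListenTo("preRenderView")
    public void manipulateTree() {
        // Dynamic component tree manipulation would be safe at this point.
    }
}
```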

Friday, June 27, 2014

The big news is that WebLogic 12.1.3 is now a mixed Java EE 6/EE 7 server, (optionally) supporting several Java EE 7 technologies such as JAX-RS 2.0.

Next to this there are a ton of smaller changes and fixes as well. One of those fixes concerns the standard authentication system of Java EE (JASPIC). As we saw some time back, the JASPIC implementation in WebLogic 12.1.1 and 12.1.2 wasn't entirely optimal (in WebLogic's defense, very few JASPIC implementations were at the time).

One particular problem with JASPIC is that its TCK can hardly be anything but incomplete: implementations that don't actually authenticate, or that miss the most basic functionality, have been certified in the past. For this reason I created a small set of tests that checks for the most basic capabilities. Note that these tests have since been contributed to Arun Gupta's Java EE 7 samples project, and have additionally been extended. Since those tests have Java EE 7 as a baseline requirement we unfortunately can't use them directly to test WebLogic 12.1.3.

For WebLogic 12.1.2 we saw the following results for the original Java EE 6 tests:

FAILURES:
testUserIdentityIsStateless(org.omnifaces.jaspictest.BasicAuthenticationStatelessIT)
java.lang.AssertionError: User principal was 'test', but it should be null here. The container seemed to have remembered it from the previous request.
at org.omnifaces.jaspictest.BasicAuthenticationStatelessIT.testUserIdentityIsStateless(BasicAuthenticationStatelessIT.java:137)
testPublicPageNotRememberLogin(org.omnifaces.jaspictest.BasicAuthenticationPublicIT)
java.lang.AssertionError: null
at org.omnifaces.jaspictest.BasicAuthenticationPublicIT.testPublicPageNotLoggedin(BasicAuthenticationPublicIT.java:44)
at org.omnifaces.jaspictest.BasicAuthenticationPublicIT.testPublicPageNotRememberLogin(BasicAuthenticationPublicIT.java:64)
testBasicSAMMethodsCalled(org.omnifaces.jaspictest.AuthModuleMethodInvocationIT)
java.lang.AssertionError: SAM methods called in wrong order
at org.omnifaces.jaspictest.AuthModuleMethodInvocationIT.testBasicSAMMethodsCalled(AuthModuleMethodInvocationIT.java:54)
testResponseWrapping(org.omnifaces.jaspictest.WrappingIT)
java.lang.AssertionError: Response wrapped by SAM did not arrive in Servlet.
at org.omnifaces.jaspictest.WrappingIT.testResponseWrapping(WrappingIT.java:53)
testRequestWrapping(org.omnifaces.jaspictest.WrappingIT)
java.lang.AssertionError: Request wrapped by SAM did not arrive in Servlet.
at org.omnifaces.jaspictest.WrappingIT.testRequestWrapping(WrappingIT.java:45)

While for WebLogic 12.1.3 only the following failure remains:

FAILURES:
testUserIdentityIsStateless(org.omnifaces.jaspictest.BasicAuthenticationStatelessIT)
java.lang.AssertionError: User principal was 'test', but it should be null here. The container seemed to have remembered it from the previous request.
at org.omnifaces.jaspictest.BasicAuthenticationStatelessIT.testUserIdentityIsStateless(BasicAuthenticationStatelessIT.java:137)

In particular, WebLogic 12.1.1 and 12.1.2 didn't support request/response wrapping (a feature that, curiously, not a single server supported at the time), called a lifecycle method at the wrong moment (secureResponse was called before the Servlet was invoked instead of after it), and remembered the username of a previously logged-in user (within the same session, but JASPIC is supposed to be stateless).
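To recap what the wrapping feature means: a SAM can put wrapped versions of the request and response back into the MessageInfo, and a compliant container must hand exactly those wrappers to the target Servlet. A minimal sketch of a SAM's validateRequest doing this (container-dependent code; the X-Wrapped header is made up, error handling and the authentication itself are omitted):

```java
import javax.security.auth.Subject;
import javax.security.auth.message.AuthException;
import javax.security.auth.message.AuthStatus;
import javax.security.auth.message.MessageInfo;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletRequestWrapper;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpServletResponseWrapper;

// Sketch of the request/response wrapping the tests check for: the SAM
// replaces the messages in the MessageInfo, and the container must pass
// these wrappers on to the Servlet.
public AuthStatus validateRequest(MessageInfo messageInfo, Subject clientSubject,
        Subject serviceSubject) throws AuthException {

    HttpServletRequest request = (HttpServletRequest) messageInfo.getRequestMessage();
    HttpServletResponse response = (HttpServletResponse) messageInfo.getResponseMessage();

    messageInfo.setRequestMessage(new HttpServletRequestWrapper(request) {
        @Override
        public String getHeader(String name) {
            // Lets the Servlet detect that the wrapper actually arrived
            return "X-Wrapped".equals(name) ? "true" : super.getHeader(name);
        }
    });
    messageInfo.setResponseMessage(new HttpServletResponseWrapper(response));

    return AuthStatus.SUCCESS;
}
```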

As of WebLogic 12.1.3 the lifecycle method is called at the correct moment and request/response wrapping is actually possible. This now brings the total number of servers where the request/response can be wrapped to 3 (GlassFish since 4.0 and JBoss since WildFly 8 can also do this).

It remains a curious thing that the JASPIC TCK seemingly catches so few issues, but slowly the implementations of JASPIC are getting better. The JASPIC improvements in WebLogic 12.1.3 may not have made the headlines, but it's another important step for Java EE authentication.