While working with different technologies as a JEE engineer, I have realized that some of the problems I face at a given moment are problems I have already dealt with before. However, since the solutions were found somewhere in the past, in applications I worked on in other projects, I could neither retrieve nor remember the actual solution to a particular problem. Therefore, I have started to record and collect various solutions, in order to save myself time and to avoid situations where I have to redo work I have already done before. This blog contains some of these records.

December 14, 2012

You have probably heard the word Arquillian, a child of the JBoss Community, "a revolutionary testing platform built on the JVM that substantially reduces the effort required to write and execute Java middleware integration and functional tests". I put my fingers on it about a year ago, but abandoned it. It was still young and somewhat clumsy; too many bugs, too many problems. But it is back, with a new and flashy look and its first brand new release! And back am I, curious, excited, willing to find out how much better it will make my life, the life of a technology addict.

At the moment the rumors of the Arquillian release reached my ears, I was busy developing a REST client for a REST resource that did not exist yet. The contracts were defined, the first few lines of code were already written, unit tests were in place, but the feeling that things may go wrong got stuck in my guts. You may guess the cure - it was Arquillian.

Let's see what I had on my table. Well, it was a SeatClient calling a SeatResource to find out whether a seat is available, by exchanging JSON messages. SeatClient was backed by Spring MVC version 3.1.1 and its RestTemplate to make my life easy. It all looked very easy.
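To make the setup concrete, here is a minimal sketch of what such a client could look like. The URL template, the resource path and the SeatAvailability DTO are my assumptions for illustration, not the original code:

```java
import org.springframework.web.client.RestTemplate;

// A minimal sketch of the client; the URL template and the
// SeatAvailability DTO are illustrative assumptions.
public class SeatClient {

    static final String SEAT_URL =
            "http://localhost:8080/seats/{seatNumber}/availability";

    private final RestTemplate restTemplate = new RestTemplate();

    public boolean isSeatAvailable(String seatNumber) {
        // RestTemplate expands the {seatNumber} template variable and
        // unmarshals the JSON response into a SeatAvailability instance
        SeatAvailability result =
                restTemplate.getForObject(SEAT_URL, SeatAvailability.class, seatNumber);
        return result != null && result.isAvailable();
    }

    // Simple DTO mirroring the JSON message exchanged with SeatResource
    public static class SeatAvailability {
        private boolean available;

        public boolean isAvailable() { return available; }
        public void setAvailable(boolean available) { this.available = available; }
    }
}
```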

The unit test makes me happy, since it is simple and green. But happy I am not. I do not know whether the injection of the resources works fine, nor do I know whether the domain object is correctly marshalled/unmarshalled into/from the JSON message, or whether the resource is called at all.

The first thing that comes to my mind is a mock of the REST resource. But I do not want it to be a simple plain mock; I want it to be a real mock, one that receives my real message, unmarshalls my real JSON message and sends me a real response. This is when Arquillian smiled.

Now that I have a real resource, I need a real server to run it in. A reference to the Arquillian documentation reveals the following list. It makes me more than satisfied, even impressed. It lets me choose between a number of known servers, letting me deploy my real resource on either an embedded, managed or remote instance. I modestly pick the embedded Tomcat from this festive table. It is more than enough.

Now comes the time to write a test, a real one. This test is more than a simple one. Not only will it test the client, it will also inject it into the test with the following Spring application context configuration file.
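A context configuration of that kind could look roughly like this; the bean name and the component-scan package are assumptions on my part:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xsi:schemaLocation="
           http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-3.1.xsd
           http://www.springframework.org/schema/context
           http://www.springframework.org/schema/context/spring-context-3.1.xsd">

    <!-- picks up SeatClient so it can be @Autowired into the test -->
    <context:component-scan base-package="com.example.seat"/>

    <bean id="restTemplate" class="org.springframework.web.client.RestTemplate"/>
</beans>
```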

The Content Based Router inspects the content of a message and routes it to another channel based on the content of the message. [1]

As you may already know, the role of a router, in general, is to decouple a message source from a destination of a message. As a result, you have a dedicated component in your system whose sole responsibility is to direct an incoming message into one of the outgoing channels based on the conditions encoded in that component. You get a single component with a single responsibility - a good separation of concerns.

Use Case

In order to show how both the Apache Camel and Spring Integration frameworks can be used to build a Content Based Router, let us define a use case. In our use case we have a message containing passenger information that we want to route to one of the channels based on certain criteria. The criterion is the age of a passenger: we want to process the message differently depending on whether the passenger is an infant, a child or an adult.

We can use any format of a message, but in this example I will use XML:

<passenger>
    <name>Adult Name</name>
    <age>28</age>
</passenger>

The message is rather simple, but it is enough for our example. We are just passing the name and age of a passenger.

Apache Camel

Now is the time to implement the above scenario using Apache Camel. Please note that Apache Camel offers you two ways to implement routing and mediation rules: either Fluent Builders or Spring XML Extensions. I will use Spring XML Extensions here to build the route.

<from uri="file:src/test/resources/message?noop=true"/> - the starting point of the Camel route, which starts the route with a given Endpoint. In our case, we use the File Component, which reads the message file from a specific location on the file system.

<choice><when><otherwise> - a JSTL-like switch statement that allows you to choose a route based on a number of alternatives. This, actually, is the implementation of the Content Based Router. Under every <when> statement we define a predicate using the XPath Language, which, when it evaluates to true, passes the message to the underlying Consumer. For example, the statement <xpath>/passenger/age &lt;= '1'</xpath> evaluates to true when the passenger's age is one year or less. As a result, this passenger is identified as an infant.

<transform><log> - the two consumers of the message. First, we transform the received XML message by extracting the passenger's name from the XML using an XPath expression, and second, we log the name of the passenger.
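Putting the pieces above together, the complete route could look roughly like this. The age thresholds for child and adult, and the log messages, are my assumptions:

```xml
<camelContext xmlns="http://camel.apache.org/schema/spring">
    <route>
        <from uri="file:src/test/resources/message?noop=true"/>
        <choice>
            <when>
                <xpath>/passenger/age &lt;= '1'</xpath>
                <transform>
                    <xpath>/passenger/name/text()</xpath>
                </transform>
                <log message="Infant: ${body}"/>
            </when>
            <when>
                <xpath>/passenger/age &lt; '18'</xpath>
                <transform>
                    <xpath>/passenger/name/text()</xpath>
                </transform>
                <log message="Child: ${body}"/>
            </when>
            <otherwise>
                <transform>
                    <xpath>/passenger/name/text()</xpath>
                </transform>
                <log message="Adult: ${body}"/>
            </otherwise>
        </choice>
    </route>
</camelContext>
```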

Example

For a better understanding of the Apache Camel example I have built an example project, which you can access by running the following command in the terminal (assuming you have SVN installed).

When I first tried to implement the above scenario using Spring Integration, it seemed logical to apply the XPath Router; however, it did not work out due to implementation specifics of the router itself. I tried to do the following:
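Roughly along these lines - a reconstructed sketch, not the original file; the channel name is illustrative:

```xml
<si-xml:xpath-router id="passengerRouter" input-channel="passengerChannel">
    <si-xml:xpath-expression expression="/passenger/age"/>
</si-xml:xpath-router>
```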

However, the current implementation does not support predicates. With the current implementation, the XPath expression is evaluated as a NODESET and converted to a list of strings, where every value of this list represents the name of a channel to which the message will be sent. In our case this would be just a list of integers from the <age> message element, which cannot be directly mapped to specific channels.

I scratched my head for a while, with no success, and addressed my issue to the Spring community. Below is what I was suggested to do.

As you can see, in order to implement our use case I had to use the Recipient List Router, which is a rather different EAI pattern, even though it falls under the same category of patterns. In addition, I had to use a Content Enricher to store in the header a temporary value that is used later to determine the route the message should be passed to.
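The resulting route declaration looked roughly like this - a reconstructed sketch in which the channel names, the header name and the selector expressions are assumptions:

```xml
<si:channel id="passengerChannel"/>
<si:channel id="infantChannel"/>
<si:channel id="childChannel"/>
<si:channel id="adultChannel"/>

<!-- Content Enricher: store the age in a header for later routing -->
<si-xml:xpath-header-enricher input-channel="passengerChannel"
                              output-channel="routingChannel">
    <si-xml:header name="age" xpath-expression="/passenger/age"/>
</si-xml:xpath-header-enricher>

<!-- Recipient List Router: the message goes to the channel whose
     selector expression evaluates to true -->
<si:recipient-list-router input-channel="routingChannel">
    <si:recipient channel="infantChannel"
                  selector-expression="headers.age &lt;= 1"/>
    <si:recipient channel="childChannel"
                  selector-expression="headers.age &gt; 1 and headers.age &lt; 18"/>
    <si:recipient channel="adultChannel"
                  selector-expression="headers.age &gt;= 18"/>
</si:recipient-list-router>
```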

You should read the above route declaration as follows:

<si:channel id="passengerChannel"/> - declaration of the input channel of the Content Based Router.

<si:channel id="infantChannel"/> and others - declaration of the output channels of the Content Based Router.

<si-xml:xpath-header-enricher> - the Content Enricher that stores the age value in the header.

<si:recipient-list-router> - a list of recipients. The message is sent to one of them when its selector expression evaluates to true.

<si-xml:xpath-transformer> - the same as in the Apache Camel case: the name of the passenger is extracted from the message using an XPath expression and placed into the logging channel.

<si:logging-channel-adapter> - used to log the received name.

Example

The same as in the case of Apache Camel, I have created a project for Spring Integration. Access it through SVN.

To execute the route I have created a JUnit test, so you can execute it by typing mvn test from the project root. Alternatively, run the unit test from your IDE.

Conclusion

As for me, I find the approach offered by the Apache Camel framework more straightforward and more intuitive. I was able to build the route rather quickly, relying only on the provided documentation.

Implementing the same use case using Spring Integration took me some time, and I had to rely on the Spring community (which was very responsive, by the way). Even though I wasn't able to implement the Content Based Router directly with <xpath-router>, the approach with <recipient-list-router> still seems less intuitive to me. This is mostly because I had to use a Content Enricher in order to store a temporary age variable in the header of the message. Using an XPath expression in the selector-expression would be more straightforward, e.g. selector-expression="/passenger/age le 1".

April 17, 2012

With regards to transaction management, apart from Spring and EJB, you may consider using Seam Persistence. This is applicable in the scenario where you do not use Spring, but rather Weld, and do not want to run your application inside an EJB container. As a result, you need to deal with POJOs and need to make your database operations transactional. However, manually coding transactional functionality, whether directly using UserTransaction or by means of aspects or interceptors, is not an option, simply because you are spoiled by the declarative approach introduced by other frameworks, which allows you to configure transactions at specific places in your application. In this case, as said above, consider Seam Persistence.

In addition to simply being an alternative, the Seam Persistence framework, as described on its official page, also solves certain problems recognized in, for example, Hibernate and EJB architectures, such as LazyInitializationException, manual management of EJB stateful session beans, or difficulties related to the propagation of the persistence context between stateful EJB components.

Just for background: the idea behind an optimistic transaction is not to block access to datastore entities for the duration of a transaction, where the transaction may involve several operations, but to assume that the data in the datastore will not be updated by another transaction for the duration of this transaction, until the actual commit/flush. This is a common scenario, for example, in a web application with some flow functionality, where a user has to enter data on several pages. The state of the underlying entity changes after every step, and you may design your application to persist the state of your entity on completion of every step or at the end of the whole flow.

The problem of Hibernate's LazyInitializationException is related to its "stateless" architecture, according to which a persistence context is scoped to an atomic transaction. As a result, when you access a java.util.Collection property of an entity that was previously passed from the service to the presentation layer, assuming that the transaction starts and ends on the invocation of a service operation, you get a LazyInitializationException. This happens because Hibernate wraps java.util.Collection objects with its own implementations, which, in turn, require a Hibernate session. FetchType.EAGER won't help you either.

To solve the above problem you can either use the Open Session In View Filter solution proposed by Hibernate, which requires you to define a Servlet Filter that starts and stops a transaction around request processing, or unwrap the Hibernate collection objects before you pass your entity to the presentation layer. The first approach is not always desirable, since it may break your design - you may want to explicitly define a boundary for your transactions to begin and end, which, most probably, will be the service layer of your application (if it exists). The second solution requires additional and unnecessary processing of the returned entity tree, which also creates additional overhead at run-time.

The above problem was recognized by EJB 3, which introduced the concept of an extended persistence context, available in the case of stateful session beans. However, as stated by Seam Persistence, it still has the problems mentioned above. Seam claims to solve these problems by providing a conversation scope, something in between the request and session scopes of your web application. Seam extends the persistence context management model introduced by EJB3 and provides a conversation-scoped extended persistence context, which solves, at least, the problem of the propagation of the persistence context between the components of your application, e.g. stateless session beans.

In the case of conversation scope, you may still need to manually define the boundary of a conversation by marking the beginning and end of a conversation. With the Context and Dependency Injection (CDI) API, you would use the begin() and end() methods of javax.enterprise.context.Conversation to achieve the above. With the Seam API you would use the @Begin and @End annotations. The marking of the boundary of a conversation in your application will, most probably, occur in one of the presentation layer classes, e.g. a controller or managed bean.
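With the CDI API, demarcating such a conversation in a controller could look like the following sketch; the BookingFlow bean and its method names are made up for illustration:

```java
import java.io.Serializable;

import javax.enterprise.context.Conversation;
import javax.enterprise.context.ConversationScoped;
import javax.inject.Inject;
import javax.inject.Named;

// Hypothetical multi-page flow controller; the conversation spans
// all steps between start() and finish().
@Named
@ConversationScoped
public class BookingFlow implements Serializable {

    @Inject
    private Conversation conversation;

    public void start() {
        if (conversation.isTransient()) {
            conversation.begin(); // promote to a long-running conversation
        }
    }

    public void finish() {
        if (!conversation.isTransient()) {
            conversation.end(); // back to a transient conversation
        }
    }
}
```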

Render the markup for a <script> element that renders the script Resource specified by the optional name and library attributes.

You may already know it, but in case you don't: all resources in JSF should be placed under the resources folder in the root of your web application, i.e. at the same level as the WEB-INF folder. As a result, every sub-folder of the resources folder will be considered by JSF2 as a library and can be referenced from the library attribute of a JSF component.
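For example, with a script placed at resources/js/app.js, the sub-folder js becomes the library and app.js the resource name. The file names here are, of course, illustrative:

```xml
<!-- refers to the file resources/js/app.js -->
<h:outputScript library="js" name="app.js"/>
```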

April 11, 2012

Every application is built to provide specific functionality to a target user. In your daily life you use Gmail for email, Picasa for photos, Facebook for socializing, Twitter for "spamming" millions of followers. These are self-contained applications built with a specific purpose. Even though there is no explicit integration between these applications, some level of integration already exists. You may share your Facebook post on Twitter or vice versa, you may post a blog and share it through Like and Tweet buttons on a page, you may take a photo with your phone through Instagram and make it available to other applications.

With business it is the same, but more heavyweight. You have a customer relationship management system used by sales, an accounting system used by finance, an order management system used by clients, a shipping system used by the delivery department, or a content management system used by the whole company. In a real enterprise these systems never work separately; they are always integrated. What kind of integration it is - that is another question. Whether it is a person who manually enters data from one system into another, or an automated solution based on an enterprise service bus, depends on the integration solution implemented in that particular company.

In most cases, when it comes to integration of applications, you deal with either data or functionality. An example of data could be a customer address, while that of functionality - get/change customer address. There is nothing super complex about storing that data in some kind of repository and making the above functionality available to an end user, by either implementing it or invoking an existing one. But this is true only when the above is done in the scope of a single system. As soon as that data and functionality need to be shared across several systems, you need to start thinking about integration solutions.

Here is an example. Customer data is used by several applications within an enterprise - accounting (to compute a sales tax), shipping (to label a shipment), billing (to send an invoice) and the customer relationship system (to modify an address). The first question is where to store the data: in each system's own repository or in some shared repository? If the data is owned by each system, how do you keep the data in sync? If the data is shared, what is its format? Is it the same across all systems? If not, what changes do you need to make to each system in order to make it compatible with the shared data format? Similar questions can be asked about the functionality to retrieve and modify the customer address, since it can be implemented either inside each system or as a dedicated business function invoked by the above systems.

Several solutions are known to the above problem:

Data Replication

Shared Business Functions

Service Oriented Architecture

Distributed Business Processes

Data Replication

The Data Replication solution is applicable in the scenario where each of the above systems has its own repository to store the customer address. Therefore, when a customer contacts the customer relationship department to change their address and the change is made in that system, the change is also propagated to the other systems. Here are several strategies you may implement:

use the data replication functionality built into the databases provided by some of the database vendors;

come up with a custom solution that exports data from one system into, for example, a file and imports it into another system;

use a messaging system to transport data as messages.

Please note that when data is stored in different repositories controlled by different systems, it will, most probably, have different formats. This happens because every department owning a system dictates the structure of the stored data. Therefore, if, at some point, you decide to store data in a single repository instead of having it scattered across several, you may end up being involved in political discussions, since now every department needs to agree on a common format for the data. As a result, integration becomes more complex.

Shared Business Functions

In the same way that many business systems store duplicate data, they also tend to duplicate functionality. Taking the example of the systems above, the customer relationship system would implement a Change Customer Address function, while all the other systems, namely the accounting, shipping and billing systems, would implement a Get Customer Address function. As you can see, there is already a duplication of functionality.

What you can do here, in order to remove this duplication, is externalize that functionality as a Shared Business Function: implement it once and make it available to other systems. This is very similar to the situation with Data Replication described above. In this case, however, you may also think about storing the data in a single repository, instead of having it stored in several.

Service Oriented Architecture (SOA)

SOA is a known concept, but let us look at it as an extension of the concept described in Shared Business Functions. You may look at a service as a business function that has a well-defined interface and is universally available. When your enterprise assembles a decent number of business functions shared among several systems, the management of these functions becomes an important task. In order to properly manage them you need to do two things. First, you make these services discoverable, by creating some kind of service directory that lists all services known to it. Second, you describe the interface of each service, so that an external system that sends a request to that service is able to negotiate a contract with it. Thus, discovery and negotiation are the two main elements that make up a Service Oriented Architecture.

Distributed Business Processes

Let us take a step further. So far, you have shared functions that support your business. You have made them universally available by describing the interface of each of your functions and by making them discoverable, thus transforming them into services. However, a business function such as Change Customer Address, alone, is probably not of much use to your business. The problem is that even a simple business process spans several systems. For example, to verify the status of an order, a customer representative may need to check an order management system running on a mainframe as well as access another system that manages orders placed over the Internet. Therefore, having well-defined atomic functions is not sufficient to execute your business processes. What is missing is some kind of coordination. What you need is a business process management component that coordinates the execution of business functions across several systems.

Summary

What I wanted to show in this article is that integration has several levels. Integrating two systems at the data level may be as simple as exporting data from one system and importing it into another, but when you need to share that data across several systems, either in a predefined data format or through some business function, you may end up creating a more complex solution. As a result, your shared business function becomes a discoverable service with a well-defined interface, and your business processes are described in some dedicated business process management component that coordinates the invocation of your services.

March 27, 2012

In this post I will give an example of generating a Java class model from an XSD using a custom adapter for a date element. By default, the date fields of the generated class model are of the XMLGregorianCalendar type. However, you do not want to write custom wrappers to convert java.util.Date to XMLGregorianCalendar; you want to operate directly with java.util.Date objects when dealing with the model classes.

One way to do it is to use the @XmlJavaTypeAdapter annotation on your model's date fields. But you do not want to manually modify the generated model classes, since, in your situation, the model is regenerated frequently. Thus, you need a way to tell your JAXB binding compiler (xjc) how exactly it should act upon date fields.

At the bottom of this post you will find a link to the working project. You may use it as you wish and adapt it to your problem.

This example is composed of the following components:

java 5

maven 2

jaxb 2.0

maven-jaxb2-plugin 0.8.1

Maven project

To generate the class model from the XSD I will use a Maven plug-in. All you need is to specify the following details in your POM:
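A minimal plug-in configuration could look like this sketch; the version matches the one listed above, the rest is an assumption:

```xml
<plugin>
    <groupId>org.jvnet.jaxb2.maven2</groupId>
    <artifactId>maven-jaxb2-plugin</artifactId>
    <version>0.8.1</version>
    <executions>
        <execution>
            <goals>
                <goal>generate</goal>
            </goals>
        </execution>
    </executions>
</plugin>
```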

As you can see, the date property is of type XMLGregorianCalendar and, as you remember, our goal is to have it of type java.util.Date.

Place your schema file under the src/main/resources directory. This is the default location where the maven-jaxb2-plugin looks for schemas. You can configure another location by using the schemaDirectory element of the plugin.

Create Adapter

The first thing we need to do is create an adapter class that will be responsible for the conversion of date values between XMLGregorianCalendar and java.util.Date. In order to plug in our custom implementation of the adapter, we need to comply with the contract defined by JAXB. Namely, we'll have to extend the javax.xml.bind.annotation.adapters.XmlAdapter class.

The XmlAdapter JavaDoc specifies the following:

Some Java types do not map naturally to an XML representation, for example HashMap or other non-JavaBean classes. Conversely, an XML representation may map to a Java type but an application may choose to access the XML representation using another Java type... In both cases, there is a mismatch between the bound type, used by an application to access XML content, and the value type, that is mapped to an XML representation.

This abstract class defines methods for adapting a bound type to a value type or vice versa...
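A minimal adapter along those lines might look as follows. The date format is an assumption; pick whatever lexical representation your schema actually uses:

```java
import java.text.SimpleDateFormat;
import java.util.Date;

import javax.xml.bind.annotation.adapters.XmlAdapter;

// Adapts between the XML lexical representation (the value type, String)
// and java.util.Date (the bound type used by the application).
public class DateAdapter extends XmlAdapter<String, Date> {

    // Illustrative format; align it with your schema's xs:dateTime values
    private static final String FORMAT = "yyyy-MM-dd'T'HH:mm:ss";

    @Override
    public Date unmarshal(String value) throws Exception {
        return new SimpleDateFormat(FORMAT).parse(value);
    }

    @Override
    public String marshal(Date value) throws Exception {
        return new SimpleDateFormat(FORMAT).format(value);
    }
}
```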

Create JAXB binding

In order to tell the JAXB binding compiler how it should deal with date types, I will customize the JAXB date binding by means of a custom binding declaration made in an external file passed to the binding compiler. I will create a binding file - call it date-binding.xsd.
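Its content could look roughly like this; the adapter's package name is illustrative:

```xml
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            xmlns:jaxb="http://java.sun.com/xml/ns/jaxb"
            xmlns:xjc="http://java.sun.com/xml/ns/jaxb/xjc"
            jaxb:version="2.0"
            jaxb:extensionBindingPrefixes="xjc">

    <xsd:annotation>
        <xsd:appinfo>
            <jaxb:globalBindings>
                <!-- map every xs:dateTime to java.util.Date via our adapter -->
                <xjc:javaType name="java.util.Date"
                              xmlType="xsd:dateTime"
                              adapter="com.example.jaxb.DateAdapter"/>
            </jaxb:globalBindings>
        </xsd:appinfo>
    </xsd:annotation>
</xsd:schema>
```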

Note: There are two types of custom binding declarations: inline and external file. In the case of an inline declaration, the custom binding declaration is made inside the schema itself.

Place the binding file under the same directory as your schema, namely src/main/resources.

A few things to note:

To declare the date binding I am using the <xjc:javaType> customization and not the <jaxb:javaType> customization. <xjc:javaType> allows you to specify an XmlAdapter-derived class, instead of a parse/print method pair.

I am using <jaxb:globalBindings> to apply the binding to all dateTime elements defined within the schema. An alternative would be to use the following construct:
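For example (a sketch; the schemaLocation, node expression and adapter package are placeholders):

```xml
<jaxb:bindings schemaLocation="http://example.com/schema/passenger.xsd"
               node="//xsd:element[@name='departureDate']">
    <xjc:javaType name="java.util.Date"
                  xmlType="xsd:dateTime"
                  adapter="com.example.jaxb.DateAdapter"/>
</jaxb:bindings>
```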

where schemaLocation is a URI reference to the remote schema and node is an XPath 1.0 expression that identifies the schema node within schemaLocation to which the given binding declaration is associated.

March 17, 2012

The difficulty of integration projects at the enterprise level is related not only to technological challenges, but also to social aspects within the organisation. Conway's law suggests that "the interface structure of a software system will reflect the social structure of the organization(s) that produced it." This law is based on the reasoning that in order for two systems to interact correctly, the designers of these systems should communicate with each other.

For example, if a company consisting of three units e1, e2 and e3 is given an assignment to develop a product X and this assignment is handed over to these three units, then the result of their work will be three subsystems - s1, s2 and s3. More than that, the interfaces between these subsystems will reflect the quality and nature of the real-world interpersonal communication between the representatives of these units.

Conway's law was also rephrased by Harrison: "If the parts of an organization (departments, teams, etc.) do not reflect the essential parts of a product, or if the interactions between the parts of an organisation do not reflect the interactions between the parts of a product, then the product is in trouble. Therefore, make sure that the organization is compatible with the architecture of the product."

In other words, in order for an integration project to be successful, communication should be established not only between the systems, but also between the business units of the organisation for which the integration project is executed. This may require a significant shift in the organisation's politics, which will make you deal with more than just technological obstacles.

March 11, 2012

When working with rhc you communicate with OpenShift from your computer using SSH, which utilizes the security keys located in your $HOME/.ssh directory. When you generate new keys using, for example, ssh-keygen -t rsa, you get id_rsa and id_rsa.pub files in your $HOME/.ssh folder. You may decide to give different names to your keys, e.g. myopenshift_rsa and myopenshift_rsa.pub, just to be able to distinguish them from other keys you may already have. In this case you may also want to provide a separate SSH configuration for your OpenShift application in the $HOME/.ssh/config file, which will look as follows:
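For instance (the host pattern below assumes the standard OpenShift domain of that time; adjust it to your application's actual host):

```
Host *.rhcloud.com
    IdentityFile ~/.ssh/myopenshift_rsa
```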

November 03, 2011

Just to give an alternative to Load Time Weaving with AspectJ, which I explained in my two previous posts (LTW Abstract Aspect, LTW Concrete Aspect), I will describe here how you can achieve the same with Compile Time Weaving (CTW).

In the case of LTW you had to configure the AspectJ weaver in the aop.xml file and make it available on the class path. In addition to that, you had to enable the weaving agent by providing the -javaagent:<pathto>/aspectjweaver.jar option to your JVM. For CTW you may skip both steps and invoke the AspectJ compiler, which takes class files as input and produces woven class files as output. Afterwards, on execution of your code, the woven output class is loaded into the JVM as a normal Java class. The difference with LTW is that in the case of LTW the weaving is deferred until the point when a class loader loads a class file and defines the class to the JVM.

Assuming you are using Maven, in order to trigger the AspectJ weaving process you will have to use the aspectj-maven-plugin and configure it appropriately in your pom.xml. This is how it will look:

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>aspectj-maven-plugin</artifactId>
    <version>1.4</version>
    <configuration>
        <complianceLevel>1.6</complianceLevel>
    </configuration>
    <executions>
        <execution>
            <goals>
                <goal>compile</goal>
                <goal>test-compile</goal>
            </goals>
        </execution>
    </executions>
</plugin>

If you develop your aspects using the annotation-based development style (the way it was done in the two previous posts mentioned before), then you have to specify an appropriate Java compliance level using the <complianceLevel> tag.

In the <execution> tag you may also specify the Maven build phases during which the AspectJ weaving of the classes should be triggered. With the <goal>compile</goal> configuration the AspectJ weaver will weave all your main classes, while with the <goal>test-compile</goal> configuration it will weave your test classes.

That's actually it. I am not going to explain the details of an actual aspect declaration, since I have already done that in the referenced posts; you may just have a look. Otherwise, you may directly look into the working example, which you may access here: Logging Aspect Implementation using Compile Time Weaving.

November 02, 2011

In my previous post I described the scenario of using an abstract aspect with AspectJ Load Time Weaving. In this post I will mention the alternative to that approach, namely the declaration of a concrete aspect. You may compare the two approaches and pick the one that better suits your needs.

The same as in the previous scenario with the abstract aspect you do the following steps:

Create @Aspect annotated Java class;

On your aspect class declare a pointcut method;

On your aspect class declare the desired advices;

In the weaver configuration file declare the concrete aspect.

Create @Aspect annotated Java class

A straightforward Java class declaration with the @Aspect annotation:

public @Aspect class LoggingAspect

On your aspect class declare a pointcut method

The difference from the abstract aspect declaration is that you do not declare a simple abstract pointcut, but rather a normal pointcut. By doing that you can use the capabilities offered by AspectJ, such as passing the @Loggable annotation to the pointcut method, which later can be passed to the executing advices of your aspect.

You can also declare a static pointcut and perform custom conditional logic in your pointcut method in order to determine whether your aspect should be woven into the join point identified by the pointcut. A static pointcut method should be public, static and return boolean.

In the case of the advices there is also a slight change compared to the abstract aspect declaration. Namely, the value of the @Before advice annotation should reflect the fact that your pointcut method accepts a Loggable parameter, i.e. @Before(value="logging(loggable)"):
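Put together, the concrete aspect could look like the following sketch; the @Loggable annotation definition and the log output are assumptions based on the earlier posts:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.aspectj.lang.annotation.Pointcut;

// Marker annotation picked up by the aspect (assumed from the earlier posts)
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Loggable {
}

@Aspect
public class LoggingAspect {

    // Concrete pointcut: matches methods annotated with @Loggable and
    // binds the annotation instance so the advices can use it
    @Pointcut("execution(@Loggable * *(..)) && @annotation(loggable)")
    public void logging(Loggable loggable) {
    }

    @Before(value = "logging(loggable)", argNames = "loggable")
    public void logBefore(JoinPoint joinPoint, Loggable loggable) {
        System.out.println("Entering: " + joinPoint.getSignature());
    }
}
```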

The same as in the case of the abstract aspect, in order to use LTW you need to enable it in your application. Do that by configuring the Java agent, providing the -javaagent:<pathto>/aspectjweaver.jar option to your JVM.