Introduction

The software industry is moving toward continuous integration and test-driven development. This creates the need to test every piece of code an organization produces, and to run those tests as often as possible. At the same time, applications are getting much more complex, relying on containers to provide their collaborator objects and to take care of their life cycle. Unit testing such managed objects becomes very hard, and this demotivates even the toughest developers from writing and maintaining good unit tests. No matter what you call these special classes – services, session beans, data access objects, etc. – the problem to solve is how to get hold of the infrastructure and the various collaborators that are usually provided by the container.

Of course, you can always create a complicated framework for running complex scenarios that test your service code. It is good to have such integration tests run every night. But it would be great if, during development (virtually on each save), you could test your code in as much isolation as possible – that is, write tests closer to the unit test paradigm.

In this series of blog posts I will show you how to use the test infrastructure provided by the Spring Framework, Embedded GlassFish and Arquillian to easily test complex managed objects like services, EJBs, DAOs, etc.

Let’s first start with our…

Use case

Suppose you want to build a football statistics application that keeps track of all sorts of things about the matches played on our planet. One of the most important parts of the domain model is the team: you want to know its name, its city, the name of its stadium and, of course, the country where it competes. To keep everything as simple as possible, let's stop the domain model right here. As a first step, we would like to provide a service that adds a new team to our database and retrieves all the teams from a particular country.

I assume you have basic knowledge of JPA, as it is the technology I will use for describing and persisting the domain model. I also hope you know Maven, because I will use it to set up, build and test our services from the command line. You can download the latest version of Maven from here.

I will use Eclipse 3.6 as the development environment throughout the series. It can be downloaded from here. It doesn't matter which of the packages you choose – the example is simple enough to run even with the most basic one. You will also need the m2eclipse plugin.

Setting things up

To start, you need a project setup. You can either create the directory structure manually or use one of Maven's archetypes to do it for you. Let's go with the second approach. Just run from the command line:
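The exact invocation is a matter of taste; a typical quickstart command, assuming the com.foo package used later in the post and a made-up artifact id, would be:

```shell
mvn archetype:generate -DgroupId=com.foo \
    -DartifactId=football-stats \
    -DarchetypeArtifactId=maven-archetype-quickstart \
    -DinteractiveMode=false
```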

This will create a default Maven directory structure with a pom.xml inside. The latter has to be tweaked a little. First, change the compiler version to 1.6 (by default Maven compiles for 1.4). As we are going to write JUnit 4 tests, we'll change the JUnit version in the pom.xml as well:
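The two tweaks might look roughly like this (the versions are illustrative for the era of the post):

```xml
<!-- compile for Java 1.6 instead of Maven's 1.4 default -->
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <configuration>
        <source>1.6</source>
        <target>1.6</target>
      </configuration>
    </plugin>
  </plugins>
</build>

<!-- replace the generated JUnit 3.x dependency with JUnit 4 -->
<dependency>
  <groupId>junit</groupId>
  <artifactId>junit</artifactId>
  <version>4.8.2</version>
  <scope>test</scope>
</dependency>
```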

Now it’s time to import our project in Eclipse and start adding the source code.

First, let's create our domain class – Team. Under src/main create a directory called java and then create our sample package (com.foo). After that, create the Team class with the following attributes:
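A sketch of such an entity, consistent with the description that follows (the named-query name and the exact JPA details here are my assumptions):

```java
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.NamedQuery;

// An annotated POJO; the query name "Team.findByCountry" is illustrative.
@Entity
@NamedQuery(name = "Team.findByCountry",
            query = "SELECT t FROM Team t WHERE t.country = :country")
public class Team {

    @Id
    @GeneratedValue
    private Long id;

    private String name;
    private String city;
    private String stadium;
    private String country;

    // getters and setters omitted, as in the original post
}
```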

As you can see, it is an annotated POJO that will result in a DB table called TEAM, with one column for each attribute plus a primary key column called ID whose values are generated automatically. We have also defined one named query, which will be used to return all the teams in a championship. I have omitted the getters and setters, which you can leave to your IDE to create for you.

Of course, in order for this to compile you will need to add the following dependency to your pom.xml:
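One commonly used option at the time was Hibernate's repackaging of the JPA 2.0 API (the choice of artifact is my assumption; any JPA API jar would do for compilation):

```xml
<dependency>
  <groupId>org.hibernate.javax.persistence</groupId>
  <artifactId>hibernate-jpa-2.0-api</artifactId>
  <version>1.0.0.Final</version>
</dependency>
```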

Creating the Spring service

The code above will be exactly the same in this and the next two posts. However, when it comes to the reusable service, Spring and pure Java EE have slightly different approaches. What Java EE calls a session EJB is known as a component in the Spring Framework. Like a session bean, a component is annotated with one of the available annotations, but it is not transactional by default.

In Spring, if you want to create a reusable service, the annotation you will usually use for decorating your component is @Service. To get hold of the Spring Framework libraries needed for your service code to compile, add this to your pom.xml:
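A typical set of dependencies for this kind of service (the version is illustrative for the Spring 3.0 line current at the time):

```xml
<dependency>
  <groupId>org.springframework</groupId>
  <artifactId>spring-context</artifactId>
  <version>3.0.5.RELEASE</version>
</dependency>
<dependency>
  <groupId>org.springframework</groupId>
  <artifactId>spring-orm</artifactId>
  <version>3.0.5.RELEASE</version>
</dependency>
```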

First we tell the Spring container that we want the class to be treated as a service and thus to be injectable into other code managed by the container. This is achieved through the @Service annotation. You can see that our service does not necessarily implement an interface; however, it is good practice to work against interfaces rather than concrete implementations. As the service performs basic data access, it needs a reference to the JPA entity manager. We expect the Spring container to manage it for us and inject it into our service. This is declared in the standard Java EE way – through the @PersistenceContext annotation, which is configured to look up the barPU persistence unit from persistence.xml.
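Putting this description together, the service might look roughly as follows (a sketch; the class, method and query names are my assumptions, only the barPU unit name comes from the post):

```java
import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Hypothetical service class illustrating the annotations discussed above.
@Service
public class TeamService {

    // injected by the container, looked up from persistence.xml
    @PersistenceContext(unitName = "barPU")
    private EntityManager entityManager;

    @Transactional
    public void addTeam(Team team) {
        entityManager.persist(team);
    }

    public List<Team> getTeams(String country) {
        return entityManager
                .createNamedQuery("Team.findByCountry", Team.class)
                .setParameter("country", country)
                .getResultList();
    }
}
```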

Creating a team should be done inside a transaction. As Spring beans are not transactional by default, the method is decorated with @Transactional. It is good practice to make the getter method transactional as well (even with a read-only transaction), but I have skipped that in this example. To run the method in a transactional context, the Spring Framework applies AOP techniques. If our team service implemented an interface, Spring would use the dynamic proxy mechanism from the JDK. Here, however, we need the Code Generation Library (a.k.a. cglib) for the proxying to work, so we add it to our pom.xml's dependencies:
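A typical cglib dependency of that era (the version is illustrative):

```xml
<dependency>
  <groupId>cglib</groupId>
  <artifactId>cglib</artifactId>
  <version>2.2</version>
</dependency>
```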

The rest of the service code is purely calling the entity manager interface to persist or query our objects to/from the database.

Wiring things up

Now we have to make sure that everything we have created so far is glued together. This is ensured by two basic artifacts:

JPA persistence context configuration

Spring container configuration

The JPA persistence context is configured in persistence.xml. Its default location is under the META-INF directory of the jar we create, which in Maven's directory layout means src/main/resources/META-INF. Here is our persistence.xml:
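A minimal persistence.xml matching this description (the barPU name comes from the service annotation; the rest of the boilerplate is a sketch):

```xml
<persistence xmlns="http://java.sun.com/xml/ns/persistence"
             version="2.0">
  <persistence-unit name="barPU" transaction-type="RESOURCE_LOCAL">
    <class>com.foo.Team</class>
  </persistence-unit>
</persistence>
```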

The only things we declare here are the persistence unit name and the list of entities managed by this persistence unit. Notice that the persistence unit name maps exactly to the name declared in the entity manager annotation in our team service. You may also have noticed that we have not declared any database connectivity settings here – we leave that to Spring's application context configuration. And here it comes:
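A sketch of such an application context, assuming Hibernate as the JPA provider and placeholder names of my choosing (the bean names match the description that follows; schema locations are omitted for brevity):

```xml
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:tx="http://www.springframework.org/schema/tx">

  <context:annotation-config/>
  <context:component-scan base-package="com.foo"/>
  <context:property-placeholder location="/META-INF/spring/jdbc.properties"/>

  <bean id="dataSource"
        class="org.springframework.jdbc.datasource.DriverManagerDataSource">
    <property name="driverClassName" value="${jdbc.driver}"/>
    <property name="url" value="${jdbc.url}"/>
    <property name="username" value="${jdbc.username}"/>
    <property name="password" value="${jdbc.password}"/>
  </bean>

  <bean id="jpaVendorAdapter"
        class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"/>

  <bean id="entityManagerFactory"
        class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
    <property name="dataSource" ref="dataSource"/>
    <property name="jpaVendorAdapter" ref="jpaVendorAdapter"/>
  </bean>

  <bean id="transactionManager"
        class="org.springframework.orm.jpa.JpaTransactionManager">
    <property name="entityManagerFactory" ref="entityManagerFactory"/>
  </bean>

  <tx:annotation-driven transaction-manager="transactionManager"/>
</beans>
```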

Here is a brief description of the above instructions to the spring container:

<context:annotation-config/> enables the injection of beans into other beans just by annotating them with @Autowired. We'll use this later in our unit test

<context:component-scan base-package="com.foo"/> enables the discovery of all the beans under the com.foo package that are annotated with @Component, @Service, @Repository or any other such annotation

<context:property-placeholder location="/META-INF/spring/jdbc.properties"/> tells the container that this properties file contains substitution values for placeholders defined in this application context (and not only there)

The dataSource bean defines a Spring Framework data source helper. It is initialized with the DB driver, URL, username and password, whose values are taken from the properties file declared in the previous entry

The jpaVendorAdapter bean contains vendor-specific JPA settings

The entityManagerFactory bean is another Spring wrapper, this time around the JPA entity manager factory

The transactionManager bean declares the Spring implementation used by the container to manage transactions whenever they are needed

As you can see, these database connectivity settings do not fit a production setup (a non-persistent database with the default user and password). That is because the above configuration will only be used for our tests, which is why the properties file is placed under the src/test/resources directory of our project.
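For illustration, a test-only properties file along these lines – an in-memory HSQLDB and the key names are my assumptions – might contain:

```properties
# src/test/resources/META-INF/spring/jdbc.properties -- test-only settings
jdbc.driver=org.hsqldb.jdbcDriver
jdbc.url=jdbc:hsqldb:mem:footballdb
jdbc.username=sa
jdbc.password=
```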

Writing the test

Finally we have reached our goal: writing the test. I guess the numerous TDD practitioners and evangelists would place this section before all the rest, but I decided to leave writing the test for dessert.

It will be placed in the com.foo package under the src/test/java project directory. Here is the code:
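A sketch of such a test, assuming hypothetical TeamService methods and Team setters (all names besides the Spring annotations are illustrative):

```java
import static org.junit.Assert.assertEquals;

import java.util.List;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

// JUnit 4 test driven by the Spring test context framework.
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("/META-INF/spring/applicationContext.xml")
public class TeamServiceTest {

    // the class under test, injected by the Spring container
    @Autowired
    private TeamService teamService;

    @Test
    public void addedTeamIsFoundByCountry() {
        Team team = new Team();
        team.setName("FC Foo");
        team.setCountry("Bulgaria");
        teamService.addTeam(team);

        List<Team> teams = teamService.getTeams("Bulgaria");
        assertEquals(1, teams.size());
    }
}
```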

It is a pure JUnit 4 unit test with some Spring framework spice in it:

JUnit will run this using SpringJUnit4ClassRunner

The Spring test context framework is told that the Spring application context is located under /META-INF/spring/applicationContext.xml in the application’s classpath

The class under test (TeamService) is injected by the Spring container

With the help of these annotations we tell the Spring Framework to load the application context, initialize all the beans we need and inject them into our test. Thus we can proceed directly to testing our team service without bothering with stubbing or mocking the entity manager, which is not the most pleasant task.

The above class is actually more an integration test than a pure unit test. It takes some time to load the Spring container, but this is done only once per test suite, so if you create several such test cases with numerous tests each, you will pay the penalty just once.

The test should work fine whether you run it from Eclipse or from the command line with Maven.

Conclusion

In this first part of the series we created, step by step, an entity and a very simple data access object using Spring and JPA. Finally we wrote a lightweight integration test with the help of the Spring test context framework. It very much resembled a unit test, as all the work of establishing the test environment and creating the collaborators of the class under test was left to the Spring container.

In the next part of the series you will see how to do the same thing but with Java EE session beans and the embedded Glassfish server.

Resources

So, this year's Java2Days conference is over. The Bulgarian Java community got its own JavaOne (on a much smaller scale, of course). We deserve it. The audience is still too shy to participate actively in the presentations, some of the presenters were even shier and looked like students in front of a crowd of professors, and the organizers made their small mistakes too. But on the whole, my opinion is that the content mattered more than all the small bugs!

Day two started very promisingly – a lot of coding and very few slides. Basically, the people who talked in the abstract yesterday were inside their IDEs today. Arun Gupta gave one of his miles-to-go-style sessions. I must confess I was very happy when I saw his name on the list of presenters last week, and this morning everyone watching had the same feeling. Basically everything that was merely presented and slightly touched on yesterday was put into play by Arun. We saw CDI, Servlets, EJBs, RESTful web services, NetBeans… and maybe I'm missing something. No ivory tower, no indecipherable slides. Just NetBeans, GlassFish and the developer! I hope more and more people now think that Java, and Java EE in particular, is much better than the various strange combinations like RoR, PHP or some other awkward framework that promises developer heaven.

And the next session I attended could be an answer to those Java haters who say the language is too verbose. Yes, they are right, but they don't quite realize that the Java language is not the only thing that makes up the Java stack. We have the libraries and we have the Java platform (or, more precisely, the JVM). And if you put languages like Scala on top, everything becomes much prettier. Scala is a perfect hybrid of a statically typed, object-oriented and functional language. It has most of the features that the dynamic languages have, while at the same time being type safe, with a compiler that guarantees it. The presenter, Vassil Dichev, proved that he is not only interested in Scala but has very broad knowledge of what is going on in the whole industry. Probably due to the lack of time, I felt that the pace of the presentation was quicker than people could follow (especially when describing the language features). After the session I talked to Vasko about the Scala enthusiasm in Sofia. He agreed with me that there are not too many fans of the language and nearly zero developers using it in production. Maybe a Scala user group would be a good way to bootstrap everything?

Next I went to building lightweight SOA applications with Spring. It was actually all about AOP and cross-cutting concerns, and also a practical sequel to yesterday's Spring Integration introduction. Oleg showed two of the areas where the Spring platform rocks and finally, with nearly no effort, sent a live Twitter message!

The next session I attended again made very sparse use of PowerPoint. Sasa Slavnic, a Serbian developer, gave us a brief insight into JavaFX development. JavaFX Script looked great, but unfortunately Oracle decided to discontinue its support; the reasons are missing tool support and hard times when debugging. To illustrate, Sasa showed us how the same thing would look in Swing, and I must admit it wasn't the shortest code I've read in my life. The coolest thing about JavaFX is that the script is used just for declaring the UI, leaving all the business logic and persistence handling to Java. Sasa's advice was to migrate as much as we can to Java code until it is known what the replacement for JavaFX Script will be. The good thing is that the community is not sleeping: there was an immediate fork of the JavaFX Script project, and it even got a name – project Visage. Let's see how it will do.

The afternoon sessions started for me with Vlado Pavlov and Dimitar Giormov's talk on JRuby issues and the pill for them – SAP's Eclipse Memory Analyzer tool. The worst thing about debugging OutOfMemoryErrors in JRuby applications is that the stack traces in the heap dump generated by JRuby are too hard to understand. The situation becomes even more complicated when the out-of-memory is caused by a call from JRuby to a Java method. Of course, the Memory Analyzer steps in here with its predefined and easily extensible rules for analyzing traces. It was very impressive (again). The best thing is that the tool is free, and hopefully it will soon support analyzing not only heap dumps but thread dumps as well – with the help of the community.

The room was too small for the next session. Andrew Lombardi, the Wicket guy, showed us what is new in the not-yet-specified HTML 5. In an even more entertaining fashion than yesterday, the keen developers got their first impression (or, for some, not their first) of the cool features of the markup language. Browser support was one of the things that cheered up the crowd.

Peter Peshev from SAP was the next presenter I watched. He showed his JavaOne demo on how a big, bloated Java EE application can and should be migrated to OSGi. We all viewed the prerequisites, potential migration paths and possible errors. Peter made a lot of jokes, which I do not believe went down as well with the JavaOne audience two weeks ago. But we Bulgarians are famous for our black and sometimes rude humor. ;-)

In the final session Reza Rahman presented the Java Community Process. This is basically the way Java specifications (or JSRs) are created. It is a really open process, true Java democracy in a sense: everyone, even individual developers, is invited to participate at least as an observer. I was mostly interested in Apache's boycott of the process over the strange 'field of use' issue that prevents them from licensing their Java SE 6 implementation, and in whether this boycott could hinder the release of the Java SE 7 spec (right now all the development goes under the title JDK 7; nobody talks about Java 7 due to the lack of a JSR). Fortunately I was reassured by the people on board (all of them current JSR Expert Group members) that this is not an issue and nobody can veto the release of a specification with a single No vote.

I will finish the Java2Days story with my final impression: the content was great, though the organization was not up to last year's standard. The small room on the first floor lacked microphones, there was no support for poor Andrew Lombardi when he tried to fix the beamer's position, and the pretty girls from the fashion agencies that were here last year were missing ;-)

My biggest not-too-hard-skill impression was that the Java EE people are quite self-confident after the release of version 6 of the spec. They mention Spring very often in their talks (and not in the best of spirits). At the same time, the minds of the Spring people seemed high in the sky (probably because of the clouds that appeared there after the VMware acquisition of SpringSource). I didn't hear the passion Rod Johnson had when explaining that Java EE was stopping innovation. I hope they will surprise us soon. Or maybe they are too busy following VMware's strategy?

Anyway, the Java2Days conference and all the speakers proved that visiting such events is both helpful and entertaining. So, see you next month at Devoxx. :-)

Last year I wrote (in Bulgarian) that the Java2Days conference was the best conference I had ever been to. Well, at that time it was the only conference I had ever been to. This year the situation is nearly the same – I have not been to any other conference yet (hopefully this will change for good in a month) – but now I can say that I am impressed. The content this edition has been great so far.

For those of my (ten) readers who were not there: there are three tracks running in parallel. I must confess that for some of the slots I had a hard time deciding where to go, but sometimes life is harsh. :-)

So, the conference started with a half-hour delay and an improvised keynote from the Oracle sales director for Bulgaria, Serbia and Montenegro. The only thing I liked about the guy was that he spoke half Bulgarian, half Serbian, which is always great fun for me (I pretend that I speak Serbian too :-)).

The first real session I attended was Alexis Moussine-Pouchkine's “Why and how Java EE became popular”. I knew Alexis from the last Java2Days, from his blog and of course from the GlassFish podcast. His presentation was perfect for a developer interested in what is new in Java EE 6. However, my colleague and friend Vlado Pavlov asked me: “did he explain why and how?” And I realized that even though the presentation was great for me and my colleagues, it did not follow its title too closely. Anyway, I think the Bulgarian community needed this introduction for many reasons, and it was fitting that the conference, or at least one of its tracks, started with it.

Next I half-visited the Spring Summer session (didn't get much of it, as I tuned in right before its end) and then visited Spring Integration. I realized why I like Spring so much – it integrates very well with nearly everything. It's so easy. If you wish, you can keep your Java code independent of any technology- or product-specific packages and classes and use XML; or you can code everything yourself. With the help of the tooling (and even without it), both approaches are a piece of cake. I still don't understand the Java EE evangelists who keep polluting the web with claims that Spring is all about XML. Disclaimer: Spring Integration does not have anything to do with integrating Spring with anything – it helps you integrate your code with other systems through different channels.

The next session was the most impressive of the day. Not that the presenter was perfect as such – it was the content: Pseudo Functional Domain Specific Languages in Java. The guy (coming from Macedonia, BTW) presented his work (still in progress) on a Java library that uses static imports, generics, dynamic proxies and other language and JDK features to make writing Java feel like writing functional code (think of Lisp and lambdas). This not only saves a lot of boilerplate, but also makes the code look more elegant. The presenter (Nikolce Mihajlovski is his name) was quite shy and said that he is not yet ready to publish his work, though he will do it soon.

After the lunch break I went to the Apache Wicket session, presented by Andrew Lombardi, who was here last year as well. He is a great presenter, a very entertaining speaker and even a developer (a Wicket contributor). He had a very interesting approach to demonstrating Java code – everything was recorded in a video clip: creating the classes, writing their content and the tests, running them and displaying the result in the browser. However, he was very ill-disposed to JSF. Before JSF 2 I would have agreed with him, but now it is quite a bit better. When you want to implement a JSF component, you can do it even more easily than in Wicket (at least you don't have to create a ton of anonymous inner classes). And from the designer's perspective, the xhtml code and all the special tags are not that different from HTML tags – Andrew himself mentioned that HTML should be written by designers. But anyway, it was a great presentation which I hope persuaded a lot of people to come to the bright side of the [web app development] world :-). And he also pointed out Wicket's cons…

In the next slot my former colleague Vassil Popovski talked about developing RESTful web services in Java. This is a very important topic, because in my opinion the Java community should leave the WS-* bloat to Microsoft and the other mastodons who use their SOAP implementations as an apology for lacking integration with the REST of the world. My opinion is that REST is a very simple topic that only sounds complex. As Vasko mentioned, the specification reads like an article in a popular blog rather than an ivory tower paper.

The last session on the agenda (and hopefully not the last of the day) that I attended was Reza Rahman's Testing Java EE Applications. Testing and continuous integration are among my topics of interest, so I was wondering how they are done in the Java EE world, where the dependency on containers and the infrastructure they provide seems huge. I was even more intrigued because I am just now reading a book on Java EE 6, and alongside all the simplifications there (there is even an EJB container that can be embedded in the client JVM), I was very disappointed to find that it is not possible (at least for me) to inject EJBs into JUnit tests – even though you can inject EJBs into virtually every class managed by a Java EE container, and my experience with Spring has taught me that you can inject everything everywhere. Well, Reza showed us that Java EE can do it too, just not through the specification: JBoss's Arquillian library comes to the rescue here. And not only here, but in all the cases where you want to test Java EE components (Servlets, JSF, EJBs, JPA, etc.).

Finally, the toughest listeners watched Arun Gupta's session on bringing Java EE 6 to the cloud. He again scratched the surface of what is new in Java EE 6. He presented four different cloud solutions (Amazon among them – I did not keep notes, so I forgot the rest) and showed how you can install a database and several app server instances (Oracle GlassFish, of course) on each of them. He described the monitoring, management and deployment capabilities and finished with the pricing. One of the offerings from Arun's presentation had a very appealing free developer edition, but I forgot which one it was. Follow this blog – I promise to publish it tomorrow. Finally, Arun promised that Java EE 7 will focus on making the Java enterprise platform more suitable for cloud computing. We'll see. According to the presentation, we'll have to wait until 2012 to find out.

So, this was the first day. Tomorrow I'll surely visit Arun's tools show (no ppt, just NetBeans and GlassFish), Vasko Dichev's Scala session, Vlado Pavlov's JRuby memory and thread issues (Ruby also has scalability issues, but that is another topic), the JavaOne star's OSGi migration headaches, and finally we'll all take a look inside the JCP together with Reza Rahman :-)

This article is not from this week, but it is so interesting and profound that I cannot help sharing it with you.

What is a data store? Well, it is a repository where you can store data. The author's idea is to provide a quick overview and benchmark results for a wide range of data stores.

There are several types of data stores – relational databases, object databases, document-oriented stores, etc. For each of these types there are different kinds of libraries, and even technologies, that help the developer work with the data storage. Each data store type, and each concrete implementation, has its strengths and weaknesses. The author starts by stressing that there is no silver bullet that solves the data storage problem. However, most people (99.99999% according to him) go for a relational database solution, and most of those (no percentage mentioned) use JPA as the layer between JDBC and the application code. The author calls this no-thought solution SOD – the same old data store.

I must admit that I am in the SOD camp – I don't usually think much when I have to develop a simple application; I go directly to a relational DB + JPA. However, on a bigger project I participated in, we had to choose between several persistence representations and technologies, so I had the opportunity to get acquainted with some of them (a colleague of mine even keeps insisting that JCR is the best solution, even though we chose JAXB and XML :-)).

Anyway, the bottom line is that it is good that people like Joe write such reviews, so that next time we'll make a well-thought-out decision instead of going directly to the SOD.

‘Closures in Java’ has been a long-discussed topic in the community. And it seems that we are going to have them in JDK 7.

But first, what is a closure? Well, this can be a very broad and complex area, but the easiest answer is: a function pointer. Hmm, that does not seem quite clear, right? OK, imagine that you can define a block of code and pass it as a parameter to normal methods. For example, you could define a generic iterate() method over a collection that receives the algorithm for handling the collection's data.
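In pre-closures Java, the same idea can be approximated with a one-method interface and an anonymous class; a toy sketch (all names here are made up):

```java
import java.util.Arrays;
import java.util.List;

public class IterateDemo {

    // a "function pointer" stand-in: one method, one behavior
    public interface Block<T> {
        void call(T item);
    }

    // generic iterate() that applies the passed-in block to each element
    public static <T> void iterate(List<T> items, Block<T> block) {
        for (T item : items) {
            block.call(item);
        }
    }

    public static void main(String[] args) {
        final StringBuilder out = new StringBuilder();
        // the anonymous class plays the role of the closure
        iterate(Arrays.asList(1, 2, 3), new Block<Integer>() {
            public void call(Integer item) {
                out.append(item * 2).append(' ');
            }
        });
        System.out.println(out.toString().trim()); // prints "2 4 6"
    }
}
```

The anonymous-class boilerplate is exactly what real closures would eliminate.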

Anyway, we don't have to wait for JDK 7 to start using closures. Most of the dynamically typed languages have them, and some of those languages are built on top of the JVM, which means they compile to Java byte code.

The article described here is an example of how you can implement a Groovy method that takes a closure parameter, and then create and pass that parameter from within pure Java code. A short but useful hint! :-)

JDBC was discussed in an earlier article on this blog. So if you liked it but are still not very familiar with the technology, DZone's series is just for you.

In the first, interview-like installment Daniel Rubio explains the very basic terms of JDBC: how it works, what a connection is and how you create one, what a statement is, how connections are pooled, what a DB driver is, etc.

IBM developerWorks continues Ted Neward's series “5 things you didn't know about…”. Now it is the Java Collections API's turn. As this is a very vast topic, several parts will be devoted to it.

In this first part the author starts with the most obvious things – how you can use the API for common tasks: converting an array into a collection, iteration, the new for-each loop and its usage with collections, new and handy collection algorithms, and extending a collection.

I personally thought that using a plain old array was better in terms of performance. Well, not exactly. Think about all the tedious code you have to write for a single operation on an array. For example, simply dumping one to the console requires at least several lines of code, not to mention extending it by one or more elements. The Collections API handles all this for you. Not only that, but the API designers have overcome all the traps you can fall into if you work with arrays by yourself – concurrency, for example.
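A small illustration of the points above, in plain JDK code:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class CollectionsDemo {
    public static void main(String[] args) {
        String[] array = {"b", "c", "a"};

        // dumping an array needs a helper; a List prints itself
        System.out.println(Arrays.toString(array)); // [b, c, a]

        // array -> collection in one call
        List<String> teams = new ArrayList<String>(Arrays.asList(array));

        // "extending" is trivial compared to copying an array by hand
        teams.add("d");

        // ready-made algorithms instead of hand-rolled loops
        Collections.sort(teams);
        System.out.println(teams); // [a, b, c, d]
    }
}
```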

So, my advice is: use collections as much as possible. It’s easy, it’s fast and your code will be clean and easy to understand and maintain.

Last but not least, Steven Haines, the editor-in-chief of the InformIT Java guide, created a quick series this week on running JavaScript code inside Java programs using the Rhino library from Mozilla.

The use case presented here is a web application that needed the same validation to run on the server as well as on the client side. The easiest solution would be to develop the validation logic twice – first in a language suitable for the browser (e.g. JavaScript) and then in Java, our server-side language of choice. After a while you realize that this is not a good idea: developing something twice is one of the worst practices, one we always try to avoid (though I must admit I'm still doing it :-().

So the solution is simple: write the validation code just once and call that same code in both places. It's easy to run JavaScript on the client side – all modern browsers understand it. But how do you do it in Java? Well, Mozilla Rhino does it all, and InformIT knows it all.
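For orientation, here is roughly how such an evaluation looks through the standard javax.script API, which wrapped Rhino in Java 6 (the function name and script are made up, and on JDKs without a bundled JavaScript engine this is only a sketch):

```java
import javax.script.Invocable;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class ValidationDemo {
    public static void main(String[] args) throws Exception {
        // "JavaScript" resolves to Rhino on Java 6; null where no JS engine is bundled
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("JavaScript");

        // the same validation function the browser would run (illustrative)
        engine.eval("function isValidAge(age) { return age >= 0 && age <= 150; }");

        Invocable invocable = (Invocable) engine;
        Object result = invocable.invokeFunction("isValidAge", 42);
        System.out.println(result);
    }
}
```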

After reading the first part, don't forget to go on with the next ones by clicking the Next links at the bottom right of the articles. You will find out how you can combine the results of different functions, how you can build a JavaScript entity using JSON and convert it to a JavaBean with GSON, and finally how you can organize your validation code.