Java 8 is already here and it’s different. The new APIs are powerful and have a definite learning curve, but start using lambdas, streams and the new APIs right away, or regret it later.
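To give a taste of the style shift, here is a minimal, self-contained sketch of a stream pipeline replacing a for-loop full of mutable state; the talk titles are made up for illustration:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class StreamsDemo {
    public static void main(String[] args) {
        List<String> talks = Arrays.asList("Java 8", "Lambdas", "Streams", "JavaScript");

        // Filter, transform and collect in one declarative pipeline --
        // the kind of code that replaces a for-loop with an accumulator.
        List<String> javaTalks = talks.stream()
                .filter(t -> t.startsWith("Java"))
                .map(String::toUpperCase)
                .collect(Collectors.toList());

        System.out.println(javaTalks); // [JAVA 8, JAVASCRIPT]
    }
}
```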

Even dry-as-the-desert banks are transforming to agile now (in what seems to be a successful and ‘correct’ fashion), so there are no more excuses for anyone else to stick to age-old Business vs. IT barriers, waterfalls and the like.

The market is exploding with cloud solutions for building, running and maintaining Java applications. Time to make use of them.

Java EE 7 (and above) is a new breed and much less cumbersome than previous versions. Have a look at it.

… is very focused on Java and surrounding technology (obviously!); there are not many methodology, inspirational or alternative talks. Denise Jacobs’ talk on the creative mind was, however, very good.

… has an excellent venue at the Metropolis Cinema with great audio and screens, great seats, and an easily accessible location (tram stops right outside),

… does not have the best quality or supply of food (!), although there is plenty to drink,

… was jam packed with people that created queues for most things,

… is located in a country with great beer tradition.

All in all, I had a great time and I’d like to thank the Devoxx team for all their efforts, and Hybris for giving me the opportunity to attend. Hoping for a return next year!

Devoxx Upper Floor

Me @ Devoxx

Devoxx 2014 – Infinite Possibilities / Java 8

Adam Bien creating new nomenclature

Denise Jacobs making everyone in the room a Superman

Venkat Subramaniam delivering the most voted talk of the conference using his TODO style

This spring I had weeks where I was in complete overdrive mode. The attention I needed to give to work and personal life peaked simultaneously, and all in all I was pretty much constantly stressed out. This was not sustainable in the long term. Fortunately I managed to deal with the issues and cut down on my workload, and after a while things settled down. But I still wanted to take something useful out of the experience, and I’ll talk about it here!

Within Extreme Programming, Sustainable Pace is an important practice.
It’s a principle arguing that when developing software, having a consistent workload in each iteration increases measurability and predictability, as well as code quality and employee well-being. I find this principle to grow more and more worthy of recognition, and not solely from a software perspective.

For me, “sustainable pace” can be applied in a broader sense. So let’s first look at how it applies to one of my hobbies: jogging! Anyone knows that if you sprint, you can’t go very far before you need to stop and catch your breath, whereas if you jog along at a pace suited to your condition, you can run quite far! Sprinting increases the risk of injury and makes you unfit to handle surprises. Let’s say you’re running a 10 km race and you believe that you have only 500 m left, so you start sprinting. After a while you reach a sign post indicating that you’ve got 2 km left! Since you’ve pushed your body to run near its maximum capacity, how are you going to cope with the extra 20% of track to cover? Another point, regarding jogging as a means of keeping fit, is that if you push yourself to your limit in a run, you might simply not enjoy it that much. Maybe you won’t notice the beautiful sights, the good weather or the nice, fresh air the way you would have if you had kept your pace. Thus, at the next scheduled time for a jog, you might decide not to run at all, because your previous experience was just too rough.

To complete the jogging analogy we need to include the quality aspect somehow… is it there? Well, as Pheidippides would know, you can run far quickly and make it to your preset goal in time, but it always comes at a cost. For the ancient Greek runner who, according to myth, ran a marathon distance to deliver news of the Greek victory over Persia, it meant his death.

I believe that you can also draw parallels to our daily life. If you’ve managed to fully pack your schedule by working overtime six days a week while taking an evening Spanish course and going to the gym four times a week trying to lose weight, you might just make it. But any unforeseen event is much more likely to cause big trouble for your scheme than for a less dense plan, and such an event may actually cause all of your activities to suffer. Had you had more unscheduled time available, time that could have been used for crisis management, you could possibly have continued with most of your planned tasks. When looking at it like this, I find the value of allowing enough slack in your schedule fairly obvious, and I think most people would agree. Yet very often we overestimate our capabilities and try to achieve too much.

Sustainable pace does not apply only to jogging, obviously, but to practically any activity involving some finite resource (such as stamina, time or money); it is about managing the resource, making sure that supply matches consumption, and keeping enough in reserve to handle extremes.

Let’s now focus on software development and why this topic is so important there. Unfortunately many projects are running way too fast for their own good. This includes many Scrum teams that take the ‘Sprint’ concept the wrong way; they literally sprint, blindfolded, through the whole iteration and usually end up crashing into the Sprint Demo Wall at the end! Near deadlines, many teams crunch hours and management demands overtime. Corners are cut, and features dropped. When you’re sprinting in software development, you’re not only risking burning out your core resource, the developers; you are also creating a greater and greater need for a ‘recovery’ period, when the team and code can settle. This recovery period would typically be the time when the built-up technical debt is dealt with. If the team is not allowed such recovery, the time needed for it will only increase. Further risks induced on the project include an inability to handle unplanned events. If the team is already running at maximum speed, how will you manage a crashed build server or several team members falling ill?

You may wonder how to determine that you’re running at an optimal pace. It’s easy to imagine situations where you are running much too slow, or where you have an abundance of slack; this would cause your competitors not only to overtake you, but to stay ahead of you! So what about setting out at a reasonable pace and adjusting as you go? After a couple of complete races, sprints or releases, you’d probably have adjusted your pace several times and be very near the optimal pace. Say you noticed that during the last Scrum Sprint the team felt quite some pressure to finish the last tasks in time, and a test server crashed that no one had time to fix. When planning the next Sprint, the team would then make sure to commit to less work.

As a summary I argue that for any activity, by keeping a pace where there are margins, you’re more likely to:

increase the quality of the activity,

increase the length of time that the activity can be performed,

increase the predictability of the activity being performed (on time and in time),

increase the capability to handle extreme situations that directly, or indirectly, affect the activity,

increase participant satisfaction in the activity.

Now it’s time for all of us, I hope, to go and spoil ourselves with a Christmas spent doing things at an unsustainably low pace with an unsustainable intake of fat and liquids!

It just so happens that one of my favourite pastimes is following the proceedings of a little sport called football. For esoteric reasons, I happen to support a British team called Queens Park Rangers. Just recently I came across a forum post with a link to this 1988 documentary on the team’s mental training at the time. In it, viewers are introduced to how a sports psychologist works with the team over six weeks. Players learn to focus on matters they wish to be good at, and to visualize themselves doing them.
They also get to do team sessions where they discuss problematic areas during the past season, and where they get the chance to express their feelings about situations on the football pitch. The impact of this mental training is deemed a success, with several of the players improving their focus areas. From the manager’s viewpoint, the team also started to communicate on a whole different level than what was ever imaginable before the training.

Fascinating, isn’t it: an activity based on strength, stamina, genes, hard training and ball-juggling talent can be improved by psychology! That’s how I came to think that mental training is greatly underappreciated in the world of software development, and is frankly something that I don’t think many teams or companies are doing. Sure, the Agile retrospective is something along the lines of what I’d like to see, but it encompasses so many areas not related to how we feel or communicate with each other.

I don’t have a Ph.D. in Psychology, but I reckon that just by applying some common sense we can immediately identify a few things:
1. When we visualize ourselves doing hard things, or talk about things we fear, we almost inadvertently improve in these areas. With continued mental training we unlearn unwanted habits while discovering how not to fear our most dreaded scenarios. I argue that visualizing writing clean code actually makes you write cleaner code, or, more indirectly, sets your mind in a state more open to learning the things you need to know to write cleaner code!
2. When we get the chance to express our feelings about softer topics, such as why we are hesitant to refactor legacy code, why we fret about pair programming, or what we really think about the new process being pushed upon us from above, we immediately feel better about them. As a group, we bond together instead of building mental silos. Opportunities for adaptation arise instead of slipping through our fingers!
3. In reference to my previous review of The Two Second Advantage, when doing this kind of training, you are in effect building mental models of your particular focus area. These mental models include knowing how other members of your team will react in certain situations.

Maybe my argument is a bit abstract, so let’s just think about a few things I’ve noticed “out there”: people realize pair programming, peer code reviews, TDD, etc. are indeed very useful practices, but for one reason or another they repeatedly fail to follow them. Or what about people sitting at their desks/cubicles/work stations for 30 minutes deciphering a cryptic email from someone from another department across the hall, instead of walking over to that person and resolving the confusion? Most of the time this has to do with fear; fear of not knowing how to do something, fear of not knowing what something will be like, or fear that something would be too much work or too boring! Venting such fears opens the door to peer understanding and relief, which in turn might push a team to try some of the practices out. The team and its members will grow, as a unit as well as individually.

As for how you can accomplish this in practice, well, unfortunately I don’t have much experience on the matter. But I suggest introducing the topic of mental training in the company, and encouraging people to get together and talk about it as often as they want: at Lean Coffees before work, during team meetings, in extended or separate retrospectives, or preferably over a beer or two after work!

I’ve previously told you about my experiences of Personal Kanban (PK), at work and at home. I recently got the opportunity to borrow the book, Personal Kanban by Jim Benson and Tonianne DeMaria Barry, from a good friend and it did not disappoint.

Immediately notice the book’s subtitle “Mapping Work | Navigating Life”, because that gives an excellent insight into the mindset of the authors! The book takes you through the basics of PK such as its keystones of visualizing your work and limiting your work in progress. It continues with describing how to create your own Kanban board, with a value stream and backlog, and sends you off ready to start pulling tasks!

Further topics include optimizing flow vs. capacity, slack, push vs. pull, the horrors of to-do lists and the need to continuously refine your own PK process. You’ll learn about swim lanes, additional value streams, the PEN, and more. But if you only take these concrete subjects with you, you’ve missed the whys of the book.

Existential overhead, for example, is something you will immediately recognize: you know “in the back of your head” that there are a million things you need to do. You don’t know exactly what they are or when you’ll do them, but they are there, right? PK helps you minimize existential overhead by visualizing and mapping the work ahead of you into something the authors like to call a narrative. Could it be that your organization suffers from existential overhead? How do you think your developers feel about that technical debt they know exists all across the code base?

Another interesting subject is the Zeigarnik Effect, the strong propensity of humans to remember incomplete tasks over completed ones. We simply need closure on the things we have started, and with PK we get this in a very strong manner: by moving the PK cards to that final column on the board that emanates success: DONE!

This book is full of life and of examples of how PK helped people in various situations, which gives it a very humanistic tone. Not only does it teach you everything you need to know about PK as a method, it follows up with why it is such a good method and how it will affect your life. This book is a must-have for Agile proponents and anyone who likes to get things done effectively.

So, in this final part of my excursion into new Spring territory, I’ll talk about a few trends and tools noted in the “3.2 era”. This blog post will go high and low while trying to follow a common thread; let’s see how it goes!

First of all, let’s take a look at some of the tools that leverage the ongoing boom in RESTful APIs.

Spring HATEOAS is a framework that helps with writing HATEOAS-compliant REST interfaces (HATEOAS really being a fundamental principle of REST!). Basically, a client should only take actions on a resource that were described in representations previously received from the server. You could say that the client should be able to stay agnostic in terms of what resources are available and how they are linked. So, not surprisingly, this Spring framework has support for adding this type of information to the representations the server sends back to the client. It can help you keep track of how to create the necessary relative links to other resources in your application. I believe this principle of REST is pretty fantastic, yet not fully leveraged by many teams, and I encourage you to take a look!
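To make the principle concrete, here is a minimal, framework-free sketch (plain JDK, no Spring HATEOAS) of what a link-enriched representation boils down to; the resource names and rels are made up for illustration:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class HateoasSketch {

    // A representation carries its own navigation options as "rel" -> href
    // pairs, so the client never needs to hard-code resource URLs.
    static Map<String, Object> orderRepresentation(long id) {
        Map<String, String> links = new LinkedHashMap<>();
        links.put("self", "/orders/" + id);
        links.put("customer", "/orders/" + id + "/customer");
        links.put("cancel", "/orders/" + id + "/cancel");

        Map<String, Object> body = new LinkedHashMap<>();
        body.put("id", id);
        body.put("status", "PENDING");
        body.put("_links", links);
        return body;
    }

    public static void main(String[] args) {
        // The client discovers what it may do next by reading "_links",
        // rather than constructing URLs itself.
        System.out.println(orderRepresentation(42));
    }
}
```

Spring HATEOAS takes care of building exactly this kind of link section for you, relative to your controllers and mappings.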

Spring REST-shell is a tool that helps REST developers test and explore their interfaces. The command-line-style shell has several commands and features, such as discovering resources, following paths within the interface, setting HTTP request headers and posting JSON (although that is somewhat clunky). An intended use of this shell is together with the HATEOAS framework, fully leveraging HATEOAS-style links. For example, a resource that has a “rel” linking to another resource can easily be followed via commands, instead of having to manipulate the resource URL.

The above tools can actually be used together with Spring Data REST to create, with very little boilerplate code, a HATEOAS-compliant REST interface on top of your model.

Finally, a small reflection: most of the Spring projects on GitHub now seem to use Gradle instead of Maven. It’s not particularly new that Gradle is the new kid on the block, but you should take notice that things are happening here. ThoughtWorks even goes so far as to put Gradle in the ‘Adopt’ ring of their 2013 Technology Radar, whilst Maven falls back to ‘Hold’.

Another one of the new Spring 3.2 features is better support for content negotiation. I’ll introduce the topic briefly here, and share some thoughts.

Content negotiation is used to indicate either what type of content a client would like to receive, or to indicate what type of content the server will return. This is useful for various reasons, some being:

Enabling servers to deliver content of the client’s preferred type.

Enabling better error handling when there is a mismatch in client-server content types.

Serving content on the same location to different kinds of clients, such as browsers or mobile apps.

Anyway, this is hardly news to anyone (!) and the technology is widely used, particularly in its most classic form, via HTTP headers. But with Spring 3.2, there are new features that help us make the most out of a variety of content negotiation techniques.

To minimize Internet waste, I’ve decided to not go more in depth here, and I will simply recommend that you take a look at this Spring blog which does a great job of explaining the topic. Or you can skip the reading and jump straight to the author’s demo!

Spring 3.2 comes with new, improved support for testing within the Spring MVC domain. The gist of the new MVC test feature is that you can test your controllers in such a way that requests are routed via a proper MVC infrastructure, i.e. via the DispatcherServlet.
Previously, you may have used various mocks to test the code inside each controller method. With the new functionality, you can also test and verify specifics of the controller method, such as the mapping, HTTP headers, request data, content types and more.

So how would you accomplish this? Let’s start by turning our attention to the spring-test module, and the org.springframework.test.web.servlet package where we find the new MockMvc class. MockMvc acts as an entry point to the MVC structure under test. It’s a tool you can use to simulate requests against your controllers, with support for all kinds of assertions on the results. As indicated above, everything about your controller should work as it would at runtime, including servlet filters, view templates and the like. However, the requests will not be running in an actual servlet container, which means that JSPs will not work properly (although you can still assert some things like what model objects were set).

First, you can create a standalone MockMvc for one or more of your controller objects (annotated with @Controller). A “minimum” infrastructure required by the DispatcherServlet is then created, which may be customized as needed. This option is specifically intended for unit testing one controller at a time, leaving testing of the MVC infrastructure itself, the Web Application Context, to other test classes.

Second, you may create a MockMvc given a particular WebApplicationContext (support for loading one in tests is new in 3.2). This allows you to load a full Spring MVC configuration in which the controller under test will exist during tests. This option should be considered for integration tests, where the focus isn’t solely on the controller’s code, but on the effective behaviour of the controller in the scope of the tested application.

Now it’s time to look at an example, which will demonstrate a standalone controller test and how to perform assertions. First, a very simple controller:
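The listing has gone missing from this version of the post, so here is a hedged reconstruction of roughly what such a standalone test looks like with the Spring 3.2 API; the GreetingController, its /greeting mapping and the view name are my own placeholders, not the original demo code:

```java
import org.junit.Before;
import org.junit.Test;
import org.springframework.stereotype.Controller;
import org.springframework.test.web.servlet.MockMvc;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.RequestMapping;

import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.model;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.view;
import static org.springframework.test.web.servlet.setup.MockMvcBuilders.standaloneSetup;

public class GreetingControllerTest {

    // A very simple controller: one mapping, one model attribute, one view.
    @Controller
    static class GreetingController {
        @RequestMapping("/greeting")
        public String greet(Model model) {
            model.addAttribute("message", "Hello MockMvc!");
            return "greeting";
        }
    }

    private MockMvc mockMvc;

    @Before
    public void setUp() {
        // Standalone setup: a minimal DispatcherServlet infrastructure is
        // built around just this controller -- no servlet container needed.
        this.mockMvc = standaloneSetup(new GreetingController()).build();
    }

    @Test
    public void greetingRendersExpectedViewAndModel() throws Exception {
        this.mockMvc.perform(get("/greeting"))
                .andExpect(status().isOk())
                .andExpect(view().name("greeting"))
                .andExpect(model().attribute("message", "Hello MockMvc!"));
    }
}
```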

As you can see, MockMvc allows for a fluent-API way of testing the controller, including assertions.

When you call “perform” on MockMvc, the result is a ResultActions, on which you can “expect” a wide variety of criteria via the “andExpect” method. The criteria here are instances of ResultMatcher, for which there exists a large number of helper classes. For example, MockMvcResultMatchers can be statically imported to provide matchers for asserting content, view, model, URLs, response status, headers and more!

As we saw, there’s also a static “print” method, used as parameter to “andDo” on the ResultActions object, which will dump the result of the performed request to standard out:

this.mockMvc.perform(......).andDo(print());

Hopefully, this blog is enough to get you going with MVC testing. If you want to go more in depth in this topic, take a look at the Spring blog, consult the reference manual, and check out the Spring MVC Showcase, which includes a large amount of samples of what is possible with the new MVC testing features. Also check out this Spring blog which discusses other new, general testing features such as how you can set up and customize mocked application contexts.

These great new features mean there’s really no excuse not to test controllers, so now we can finally stop starting web servers to find out whether our code works!

One of the features in the latest minor release of the Spring framework (3.2), is support for Servlet 3.0 “async” processing.

In this blog I’ll show you how Spring lets you do wizardry like long polling in a straightforward way!

In a normal web application, the client (browser) sends HTTP requests to the server, which processes information and delivers an HTTP response. While waiting for the response, the browser normally pauses and indicates that it is waiting. Nowadays this “unresponsiveness” is mostly overcome by sending the request as an AJAX request, which solves the client’s problems to a great degree.

However, a situation where the processing on the server side takes an extended amount of time may cause problems for the server. Each (AJAX) request that clients make will “hang” on the application server and lock up resources (such as threads) while the server is trying to process it. This is where the async functionality in Servlet 3.0 and Spring 3.2 comes to the rescue.

Spring async support basically comes in two variations. Both work in a similar fashion: when a request is received, the executing thread is immediately released, and the processing is done in a separate thread before the HTTP response is written back to the client. Thus, from the client’s side it’s not possible to tell that async functionality is being used.
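Stripped of the servlet machinery, the hand-off pattern that both variations share can be sketched in plain JDK code; the class and names here are mine, invented for illustration, and in Spring the container does the equivalent wiring for you:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

public class AsyncHandoffSketch {

    // The "request thread" registers the work and returns immediately; a
    // worker thread produces the response later. The latch stands in for
    // the container writing the HTTP response back to the client.
    public static String handleRequest() {
        ExecutorService workers = Executors.newSingleThreadExecutor();
        AtomicReference<String> response = new AtomicReference<>();
        CountDownLatch responseWritten = new CountDownLatch(1);

        // Hand off: the request thread is free again right after this call.
        workers.submit(() -> {
            // Long-running processing happens off the request thread.
            response.set("view-name");
            responseWritten.countDown();
        });

        try {
            // "Client" side: block until the response has been produced.
            responseWritten.await(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        workers.shutdown();
        return response.get();
    }

    public static void main(String[] args) {
        System.out.println(handleRequest()); // view-name
    }
}
```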

I built a sample application for the purpose of showing these variations, including the “normal” HTTP request/response call, and I’ll show you here how to set this up yourself.

First of all, you’ll need a project setup. Look no further than the Spring MVC Showcase project on GitHub! In my own demo project I used a similar, but simpler, setup that I won’t show here. What I will do is highlight a few important configurations:

Make sure your application server supports the Servlet 3.0 spec.

Set the <async-supported> flag to ‘true’ on your Spring dispatcher servlet.

Specify that the web application is a “3.0” web application in web.xml:
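The snippet is missing from this version of the post; the standard Servlet 3.0 declaration in web.xml looks like this:

```xml
<web-app xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                             http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
         version="3.0">
    <!-- Servlets, filters etc. go here; remember to set
         <async-supported>true</async-supported> on the dispatcher servlet
         as noted above. -->
</web-app>
```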

My demo page contains three links. When the first, called ‘normal’, is clicked, an MVC controller method is hit, which basically puts the thread to sleep for 9 seconds before returning a view name.

When the second, called ‘async with callable’, is clicked, an MVC controller method is hit, which returns a java.util.Callable. The callable implementation sleeps for 9 seconds before returning a view name. In this case the response is written after the Callable returns the view.

When the third, called ‘async with deferred’, is clicked, an MVC controller method is hit, which returns a Spring DeferredResult typed to ModelAndView. The result is also stored in a local collection. A separate method, scheduled by Spring to run every five seconds, executes concurrently. That thread simulates an external process by sleeping for 9 seconds before calling “setResult” on the DeferredResult stored in the collection. In this case, setting the result causes the response to be written!

To understand the differences between these methods, I took screenshots of the running Tomcat threads after having clicked each of the above links:

In the normal case, you can see that the Tomcat exec thread is sleeping during execution of the controller method.

In the async case, you can see that the Tomcat exec thread releases execution of the controller method, and a separate thread (known by Spring) performs the “processing” before returning the response.

The deferred case is almost the same as the async one. The only difference is where the processing occurs, which in this case is the Spring-initiated ‘pool-1-thread-1’ executor thread. Note that this thread could have been started from anywhere; it wouldn’t have to be known to Spring.

So in conclusion, I think it’s fair to say that Spring makes this new functionality very easy to implement. Now you can start doing ‘long polling’ and stop hogging server resources!

If you want to try these things out, I suggest that you take a look at the three Spring blogs on the topic and use the MVC Showcase project as your starting point.

For reference, below is the controller code used to demonstrate the above:
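(The original listing was lost in this version of the post; what follows is a hedged reconstruction based on the descriptions above. The mappings, view names and timings follow the text, but the identifiers are my own placeholders.)

```java
import java.util.Queue;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentLinkedQueue;

import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.context.request.async.DeferredResult;
import org.springframework.web.servlet.ModelAndView;

@Controller
public class AsyncDemoController {

    private final Queue<DeferredResult<ModelAndView>> pending =
            new ConcurrentLinkedQueue<DeferredResult<ModelAndView>>();

    // 1. 'normal': the request thread itself sleeps for 9 seconds.
    @RequestMapping("/normal")
    public String normal() throws InterruptedException {
        Thread.sleep(9000);
        return "done";
    }

    // 2. 'async with callable': the request thread is released immediately;
    // a Spring-managed thread runs the Callable and writes the response.
    @RequestMapping("/callable")
    public Callable<String> callable() {
        return new Callable<String>() {
            public String call() throws Exception {
                Thread.sleep(9000);
                return "done";
            }
        };
    }

    // 3. 'async with deferred': the result is parked in a collection and
    // completed later by a separate thread.
    @RequestMapping("/deferred")
    public DeferredResult<ModelAndView> deferred() {
        DeferredResult<ModelAndView> result =
                new DeferredResult<ModelAndView>();
        pending.add(result);
        return result;
    }

    // Simulates an external process: every five seconds, complete one
    // pending result after its own 9-second "processing".
    @Scheduled(fixedRate = 5000)
    public void completePending() throws InterruptedException {
        DeferredResult<ModelAndView> result = pending.poll();
        if (result != null) {
            Thread.sleep(9000);
            result.setResult(new ModelAndView("done"));
        }
    }
}
```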

The new version of Spring, dubbed 3.2, went “GA” in December, and I’ve taken a look at the new features and what you can do with them. This will be a rundown of selected features, with comments and examples.

First off, I’d recommend that you refer to the blog post, this video briefing and the 3.2 reference documentation, in increasing order of detail, for what the 3.2 release brings. Or you can stick around here for a distilled version! I’m going to comment and provide examples of the following topics in this blog series:

In my previous post on personal kanban, I showed how I used a simple piece of cardboard to implement the flow at work. Having accustomed myself to the method, I tried to give it a go also at home:

The picture above depicts my bedroom door, and you might notice (I suppose it might not be obvious) that my Kanban flow is vertical rather than horizontal. It doesn’t really matter though – I just thought that the three areas separated by the woodwork, and the visibility of the door, made for a splendid Kanban board for my purposes.

There is nothing new here, the method is the same as the one I used at work, but I’d still like to point out two things.

I’ve actually tried this exact method on a couple of weekends. You know the feeling: you have a couple of days off (perhaps even a few weeks of vacation), and in the back of your head is a long backlog of various things you need to get done. But instead of organizing these tasks in your mind and starting to tick them off, you let them slip and go watch a game of football or do some other completely unrelated thing.

Now, with this extraordinarily simple method, I usually get 80-90% of the tasks done, which is a vast improvement over a “no method” approach and, oddly enough, also an improvement over writing all the tasks down on a piece of paper! In addition, some of the tasks that I couldn’t complete were impossible or very hard (because they had to be done on a weekday for a certain retailer to be open, or similar impediments), which makes the statistics even better.

The second thing is, again, transparency. I suspect the reason why this method is better than “no method” or “paper method”, is the visibility of the tasks glaring at me from the top row of the door. Not to mention my cohabitant’s reaction to having a part of our home filled with little, yellow, beautiful(?), post-it notes :-). She even took a picture and shared it on a particular social networking site… Furthermore, she was WELL aware that I hadn’t completed certain, “important” tasks on the list and kept bugging me about them.