Every hour, deploy and test the latest successfully built artefacts to dev #1

Every day deploy and test the latest artefacts which have been successfully tested in dev #1 to dev #2

On demand do a Maven release of the latest artefacts that have passed testing in dev #2

The tricky part of this pipeline comes in the dev #2 environment, because it needs a way to select the latest artefacts which have been tested in dev #1 instead of just the latest artefacts produced. The same goes for making the release: I need some way to identify which artefacts have been tested successfully in dev #2.

This means I cannot rely on just picking the latest snapshot deployed to the internal repository.

We have considered different options:

Publish the snapshot to an internal repository and carry around the timestamp of the specific snapshot version. This would allow us to pin the version, but gives us a problem with regard to cleaning up unused snapshots, as Nexus can only keep artefacts for X days or keep the last X versions, not remove a set of artefacts based on a timestamp. It also has the downside of people questioning why we should even do a Maven release if we already have a unique identifier.

A second approach would be to use the "copy artefacts" Hudson feature, but then the way we fetch artefacts would differ between dev and the upper environments.

The approach we have settled on is to not deploy the snapshots to the Nexus repository, but to use the Hudson Maven repository plugin instead. This plugin exposes each build as its own Maven repository. In order to get the right artefacts we use a custom settings.xml to mirror the snapshot repository to the URL of the specific build as exposed by the Hudson plugin. The plugin only works with Maven 2 jobs or the new Maven 3 integration, since it relies on Hudson understanding the build and its artefacts. We use the promoted builds plugin to identify the correct build, and we use the promotion status for easy clean-up of unused artefacts.
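To make the mirroring concrete, the settings.xml we hand to the downstream job looks roughly like this. This is a sketch: the mirrorOf id ("snapshots") and the per-build repository URL are placeholders for our actual job name, build number, and the URL layout the Hudson Maven repository plugin exposes.

```xml
<!-- Sketch of a per-build settings.xml. The repository id "snapshots"
     and the build URL are placeholders; the real URL comes from the
     Hudson Maven repository plugin for the chosen (promoted) build. -->
<settings>
  <mirrors>
    <mirror>
      <id>pinned-build</id>
      <!-- redirect all snapshot lookups to exactly one build -->
      <mirrorOf>snapshots</mirrorOf>
      <name>Repository of the promoted build</name>
      <url>http://hudson.example.com/plugin/repository/project/my-job/123/</url>
    </mirror>
  </mirrors>
</settings>
```

The downstream job then runs Maven with `-s pinned-settings.xml`, so "latest snapshot" can only ever resolve to artefacts from that one build.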

We haven't looked too much into using Nexus Pro's features such as staging repositories or adding metadata, since the above approach works fairly well for us.

The new challenge:
The reason for writing this post and asking for help is a possible change to our process which I am not sure how best to integrate.
There is a wish to integrate a component (a Play framework based webapp) built using Ivy into this framework, and specifically into this project. I can see this causing some integration pains.

Firstly, the project is one big Maven multi-module project and we prefer to keep it that way. At the very least we want to keep things as one build, i.e. one Hudson job for building. Is it possible to have a Maven submodule defer execution to Ivy?
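One way I could imagine wiring this up is a submodule that delegates its packaging to the component's existing Ant/Ivy build. This is only a sketch under assumptions: it presumes the component ships a build.xml with a "build" target that drives Ivy, and the produced artefact would still have to be attached to the Maven build (e.g. via build-helper-maven-plugin) before Maven, and hence Hudson, can see it.

```xml
<!-- Hypothetical sketch: a Maven submodule handing the real work to an
     Ant build that resolves dependencies through Ivy. The build.xml
     location and the "build" target name are assumptions. -->
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-antrun-plugin</artifactId>
      <executions>
        <execution>
          <phase>package</phase>
          <goals><goal>run</goal></goals>
          <configuration>
            <target>
              <!-- run the component's own Ant/Ivy build -->
              <ant antfile="build.xml" target="build"/>
            </target>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```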

If we get the component built using Ivy only, then Hudson will not be able to see the outcome as a Maven artefact, and thus won't be able to expose it as part of the "repository per build". That is at least my strong suspicion.

What to do?
So does anyone have a suggestion on how to resolve this?

Can we cleanly integrate Ivy into our current build, and if so, how?

Do we need to find another approach than the "repository per build" plugin? If so, what is the suggested alternative?

Would Nexus/Artifactory paid editions make this easier ?

Just to be clear my requirements are:

From the outside it must appear as one build i.e. one Hudson job.

For a developer's desktop build it is okay to be a multi-step process.

We need to be able to specify a particular build to deploy in dev #2 and for releasing.

We would like to continue to use the promoted builds plugin to visualize good builds.

Wednesday, 26 October 2011

This is a review of Apache Maven 3 Cookbook, written by "Srirangan". I got a free copy from Packt Publishing for the purpose of the review. I have been using Maven for some years now, and this is an introductory book, so it was clear from the beginning that I am not in the target audience.

The style of splitting the book into 50 recipes makes for a good format which is easy to read and breaks the book into small achievements for the reader.

Instead of focusing solely on how Maven is configured, the author tries to tie some of the subjects to software development practices, e.g. covering Nexus and Hudson while explaining team collaboration. It serves the book well to put Maven into a development perspective, but it doesn't always fit the recipe format. For instance, in the Nexus case above, the "How it Works" section becomes more of a "why it is good" section.

I like the fact that the book covers a wide range of project types and topics. Many times when you read tutorials or other documentation only the simplest project types are covered, leaving the reader to add plugins as needed. This book covers many project types and frameworks and some non-Java areas. It also covers things like setting up Nexus, Hudson, and various IDEs. It even has a chapter on plugin development.

In the first chapter the level of information in each recipe is appropriate, but as the chapters get more complex the level of information does not keep up. This results in many of the recipes being too simplistic. A prime example is the set-up of remote repositories: the book describes in a great many screenshots how to install Tomcat 7 and deploy Nexus, but has only a single line of information on how to set up the remote repository, and it only mentions (incorrectly) changing settings.xml, not the required changes to the project object model. So it fails to help the reader define and use the remote repository, possibly leaving the reader with a broken set-up.
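For completeness, the POM side that the recipe skips looks roughly like this. The ids and URLs are placeholders, but the point stands: consuming a remote repository needs a repositories entry in the project (or parent) POM, and deploying to it needs distributionManagement.

```xml
<!-- Sketch of what belongs in the POM, not just settings.xml; the ids
     and URLs below are placeholders for a real Nexus instance. -->
<repositories>
  <repository>
    <id>internal-nexus</id>
    <url>http://nexus.example.com/content/groups/public/</url>
  </repository>
</repositories>

<distributionManagement>
  <repository>
    <id>internal-releases</id>
    <url>http://nexus.example.com/content/repositories/releases/</url>
  </repository>
  <snapshotRepository>
    <id>internal-snapshots</id>
    <url>http://nexus.example.com/content/repositories/snapshots/</url>
  </snapshotRepository>
</distributionManagement>
```

The server credentials matching those ids are the part that genuinely belongs in settings.xml.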

This simplistic approach has another side effect. Many of the recipes have clear copy-pasteable examples but very little explanation of why things are the way they are. An example is the first recipe introducing multi-module projects: in the top-level project definition the dependencies are placed in the "<dependencyManagement>" section instead of the normal "<dependencies>" section, without an explanation of why.
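The explanation the book omits fits in a few lines. dependencyManagement in the parent only pins versions; each child module must still declare the dependency (now without a version) to actually get it on its classpath. A minimal illustration (junit used purely as an example):

```xml
<!-- parent pom.xml: pins the version, pulls nothing in by itself -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.8.2</version>
    </dependency>
  </dependencies>
</dependencyManagement>

<!-- child module pom.xml: declares the dependency, inherits the version -->
<dependencies>
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
  </dependency>
</dependencies>
```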

Conclusion:
While I like the style and the long list of topics covered in this book, I think the decision not to explain the details of why things work the way they do, e.g. <dependencies> vs <dependencyManagement> or how repositories work, does the book a big disservice.

I would not recommend learning Maven from this book alone, since I think explaining the "why" is an essential part of learning a new tool. If you want to learn Maven, use the free Sonatype book Maven: The Complete Reference, and buy a copy of this book if you would like a quick introduction to the various project types and plugins.

Thursday, 13 October 2011

The tool itself is pretty good and makes it very easy to test that our SOAP-based web services are working as intended. The fact that it provides Maven and JUnit integration out of the box is even better and fits very nicely with our CI environment.

There are, however, a few things that are not obvious when using the plugin.

The documentation page is really old, e.g. it refers to an old (2.5.1) version of the plugin. The trick here is that you should in general use the same plugin version as the desktop version you are running. In my case that is 4.0.0.

It isn't documented on the page, but there is both a "maven-soapui-plugin" and a "maven-soapui-pro-plugin" version of the plugin. In order to fully use a project created with the pro version, you need the pro version of the plugin.

Version 4.0.0 of the plugin has a misconfiguration, so you will need to manually add some dependencies to the plugin. The version I got working looks like this.
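(The original snippet appears to have been lost from this post; the following is only a sketch of the shape such a configuration takes. The groupId, the projectFile path, and the fact that the missing dependencies go in the plugin's own <dependencies> block are assumptions to check against the soapUI documentation.)

```xml
<!-- Hypothetical sketch of a maven-soapui-plugin configuration;
     paths and the dependency list are placeholders, not the author's
     actual working version. -->
<plugin>
  <groupId>eviware</groupId>
  <artifactId>maven-soapui-plugin</artifactId>
  <version>4.0.0</version>
  <configuration>
    <projectFile>src/test/soapui/my-service-soapui-project.xml</projectFile>
    <junitReport>true</junitReport>
  </configuration>
  <executions>
    <execution>
      <phase>integration-test</phase>
      <goals><goal>test</goal></goals>
    </execution>
  </executions>
  <dependencies>
    <!-- the dependencies missing from the 4.0.0 plugin descriptor
         would be added here -->
  </dependencies>
</plugin>
```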

Sunday, 31 July 2011

One of the things I highlighted under "the bad" in my review of the new Maven 3 integration in Hudson 2.1 was the fact that the "Jenkins Maven repository Server" plugin did not work with it.

This is of course still true, but now the first version of the Hudson equivalent has been released. Since it is the first version, my focus has been on making it work in our own pipeline, leaving the extra stuff for later. This means:

It only supports normal free-style jobs not multi-configuration

There is no repository chain or git commit functionality, only direct build by number

metadata.xml is not created.

The last item can cause you some issues; I have seen Maven think its local version is newer than the repository version when it cannot find the metadata, so I advise you to manually remove the old artifacts.
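The manual clean-up amounts to deleting the artifact's directory from the local repository so Maven is forced to re-resolve it. The groupId/artifactId path below is a placeholder, not a real artifact of mine:

```shell
# Remove the stale artifact from the local Maven repository so the next
# build re-resolves it; com/example/my-artifact is a placeholder for the
# real groupId/artifactId path.
rm -rf "$HOME/.m2/repository/com/example/my-artifact"
```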

I hope to support both multi-configuration projects and metadata files in the next version, but they might require core changes.

The plugin has gone through staging in Sonatype's OSS Nexus, so it should appear in Maven Central and the Hudson update site shortly.

Friday, 29 July 2011

I have been looking forward to the new Maven3 integration in Hudson for some time now, and with the release of Hudson 2.1 it is finally here.

My main interest in the new integration is that we rely heavily on the ability to expose a Maven build as a Maven repository, but the current (Jenkins) plugin only works with the Maven 2 job type. That setup works nicely, but the Maven 2 job type has some performance and scalability issues.

So we have been looking forward to getting the (hopefully) best of both worlds: as fast as the freestyle job type, but still with enough metadata to expose the build as a Maven repository. We mostly got it; see the details below.

The good

Our build time dropped from around 35 minutes to 12 minutes.

Our Hudson was acting really slow with regard to page rendering etc., but after switching to the new style integration it has become significantly faster. I think this is because of changes to the metadata stored in memory.

I like the Maven information browser on the build pages. Right now I have only used it to see how far a build has come, but I imagine it is really useful if you need to look into individual modules.

The new document storage is a great idea which hasn't really come into its own yet. I think its potential is bigger than its current usage. I mean this in a good way, because it is already great for managing settings.xml files, but there is so much more you could do with it.

You can now build a Maven job in a matrix build without losing the metadata.

Many of the Maven options are now exposed as easy to use switches.

The bad

In some places it is evident in the user interface that the underlying presentation framework is different from the rest of Hudson. Some of the interface elements feel a bit "out of place"; prime examples are the document storage screen and the scrollbars.

The underlying API exposing the metadata is a bit confusing, and there isn't any documentation yet. To be fair, it also has to deal with more than the original Maven job type APIs: the problem space is simply bigger now that an artifact can be created in multiple build steps or combinations in a matrix job.

There are a few missing items in the API, e.g. there is no way to get a file link to an archived artifact.

You lose the functionality of the m2release plugin. I quite liked the fact that there was a manual button you could press on a job to have it perform a Maven release of that same job, and the release was then visible to all in the build history. Now you need to create two build jobs, making releases less visible.

As far as I can tell you also lose the ability to easily see which modules were previously in a build but have now been removed. I have used that functionality more than once to catch when someone removed a module but not all dependencies on it, causing the release to fail.

You lose the use of the "Jenkins Maven repository Server" plugin, but that went "Jenkins only" sometime after version 0.4 anyway, and there is already a replacement underway.

The potential

Once people start extending the Maven integration there is a lot of potential, e.g. why archive WAR and EJB files if they are included in EAR files? Adding that will save tons of network traffic.

The document storage can become a real killer feature once the documents can be handed to build steps other than Maven 3 steps.

And let us not forget the collateral benefits this feature has brought with it: a new REST API, the GWT frontend framework, and JSR 330 based plug-ins.

Verdict
The new Maven integration has provided us with a much needed and very impressive performance improvement, both in build time, which went down from 35 to 12 minutes, and in the responsiveness of the user interface.

For us this is the killer feature, and the rest are reduced to "icing on the cake", even though that might be a bit unfair. As you can see from the negative list there is still room for improvement, and I hope to see many of those shortcomings addressed now that the rush to integrate is over.

In conclusion, the new Maven integration is a solid improvement, but it is even better as a stepping stone.

Monday, 25 July 2011

I ran into a tricky problem today after switching a couple of our Hudson jobs from Subversion to Git. The Git plugin was able to clone the repository into a clean workspace, but after that it simply hung in the fetch step.

Looking at the slave workspace, it was clear that it had actually managed to clone the repository, as there was both a .git folder and the working directory content. So in some sense it must be able to contact the server.

A Google search for the problem returned some results, but in the majority of cases people got an error message, not a hanging job. It did, however, reveal that an incorrect HOME environment variable could be the cause.

In order to find the current environment variables for the node I went to "Manage Hudson" -> "Manage Nodes" -> (name of the node) -> "System information". Here you can find the system properties, the environment variables, and a thread dump.

Besides confirming that we did not have a HOME environment variable, the thread dump revealed something very interesting. One of the threads had a name showing the exact git command line being executed, and this thread was waiting to read input from a socket.

Trying this command on the slave in a normal command prompt confirmed that the git command could not find the key and thus failed to talk to the remote repository. Don't use the Git Bash you get with msysgit; Hudson does not use this. Use a standard "cmd" prompt opened from the start menu.

The solution was to set the HOME environment variable and restart the Hudson slave, but at least one question remains...
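On a Windows slave that fix amounts to something like the following from a cmd prompt (the path is a placeholder for wherever the .ssh directory with the key actually lives):

```bat
:: Set HOME persistently for the user running the Hudson slave, then
:: restart the slave; C:\Users\hudson is a placeholder path.
setx HOME "C:\Users\hudson"
```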

How was Hudson able to clone the repository when the keys could not be found? Is this another Git plugin weirdness, or does our gitolite setup have an error?

Sunday, 8 May 2011

It has been a very interesting week in Hudson/Jenkins land with the announcement of the Eclipse proposal. The responses have been very varied, ranging from the thoughtful and constructive to let's-start-another-mudslinging-match. Luckily there is much less mudslinging this time, and it seems to have died down quickly.

Before I get into the details, let me just say that nothing in the proposal changes my overall perception, I remain fork neutral and any plug-in I develop will work on both if at all technically possible.

From my point of view I am mostly positive towards the proposal:

I think this is a good move; bringing it to Eclipse can hopefully reduce some of the bad blood and knee-jerk reactions when people mention Hudson/Oracle, and in time this will hopefully manifest as better cooperation between the two projects.

I am very pleased that this has made Sonatype contribute even more to the open source version instead of Hudson Pro. :-)

It is nice to see that more companies are investing in Hudson; for people like me who try to remain fork neutral it is good to see that the effort is not "wasted".

but it does leave me with some concerns and questions:

One of the mentioned reasons for going with Eclipse is the better governance model, but I haven't seen any explanation of how this governance will be performed, except that two people are listed as project leads. Maybe it is just following a standard Eclipse governance model? (I haven't found the link for that yet.) Some basic clarification would be nice, like: will it be the committers who are in control, or will there be some sort of governance board? If a board, how is the election process run?

I can see why Oracle wants to move to a neutral third party, but I haven't understood why Eclipse and not something like Apache.

How will the re-licensing affect the possibility of porting fixes between the forks? My initial understanding is that since MIT code can be re-licensed as EPL but not the other way around, fixes can be ported only from Jenkins to Hudson. Am I correct?

One of my major concerns with this move is the requirement to use the Eclipse infrastructure. I haven't been a committer on Eclipse based projects, so I am not overly familiar with it, but it does appear somewhat dated compared to java.net + GitHub. I am especially concerned about losing the easy fork-and-pull workflow. Are we really losing that functionality entirely, and if so, what is the replacement?

Another of my concerns is that contributing to the core seems more geared towards company sponsored development, leaving less room for individual free-time contributors like me to help out or get a say in the development. Maybe this is just caused by my own lack of knowledge, but perhaps someone can clarify?

I haven't seen any discussion on the developer mailing list or in JIRA on how to replace the LGPL components. This discussion should be started.

Monday, 18 April 2011

After you have done the release there are a couple of tasks left in order for everything to work cleanly.

Create a stub page on the Hudson wiki.
Go here and create a child page.
- Make sure to add the hudson-plugin-info and excerpt macros.
- Add labels to the page for correct grouping.
- Add a reference to where the plugin is hosted.
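The top of such a stub page ends up looking something like this in Confluence wiki markup; the pluginId value and the exact macro parameters are placeholders to check against an existing plugin page:

```
{hudson-plugin-info:pluginId=my-plugin}
{excerpt}A one-line description of what the plug-in does.{excerpt}
This plug-in is hosted at bitbucket.org (placeholder link).
```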

Create a stub page on the Jenkins wiki
Go here and create a child page. You should follow the same convention as on the Hudson wiki, except that the project info macro is now called jenkins-plugin-info. An example is here.

Announce the plug-in
Announce the plug-in on the Hudson and Jenkins developer lists. Be sure to mention that it is hosted elsewhere, because both projects need to adjust their update centre scripts.

This is because their scripts try to match the wiki URL from the pom against their respective wikis in order to provide a link and description in the update centre. When the script can't find the wiki page, the information will be blank.

The release process is actually fairly simple, even though it is not a one-click process. I normally do the Hudson release first, because then I can validate things in the Nexus staging repository.

My development environment is set up so that I have my SCM (bitbucket) credentials and the repository credentials available, so I do not need to provide them during the process.

Step Zero: Prerequisites
Make sure that everything is committed and that you have performed a "mvn clean release:clean".

Step One: Running release prepare
Simply run "mvn release:prepare". I usually run it in interactive mode and answer the version questions, but you can add parameters to run it in batch mode.

Step Two: Save the release.properties file
Now you actually need to run release:perform twice, once per profile, but Maven cleans up the directory after the first release:perform. Therefore you must provide extra parameters to the second release:perform.

You will need to look at the documentation for which parameters to specify; maybe you can get lucky and only need to specify "tag". I however use another shortcut: the release.properties file that release:prepare generated contains all the needed information, so I save it by making a copy outside the project directory.

Step Three: Do the Hudson release
Now that you have taken note of the needed properties, or simply saved the release.properties file, run "mvn release:perform -Phudson-publish".
Now the plug-in should be in the Nexus OSS staging repository. Verify and release it as described here.

Step Four: Do the Jenkins release
Now copy the release.properties file back into the project directory and run "mvn release:perform -Pjenkins-publish".
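Put together, the whole sequence from my shell looks roughly like this (the profile names come from my pom; the copy destination is just anywhere outside the project directory):

```shell
mvn clean release:clean                  # step zero: start from a clean slate
mvn release:prepare                      # step one: tag and bump versions
cp release.properties ..                 # step two: release:perform deletes this file
mvn release:perform -Phudson-publish     # step three: stage in Nexus OSS
cp ../release.properties .               # restore the saved file
mvn release:perform -Pjenkins-publish    # step four: publish to the Jenkins repo
```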

Sunday, 10 April 2011

Overview:
The main strategy for supporting both Hudson and Jenkins is to make the main pom structure neutral and then use two profiles to handle the version specifics. These profiles, which I have called hudson-publish and jenkins-publish, are represented in both the settings.xml and the pom.xml.

pom.xml:
In order to create a pom that works for both sides, I have done the following things.

Specified the parent version as 1.392. This is the latest pre-fork parent which does not suffer from the bug that prevents publishing of "requiresCoreVersion". The full specification is org.jvnet.hudson.plugins:plugin:1.392.

The groupId of the plug-in must be org.jvnet.hudson.plugins, otherwise publishing to Maven Central won't work.

The basic items like name, description and url must be set. The Hudson update centre uses the URL and the description.

Remember to add license and developer information.

Remember to override the scm and issueManagement sections.

Don't specify distributionManagement outside the profiles.

Configure the release plugin to only run deploy, not site-deploy, and to use forked-path, which is needed by the gpg plug-in.
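The points above can be sketched as one pom skeleton. This is a sketch under assumptions: the artifactId and both repository ids/URLs inside the profiles are placeholders, not the values from my actual pom.

```xml
<!-- Sketch of the fork-neutral pom; ids and URLs are placeholders. -->
<parent>
  <groupId>org.jvnet.hudson.plugins</groupId>
  <artifactId>plugin</artifactId>
  <version>1.392</version>   <!-- latest pre-fork parent -->
</parent>
<groupId>org.jvnet.hudson.plugins</groupId>
<artifactId>my-plugin</artifactId>

<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-release-plugin</artifactId>
      <configuration>
        <goals>deploy</goals>                       <!-- no site-deploy -->
        <mavenExecutorId>forked-path</mavenExecutorId> <!-- for the gpg plug-in -->
      </configuration>
    </plugin>
  </plugins>
</build>

<profiles>
  <profile>
    <id>hudson-publish</id>
    <distributionManagement>
      <repository>
        <id>sonatype-nexus-staging</id>
        <url>https://oss.sonatype.org/service/local/staging/deploy/maven2/</url>
      </repository>
    </distributionManagement>
  </profile>
  <profile>
    <id>jenkins-publish</id>
    <distributionManagement>
      <repository>
        <id>maven.jenkins-ci.org</id>
        <url>http://maven.jenkins-ci.org/content/repositories/releases/</url>
      </repository>
    </distributionManagement>
  </profile>
</profiles>
```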

settings.xml:
There are some things that need to go into the settings.xml which are not profile specific, namely the list of server definitions, which you must add in order to provide usernames/passwords. I have server definitions for the following:

sonatype-nexus-snapshots

sonatype-nexus-staging

maven.jenkins-ci.org

jenkins-publish profile:
In the settings.xml you need to specify the Jenkins Maven repository.
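Taken together, the settings.xml pieces look roughly like this. The credentials are obviously placeholders, and the Jenkins repository URL is an assumption to check against the Jenkins hosting documentation; the three server ids match the list above.

```xml
<!-- Sketch of the settings.xml: server credentials for the three ids
     listed above, plus the jenkins-publish profile. Usernames,
     passwords and the repository URL are placeholders. -->
<settings>
  <servers>
    <server>
      <id>sonatype-nexus-snapshots</id>
      <username>me</username>
      <password>secret</password>
    </server>
    <server>
      <id>sonatype-nexus-staging</id>
      <username>me</username>
      <password>secret</password>
    </server>
    <server>
      <id>maven.jenkins-ci.org</id>
      <username>me</username>
      <password>secret</password>
    </server>
  </servers>
  <profiles>
    <profile>
      <id>jenkins-publish</id>
      <repositories>
        <repository>
          <id>jenkins-releases</id>
          <url>http://repo.jenkins-ci.org/releases/</url>
        </repository>
      </repositories>
    </profile>
  </profiles>
</settings>
```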

Sunday, 3 April 2011

When I write Hudson and Jenkins plug-ins, I want to support both of them and remain as neutral to the fork as possible. In order to do this I have chosen to host my plug-ins in a neutral place (bitbucket), but I still want my plug-ins to show up in both update centres. This series of blog entries is my attempt at documenting how that is done, and this part is about which accounts you need.

My plugin:
My plugin is hosted at bitbucket.org; I use it for source code, wiki and issues. In effect I am not relying on Hudson or Jenkins infrastructure except for publishing to their update centres.

Jenkins:
In order to publish to Jenkins you only need one account. This account is used to access the Jenkins wiki, JIRA and Fisheye, and to publish to their Maven repository. You create it here: https://jenkins-ci.org/account

Hudson:
Hudson is a bit more complicated, as they use the Maven Central repository for hosting their plug-ins, so you will need both access to Hudson and the ability to push to Maven Central.
- First you need access to the Hudson wiki.
- Secondly you will need a PGP key in order to sign your plug-in. I used GNOME's built-in feature, but the Hudson pages have a guide as well.
- You need an account with Nexus OSS in order to use their staging facility and publish to the Maven Central repository. Create the account according to this page, and make sure to create the Jira ticket. You need the latter in order to get the hudson-deployer role.