(Java) Dev Environments by the Assembly Line

In consulting we have to deal with context switches and changing environments quite often. While one project might require developing on JBoss EAP with MySQL, the next one can run on Apache TomEE with a JCR backend on Oracle DB. Managing all this software locally on your laptop can easily become a version nightmare with conflicts here and there (besides wasting resources on things you no longer need). To tackle this problem, we started moving our development environments into virtual machines, which gave us the following benefits:

It nicely isolates project setups from each other

Switching from one project to another can be easily done without re-configuring tons of environment variables (JAVA_HOME, MAVEN_HOME to name a few)

[Managers don’t read this] You have a good justification to get your machine upgraded (VMs without SSDs and lots of RAM are just useless, everybody knows that!)

Distribute a consistent environment setup across your team

Consistency

This last point in particular turns out to be very useful: the number of “but it works on my machine” incidents drops, and onboarding new team members becomes rather simple. Did the “Setup” wiki entry say to install Maven? Now it’s clear it’s Maven 3.3.1, and it’s already pre-installed. Or the 10 steps to get your Eclipse settings right, with all those plugins, formatter options, type filters, favorites and editor settings (to mention the most common ones)… Not much fun to do manually – not to mention forgetting one step and realizing it in the first code review (“Tabs, seriously?!?”).

Distribution

While you can configure an (almost) perfect development environment on a VM, the distribution mechanisms for such VMs are usually less than perfect. Something we often run into in client environments is the “Master VM” setup: one VM to rule them all – stored on an external drive that team members can copy from. While this works reasonably well as a first shot, the problems arise after a while:

No feedback loop: The team decided to change coding conventions? Or to upgrade to the latest JBoss EAP 6.4? The Master VM gets forgotten. Found a nifty little tool and want to share it with the team? You’re back at the “Setup” wiki. Or if you go the hard way, updating the Master VM first and having the team copy from it again? See the next point.

Personalized settings: IDE shortcuts are very individual (and some individuals tend to be religious about them). And where does your private SSH key come from? A uni-directional update path forces you to re-configure all of this over and over again. Your team will not like that.

Tool dependency: While the choice of VM players is limited, you might still want to change the tool at some point. A Master VM pretty much locks you in.

Our solution to these problems has been a configuration management tool: using Puppet allows us to deal with the major feedback loop problem. If you have not heard of Puppet so far: the tool lets you declaratively describe the configuration of nodes and apply this configuration to your machine(s). Going deeper into Puppet or similar tools is beyond the scope of this article, but I warmly recommend doing so (Chef or Ansible deserve a mention as alternatives here as well). The configuration can be put under version control and packaged nicely into modules. So whenever your local setup gets updated, a Puppet module is adapted: one team member applies the change locally using Puppet and commits it to the module, and every other team member re-applies the change by pulling the module – we will see later that this is quite convenient. As Puppet works mainly declaratively, keeping those configurations readable and re-executable is quite a breeze (compared to bash scripts – you know what I mean in case you’ve written lots of those).
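To give a flavour of that declarative style, here is a minimal manifest sketch – package names, versions and paths are illustrative assumptions, not our actual module code:

```puppet
# Declare the desired state; Puppet figures out whether anything needs doing.
# (Sketch only – package name, version and file path are assumptions.)
package { 'maven':
  ensure => '3.3.1',
}

file { '/home/dev/.mavenrc':
  ensure  => file,
  content => "MAVEN_OPTS=-Xmx1024m\n",
}
```

Re-applying such a manifest is a no-op when the machine already matches the declared state, which is exactly what makes repeated runs cheap.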

Let’s see how this setup works in a little bit more detail.

[Image: Feedback loops for clones?]

Bootstrapping the VM

You get started by downloading the ISO image of your Puppet-supporting OS of choice (we assume Ubuntu here) and creating a new VM with your tool of choice. This is the first personalization step: using your regular username for the VM, for example, allows the setup to pick up this information later. Alternatively you can use tools like Vagrant, which give you speedy VM boot and management (though the majority of available boxes are targeted at headless server environments).

Once your VM is ready, you can bootstrap it with a small script that does not much more than minimally prepare your environment. Concretely: installing Git (or a similar SCM) to check out your version-controlled setup scripts. This can be as simple as:

source <(wget -O - http://git.io/[your_init_script.sh])

There are no big secrets in here, so this script can be public if you want to spare users from remembering credentials. We have ours on GitHub. Note that you need to use source here, as piping the script into bash will fail for interactive scripts.

What we have cloned here is a small repository containing the following:

A script to do more initialization work (we’ll come to that)

Basic information about our Puppet modules: Which one do we need and where can we get them?

If you have more than one setup: Which setup profiles do we have and which Puppet modules belong to them?

Personalization

The setup script we called above has two major jobs to accomplish:

Installing Puppet and related tools like Librarian. Note that already at this point the script should be re-executable – or, even better, its re-executions should be reasonably fast (mainly for the sanity of the script developer).

Gathering user information to personalize the VM further. This can include the username and password for the Maven repository manager, your email address, API keys (e.g. for using GitLab) and more.
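The re-executability mentioned above usually boils down to a simple guard pattern: skip work that is already done. A minimal sketch (the function name is ours, and the commented install command is just an example):

```shell
#!/bin/sh
# Idempotency guard (sketch): skip work that is already done, so that
# re-running the whole setup script stays cheap.
install_if_missing() {
  cmd="$1"; shift
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd already installed, skipping"
  else
    "$@"   # e.g. sudo apt-get install -y puppet
  fi
}

# 'sh' exists on any POSIX system, so this call only prints the skip message:
install_if_missing sh echo "installing sh"
```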

To apply this personalization later, we create so-called custom facts for Puppet. Facts are globally accessible in Puppet and can be computed (they usually include things like your OS, architecture, hostname and much more). We can register our own custom facts with a simple Ruby script:

user_facts.rb:

require 'facter'

Facter.add("maven_user") do
  setcode do
    "_maven_user_"
  end
end

# add more facts

Our setup script copies this file into the appropriate location where Puppet picks up the fact – of course we have to replace the “_maven_user_” placeholder with the real value first:

read -e -p "Your Maven repository username: " user_name
FACTS="`sudo puppet config print factpath | sed 's/[:].*$//'`"
sudo cp user_facts.rb $FACTS
sudo sed -i "s;_maven_user_;${user_name};g" $FACTS/user_facts.rb
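The sed on the factpath is there because factpath can be a colon-separated search path, of which we only want the first entry. The same trimming can be shown with plain parameter expansion (the path value below is just an example, not your actual factpath):

```shell
# factpath may be a colon-separated search path; keep only the first entry.
# (Example value – the real one comes from `puppet config print factpath`.)
factpath="/etc/puppet/facts.d:/usr/lib/facter/facts.d"
first="${factpath%%:*}"    # same effect as: sed 's/[:].*$//'
echo "$first"              # prints /etc/puppet/facts.d
```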

For convenience, you can also store the user inputs to source them in a later run. Those runs might be necessary as facts can change (like the Maven repository password).

All those facts can now be accessed by our Puppet modules, e.g. as $::maven_user. The Maven module is then likely going to use that in a templated settings.xml file.
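For illustration, such a template could look like the following sketch – the file name, server id and variable names are assumptions; inside ERB templates, facts are accessible as instance variables:

```erb
<!-- settings.xml.erb (sketch; server id and fact names are assumptions) -->
<settings>
  <servers>
    <server>
      <id>company-repo</id>
      <username><%= @maven_user %></username>
      <password><%= @maven_password %></password>
    </server>
  </servers>
</settings>
```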

So much for personalization. We now have all the information about our users and can start installing tools!

Puppet Module Dependencies and Installation

Puppet modules are the main distribution artifacts of Puppet and bundle so-called manifests that typically install a self-contained piece of software (like an Apache server). You can find a lot of them on Puppet Forge, a public repository with many community-contributed modules. You might want to use some of them; others you have to create yourself to reflect very specific needs (like installing an SSH key into your GitLab instance – at least when we wrote our Git module, no public module could do this). So is a script to just download all your modules enough? Unfortunately it’s not that easy: modules can have dependencies which you need to resolve too. Sounds like too much continuous manual work? The tool to the rescue here is Librarian. Librarian lets you define a simple descriptor telling it which modules you need and where they are:
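That descriptor is a Puppetfile. A sketch might look like this – the module names and repository URLs below are assumptions for illustration:

```ruby
# Puppetfile (sketch) – consumed by librarian-puppet
forge "https://forgeapi.puppetlabs.com"

# Modules from the public Forge (dependencies get resolved automatically)
mod "puppetlabs/stdlib"
mod "puppetlabs/java"

# Our own modules, pulled straight from Git
mod "git",
  :git => "https://gitlab.example.com/puppet/git.git"
mod "eclipse",
  :git => "https://gitlab.example.com/puppet/eclipse.git"
```

Running librarian-puppet against this file fetches the listed modules plus their transitive dependencies into a local modules directory.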

Almost done! With Puppet you typically define a site descriptor which declares what has to be installed on your machine. This might look like:

site.pp:

node ubuntu {
  include git
  include java
  include maven
  include eclipse
  eclipse::plugins::install { ['mechanic',
                               'jbosstools',
                               'jenkins',
                               'sonar',
                               'atlassian',
                               'colortheme',
                               'gitlab']: }
  # Extend as you need
}

The site can also contain multiple nodes, which are resolved by default via your hostname – which is actually a fact, as explained before. By overriding this fact, you can create multiple node definitions that resolve to different developer profiles; a Java developer might have a completely different tool set than a frontend developer, for example.
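Sketched in site.pp terms – the node and module names here are assumptions, as is the exact override mechanism:

```puppet
# Select a profile by overriding the hostname fact, e.g. something like:
#   FACTER_hostname=javadev sudo -E puppet apply site.pp
node javadev {
  include java
  include maven
  include eclipse
}

node frontenddev {
  include nodejs
  include chromium
}
```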

Applying such a setup with Puppet will likely run for an extended coffee break (e.g. while downloading and installing Eclipse plugins). But afterwards you will be rewarded with a fully configured development environment at your fingertips. Note that, if you write your Puppet modules well, applying updates will be much faster.

BTW, if you run on a Mac or just want to see more code, GitHub has created something very similar to update their developers’ machines. Have a look at GitHub Boxen; you will recognize many of the concepts presented here.

Conclusion

Automating development environments has shown a lot of benefits: consistency across your team, fast onboarding of new members, simple sharing of productivity additions or infrastructure upgrades. Even though the initial effort of writing Puppet modules and figuring out how best to deal with them has been relatively high, the long-term ROI will definitely not make you regret it! And: learning about Puppet or a similar tool might also give you new ideas when it comes to continuous delivery or DevOps. Stay tuned for more posts around this topic!

