the specifics of the plugin, goal and project involved don't matter and have been stripped out of the error.

basically what maven is telling us is that it tried to connect to a repository called “codehaus-snapshots” and failed because it could not resolve “nexus.codehaus.org”.

tracking down exactly where your project picked up this repository reference can be tricky, as mvn dependency:list-repositories may not always show it, but maven is telling us the repository name: codehaus-snapshots. all we need to do now is “blacklist” this repository (sort of) by telling maven not to look for either releases or snapshots in it. to do that, you add the following bit of xml to your <repositories> section:
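the original snippet didn't survive in this copy; a typical override along these lines reuses the offending repository's id and disables everything (the url value hardly matters, since both releases and snapshots are switched off):

```xml
<!-- overrides the real codehaus-snapshots repository by reusing its id;
     with releases and snapshots both disabled, maven never contacts it -->
<repository>
  <id>codehaus-snapshots</id>
  <name>disabled codehaus snapshots</name>
  <url>http://repository.codehaus.org/</url>
  <releases>
    <enabled>false</enabled>
  </releases>
  <snapshots>
    <enabled>false</enabled>
  </snapshots>
</repository>
```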

this repository that we’ve just defined overrides the “real” one (because we’ve used the same name), and maven will never access it for anything. you may have to repeat this process several times if your dependencies refer to codehaus by various other names. in my project i had to define “codehaus.org”, “codehaus-snapshots” and “snapshots.codehaus.org” (6 xml elements in total).

only then did i discover that this was masking some other real issue, but such is life 🙂

Maven makes it easy to manage your dependencies. long gone are the days of /lib folders and chasing after various obscure *.jar files you never directly used. these days you just declare the set of libraries you know you need, and most of the time maven handles the rest through transitive dependencies and dependency version resolution.

most of the time.

sometimes though, things can get very ugly and very hard to diagnose.

there are a few common problems that can ruin your day:

directly using a transitive dependency: if my code depends on library A, and library A in turn depends on library B, then B is on my compile and runtime classpaths. this means that i can directly use classes from B in my code without ever declaring a direct dependency on it. this becomes a problem when the maintainers of A later decide to drop B, at which point your code stops compiling after you upgrade A.

dependency bloat: during development you’ll often bring in various libraries (because it’s so easy) only to discard them later on. multiply this by the number of modules and developers working alongside you and you end up with a lot of no-longer-used dependencies that everyone’s afraid to remove, because they’re not sure who’s still using them.

version uncertainty: suppose your code depends on libraries A and B, and both of these libraries depend on library C, but on different versions of it. it is not always clear which version of C will end up on your runtime classpath, and the answer might depend on things like the exact version of maven used to compile the project.
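one common way to take the uncertainty out of this is to pin C's version explicitly in a dependencyManagement block; the coordinates below are placeholders standing in for "library C":

```xml
<dependencyManagement>
  <dependencies>
    <!-- placeholder coordinates: pinning the version here makes it apply to
         the transitive dependency too, instead of relying on maven's
         nearest-definition-wins mediation -->
    <dependency>
      <groupId>com.example</groupId>
      <artifactId>library-c</artifactId>
      <version>1.2.3</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```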

duplicate class declarations on the classpath: it’s possible that among the dozens of libraries your project depends on, two will define the exact same class/file, and which copy gets used is up to the classloader. in java EE environments all copies might get used. and if you’re especially unlucky, the copies won’t even be identical code. don’t believe me? have a look here.

cross-build injection attacks: this is for the more paranoid among us, but i’ve included it here for completeness’ sake. it basically comes down to how much you trust code brought in by maven from outside; it might be intercepted and/or replaced between builds. a solution to this problem is presented by gary rowe on his blog.

now that we’ve covered what could go wrong, let’s see how we can defend against it.

first of all, i’d like to cover an invaluable tool for tracking down these issues: mvn dependency:tree. this maven invocation prints out your entire dependency tree, everything included, and is very useful for tracking down the source of any dependency issues you may have.

issues #1 and #2 are addressed by the maven-dependency-plugin invocation. the invocation itself is tucked away under a profile because there may be scenarios where it’ll give false positives. for example, if you’re running a build without tests (-Dmaven.test.skip=true) it’ll complain about a bunch of unused test libraries. so when you know you’re doing a full build, you can invoke this check by adding -PStrictDependencies to your maven command line.
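the invocation itself wasn't preserved in this copy; a sketch of such a profile, assuming the plugin's analyze-only goal with failOnWarning turned on, might look like:

```xml
<profile>
  <id>StrictDependencies</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-dependency-plugin</artifactId>
        <executions>
          <execution>
            <id>analyze-dependencies</id>
            <goals>
              <goal>analyze-only</goal>
            </goals>
            <configuration>
              <!-- fail the build on used-but-undeclared (issue #1) or
                   declared-but-unused (issue #2) dependencies -->
              <failOnWarning>true</failOnWarning>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</profile>
```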

hosting maven repositories on github is a common need. the best solution i came across was posted by Michael Burton on StackOverflow, and it is an excellent one for most cases. unfortunately for me, i was looking for something like a “central” maven repository for multiple (possibly unrelated) projects of mine, and not a repository-per-project, which is what his solution creates.

so, based on his approach, i managed to arrive at a solution that gives me a single parent maven project that all my projects use, which makes “mvn clean deploy” just work.

the deployment solution is slightly more complex than michael’s, since we don’t want to overwrite the entire repository; we want to add our build’s artifacts to the artifacts already in the repository. so this is how i did it:

use the maven-scm-plugin to check out the repository project to a location under /target

3.2 and 3.3 above are taken from michael’s original solution. the main difference is in step 3.1, where we check out the current state of the maven repository so that we add to it and merge it back, instead of simply deploying to a temp location and overwriting the repository with that location (which would leave only our newly-built project in the repository).
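step 3.1 can be sketched with the maven-scm-plugin's checkout goal; the repository URL and directory below are placeholders, not the post's actual values:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-scm-plugin</artifactId>
  <executions>
    <execution>
      <id>checkout-maven-repo</id>
      <phase>deploy</phase>
      <goals>
        <goal>checkout</goal>
      </goals>
      <configuration>
        <!-- placeholder URL: the git project that serves as the maven repository -->
        <connectionUrl>scm:git:git@github.com:youruser/maven-repository.git</connectionUrl>
        <!-- check out under /target so it gets cleaned with the build -->
        <checkoutDirectory>${project.build.directory}/maven-repository</checkoutDirectory>
      </configuration>
    </execution>
  </executions>
</plugin>
```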

you will also need to specify your github credentials in your maven settings.xml file, in your user’s home directory.
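for example, a server entry in ~/.m2/settings.xml along these lines (the id and credentials here are placeholders; the id has to match whatever server id the deployment configuration references):

```xml
<settings>
  <servers>
    <!-- placeholder values: the id must match the server id used by the
         deployment configuration in the parent pom -->
    <server>
      <id>github</id>
      <username>youruser</username>
      <password>yourpassword</password>
    </server>
  </servers>
</settings>
```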
to actually use all this configuration in any of your other github projects, all you need are 2 things:

for example, i’ve converted my jgroups cluster lock demo project to use this mechanism, as you can see in the project’s pom file.

the main downside to this approach is the lack of cleanup. the maven deploy plugin can deploy to a maven repository (or any directory laid out as a maven repository), but cannot clean up older copies of a project from the repository. the repository is a simple git project, however, so when it gets too large for you, you can simply check it out, clean it up, and push it back.

another drawback is that you clone the whole repository project every time. as long as you don’t deploy giant binaries to it or hundreds of projects, this probably isn’t a big issue, but it might become one. maybe some day i’ll learn some bit of git magic to only check out the “head” of the repository instead of cloning the whole thing complete with history…
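for what it's worth, git's shallow-clone option does roughly this: it fetches only the most recent commit and none of the history (the URL below is a placeholder):

```
# fetch only the latest commit of the default branch, without the full history
git clone --depth 1 git@github.com:youruser/maven-repository.git
```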

sometimes when building a distributed/clustered application you want a piece of code to execute on only a single cluster node at a time. standard solutions to this (the synchronized keyword, j2ee @Singleton, java concurrent locks…) don’t cover clustered scenarios (any scenario where you want to ensure locking across multiple virtual machine instances), so something a little more complicated is needed. in my case the simplest solution i could find was jgroups.

jgroups is a very popular library for multicast communication and is the basis for the clustering capabilities of many other applications (jboss/infinispan for example) and, as will be shown in a minute, is very useful on its own as well.

ok then, on to code. the locking API we’re after is a very simple one:

this allows multiple named locks: locking and unlocking are done by lock name, and the method calls block.
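the original snippet isn't reproduced in this copy; a single-JVM sketch of the same API shape (the names here are my own, not the post's) could be:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.locks.ReentrantLock;

// single-JVM stand-in for the lock API described above: locks are addressed
// by name, and lock()/unlock() block like ordinary java locks. in the actual
// clustered demo this role is played by jgroups, not by this class.
public class NamedLocks {
    private final ConcurrentMap<String, ReentrantLock> locks = new ConcurrentHashMap<>();

    public void lock(String name) {
        // lazily create one lock per name, then block until it is acquired
        locks.computeIfAbsent(name, n -> new ReentrantLock()).lock();
    }

    public void unlock(String name) {
        ReentrantLock lock = locks.get(name);
        if (lock != null) {
            lock.unlock();
        }
    }
}
```

in the clustered case, jgroups' LockService plays this part: wrapping a channel with new LockService(channel) and calling getLock(name) yields a java.util.concurrent.locks.Lock that is held cluster-wide.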

jgroups is built around the concept of protocol stacks: you build a protocol stack starting from a transport layer at the bottom (udp in our case, or tcp if you’re forced off udp) and pile higher-level functions on top of it: peer discovery, packet fragmentation, retransmission, locking (which is what we’re after), etc.

here’s a simple jgroups stack, based off the default udp stack, that supports locking:
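the stack definition didn't survive in this copy; a minimal sketch based on jgroups' default udp stack, with the central locking protocol stacked on top, might look like the following (real stacks carry many tuning attributes on each protocol, omitted here for brevity):

```xml
<!-- udp-based stack with CENTRAL_LOCK at the top to provide the lock service;
     protocol attributes are left at their defaults in this sketch -->
<config xmlns="urn:org:jgroups">
    <UDP/>
    <PING/>
    <MERGE3/>
    <FD_SOCK/>
    <FD_ALL/>
    <VERIFY_SUSPECT/>
    <pbcast.NAKACK2/>
    <UNICAST3/>
    <pbcast.STABLE/>
    <pbcast.GMS/>
    <UFC/>
    <MFC/>
    <FRAG2/>
    <CENTRAL_LOCK/>
</config>
```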

if you want to play around with this yourself, i put the complete code for the demo application (with gui) on github. it’s a maven project that produces an executable jar. if you run it, a window pops up with a big lock icon that allows you to toggle a lock. you can run several instances on the same machine or on several machines and see for yourself how only one instance can hold the lock at a time. another nice feature of jgroups is that if you kill any of the instances, the lock is released after a short while (2 seconds).

A lot of development machines these days have more memory than they need, but not enough of them pack SSDs. And even those that do might be encumbered by all sorts of workplace-mandated annoyances: full drive encryption, hyperactive antivirus software, that utterly useless backup app that IT set to run every day at noon, those sorts of things.

Introducing the RAM drive – take a chunk of unused memory and turn it into a hard drive. It may not store data after a reboot, but it’s still quite useful (as i’ll strive to demonstrate in a few posts).

The software I like best for this is ImDisk: it’s lean, and it’s open source to boot. Once it’s installed, it’s really simple to configure it to create and format a 2GB Z: drive on each boot, using the windows task scheduler and a script along the lines of
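the script itself wasn't preserved here, but based on ImDisk's documented command line it would be something like this: -a attaches a new virtual disk, -s sets its size, -m assigns the drive letter, and -p passes parameters through to the format command:

```
rem create a 2GB ram drive on Z: and format it as ntfs (run at boot, elevated)
imdisk -a -s 2G -m Z: -p "/fs:ntfs /q /y"
```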