JVM Performance Magic Tricks

HotSpot, the JVM we all know and love, is the brain in which our Java and Scala juices flow. Over the years it has been improved and tweaked by more than a handful of engineers, and with every iteration the speed and efficiency of its code execution get closer to that of natively compiled code.

At its core lies the JIT (“Just-In-Time”) compiler. The sole purpose of this component is to make your code run fast, and it is one of the reasons HotSpot is so popular and successful.

What does the JIT compiler actually do?

While your code is being executed, the JVM gathers information about its behavior. Once enough statistics are gathered about a hot method (10K invocations is the default threshold), the compiler kicks in, and converts that method’s platform-independent “slow” bytecode into an optimized, lean, mean compiled version of itself.

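One common pattern is a loop that special-cases its first iteration, like this sketch, which builds a comma-separated list (the wrapper class and method names here are made up for illustration):

```java
public class LoopPeeling {
    // Special-cases the first iteration: the separator is skipped
    // exactly once, yet the i > 0 check runs on every single pass.
    static String describeIngredients(String[] ingredients) {
        StringBuilder sb = new StringBuilder("Ingredients: ");
        for (int i = 0; i < ingredients.length; i++) {
            if (i > 0) {
                sb.append(", ");
            }
            sb.append(ingredients[i]);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(describeIngredients(new String[] {"flour", "eggs", "milk"}));
    }
}
```

The second loop, below, flips its behavior once Nemo is found.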
```java
boolean nemoFound = false;
for (int i = 0; i < fish.length; i++) {
    String curFish = fish[i];
    if (!nemoFound) {
        if (curFish.equals("Nemo")) {
            System.out.println("Nemo! There you are!");
            nemoFound = true;
            continue;
        }
    }
    if (nemoFound) {
        System.out.println("We already found Nemo!");
    } else {
        System.out.println("We still haven't found Nemo :(");
    }
}
```
What both of these loops have in common is that each one does one thing for a while and then another thing from a certain point on. The compiler can spot these patterns and split the loops into cases, or "peel" several iterations.

Take the first loop, for example. The `if (i > 0)` check is false on the first iteration and true on every iteration after it. Why check the condition every time, then? The compiler would compile that code as if it were written like so:
```java
StringBuilder sb = new StringBuilder("Ingredients: ");
if (ingredients.length > 0) {
    sb.append(ingredients[0]);
    for (int i = 1; i < ingredients.length; i++) {
        sb.append(", ");
        sb.append(ingredients[i]);
    }
}
return sb.toString();
```
This way the redundant `if (i > 0)` check is removed, even if some code gets duplicated in the process; speed is what it's all about.

Living on the edge

Null checks are the bread and butter of Java code. Sometimes null is a valid value for a reference (e.g., indicating a missing value or an error), but sometimes we add null checks just to be on the safe side, and sometimes we don't check at all.

Some of these checks and dereferences may never actually fail. A classic example is an assertion like this:
```java
public static String l33tify(String phrase) {
    if (phrase == null) {
        throw new IllegalArgumentException("phrase must not be null");
    }
    return phrase.replace('e', '3');
}
```

If your code behaves well and never passes null as an argument to l33tify, the assertion will never fail.

Another example would be a dereference without an explicit null check. Even though we don’t always check for null ourselves — especially in cases in which we know (or assume) that null isn’t a possibility — the JVM must always perform the check internally, as it must throw a NullPointerException and not come crashing down, if null is eventually encountered.

After executing the above code many, many times without ever entering the body of the if statement, the JIT compiler might make the optimistic assumption that this check is most likely unnecessary. It would then proceed to compile the method, dropping the check altogether, as if it were written like so:
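For l33tify, that optimistically compiled version amounts to the same method minus the check (a sketch; the class wrapper and main are added here just to keep it self-contained):

```java
public class L33t {
    // Optimistic JIT sketch: the null check has been speculatively
    // dropped, since it never fired during profiling.
    public static String l33tify(String phrase) {
        return phrase.replace('e', '3');
    }

    public static void main(String[] args) {
        System.out.println(l33tify("speed")); // sp33d
    }
}
```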

Not testing for null in the above code can result in a significant performance boost, and in most cases it is a pure win.

But what if that happy-path assumption eventually proves to be wrong?

Since the JVM is now executing native compiled code, a null reference would not result in a fuzzy NullPointerException, but rather in a real, harsh memory access violation. The JVM, being the low-level creature that it is, would intercept the resulting segmentation fault, recover, and follow up with a deoptimization: the compiler can no longer assume that the null check is redundant, so it recompiles the method, this time with the null check in place.
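Crucially, observable behavior never changes. A sketch to illustrate: even after heavy warm-up, during which the JIT may have dropped the check, a null argument still produces exactly the exception the source code promises (the 20,000-iteration warm-up count here is an arbitrary choice):

```java
public class DeoptDemo {
    static String l33tify(String phrase) {
        if (phrase == null) {
            throw new IllegalArgumentException("phrase must not be null");
        }
        return phrase.replace('e', '3');
    }

    public static void main(String[] args) {
        // Warm up: enough invocations for HotSpot to compile l33tify,
        // possibly dropping the never-taken null check.
        for (int i = 0; i < 20_000; i++) {
            l33tify("speed");
        }
        // Semantics are preserved: a null still raises the same
        // exception, via deoptimization if necessary.
        try {
            l33tify(null);
        } catch (IllegalArgumentException e) {
            System.out.println("Caught: " + e.getMessage());
        }
    }
}
```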

Virtual insanity

One of the main differences between the JVM's JIT compiler and static compilers, such as those for C++, is that the JIT compiler has dynamic runtime data on which it can rely when making decisions.

Method inlining is a common optimization in which the compiler takes a complete method and inserts its code into another’s, in order to avoid a method call. This gets a little tricky when dealing with virtual method invocations (or dynamic dispatch).

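A plausible shape for the perform and sing methods discussed below, assuming a Singer interface (the class wrapper and lyric strings are made up for the sketch):

```java
public class Concert {
    interface Singer {
        String sing();
    }

    static class GangnamStyle implements Singer {
        public String sing() { return "Oppan gangnam style!"; }
    }

    // Every call to s.sing() is a virtual invocation: the actual
    // method body is selected at runtime by the type of s.
    static String perform(Singer s) {
        return s.sing();
    }

    public static void main(String[] args) {
        System.out.println(perform(new GangnamStyle()));
    }
}
```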
The method perform might be executed millions of times, and each execution invokes the method sing. Invocations are costly, virtual ones especially so, since the actual code to run must be selected dynamically each time according to the runtime type of s. Inlining seems like a distant dream at this point, doesn't it?

Not necessarily! After executing perform a few thousand times, the compiler might decide, according to the statistics it gathered, that 95% of the invocations target an instance of GangnamStyle. In these cases, the HotSpot JIT can perform an optimistic optimization with the intent of eliminating the virtual call to sing. In other words, the compiler will generate native code for something along these lines:
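In Java-like terms (the actual transformation happens in the generated native code), the guarded, inlined version might look roughly like this sketch; the Opera fallback class is invented for illustration:

```java
public class Devirtualized {
    interface Singer {
        String sing();
    }

    static class GangnamStyle implements Singer {
        public String sing() { return "Oppan gangnam style!"; }
    }

    static class Opera implements Singer {
        public String sing() { return "Figaro!"; }
    }

    // Sketch of the JIT's guarded inlining: the common receiver type
    // is checked once, and its sing() body is inlined on that path.
    static String perform(Singer s) {
        if (s instanceof GangnamStyle) {
            return "Oppan gangnam style!"; // inlined body, no virtual call
        }
        return s.sing(); // uncommon path: regular virtual dispatch
    }

    public static void main(String[] args) {
        System.out.println(perform(new GangnamStyle()));
        System.out.println(perform(new Opera()));
    }
}
```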

Since this optimization relies on runtime information, it can eliminate most of the invocations to sing, even though they are polymorphic.

The JIT compiler has a lot more tricks up its sleeve, but these are just a few to give you a taste of what goes on under the hood when our code is executed and optimized by the JVM.

Can I help?

The JIT compiler is a compiler for straightforward people: it is built to optimize straightforward code, and it searches for the patterns that appear in everyday, standard code. The best way to help your compiler is to not try so hard to help it: just write your code as you otherwise would.