JITWatch is a free, open source tool that analyzes the complex compilation log file output generated by Java HotSpot VM and helps you visualize and understand the optimization decisions it made.

Chris Newland developed JITWatch as part of the Adopt OpenJDK project, and it is available for download from GitHub. Follow Chris on Twitter @chriswhocodes for updates.

In Part 2 of this three-part series, we explored how to make Java HotSpot VM produce the information JITWatch needs. In this article, we walk through some of the more advanced Java HotSpot VM features and explore how you can use the JITWatch Sandbox to test the effects of code changes and configuration settings on Java HotSpot VM behavior.

JITWatch Refresher

You can download the JITWatch binary from GitHub, or you can build it from source code using the following command:

mvn clean install

Then, you can launch JITWatch using ./launchUI.sh (on Linux or Mac OS) or launchUI.bat (on Microsoft Windows). Full instructions for getting started are detailed in the wiki.

As we look at more-advanced Java HotSpot VM features, you will find it useful to have the disassembly binary (called hsdis) installed in your Java runtime environment (JRE) so that you can view the disassembled native code the JIT compilers produce. Instructions for building hsdis can be found here.

Enter the Sandbox

The Sandbox is a JITWatch feature that lets you edit code and then compile, execute, and analyze the Java HotSpot VM JIT logs, all from within the JITWatch application. It’s intended to help developers understand what goes on “under the hood” of Java HotSpot VM when a program is run and to see the effects of small source code changes and Java HotSpot VM switches.

Note that isolated testing of algorithms inside the Sandbox might not result in the same Java HotSpot VM compiler decisions that would be made when running the full application. That’s because Java HotSpot VM will have less profiling information on which to base its optimization decisions.

Once you’ve launched JITWatch, click the Sandbox button in the top left of the main window to open a Sandbox window like the one shown in Figure 1.

As well as supporting Java source code, the Sandbox has support for Scala and Groovy, because these languages compile to bytecode that executes on the Java Virtual Machine (JVM). Other JVM languages—such as Kotlin, Clojure, JRuby, and JavaScript (using Nashorn)—will be supported in the future.

Behind the scenes, JITWatch uses a java.lang.ProcessBuilder to compile and execute your code.
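The exact commands JITWatch issues are an internal detail, but the approach can be sketched as follows. The -XX switches shown are genuine HotSpot options for producing a compilation log; the classpath and class name are hypothetical placeholders, and this is a simplified illustration rather than JITWatch's actual implementation:

```java
import java.util.ArrayList;
import java.util.List;

public class SandboxLauncher {
    // Assemble a java command line that runs a compiled class with
    // JIT logging switched on. The -XX flags are real HotSpot
    // switches; the classpath and main class are placeholders.
    static List<String> buildRunCommand(String classpath, String mainClass) {
        List<String> cmd = new ArrayList<>();
        cmd.add("java");
        cmd.add("-XX:+UnlockDiagnosticVMOptions");
        cmd.add("-XX:+TraceClassLoading");
        cmd.add("-XX:+LogCompilation");
        cmd.add("-cp");
        cmd.add(classpath);
        cmd.add(mainClass);
        return cmd;
    }

    public static void main(String[] args) throws Exception {
        List<String> cmd = buildRunCommand("sandbox/classes", "SandboxTest");
        System.out.println(String.join(" ", cmd));
        // To actually execute it, streaming output to the console:
        // Process p = new ProcessBuilder(cmd).inheritIO().start();
        // p.waitFor();
    }
}
```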

Experimenting with Java HotSpot VM Switches

Let’s begin by looking at which Java HotSpot VM switches can be used to control JIT compilation.

Click the Configure Sandbox button to open the Sandbox Configuration window shown in Figure 2. You can set up each VM language you wish to use in the Sandbox by specifying its home directory.

Next, select Show Disassembly to instruct the Java HotSpot VM to produce human-readable assembly language from the JIT-compiled native code. This capability requires the hsdis binary.

You can override the default setting for the Tiered Compilation option, which controls when code is optimized. With tiered compilation, code is first compiled quickly by the C1 (client) compiler and may later be recompiled by the more advanced C2 (server) compiler after more runtime statistics have been gathered. The Tiered Compilation option is disabled by default in Java SE 7 and enabled by default in Java SE 8.
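You can confirm which setting is in effect from inside the running VM. The sketch below uses the HotSpot-specific HotSpotDiagnosticMXBean to read the TieredCompilation flag; run it with -XX:+TieredCompilation or -XX:-TieredCompilation to see the reported value change (the printed value depends on your JVM version and flags):

```java
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class TieredCheck {
    public static void main(String[] args) {
        // HotSpot-specific diagnostic bean; not part of the standard
        // Java SE API, but present in Oracle/OpenJDK HotSpot builds.
        HotSpotDiagnosticMXBean bean =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        System.out.println("TieredCompilation = "
                + bean.getVMOption("TieredCompilation").getValue());
    }
}
```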

Java HotSpot VM manages a pointer to an object, called an oop (ordinary object pointer), that is usually the same size as the native machine pointer. So on a 64-bit system, the required heap space will be larger than on a 32-bit system. Java HotSpot VM is able to save heap space by representing 64-bit pointers as 32-bit offsets from a 64-bit base; this is known as compressed oops. Disabling the Compressed Oops option simplifies the inspection of the disassembled native code, because it eliminates the pointer arithmetic code needed to support compressed oops. Make sure you understand the implications before altering this setting in a production environment. For more information, see “Compressed Oops”.
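The pointer arithmetic being eliminated is the decode step HotSpot performs on each compressed reference. A minimal sketch of that arithmetic, with illustrative constants (HotSpot typically uses a shift of 3, reflecting 8-byte object alignment, which lets a 32-bit offset address up to 32 GB of heap):

```java
public class CompressedOopDemo {
    // Reconstruct a 64-bit address from a 32-bit compressed oop:
    //   address = heapBase + (narrowOop << shift)
    static long decode(long heapBase, int narrowOop, int shift) {
        return heapBase + (Integer.toUnsignedLong(narrowOop) << shift);
    }

    public static void main(String[] args) {
        long heapBase = 0x100000000L; // hypothetical 4 GB heap base
        int narrowOop = 0x10;         // hypothetical compressed reference
        System.out.println(Long.toHexString(decode(heapBase, narrowOop, 3)));
        // 0x100000000 + (0x10 << 3) = 0x100000080
    }
}
```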