
The book hits the ground running: the authors thoroughly define Java optimization and performance in the opening chapter. They explain why Java performance tuning may have been incorrectly applied throughout the lifespan of most applications. It is important to note:

The execution speed of Java code is highly dynamic and fundamentally depends on the underlying Java Virtual Machine. An old piece of Java code may well execute faster on a more recent JVM, even without recompiling the Java source code.

There are a number of performance metrics defined and used in this book:

Throughput - the rate of work a system or subsystem can perform.

Latency - the time required to process a single transaction and observe a result.

Capacity - the number of units of work, such as transactions, that can be running simultaneously in a system. This is related to throughput.

Utilization - the percentage of available resources, such as CPU, that are busy doing work rather than idle.

Efficiency - calculated as throughput divided by utilization.

Scalability - a measure of the change in throughput as resources are added.

Degradation - an observed change in latency and/or throughput as a function of an increased load on a system.
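The relationships among these metrics can be made concrete with a little arithmetic. The following sketch uses made-up numbers (not figures from the book) to show how throughput and efficiency, as defined above, are derived:

```java
// Illustrates how the metrics defined above relate to one another,
// using hypothetical numbers rather than measurements from the book.
public class MetricsSketch {

    // Throughput: the rate of work per unit time.
    static double throughput(double workUnits, double seconds) {
        return workUnits / seconds;
    }

    // Efficiency: throughput divided by utilization, per the definition above.
    static double efficiency(double throughput, double utilization) {
        return throughput / utilization;
    }

    public static void main(String[] args) {
        // Suppose a subsystem completes 10,000 transactions in 4 seconds
        // while keeping the CPU 80% utilized.
        double tps = throughput(10_000, 4);   // 2500 transactions/sec
        double eff = efficiency(tps, 0.80);   // throughput per unit of utilization
        System.out.println("Throughput: " + tps + " tps, efficiency: " + eff);
    }
}
```

Degradation, by the same token, would show up as a drop in `tps` (or a rise in per-transaction latency) as offered load increases beyond capacity.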

A number of performance tuning antipatterns have plagued developers over the years. These may have arisen from boredom, resume padding, a lack of understanding, or attempts to solve non-existent problems. The authors define some of these antipatterns and debunk the myths associated with them.

There are plenty of practical heuristics and code-level techniques, but the authors warn that these may do more harm than good if not properly applied. Readers are encouraged to fully understand performance tuning concepts before experimenting with any code or applying advanced command-line parameters, such as those that select garbage collectors or tune the JVM. There are also plenty of code examples that demonstrate the concepts.

Relevant performance tuning topics include how code is executed on the JVM, garbage collection, hardware and operating systems, benchmarking and statistics, and profiling. The authors conclude the book with a chapter on Java 9 and beyond.

Evans and Newland spoke to InfoQ about their experiences, lessons learned, and obstacles in authoring this book.

InfoQ: What inspired you to co-author this book?

Ben Evans: I had the original idea, and I knew I wanted to write with a co-author. James and I are old friends, and he'd helped with "Java in a Nutshell", so I knew we could write well together; he was a natural choice.

The book was in development when we felt we wanted to dive deeper into JIT compilation and cover some of the tools in the space. I'd previously worked with Chris and was really impressed with his JITWatch tool, so we asked him to join us towards the end of the project.

Chris Newland: For the past five years I've worked on an open-source project called JITWatch, a tool for helping Java developers understand the compilation decisions made at runtime by the JVM. I've given a few presentations about the JVM's JIT compilers, but felt that it would be useful for Java developers if there was a book that explained how the technology works.

InfoQ: Did you have a specific target audience in mind as you wrote this book?

Evans: Intermediate to advanced Java developers who want to do performance analysis well, using sound methods rather than guesswork. A secondary audience includes those developers who want to dig deeper into the JVM and understand how it really works, in terms of bytecode, garbage collection and JIT compilation.

Newland: I think we tried to make the book accessible for developers of all levels and there are plenty of examples to illustrate the points we are making. There are parts of the book aimed at mid-level to senior developers who are looking to gain a deep understanding of JVM performance.

InfoQ: While every aspect of Java performance in this book is important, is there a topic that is most important for readers to fully understand?

Evans: That there are no magic spells or "tips and tricks". Performance is not the same as software development - it is its own set of skills, and it is best thought of as a separate discipline.

That discipline is data-driven - it is about the collection and interpretation of observable data. There are difficulties in collecting, analysing and understanding the data, and the behaviour of an entire application is totally different from the behaviour of a small amount of toy code or a tiny benchmark.

Newland: I think the most important idea to take from the book is that everything your program does has a cost, be it allocating new objects, invoking methods, or executing algorithms. In order to improve program performance, you must first understand how to reliably measure it and only then are you ready to begin experimenting with the different performance improvement techniques.
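Newland's point about measuring reliably before optimizing can be sketched with a minimal timing harness. This is a deliberately naive illustration, assuming nothing beyond the standard library; the book (and tools such as JMH) address the many pitfalls of hand-rolled benchmarks like this one:

```java
import java.util.function.Supplier;

// A minimal timing harness: warm up first, then measure.
// This is a sketch only - naive timing like this is exactly the kind of
// "toy benchmark" whose results need very careful interpretation.
public class TimingSketch {

    static <T> long measureNanos(Supplier<T> task, int warmupRuns, int measuredRuns) {
        // Warm-up runs give the JIT compiler a chance to compile the hot path
        // before any timing begins.
        for (int i = 0; i < warmupRuns; i++) {
            task.get();
        }
        long start = System.nanoTime();
        for (int i = 0; i < measuredRuns; i++) {
            task.get();
        }
        // Average wall-clock time per run, in nanoseconds.
        return (System.nanoTime() - start) / measuredRuns;
    }

    public static void main(String[] args) {
        long avg = measureNanos(() -> {
            long sum = 0;
            for (int i = 0; i < 100_000; i++) sum += i;
            return sum;
        }, 1_000, 10_000);
        System.out.println("Average time: " + avg + " ns per run");
    }
}
```

Even a harness this simple makes the cost of a code path observable, which is the prerequisite for any principled improvement.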

InfoQ: How will Java performance be affected, if at all, by the new Java SE release cycle, especially in terms of new features (such as the new var reserved type name introduced in Java 10) and improvements in garbage collection?

Evans: One of the things that the book tries to teach is an understanding of what features are potentially performance-affecting. Things like Java's generics or the new var reserved type name are purely compile-time language features - they only really exist while javac is crunching your code. As a result, they have no runtime impact at all!
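Evans's point can be seen directly: the two methods below behave identically at runtime, because `var` is resolved entirely by `javac` during compilation (a small sketch, requiring Java 10 or later):

```java
import java.util.ArrayList;
import java.util.List;

public class VarSketch {

    // 'var' is local-variable type inference: javac infers ArrayList<String>
    // at compile time, so no trace of "var" survives into the class file.
    static List<String> withVar() {
        var names = new ArrayList<String>();
        names.add("duke");
        return names;
    }

    // The explicit form compiles to the same bytecode for the method body.
    static List<String> withExplicitType() {
        ArrayList<String> names = new ArrayList<String>();
        names.add("duke");
        return names;
    }

    public static void main(String[] args) {
        // Identical behaviour at runtime - there is no runtime cost to 'var'.
        System.out.println(withVar().equals(withExplicitType()));
    }
}
```

Disassembling both methods with `javap -c` would show the same instructions, which is why such features have no runtime performance impact.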

Garbage collection, on the other hand, is something that affects every Java program. It is a very deep and complex subject, and often badly misunderstood by programmers. For example, I often see it asserted (on certain online forums) that "garbage collection is very wasteful compared to manual memory management or reference counting" - usually without any evidence whatsoever, and almost always without any evidence beyond a trivial benchmark that appeals to a specific edge case.

In reality, modern GC algorithms are extremely efficient and quite wonderful pieces of software. They are also extraordinarily general purpose - as they must remain effective over the whole range of Java workloads. This constraint means that it can take a long time for a GC algorithm to really become fully battle-tested and truly production-quality.

In the Java world, backwards compatibility and lack of regressions - including performance regressions - is taken very seriously. For example, in the Parallel collector (which was the default until Java 9), the JVM does a bare minimum of work when allocating new memory. As that work contributes to the observed runtime of your application threads, this keeps program execution as fast as possible. The trade-off (there are always trade-offs) is that more work needs to be done by the collector during a stop-the-world (STW) pause.

Now, in Java 9, the default collector becomes G1. This collector needs to do more work at allocation time, so the observed runtime of application code can increase just because of the change in collection algorithm.

For most applications, the increase in time required for this allocation will be very small - in fact, small enough that application owners won't notice. However, some applications, due to the size and topology of their live object graph, are badly affected.

Recent releases of G1 have definitely helped reduce the number of applications that are negatively affected, but I still run across applications where moving from Parallel to G1 is not a straightforward process. Things like this mean that garbage collection is still a field that requires some careful attention when thinking about performance.
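When assessing a move between collectors, the collector actually in use can be confirmed at runtime rather than assumed. A sketch using the standard `java.lang.management` API; the names reported vary by JVM version and by flags such as `-XX:+UseParallelGC` or `-XX:+UseG1GC`:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Prints the garbage collectors the running JVM has selected, plus their
// collection counts and accumulated pause time - useful for checking that
// a flag change (e.g. -XX:+UseG1GC) actually took effect.
public class GcInfoSketch {
    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName()
                    + ": collections=" + gc.getCollectionCount()
                    + ", time=" + gc.getCollectionTime() + "ms");
        }
    }
}
```

The same MXBeans can be polled over time to observe how pause counts and durations change under load, which is a first step toward the data-driven analysis the book advocates.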

So the new advances in GC - the ongoing maturation of G1, as well as new collectors such as ZGC and Shenandoah that are starting to come to the attention of developers - are definitely a live topic once again for people who care about the performance of their applications.

Newland: The new release cadence allows for a more incremental approach to performance improvements. Rather than waiting 2+ years for major new features, we are seeing more experimental collectors like ZGC and Epsilon along with a steady stream of improvements to the core libraries such as String and Collection performance.

InfoQ: Are there plans for updated editions of this book, if necessary?

Evans: I think that over time, there would definitely be scope for a 2nd edition, assuming that the folks at O'Reilly would like to publish one! The big areas that I think would drive that new edition would be:

First, the arrival of new GC choices into the mainstream - which we've already talked about above.

Second, the whole story around new compilation technology - the Graal JIT compiler is, I think, a bigger deal than many developers realize right now. The work being done to bring private linkage, i.e., jlink custom runtime images, and also ahead-of-time (AOT) compilation to Java is huge, especially for people who want to do microservices and cloud with the architectural changes (including a focus on app startup time) that come with that. Those changes to Java's compilation technologies will inevitably have a major impact on the performance story.

Third, there's a goal that (Java language architect) Brian Goetz refers to as "To align JVM memory layout behavior with the cost model of modern hardware." This is all the Project Valhalla work (including value types) but also the folks looking at vectorization and GPU-based compute. This area - not just value types but so many other pieces of the puzzle - will have a large impact on the way that we build Java/JVM applications and, of course, on their performance.

Newland: I think Graal is a very exciting new JVM technology and as its adoption grows it would be useful to cover it in the book.

InfoQ: What are your current responsibilities, that is, what do you do on a day-to-day basis?

Evans: I'm a Director at jClarity, a Java performance company that I founded with Kirk Pepperdine and Martijn Verburg a few years ago. When I'm not busy there, I also have my own private consulting, research and teaching practice. The aim of that business is to help clients tackle their performance and architecture problems and help grow their developer teams. I'm always happy to hear from people who I might be able to help - ben@kathik.co.uk.

I spend some time helping with a charity called Global Code that runs coding summer schools for young software engineers and entrepreneurs in West Africa. I also do some interim/consulting CTO work, as well as a fair amount of speaking and writing, including of course, running the Java/JVM content track at InfoQ.

Newland: I work in market data, using Java and the JVM to process stock market information in real-time. The programs I write tend to have a small number of performance-critical regions where an understanding of the JVM is essential.

About the Book Authors

Ben Evans is an author, speaker, consultant and educator. He is co-founder of jClarity, a startup that delivers performance tools & services to help development & ops teams. He helps to organize the London Java Community and serves on the Java Community Process Executive Committee, helping define standards for the Java ecosystem. He is a Java Champion, JavaOne Rockstar Speaker and a Java Editor at InfoQ. Ben travels frequently and speaks regularly, all over the world. Follow Ben on Twitter @kittylyst.

James Gough has worked extensively with financial systems where performance and accuracy have been essential for the correct processing of trades. He has a very pragmatic approach to building software and, over the last few years, has worked with hundreds of developers to help improve their understanding of performance and concurrency issues. Follow James on Twitter @Jim__Gough.

Chris Newland has been working with Java since 1999 when he started his career using the language to implement intelligent agent systems. He is now a senior developer and team lead at ADVFN using Java to process stock market data in real time. Chris is the inventor of JITWatch, an open-source visualiser for understanding the Just-In-Time (JIT) compilation decisions made by the HotSpot JVM. He is a keen JavaFX supporter and runs a community OpenJFX build server at chriswhocodes.com. Follow Chris on Twitter @chriswhocodes.