
jkauzlar writes "The standard Oracle JVM has about sixty 'developer' (-XX) options which are directly related to performance monitoring or tuning. With names such as 'UseMPSS' or 'AllocatePrefetchStyle', it's clear that Joe Schmo Code Monkey was not meant to be touching them, at least until he/she learned how the forbidding inner recesses of the JVM work, particularly the garbage collectors and 'just-in-time' compiler. This dense, nearly-700-page book not only explains these developer options and the underlying JVM technology, but also discusses performance, profiling, benchmarking and related tools in surprising breadth and detail. Not all developers will gain from this knowledge and a few will surrender to the book's side-effect of being an insomnia treatment, but for those responsible for maintaining production software, this will be essential reading and a useful long-term reference." Keep reading for the rest of jkauzlar's review.

Java Performance

author: Charlie Hunt and Binu John
pages: 693
publisher: Addison Wesley
rating: 9/10
reviewer: Joe
ISBN: 0-13-290525-6
summary: Java performance monitoring and tuning

In my experience, performance tuning is not something that is given much consideration until a production program blows up and everyone is running around in circles with sirens blaring and red lights flashing. You shouldn't need a crisis, however, before worrying about slow responsiveness or long pauses while the JVM collects garbage at inconvenient times. If there's an opportunity to make something better, if only by five percent, you should take it, and the first step is to be aware of what those opportunities might be.

First off, here's a summary of the different themes covered:

The JVM technology: Chapter 3 in particular is dedicated to explaining, in gory detail, the internal design of the JVM, including the just-in-time compiler and garbage collectors. Since this is requisite knowledge for anyone hoping to make use of the rest of the book, especially the JVM tuning options, a reader would hope for it to be explained well, and it is.

JVM Tuning: Now that you know something about compilation and garbage collection, it's time to learn what control you actually have over these internals. As mentioned earlier, there are sixty developer options, as well as several standard options, at your disposal. The authors describe these throughout sections of the book, but summarize each in the first appendix.

Tools: The authors discuss tools useful for monitoring the JVM process at the OS level, tools for monitoring the internals of the JVM, profiling, and heap-dump analysis. When discussing OS tools, they're good about being vendor-neutral and cover Linux as well as Solaris and Windows. When discussing Java-specific tools, they show a bias toward Oracle products, opting, for example, to describe NetBeans' profiler without mentioning Eclipse's. This is a minor complaint.

Benchmarking: But what good would knowledge of tuning and tools be without being able to set appropriate performance expectations? A good chunk of the text is devoted to lessons on the art of writing benchmarks for the JVM and for an assortment of application types.
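The warmup-then-measure shape of a JVM microbenchmark that those chapters teach can be sketched in a few lines. This is only an illustration with a placeholder workload, not the book's harness; real benchmarks also have to defend against dead-code elimination and on-stack replacement, which frameworks like JMH handle properly.

```java
// A minimal microbenchmark sketch: warm up so the JIT compiles the hot
// path, then time a second batch of iterations. The 'sink' accumulator
// keeps the optimizer from discarding the work entirely.
public class TinyBench {
    // placeholder code under test: sum of 0..n-1
    static long workload(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) sum += i;
        return sum;
    }

    public static void main(String[] args) {
        long sink = 0;
        for (int i = 0; i < 10_000; i++) sink += workload(1_000);   // warmup
        long start = System.nanoTime();
        for (int i = 0; i < 10_000; i++) sink += workload(1_000);   // measured
        long elapsed = System.nanoTime() - start;
        System.out.printf("%d ns total (sink=%d)%n", elapsed, sink);
    }
}
```

Even this toy version shows why benchmarking gets a whole section: run it without the warmup loop and the numbers change substantially.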

Written by two engineers for Oracle's Java performance team (one former and one current), this book is as close to being the de facto document on the topic as you can get and there's not likely to be any detail related to JVM performance that these two men don't already know about.

Unlike most computer books, there's a lot of actual discussion in Java Performance, as opposed to just documentation of features. In other words, there are pages upon pages of imposing text, indicating that you actually need to sit down and read it instead of casually flipping to the parts you need at the moment. The subject matter is dry, and the authors thankfully don't try to disguise this with bad humor or speak down to the reader. In fact, it can be a difficult read at times, but intermediate to advanced developers will pick up on it quickly.

What are the book's shortcomings?

Lack of real-world case studies: Contrived examples are provided here and there, but I'm really, seriously curious to know what the authors, with probably two decades between them consulting on Java performance issues, have accomplished with the outlined techniques. Benchmarking and performance testing can be expensive processes and the main question I'm left with is whether it's actually worth it. The alternatives to performance tuning, which I'm more comfortable with, are rewriting the code or making environmental changes (usually hardware).

3rd Party tool recommendations: The authors have evidently made the decision not to try to wade through the copious choices we have for performance monitoring, profiling, etc, with few exceptions. That's understandable, because 1) they need to keep the number of pages within reasonable limits, and 2) there's a good chance they'll leave out a worthwhile product and have to apologize, or that better products will come along. From my point of view, however, these are still choices I have to make as a developer and it'd be nice to have the information with the text as I'm reading.

As you can see, the problems I have with the book are with what is missing from it, not with what's already in there. It's really a fantastic resource, and I can't say much more than that the material is extremely important and that if you're looking to improve your understanding of it, this is the book to get.

It seems this kind of volatile deep non-documented black magic might change radically from JVM revision to revision. Although the Oracle "documentation" page [oracle.com] seems to contain a lot of "legacy" options, there still seems a risk that this book would be outdated as soon as the next JVM release.

Oh, well, the tech publishing industry seems to be doing pretty well, even if the rate of technology change means that a tech fact is OBE before it's committed to ink.

Indeed. These sorts of options are so version dependent (not even going to alternative implementations) that I think the overwhelming majority of developers would want to stay far away from this sort of book.

They're not going to throw out the JVM and rewrite it from scratch between releases. If there are 60 options now, there may be 66 in the next release. That means 90% of the book is still useful and the other 10% is just missing.

On top of that, as the reviewer clearly states "Unlike most computer books, there's a lot of actual discussion in Java Performance, as opposed to just documentation of features.... there are pages upon pages of imposing text, indicating that you actually need to sit down and read it...". So this book is already the kind of book that isn't going to be overturned by one more JVM release. It may contain actual wisdom rather than a list of flags.

Exactly. Take even the simplest Linux command, like 'rm'. Now look at this excerpt from the man page:

--no-preserve-root do not treat `/' specially

What does that mean? That little blurb really isn't sufficient to learn what that option does. If you already are familiar with rm, then that blurb will likely remind you of the intended action. Unfortunately, the goal of a lot of online documentation is to refresh yo

The development team at my company is currently reading this book. I'm three chapters in, and am having a hard time following it all. I often have to reread paragraphs, or entire pages, as soon as I finish them just to keep terms and names straight. Some of the applications discussed have five-word names, like Microsoft Process Performance Analysis Console, or something head-spinning like that. That's the actual name of the application, so there is no other way to refer to it, so it's just the nature

I can see the use for options to specify heap sizes, and to tweak latency vs. throughput of the GC. These are critical for memory-constrained and timing-constrained applications, respectively. I presume the other options are performance related, but how effective are they really?

You have a point, and the answer isn't any good either. You specify the heap size so that the garbage collector can be lazy and not clean up memory until the heap fills to a certain level. I think it's just a Java thing. C# and .NET grow as needed up to the system's limit. GoLang, even though native, is garbage collected (sorta), and I've yet to find some arbitrary default, if one exists. Java only wants to use 1/4th or less of the available system memory, and I can only think of reasons linked to Garba

Bugger. I had modded a bunch of posts on this thread and now have to chuck them away just to counter your ignorance (lest your incorrect assumption spread to others). It's cool that you are taking a stab at why Java might limit memory, but unfortunately that guess is not correct, so I hope to set you (and any other readers) straight on why Java has this feature (if you don't understand the feature you'll see it as a limitation, when in fact it is very important for security).

The limit Java imposes on memory is to ensure that critical resources (memory) are preserved for the system, and it gives the Java user a way to limit what any application can do. This is a very important protection if you followed Java's internet model, where Applets (or, these days, WebStart applications) allow remote applications to run. System administrators can also partition what any application can do, in case one is rogue (they can't trust those damn application developers, like me) and uses up all the system memory shared between many applications when running on Big Iron. I don't know whether any remotely running .NET applications have this protection and would be interested to hear if they do; if they don't, then any .NET application could in theory bring your server (and all the other important services you have running on it) to a crawl (as it swaps memory) or possibly even crash it through memory exhaustion.

Double Bugger. I was going to mod you up, but you're sort of missing a piece of the puzzle. Java is supposed to be write once, run anywhere. So, yes, a decent operating system should be able to enforce memory limits on applications so they don't bring your server to a crawl, but that's not built into Java, so they had to do it by default with the stupid limitation on all programs.

With the memory limitation built in, you can easily port your application to a server of a completely different architecture that lacks a memory limitation feature. I don't see why that's so hard to understand, but apparently two of you find it hard.

The book is discussing run-time tuning, so 'write once run anywhere' does not enter into it.

It seems that a lot of people on here don't 'get' run-time tuning, or they don't think it is important. In the environments where these parameters would normally be used (enterprise software), tuning is critical. All enterprise middleware and most applications have tons of tuning options. In an enterprise environment you do want the ability to trade off memory usage for processor usage, for instance. You want to control ho

Ok, I wasn't specifically talking about the context of the book, but rather the rationale behind the run time options. Their "write once, run anywhere" philosophy extends beyond the actual writing of the code to the run time tuning parameters. Comprendes?

No, you are entirely wrong. Please show us any document that states what you said. "Write once, run anywhere" means exactly what it says. The developer does things once, and the user runs it in whatever environment they want. How does a user specifying different tuning parameters when he starts his JVM affect what the developer did? The application was still written (and compiled) once. And the application will run, unchanged, no matter what the users priorities are with regards to things like heap s

" And the application will run, unchanged, no matter what the users priorities are with regards to things like heap sizes, GC techniques, etc. That is pretty much the very essence of "write once, run anywhere"."

No, the application's performance is very much affected by run-time switches, obviously. I can cause it to completely not run at all with a run-time switch, by setting the heap to a ridiculously small number.

But the main point in our disagreement is you have an exceedingly small ability to understan

That particular reason would only apply to applets, where the default is set to 64 megs. The first and primary reason to go in and change the memory allocation is to decrease garbage collections. If Eden is TOO big your garbage collections take TOO long, but if your Eden is TOO small your garbage collections are TOO frequent. Only lazy and arrogant system admins would think that the memory settings were for them to enforce some form of system policy.

Then learn to be a good sysadmin and set the memory limits for processes using the OS. Using the Java Xmx setting will only screw with the garbage collection, and you'll still not have accounted for the GC and VM overhead. I can actually tell Java to use 128 megs and it still takes 200. If you're trying to optimize the performance of the garbage collector then go ahead, start fiddling with them. Other systems have either determined that it's not worth the hassle or have figured out a better way of managing

In production I have had to change the garbage collector limits to stop it thrashing when the total size of objects was near the default heap size.
You do need to have this ability, and the O/S doesn't provide it (you can't control the garbage collector using O/S switches). Oh, and if the overhead of the VM is 72M you can easily do the math to determine what size to set for your memory limit; again, at least you have the ability to do this with Java. So, the memory limit switch can be used in two ways.
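Doing that math is easier when you can see the limits the running JVM is actually operating under. A small sketch using only the core `Runtime` accessors (no assumptions beyond the standard API):

```java
// Prints the heap ceiling the JVM will grow to (-Xmx), how much it has
// currently reserved from the OS, and how much of that is unused.
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.println("max   : " + rt.maxMemory() / (1024 * 1024) + " MB");
        System.out.println("total : " + rt.totalMemory() / (1024 * 1024) + " MB");
        System.out.println("free  : " + rt.freeMemory() / (1024 * 1024) + " MB");
    }
}
```

Note that `maxMemory()` reports the heap only; the VM's own overhead (code cache, thread stacks, metadata) sits on top of it, which is why a 128 MB -Xmx process can still show 200 MB to the OS.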

Thus we are back to my original point. The Java memory switches are for garbage collection management. They are not, as you put it, so that the "System administrators can also partition what any application can do, in case one is rogue". They are there so that you can adjust the GC, and your OS has its own controls to stop an application from going rogue and taking down your system. Using them as memory limits to prevent an application from going rogue is counterproductive to using them to optimize garbage co

The JVM really needs to get smarter. 60 different controls and switches is just too much. How hard can it be for the JVM to look at the available number of cores and just turn on the Parallel Garbage Collector? Do I really have to turn it on manually so that Minecraft will use it? Why can't the JVM allocate more memory on its own? Does it really need permission to use more than 1 Gig of memory? It just sits there waiting for the day some user decides to import every single possible datapoint into it, then crashes with an "Out of Memory" error having used 1 Gig of 8. It's not like developers know what the Xmx and Xms settings need to be. They just set them arbitrarily high in hopes that some user doesn't try to find out what the maximum datafile it can take is. That just slows it down and means that when the GC finally does fire it has 10x the amount of trash it would have had if the value were set lower. Those options are only useful on internal applications that never get into the hands of everyday users. It's probably a great book for server-side development, but it is highlighting a major failing of the JVM.

You didn't prove anything except point out yet another option to be set. If that was the best way to set up the JVM, why isn't it the default? Why is it left up to the user to specify it? Why do users have to figure out how to tweak it so that Minecraft works "optimally" on multicore machines? When you have to figure out this

-Xincgc
Enable the incremental garbage collector. The incremental garbage collector, which is off by default, will eliminate occasional garbage-collection pauses during program execution. However, it can lead to a roughly 10% decrease in overall GC performance.

I think you already answered your own question. There is no "one size fits all," and if you can live with a 10% decrease in overall GC performance, then you can enable -Xincgc and have fewer GC-related pauses. It just depends on what you want and what your use case is.
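Whichever trade-off you pick, you can observe which collectors the JVM actually ended up running, and what they have cost so far, through the standard management beans. A sketch; the collector names printed ("PS Scavenge", "ConcurrentMarkSweep", etc.) vary by JVM version and flags, so treat them as examples:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Lists each active collector with its cumulative collection count and
// total pause time so far, via the standard java.lang.management API.
public class GcInfo {
    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc :
                ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: %d collections, %d ms%n",
                    gc.getName(), gc.getCollectionCount(),
                    gc.getCollectionTime());
        }
    }
}
```

Polling these counters before and after a workload is a cheap way to check whether a flag change actually reduced pause time.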

If I understand Performance Options Option and Default Value [oracle.com] correctly, then most of the options you mentioned are enabled by default already anyway, except the flags DisableExplicitGC and UseParallelGC; some others you mentioned I didn't find.
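For checking defaults like the ones on that page against your own JVM: on recent HotSpot releases the flag `-XX:+PrintFlagsFinal` dumps every flag with its effective value, and the options the JVM was actually launched with can be read from inside the process. A small sketch using the standard management API:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.RuntimeMXBean;

// Prints the -XX (and other) options this JVM was started with.
// Defaults that were not overridden on the command line do not appear
// here; use -XX:+PrintFlagsFinal to see those.
public class ShowJvmFlags {
    public static void main(String[] args) {
        RuntimeMXBean runtime = ManagementFactory.getRuntimeMXBean();
        for (String arg : runtime.getInputArguments()) {
            System.out.println(arg);
        }
    }
}
```

Run as, say, `java -XX:+UseParallelGC ShowJvmFlags` and the flag is echoed back, which makes it easy to confirm a tuning script is passing what you think it is.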

The major use of Java is server side enterprise stuff, and all those controls are critical there. It is certainly not a major failing of the JVM, it is an important feature. The alternative to the JVM having all those controls is for each and every application to have its own controls.

Some of these directives may be VM-specific and might change the way synchronization or allocation works in such a way that it's inconsistent with the "default" and breaks applications that weren't tested with these options. Additionally, Java has a tendency to pre-allocate memory when it's not needed, in preparation for future allocation. Operating systems will likely show this as "memory in use" even though Java will give up this memory when it detects the OS is running low. Users that aren't aware of this

"The JVM really needs to get smarter. 60 different controls and switches is just too much. How hard can it be for the JVM to look at the available number of cores and just turn on the Parallel Garbage Collector. "

It's easy enough to count cores and enable the parallel collector, but total CPU usage increases when using the Parallel Garbage Collector. This means that if you have a lightly loaded system the Parallel Garbage Collector is a net gain, but if your system is already running close to 100% at al

In my experience, performance tuning is not something that is given much consideration until a production program blows up and everyone is running around in circles with sirens blaring and red lights flashing.

- if production blows up it signals that the underlying problem is not likely to be fixed with 'performance tuning'.

There is performance deficiency and then there is "production blows up" and those are different things and must be addressed by different sets of practices at different times.

Production blowing up means the design is flawed, it means misunderstanding of how the application was going to be used.

Slow response time, on the other hand, is about tuning, but it's very unlikely that environment tuning can really help to fix this.

Back in 2001 I was working for what was then Optus, which was bought out by Symcor, and the main project they brought me in on (contract) was for the Worldinsure [insurance-canada.ca] insurance provider. The project was to do some weird stuff, business-wise speaking: allow clients to compare quotes from different insurance providers. The business model was changing all the time, because insurance providers do not want their products compared against one another online (big surprise).

The contract was expensive (5 million) and WI wouldn't pay the last bit (a million, I think) until the application started responding at 200 requests per second, and it was doing 20 or so. :)

If anybody thinks that just some VM tuning can fix a problem where an application is 10 times slower than expected, well, you haven't done this long enough to talk about it.

It took a month of work (apparently I wrote a comment on it before) [slashdot.org] that included getting rid of the middleware persistence layer, switching to JSP from XSLT, reducing session size by a factor of 100, desynchronising some data generators, whatever. Finally it would do 300 requests per second.

But the point is that when things are crashing or when performance is really a huge issue, you won't be optimising the VM.

VM optimisation is generally not done because, for it to pay off, I think the application has to be doing something that is not generic.

Imagine an application that only does one thing: say it only reads data from a file and then runs some transformation on it; maybe it's polygon rendering. When you know that your app is doing only ONE THING, then you can probably use VM optimisation, because you can check that the one thing your app does becomes faster or more responsive, whatever.

But if your app includes tons of functionality and tons of libraries that you don't have control over and it runs in some weird container on top of JVM, then what do you think you are going to achieve with this?

You likely will optimise something very specific and then you'll introduce a strange imbalance in the system that will hit you later and you won't see it coming at all.

If your app does one thing, maybe you have a distributed cluster with separate instances being responsible for one type of processing, then you probably can use specific optimisation parameters.

I've worked with Java since 1.0. The only optimization options I've ever used were the heap and stack size adjustments.

Setting your memory heap too high actually degrades performance, oddly enough. I've got 4GB on this box and over 2.5G is normally used by disk cache, but if I allocate more than about 768MB to the heap, the performance suffers.

Maybe some of these options have real effects on certain production code characteristics, but I've found the best performance tuning options are:

Whenever and wherever possible, use intrinsic types, especially extracting a char from a String for evaluation rather than using the object accessors. For whatever perverse reason, the Oracle Java compiler will keep re-fetching the value by re-executing charAt() rather than realizing it's a constant once the value has first been extracted, because the String isn't changing. Net performance boost for my code: over 30% improvement for a day's coding on a multi-year project.

Instead of allocating and destroying objects, consider hanging on to used objects and using your own allocator. This won't help for implicit object construction, but reusing modifiable objects helps performance dramatically. I saw about a 10% performance improvement when I experimented with this. Raw object allocation is EXPENSIVE.

Wherever possible, tighten your loops into a single statement of execution. For unknown reasons, the JVM seems to perform better calling a small function fragment than it does executing inline code in a for/while loop block. This makes no sense based on what I know of C++ tuning, but there you have it: Java likes functions better than inline code when executing loops. Maybe there is some optimization that kicks in for a function that doesn't happen for a code block.

If possible, construct a huge single String assignment using conditional expressions (i.e. ( bool-expr ) ? ret-if-true : ret-if-false ) instead of appending to a string buffer with a sequence of if-then-elses. The code is harder to read, but for anything of moderate complexity you can achieve up to a 30% performance improvement by doing this.

So there you have it -- my favourite REAL WORLD, TESTED, and PROVEN TO WORK performance tweaks.
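A sketch of what the first of those tweaks might look like in practice: hoisting character extraction out of the accessor. Whether the JIT in your JVM version still leaves this win on the table is exactly the kind of thing the parent says to re-test per release, so measure before relying on it.

```java
// Two ways to count occurrences of a char: repeated accessor calls
// inside the loop, versus extracting the characters once up front.
public class CharScan {
    // calls the String accessor on every iteration
    static int countSlow(String s, char target) {
        int count = 0;
        for (int i = 0; i < s.length(); i++) {
            if (s.charAt(i) == target) count++;
        }
        return count;
    }

    // extracts once into a local char[], then scans intrinsics only
    static int countFast(String s, char target) {
        int count = 0;
        for (char c : s.toCharArray()) {
            if (c == target) count++;
        }
        return count;
    }

    public static void main(String[] args) {
        String s = "performance tuning";
        System.out.println(countSlow(s, 'n') + " " + countFast(s, 'n'));
    }
}
```

The trade-off: `toCharArray()` copies the string, so for short strings the "fast" version can lose; the win shows up on long strings scanned repeatedly.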

Actually, re-using objects is an excellent strategy if you are using JavaBeans (where you have symmetric getters and setters). Unfortunately, so many programmers hide setters away from you, supposedly to protect you from yourself, but it makes it impossible to re-use objects even where it would otherwise not only be perfectly possible and valid but also very efficient as well. In short, use JavaBeans when you need complex (non-intrinsic) value-objects and make the setters symmetric with the getters, *alway

The problem with object pools (if you are multi-threaded) is that you need to synchronize access to them, and THAT adds overhead and potential bottlenecks. You can make your pools thread-local, but then you've got to worry about them getting too large (how many threads do you have, also?). Unless you're doing some really expensive object preparation, if you've got a generational collector, that's usually a good bet: simpler, fast enough, not a source of confusion six months down the road.

All good points. You don't even need to use a thread-local; just have an independent pool per thread (assuming your threads are long-lived and do a lot of work). Simple, and nearly as efficient as having a global pool, but without the locking overhead.
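That per-thread pool idea can be sketched as follows (Java 8 syntax; `ThreadLocal` is used here only as the easiest way to hand each long-lived thread its own unsynchronized deque, and `StringBuilder` stands in for whatever expensive-to-prepare object you actually pool):

```java
import java.util.ArrayDeque;

// Per-thread object pool sketch: each thread reuses its own deque of
// instances, so no synchronization is needed on the pool itself.
public class PerThreadPool {
    private static final ThreadLocal<ArrayDeque<StringBuilder>> POOL =
            ThreadLocal.withInitial(ArrayDeque::new);

    static StringBuilder acquire() {
        StringBuilder sb = POOL.get().poll();
        return (sb != null) ? sb : new StringBuilder(256);
    }

    static void release(StringBuilder sb) {
        sb.setLength(0);              // reset state before reuse
        POOL.get().push(sb);
    }

    public static void main(String[] args) {
        StringBuilder a = acquire();
        a.append("hello");
        release(a);
        StringBuilder b = acquire();  // same instance, reused
        System.out.println(a == b);
    }
}
```

As the parent comments note, this only pays off when object preparation is genuinely expensive and threads are long-lived; otherwise a generational collector handles short-lived objects cheaply.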

I haven't had to do this with Java yet, but I have implemented thread-specific pools with multi-threaded C++ application code, where each thread had its own pool. It was a critical performance tweak for one system I worked on 12-15 years ago, as we were pushing the hardware so hard that within a year the IO bus wouldn't be able to move the expected data even if it did nothing but shuffle MY module's data 24x7. (Yes, we EXPECTED to need new hardware, but we needed a solution NOW.)

Most likely it uses its own allocation pools for all objects, and may include some localization for non-escaping objects. It is possible that it could spot objects whose interfaces are side-effect free and that are never tested for object equality; those are effectively "values" and subject to loop-hoisting and common-subexpression optimizations.

Practical experience and years of testing are why I recommend the ugly looking conditional statements. The Java compiler seems to build one big string buffer with one allocation when you use that technique, rather than repeated calls to StringBuffer.append(). But go ahead, stick with what's easy to read if you're not concerned with raw performance -- readability is every bit as important as performance when it comes time to enhance or maintain the code by hand.

In the few instances I've seen a co-worker using object pools, removing them has been a benefit in every single way. Even if the objects do cost a little to construct, so does synchronously maintaining a giant linked list of objects.

As to the grandparent post -- just because I started with Java 1.0 doesn't mean I assume that the JVM hasn't changed. I've been redoing my tests with each Java release that came out to see if my old assumptions still applied.

I'm a professional programmer, not some kid hacking code in a basement.

"Performance advice often has a short shelf life; while it was once true that allocation was expensive, it is now no longer the case. In fact, it is downright cheap, and with a few very compute-intensive exceptions, performance considerations are generally no longer a good reason to avoid allocation. Sun estimates allocati

An issue recently came up on my engineering team where a Pig MapReduce job that stores into HBase slowed over the course of completing tasks until all the tasks failed due to timeouts.
What appeared to be happening was a GC failure and pause due to tenured-region exhaustion, and the built-in cluster function killing off the garbage-collecting RegionServer.
The link below describes the issue and possible workarounds by implementing a custom memory allocation strategy. It's also a must-read for anyone who isn't

Maybe you could use JNI for that, in certain very specialized cases, but if you write parts of your application in C/JNI you run the risk of just combining Java's weaknesses (memory, performance) with C's weaknesses (error-prone). A null pointer in a native-code part of your application will unceremoniously crash your JVM and everything running in it.

JNI is often used for things that can only be done in native code. An example I can think of is the atomic compare-and-swap operations in the java.util.concurrent package. These are implemented via a JNI method essentially calling just one machine instruction (on Intel; it may be a handful on some architectures). Yes, this is for performance reasons: an atomic compare-and-swap is faster than using locks, not because native code runs faster but because it's the only efficient way to implement it.
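From the Java side, that compare-and-swap shows up as the retry loop every lock-free structure in java.util.concurrent is built on. A sketch of the pattern (the derived "doubled" return value is just an arbitrary placeholder for whatever computation rides on the update):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Classic CAS retry loop: read, compute the new value, and attempt to
// swap it in; if another thread won the race, loop and try again.
public class CasCounter {
    private final AtomicInteger value = new AtomicInteger();

    int incrementAndGetDoubled() {
        while (true) {
            int current = value.get();
            int next = current + 1;
            if (value.compareAndSet(current, next)) {
                return next * 2;   // placeholder derived result
            }
            // lost the race: some other thread updated 'value' first
        }
    }

    public static void main(String[] args) {
        CasCounter c = new CasCounter();
        System.out.println(c.incrementAndGetDoubled());
    }
}
```

No lock is ever held, so a descheduled thread can never block the others; the cost is the occasional wasted iteration under contention.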

I'm working on the inverse right now: porting C code to Java and improving it. It does take longer to run, but it's also doing a lot more. However, setting up a worker thread pool in Java is much easier than doing threading in C. With the -server flag and some tuning for max RAM usage, it can do a reasonable amount of work. Of course the RAM needed for the program is much larger than for its C counterpart.
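The worker-pool setup being described is a few lines in Java. A sketch (Java 8 lambda syntax; summing squares stands in for the real unit of work): a fixed pool sized to the machine, tasks submitted as callables, results gathered through futures.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Fixed-size worker pool: submit one task per unit of work, then
// collect the results in order via the returned Futures.
public class WorkerPool {
    static int sumOfSquares(int count) {
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());
        try {
            List<Future<Integer>> results = new ArrayList<>();
            for (int i = 0; i < count; i++) {
                final int n = i;
                results.add(pool.submit(() -> n * n));  // the unit of work
            }
            int sum = 0;
            for (Future<Integer> f : results) sum += f.get();
            return sum;
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(sumOfSquares(8));
    }
}
```

Compare that with hand-rolling pthreads, a work queue, and condition variables in C; the RAM overhead the parent mentions is the price of getting all of this from the runtime.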

If you do not have a performance issue, that is fine. Personally, I prototype in Python and port time-critical parts to C. (Object-oriented C, as far as it makes sense.)

What I do not like about Java is that it is neither a really modern language like Python, nor a really fast and memory-efficient language like C. It is sort of a jack-of-all-trades and good at none. It is also a master of syntactic clutter and complex, long code. Still, if you really know what you are doing, you can write good Java code.

There is a _big_ difference between clumsily optimized (or unoptimized) Java and carefully-optimized Java--more, in my experience, than the difference between clumsily optimized Java and clumsily optimized C or C++. So if you are already using Java for some reason (robustness to faults, ease of parallelism of certain kinds (w.r.t. C), library that does exactly what you need, etc.), you should figure out how to optimize it before bailing out and using a different language.

Only if you absolutely must get as much out of your hardware as physically possible should you start using C/C++, and at that point, don't expect to be using ANSI C; you should be issuing SSE4 instructions and such (basically writing targeted assembly, even if you are doing so in a way that looks like C functions) that have been cleverly crafted to do exactly what you need.

(And don't forget that while you are taking extra time to write all this low-level high-performance code, your computers _could_ have been running using the slower code, making progress towards a solution, or serving customers albeit with delays, etc..)

Fair points, but I'd say there are a few places where Java just doesn't cut it: strict scheduling and/or real-time needs. Oh, and GUI stuff, too.

So basically, no GUI, no audio, no video. Which only more or less leaves what it currently is king of the hill for - server side business processing.

I've tried real time and Java, and the jitter that garbage collection introduces (yes, even with parallel garbage collection and all that stuff) makes it hugely unpredictable.

Strangely enough, the company formerly known as SUN knew this, and tried to create a "real time" [sun.com] flavour of Java with such extensions - but it takes so much effort to port your code (and the JVM is slightly less than brilliant) that you're far better off going to C/C++.

SWT is quite a popular Java GUI library written by one of the JVM vendors.
Java is the only platform I know of that has libraries to write a GUI that runs on Windows, Mac, Linux, Firefox, Chrome, Opera and Internet Explorer.

Unfortunately, when you really get down to it you will have similar problems in C or C++. You cannot allocate memory from the system heap (malloc or operator new) in the critical path of real-time code. That means everything is preallocated. That means you could have been using Java anyways, with GC disabled. There may be other reasons to use C or C++ for these systems, but the nondeterminism of dynamic memory allocation really applies across the board. It just hits Java users earlier.
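The preallocation discipline described here looks much the same in Java as in C: every buffer is created up front, and the hot path itself allocates nothing, so the collector has essentially nothing to do while it runs. A minimal sketch (the half-gain transform is a placeholder for real processing):

```java
// Allocation-free hot path: both buffers are created once, up front,
// and process() only reads and writes them, never allocating.
public class PreallocLoop {
    final float[] input = new float[4096];   // preallocated once
    final float[] output = new float[4096];

    void process() {
        for (int i = 0; i < input.length; i++) {
            output[i] = input[i] * 0.5f;     // placeholder transform
        }
    }

    public static void main(String[] args) {
        PreallocLoop p = new PreallocLoop();
        p.input[0] = 2.0f;
        p.process();
        System.out.println(p.output[0]);
    }
}
```

The catch, as the thread below notes, is that even a loop like this doesn't make the JVM's pauses deterministic; it only removes one source of garbage.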

Unfortunately, when you really get down to it you will have similar problems in C or C++.

Not quite - I do realtime audio processing, and when you get down to millisecond / sub-millisecond timing issues, you simply can't do this in standard Java, even when using a no-allocation loop (as you mention). I know; I've compared apples to apples.

You can go down the route of using the special realtime JVM as I mentioned above, but then you get other problems (you're not really writing Java, you can't use most of the standard libraries, etc.).

I'm not saying it's not fast, I'm saying that tight timing related things (like nano sleeps to wait until the soundcard buffers are ready to be filled again) have too much jitter to be useful.

I can happily get 10 ms audio latency in Java, but going any lower is where it just doesn't cut it. Letting the thread busy wait isn't really an option when the machine has to be doing other things at lower priority too.

In short, using Java's nanosleep with tight timing tolerances seems to randomly get wakeup jitter wh
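The wake-up jitter being described is easy to measure for yourself. A rough sketch of that experiment (names are mine; actual numbers depend heavily on OS scheduler, JVM, and load):

```java
// Measure how late a short timed sleep actually wakes up.
// LockSupport.parkNanos is the usual primitive behind fine-grained sleeps.
import java.util.concurrent.locks.LockSupport;

public class SleepJitter {
    // Returns the worst observed overshoot (actual sleep minus requested sleep).
    public static long maxOvershootNanos(int iterations, long sleepNanos) {
        long worst = 0;
        for (int i = 0; i < iterations; i++) {
            long start = System.nanoTime();
            LockSupport.parkNanos(sleepNanos);
            long actual = System.nanoTime() - start;
            worst = Math.max(worst, actual - sleepNanos); // how late did we wake?
        }
        return worst;
    }

    public static void main(String[] args) {
        long worst = maxOvershootNanos(100, 1_000_000); // request 1 ms sleeps
        System.out.println("worst overshoot: " + worst + " ns");
    }
}
```

On a stock desktop OS the overshoot for a 1 ms sleep can easily reach hundreds of microseconds or more, which is exactly the problem for sub-millisecond audio deadlines.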

As long as you get good scheduling responsiveness, C is entirely fine.

Yes; GCC and ICC are quite good at optimizing SSE intrinsics, to the point that hand-coded asm can be largely avoided. I have heard otherwise for Visual Studio; fortunately, Intel's compiler works on Windows as well. As to the rest of the code, well... just keep an eye on the assembler output from every. single. build.

Only if you absolutely must get as much out of your hardware as physically possible should you start using C/C++...

Disagree. You should choose C++ if the speedup will be *noticeable*, not just if you need to squeeze out every last erg of CPU power. In many cases the difference between Java and C++ performance, especially startup time, can be huge. Don't even think about writing light little utilities in Java.

In many cases the difference between Java and C++ performance, especially startup time, can be huge.

Quite apart from the fact that there are applications where startup time really isn't very important, it's important to emphasize that it is possible to write bad code in any language. It's certainly possible to do bad code in both Java and C++. It should be possible for the best of C++ code to beat the best of Java code, but it will be quite difficult to reach that level with either language (both are quite subtle in places) and good code in either will beat bad code in either. (Choosing a good algorithm i

All that shows is that it takes the resources of a Google, and an app architecture that acknowledges that any large program written in C/C++ will leak, in order to create a "snappy" app that doesn't crash. (Although I've seen colleagues crash Chrome plenty of times.) Basically, I've worked for too many companies with labyrinthine, buggy C++ codebases to want to use that boondoggle of a language again. There's only so many times I could bear working on bugs related to thread issues caused by the myriad ways encapsu

The switches and controls described are run-time options, not code optimizations. They are more like the 'economy or performance' transmission setting. Sometimes you want to get the best MPG and don't particularly care how fast you accelerate. Other times you want quick acceleration, and are willing to let mileage suffer. Having a control means you don't have to re-build your car every time you want a change.

These options are the same kinds of things. If you only have one application running on a box y
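To make the "economy vs. performance" analogy concrete, here is the kind of run-time switch being talked about. These are illustrative invocations (the jar name is a placeholder, and flag availability varies by JVM version; `java -XX:+PrintFlagsFinal` lists what your JVM actually supports):

```shell
# Throughput-oriented: parallel collector, fixed heap to avoid resizing pauses.
java -XX:+UseParallelGC -Xms2g -Xmx2g -jar app.jar

# Pause-oriented: concurrent collector, larger young generation.
# (CMS was the low-pause choice in the JVMs this book covers.)
java -XX:+UseConcMarkSweepGC -Xmn512m -jar app.jar
```

Same application binary, different trade-off, no rebuild, which is exactly the transmission-setting point above.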

More like you didn't read it at all. Right in the summary it says: This dense, 600-page book will not only explain these developer options and the underlying JVM technology, but discusses performance, profiling, benchmarking and related tools in surprising breadth and detail.

Are you really that dense? It describes the command line options (which would have to be written into your own application if you wanted to have that level of tuning in C). It describes the technology of the JVM (so add some books describing processors and OS internals to your list). It describes performance, profiling, benchmarking, and the tools used to do those things (add a few dozen more books to your list).

Now obviously C is much easier to do those things with. That is why the first tool for analy

Oh, please, neither Java nor C++ is superior to the other. They both have strengths at certain kinds of programming but altogether support extremely similar semantics in most areas. There are very difficult-to-use portions of both Java and C++. You should get yourself some more experience.

Wow, I'm surprised at the driveby downmods in this thread. I mean, I held this image of Java developers as open minded individuals, but maybe there is a significant minority that just aren't. Guys: it is a fact. If you need your program to go fast, plus be OOP, then you should write it in C++. GCC will hand you somewhere between 10% and 200% speedup "for free" on real life algorithms, plus startup time normally faster by a multiple, and a fraction of the memory footprint. And small binaries, and runtime support always installed by default.

Java has its uses. It is generally faster to develop in and less tricky for the naive programmer. You can effectively build a project faster with less skilled and less expensive, more freely available developers. That is a big deal, not to be underestimated. But if your functional requirements include fast and tight, Java is not the right choice. Well, it's faster than Python or Bash. And sometimes not even that if you include JIT time.

OK, what is it about Java programmers and not being able to tolerate factual statements about their favorite language?

Hmmm, what are you talking about? Java is about as much of a scripting language as Z80 assembly. When you run the JVM, it is essentially an emulator, emulating imaginary hardware - just like running a Z80 processor emulator on your Intel/AMD/PPC/SPARC CPU. Would you call Z80 assembly a scripting language? When the OP talks about the relationship between the JVM and Joe Schmo Code Monkey, what he really meant was "Programming in Java is very different from knowing your hardware and the hardware you are trying to emulate in order

Interesting that you know about odd, unpopular languages like Haskell but don't even know about GCJ, which would address your issue (6). The Oracle JVM is not the only JVM in existence. Plus, apart from start-up time (inconsequential on servers), the JIT generally produces faster code than AOT, since superior optimisation information is available to a JIT than to an AOT compiler. Even on the desktop these days, where the JVM is pre-loaded by the OS, the start-up time is not that long, and becomes increasingly insi

Interesting you know about odd, unpopular languages like Haskell but don't even know about GCJ...

I like GCJ a lot and I think every Java programmer should use it in preference to the clunky interpreter/JIT combination, absent a compelling reason otherwise. Just for starters, you get real executables.

Strange that you would also like to use goto. Even in 1968, Dijkstra considered it harmful in a famous paper, so generally anyone wanting to use it is considered slightly deranged; there are alternatives that are much more maintainable. So railing against Java over a construct that the consensus considers bad form could itself be seen as bad form (or an unawareness of just how bad designs that need goto are).

Actually, what Dijkstra was railing against in that paper (I'll assume you've also read Knuth's response to it) was the tendency of some developers to use goto to create patterns that didn't match what we now call structured programming. Of course, what has happened since then is that the set of structured idioms has expanded (notably including exceptions and try-with-resources) so that there is far less need for a raw goto. The main thing that isn't handled particularly well as yet is a state machine (thou
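The try-with-resources idiom mentioned above is a good example of a structured replacement for what C programmers classically did with a `goto cleanup` label. A minimal sketch (the class and method are my own illustration):

```java
// try-with-resources: the reader is closed automatically on every exit
// path (normal return or exception) -- no cleanup label, no goto.
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

public class FirstLine {
    static String firstLine(String text) throws IOException {
        try (BufferedReader r = new BufferedReader(new StringReader(text))) {
            return r.readLine(); // close() still runs after this return
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(firstLine("first\nsecond"));
    }
}
```

The compiler generates the structured equivalent of the goto-based cleanup chain, which is exactly the sense in which the set of structured idioms has expanded since 1968.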

Bah. The kind of reply I get when you've never had to review the work of a dozen lesser programmers to see what simple thing they didn't get right. More complexity is not better - it's a shame if you haven't grokked that.

Bah. The kind of reply I get when you've never had to review the work of a dozen lesser programmers to see what simple thing they didn't get right. More complexity is not better - it's a shame if you haven't grokked that.

I was only half jesting, but you have clearly proven my point with your reference to 'lesser programmers'. Do you make them bow down before you each morning? Kiss your feet before going to their cube? I have no doubt that you are a strong believer in and advocate for the Nanny State - regardle

I think you have the wrong end of the stick and are projecting a worldview onto my statements. Please let me explain. By lesser programmers I mean the non-craftsmen who are just there to collect their paycheck and do the minimum thinking to get through the day. Surely you've had to work with or direct these people. There is nothing wrong with them or their attitude, it's just they are not going to spend a lot of effort worrying about the best, most robust, or most efficient way to solve something. Yet these