Yes, it has always been the case that the server VM is not included in the "JRE" but is included in the "JDK", on Windows at least.

On Mac there is currently NO server VM at all, but the -server command line arg does change some parameters of the client VM so it behaves a bit more server-like - though without any of the server VM optimizations.

I'd always assumed the server VM was left out of the JRE because most people wouldn't know it was there anyway - it's an easy 2MB to cut. The minority who want the server VM are capable of getting the full JDK instead. Classic client-side apps would rather avoid the startup cost, so it's application servers and the like that really benefit, and those frequently come bundled with their preferred VM.

Maybe it's simply that the server VM is smart enough to remove the loop completely, since the result can be calculated with a single multiplication - that is, 1000000000 * 15 - and can even fold the result directly into the print statement. I bet that if you read the loop count from, say, the command line, you won't get the same results on the server VM. At that point you're really just benchmarking the timer granularity.
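A minimal sketch of that idea (class and method names are mine, not from the thread): the loop body is trivially reducible to n * 15, which is exactly the kind of thing the server VM's optimizer can do at JIT time, so reading the bound from the command line is one way to keep the result from being precomputed.

```java
public class LoopBench {
    // The loop under test: trivially equivalent to n * 15, which is
    // exactly what the server VM's optimizer may reduce it to.
    static long run(long n) {
        long sum = 0;
        for (long i = 0; i < n; i++) {
            sum += 15;
        }
        return sum;
    }

    public static void main(String[] args) {
        // Taking the bound from the command line keeps it out of reach of
        // constant folding: the JIT can no longer precompute the result.
        long n = args.length > 0 ? Long.parseLong(args[0]) : 1_000_000_000L;

        long start = System.nanoTime();
        long sum = run(n);
        long elapsed = System.nanoTime() - start;

        // Printing the result keeps it "live", so the loop cannot simply
        // be discarded as dead code.
        System.out.println("sum = " + sum + " in " + elapsed + " ns");
    }
}
```

If the server VM still reports near-zero times even with the bound supplied at runtime, then the loop really is being reduced to a multiplication and the measured number is mostly timer granularity.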

Probably has more to do with the JIT. AFAIK, the server VM is much more proactive about deciding which methods to JIT-compile, whereas the client VM waits until it actually sees a hotspot before it compiles.

Try moving the code into a method, calling it a couple of times, and THEN performing the benchmark. That should give the client VM time to compile the method, if this is indeed what's happening.
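The suggestion above might look something like this (a sketch; the work done in the loop and the warm-up count are my own placeholders):

```java
public class WarmupBench {
    // The code under test, pulled into its own method so the VM can
    // detect it as a hotspot and JIT-compile it as a unit.
    static long work(long n) {
        long sum = 0;
        for (long i = 0; i < n; i++) {
            sum += i;
        }
        return sum;
    }

    public static void main(String[] args) {
        long n = 10_000_000L;

        // Warm-up calls: give the client VM a chance to notice the
        // hotspot and compile work() before we start timing.
        for (int i = 0; i < 10; i++) {
            work(n);
        }

        long start = System.nanoTime();
        long result = work(n);
        long elapsed = System.nanoTime() - start;
        System.out.println("result = " + result + " in " + elapsed + " ns");
    }
}
```

If the client/server gap shrinks after warm-up, the original difference was mostly compilation timing rather than code quality.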
