Now, the problem with the original logic that started this whole silliness is this:

(1) Statement: It takes more CPU to set up non-static monomorphic calls than static ones.

Answer: Not really true in HotSpot. All call sites are initially assumed to be monomorphic. They become non-monomorphic due to class loading (see below).

(2) Statement: It takes CPU to watch for it becoming non-monomorphic.

Answer: Again, not really true in HotSpot. HotSpot detects a call site potentially becoming non-monomorphic by watching class loads and seeing whether a newly loaded class overrides a method that is currently being called in a monomorphic fashion. There are some pretty tricky data structures inside HotSpot to track all this.

This is part of classloading for any class, and it is a one-time cost per class load, which makes it pretty insignificant at run-time.
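
To picture what that answer describes, here is a minimal sketch (class and method names are mine, not from the thread) of a call site that starts out monomorphic and only loses that status when a subclass gets loaded:

    // Hypothetical illustration: while only Base is loaded, HotSpot can treat
    // b.work() as a monomorphic call and inline it. Loading Sub later creates
    // a potential override, and the tracking structures described above are
    // what let the VM notice this and deoptimize the compiled caller.
    class Base {
        int work() { return 1; }
    }

    class Sub extends Base {
        int work() { return 2; }  // the override that breaks the monomorphic assumption
    }

    public class MonoDemo {
        static int callIt(Base b) {
            return b.work();       // monomorphic until Sub is loaded
        }

        public static void main(String[] args) throws Exception {
            Base b = new Base();
            int sink = 0;
            for (int i = 0; i < 1000000; i++) {
                sink += callIt(b); // warm up: callIt compiles with work() inlined
            }
            // This class load is the one-time event that invalidates the inlining:
            Base s = (Base) Class.forName("Sub").newInstance();
            System.out.println(sink + callIt(s));
        }
    }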

(3) New statement: static calls take less memory. This MIGHT be true, but it is very much dependent on what your VM does and how much it tries to optimize memory usage. In practice I don't think it significantly affects footprint, which today is actually pretty dominated by reflection information.

Can we put this to bed now? The answer to the original question, in any modern VM, is an unambiguous NO!

I've never ever seen a line that doesn't have a 'b' (I've only tried the client VM). Is the interpreter always blocked when compiling? I've tried on a dual-CPU machine as well, and I still always get the 'b'. Maybe this is only ever different with the server VM?

As for the warmup - I did try to do it, by making the first two calls outside the measuring loop. It turned out not to be enough.

As for commenting out the println - that is not fair, because:
1) This way you make the tested routine a no-op - a very bad mistake with modern compilers.
2) I do not agree that printing to stdout will dominate the benchmark - we are talking about 10 lines per few seconds, and I'm not testing single-millisecond differences.
3) The println is the same in both methods, so even with its overhead it should have the same effect on both.

So as far as your statement (1) goes, it is not true for client HotSpot on many computers. In some cases it is very wrong, in some cases slightly wrong, but it is always wrong.

As for the server JVM - at least my benchmark was showing some numbers (showing that both cases are equally fast); with your version I got only 0ms from top to bottom... indeed, writing a benchmark is a tricky business, and that printout was there for a reason...

One more thing about printout overhead - you can change SIZE to 0 to check it; the result is 0ms (which means less than 10ms, which in turn means it does not affect this benchmark, as the margin of error is around 50ms anyway).

As for statement (2), I agree - it is mostly a one-time cost, incurred only during classloading, and even taking into account that classloading is the worst slowdown during Java startup, I doubt it makes any difference.

As for statement (3), I'm not sure I said anything to this effect; if I did, I take it back - it was not my point. I'm talking only about time performance.

It seems that client HotSpot is not 'any modern VM'.

Can we agree on the statement that static calls are faster in the client JVM and make no difference in the server JVM? It can be clearly seen from _everybody's_ results except yours, so I think it affects enough people out there...

And I don't think it is silliness. IF there is a SIGNIFICANT difference, then it does make sense to use static methods rather than instance methods when possible.

sorry, -Xcomp

It forces compilation of all methods immediately, thereby removing some of the effect of the warmup. But it's best just to properly warm the VM with a sufficiently long test run before you run your test.
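
For what it's worth, here is a minimal sketch of what I mean by warming (testLoop() is just a stand-in for whatever routine is under test):

    // Hypothetical warmup harness: run the routine long enough that HotSpot
    // has compiled it before any timing starts, then measure.
    public class Warmup {
        static int testLoop(int n) {
            int v = 0;
            for (int i = 0; i < n; i++) {
                v += i;
            }
            return v;
        }

        public static void main(String[] args) {
            int sink = 0;
            // Warmup phase: results discarded, clock not running.
            for (int i = 0; i < 20; i++) {
                sink += testLoop(1000000);
            }
            // Measured phase: only now do we look at the clock.
            long start = System.currentTimeMillis();
            sink += testLoop(1000000);
            long elapsed = System.currentTimeMillis() - start;
            System.out.println("elapsed=" + elapsed + "ms sink=" + sink);
        }
    }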

I find it very odd that you are getting such divergent results from the same code I'm running. What is your platform and VM? I'll see if I can match it in the lab.

In the end, this is EXACTLY the problem with micro-benchmarks, though. Because they do not behave like real code, they are prone to getting hung up on, and over-reacting to, some inner complexity of the VM process. Which is what I suspect is happening to you here.

JK

Can we agree on the statement that static calls are faster in the client JVM and make no difference in the server JVM? It can be clearly seen from _everybody's_ results except yours, so I think it affects enough people out there...

I don't follow your comment on the prints; maybe you could explain further. The fact of the matter is that a print (or any system I/O) within a test is effectively a sleep(random()) and will screw up your results unless your test lasts so long that this is in the noise (for "so long", figure hours). If you explain better what you were trying to do, maybe I can suggest a solution that won't mess up your readings.

In re client/server: on MacOSX I saw no perceivable difference between -client and -server. It would be good for whoever else reported OSX numbers to check my results and see if they line up. As I say, taken literally, my numbers show that static is (nominally) SLOWER on MacOSX.

Meanwhile, I'll run both -client and -server on a Win2K box in the lab tomorrow. It would help to get stats on these test machines. It would actually be a more proper and reliable test if, in addition, we broke it into two tests, one for each case, such that the ONLY difference between the situations is what we are trying to test - something like the sketch below. When trying to really microbenchmark, control of variables is pretty critical.
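
A hypothetical sketch of that split (class and method names are mine): the case is selected once per JVM invocation - run "java CallTest static" and "java CallTest instance" separately - so each run exercises exactly one call type.

    public class CallTest {
        static int staticAdd(int a, int b) { return a + b; }
        int instanceAdd(int a, int b) { return a + b; }

        public static void main(String[] args) {
            int v = 0;
            long start = System.currentTimeMillis();
            if (args.length > 0 && args[0].equals("static")) {
                for (int i = 0; i < 100000000; i++) v = staticAdd(v, i);
            } else {
                CallTest t = new CallTest();
                for (int i = 0; i < 100000000; i++) v = t.instanceAdd(v, i);
            }
            long elapsed = System.currentTimeMillis() - start;
            // v is printed once, after timing, so the loop body stays live.
            System.out.println(elapsed + "ms v=" + v);
        }
    }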

In the end, this is EXACTLY the problem with micro-benchmarks, though. Because they do not behave like real code, they are prone to getting hung up on, and over-reacting to, some inner complexity of the VM process. Which is what I suspect is happening to you here.

Agreed - differences found here might not reflect reality. And chances are high that it is JVM/OS dependent, which would explain why you got different results.

I don't follow your comment on the prints; maybe you could explain further. The fact of the matter is that a print (or any system I/O) within a test is effectively a sleep(random()) and will screw up your results unless your test lasts so long that this is in the noise (for "so long", figure hours). If you explain better what you were trying to do, maybe I can suggest a solution that won't mess up your readings.

I agree that a printout introduces a random wait into the benchmark. It is indeed equivalent to sleep(random()) - but this random() is quite small. On a normal machine it should be less than a millisecond per printout. So we can add an error of +/- 1ms per printout to the error of the benchmark. With a seconds-long benchmark, a few printouts do not change the results, especially if you run the benchmark a few times.

Why printouts at all? To make sure that the result of the method is actually needed. Without the printout, 1.4.2 server on my home computer optimizes this code to a no-op, because it knows that all this computation is never used. With the printout, HotSpot cannot 'cheat', because it HAS to print the correct value to the screen. I suppose that with enough levels of indirection you can trick HotSpot into believing that you need the value anyway - but there is no guarantee that you will succeed. On the other hand, printouts are fool-proof - the only way to spoil them is to precompute the result of the entire function, which is hardly doable by a JIT with so long a loop.

As you can see, a REAL no-op situation makes this execute almost immediately. I knew this instinctively from all the benchmarks I've run in the past, but this proves the point.

In any event, a print is the WRONG way to solve that. If the compiler were truly smart enough to figure out that the loop did nothing (which would be quite a feat when you consider that it's calling a sub-function, plus all the possibilities of side-effects), THEN what you should do is return the calculated value rather than void. That moves the recognition of whether the value was used or not out of the scope of what you are testing.
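
A minimal sketch of that approach (names are mine): the tested routine returns its value, and the caller sums the returns and prints once after the clock stops, so whether the value is "used" is no longer part of what is being timed.

    public class ReturnTest {
        static int compute(int n) {
            int v = 0;
            for (int i = 0; i < n; i++) {
                v += i * i;
            }
            return v;              // the caller, not the test, decides if v is "used"
        }

        public static void main(String[] args) {
            long start = System.currentTimeMillis();
            int sink = 0;
            for (int run = 0; run < 10; run++) {
                sink += compute(10000000);
            }
            long elapsed = System.currentTimeMillis() - start;
            System.out.println(elapsed + "ms sink=" + sink); // one print, after timing
        }
    }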

NEVER NEVER NEVER do system I/O in the midst of a test. I can't make that point strongly enough.

So, in the spirit of testing assumptions, I decided to see what time it takes to println on OSX.

I have a feeling that OSX, being a truly multi-tasking OS, doesn't block waiting for the print. The end result is that you are right that on OSX returning from a print takes under 1ms in the situation I happen to be running in. But, as I said, it's unpredictable, since you are turning control over to the OS, and I would hesitate to say that it's under 1ms in all circumstances.

My past experience with Windows NT was that it DOES block in the kernel, and prints take a whole lot longer even in the best case, but I'll check that out against Win2K in the lab tomorrow.
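
A rough sketch of that kind of check, for anyone who wants to repeat it on their own OS (the numbers will vary with the console and the OS, which is exactly the point):

    public class PrintCost {
        public static void main(String[] args) {
            long start = System.currentTimeMillis();
            for (int i = 0; i < 1000; i++) {
                System.out.println("line " + i);
            }
            long elapsed = System.currentTimeMillis() - start;
            System.out.println("1000 printlns took " + elapsed + "ms ("
                    + (elapsed / 1000.0) + "ms each)");
        }
    }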

Also, the server JVM gives me 0ms too, so abies is sort of right with that print thing: a statement that really uses the result is needed, or the JVM may notice the operations are dumb and bypass them. Maybe we could return v, store it in an array, and print that array at the end of the benchmark? (It shouldn't take that much time.)
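
A quick sketch of that array idea (hypothetical names): each run's result is stored, and the array is printed only after timing stops, so the values stay live without any I/O inside the loop.

    public class ArraySink {
        static int compute(int i) { return i * i + 1; }

        public static void main(String[] args) {
            int runs = 10;
            int[] results = new int[runs];
            long start = System.currentTimeMillis();
            for (int r = 0; r < runs; r++) {
                int v = 0;
                for (int i = 0; i < 10000000; i++) {
                    v += compute(i);
                }
                results[r] = v;    // store instead of print: v stays live, no I/O in the loop
            }
            long elapsed = System.currentTimeMillis() - start;
            System.out.println(elapsed + "ms");
            for (int r = 0; r < runs; r++) {
                System.out.println("run " + r + ": " + results[r]);
            }
        }
    }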

Definitely JVM/OS dependent. Well, unless there is evidence that some JVMs behave better with instance methods, I would say: use what fits best in terms of design, and if you have to choose between static and instance (e.g. the singleton vs. static class discussion), go with static.

Could someone post results obtained with other JVMs/OSes, so that we have more material?

"The final phase does peephole optimization on the LIR and generates machine code from it. Emphasis is placed on extracting and preserving as much information as possible from the bytecodes. It focuses on local code quality and does very few global optimizations, since those are often the most expensive in terms of compile time. It supports inlining any function that has no exception handlers or synchronization, and also supports deoptimization for debugging and inlining. "

JUST to confuse things further... I was a bit disturbed by the two levels of calls in the test as given, because it complicates the call chain. Again, in the interest of reducing variables, I simplified it to direct calls to the static and non-static functions.

For this test I still kept Abies' multiple operations per loop, though I'm a bit concerned about that complication (I'll factor it out in the next set of tests).

I also increased the number of times we run each test, just as a sanity check.

The results were interesting, though not too far out of the ordinary. I had the labels backwards (Instance is static and static is instance), so don't let that throw you.

Interesting thing - if I add printouts to the test, the static test on the IBM JRE has the same speed as the instance test - 3890ms. Strange, very strange - I really doubt the printout wait is causing this; I rather suspect that most of these 10-20% differences come from different pairing of the generated instructions rather than a real difference in JIT quality. I have removed the printouts and added 3 add(i,v) instructions in both methods, and the results are the same (6300ms).

At the moment it seems to me that for IBM and HotSpot server the speed is the same, with any umpteen-percent differences in either direction caused by random things like instruction pairing or cache line boundaries. So I give up as far as trying to prove that static methods are _generally_ faster. I was proven wrong, and I'm left with just the statement that client HotSpot cannot manage to inline instance functions in a reasonable way.
