For university, I am performing bytecode modifications and analysing their influence on the performance of Java programs. For that, I need Java programs---ideally ones used in production---and appropriate benchmarks. For instance, I have already obtained HyperSQL and measure its performance with the benchmark program PolePosition. The Java programs run on a JVM without a JIT compiler. Thanks for your help!
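For reference, I run everything in pure interpreter mode (JIT disabled) using the standard `-Xint` flag; the jar names and the main class below are placeholders for my actual setup:

```shell
# -Xint forces the HotSpot JVM into interpreted-only mode, i.e. no JIT compilation.
# The classpath entries and main class are placeholders, not the real artifact names.
java -Xint -cp hsqldb.jar:polepos.jar <benchmark-main-class>
```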

P.S.: I cannot use programs that benchmark the performance of the JVM or of the Java language itself (such as Wide Finder).

It's not clear what you want to do. Can you explain it a little further?
– Riduidel Jan 3 '11 at 9:03

Bytecode is usually not optimised; instead, the JIT optimises the native code it creates. As such, you may find that changing the bytecode does not improve performance the way you might expect, because you are dependent on how it is turned into native code.
– Peter Lawrey Jan 3 '11 at 9:06

@Peter, I think he is looking for scenarios that give the best performance
– UVM Jan 3 '11 at 9:14

@UNNI, it is likely the JIT optimises for certain expected patterns. Changing the code to something that looks more optimal could confuse the JIT and end up producing sub-optimal native code. I don't believe there are any trivial bytecode changes which will yield a significant performance improvement. More complex changes could, however, and those could be worth investigating.
– Peter Lawrey Jan 3 '11 at 10:42

@Peter, I have seen a benchmarking tool in Java called JBenchmark, but I have not used it so far. As you said, it is a bit difficult to optimise for the JIT in a way that yields a significant performance improvement. By the way, are there any JIT patterns for that, as you mentioned?
– UVM Jan 3 '11 at 10:57

3 Answers

Brent Boyer wrote a nice article series for IBM developerWorks, Robust Java benchmarking, which is accompanied by a micro-benchmarking framework based on a sound statistical approach. See the article and its resources page.
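To give a rough flavour of that statistical approach (warm-up runs first, then a median over many timed runs rather than a single measurement), here is a minimal self-contained sketch; the summing workload and all names are mine for illustration, not Boyer's actual framework:

```java
import java.util.Arrays;

// Minimal micro-benchmark sketch: warm up first so the JIT (if enabled)
// has settled, then report the median of several timed runs, which is
// far more robust against outliers than a single timing.
public class MicroBench {

    // Placeholder workload: sum the integers 0 .. 999,999.
    static long workload() {
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) sum += i;
        return sum;
    }

    // Returns the median wall-clock time of `runs` executions, in nanoseconds.
    static long medianNanos(int warmups, int runs) {
        for (int i = 0; i < warmups; i++) workload();   // discard warm-up timings
        long[] samples = new long[runs];
        for (int i = 0; i < runs; i++) {
            long start = System.nanoTime();
            workload();
            samples[i] = System.nanoTime() - start;
        }
        Arrays.sort(samples);
        return samples[runs / 2];                        // median of the samples
    }

    public static void main(String[] args) {
        System.out.println("median ns: " + medianNanos(10, 21));
    }
}
```

A real framework such as Boyer's additionally deals with clock granularity, dead-code elimination, outlier detection and confidence intervals, which is exactly why the articles are worth reading.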

Caliper is a tool provided by Google for micro-benchmarking. It will provide you with graphs and everything. The folks who put this tool together are very familiar with the principle that "premature optimization is the root of all evil" (to jwnting's point) and are careful to explain the proper role of benchmarking.

Any experienced programmer will tell you that premature optimisation is worse than no optimisation.
It's a waste of resources at best, and a source of endless future (and current) problems at worst.

Without context, any application, even with benchmark logs, will tell you nothing.
I may have a loop in there that takes 10 hours to complete; the benchmark will show it taking almost forever, but I don't care, because it's not performance-critical.
Another loop takes only a millisecond, but that may be too long because it causes me to miss incoming data packets arriving at 100-microsecond intervals.

Those are two extremes, but both can happen (even in the same application), and you'd never know unless you knew that application: how it is used, what it does, and under which conditions and requirements.

If a user interface takes half a second to render, that may be too long, or it may be no problem at all. What's the context? What are the user expectations?