I read somewhere that javac would only inline a method if the method was declared final or static and had no local variables. Is this the way the JDT compiler handles inlining?

What are the rules regarding inlining the JDT compiler uses to decide if a method can be inlined or not?

And does anyone know of a white paper that describes current good programming practice to produce faster, smaller class files? For example: is it better to use public static variables or get()/set() methods? Which will produce smaller class files?
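For concreteness, here is a minimal sketch of the two styles in question (the class names are invented for illustration). The accessor version adds a few bytes of bytecode per class, but a modern JIT typically inlines a trivial getter, so the runtime cost ends up the same either way:

```java
// Hypothetical example: a trivial accessor vs. a public field.
class WithGetter {
    private final int value;
    WithGetter(int value) { this.value = value; }
    int getValue() { return value; }  // trivial getter: a prime inlining candidate
}

class WithField {
    final int value;                  // exposed directly, no accessor
    WithField(int value) { this.value = value; }
}

public class AccessDemo {
    public static void main(String[] args) {
        WithGetter a = new WithGetter(21);
        WithField b = new WithField(21);
        // Semantically identical reads; after JIT warm-up they cost the same.
        System.out.println(a.getValue() + b.value);  // prints 42
    }
}
```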

On 20.04.2007 06:38, DemonDuck wrote:
> I read somewhere that javac would only inline a method if the method
> was declared final or static and that it had no local variables. Is
> this the way the JDT compiler handles inlining?
>
> What are the rules regarding inlining the JDT compiler uses to decide if
> a method can be inlined or not?
>
> And does anyone know of a white paper that describes current good
> programming practice to produce faster, smaller class files? For
> example: is it better to use public static variables or get() set()
> methods. Which will produce smaller class files?

AFAIK inlining is not done by the compiler but by the runtime. And IIRC
it will also inline non-final methods.
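One narrow case where javac itself does "inline" is compile-time constants: a static final field initialized with a constant expression is a constant variable, and its value is copied directly into each use site in the .class file rather than read from the field. A small sketch:

```java
// javac folds constant variables at compile time.
public class ConstantDemo {
    static final int SIZE = 16;        // compile-time constant (constant variable)

    public static void main(String[] args) {
        // javac emits the literal 32 into the bytecode here,
        // not a field read followed by a multiplication.
        System.out.println(SIZE * 2);  // prints 32
    }
}
```

One practical consequence: if another class uses SIZE and you change its value, that class must be recompiled, because the old value is baked into its bytecode.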

Robert Klemme wrote:
> On 20.04.2007 06:38, DemonDuck wrote:
>
>> I read somewhere that javac would only inline a method if the method
>> was declared final or static and that it had no local variables. Is
>> this the way the JDT compiler handles inlining?
>>
>> What are the rules regarding inlining the JDT compiler uses to decide
>> if a method can be inlined or not?
>>
>> And does anyone know of a white paper that describes current good
>> programming practice to produce faster, smaller class files? For
>> example: is it better to use public static variables or get() set()
>> methods. Which will produce smaller class files?
>
>
> AFAIK inlining is not done by the compiler but by the runtime. And IIRC
> it will also inline non-final methods.
>
> Kind regards
>
> robert

Basically, the JVMs now are incredibly smart about performing on-the-fly
optimization, including all the sorts of optimization that a compiler might
do as well as things like recompiling to native code. It's quite unlikely
you can do a better job by hand with small tweaks of the sort you mention.
Your primary job, at this point, is to write code that is clear, simple,
direct, correct, and uses appropriate data structures and algorithms; let
the compiler and JVM take care of making it run fast.

I had no idea that JIT would do inlining on the fly.
A casual thinker (like me) would think that would take
more time than to just push the stack -- or heap -- or...

Now you're going to tell me there's no stack and heap right :-)

Walter Harley wrote:
> "DemonDuck" <kwarner@uneedspeed.net> wrote in message
> news:f0arqo$686$1@build.eclipse.org...
>
>>uh...what's IIRC???
>>
>>And really? "...by the runtime..." What does that mean?
>
>
> Read up on "hotspot" and "JIT optimization".
>
> Basically, the JVMs now are incredibly smart about performing on-the-fly
> optimization, including all the sorts of optimization that a compiler might
> do as well as things like recompiling to native code. It's quite unlikely
> you can do a better job by hand with small tweaks of the sort you mention.
> Your primary job, at this point, is to write code that is clear, simple,
> direct, correct, and uses appropriate data structures and algorithms; let
> the compiler and JVM take care of making it run fast.
>
>

Walter Harley wrote:
> Basically, the JVMs now are incredibly smart about performing on-the-fly
> optimization, including all the sorts of optimization that a compiler might
> do as well as things like recompiling to native code. It's quite unlikely
> you can do a better job by hand with small tweaks of the sort you mention.
> Your primary job, at this point, is to write code that is clear, simple,
> direct, correct, and uses appropriate data structures and algorithms; let
> the compiler and JVM take care of making it run fast.

This is excellent advice. At the current stage of JVM and compiler
history, worrying about micro-optimizations in application code is
totally futile unless it is within REALLY large loops or other code that
executes millions of times in succession.
Premature optimization is a long-known anti-pattern, one that we have
all fallen victim to at one time or another as we learn. Unless you have
profiled your application and found an obvious bottleneck or memory
problem, the best advice is to focus on the readability and design of
the code.

"DemonDuck" <kwarner@uneedspeed.net> wrote in message
news:f0bgbl$hnn$1@build.eclipse.org...
> Will do...
>
> I had no idea that JIT would do inlining on the fly. A casual thinker
> (like me) would think that would take
> more time than to just push the stack -- or heap -- or...
>
> Now you're going to tell me there's no stack and heap right :-)

No, there's still a stack and a heap :-) Although I'm told that the latest
batch of JIT compilers can actually move variables between the two when
appropriate for optimization purposes...

JIT compilation really is pretty incredible; this is where some of the
smartest folks in computer science have spent their time over the last
decade or so.

The basic idea is that, as you may have noticed if you've ever disassembled
a compiled Java program, Java bytecode contains about the same information
as Java source code, only a tad more compactly. But bytecode is not native
machine code, at least not for any real machine, and native execution is
faster than bytecode interpretation. So rather than just interpret
bytecode, like early JVMs did, modern JVMs have built-in compilers that
compile the bytecode to native code for the machine it's running on; that
way they can take advantage of not only all the flow analysis and the like
that a traditional compiler would use, but also realtime analysis of the
execution patterns of your code, e.g., which methods are frequently hit.
Thus the (trademarked) term "hotspot": like a profiler, it identifies the
execution hotspots, and focuses its effort there (frequently on a separate
thread of execution from the one actually running your code - so a method
may be interpreted the first few times it's hit, and then compiled to native
code thereafter).
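The warm-up behavior described above can be observed directly. A sketch like the following (run with the standard HotSpot flag -XX:+PrintCompilation, which logs each method as it is compiled to native code) shows a small method becoming a compilation target once it has been called enough times; the class name here is invented for illustration:

```java
// Run with: java -XX:+PrintCompilation HotLoop
// Early iterations run interpreted; once square() is identified as a
// hotspot, HotSpot compiles (and likely inlines) it on a background thread.
public class HotLoop {
    static long square(long n) { return n * n; }

    public static void main(String[] args) {
        long sum = 0;
        for (long i = 0; i < 100_000; i++) {
            sum += square(i);   // hot call site: repeated 100,000 times
        }
        System.out.println(sum);  // prints 333328333350000
    }
}
```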

I well understand your point about readability. Maintenance is more
time-consuming over the long term than any other factor.

The hotspot or JIT compiler is amazing. If you look at the right
application or applet you can see it "breathing" at first before
it gets up to full speed. Although, early versions did bite (not byte)
me every once in a while. Some portions of my code wouldn't execute.

But that was a long time ago.

Thanks again for youse guys input...

Walter Harley wrote:
> "DemonDuck" <kwarner@uneedspeed.net> wrote in message
> news:f0bgbl$hnn$1@build.eclipse.org...
>
>>Will do...
>>
>>I had no idea that JIT would do inlining on the fly. A casual thinker
>>(like me) would think that would take
>>more time than to just push the stack -- or heap -- or...
>>
>>Now you're going to tell me there's no stack and heap right :-)
>
>
> No, there's still a stack and a heap :-) Although I'm told that the latest
> batch of JIT compilers can actually move variables between the two when
> appropriate for optimization purposes...
>
> JIT compilation really is pretty incredible; this is where some of the
> smartest folks in computer science have spent their time over the last
> decade or so.
>
> The basic idea is that, as you may have noticed if you've ever disassembled
> a compiled Java program, Java bytecode contains about the same information
> as Java source code, only a tad more compactly. But bytecode is not native
> machine code, at least not for any real machine, and native execution is
> faster than bytecode interpretation. So rather than just interpret
> bytecode, like early JVMs did, modern JVMs have built-in compilers that
> compile the bytecode to native code for the machine it's running on; that
> way they can take advantage of not only all the flow analysis and the like
> that a traditional compiler would use, but also realtime analysis of the
> execution patterns of your code, e.g., which methods are frequently hit.
> Thus the (trademarked) term "hotspot": like a profiler, it identifies the
> execution hotspots, and focuses its effort there (frequently on a separate
> thread of execution from the one actually running your code - so a method
> may be interpreted the first few times it's hit, and then compiled to native
> code thereafter).
>
> You can learn more about the Sun version of this at
> http://java.sun.com/docs/hotspot/HotSpotFAQ.html.
>
>

I agree. Having spent a lot of time micro-optimizing things (not
surprisingly, when you generate code for people, they will tend to blame
you for all their performance problems), you can't performance-optimize
code without measuring it in great detail. And with a JIT, the
performance of warmed-up code can be drastically different from
cold-start performance, so measuring the performance accurately is a
challenge. The results of such measurements can be very surprising and
at times even counterintuitive (like the cost of a method call will
simply disappear as if it were free because it's been inlined away, or
the presence of an unused derived class can have an impact). If Eclipse
provided a performance analysis tool as totally excellent as the rest of
JDT, I'd be such a happy guy! With a good measurement tool and a focused
effort on only the bottlenecks, you can make simple changes that have a
shockingly huge impact on overall performance. The key is to measure,
measure, and measure again. (And then the depressing part is that what's
optimal for one JVM might well be significantly suboptimal on another one.)
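A minimal sketch of the warm-up effect described above (the class and method names are invented for illustration; absolute timings will vary by machine and JVM, which is exactly why a dedicated harness is preferable to naive timing):

```java
// Time the same work twice: once "cold" and once after the JIT has had
// a chance to compile the hot method. The computed value never changes;
// only the measured time does.
public class WarmupDemo {
    static long work() {
        long acc = 0;
        for (int i = 0; i < 1_000_000; i++) acc += i % 7;
        return acc;
    }

    public static void main(String[] args) {
        long t0 = System.nanoTime();
        long cold = work();                  // likely interpreted, at least partly
        long coldNs = System.nanoTime() - t0;

        for (int i = 0; i < 50; i++) work(); // warm-up: let HotSpot compile work()

        long t1 = System.nanoTime();
        long warm = work();                  // now likely running as native code
        long warmNs = System.nanoTime() - t1;

        System.out.println("cold=" + coldNs + "ns warm=" + warmNs + "ns");
        System.out.println("result=" + cold + " (same both times: " + (cold == warm) + ")");
    }
}
```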

Eric Rizzo wrote:
> Walter Harley wrote:
>> Basically, the JVMs now are incredibly smart about performing
>> on-the-fly optimization, including all the sorts of optimization that
>> a compiler might do as well as things like recompiling to native
>> code. It's quite unlikely you can do a better job by hand with small
>> tweaks of the sort you mention. Your primary job, at this point, is
>> to write code that is clear, simple, direct, correct, and uses
>> appropriate data structures and algorithms; let the compiler and JVM
>> take care of making it run fast.
>
> This is excellent advice. At the current stage of JVM and compiler
> history, worrying about micro-optimizations in application code is
> totally futile unless it is within REALLY large loops or other code
> that executes millions of times in succession.
> Premature optimization is a long-known anti-pattern, one that we have
> all fallen victim to at one time or another as we learn. Unless you
> have profiled your application and found an obvious bottleneck or
> memory problem, the best advice is to focus on the readability and
> design of the code.
>
> Hope this helps,
> Eric

Ed Merks wrote:
> Eric,
>
> I agree. Having spent a lot of time micro-optimizing things (not
> surprisingly, when you generate code for people, they will tend to blame
> you for all their performance problems), you can't performance-optimize
> code without measuring it in great detail. And with a JIT, the
> performance of warmed-up code can be drastically different from
> cold-start performance, so measuring the performance accurately is a
> challenge. The results of such measurements can be very surprising and
> at times even counterintuitive (like the cost of a method call will
> simply disappear as if it were free because it's been inlined away, or
> the presence of an unused derived class can have an impact). If Eclipse
> provided a performance analysis tool as totally excellent as the rest of
> JDT, I'd be such a happy guy! With a good measurement tool and a focused
> effort on only the bottlenecks, you can make simple changes that have a
> shockingly huge impact on overall performance. The key is to measure,
> measure, and measure again. (And then the depressing part is that what's
> optimal for one JVM might well be significantly suboptimal on another one.)
>
>
> Eric Rizzo wrote:
>
>> Walter Harley wrote:
>>
>>> Basically, the JVMs now are incredibly smart about performing
>>> on-the-fly optimization, including all the sorts of optimization that
>>> a compiler might do as well as things like recompiling to native
>>> code. It's quite unlikely you can do a better job by hand with small
>>> tweaks of the sort you mention. Your primary job, at this point, is
>>> to write code that is clear, simple, direct, correct, and uses
>>> appropriate data structures and algorithms; let the compiler and JVM
>>> take care of making it run fast.
>>
>>
>> This is excellent advice. At the current stage of JVM and compiler
>> history, worrying about micro-optimizations in application code is
>> totally futile unless it is within REALLY large loops or other code
>> that executes millions of times in succession.
>> Premature optimization is a long-known anti-pattern, one that we have
>> all fallen victim to at one time or another as we learn. Unless you
>> have profiled your application and found an obvious bottleneck or
>> memory problem, the best advice is to focus on the readability and
>> design of the code.
>>
>> Hope this helps,
>> Eric

DemonDuck wrote:
> Someone mentioned the Eclipse Profiler Plugin on sourceforge --
>
> http://eclipsecolorer.sourceforge.net/index_profiler.html
>
> Does anyone have an opinion on how good this profiler is?
>
> Ed Merks wrote:
This project is no longer being maintained and won't work on the current
Eclipse version. Eclipse has a Test and Performance Tools Platform
(TPTP) project, which includes a profiler.