On Thu, 2007-08-16 at 20:09 -0700, Erick Tryzelaar wrote:
> > So I don't buy 'slow' as an argument: the technique is much
> > FASTER than any JIT system in all aspects, in fact it IS
> > a JIT compiler -- it just compiles the whole program all the
> > way from source with disk based caching which persists over
> > invocations.
> >
>
> For loose definitions of JIT :) It doesn't do runtime optimization of
> the code, of course.
>
> And to be fair, since whole program optimization needs to start roughly
> from scratch every time, you can have some ugly compile times.
Bytecode has to be compiled too. If you have a one-off script which
doesn't benefit much from optimisation, then it is a toss-up whether
bytecode compilation followed by JIT-based machine code generation
costs more than C code generation followed by C compilation.
If the code has to run for a long time, or be re-run often,
then the C compilation wins hands down. You would need a
VERY sophisticated JIT to actually use runtime information
to tune code generation, beyond 'it got used at least once'
of course. OTOH traditional compilers can apply all sorts of
static-analysis-driven optimisations a JIT cannot, because
they see the bigger picture.
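To make the trade-off concrete, here is a toy cost model (all numbers
are purely illustrative, not measurements): a cheap bytecode compile
with slower JIT'd runs versus an expensive native compile with faster
runs. Native wins once the run count crosses a break-even point:

```python
def break_even_runs(native_compile, jit_compile, jit_run, native_run):
    """Runs after which native compilation is the cheaper strategy.

    Total cost after n runs of the same program:
      bytecode+JIT: jit_compile    + n * jit_run
      native:       native_compile + n * native_run
    Solving for equal total cost gives the break-even n.
    """
    assert jit_run > native_run, "model assumes native code runs faster"
    return (native_compile - jit_compile) / (jit_run - native_run)

# Illustrative numbers only: a 10s native compile vs a 1s bytecode
# compile, with per-run times of 1s (native) vs 2s (JIT'd bytecode).
print(break_even_runs(10.0, 1.0, 2.0, 1.0))  # -> 9.0
```

Below nine runs the cheap compile wins; above it, the native binary
has paid for itself -- which is the "re-run often" case above.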
So roughly my feeling is that JIT offers NO performance advantages.
It's the worst possible combination you can have.
The advantage of JIT is that you can improve the performance
of a VM when you have to use a VM, either because you're stuck
using rubbish like the JVM for political reasons, or because you
need to maintain a secure, restricted environment, for example
when running scripts from a web server. In the latter case a
properly designed language translator can make the same guarantees,
but correctness of translator-based security assurance is probably
harder to demonstrate than for a VM.
I do agree a *sophisticated* JIT could outperform compiled native
code IF it were able to dynamically generate code based on
real-time feedback from the actually running code, but this is a
VERY difficult job, *especially* with modern processors which
already do exactly this kind of thing with branch prediction etc.:
the JIT now has to second-guess the CPU circuitry to be able
to calculate alternative encodings.
In some sense I make the argument strongly AGAINST VM implementations.
I would argue *source code* is the proper object to execute, and a
traditional compiler driven by changes to the source is the correct
way to execute source: binary code should be regarded as a cache,
not the final product.
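A minimal sketch of the binary-as-cache idea (the names here are
hypothetical, and the compile step is a stand-in for invoking a real
compiler such as ocamlopt): the harness keys cached output on a hash
of the source, so the binary persists over invocations and is
rebuilt only when the source changes:

```python
import hashlib
import os

class SourceCache:
    """Treat compiled output as a cache keyed on the source text.

    compile_fn stands in for a real compiler invocation (e.g.
    shelling out to ocamlopt); it maps source text to binary bytes.
    """
    def __init__(self, cache_dir, compile_fn):
        self.cache_dir = cache_dir
        self.compile_fn = compile_fn
        os.makedirs(cache_dir, exist_ok=True)

    def binary_for(self, source):
        key = hashlib.sha256(source.encode()).hexdigest()
        path = os.path.join(self.cache_dir, key)
        if os.path.exists(path):
            # Cache hit: source unchanged, reuse the stored binary.
            with open(path, "rb") as f:
                return f.read()
        # Cache miss: compile once, persist across invocations.
        binary = self.compile_fn(source)
        with open(path, "wb") as f:
            f.write(binary)
        return binary
```

Under this model "executing a program" means handing the source to
the harness; whether a fresh compile happens is an internal detail,
just like a cold branch predictor.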
It's clear that this is entirely possible for Ocaml with
a suitable harness and minor language tweaks: Ocamlopt.opt
is very fast and Ocaml sources are quite portable, so
executing Ocaml *source* code is the proper
model of program execution -- the native code compilation should
be regarded just like a JIT optimisation.
The biggest obstacle here is trivial: the lack of a proper
language construct for stating dependencies, and of a proper
packaging model. Felix does this right. Ocaml (with Ocamldep
and some fiddling) could do it too.
With such a program for Ocaml, Debian packagers would be ecstatic --
no more binaries. Just distribute and execute source.
BTW: AFAICS Alain Frisch's patch to run ocamltop as native code
with native-code dynamic loading comes very close to realising this.
--
John Skaller <skaller at users dot sf dot net>
Felix, successor to C++: http://felix.sf.net