Today many applications are based on machine code generated from code
written in languages such as C and C++, while others are based on byte
code generated from code written in languages such as Java and
Smalltalk (yeah, Smalltalk!). (For simplicity I will ignore here
matters such as:

One can debate the pros and cons of each approach, but that is not the
issue I want to discuss here.

Advances in hardware are imposing the reality of parallel processing
in the form of multi-core processors. Compilers for both byte code and
native code are forced to face this issue. What I want to know is:
will this transition to multi-core narrow or widen the performance gap
between the byte code and native code approaches (assuming native code
is actually faster)?

Some thoughts:

It seems to me that byte code compilers and the virtual machines they
target must be modified to incorporate knowledge of multi-core
hardware, with the possible reward that the byte code approach will
gain performance ground on the native code approach. Some may debate
this point, arguing instead that the byte code interpreter is where
the knowledge of multi-core should be embedded. There must be research
going on in this arena. Any news or results?
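To ground the discussion a little: today's JVM already exposes multi-core hardware to byte code programs by mapping language-level threads onto OS threads that the kernel is free to schedule on any core. Here is a minimal Java sketch (the class and method names are mine, purely for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class MultiCoreSum {
    // Sum 1..n with one task per available core. The JVM executes the
    // byte code, but the OS scheduler decides which core runs each thread.
    static long parallelSum(long n) throws Exception {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        long chunk = n / cores;
        List<Future<Long>> futures = new ArrayList<>();
        for (int i = 0; i < cores; i++) {
            final long lo = i * chunk + 1;
            // The last task absorbs any remainder when n % cores != 0.
            final long hi = (i == cores - 1) ? n : lo + chunk - 1;
            futures.add(pool.submit(() -> {
                long s = 0;
                for (long k = lo; k <= hi; k++) s += k;
                return s;
            }));
        }
        long total = 0;
        for (Future<Long> f : futures) total += f.get();
        pool.shutdown();
        return total;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(parallelSum(1_000_000L)); // 500000500000
    }
}
```

Note that the byte code itself carries no notion of cores here; the parallelism comes from the thread pool sized at run time, which is exactly the division of labor the question above is probing.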

Interpreters that do JIT translation to native code may become less
effective, because generating good multi-core native code takes longer
than generating single-threaded code, and JIT compilation happens at
run time. This suggests that the byte code approach is going to lose
performance ground.

Perhaps the solution is to somehow group byte code into packets that
can be executed on a single processor and have the interpreter do
packet-to-processor scheduling. (I don't buy this idea, though.)
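For what it's worth, something close to this "packet" idea already exists in the Java runtime as fork/join tasks: work is split into chunks, and a work-stealing scheduler hands the chunks to worker threads, one per core by default. A sketch (the names and the sum-of-squares workload are my own, purely illustrative):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// One "packet" of work: the sum of squares over [lo, hi]. Ranges above
// the threshold are split in half; ForkJoinPool's work-stealing
// scheduler assigns the resulting tasks to per-core worker threads.
class SquareSumTask extends RecursiveTask<Long> {
    private static final long THRESHOLD = 1_000;
    private final long lo, hi;
    SquareSumTask(long lo, long hi) { this.lo = lo; this.hi = hi; }

    @Override protected Long compute() {
        if (hi - lo <= THRESHOLD) {
            long s = 0;
            for (long k = lo; k <= hi; k++) s += k * k;
            return s;
        }
        long mid = (lo + hi) / 2;
        SquareSumTask left = new SquareSumTask(lo, mid);
        left.fork();  // schedule the left half as its own packet
        long rightResult = new SquareSumTask(mid + 1, hi).compute();
        return rightResult + left.join();
    }
}

public class PacketScheduling {
    static long parallelSumOfSquares(long n) {
        return ForkJoinPool.commonPool().invoke(new SquareSumTask(1, n));
    }

    public static void main(String[] args) {
        System.out.println(parallelSumOfSquares(10_000L)); // 333383335000
    }
}
```

The difference from the idea above is that the grouping happens in library code running on the VM, not in the interpreter itself, so the VM needs no special knowledge of which byte codes form a packet.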

There are issues other than performance to consider in the byte code
vs. native code debate. How are those issues affected by the
transition to multi-core?

There are doubtless many issues here that I haven't even dreamed of.
Feel free to bring them forward.