The programming paradigm has shifted from non-portable, domain-specific programming to portable, rapid application development; from proprietary languages to open languages; and from system programming languages to scripting languages. This paradigm shift has created the following needs and market opportunities:

a cross-platform scripting language meeting most programming needs;

a highly interactive development system without lengthy compile/link/execute/debug cycles;

a joint interpreter/compiler system for mission-critical applications;

a platform for rapid application development, running on different operating systems and devices, safely and dynamically over a network.

Since C (and C sharp, which I usually call C dumb) is one of the most unfriendly programming environments I have ever seen, and the good guys have seemingly made it even more complex, this means, more or less:

5. No Assignment, Transfer or Disclosure. Licensee shall not transfer, disclose, disseminate, provide or otherwise make available all or any part of the Licensed Software or documentation to a third party without the prior written consent of SoftIntegration. Licensee shall not disclose the results of any benchmark tests of the Licensed Software to any third party without SoftIntegration's prior written approval. Neither the Licensed Software nor this Agreement may be assigned or otherwise transferred by Licensee.

Maybe someone else is interested in trying the thingy and keeping his/her big mouth shut about how it works....

When looking into the first links, my first thought was "second-generation Java": more universal, MUCH slower, ...

Later I found the "Java" line.

It's a pity that a memory increase (thanks to the low price of chips) leads to unnecessary memory waste.
The same goes for the increase in processor speed and core count: 10,000 machine instructions generated by something like .NET or the scripting language introduced here apparently do not matter, even when some simple ANSI C, or especially assembler code, could do the same job in a few dozen machine instructions.

The idea is IMHO nice, and nothing should prevent you, once you have successfully tested the code in "interpreted" mode, from compiling it under your standard C# compiler, thus making it in no way different from any program developed along the "old" paradigm.

These byte-code instructions are universal, not tied to any specific CPU architecture such as Intel x86, AMD, or PowerPC. To run byte code, Java provides a virtual machine that maps the byte code onto the equivalent native instructions of the host machine.

Java code is optimized for performance first when it is compiled to byte code, and again at run time by the virtual machine. The virtual machine layer is a bother, but it is also the key to portability and to letting older binaries run on newer Java versions. Some manufacturers even built CPUs that understand Java byte code natively: http://en.wikipedia..../Java_processor. These sped up execution considerably, but they were never a commercial success.

To run byte code, Java provides a virtual machine that maps the byte code onto the equivalent native instructions of the host machine.

Yep, in my perverted mind this VM interprets the byte code. The Ch approach is that you write "portable" C code, test it in the interpreted environment, and then compile it as "native" for the target (or run it as "interpreted" in the same Ch, but running on the target).

Unless I missed something (quite probable, since this goes far outside my specific field of competence), Java is NOT an "efficient" solution: it is slow as molasses when it comes to running anything "serious", and the size of the Java runtime is, objectively, a mass of bloat.