It depends. I'd argue that for long-running processes, Java is about as good a choice as C or C++. Compiled languages get up and running a lot more quickly, but in a JITed runtime the optimizer has better information to work with, because it operates at runtime.

Lastly, don't forget about OCaml. It's a compiled object-oriented language that can run at near-C speeds.

Other than startup cost, I don't know of a theoretical reason why a JIT platform could not perform as well as a native-code platform.

There are other language features, like garbage collection, that mean Java will use more resources and probably be slower, but it is very easy to overstate this advantage.

In practice, however, there are different programming cultures around C++ and Java. This results in Java designs often being more abstracted, relying more on reflection and various runtime linking patterns that make them hard or even impossible to optimize quite as well. Yes, you could do all these same things in C++ and make it slower, but in my humble opinion there is less cultural acceptance of that in the C++ community. I think this is partly due to C++'s reputation for "being fast" — so it's self-fulfilling!

For things like device drivers and some embedded applications you have no choice but to use C/C++, but most developers will not encounter these scenarios.

I would say Java for rapid application development, and C++ for raw power.

Java has numerous libraries that make development much simpler — one example being the Swing API, which makes GUI development almost ridiculously easy, especially with an IDE like NetBeans. File I/O is also much easier (IMO) in Java than in C/C++. Java also has the advantage of being easily ported to other platforms, since it runs in a virtual machine.

C++ has long been used in the graphics community. In my understanding, it is used because it works closely with the hardware (being loosely typed and compiled, not interpreted in any way) but still uses object-oriented principles for structure and organization. The compilable aspect is important for GPU programming. Also, I don't think you can drop ASM instructions into a Java program. The combination of speed and flexibility makes C++ ideal for graphics processing and other real-time applications.

So, in short, whichever language fits best for your application. These are just the two object-oriented languages that I am most familiar with; there may be others as well that are better than these in certain applications.

@Michael, "char c = 1;" is a C (not C++!) statement, and C is indeed a weakly typed language. However, C++ itself is by design a strongly typed language. The only reason your statement will compile in a C++ compiler is that the people who standardized C++ recognized the need to support legacy C code (remember, C++ was invented to overcome the failures of C), and this was (and still is) a major issue. May I recommend reading "Thinking in C++, Volume 1" by Bruce Eckel (available for free on his website) before you go into further discussions of C++.
– Jas Nov 2 '10 at 19:14

@Jas: Whether or not implicit conversions are there for legacy reasons or by design is irrelevant; the fact remains that C++ allows many implicit conversions, and this makes it a weakly typed language. Granted, it's stronger than most other weakly typed languages, but it's still weak.
– Peter Alexander Dec 5 '10 at 11:54

Any time you care deeply about performance, you generally want to get as close to the metal as you can. In most languages, you can write out performance critical segments in C code. C programmers can drop down to assembly language for the really critical stuff. So if I'm writing some C# code, but I really need a tight performance on an inner loop, I can write some C or C++ code and use interop to call that code. If I need even more performance, I can write assembly in my C library. Going lower than assembly is possible, but who wants to write machine code these days?

However, and this is the big consideration, dropping close to the metal is only high-performance for small, tight goals. If I was writing a 3D renderer, I might do the floating point math and rendering in C, (using a library to execute it on the video card.) But performance problems are also architectural, and performance issues from large-scale problems are often better solved in a high level language.

Look at Erlang: Ericsson needed a language to do massive parallel work easily, because doing parallel processing was going to get them way more performance than any tightly optimized C routines running on one CPU core. Likewise, having the fastest code running in your loop is only performance enhancing if you can't remove the loop entirely by doing something better at the high level.

You can do huge system, high level programming in C, but sometimes the greater expressiveness of a more powerful language will show opportunities for architectural optimizations that wouldn't be obvious otherwise.

I'm absolutely certain that you never called interop to improve performance, because it almost never does.
– Eric Nov 2 '10 at 21:21

Sigh. No, you're right; I've P/Invoked plenty of times, but never for performance, and it does appear to be very slow. The better method for CLR code would be to write a wrapper in Managed C++. But that wasn't really the point anyway. It was that every big language (Haskell, Python, C#, Java, etc.) has some means of performantly calling into native code when the need arises.
– CodexArcanum Nov 2 '10 at 21:57

Great reference to Erlang. I understood its specificity when I realized that some eggdrops (IRC bots) were written in Erlang instead of C: why waste time optimizing an eggdrop in C if Erlang does it for you? :)
– JoeBilly Nov 2 '10 at 21:58

Assembly instructions have a one-to-one correspondence with machine instructions (except for some assembly macros), so I doubt that writing machine code by hand is going to improve performance.
– Robert Harvey Nov 3 '10 at 21:07

@RobertH There is such a thing as hand-optimized assembly code (which takes a lot of time, I understand, as it has to be crafted to the task and architecture at hand); so I politely suggest you are not considering the greater scale of loops and such, which can be optimized, but seldom better than a good compiler. Thus it would be useful only if a good compiler is unavailable for the project at hand. This is what I've picked up from a few discussions around here or on SO.
– Mark C Nov 22 '10 at 23:34

Setting aside development effort, time, and cost, you are going to get the most performance from languages that are statically typed and compiled. Dynamic, interpreted (or just-in-time compiled, a.k.a. JIT) languages, on the other hand, are generally expected to reduce development effort, time, and cost — and, additionally, to reduce deployment effort, time, and cost.

C++ is one of the obvious winners here and is used extensively in the game development community for that reason.

Scalable solutions, however, seem to benefit from the reduced deployment costs of dynamic and interpreted languages. So while one box may run slower than equivalent C++ code, a cloud of interpreters or virtual machines may have the advantage, assuming equal deployment budgets. Java is one example, but clouds of Ruby, or particularly Scala, may be more indicative of this trend.