Throughout the rapid and tumultuous history of Android, which is now five years old, almost every aspect of the OS has been changed, updated, or overhauled in some way. Everything, that is, except for the most important part: Dalvik, the virtual machine that runs almost every Android app, has remained virtually the same since day one — and Dalvik is slow. Now, with Android 4.4, Google has revealed that a Dalvik replacement is in the works — a replacement, called Android Runtime (ART), that should improve the performance of Android apps by a huge margin. The early version of ART in Android 4.4 already speeds up apps by around 100%, and the final version should be even better.

When you run software on a computer, such as a PC or smartphone, you are nearly always running compiled code. Compiled code is source code that has been compiled, by the developer, into code that the computer can understand (machine code). If you open an EXE file, you see compiled code — gobbledegook machine code that your CPU can execute. Windows, your web browser, Crysis 3, Linux, iOS apps — these are all examples of compiled code.

Big-budget AAA games like Crysis are nearly always compiled, for performance reasons.

The other kind of code is interpreted code. This comes in many varieties, but the key point is that interpreted code cannot be executed directly by the CPU: it must be translated into machine code at runtime, either by an interpreter that steps through it instruction by instruction, or by a just-in-time (JIT) compiler that turns the hot parts into native code on the fly. The most common example of interpreted code is JavaScript, which your web browser interprets (and JIT-compiles) whenever you visit a website that uses JavaScript. Interpreted code is useful for two primary reasons: you can make changes without recompiling the whole program, and it’s platform agnostic. (You can happily run JavaScript on any platform that has a modern web browser; you can’t run a Windows EXE on Apple’s OS X.)
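To make the distinction concrete, here is a toy sketch (my own illustration; no real engine works exactly like this) of what an interpreter does: a loop that decodes and executes one made-up "bytecode" instruction at a time, paying dispatch overhead on every single instruction.

```java
// Toy illustration only: a minimal interpreter for a made-up
// stack-based "bytecode". Every call to run() re-decodes each
// instruction in the dispatch loop; that per-instruction overhead is
// the basic reason interpretation is slower than executing native
// machine code directly.
public class ToyInterpreter {
    public static final int PUSH = 0, ADD = 1, MUL = 2, RET = 3;

    public static int run(int[] code) {
        int[] stack = new int[16];
        int sp = 0; // stack pointer
        int pc = 0; // program counter
        while (true) {
            switch (code[pc++]) {
                case PUSH: stack[sp++] = code[pc++]; break;
                case ADD:  stack[sp - 2] += stack[sp - 1]; sp--; break;
                case MUL:  stack[sp - 2] *= stack[sp - 1]; sp--; break;
                default:   return stack[--sp]; // RET
            }
        }
    }

    public static void main(String[] args) {
        // Program for (2 + 3) * 4
        int[] program = { PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, RET };
        System.out.println(run(program)); // prints 20
    }
}
```

A real JIT, of course, would translate that dispatch loop away entirely by emitting native instructions for the program itself.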

The other prime example of interpreted code is Java — and Dalvik is essentially Google’s version of Java. Java is desirable because a developer can write a program once, and then have it executed on any hardware platform that has a working interpreter (the Java Virtual Machine). For the same reason, because Android was designed to run on a huge range of platforms, hardware specs, and form factors, Google decided to use the Dalvik virtual machine for Android apps. This way a developer can write a single Dalvik app and be assured that it will run on smartphones, tablets, TVs, embedded devices, and so on.

This graph shows the performance of interpreted JavaScript (red/blue) vs. the same function compiled to native code (orange)

The problem with interpreted code, though, is that it’s slow — like, really slow. In the case of JavaScript, it’s around 20 times slower than the same code that’s been natively compiled with C or C++. Java/Dalvik isn’t quite that slow, but it’s still significantly slower than natively compiled code. In many cases, especially with modern processors, this speed difference isn’t glaringly obvious, but it all adds up. It’s impossible to lay the blame solely on Dalvik for Android’s slower responsiveness or higher power consumption, but it’s definitely a significant factor. It’s a simple equation: Interpreted code takes longer to execute and consumes more CPU time, thus reducing battery life and overall responsiveness.

Android Runtime – ART

Google knows all this about Dalvik, of course, which is why it’s been working on its replacement — Android Runtime (ART) — for more than two years. An early version of ART is included with Android 4.4 and can be enabled in Settings > Developer Options > Select Runtime.

ART straddles an interesting middle ground between compiled and interpreted code, called ahead-of-time (AOT) compilation. Currently, Android apps ship as Dalvik bytecode that is interpreted and JIT-compiled at runtime, every time you open them. This is slow. (iOS apps, by comparison, ship as compiled native code, which is much faster.) With ART enabled, each Android app is compiled to native code when you install it. Then, when it’s time to run the app, it performs with all the alacrity of a native app.
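The trade-off can be sketched in a few lines of Java (an illustration of the concept, not of ART's actual internals). Here a "program" is a list of steps, each either add-n or multiply-by-n: interpret() decodes the steps on every single call, the way a pure interpreter does, while compile() decodes them once up front and builds a composed function, analogous to translating bytecode into native code ahead of time and then just executing it.

```java
import java.util.function.IntUnaryOperator;

// Conceptual sketch of interpret-every-time vs. translate-once.
public class AotSketch {
    // Each step is {op, operand}: op 0 = add, op 1 = multiply
    public static int interpret(int[][] steps, int x) {
        for (int[] s : steps) {
            x = (s[0] == 0) ? x + s[1] : x * s[1]; // decode + execute, every run
        }
        return x;
    }

    public static IntUnaryOperator compile(int[][] steps) {
        IntUnaryOperator f = IntUnaryOperator.identity();
        for (int[] s : steps) {
            final int op = s[0], n = s[1];
            f = f.andThen(x -> (op == 0) ? x + n : x * n); // decoded once, here
        }
        return f; // afterwards, running the program is just calling f
    }

    public static void main(String[] args) {
        int[][] program = { {0, 3}, {1, 2} };     // (x + 3) * 2
        IntUnaryOperator fast = compile(program); // the "install time" step
        System.out.println(interpret(program, 5)); // prints 16
        System.out.println(fast.applyAsInt(5));    // prints 16
    }
}
```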

It obviously takes some time to perform the AOT compilation at install time, but the long-term gains from apps that load and run faster will easily make up for it. You probably won’t even notice the AOT compilation of small apps, but for larger apps there will be a noticeable delay. If you switch an existing device from Dalvik to ART, expect a wait of a few minutes while Android performs AOT compilation for all of your installed apps.

The main advantage of ART is that it allows Android developers to keep writing the exact same code, and to have their apps work across a wide range of hardware specs and form factors, but now those apps will run significantly faster and feel more responsive, and your device’s battery life should improve. Early testing indicates that ART is twice as fast as Dalvik. ART’s compiled code should also perform more consistently than JIT-compiled Dalvik code, reducing UI latency and stuttering. The biggest gains will probably be for computationally intensive apps, such as photo and video editors, but if this early build of ART is anything to go by, there should be significant improvements across the board. ART could be the change that finally makes Android feel as responsive and snappy as iOS.

There’s no timeline for the official introduction of ART, but given that Google has been working on it for a couple of years, and that the implementation in Android 4.4 feels quite mature, it’s probably not that far away. It will probably be a headline feature in Android 4.5 — or more likely, given the scale of the change, Android 5.0.


It means that Android apps will be faster! Dalvik compiles code WHILE you run it, ART will compile code BEFORE you run it. It also means that your battery life should improve, because there is less code running.

Shibi

gobbledegook
ˈgɒb(ə)ldɪˌguːk,-ˌgʊk/
noun informal

1. language that is meaningless or is made unintelligible by excessive use of technical terms. eg: “reams of financial gobbledegook”
synonyms: jargon, unintelligible language, obscure language;

standard

>Visits website about ‘extreme tech’
>Doesn’t know how to use google

Dozerman

I honestly think he was being sarcastic. That being said, nice 4chan greentext.

realtebo

LOL^2

standard

sarcasm²?

symbolset

On my N5 it doesn’t mean diddly because this thing crushes every app you throw at it with the slower Dalvik. I turned on ART anyway though and the benchmarks are off the charts. Literally the benchmarks say “this test is too light for your device.”

On a lower specced device it means stuff will work as fast as it would on a much more expensive device that uses Dalvik. That drives down the cost of acceptable performance, expanding the world of people who can do mobile with reasonable performance.

Avatar1337

Interpretation in runtime is not the same as JIT. JIT is an optimization of interpreted code. Java uses an AOT to java bytecode and then a JIT compiler at runtime to further optimizing the interpretation.

Robert Engels

Not true. JIT compiles to native code.

Avatar1337

That’s what I said. JIT compilation is an optimization of interpreted code by compiling it to native code in runtime. They said “Currently with Android apps, they are interpreted at runtime (using the JIT), every time you open them up”. Interpretation and compiling while running the app are not the same thing.

Robert Engels

Sorry but that is not what you said. You stated “… JIT compiler to further optimizing the interpretation”. This is not correct. After the JIT compiler is finished it is pure native code. Many JIT compilers though have multiple stages, where it will compile to native making certain assumptions, and then either go back to interpreted or further compile again later using “deeper optimizations” if it decides it is beneficial to do so.

Avatar1337

I then said: “JIT compilation is an optimization of interpreted code by compiling it to native code in runtime.”. How can you not get that? I am saying that it is compiling to native code as an optimization of the interpretation. Why do you keep saying that I am wrong when I am clearly saying the same thing as you. You just misunderstood me. My point was to point out that the article formulated this badly: “Currently with Android apps, they are interpreted at runtime (using the JIT), every time you open them up”, because it sounds like interpretation and JIT are the same thing.

You generally have more time to do optimizations while you’re installing (as opposed to when you’re running, where it’s more of a rush since the program needs to remain responsive).
Also, it helps that the processor isn’t busy trying to compile your code (and remember, part of compiling bytecode is interpreting said bytecode!) while the program is running, meaning that more CPU time is available to the program itself.

convolution

My god. I have been waiting for an explanation like this!

John Smith

>it must first be compiled by the interpreter into machine code, using a process called just-in-time compilation (JIT).

Can you fire this writer now? Confirmed for not knowing anything.

http://www.mrseb.co.uk/ Sebastian Anthony

See comment below. Some of the details in this story were intentionally simplified, so that non-programming types can understand the general gist of ART.

(Btw, rather than just randomly spouting angry crap, much better if you leave a comment saying what exactly is wrong with the story — that way, for people looking for more information, they can come down to the comments section and read your highly informative comment. Your comment adds nothing of value.)

D.R.

I haven’t seen such a poorly written article in more than 10 years! Looks like a poorly researched high school assignment…

So many inaccuracies in this article about JIT compilation, interpreting code etc… Dalvik is VM runtime, ART is also going to be VM runtime… Dalvik optimizes code for execution on particular device building ODEX cache files (optimized dalvik executable) and placing them in Dalvik cache… The fact that ART and Dalvik will be interchangeable/selectable in phone settings is tell tale sign that ART is also VM runtime layer, better optimized than Dalvik, but certainly not native code execution… Think of it as Dalvik 2.0 with new hip name.

Message to author: either put more effort in next technical articles or switch to covering marketing and press releases. Otherwise I will have to agree that you should be fired!

Robert Engels

You don’t know what you are talking about. Dalvik uses a JIT, which produces native code during runtime. The new ART produces native code ahead of time, but it still needs a runtime to provide some services and bindings that an Android application expects.

chojin999

You are the one here that clearly has no clue how virtual machines and Java work.

Robert Engels

I have worked on the OpenJDK project. I know exactly how they work.

Also, ART may not compile the entire application; if there are callbacks that are only used in rare cases, there is no reason to compile these to native code (native code is much larger than bytecode).

Finally, there are many “java” semantics that go beyond “pure linux”, so even if the application is compiled, “something – in this case ART” needs to provide the bindings/system services.

If you don’t believe me, you can review the source code to ART if you’d like.

Maybe you should learn a bit more before being so ignorantly aggressive with your mouth…

chojin999

Yeah sure.. you are the Google CEO, you design AMD, Intel , Samsung and Apple CPUs .. You are the Pope too maybe ?

http://oligofren.wordpress.com Carl-Erik Kopseng

You really should get into comedy. It certainly fits you better than programming.

http://www.cardinalphoto.com David Cardinal

Robert — I think you’ve hit the nail on the head. Fortunately for me, otherwise I was going to need to spend the rest of the evening reading the code to figure out what you already have!

Grunt Molzdtroiut

Hello, I was linked here looking for more infos on ART. You seem to know what you are talking about. Can you give me any pointers in the source, code sections etc. that would support the claim that ART performs significantly more compilation ahead of time instead of just in time? I have glanced over the code and it seemed to me that it’s still interpretation and when applicable JIT compilation. Since you are one of the people here who mention that ART performs AOT I assume you have found evidence of this in the ART source code. Thank you very much in advance.

You just told Robert Engels he doesn’t have a clue how VMs or Java work! You are either a comedian or very stupid.
Hint: You just told Scotty Pippen he doesn’t know about basketball.

http://www.mrseb.co.uk/ Sebastian Anthony

Some of the finer details are simplified in this story, so that it’s approachable for non-programming types. Apologies if it’s not exactly, 100% technically correct — sometimes it’s better to convey how something works in general terms, rather than to get bogged down in minutiae :)

Robert Engels

That is true, but in this case the simplifications were way off. The comment by Pookie explains it best – and very simply.

Also, by doing compilation “before” the program runs, you can perform much more sophisticated optimizations (lengthy compilation cycles don’t matter much if they are only done once (at install) – can’t really do these in a JIT and have the application perform).

BUT…

when compiling in a JIT you can do optimizations based on actual usage that can’t be performed ahead of time. These optimizations should not be trivialized. You can read many papers on the amazing optimizations possible in a JIT. This is why many JVM vendors are moving to a hybrid solution offering AOT (ahead of time) compilation, but also doing JIT analysis and redoing the compilation if it would help. The IBM J9 Java JVM is one such implementation. Ultimately, what they do is cache the last JIT output and then use that the next time the program is run (assuming the previous assumptions still hold).

chojin999

Marketing lies. It surely isn’t a Java to direct binary converter. It’s not that the code gets automagically turned into C/C++ or assembly. The ART is still a virtual machine. They can claim huge speed gains by compiling for ART instead of standard Java byte code for Dalvik but that’s only marketing. The speed gain will never be able to match a C/C++, Objective C and assembly compiled code with no virtual machine.

Robert Engels

Not true. It uses LLVM to compile to native ELF binary code ahead of time.

chojin999

You have no clue what you are talking about.

youreasumbass

actually, it sounds like youre the clueless one here…

chojin999

Kid, you are the one that needs to study some IT books about programming, OS design and virtual machines.

http://oligofren.wordpress.com Carl-Erik Kopseng

He can point to actual source code, facts and contributions to the Java project. All you can point to is dog shit, ’cause that’s what your arguments boil down to.

Robert Engels

In fact it even goes as far as to fix up the bindings if your original android app uses JNI (java native interface), so it can be called more efficiently from the generated native code.

http://www.cardinalphoto.com David Cardinal

And you base that accusation & assertions on what? There is quite a bit of evidence that ART is actually a good-old-fashioned compiler back end that generates machine code for some or all of application code at installation time (not exactly rocket science, so why shouldn’t it?). Admittedly, Google’s official word on this is pretty slim so far, but I have seen no evidence whatsoever for your claims otherwise.

chojin999

You have no clue what you are talking about. It’s still a virtual machine. There is no magic trick to turn it into real native code. Anything that can be done is a huge compromise affecting performance.
Maybe you should study a bit about how virtual machines designs work.
And Google can’t discard the whole Java Dalvik virtual machine system otherwise all apps won’t work anymore and would need to be re-designed and re-written for real native code.

http://www.cardinalphoto.com David Cardinal

You’re pretty hopeless. As has been said several times, the Dalvik VM since Android 2.2 has featured an extension called a Just In Time COMPILER! ART definitely includes a compiler. So they are both more than pure virtual machines. They are hybrid runtime environments, with ART even doing compilation upon installation. ‘nuf said. They generate NATIVE CODE. Dalvik does it based on following code execution at run time; ART does it in advance based on static code analysis.

PS I’ve written compilers and virtual machines, as has Robert. Have you done anything besides troll recently?

chojin999

Yeah, sure.. you wrote compilers and virtual machines too.. Now you will be telling us that you were among the ones that created Java too, uh?

The trick used to speed things up with Dalvik is not true native code. At all! And it has many issues.

limehouse

What David’s claiming is true. Don’t get stuck with your ego. This news is what I’ve always been waiting for, although we still need to see performance difference. It should be pretty close I think.

Most of the benchmarks show that JITed code runs 10 to 20 times faster than interpreted code. Many such benchmarks have been published.

It’s worth mentioning that programs running in JIT mode, while still in “learning mode”, run much slower than non-JITed programs.

Drawbacks of JIT

JIT increases the level of unpredictability and complexity in a Java program. It adds another layer that developers don’t really understand. An example of possible bugs: “happens-before” relations in concurrency. The JIT can easily reorder code if the change is safe for a program running in a single thread. To solve this problem, developers give hints to the JIT using the synchronized keyword or explicit locking.

For GC to occur, the program must reach safepoints. For this purpose the JIT injects yieldpoints at regular intervals in the native code.

In addition to scanning the stack to find root references, registers must be scanned, as they may hold objects created by the JIT.
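The synchronized hint mentioned above can be shown concretely. This is a generic Java illustration (not Android-specific): count++ is a read-modify-write, and without the lock the JIT and the CPU may cache or interleave the accesses in ways that are safe single-threaded but lose updates across threads.

```java
// With synchronized establishing happens-before ordering, two racing
// threads always produce the full total; remove it and updates can be
// silently lost.
public class SyncCounter {
    private int count = 0;

    public synchronized void increment() { count++; }
    public synchronized int get() { return count; }

    public static int raceTwoThreads(int perThread) {
        SyncCounter c = new SyncCounter();
        Runnable work = () -> {
            for (int i = 0; i < perThread; i++) c.increment();
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        try {
            a.join(); b.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return c.get(); // always exactly 2 * perThread
    }

    public static void main(String[] args) {
        System.out.println(raceTwoThreads(100_000)); // prints 200000
    }
}
```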

Robert Engels

Can you please post your full name, so if I or someone else ever needs to measure your job qualifications, we have a better chance to do so?

The fact that Java re-orders code is a blessing, because it is very similar to what a modern CPU can/will do on its own. If you think you are writing “concurrently correct” code because it is “sequential” you have a whole host of problems coming your way…

chojin999

Because your full name it’s the truth here.. sure.. and you are the Pope, the IBM CEO, Apple CEO, Intel CEO .. everything.

How old are you kid ? 12 ? 15 ? Or are you an arrogant university student thinking to be the smartest one around going on tech blogs, sites and forums to show how cool you are, uh?

http://oligofren.wordpress.com Carl-Erik Kopseng

A true keyboard warrior. The stupid is strong in this one.

Gath Gealaich

“Example of possible bugs – ‘happens before relations” in concurrency.
JIT can easily reorder code if the change is safe for a program running
in single thread. To solve this problem developers make hints to JIT
using “synchronized” word or explicit locking.”

Except that this is not a problem with JIT compilers, this is a problem with all mainstream computer architectures and compilers. Both compilers AND CPUs reorder operations. In addition, many CPUs don’t guarantee that even such a seemingly innocent operation as a single ordinary 64-bit memory access will be atomic!

rober34

hey dont censure him, some teenage / college enthusiasm is also welcome :)

Gath Gealaich

“You have no clue what you are talking about. It’s still a virtual
machine. There is no magic trick to turn it into real native code.”

A “virtual machine” is exactly what it says on the tin: it’s a machine (with registers, instruction set, interfacing etc.) that’s virtual as opposed to physical. It’s a contract/specification against which you’re generating code. It doesn’t imply any particular mechanism of execution. It can easily be full AOT compilation like Microsoft does with ngen, JIT compilation, or an interpreter. Or even a continuously optimizing system like the Oberon people tried with slim binaries.

But of course, “having written compilers and virtual machines”, you already know all that. The question is why you pretend that you don’t.

Very much in agreement with the other posters taking issue with this article. I think I will avoid ExtremeTech in the future.

massau

couldn’t a JIT compiler make code faster than native code (theoretically), because it knows the architecture and the system status and could anticipate them, like trying to keep the pipeline filled and all registers in use?
Or is the main problem that the JIT doesn’t have enough time to do the job with all these variables?

Robert Engels

Exactly. 90% of the performance in an application comes from 10% of the code, so most JIT compilation tries to focus on that 10%.

massau

so the JIT isn’t yet mature enough to be better than the native compilation.

Robert Engels

No. It depends on the usage whether the optimizations the JIT can make will perform better than the optimizations done at static compilation.

Here is a very trivial example that demonstrates what can happen in a JIT (and often does in HotSpot).

Say you are using a library routine in C that converts an int to a string. In the generic case you would have code similar to the following pseudo code.

if (x < 0) {
    append('-');
    tostring(abs(x));
} else {
    tostring(x);
}

Well, a JIT can figure out that your application never calls tostring with a negative number and will optimize the code so it doesn't even include the (x < 0) check.

A C/C++ compiler cannot do that (although it would probably rely on the modern CPU's branch prediction to make the performance hit negligible).
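For the curious, here is a runnable Java rendering of that pseudocode. The names (tostring, digits) are carried over from the sketch, not from any real library:

```java
// Generic int-to-string routine, as in the pseudocode above. A JIT that
// observes that a given call site never passes a negative x can
// speculatively compile tostring without the x < 0 branch, falling back
// to this generic version if the assumption is ever violated.
public class IntToString {
    public static String tostring(int x) {
        if (x < 0) {
            // Note: Math.abs overflows for Integer.MIN_VALUE; ignored in this sketch
            return "-" + digits(Math.abs(x));
        }
        return digits(x);
    }

    private static String digits(int x) {
        if (x < 10) return String.valueOf((char) ('0' + x));
        return digits(x / 10) + (char) ('0' + x % 10);
    }

    public static void main(String[] args) {
        System.out.println(tostring(-472)); // prints -472
        System.out.println(tostring(0));    // prints 0
    }
}
```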

“Java is high performance. By high performance we mean
adequate. By adequate we mean slow.” –
Mr. Bunny

Anybody that has ever used a non-trivial Java program or has programmed in
Java knows that Java is slower than native programs written in C++. This is a
fact of life, something that we accept when we use Java.

However, many folks would like to convince us that this is just a temporary
condition. Java is not slow by design, they say. Instead, it is slow because
today’s JIT implementations are relatively young and don’t do all the
optimizations they could.

This is incorrect. No matter how good the JITs get, Java will always
be slower than C++.

The Idea

People who claim that Java can be as fast as C++ or even faster often base
their opinion on the idea that more disciplined languages give the compiler more
room for optimization. So, unless you are going to hand-optimize the whole
program, the compiler will do a better job overall.

This is true. Fortran still kicks C++’s ass in numeric computing because it
is more disciplined. With no fear of pointer aliasing the compiler can optimize
better. The only way that C++ can rival the speed of Fortran is with a cleverly
designed active library like Blitz++.

However, in order to achieve overall results like that, the language must be
designed to give the compiler room for optimization. Unfortunately, Java
was not designed that way. So no matter how smart the compilers get, Java will
never approach the speed of C++.

The Benchmarks

Perversely, the only area in which Java can be as fast as C++ is a typical
benchmark. If you need to calculate Nth Fibonacci number or run Linpack, there
is no reason why Java cannot be as fast as C++. As long as all the computation
stays in one class and uses only primitive data types like int and double, the
Java compiler is on equal footing with the C++ compiler.

The Real World

The moment you start using objects in your program, Java loses the potential
for optimization. This section lists some of the reasons why.

1. All Objects are Allocated on the Heap

Java only allocates primitive data types like int and double and object
references on the stack. All objects are allocated on the heap.

For large objects which usually have identity semantics, this is not a
handicap. C++ programmers will also allocate these objects on the heap. However,
for small objects with value semantics, this is a major performance killer.

What small objects? For me these are iterators. I use a lot of them in my
designs. Someone else may use complex numbers. A 3D programmer may use a vector
or a point class. People dealing with time series data will use a time class.
Anybody using these will definitely hate trading a zero-time stack allocation
for a constant-time heap allocation. Put that in a loop and that becomes O(n)
vs. zero. Add another loop and you get O(n^2) vs., again, zero.
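A minimal sketch of the cost being described, using a hypothetical Complex value class: both methods compute the same sum, but the object version performs two heap allocations per loop iteration (one per plus() call and one per new operand), while the primitive version allocates nothing.

```java
// Small value-semantics class vs. primitives: same answer, very
// different allocation behavior.
public class ValueObjects {
    static final class Complex { // tiny class with value semantics
        final double re, im;
        Complex(double re, double im) { this.re = re; this.im = im; }
        Complex plus(Complex o) { return new Complex(re + o.re, im + o.im); }
    }

    public static double sumWithObjects(int n) {
        Complex acc = new Complex(0, 0);
        for (int i = 1; i <= n; i++) acc = acc.plus(new Complex(i, 0)); // 2 heap allocations per step
        return acc.re;
    }

    public static double sumWithPrimitives(int n) {
        double re = 0;
        for (int i = 1; i <= n; i++) re += i; // stays in registers / on the stack
        return re;
    }

    public static void main(String[] args) {
        System.out.println(sumWithObjects(1000));    // prints 500500.0
        System.out.println(sumWithPrimitives(1000)); // prints 500500.0
    }
}
```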

2. Lots of Casts

With the advent of templates, good C++ programmers have been able to avoid
casts almost completely in high-level programs. Unfortunately, Java doesn’t have
templates, so Java code is typically full of casts.

What does that mean for performance? Well, all casts in Java are dynamic
casts, which are expensive. How expensive? Consider how you would implement a dynamic cast:

The fastest thing you could do is assign a number to each class and then have
a matrix that tells if any two classes are related, and if they are, what is the
offset that needs to be added to the pointer in order to make the cast. In that
case, the pseudo-code for the cast would look something like this:
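The code sample appears to be missing from this copy of the article. A hypothetical reconstruction of the matrix-based check being described might look like this (all class ids, tables and offsets are invented for illustration, and the "pointer" is modeled as a plain long):

```java
// Hypothetical sketch: each class is assigned a number; related[from][to]
// says whether the cast is legal, and offset[from][to] gives the pointer
// adjustment to apply when it is.
public class CastMatrix {
    // Class 0 derives from class 1 (with an 8-byte offset); class 2 is unrelated.
    static final boolean[][] related = {
        { true,  true,  false },
        { false, true,  false },
        { false, false, true  },
    };
    static final int[][] offset = {
        { 0, 8, 0 },
        { 0, 0, 0 },
        { 0, 0, 0 },
    };

    /** Returns the adjusted pointer, or throws if the cast is illegal. */
    public static long dynamicCast(long ptr, int fromClass, int toClass) {
        if (!related[fromClass][toClass]) {
            throw new ClassCastException(fromClass + " -> " + toClass);
        }
        return ptr + offset[fromClass][toClass];
    }

    public static void main(String[] args) {
        System.out.println(dynamicCast(1000, 0, 1)); // prints 1008
    }
}
```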

Quite a lot of code, this little cast! And this here is a rosy picture –
using a matrix to represent class relationships takes up a lot of memory and no
sane compiler out there would do that. Instead, they will either use a map or
walk the inheritance hierarchy – both of which will slow things down even
further.

3. Increased Memory Use

Java programs use about double the memory of comparable C++ programs to store
the data. There are three reasons for this:

Programs that utilize automatic garbage collection typically use about 50%
more memory than programs that do manual memory management.

Many of the objects that would be allocated on stack in C++ will be
allocated on the heap in Java.

Java objects will be larger, due to all objects having a virtual table
plus support for synchronization primitives.

A larger memory footprint increases the probability that parts of the program
will be swapped out to the disk. And swap file usage kills the speed like
nothing else.

4. Lack of Control over Details

Java was intentionally designed to be a simple language. Many of the features
available in C++ that give the programmer control over details were
intentionally stripped away.

For example, in C++ one can implement schemes that improve the locality of
reference. Or allocate and free many objects at once. Or play pointer tricks to
make member access faster. Etc.

Programmers deal with high-level concepts. Unlike them, compilers deal
exclusively with low-level ones. To a programmer, a class named Matrix
represents a different high-level concept from a class named Vector. To a
compiler, those names are only entries in the symbol table. What it cares about are the functions that those classes contain, and the statements inside those functions.

Now think about this: say you implement the function exp (double x, double y) that raises x to the exponent y. Can a compiler, just by looking at the statements in that function, figure out that exp (exp (x, 2), 0.5) can be optimized by simply replacing it with x? Of course not!

All the optimizations that a compiler can do are done at the statement level, and they are built into the compiler. So although the programmer might know that two functions are symmetric and cancel each other now, or that the order of some function calls is irrelevant in some place, unless the compiler can figure it out by looking at the statements, the optimization will not be done.
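As a quick numeric check of that example, and one more reason a compiler cannot blindly apply the rewrite: the identity pow(pow(x, 2), 0.5) = x only holds for non-negative x (for negative x it yields |x|).

```java
// Sanity check of the exp(exp(x, 2), 0.5) example from the text,
// using Java's Math.pow.
public class PowIdentity {
    public static double roundTrip(double x) {
        return Math.pow(Math.pow(x, 2.0), 0.5); // square, then square root
    }

    public static void main(String[] args) {
        System.out.println(roundTrip(3.0));  // prints 3.0
        System.out.println(roundTrip(-3.0)); // prints 3.0, NOT -3.0
    }
}
```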

So, if a high-level optimization is to be done, there has to be a way for the
programmer to specify the high-level optimization rules for the compiler.

No popular programming language/system does this today. At least not in the totally open sense, like what Microsoft’s Intentional Programming project promises.
However, in C++ you can do template metaprogramming to implement optimizations that deal with high-level objects. Temporary elimination, partial evaluation, symmetric function call removal and other optimizations can be implemented using templates.
Of course, not all high-level optimizations can be done this way. And implementing some of these things can be cumbersome. But
a lot can be done, and people have implemented some
snazzy libraries using these techniques.

Unfortunately, Java doesn’t have any metaprogramming facilities, and thus high-level optimizations are not possible in Java.

So…

Java, with the current language features, will never be as fast as C++. This pretty much means that it’s not a sensible choice for high-performance software and the highly competitive COTS arena. But its small learning curve, its forgiveness, and its large standard library make it a good choice for some small and medium-sized in-house and custom-built software.

Notes
1.
James Gosling has proposed a number of language features that would help improve Java performance. You can find the text here.
Unfortunately, the Java language has not changed for four years, so it doesn’t seem like these will be implemented any time soon.

2.
The most promising effort to bring generic types to Java is Generic Java. Unfortunately, GJ works by removing all type information when it compiles the program, so what the execution environment sees in the end is again the slow casts.

3. The Garbage Collection FAQ contains the information that garbage collection is slower than a customized allocator (point 4 in the above text).

4. There is a paper that claims that Garbage
Collection Can Be Faster than Stack Allocation. But the requirement is that there is seven times more physical memory than what the program actually uses.
Plus, it describes a stop-and-copy collector and doesn’t take concurrency into account. [Peter Drayton: FWIW, this is an over-simplification of the paper, which provides a means of calculating what the cross-over point is, but doesn’t claim that 7 is a universal cross-over point: it is merely the crossover point he derives using the sample inputs in
the paper.]

Feedback

I received a lot of feedback about this article. Here are the typical
comments, together with my answers:

“You forgot to mention that all methods in Java are virtual,
because nobody is using the final keyword.”

The fact that people are not using the final keyword is not a problem with the language, but with the programmers using it. Also, virtual function calls in general are not problematic because of the call overhead, but because of lost optimization opportunities. But since JITs know how to inline across virtual function boundaries, this is not a big deal.

Java can be faster than C++ because JITs can inline over
virtual function boundaries.

C++ can also be compiled using JITs. Check out the C++ compiler in .NET.

In the end, speed doesn’t matter. Computers spend most of their
time waiting on our input.

Speed still matters. I still wait for my laptop to boot up. I wait for my compiler. I wait on Word when I have a long document.

I work in the financial markets industry. Sometimes I have to run a
simulation over a huge data set. Speed matters in those cases.

It is possible for a JIT to allocate some objects on a stack.

Sure. Some.
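For a concrete picture of the "some", here is a sketch (names are mine, not from the original article) of an allocation HotSpot's escape analysis can remove: the reference never leaves the method, so the JIT can scalar-replace the object and allocate nothing on the heap.

```java
public class EscapeDemo {
    static class Point {
        final double x, y;
        Point(double x, double y) { this.x = x; this.y = y; }
    }

    // p never escapes: it is not returned, not stored in a field, and
    // not passed to code the JIT cannot see through. Escape analysis
    // can therefore replace it with two local doubles (no heap work,
    // no GC pressure).
    static double lengthSquared(double x, double y) {
        Point p = new Point(x, y);
        return p.x * p.x + p.y * p.y;
    }

    public static void main(String[] args) {
        System.out.println(lengthSquared(3, 4)); // prints 25.0
    }
}
```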

Your casting pseudo-code is naive. For classes a check can be
made based on inheritance depth.

First, that’s only a tad faster than the matrix lookup.

Second, that works only for classes, which make up what percentage of casts?
Low-level details are usually implemented through interfaces.
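For reference, the depth-based check works roughly like this sketch (the Meta class is an invented stand-in for a VM's internal class metadata): each class records its ancestors in a "display" array indexed by inheritance depth, so a class cast is one bounds check plus one pointer comparison. Interfaces break the scheme because a class can implement any number of them at no fixed depth, which is why interface casts need a slower search.

```java
public class DisplayCheck {
    // Invented stand-in for per-class VM metadata.
    static class Meta {
        final Meta[] display; // display[i] = ancestor at inheritance depth i
        final int depth;

        Meta(Meta parent) {
            depth = (parent == null) ? 0 : parent.depth + 1;
            display = new Meta[depth + 1];
            if (parent != null) {
                System.arraycopy(parent.display, 0, display, 0, depth);
            }
            display[depth] = this;
        }
    }

    // Constant-time subclass test: valid for single-inheritance
    // classes only, not for interfaces.
    static boolean isSubclassOf(Meta sub, Meta sup) {
        return sup.depth <= sub.depth && sub.display[sup.depth] == sup;
    }

    public static void main(String[] args) {
        Meta object = new Meta(null);   // depth 0
        Meta shape  = new Meta(object); // depth 1
        Meta circle = new Meta(shape);  // depth 2
        System.out.println(isSubclassOf(circle, object)); // prints true
        System.out.println(isSubclassOf(object, circle)); // prints false
    }
}
```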

So we should all use assembly, ha!?

No. We should all use languages that make sense for a given project. Java is great because it has a large standard library that makes many common tasks easy.
It’s more portable than any other popular language (but not 100% portable – different platforms fire events at different times and in different order). It has garbage collection that makes memory management simpler and some constructs like closures possible.

But, at the same time, Java, just like any other language, has some
deficiencies. It has no support for types with value semantics. Its
synchronization constructs are not efficient enough. Its standard library relies on checked exceptions which are evil because they push implementation details into interfaces. Its performance could be better. The math library has some annoying problems. Etc.

Are these deficiencies a big deal? It depends on what you are building. So know a few languages and pick the one that, together with the compiler and available libraries, makes sense for a given project.

massau

A link alone would have been enough.

But this mainly points out that Java is just flawed by design. Neither does its JIT have any interfacing with the OS to optimise for the system at the moment. Maybe some compile options would help Java, or even some preprocessing: it could do a lot more optimisations when it compiles to bytecode, before the JIT makes it native.

Don’t forget that Java also uses a VM, which gives it a lot of overhead.

I have programmed in Java and I know it doesn’t have pointers, etc. I liked some things, but on the other side I missed some parts.

Maybe a new language that is optimised for JIT compilation could be a lot better and faster. But then again, you could make a successor to C++ with a completely new syntax and a lot of higher-level components (vector operations, matrix operations, parfor, variable precision with compiler hints) while still keeping the lower-level functionality.

So you don’t have to write a lot of lines to do little simple things.

chojin999

The new C++11 ISO standard adds a lot of new features to C++ including new syntax too.

What new language features does C++11 provide?
You don’t improve a language by simply adding every feature that someone considers a good idea.
In fact, essentially every feature of most modern languages has been suggested to me for C++ by someone: Try to imagine what the superset of C99, C#, Java, Haskell, Lisp, Python, and Ada would look like.
To make the problem more difficult, remember that it is not feasible to eliminate older features, even if the committee agrees that they are bad: experience shows that users force every implementer to keep providing deprecated and banned features under compatibility switches (or by default) for decades.

To try to select rationally from the flood of suggestions, we devised a set of specific design aims. We couldn’t completely follow them, and they weren’t sufficiently complete to guide the committee in every detail (and IMO couldn’t possibly be that complete).

The result has been a language with greatly improved abstraction mechanisms. The range of abstractions that C++ can express elegantly, flexibly, and at zero cost compared to hand-crafted specialized code has greatly increased. When we say “abstraction” people often just think “classes” or “objects.” C++11 goes far beyond that: the range of user-defined types that can be cleanly and safely expressed has grown with the addition of features such as initializer lists, uniform initialization, template aliases, rvalue references, defaulted and deleted functions, and variadic templates. Their implementation is eased by features such as auto, inherited constructors, and decltype. These enhancements are sufficient to make C++11 feel like a new language.

For a list of accepted language features, see the feature list.

massau

I know the thread object is a lot easier than the POSIX version, but you still have to type a lot of redundant chars. For example, “if(foo == bar)” could be “if foo == bar”. Instead of ending every line with ; (which was good for small screens) they could just use the newline char, and use “…” if you need more than one line for the statement. The {} could be replaced by something where you know what it ends. A nice feature would be that spaces in numbers are ignored, so you can type 3 000 000 instead of typing 3000000.

Instead of having to use && there could be a tri-compare. Strong typing is needed for the compiler and shouldn’t be deleted, so ‘auto’ could be used when needed. Making ‘-+’ or some unused char a new operator for comparisons could be nice: “if x == 10 -+ 3” would mean x is 10 with a deviation of 3, so it’s true as long as x is between 7 and 13.
An example with a modern non-C syntax:

int a, double b

double x = 30

matrix A = 50, 20, 30, 50; 65, 66, 90, 10;

if 0.7 < x < 30
double temp = sum(A)
end if (or fi)

chojin999

Bjarne Stroustrup clearly explains there why you can’t add every syntax from other languages expecting such a mix to make sense and improve things.

It’s not that because a syntax exists in other languages it must be added to C++. Fact is that C++ is, and will keep being, better than any other language for anyone who wants to write maximum-quality, maximum-performance programs (and apps, the same).

Other languages are easier for common people. C/C++ and assembly require a strong understanding of software, OS, and hardware architectures to be used properly. It takes more time to write the code and more testing should be performed, but the end result can be higher-quality, faster programs. It’s up to the programmers to do that and ensure that it happens.

massau

It still doesn’t say that a less verbose language could be as fast as C++. Languages with a better syntax would be MATLAB and Python; they are easier to understand.

C++ is nice and fast because of its age, but the syntax is old and verbose. This was needed for the older compilers, because machines didn’t have much computing power. So a newer language could be as fast but with a less verbose syntax, so you could write a program in 100 lines instead of 1000 lines.

chojin999

MATLAB… Python…? You seriously want to compare those to C/C++ and Objective-C?

massau

I am comparing them on the syntax level, not performance and features. Their syntax is less verbose and a lot cleaner compared to any C-derived language (C, C++, Java, Go, etc.).

I know it was needed back then to speed up the interpretation of the code, but computers got a lot faster and wouldn’t need so many tokens to do the job.

Just look at Python and MATLAB: they can return multiple variables in a clean way. MATLAB has a wonderful array/matrix operation library.

If you look at C, it is a good syntax but it is just getting old. The “next big language” will have a syntax that is a lot cleaner compared to C, but it has to have a lot more features than C to compete with it.

MrBillGates

Hey, it’s me, Bill Gates. Just so you know, C or C++ will always be slower than MS-Basic, which C was created from.

http://www.cardinalphoto.com David Cardinal

In case anyone misses what I assume is the tongue-in-cheek part of that comment, C was written before MS-Basic.

Robert Engels

There are so many things incorrect with the above post it is amazing it is being found on ExtremeTech.

You might start by reading this, rather than a post that is probably 15 years old…

Anyway, another major difference is that with a JIT the code can be optimized for the actual architecture it is running on (i.e. it can generate code that uses AMD64-specific instructions, etc.). An AOT compiler needs to make architecture decisions at compile time (although it can dynamically link against libraries compiled with different optimization characteristics).

You need to go back to school and learn the fundamentals of computers and software design.

It might help you to start by studying the OpenJDK source code, and reading academic papers on the nature of a VM and JIT/AOT compilation.

Although quite frankly, it probably doesn’t matter. Someone with your obvious mental blocks probably chooses the wrong data structure to begin with so no matter what you develop in it is going to be messy and slow….

Tim Jordan

If the programmer uses a bubble sort in a situation in which a heap sort would have been better, the optimizer could not know this. “Given a description of an arbitrary computer program, decide whether the program finishes running or continues to run forever.” This is called the halting problem, and Alan Turing proved in 1936 that it is not solvable. So if a program cannot analyze another program and determine how long it is going to run, then it cannot decide which sorting algorithm to use. Programmers are the best optimizers. They can profile their code to see where it spends most of its time, and they understand sorting algorithms: which uses more space or time, and how the running time grows as the data set increases, whether n² or n log(n). C is faster than C++, and C++ is faster than any interpreted language.

http://www.cardinalphoto.com David Cardinal

Tim — I’d add the caveat that programmers _can_ be the best optimizers, if they really work at it. When we added register-coloring optimizations to our C compiler (this was a while ago), we got a raft of grief from programmers sure that they were doing a better job of allocating registers than the compiler ever could. But when we benchmarked it, the compiler did a better job more often than not.

Tim Jordan

One cannot allocate registers using C. Those programmers you talk about never had the ability to allocate registers. This would have to be done at the machine-code level, and you said that they were complaining about your C compiler. C cannot manipulate or choose which registers to use. Plus, no company makes a C compiler except the CPU manufacturers, and CPU manufacturers don’t have programmers who give them feedback, so I can determine that you are lying about your job; and the fact that you believe it is possible to choose registers from C tells me that you are in high school. Probably a drama major. How many times have you been beaten up because you sashay your ass past the football team?

http://www.cardinalphoto.com David Cardinal

What? For the record, it was the C compiler on the Amdahl 470, where I worked in the 1980s. We supported the register keyword (look it up, it’s in the language, although pretty much unused now) in the compiler, which allowed programmers to specify which local variables they wanted the compiler to put in one of the mainframe’s registers. We had lots of programmers that used the C compiler (we wrote and sold a version of UNIX that was written in C, and we wrote many of our own CAE tools, also written in C), so we got lots of feedback, since many of the programs written in C took hours or even days to run (simulating our new hardware). So maybe you should actually check your facts before slinging insults.

http://oligofren.wordpress.com Carl-Erik Kopseng

“No company makes a C compiler except the CPU manufacturers.” Are you high? Lots of people make new compilers every year. It’s often part of a bachelor’s in computer science. Even I have made a basic C compiler in uni.

There are basically two CPU producers today for x86: AMD and Intel. AMD has no compiler I know of. Intel has a very good one. Yet I can name several others: Turbo C, GCC, Clang, … Wikipedia has a list on the order of a magnitude larger: https://en.wikipedia.org/wiki/List_of_compilers#C_compilers

http://www.cardinalphoto.com/ David Cardinal

FWIW, the register keyword in C can certainly be implemented by a compiler as a suggestion that the variable be kept in a register. We did that at Amdahl for a few years, but after we added intelligent register allocation to the compiler we found it to be much more efficient than letting the programmers pick which variables went into registers, so we started ignoring the keyword (we didn’t tell them though:-))

massau

Is there a way around that problem? Or does it need some kind of really exotic computing (quantum computing) to solve it?

Can’t you compare the sorting algorithms by looking at the length of the array? Short ones would use bubble sort, long ones quick sort, and really long ones multi-threaded processing or a merge sort on the GPU. This doesn’t require a JIT, so it would also be possible to do this natively.
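Production sort routines do apply a limited form of this idea: they pick the strategy by subarray length at run time, no JIT required. Here is a sketch of such a hybrid merge sort that falls back to insertion sort for short subarrays; the cutoff of 16 is an arbitrary illustrative value, not taken from any particular library.

```java
import java.util.Arrays;
import java.util.Random;

public class HybridSort {
    static final int SMALL = 16; // illustrative cutoff, not from a real library

    static void sort(int[] a) { sort(a, 0, a.length); }

    // Merge sort on [lo, hi), switching strategy by subarray length.
    static void sort(int[] a, int lo, int hi) {
        if (hi - lo <= SMALL) { insertion(a, lo, hi); return; }
        int mid = (lo + hi) >>> 1;
        sort(a, lo, mid);
        sort(a, mid, hi);
        merge(a, lo, mid, hi);
    }

    // Insertion sort wins on tiny ranges despite being O(n^2).
    static void insertion(int[] a, int lo, int hi) {
        for (int i = lo + 1; i < hi; i++) {
            for (int j = i; j > lo && a[j - 1] > a[j]; j--) {
                int t = a[j]; a[j] = a[j - 1]; a[j - 1] = t;
            }
        }
    }

    // Merge [lo, mid) and [mid, hi) using a copy of the left half.
    static void merge(int[] a, int lo, int mid, int hi) {
        int[] left = Arrays.copyOfRange(a, lo, mid);
        int i = 0, j = mid, k = lo;
        while (i < left.length && j < hi) {
            a[k++] = (left[i] <= a[j]) ? left[i++] : a[j++];
        }
        while (i < left.length) a[k++] = left[i++];
    }

    public static void main(String[] args) {
        int[] data = new Random(42).ints(1_000, 0, 10_000).toArray();
        int[] expected = data.clone();
        Arrays.sort(expected);
        sort(data);
        System.out.println(Arrays.equals(data, expected)); // prints true
    }
}
```

Note the limit of the approach, which is Tim Jordan's point below: the length test only helps when the whole input is in hand; it says nothing about streamed data.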

Tim Jordan

Only mom and dad sort records whose length they know. Most sorts in industry sort the data as it arrives. Like in a lottery: when customers buy a lottery ticket, the system has to sort their guesses so that it can find the winner as fast as possible. This involves sorting the data as it arrives. The analyzer doesn’t know that it is sorting lottery-ticket data, that the numbers are from 1 to 30, or that each person gets 6 guesses per ticket. It only knows that it is getting data. Without knowing something about the data, and without knowing how many people are going to buy a lottery ticket, there is no way a non-living computing machine can ascertain the best method, unless it was a cyborg like Data on Star Trek.

roebling

Works flawlessly on my Galaxy Nexus/Android 4.4 v.2013.11.10, not at all on my Nexus 7 FHD/Android 4.4 PAv.2

Jacob Wadsworth

So amazing that Android has improved and advanced so much in a span of only 5 years. Other developers and companies perhaps need more than that to accomplish what they have. I’ll be looking forward to more developments in the next couple of months. It is really advantageous to be an Android user, especially for business purposes. – http://www.brinksmachine.com/

quesl

great

ac1dra1n

Shouldn’t this have been there since day 1?

http://bit.ly/ANDROIDISTHEBEST Chuck Norris

Ever heard of “technology advancement”? By your logic, we should have had quad-core CPUs since 1946.

ac1dra1n

Not necessarily what I mean. iOS has had it since… 2008(?), I believe.

http://bit.ly/ANDROIDISTHEBEST Chuck Norris

iOS is not an open source OS, not to mention that it’s used on about 15 different devices, whereas Android has over 500 different devices.

ac1dra1n

Android ART compiles the app for every device, like when one compiles code for one computer: the code is compiled for that computer and that computer only. (I know that isn’t true for all code, but the point remains.) So the app is compiled specifically by the device, for that device. So I don’t think it would matter how many devices there are; the app is compiled by the device anyway.

http://bit.ly/ANDROIDISTHEBEST Chuck Norris

Well then it obviously wasn’t possible before.

rober34

In fact it is used on only 1 device: all Apple devices have the same architecture except for incremental changes, as opposed to Android, where you have hundreds of radically different platforms.

http://oligofren.wordpress.com Carl-Erik Kopseng

No. iOS apps are compiled to native code before they are shipped. They can do this because all iOS devices can use the same binaries.

Android apps are compiled to bytecode before they are shipped. Then the app is compiled by ART _on_ the device.

This is because devices differ, and some devices will never have ART (older Android) and need to run the bytecode.

Aditya Ghosh

Hell, why not throw Gecko in there?

http://www.skiusainc.com/ Ehtesham Shaikh

I think there is no comparison between Android and iOS. Both platforms are much different from each other. If someone wants to buy a mobile phone other than Android, it’s better to go for WP.

Metrx Qin

I will give it a try.

davinci5

Excellent article. Thanks!

Mangap

We’re still waiting for Samsung and other manufacturers to deliver the KitKat upgrade. I hope they also prepare for the Android 5 upgrade too.

SimonPieman

My Nexus5 already feels snappier than any iOS product, so I really don’t know what you are talking about.

ok heh

Learn reading comprehension or go home.

superg05

What I wanted to know from reading this article was: does this fix their Oracle issue?
