Posted
by
Soulskill
on Friday July 12, 2013 @11:17AM
from the keep-on-marking-those-benches dept.

MojoKid writes "A few weeks ago, the analyst company ABI Research published a report claiming that Intel's new CloverTrail+ platform (dual-core Medfield) for smartphones was significantly faster and more power efficient than anything ARM's various partners were shipping. If you follow the smartphone market, that was a very surprising claim. Medfield was a decent midrange platform when it launched in 2012, but Intel made it clear that its goal for Medfield was to compete with other platforms in its division — not seize the performance crown outright. Further investigation by other analysts has blown serious holes in the ABI Research report. Not only does it focus on a single, highly questionable benchmark (AnTuTu), the x86 version of that benchmark is running different code than the ARM flavors. Furthermore, the recently released Version 3.3 of the test is much faster on Intel hardware than on any of the other platforms. But even with those caveats in place, the ABI Research report is bad science. Single-source performance comparisons almost inevitably are."

If you write your code in C, you can port it relatively easily to iPhone, Android, WP8, and BlackBerry (depending on how much UI code you have). If you write it in Java, avoiding the NDK, you have to do two to four times as much work to port it.

Which would you rather do: use the NDK and recompile, or write once for each platform? "The right way" isn't always a single choice; it's usually a compromise...

If Intel processors become popular in Android phones, Google will probably introduce a multiple-architecture executable format, much like Apple does with fat Mach-O binaries on the iPhone (currently around 70% of iPhone apps ship two architectures, one for armv7 and one for armv7s).

The last Atom/Android phone I read about had an ARM emulator for running NDK apps. Performance suffers, but it's better than not running at all (particularly since Atom is more powerful than the common Cortex-A9 cores, so it has some performance to burn).

App stores make this less of an issue. If Apple wants to switch the iDevices to x86, they'll tell developers that they must have an x86 build posted by such-and-such date, or their app will be dropped from the store.


Apple DOES NOT do that for older apps. It will often place restrictions on app updates or new app submissions (like requiring iPhone 5 screenshots), but it never forces older apps to do any kind of update - nor should it.

If you enable the x86 portion, the app ships with both x86- and ARM-targeted NDK-compiled binary parts... and MIPS too.
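For reference, this is roughly what enabling multiple ABIs looked like in the NDK build system of that era; the fragment below is a sketch, and the exact ABI names depend on the NDK version.

```make
# Application.mk (sketch). Listing all three ABIs makes ndk-build
# compile every native library once per architecture; the single
# resulting .apk carries ARM, x86, and MIPS copies, and the device
# installs whichever one matches its CPU.
APP_ABI := armeabi-v7a x86 mips
```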

But it requires every app to be recompiled, so it's not like you can launch with a full selection.

All major apps would have that shit working instantly if they got even 10% of the Android share.

No way. It would take a lot of testing, and even for a 10% share a lot of companies would approach it cautiously. It would take years to get 50% of the app market onto a new platform even IF (huge, huge IF) yo

Apple, too, might benefit from a shared architecture between the desktop and the phone.

It'd have to be a hell of a lot more than a "might benefit" for them to do it. It'd have to be the inability of one of the platforms to support future requirements in its existing niche - which is a very distant prospect in either case.

Actually, no: a large plurality of Android apps are coded to Dalvik, which is basically (pun intended) Java bytecode. Very few Android apps use the alternative (the NDK, the native development kit), which allows ASM or whatever, and then only for small snippets. That's why Android apps run indifferently on ARM, x86, and MIPS... various versions of those, actually.

Intel has a VERY long history of questionable ;) benchmarks, going all the way to tweaking processor designs to run benchmark code faster. Microsoft's "Get The Facts" propaganda is just a pale imitation of Intel's history.


Supposedly, benchmarks are written to simulate real workloads. It seems to me that tweaking processor designs to run benchmarks faster is a good idea. If you have a better idea for which applications to design a processor for, perhaps you should join a processor company's workload analysis or planning group.

I agree that Intel, or any company with enough "muscle" (see Nvidia, ATI/AMD, MSFT, IBM, HP, Oracle, etc.), will try to influence the press and show their products in the best possible way, even dish

In summary, while the compiled ARM code performed a series of bitwise operations as written, in order to exercise the CPU instructions, the Intel compiler seems to have applied some compile-time smarts to effectively bypass a lot of the work while achieving the same end result. In a real-world progr

Wow. What's more, apparently that optimisation was added by Intel after the benchmark was developed:

What's more, this optimization wasn't present in ICC until a recent release. Somehow I don't think they just now discovered it has general-purpose value. The more likely case is that they discovered they could manipulate AnTuTu's scores. It seems to coincide well with this third-party report appearing, showing how amazing Atom's perf/W is - using nothing but AnTuTu. Or the leaked scores seen for CloverTrail+ and now Bay Trail, which are AnTuTu. Is this really a coincidence?

So basically they modified their compiler to optimise away the actual benchmark, then got someone to release a third-party report based solely on the benchmark they'd just manipulated the results of.

The interpretation of whether the benchmark is broken depends on what it was trying to ascertain. If it was trying to ascertain which CPU could process instructions fastest and most efficiently, then you could argue it was broken, because the result is heavily influenced by the logic of the compiler rather than the performance of the CPU. If it was trying to ascertain which platform had the better result allowing for toolchain-based optimizations, then you could argue the benchmark was

The Intel code was just optimized by a smarter ICC compiler that unrolled a loop doing trivial bitops, which is a fairly standard thing for compilers to do. If this breaks the benchmark because optimization wasn't in the spirit of what it was trying to measure, fix the benchmark.

It was so standard that the ICC compiler apparently didn't bother to do it until after the benchmark was released, probably because it's unusual for anyone to write code that benefits from that optimisation outside of benchmarks.

The article is, on its face, self-admitted FUD: if, maybe, possibly. For lack of hard facts, the article is raising concerns about what might be happening.

Has anyone tried one of these phones hands-on? I had the opportunity to try one of the earlier Intel phones, but not the new one, so I can't compare the older Intel phone with the newer one. A benchmark between the new and older models would be interesting. I do know how the older Intel phone works: it worked very well. The on

Clover Trail is significantly more power efficient than Medfield because it has a lot more power control stuff in it, so more of it is turned off most of the time. This is not a big secret as far as I know.

The rest of the phone consumes power too, particularly the screen. So YMMV.

ARM has been focusing on mobile platform architecture for much longer than Intel. It's like Honda trying to make a truck: sure, they did it, but it's nothing like a company that has made nothing but trucks from the get-go. Intel needs to stay right where it is king, keep that crown, and stop chasing rabbit holes thinking it might find money in places where ARM is the clear leader.


x86 is a horrible kludge and always has been; it survives not because it's any good but because there is a large amount of closed-source code out there compiled specifically for it. Trying to shoehorn x86 into another market, though - one that isn't already locked into it - just seems ridiculous... Even if Intel manages to become competitive with ARM by using more efficient fab processes, that only hurts end users, since an ARM chip fabbed on the same process would be better still.

Well, the best thing Intel could do for society is simply die... but I can see how they'd think differently.

Anyway, one of the basic tenets of evolution is that it can only happen when you change things. Intel is trying to evolve without changing anything... They'll stay dependent on Microsoft, they'll keep their centralized product development, they'll stay compatible with power-wasting x86 applications, and it looks like they're keeping the dishonest marketing department.

It depends on how the architecture is licensed. ARM sells core licenses and also architecture licenses - licenses on steroids that allow you much latitude in how you design your SoC. Intel, so far, has an "I know best" attitude. The other problem for Intel is that its chips are too expensive: fighting on price means shrinking profit margins, and that makes chasing ARM less of a profitable proposition. On the other hand, Intel might feel its future is threatened by ARM so much that it must go after ARM.

Yep, the power relation is similar, but ARM has competed with them successfully up to now. In fact, that's the main reason ARM has a monopoly in mobiles: Intel killed everybody else... And time is playing for the ARM team, not Intel. With the end of Moore's Law (not a certainty yet, but a possible outcome for the next generation of fabs), Intel's lead in fab technology becomes less relevant.

I don't think there's any chipmaker (CPU, GPU or otherwise) who hasn't been caught doing it. Not that that makes it right, of course.

For the quick readers, note that this is about Clover Trail, not to be confused with the recently announced Bay Trail. Though it does cast doubts on Intel's claims about the latter's performance [extremetech.com]...

What's really hilarious is that this supposedly "rigged" benchmark got almost zero press (never posted on Slashdot or mentioned on major tech websites) when it first came out. Now the ARM religion has to wage a jihad on anybody who claims that it is physically possible to use silicon that doesn't include royalty payments to the Church of ARM to run your smartphone.

Using my own ARM-based smartphone and the Dexplorer app, I looked at AnTuTu and other common Android benchmarks. One interesting thing that stood out was that AnTuTu uses the NDK (i.e., there are C-compiled libraries for both the ARM and x86 ABIs). The other benchmarks, like Quadrant and Linpack, were pure Java. Basically, what I'm seeing is that the Clover Trail+ x86 hardware absolutely is in the same ballpark as the Snapdragon 600 for performance, but the ARM vendors and Android developers have been optimizing Dalvik for the ARM architecture since 2009, while little has been done for x86. Once a real compiler like GCC gets to generate code from C for the x86 Atoms (and also for the ARM parts, BTW - it's a level playing field), we see that Clover Trail+ puts up decent competition against modern ARM chips in the same power envelope.

As for the usual: "Intel cheats using compilers!" whine, please take a big dose of vitamin STFU. I've been bored to death for over a decade by the drone of the ARM jihadists who claim that their architecture is so magical that you can literally knock back a fifth of Tequila and vomit up a perfectly optimized web browser before breakfast. If ARM is truly so beautiful and if the engineers at ARM are truly such geniuses, then it should be trivial for them to implement compilers that blow away x86.

You misunderstand what the Streisand effect is. No one is trying to delete or remove the report published by ABI Research. If you want to know what the Streisand effect really means, a quick trip to Wikipedia will show you the common usage of the term. Discussing the merits or shortcomings of such reports is something anyone with a brain would want to do; only the ignorant or lazy blindly accept everything they read without question.
Your attempt to defend Intel from the accusation of cheating using compile

When it comes to compiled code, the situation is reversed: gcc has been heavily optimized for x86, whereas other architectures, although supported, have had far less work done on them. There's also Intel's own compiler, which generally produces faster code than gcc.

1) Massive increase in die area for the units that translate the rotten x86 ISA to the internal RISC one.
2) Massive power usage by that translation block.
3) Massive IP costs for everything that makes x86 'special' (special in the short-bus sense).
4) Massive coding inefficiency overhead from having to control the true internal RISC ISA with code written to the x86 ISA (usually emitted by a compiler, of course).
5) Massive cost of supporting multiple overlapping compute models: x87, MMX, SSE, SSE2, blah, blah, blah...
6) Massive cost of supporting a stupid number of obsolete and legacy instructions that nobody cares about but that still need to perform well to avoid performance regressions on old binaries.

I hope this doesn't surprise anyone, but they're running different code. They're also running different instruction sets. And they likely have different memory controllers and cache sizes. I think we can say that Intel is pretty darn good at running AnTuTu, but that's all that graph says.