"ARM hasn't published DMIPS/MHz numbers for the Cortex A15, although rumors place its performance around 3.5 DMIPS/MHz."

Krait does 3.3 DMIPS/MHz, so if a dual-core Cortex A15 ran at the same frequency, I'd imagine the two would be fairly comparable (obviously ignoring all the other elements that could help performance on either of them).

And if that's the case, HTC will have an interesting problem with their new lineup. If the rumours are correct, their new flagship One X, using the Tegra 3 AP33 chipset at 1.5GHz with a 4.7-inch 720p screen, might be slower than the One S, which sports the Snapdragon S4 chipset and a 4.3-inch qHD screen.

I can't wait to see the power savings, especially since the modem is a huge power draw. One of the benefits is that Qualcomm is the manufacturer, which means an integrated chip and lower power consumption (as well as a thinner device).

I'd like to see a clock-for-clock comparison. Tegra 3's A9 is at 1.3GHz and Krait is at 1.5GHz here. I'd be interested to see what happens when the A9 gets scaled down to 28nm, and whether or not Krait will still have an advantage.

And what would you need a 1.5GHz A9 for at 28nm? The A15 is much better than any A9, and will come at lower process nodes. Besides, you can see that the 1.2GHz Exynos is much slower than the 1.5GHz Krait, so I don't think a higher clock will change that.

I agree the wording is a bit awkward there since they are both driving identical numbers of pixels. If he meant to compare it to the earlier 720p results it'd probably be better to make that explicit.

I've been using my old Android device, which came with Android 1.6 and has been CyanogenMod-ded to Gingerbread (it's not so responsive when running more than one app), because I need the new version of the Gmail app.

Since both devices actually render the same number of pixels but with different aspect ratios, could the performance hit seen on the iPhone 4S be the result of graphics being rendered at a standard aspect ratio (16:9 or something else) and then having to be transformed to fit the particular screen?

Maybe it's because at the lower resolution the faster CPU on the Krait (newer architecture with higher clocks) matters more than the faster GPU on the A5. When the resolution grows, the difference between the GPUs becomes more apparent.

Considering Apple controls the entire software stack and the A5 silicon, it'd be pretty stupid of them to do that. And if you look at how performance scales between the iPad (4:3) and iPhone (16:9), there's no slowdown due to aspect ratio.

There are some hiccups in Android that have to do with the UI thread blocking on storage lookups, but for the most part it's a CPU thing. The thing to keep in mind is that UI fluidity is an entirely different type of code than Javascript parsing. And looking at the Basemark results, Krait is quite capable in that department.

I don't know for sure, not a definitive answer here, just adding to the discussion.

Like you said, it's a reference design (Mobile Development Platform). They put as little time as possible into making this pretty.

When I was in college we had some old development platforms for some Motorola chips that were essentially a large circuit card with ports on all the sides for all the I/O and buttons to push for different operating modes like programming mode. It in no way looked like what an actual product would look like - because that wasn't its purpose.

Granted, we're only given SunSpider and BrowserMark benchmarks for the Atom Z2460 reference platform, but they're both actually ahead of the numbers for the Krait MDP - 1331.5 versus 1532 on SunSpider and 116425 vs 110345 on BrowserMark. While I expected Atom to be competitive, I'd thought it likely for Krait to be slightly ahead on the single threaded benchmarks, so I'm somewhat surprised that it's not. (Note that I'm somewhat surprised that there was no mention of how Krait compares to Atom Z2460 in the article.)

As for power, that same article states that the Atom Z2460 SoC consumes ~750 mW at 1.6GHz - that's for the entire SoC, not just the CPU core. It'll be quite interesting to see how actual battery life compares between products once released.

The difference is, one is Intel's numbers and the other is a 3rd party reviewer's on an actual device.

So yes, I agree. We'll have to see what actual phones using Atom will be like. Note that Sunspider isn't the end-all of "single-threaded performance" either. The JIT for Javascript on x86 is far more mature -- having been developed for a decade now -- than it is for ARM.

Well, I tend to trust Intel's numbers when they're actual hard numbers rather than percentages or normalized figures - they can't exactly get away with making up figures.

And no question about the fact that SunSpider/BrowserMark aren't indicative of all too much... but I wouldn't claim that Intel's advantages on those benchmarks are due to a superior JIT/software advantage. Remember the performance figures from that Oak Trail tablet prototype running an early Android port from June of 2011? That was a prime example of the sort of software disadvantage that Intel had to overcome in order to get Android running well on x86. While a bit dated, here's an excellent example of the performance differences between x86 Java implementations across OSes (note that Linux had a slightly newer version, but both were using the latest available) - http://www.phoronix.com/scan.php?page=article&...

No, but you'd be surprised how much a bit of pick-and-choose can help. Most comprehensive reviews are pretty rigorous with how many times they repeat a test, how much warm-up they give a device and whether or not they pick the median, average, etc.

One could easily pick the best number, which can vary quite a bit especially for a JIT benchmark.
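To make the point concrete, here's a minimal sketch of the aggregation choices being discussed; the run times are made-up numbers, not real SunSpider results. Discarding warm-up runs and then reporting the median is far more robust than cherry-picking the best run, which for a JIT benchmark can be noticeably faster:

```python
import statistics

# Hypothetical SunSpider total times (ms) across repeated runs of one device.
runs_ms = [1532, 1488, 1471, 1610, 1455, 1499, 1702, 1463]

warmed = runs_ms[2:]                 # discard the first two runs as JIT warm-up
median = statistics.median(warmed)   # robust to the occasional outlier run
mean = statistics.fmean(warmed)      # pulled around by outliers like 1702
best = min(warmed)                   # the cherry-picked "best run"

print(f"median={median:.1f}ms  mean={mean:.1f}ms  best={best}ms")
```

With these illustrative numbers the best run looks ~2% faster than the median, and a real JIT benchmark can swing more than that between runs.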

I've also seen that comparison before. There was a rather thorough discussion of it and its relative lack of merits at RWT. I'd link, but it's being marked as spam :/

That 750mW is not for the entire SoC; it's for the CPU core with Hyper-Threading disabled, plus the L2 cache. Intel said enabling Hyper-Threading adds some further draw, somewhere between 10-20% as far as I can remember.

I'm pretty sure I'm not the only one who is disappointed that the dual-core Krait wasn't compared to Intel's upcoming Atom SoCs. Isn't this the generation that was supposed to bring ARM up to parity with Intel's chips? All in all this was a great article; it sets the standard for tech reviews.

How would they weaken them in shipping products? Do you have more comparisons between the MDP and shipping phones apart from the Rezound? I would look to different governor settings, different software builds, and different settings as an explanation before jumping into conspiracy territory. :D

Why does the MDP MSM8660 have significantly higher (double) performance than the presumably "same" MSM8660 chip in HTC Rezound? Isn't the MDP MSM8660 supposed to showcase the same MSM8660 that will go into the market?

I'd love for Brian or Anand to prove me wrong here with some kind of technical explanation, but until then I'll just assume it's Qualcomm being sneaky and trying to manipulate the public's opinion about their chips.

NDK is correct about governor being the reason, and I found that result interesting as well.

It's clear to me at least that the governor settings on the Rezound are fairly conservative, and that even with a workload that's supposed to completely load both cores (mashing the multi-threaded test button in Linpack Pro), you never really get better than single-threaded performance.

It's Sense. If you look at some of the phones with a less bloated version of Android (like the Xiaomi phone used in the Vellamo benchmark article that runs the same processor as the Rezound but with MIUI), they score pretty close to the MDP scores.

I am going to assume it's the latest official OS released by Samsung. AnandTech is not in the business of benchmarking every different ROM or OS on every phone. You would most probably get different results running ICS CyanogenMod. As far as I know, ICS is only official on the Nexus.

CyanogenMod is typically crippled by the fact that they are restricted to the open-source versions. Especially in early releases they don't have access to many of the customizations and binary code in release versions, let alone pre-release ones.

It's my experience that CyanogenMod doesn't even come close to release performance or power use until about a year later. This is because it takes the manufacturers about 6 months to post their kernel source, then another 6 months to port and modify it for the CyanogenMod system.

So comparing a CyanogenMod alpha mod to a developer preview isn't even relevant, as was said.

The original SGS2 results were incorrect in the S2 vs 4S post a while back. There was a pretty big flame war in the comments from people with stock phones getting around 2000ms in SunSpider, but AnandTech just ignored them.

Do you think that Win 8 tablets using ARM SoCs will likely have a SoC based on many of the components inside Krait? I know there will have to be certain changes for WOA, but will the CPU and possibly the GPU (now that it supports Direct3D 9_3) be used for these tablets?

And the same goes for ARM's A15 - will WOA likely be running on SoCs based on that too?

We can look at the perf of CedarTrail or Ivy Blossom or whatever, since Intel has said they are competing more with Qualcomm. And this is only at 1.5GHz. When the 2.5GHz chips come out with the new Adreno (from the former ATI GPU division), everyone will have to pack up and go home.

This is very obviously faster than something like the Tegra 3 in single- or dual-threaded performance. I wonder how many apps take advantage of more than two threads on Android or iOS? I'm guessing that for the foreseeable future faster duals will win out.

Can Brian Klug & Anand Lal Shimpi please clarify for me which version of the SGS2 is being used? It's a very pertinent question. Is it the international i9100 with the 1.2GHz Exynos chip, or the American Hercules T989/Skyrocket variants that have the lesser 1.5GHz Snapdragon chips in them?

Judging from the benchmarks, it really makes me think it's the Hercules/Skyrocket. That really needs to be clarified, since unfortunately not all SGS2s are created equal.

Hardware. But it's run on the JIT instead of as native code. According to CF-Bench, Java FP performance is around 1/3 of native. Neither actually uses NEON; both use the older VFP instructions.

The Tegra 3 is actually a big disappointment from a performance standpoint. It has 5 CPU cores, yet the GPU performance isn't much better than the Tegra 2's. The Adreno 225 is a much bigger upgrade, but I'm afraid it's still only a marginal one.

The A5 in the iPad 2/iPhone 4S will be over a year old by March. In that time, Nvidia's Tegra 2/3 has not dominated, and the MSM8960 is finally a true contender for the fastest SoC on the market. By the time this thing is out in volume, Apple will have the A6 ready, most likely with another 4-8x performance increase over the A5.

Unless the rumors are true and it's an A5X, not an A6, with just faster dual cores rather than quads on a newer architecture. I would not be surprised; it's like how the 3G to 3GS was an architecture change, then the 4 was just a faster chip on a similar architecture. The iPad 2 was an architecture change; the 3 might just be a faster version of the same thing, hopefully with improvements in the GPU. I'd be fine with that, as long as the GPU kept up with the new resolution.

I was just plotting out what little resolution-scaling info there is here and noticed something very odd. Both the iPhone 4S and Galaxy S2 actually score MUCH higher when the resolution is raised to 720p offscreen. I can see that in the 4S's case it could be explained by fps caps, but the S2 is definitely not hitting a cap at 34.6 fps @ 800x480, yet it hits 42.5 fps @ 1280x720. All other phones predictably step down in speed. Anyone else notice this?

Yes I did. It was actually the reason I was going to post. I was curious to know if the iPhone had VSync or not because it made no sense that it would get better performance at a higher resolution. Neither of the results make any sense to me since they are counter-intuitive.

If the "offscreen" tests force VSync off then that could explain it for the iPhone but not really for the SGSII unless some parts of the test go way past the 60FPS cap with VSync turned on.
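The VSync theory above is easy to illustrate with a toy calculation. The per-segment frame rates here are made up, not from GLBenchmark: if some segments of a test would run well above 60 fps, VSync clamps them onscreen, so the offscreen (uncapped) average can come out higher even though the workload is harder:

```python
# Made-up per-segment frame rates a benchmark could hit with no frame cap.
uncapped_fps = [95, 80, 28, 22, 30]

# Onscreen: VSync clamps each segment to the 60 fps refresh rate.
onscreen = sum(min(f, 60) for f in uncapped_fps) / len(uncapped_fps)

# Offscreen: no cap, so the light segments contribute their full rate.
offscreen = sum(uncapped_fps) / len(uncapped_fps)

print(f"onscreen avg: {onscreen:.1f} fps, offscreen avg: {offscreen:.1f} fps")
```

With these numbers the capped onscreen average is 40 fps versus 51 fps offscreen, which is the shape of the anomaly being discussed; it only works if parts of the test really do exceed 60 fps.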

I'm still carrying a first generation HTC Incredible (yep, one of the original ones!), been out of contract for a few months, was waiting to hear more about the 28nm SoC update. These look really, really good, seriously looking forward to them hitting the market now!

I wonder how many apps scale beyond two cores. For the time being I doubt it's many, and since you're still not doing any true multitasking, I think a faster dual core like this will trump a slower quad like the Tegra 3 most of the time.

Probably very much like desktop performance here: going from 1 to 2 cores is a huge upgrade, even for single-threaded apps, because it off-loads background chores to the second core and you get a full core dedicated to your running app. Going from there to 3/4/6/8 cores is really only helpful if your apps are truly multi-threaded or you're heavily multitasking.

Now, the increase in IPC is definitely going to help everything go faster.

How can you compare these video cards when the hardware is running different versions of software? Let me tell you that I've tested my Samsung Galaxy S2 running official Android 4.0.3 with VSync on, and on the Egypt test I got 60fps! Evidently this result is influenced by the frame limit imposed by Samsung's drivers. So these benchmarks you did are not showing us the truth. Sorry to raise this problem, but I dare you to do these tests again :)

Galaxy S2, official Android 4.0.3: 45.67 FPS. Funny, huh?! Redo the tests, please. Otherwise we will think this is a crappy commercial presentation for new products, not a professional benchmark.