57 Comments

Any chance you can divide the average power used per test into the scores to give us a performance-per-watt ratio? That would be interesting to see. I think it was kind of obvious the HD 4000 would win, but what if the lower-end graphics had a higher TDP?
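The score-per-watt metric this comment asks for is simple to compute once per-test average power is logged; here's a minimal sketch, with made-up device names and numbers purely for illustration:

```python
# Performance-per-watt: benchmark score divided by average power draw.
# All device names and figures below are hypothetical examples.
results = {
    "Tablet A": {"score": 5000, "avg_power_w": 10.0},
    "Tablet B": {"score": 2000, "avg_power_w": 3.0},
}

perf_per_watt = {name: r["score"] / r["avg_power_w"] for name, r in results.items()}

# Rank devices by efficiency rather than raw score.
for name, ratio in sorted(perf_per_watt.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {ratio:.1f} points/W")
```

Note that a chip with a lower raw score but a much lower TDP could come out on top of this ranking, which is exactly the scenario the comment raises.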

Well, frankly the user interested in a Surface Pro right now isn't all that concerned about power/performance ratio, they are interested in running a full-blown OS on a high-quality piece of hardware.

Not that what you are asking for isn't important or interesting, it certainly is and would be. However, I will be more interested in those numbers when the Surface comes out using Haswell; I think Haswell will be a game-changer for it. The only reason MS released it now (using Ivy Bridge) was because they wanted to get it on the market and in the minds of potential buyers as soon as possible. In my opinion, of course.

Clover Trail's showing is frankly pathetic, especially when it's being beaten by a $200 Nexus 7. Then again, I didn't expect much since they stuck with an SGX545 instead of anything remotely modern for the GPU. If I didn't know it was simply a stopgap for Bay Trail, I'd have completely dismissed them from the running altogether.

It's both. In raw shader performance at equal clocks, the 545 has 1/8th the punch of an SGX554MP4. Intel has attempted to make up a bit of lost ground by cranking up the clockspeed, but they really should have used an SGX544MP2 at a bare minimum. Then on top of that, as you implied, the drivers are complete and total crap. Intel drivers have a history of being bad, but they've really taken it to the extreme here.

Except they didn't even design the graphics chip in this "stopgap platform". The company that did (Imagination Technologies) had better designs when Intel made this chip, but Intel went ahead with the very outdated SGX545. Again, ImagTech had better solutions on the table, and other companies were using them.

Intel just didn't care enough; they saved on transistors and made a bigger profit. If you bought one, sucks to be you. Oh wait, isn't that Atom in a nutshell since it was introduced? Every generation someone tells me that the next Atom will be worth a damn... I'm still waiting.

What's baffling is how strongly Intel is itching to penetrate the mobile market, yet it refuses (or fails) to put more resources into the Atom line and prefers to use outdated technologies and architectures. Well, you can't have it both ways, Intel: either invest or accept your irrelevance.

Easily explainable... Intel only recently got serious about penetrating the mobile market. Atom was originally on a very slow five-year product cycle, and it's just over five years now!

However, the upcoming Bay Trail marks the switch to a two-year tick-tock product cycle... along with the first major architectural update, as they're finally switching from in-order to out-of-order execution, and will offer quad cores and support for up to 8GB of RAM with a full 64-bit implementation.

They still have a lot to prove, but it does look like they're finally getting serious about this, especially since they're pushing technology first introduced with Ivy Bridge down to Atom with this release.

In the meantime, Intel is upping graphics performance a bit with Clover Trail+, which will use a dual-core PowerVR SGX544 GPU at 533MHz... that should provide considerably better graphics performance until Bay Trail starts coming out at the end of this year and on through early next year.

Agreed, but OEMs will never use it much; they prefer to keep fooling consumers with lots of marketing jingles into buying the best-known brand in blue. Temash would be a far better fit than Atom: it has low power usage, CPU power, and GPU power, and would probably be cheaper in total...

Cheaper, no... Intel SoCs are barely more costly than ARM SoCs and that's about as low as they get.

Temash, though, should offer more performance, especially graphically... However, AMD is still a generation away from really competing on mobile-device-class power efficiency.

For example, aside from the S3 suspend state, there's no support for features like always-connected standby, and idle power doesn't go as low as ARM and Atom SoCs can.

The dual-core Temash is the only one that can go fanless, but it will still consume more power than the average mobile tablet. The quad core will consume significantly more and will require a fan, but it will also offer a higher performance range, especially when docked, as it can then enter a turbo mode with a 14W max TDP for more Ultrabook-range performance.

Thanks for posting that oaso2000. I also went to the 3DMark website and looked at one of the results for a C-60 system, as the scores should be similar for the Z-60 (AMD's tablet APU). These are the results:

So, assuming those numbers are accurate, it looks like the Z-60 lags behind the newest ARM SoCs in terms of CPU performance, though not by a whole lot. GPU-wise, it's still ahead of everything in the charts, bar the HD 4000 in the Surface Pro.

This confirms my suspicion that even the Z-60 (which should be greatly improved upon by Temash) is a much, much superior tablet solution to Clover Trail. God damn OEMs going for Intel silicon when the AMD solution is better *and* almost certainly cheaper.

If battery tech "kept up" then it'd be illegal to carry your devices on airplanes. And probably most places in public.

Unless an energy storage device has some rate limit on output, it is equivalent to a bomb. Fuel cells can be pretty easily hacked to explode. A superconducting storage loop could dump all its energy at once like a lightning bolt. The effects of an energy-storage flywheel failure are truly spectacular.

Is it possible to put up a comparison with a current-generation desktop PC platform? Maybe an i7-3770 or even an AMD APU. I know it's an apples-to-watermelons kind of comparison, but it would be fascinating to see where today's mobile offerings stand compared to old-school desktops.

Precisely what I was curious about too! With all the focus on the march of progress of these SoCs and the availability of a cross-platform benchmark, I wanted to know what the delta was... it's still a big one. Thanks for the post!

"In an average frame, 530,000 vertices are processed leading to 180,000 triangles rasterized either to the shadow map or to the screen."

I don't think this is correct. Each additional vertex in a strip (strips then make up objects) implies a whole new triangle, so an example object with 530K vertices would have 530K - 2 triangles. I don't think a simple divide-by-three would yield the correct number of triangles rasterized for a scene like this. It would likely have far more triangles; probably very close to the total number of transformed vertices.

Look up "Triangle Strip" or "Triangle Fan" on Wikipedia for more info. Corrections welcomed.

Triangle lists are used more often than strips, and even if a strip is used it will be a bunch of strips, not one giant strip. That said, the number is still weird, as it implies there's no vertex reuse.
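For readers following the vertex math in this thread, the two topologies give very different triangle counts for the same vertex count. A quick sketch (the 530K figure is from the quoted article; the helper functions are illustrative):

```python
# Triangles implied by a vertex count under two common topologies.

def triangles_from_list(n_vertices):
    # Triangle list: every 3 vertices form one independent triangle.
    return n_vertices // 3

def triangles_from_strip(n_vertices):
    # Single strip: the first 2 vertices prime the strip, then each
    # additional vertex emits one new triangle.
    return max(n_vertices - 2, 0)

verts = 530_000
print(triangles_from_list(verts))   # 176666 -- close to the article's 180K
print(triangles_from_strip(verts))  # 529998 -- one giant strip, no reuse
```

The divide-by-three result lands near the article's ~180K figure, which is consistent with triangle lists (or indexed triangles with vertex reuse) rather than one enormous strip.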

Good idea. When you said "old school desktops" that made me think of actual old desktops. I'd love to see what old desktop configuration comes closest to some of these devices. Athlon XP with a Radeon 9700? Who knows!

I second that! My phone has a Tegra 2 in it, and I guesstimated it was about the equivalent of a GeForce FX 5200 (if that)... but I don't have a PC with a configuration that old to test it. Of course, Tegra 2 is long in the tooth now, but I think it'd be really awesome to be able to relate a current SoC to an equivalent old-school PC, just as a frame of reference.

Out-of-order execution is not likely to help Atom with physics and other floating-point code. These workloads are all about memory bandwidth and the ability to simultaneously issue FP load/store and FP compute instructions.

Remember that Intel spent most of the 1990s trailing the floating-point performance of the PowerPC 604/G3/G4, even though these microarchitectures had much smaller reorder buffers than the P6-based Pentiums.

If Atom is to improve its floating-point, vector, and graphics performance, it needs a major overhaul of its load/store units, cache hierarchy, and memory controllers. There's just not a huge demand for better integer performance on mobile devices.
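The bandwidth argument above can be made concrete with a back-of-the-envelope roofline check: compare a kernel's arithmetic intensity (FLOPs per byte moved) to the machine balance (peak FLOP/s divided by peak memory bandwidth). All peak figures below are invented for illustration, not real Atom specs:

```python
# Roofline-style check: is a kernel compute-bound or memory-bound?

def bottleneck(flops_per_byte, peak_gflops, peak_gbps):
    # Machine balance: FLOPs the core can retire per byte the memory
    # system can deliver. Kernels below this ratio are bandwidth-bound,
    # so a wider reorder buffer alone won't speed them up.
    balance = peak_gflops / peak_gbps
    return "compute-bound" if flops_per_byte > balance else "memory-bound"

# A streaming physics kernel often does very little math per byte moved,
# e.g. ~0.25 FLOPs/byte (hypothetical figure).
print(bottleneck(0.25, peak_gflops=8.0, peak_gbps=6.4))  # memory-bound
```

Under these assumed numbers, the kernel sits well below the machine balance of 1.25 FLOPs/byte, so faster memory and load/store paths, not deeper out-of-order execution, are what would move the needle.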

"All of these tests are run at the default 720p settings. The key comparisons to focus on are Surface Pro (Intel HD 4000), VivoTab Smart (Intel Clover Trail/SGX 545) and the HTC One (Snapdragon 600/Adreno 330). I've borrowed the test descriptions from this morning's article."Reply

I thought Intel disclosed what the next-gen 22nm Atom's graphics engine is "comparable to". I wonder where that "comparable to" lands on the charts. The 22nm Atom will not have HD 4000 graphics, but it may be possible to speculate somewhat insightfully about whether it has the right stuff. There is a place for properly labeled, insightful speculation about the upcoming battle. I liked the articles where you used lots of wires to measure voltage drop and calculated power consumption by functional unit (e.g. the graphics engine) within an SoC. The appeal of applying that technique to the "comparable to" SoC is that it should provide an upper bound on the next-gen Atom's GPU power consumption. The technique would overstate the results because next-gen Atoms will be built on a different (more power-efficient) fabrication process. However, it may hint at how much (or how little) Intel needs out of its low-power fab tech to be competitive. If the din from the crowd at the coliseum clamoring for more red meat gets too loud, I might throw that into the ring.

Second, I think it is a boo-boo to color the HPC item red in the first chart.

I think it's relevant to compare a Porsche to a Toyota. That gives some insight, and the test here gives some surprising results.

But the presentation of the test and its context is wrong, because it's presented as if Ivy Bridge is a competitor to ARM. And that's just way off; in that perspective, it would be the same as comparing my Samsung Series 9 X3C to my daughter's HP 11.6" AMD E-450 APU. Those are just different markets and needs.

When you read this and other articles, you get the impression it's interesting whether Broadwell or its successor can reach ARM power levels. How on earth did that become interesting? It's exactly like comparing a cheap low-res TN screen to an excellent PLS one, or cheap whiskey to the most expensive. The comparison has its strong sides and also its obvious limitations. I think it's good journalism to state that explicitly.

Intel has huge capex, and that means comparing Ivy Bridge to ARM is a dream for them. Even Atom is a stretch, but that's the same market as big high-end ARM chips, so that's fine.

Who cares about Surface Pro? Why are you comparing $1000-$2000 tablets with $200-$500 tablets? You know that's not a fair comparison.

A much fairer comparison is with the Surface RT; then we can actually see the difference in scores for the cross-API/cross-platform benchmark while using the same hardware (Tegra 3). So do one with the Transformer Prime vs. the Surface RT. You might even include the Nexus 7 (although that may have slightly lower clocks for both CPU and GPU).

This comparison was done between new benchmark suites to see whether they give comparable results across platforms and APIs. It's just numbers. There is no need to get upset about it.

Anand stated in the article that the new 3DMark has not been released for Windows RT yet. This article is clearly a piece of a much larger article that will thoroughly analyze 3DMark results across all supported platforms. We just have to wait for 3DMark to be released for all the planned platforms. Give it time.

Wow, a clunky, 50-pound tablet running a hostile OS that MS is trying to shove down our throats can run 3DMark better than the light and elegant tablets! I must purchase this immediately! The Surface is running a faster processor, but trade this off against the above caveats that remove it from the real tablet category (light, portable, etc.). I think the Surface is more in the laptop class because of its weight. Compare it there on price and features for a real comparison. This is a stupid comparison. MS much?