The fourth-generation iPad’s new GPU is a quad-core monster

We already know that the biggest change in the not-so-different fourth-generation iPad is its brand-new A6X processor, but we've still been left to guess at many of the details. We know that it uses two of Apple's custom ARM CPU cores, and we know that Apple promises twice the CPU and GPU performance of the previous-generation iPad, but up until now we haven't known just how its quad-core GPU would deliver the promised performance gains.

That has changed now that Chipworks has taken the A6X apart for analysis. The new A6X, which like the A6 and newer A5 chips is manufactured on Samsung's 32nm process, is apparently 30 percent larger than the A6 despite using the same CPU cores. As in the A5X, most of that space is being taken up by the massive graphics processor required to drive a high-resolution panel like the iPad's Retina Display.

We had guessed that the new iPad could be running the same Imagination Technologies PowerVR SGX543MP4 as the A5X, simply running at a higher clock speed—this would theoretically be able to reach the doubled performance claims that Apple is making, but as we know, ramping up clock speed can be a power-inefficient way to raise performance. Chipworks' analysis shows the GPU cores to be much larger than the SGX543MP4's, though, and based on this information AnandTech deduces that the A6X is using a more-powerful PowerVR SGX554MP4.

Like the A5X before it, the A6X has a much higher GPU-to-CPU ratio than the A6.

Each of the four SGX554 cores features double the arithmetic logic units (ALUs) as the SGX543MP4's cores, meaning that each individual graphics core should be about twice as fast as an individual graphics core in the previous iPad. Hence, the claims of "up to" double the graphics performance. The rest of the A6X's extra size is taken up mostly by a larger 128-bit memory controller (compared to the 64-bit controller in the A6), which should keep those graphics cores fed with information and improve on the A6's already-impressive memory bandwidth scores in our benchmark suite.
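The bandwidth math behind the wider controller is straightforward: at the same memory clock, doubling the bus width doubles theoretical peak bandwidth. A rough sketch, with the data rate below an assumption for illustration (LPDDR2-class memory; Apple doesn't publish exact figures):

```python
# Theoretical peak memory bandwidth: bus width (bits) / 8 * effective data rate.
# The 1.066e9 transfers/sec figure is an illustrative assumption
# (LPDDR2-1066-class memory), not a published Apple spec.
def peak_bandwidth_gbps(bus_width_bits, transfers_per_second):
    """Bytes moved per second, expressed in GB/s."""
    return bus_width_bits / 8 * transfers_per_second / 1e9

a6_bw = peak_bandwidth_gbps(64, 1.066e9)    # A6's 64-bit controller
a6x_bw = peak_bandwidth_gbps(128, 1.066e9)  # A6X's 128-bit controller

# Same clock, twice the width, twice the theoretical bandwidth.
print(a6_bw, a6x_bw)
```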

This beefier GPU should allow more game developers to run their titles at the iPad's native 2048x1536 screen resolution, rather than running at a lower resolution and applying anti-aliasing as they sometimes would before—the end result should be subtly crisper and, of course, more detailed graphics. Look for a full performance analysis of the newest iPad in the coming days.

So how does a high-end SoC GPU compare with (a) current integrated GPUs of the sort that Intel is shipping on their i5s and so forth and (b) with discrete GPUs. Obviously the latter are far more powerful, but is something like this comparable to a GPU from say 2003 or 2004?

Have they finally surpassed console GPUs? I remember reading this was expected soon; if so, this beast might be the first.

Is that really possible? Could a GPU faster than, say, the Xbox 360's, really fit into an iPad with passive cooling? I realize that today's consoles aren't really state-of-the-art anymore, but it's hard to imagine an iPad competing with the Xbox 360 or PS3.

Unless the apps are written to specifically take advantage of them, adding cores in does little more than consume additional power.

Most iOS developers, like, darn-near all of them, have no idea how to "specifically take advantage" of multiple cores. They code with Objective-C, not assembly. They leave it up to Apple to make sure that the APIs and the implementation of OpenGL have all of the core-specific optimizations.

Here is the AnandTech review with benchmarks. Fill rate is marginally improved; triangle throughput and ALU/shader performance have been boosted. Looks like 50 percent faster in a game benchmark. Blows the Exynos 5 away, despite the fourth-gen iPad being introduced with barely any fanfare. Ridiculous. http://www.anandtech.com/show/6426/ipad ... r-the-hood

Unless the apps are written to specifically take advantage of them, adding cores in does little more than consume additional power.

You are in fact precisely wrong, in this instance. The tiling nature of the PowerVR GPU means that there is an almost linear speed-up as cores are added to the system. It works by subdividing the screen into small tiles, and a core processing a tile at a time. You can see that if there are more cores, those tiles get processed faster - there really is little to do in order to get the speedup. Basically it's a matter of registering the core with the OpenGL renderer.
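The mechanism described above can be sketched in a toy form. This is not the actual PowerVR driver, just an illustration of why tiling parallelizes so naturally: the screen is cut into fixed-size tiles, each tile is an independent unit of work, and more cores simply means more tiles in flight at once. Tile size and worker count below are arbitrary assumptions.

```python
# Toy sketch of tile-based rendering parallelism (not real driver code):
# subdivide the screen into tiles, then fan the tiles out across workers.
from concurrent.futures import ThreadPoolExecutor

WIDTH, HEIGHT, TILE = 2048, 1536, 32  # iPad Retina resolution; assumed tile size

def tiles(width, height, tile):
    """Yield the (x, y) origin of each tile covering the screen."""
    for y in range(0, height, tile):
        for x in range(0, width, tile):
            yield (x, y)

def shade_tile(origin):
    """Stand-in for rasterizing/shading one tile."""
    return origin

# Each tile is independent, so the work divides evenly across however many
# cores the pool is given -- the source of the near-linear scaling.
with ThreadPoolExecutor(max_workers=4) as pool:
    done = list(pool.map(shade_tile, tiles(WIDTH, HEIGHT, TILE)))

print(len(done))  # 64 x 48 = 3072 tiles
```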

Have they finally surpassed console GPUs? I remember reading this was expected soon; if so, this beast might be the first.

Is that really possible? Could a GPU faster than, say, the Xbox 360's, really fit into an iPad with passive cooling? I realize that today's consoles aren't really state-of-the-art anymore, but it's hard to imagine an iPad competing with the Xbox 360 or PS3.

At a glance they look on par, but the consoles are still better, especially with lighting effects. Judging this by looking at Dead Space HD for the iPad.

Have they finally surpassed console GPUs? I remember reading this was expected soon; if so, this beast might be the first.

Is that really possible? Could a GPU faster than, say, the Xbox 360's, really fit into an iPad with passive cooling? I realize that today's consoles aren't really state-of-the-art anymore, but it's hard to imagine an iPad competing with the Xbox 360 or PS3.

The Xenos GPU in the Xbox 360 gets a theoretical 240 GFLOPS at 500MHz. The SGX554MP4 in the A6X gets a theoretical 77 GFLOPS at 300MHz according to Anand's estimates. Clocked at 500MHz, the SGX554MP4 could reach roughly 128 GFLOPS, so still well below on a pure numbers basis.

Also something to keep in mind, the Xbox 360 only renders at 720p max. Games on an iPad can render at much higher resolutions, though I understand many games choose a lower resolution and scale up for performance reasons.
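The clock extrapolation above is simple linear scaling of Anand's estimate, and it's worth stressing these are theoretical peaks, not measured throughput:

```python
# Theoretical GPU throughput scales linearly with clock, so Anand's estimate
# at 300 MHz can be extrapolated (optimistically) to a hypothetical 500 MHz.
def gflops_at(base_gflops, base_mhz, target_mhz):
    return base_gflops * target_mhz / base_mhz

sgx554mp4_at_500 = gflops_at(77, 300, 500)  # A6X GPU, scaled up
xenos = 240                                 # Xbox 360 Xenos, theoretical peak

# ~128 GFLOPS -- still around half of Xenos on paper.
print(round(sgx554mp4_at_500))
```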

Most iOS developers, like, darn-near all of them, have no idea how to "specifically take advantage" of multiple cores. They code with Objective-C, not assembly. They leave it up to Apple to make sure that the APIs and the implementation of OpenGL have all of the core-specific optimizations.

There is no need for assembly to write code that scales well to two and more cores. Apple's APIs, like Grand Central, and their language extensions, like blocks, make it really easy to split work up into pieces that are then handled by the OS either in parallel if possible, or in sequence if not. The programming language you use has nothing to do with this.

And besides, as others have pointed out, there's even less (i.e., nothing) that needs to be done to take advantage of more GPU cores, since that is handled by the OpenGL drivers.
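The pattern described above can be sketched with Python's standard library as a stand-in for Grand Central Dispatch (the data and chunking below are invented for illustration): the programmer only splits the work into pieces, and the runtime decides how many cores actually execute them.

```python
# GCD-style work splitting, sketched with Python's stdlib as an analogue
# (this is not Apple's API): submit independent chunks and let the pool
# run them in parallel if possible, or in sequence if not.
from concurrent.futures import ThreadPoolExecutor

def work_item(chunk):
    """One 'block' of work, analogous to a block submitted to a GCD queue."""
    return sum(x * x for x in chunk)

data = list(range(1000))
chunks = [data[i:i + 250] for i in range(0, len(data), 250)]

# Like dispatch_apply, the pool fans the chunks out across available
# cores -- the calling code is the same regardless of core count.
with ThreadPoolExecutor() as pool:
    total = sum(pool.map(work_item, chunks))

print(total)  # identical result whether it ran on one core or four
```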

Have they finally surpassed console GPUs? I remember reading this was expected soon; if so, this beast might be the first.

Is that really possible? Could a GPU faster than, say, the Xbox 360's, really fit into an iPad with passive cooling? I realize that today's consoles aren't really state-of-the-art anymore, but it's hard to imagine an iPad competing with the Xbox 360 or PS3.

A console is built to run hard for hours at a time. Since it runs off your house AC, you essentially don't care how much power it burns. If you were to play a game on an iPad (or any phone) that used the CPU/GPU to the max, the battery would drain pretty quickly (likely <1 hour) and your hand/lap would get rather warm.

And games are better optimized for consoles. Developers know the exact spec and (ideally) build their game optimally for it with no concern for sandboxing, security, multitasking, etc. They are allowed to program directly to the hardware using assembly if needed, so they can more efficiently utilize the silicon in a game console than is possible through iOS (or Android, or even a PC [edit: technically it's possible on a PC but not practical due to hardware variety]).

To those concerned with battery life: Apple states that it will get the same 10 hours as previous iPads. There's little reason to doubt that; Apple's battery spec claims are usually very close to real-world results.

I think it's great that performance is so stellar, but such a significant delta across iPad lines means that not many games/applications will be published that actually push the iPad 4 to its limits.

It's weird that their "the new iPad" naming scheme is going to kick them in the ass this time around...

If they had announced an "iPad 4," everyone would have jumped, but this time around no one knows there's been an upgrade. No massive lineups everywhere; people will just get "the iPad," not knowing the difference from upgrade to upgrade.

The whole idea of Apple having a product with a quality gaming experience is so insane to me that I still don't believe it.

I'll continue to not believe it until they provide either a snap-on console-style peripheral, or even a standard peripheral API, that lets me play an FPS on the thing. I'm all in favor of Fruit Ninja Photorealistic Edition, but give me the ability to play MW3, Black Ops, or Battlefield, or whatever in bed, using a reasonable control scheme, and I'm SO in. (actually, this *is* a cool upgrade, but damn, I just want to play an FPS on that retina display with joysticks, buttons, and triggers - it'd be like a giant, and way cooler, PS Vita)

I love to roast Apple whenever I can, but it's really hard when their mobile graphics performance is clearly always a step ahead of everyone else. Sure Android is a more robust OS and WP8/WinRT are a league beyond that, but geez Louise with cheese on bees' knees, update your damn hardware already!

Tech moves fast, Tegra 3 is an old dog already yet it's still sold as if it's the greatest thing ever.

Could someone help me put these stats into perspective, from a gamer's point of view? Basically, how do they compare with an Xbox 360 or a PS3? I am guessing somewhere between the two?

The differences are such that it is hard to make a fair comparison. The console GPUs are quite ancient by industry standards, and due to the compatibility requirements of consoles, where writing to the bare metal is still possible, they have only changed in process node to get smaller, cheaper, and cooler. By comparison, a GPU in a more abstracted environment like iOS can not only receive a die shrink but also change quite a bit under the hood in ways developers don't need to know about. Their only concern is whether they now have enough pixel-pushing power to take fuller advantage of the Retina display.

Add to that the concerns of heat and battery life and you've got a vast gulf in the operating conditions for the two products. I'd venture to guess that given console-like conditions, the iPad 4 GPU would at least match the console GPUs for most tasks. The console GPUs have a big advantage in the raw throughput numbers, as pointed out above, but that gap might close a lot if the tablet GPU were operating without power considerations and far more substantial cooling.

After more than seven years since the Xbox 360 launched, it seems entirely reasonable to me that a high-end tablet should be able to match much of its capabilities. Tablet performance will leave current consoles behind at some point in the next couple of years.

I love to roast Apple whenever I can, but it's really hard when their mobile graphics performance is clearly always a step ahead of everyone else. Sure Android is a more robust OS and WP8/WinRT are a league beyond that, but geez Louise with cheese on bees' knees, update your damn hardware already!

Tech moves fast, Tegra 3 is an old dog already yet it's still sold as if it's the greatest thing ever.

In this case Apple is creating its own hardware, which is not accessible to other manufacturers. It's not a matter of updating their hardware. Android has regularly had at least one OEM cram the latest and greatest into a new phone the second they could. The latest and greatest everyone else is producing is just consistently behind what Apple's private hardware team is producing for its own devices.

I love to roast Apple whenever I can, but it's really hard when their mobile graphics performance is clearly always a step ahead of everyone else. Sure Android is a more robust OS and WP8/WinRT are a league beyond that, but geez Louise with cheese on bees' knees, update your damn hardware already!

Tech moves fast, Tegra 3 is an old dog already yet it's still sold as if it's the greatest thing ever.

There are considerations other than performance, such as price and availability. If you need three million CPUs and your vendor can only deliver half that number, it kills your business no matter how good the price or performance. You cannot lead the market with a product you cannot deliver.

Also, I suspect Nvidia got a lot of commitments from device makers who were caught by surprise when competitors produced stronger products than expected.

Greetings from an iPad 4! When I pulled it out of the box a little over four hours ago, it was at 84%, and now it is at 65%. I haven't used it literally non-stop but on the other hand I ran a lot of benchmarks on it. (If you're wondering, I found that CPU-bound benchmarks meet or exceed the advertised 2x speed up, with more real-worldish ones being more like 1.6 to 1.8x. Linpack however reports almost 5x speed up, which makes me wonder if it's buggy.) So battery seems completely satisfactory so far, possibly better than iPad 3 but too soon to say.

I think it's great that performance is so stellar, but such a significant delta across iPad lines means that not many games/applications will be published that actually push the iPad 4 to its limits.

On a per-pixel basis, the iPad 4 is now on par with the iPad 2, iPad mini, iPhone 4S, and new iPod touches and is significantly _slower_ than the iPhone 5.

Consequently, I think this iPad will be pushed to its limits quite often, especially as the iPhone 5 improves its market share and developer attention.
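The per-pixel reasoning can be made concrete with a back-of-the-envelope calculation. The pixel counts below are the real panel resolutions, but the relative GPU throughput figure (iPad 4 taken as roughly 2x the iPhone 5) is an illustrative assumption, not a measured number:

```python
# Back-of-the-envelope per-pixel comparison: divide each device's relative
# GPU throughput by its pixel count. The 2x throughput ratio is an assumed
# illustrative figure, not a benchmark result.
ipad4_pixels = 2048 * 1536    # Retina iPad: 3,145,728 pixels
iphone5_pixels = 1136 * 640   # iPhone 5:      727,040 pixels

ipad4_gpu, iphone5_gpu = 2.0, 1.0  # assumed relative GPU throughput

per_pixel_ipad4 = ipad4_gpu / ipad4_pixels
per_pixel_iphone5 = iphone5_gpu / iphone5_pixels

# The iPad 4 pushes ~4.3x the pixels with only ~2x the GPU, so per pixel
# it has roughly half the headroom of the iPhone 5 under these assumptions.
print(per_pixel_iphone5 / per_pixel_ipad4)
```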

So how does a high-end SoC GPU compare with (a) current integrated GPUs of the sort that Intel is shipping on their i5s and so forth and (b) with discrete GPUs. Obviously the latter are far more powerful, but is something like this comparable to a GPU from say 2003 or 2004?

You can't really compare the two, because these mobile GPUs are starting to support feature sets seen on modern desktop GPUs. If I remember correctly, the Nexus 4 sports the OpenGL ES 3.0 feature set (comparable to Direct3D 10), though IDK about the GPU in the iPad.

Have they finally surpassed console GPUs? I remember reading this was expected soon; if so, this beast might be the first.

Is that really possible? Could a GPU faster than, say, the Xbox 360's, really fit into an iPad with passive cooling? I realize that today's consoles aren't really state-of-the-art anymore, but it's hard to imagine an iPad competing with the Xbox 360 or PS3.

At a glance they look on par, but the consoles are still better, especially with lighting effects. Judging this by looking at Dead Space HD for the iPad.

That game was made for the A4 generation, so the graphics were programmed to run well for those devices. On an A6, the graphics will be rendered much faster, but they will still look the same since they haven't been updated to fully take advantage of the new hardware. You want to be comparing consoles to new games that were written with the A6 in mind.

On another note, does that mean Infinity Blade 2 will be able to run at the native resolution now?

The article speaks of mobile graphics... no other perspective is needed here. It is a monster.

WaltC wrote:

If high-resolution 3d gaming is your bag, and you are interested in a bit more than the ancient OpenGL 2.5 API benchmarks bring to the table--then for Goodness' sake *don't* buy a portable device like this. Be kind to yourself. If you buy it make sure it is for primary reasons *other* than 3d gaming and I doubt you'll be disappointed.

this

BullBearMS wrote:

The other devices had just managed to catch up with the performance Apple had been getting with the older cores in the iPad 3 and iPhone 5.

Now, it's once again a bit of a spanking.

It is a monster, and one I would have loved to have on Android. As an Android fan I would love to run emulation on better hardware. And Apple has nailed it with the new iPad...

Double that with the four cores in the Nexus 10 when outputting to a 1080p TV. That's awesome for a "portable console."

Andrew Cunningham / Andrew has a B.A. in Classics from Kenyon College and has over five years of experience in IT. His work has appeared on Charge Shot!!! and AnandTech, and he records a weekly book podcast called Overdue.