Deducing details about Apple’s A6X processor

Apple promises double the CPU and graphics performance over the A5X, but how?

As usual, Apple didn't share many specifics about its new A6 "Extreme" (A6X) processor, which powers the fourth-generation iPad. However, by looking at Apple's claims that it's "twice as fast" as the A5X-powered third-gen iPad, it may be possible to deduce what's inside.

According to Apple, the A6X processor "delivers up to twice the CPU and graphics performance of the A5X chip." In other words, the dual-core CPU can process data twice as fast as the dual-core 1GHz, Cortex A9-based A5X. It can also churn through OpenGL triangles and textures at twice the rate of the PowerVR SGX543MP4 in the A5X. So how did Apple do that?

Looking at CPU power for the moment, we already know that Apple designed a custom ARM-based core for the A6. Running at 1.2GHz in the iPhone 5, two A6 cores run twice as fast as two 800MHz A5 cores in an iPhone 4S.

However, the A5X in the third-gen iPad was already clocked at 1GHz, so Apple must be clocking the A6X higher still. Architectural improvements account for some of the speed increase: the A6 only had to run at 150 percent of the A5's clock (1.2GHz versus 800MHz) to double the iPhone 4S's compute performance. Applying the same ratio to the A5X's 1GHz, we believe Apple is clocking the A6X's CPU cores at 1.5GHz.
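
To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The 4/3 per-clock (IPC) uplift is our own assumption, back-solved from the iPhone numbers above; none of these figures are confirmed by Apple.

```python
# Hedged sketch (not Apple's math): estimate the clock needed to hit a claimed
# speedup, assuming performance scales linearly with clock times a fixed
# per-clock (IPC) uplift from the new core design.

def required_clock(base_clock_mhz, claimed_speedup, ipc_uplift):
    """Clock the new core needs if perf = clock * IPC and IPC grew by ipc_uplift."""
    return base_clock_mhz * claimed_speedup / ipc_uplift

# iPhone 5: 2x the 800MHz A5, with the A6 assumed to do 4/3 the work per clock
print(required_clock(800, 2.0, 4 / 3))   # -> 1200.0, i.e. the iPhone 5's 1.2GHz

# iPad 4: 2x the 1GHz A5X, same assumed IPC uplift
print(required_clock(1000, 2.0, 4 / 3))  # -> 1500.0, i.e. our 1.5GHz estimate
```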

Examining the GPU is slightly different. Apple already jammed four SGX543 GPU cores into the A5X in order to achieve performance parity with the two SGX543 GPU cores in the A5 chip that powers the iPad 2. The extra GPUs were needed just to keep up with the 2048×1536 pixel Retina display, so they did not offer any graphics performance improvement. However, Apple says that the A6X pumps pixels twice as fast.

Apple could be using a newer-generation PowerVR core, but that appears to be very unlikely. Only one announced processor is known to use a PowerVR Series6 design, and it won't even begin sampling until 2013. Given that Apple just released the A6 a month ago, we're confident Apple is still using the same SGX543 core.

Here's what we know about the PowerVR SGX543 core's performance: it scales almost linearly with the number of cores and clock speed. So to double the performance, Apple would either have to double the number of cores to eight or double the clock speed of each of the four cores. Apple says that the A6X has "quad-core graphics"—the same as the A5X—so Apple clearly boosted the clock speed. Since the GPUs in the A5X were clocked at 250MHz, we believe that Apple has clocked the SGX543 cores at 500MHz.
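
As a quick sketch of that cores-versus-clock tradeoff (assuming, as we do above, that SGX543 throughput scales roughly linearly with both), consider:

```python
# Minimal sketch of the GPU reasoning above: SGX543 throughput is assumed to
# scale linearly with core count and clock. The clocks are our estimates, not
# confirmed specs.

def gpu_perf(cores, clock_mhz):
    return cores * clock_mhz  # relative throughput, arbitrary units

a5x = gpu_perf(4, 250)  # four SGX543 cores at 250MHz
a6x = gpu_perf(4, 500)  # same "quad-core graphics", clock doubled

print(a6x / a5x)               # -> 2.0, the doubling Apple claims
print(gpu_perf(8, 250) / a5x)  # -> 2.0, the eight-core route Apple didn't take
```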

Given the significant boosts in clock frequency—150 percent for the CPU cores, and 200 percent for the GPU cores—you may be wondering how Apple can still promise a 10-hour battery life. After all, the iPad still has the exact same 42.5Whr battery, but the processor is twice as powerful. The power savings come from the same place as we saw in the iPhone—Apple moved from a 45nm process to a more power-efficient 32nm process. Instead of keeping performance the same and decreasing the iPad's thickness and weight, Apple instead chose to double its performance without sacrificing all-day battery life.

Of course, we won't know how accurate our educated guesses are until one of the new iPads can be thoroughly benched, and the A6X's architecture is analyzed by the likes of Chipworks. However, we feel confident suggesting Apple has mated two A6 ARM cores running at 1.5GHz with four PowerVR SGX543 cores running at 500MHz. Given the performance results we saw with the iPhone 5, we expect the updated iPad will remain at the top of the tablet performance heap for some time.

87 Reader Comments

We bought my dad an iPad 3 just last week. The rumors of an upgraded full size iPad hadn't really stirred up yet. He is returning it for an iPad 4. 2x cpu and gpu performance + a better FaceTime camera is nothing to sneeze at.

What does your dad do with all that CPU and GPU performance?

Yea, indeed, what do any of us do with all that CPU and GPU performance? Nothing worthwhile!

We do not deserve these riches, which have been so mercifully showered upon us. Let us all, who dare to make this purchase, of which we are not worthy, pause briefly three times a day to contemplate our own unworthiness and, nightly, before retiring unto our chambers, let us discipline ourselves with the knotted cords of penance, thus atoning for our presumption.

I suspect that price isn't much of a factor for moving Retina displays into the MacBook Air.

Retina Displays HAVE been moved into the MacBook Air. It's called MacBook Pro with Retina Display.

Just as the iPad 3 had to get bulkier and heavier to accommodate a beefier battery to power the Retina display, the MacBook Air had to do the same.

The 13" MBPr is basically a 13" MBA. Swap in the Retina display, add a battery big enough to still reach the magical 7 hours ... boom. And now that the case has gotten bigger, you have thermal capacity to ditch the ULV processor, so throw in a regular mobile part. It will help to justify the price tag.

So you are right: price isn't the reason, battery life is. If Apple wants to keep the MBA as slim as it is, Retina displays need to become more frugal.

How do you figure? Remember, the A6X is likely on the more power-efficient 32nm process, rather than the 45nm process of the A5X. A6's clock speed in the iPhone 5 is up over the iPhone 4S without making that device too hot.

Not to mention that it is not the bottom of an iPad 3 that gets warm, it is the left-hand side (when looking at the screen). As far as I can tell, the electronics all run down the right-hand side, which implies (as one would expect) that the primary heat generator is the screen LEDs, which presumably run down the left-hand side and feed into a diffuser.

Yea, indeed, what do any of us do with all that CPU and GPU performance? Nothing worthwhile!

I was merely commenting on the conclusion that the iPad 3 has been made obsolete. As if twice the CPU/GPU performance was opening up a whole new usage scenario.

It's still a dumb question. Since when is getting the best deal for your dollar bad? Since when is technological progress bad? Should we all still be using Pentium IIs with 800x600 16-bit monitors? Or is it only bad when it's someone who chooses an Apple product, or a touch tablet that isn't for "real work"? Is that the problem here?

You said that the iPad 4 had made the iPad 3 obsolete ("Trollery aside… yes.", oddly enough, left out for some reason from the quote), which is simply asinine. I'd imagine that his point is that the iPad 3 is NOT obsolete in any way, despite your arbitrary assertion that it is so. Not the highest end != obsolete. I believe that was his point.

iOS developers tend to try to be fairly good at ensuring their programs work well on all the supported generations of the class (iPad vs iPhone/iPod) of iOS devices Apple supports... it's generally not too hard in iOS land, since you can span many years with just a tiny number of devices. Developers aren't going to start ignoring the 3rd gen iPad anytime soon.

You forgot to discuss one of the other major differences between the A5 and the A5X: the number of memory channels. I'd assume the A6X will continue to have four 32-bit channels just like the A5X, and will probably use the same slightly faster RAM that they use on the A6 in the iPhone 5.

Yes, LPDDR3 @ 1066MHz is likely for the RAM. Also, do not forget that the A6 has 1MB of level 2 cache, whereas the normal ARM design has only 256K of L2 cache. This quadrupled cache makes a difference without any need for clock-speed boosting. I think the frequency estimates are close. Do not forget the very aggressive scaling done by the chipset to give a quick instant boost and then throttle back to a very low MHz at idle. These tricks were all used by Intel in their low-power chips. I still think the Apple A6 is not truly an ARM A15 design. It might use some of the tech there but not all of it. Samsung's Exynos 5XXX series will be A15.

"we expect the updated iPad will remain at the top of the tablet performance heap for some time."

It won't. Google's Nexus 10 tablet should surpass it in GPU performance (72 Gflops for the Mali T604 vs 64 Gflops for the overclocked SGX543MP4), but especially in CPU performance. I've seen Samsung's Exynos 5 Dual score under 700ms in SunSpider, with double the score in the V8 and Octane tests (compared to the A6, which has just 20% lower CPU performance than the A6X).

Yea, indeed, what do any of us do with all that CPU and GPU performance? Nothing worthwhile!

Not true! I can tell you that upgrading to the iPhone 5 from a 4S means dramatically faster app launching for me. Also, anything related to photography is much faster. Taking photos is quicker, manipulating them in photo treatment apps is dramatically faster (Hipstamatic is blazingly fast).

And the improved WiFi means movies and TV shows download a lot quicker than they did with my 4S.

I would guess that anyone would see the same type of benefits from the new iPad (4).

Power consumption in the iPad (retina, or otherwise) is dominated by the display backlight.

Actually it is much worse with the Retina display: the increased gamut comes at a price. They use more intrusive color filters, which in turn need brighter backlights to compensate. I think the iPad 3 has double the LED power usage of the iPad 2; that, more than anything, required a nearly 2x larger battery. The iPad 3 is practically a giant battery with an LCD mounted to it.

Possible, but do you think they would have used the PowerVR GPU in the iPhone 5 if they had their own design ready for launch barely a month later?

I wonder about a future battle brewing over that. Apple owns about 10% of Imagination. I wonder if, now that they are designing their own SoCs, they might want more control, but then again Intel owns about 15%.

Designing a GPU is harder than a CPU, and you'd need to negotiate with Nvidia, Imagination, AMD, Intel, possibly Intergraph, S3, Rambus, and others to get patent licensing. It's not as easy as buying an ARM license from ARM.

Whereas using a customised version of an off-the-shelf GPU core/tile avoids/reduces the patent licensing, that's assuming Imagination provides the patent coverage.

All true, but Apple is one of very few ARM architecture licensees, so at least with respect to CPU cores they no longer count as using off-the-shelf core designs.

Well, how much extra efficiency do you get when transitioning from a 45nm to a 32nm process? That might give a bit more insight into the heat and power efficiencies.

P = C * V^2 * F, where P = power, C = capacitance, V = voltage, and F = frequency.

The capacitance is mostly from the gates of transistors. For a linear reduction in scale, area is scaled by the square. Keeping the same W/L (width divided by length) of the transistor gates, the power will scale by (32/45)^2.

In most cases, the company with the finer geometry process is the winner, at least for logic chips. Architecture matters too, but process is generally king.

Ars really needs a FAQ since this is a common question.

You can derive the power equation quite easily.
P = V * I, with I = current.
I = dQ/dt, that is, current equals the rate of charge flow.
Q = C * V: charge equals capacitance times voltage.
So I = C * dV/dt, but V is a constant (supply voltage), hence I = C * V * F, where the time differential is replaced by multiplying by frequency.
Since P = V * I, we end up with P = V * C * V * F = C * V^2 * F.

So as a first order estimate:
1) Power scales linearly with frequency.
2) Power scales with a square-law effect to core voltage.
3) Power scales linearly with gate capacitance, but gate capacitance scales with a square law to the fab process dimension. That is, scale the process by 70.7% and cut the power in half.

If you look at the Intel 22nm Ivy Bridge cores, they are approaching the TDP of the 40nm Atom cores. This is why Intel could come out of nowhere with chips as low-power as ARM's if it wants to, since Intel owns the 22nm market at the moment.

Well, there are a lot of things to debate here. First of all, not all transistors are created equal, so the capacitance reductions that you mention are very rough estimations. For example, you need big transistors for high fan-out gates, for the analog parts, for the transistors in your critical path, etc.

In this formula also you seem to forget that it implies that ALL transistors are switching at each cycle. This is (thank God) not true and therefore usually when we refer to this formula there is a factor before...

... that is we would write the formula as... P = a * C * (V^2) * f

where a is the activity factor: the average fraction of the capacitance that switches in every cycle.

Now this is an area where architecture plays a major role and process is not the king.
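
To put the thread's formula to work, here is a normalized sketch of P = a * C * V^2 * f across a 45nm-to-32nm shrink. The activity factor, voltage, and clocks below are illustrative assumptions, not measured A5X/A6X values.

```python
# First-order dynamic power: P = a * C * V^2 * f, with 'a' the activity factor
# (fraction of the capacitance that switches each cycle). Capacitance is assumed
# to scale with gate area, i.e. the square of the linear process dimension.

def dynamic_power(a, cap, volts, freq_mhz):
    return a * cap * volts**2 * freq_mhz

cap_scale = (32 / 45) ** 2                       # ~0.506 going from 45nm to 32nm
p_45 = dynamic_power(0.2, 1.0, 1.0, 1000)        # normalized 45nm part at 1GHz
p_32 = dynamic_power(0.2, cap_scale, 1.0, 1500)  # 32nm shrink, clocked 1.5x higher

print(round(cap_scale, 3))    # 0.506: roughly half the switching power at the same clock
print(round(p_32 / p_45, 3))  # 0.759: even at 1.5x the clock, still under the old budget
```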

"we expect the updated iPad will remain at the top of the tablet performance heap for some time."

It won't. Google's Nexus 10 tablet should surpass it in GPU performance (72 Gflops for the Mali T604 vs 64 Gflops for the overclocked SGX543MP4), but especially in CPU performance. I've seen Samsung's Exynos 5 Dual score under 700ms in SunSpider, with double the score in the V8 and Octane tests (compared to the A6, which has just 20% lower CPU performance than the A6X).

Ha...Ha...you so funny!

Eh, it's a pattern with Lucian: he keeps popping up everywhere posting about how the elusive next version of some nVidia or Samsung CPU/GPU combo will beat the crap out of what Apple is shipping, and how Apple's CPUs are probably not that great in the first place, seeing as Apple is always hyping everything (no matter how often their claims are proven right by benchmarks, it seems).

Designing a GPU is harder than a CPU, and you'd need to negotiate with Nvidia, Imagination, AMD, Intel, possibly Intergraph, S3, Rambus, and others to get patent licensing. It's not as easy as buying an ARM license from ARM.

Whereas using a customised version of an off-the-shelf GPU core/tile avoids/reduces the patent licensing, that's assuming Imagination provides the patent coverage.

Plus Apple holds some 8-9% of Imagination Technologies, afaik and could probably either double/triple that or buy Imagination outright, like they did with the CPU design outfits. Likely doesn’t make as much sense at this point, anyway. And who knows, maybe ATi will be up for grabs soon, too. It’s not like they couldn’t afford it.

I've been downvoted a lot and people have *replied* to the topic with phrases like yours... yet nobody has actually provided an explanation of what you'd need all that processing power for.

I'm not saying there shouldn't be progress. I'm quite happy that Apple has been trying to double performance with every model, which is the reason they suddenly updated the iPad this fall: the iPad 3 couldn't deliver a true performance doubling because of the Retina display.

I'm just saying there is no reason for consumers to constantly be onto the latest greatest hardware if there is no use case. As an example: I'm still on my GTX 460 from mid 2010. Not because I hate progress or only play retro games, but because this GPU can play all recent games with nearly maxed settings at 2560x1440. I'd be loving a new GPU, I've been praying to multiple deities that MS and/or Sony finally give us the next-gen consoles we badly need.

But as long as games need to run on an X360, I look at the GTX 680 and ask: what would I use all that gaming power for?

He'll be able to use it for an entire 12 months or so longer before it falls off the list of devices supported by the latest apps and iOS.

Remember, the 3rd gen iPad is only barely fast enough to drive such a massive resolution. There is going to come a day when it's not fast enough anymore. Probably pretty soon, too.

Given that the iPad 3, 2, and mini all have equivalent GPU prowess, and that they are all more powerful than just about any Android tablet excepting Medfield and a quad core Snapdragon, they will likely have longer useful lifespans than the Nook HD, Galaxy Nexus 7, Iconia 110, and even the Surface RT.

"we expect the updated iPad will remain at the top of the tablet performance heap for some time."

It won't. Google's Nexus 10 tablet should surpass it in GPU performance (72 Gflops for the Mali T604 vs 64 Gflops for the overclocked SGX543MP4), but especially in CPU performance. I've seen Samsung's Exynos 5 Dual score under 700ms in SunSpider, with double the score in the V8 and Octane tests (compared to the A6, which has just 20% lower CPU performance than the A6X).

that's all great and dandy, but most developers are too busy milking iTunes to give a crap about the android arena with its endless variety of differently specced systems and versions of OS's.

android is worthless for schools/corporations, the openness and myriad of options/vendor lock-ins doesn't help anything in those arenas.

anywho, the gaming's on the ipad, that's all i care about. you can run your suite of benchmark apps all you want all day long if it gives you ewood though

i have no love for apple, or hate for google though, i just go where the games are myself.

Eh, it's a pattern with Lucian: he keeps popping up everywhere posting about how the elusive next version of some nVidia or Samsung CPU/GPU combo will beat the crap out of what Apple is shipping, and how Apple's CPUs are probably not that great in the first place, seeing as Apple is always hyping everything (no matter how often their claims are proven right by benchmarks, it seems).

Seriously! So many tech sites have the guy on them saying things like this all the time. Sad stuff.

Anyway, I wonder if this is a one time thing to shift the iPad's refresh time to close to the holidays (joining the iPods and iPhone). Or maybe there'll also be an updated iPad in March as well. That'd be interesting.

Well, how much extra efficiency do you get when transitioning from a 45nm to a 32nm process? That might give a bit more insight into the heat and power efficiencies.

Going from 45nm to 32nm is only about 71 percent the size in a linear sense... but CPUs are (mostly) two-dimensional, so it's about half the size for the same transistor count (32^2 / 45^2 = 0.5057). Half size = half the power/heat output. That's my basic assumption.

That hasn't been true for at least a decade. And even if the assumptions that held true a decade ago still worked, it wouldn't be accurate, because that would be under the assumption that all the gates were constantly switching. Smaller transistors increase leakage current (assuming everything else is held constant). For most applications, about half of your power draw is from transistor leakage, which gets worse as the transistors shrink, again assuming everything else remains the same.

That's why since around the 90nm node, there have been technological changes in nearly every transistor generation that enable the shrink to be effective (for foundries, that's been e-SiGe for transistor strain at 40nm, which improves performance without increasing leakage, and HKMG at 28/32nm, which improves gate leakage for a given performance level).

The next step is Tri-Gate/FinFETs or FDSOI, which allows for improved control of the transistor channel, which in turn decreases transistor source-drain leakage current for a given performance level. But no foundry plans on implementing FinFETs until after the 20nm node, which may reduce costs slightly and improve performance for a few applications that are not power limited. ST has implemented FDSOI, which can provide similar leakage benefits to FinFETs at lower performance levels, for their 28nm and (future) 20nm nodes. But their digital IC division has been stuck with a lack of resources and has been seriously floundering for quite some time now.

In any case, I doubt you'll see any serious power reductions from node shrinks (outside of Intel) until FinFETs come into play, which won't happen for at least 2 years.

Well, there are a lot of things to debate here. First of all, not all transistors are created equal, so the capacitance reductions that you mention are very rough estimations. For example, you need big transistors for high fan-out gates, for the analog parts, for the transistors in your critical path, etc.

In this formula also you seem to forget that it implies that ALL transistors are switching at each cycle. This is (thank God) not true and therefore usually when we refer to this formula there is a factor before...

... that is we would write the formula as... P = a * C * (V^2) * f

where a is the activity factor: the average fraction of the capacitance that switches in every cycle.

Now this is an area where architecture plays a major role and process is not the king.

The equation is a first order estimate.

Note that fan-out requirements are also reduced, since the loading on the line scales. That is, the gate area is reduced, hence less capacitive load.

Also, if you take the same chip and scale it, then the same transistors are switching. Thus this equation is very good to a first order.

Remember, the U-2 spy plane fundamentals met this same type of back-of-the-envelope test before they proceeded to the real design. (Kelly Johnson was the ultimate back-of-the-envelope engineer.)

First order analysis is a sanity check. If the math says a 50% power reduction and someone is promising a 95% reduction, your BS radar should go on high alert.

What doesn't scale is when you go off chip. The external load is heavily dominated by wiring and the package. Incidentally, the bonding pads don't scale. Thus the chip size doesn't scale 100% with the process reduction, thus the push for systems on a chip. That is, avoid going off chip if possible.

Designing a GPU is harder than a CPU, and you'd need to negotiate with Nvidia, Imagination, AMD, Intel, possibly Intergraph, S3, Rambus, and others to get patent licensing. It's not as easy as buying an ARM license from ARM.

GPU harder than a CPU? That's not my understanding. GPUs are huge parallel processing machines. You design the shaders and then replicate them X times on the chip. My understanding is that GPUs are usually laid out using automated design tools rather than by hand, as is often the case for general purpose CPUs. This is because GPUs are so much simpler given the repetition.

Ya all DO realize that if (big IF) the 2X claims are true, the new iPad 4 is going to wipe the floor with everything else out there?

I mean seriously.

The CPU and GPU performance of all of Apple's mobile devices simply kills the competition at this point in time. Designing and engineering the CPU and the GPU to your OS's needs seems to be the way to go; buying Intrinsity, PA Semi, and Anobit is paying off big time in the performance and profit-margin areas.

Yes and no. Moore's law was more about transistor counts and I don't believe we are in a place where transistor counts vs. price are actually breaking Moore's law, even in the mobile space.

It is pretty impressive, BUT we aren't getting these massive performance increases for free. As was mentioned in the Anandtech article on the iPhone 5, power usage becomes a lot more variable.

The TDPs of these almost-Cortex-A15 chips and the upcoming Cortex A15 chips are a lot higher than the old Cortex A9 chips'. However, they tend to have lower idle power and are significantly faster, more so than the increased TDP. That means per-watt efficiency is higher; however, more power is being thrown at the compute "problem" as well... so in heavy/near-infinite workloads (i.e. gaming) you are likely to face significantly shorter battery lives.

Give it a couple more generations and you might have a phone that can manage 12hrs of talk time and 10hrs of WiFi websurfing with 4x the power of the latest greatest phones today, but if you slam it with a heavy workload, such as some serious gaming, you might have 2hrs of battery life (or less!)

If you look in the x86 space you see more of a race toward increased performance per watt AND lower TDPs (in general). However, gains are tending to be much more incremental (8-20% per generation). In the ARM world, performance gains per new generation look more like 100%, with gains in performance per watt BUT higher TDPs. So battery life is a lot more dynamic and can be horribly short (well, that is likely anyway) in a heavy workload scenario.

I suspect in large part because of this we are going to see a significant tapering off in performance gains between generations of processors pretty soon in the ARM world, as a large part of the gains in performance per watt come from new process nodes (and a little from better architectures). Now granted, going to 20nm FinFET could yield very large power savings, allowing another huge leap in performance, but after that, unless something "revolutionary" comes along beyond your "average" gains in power savings with new process nodes, I doubt we'll see more than a modest 20-30% gain in performance per generation at best.

So maybe one new generation of huge gains in performance. Perhaps 2 and then it is going to slow. A lot.

*A note: Intel is, in effect, on a tick-tock cadence of a new generation every year. ARM seems to be closer to a two-year-per-generation cadence (Cortex A9 was around for a little over 2 years before Cortex A15 is just about to start dropping). I mention that only by way of comparing the gain between Intel/x86 and ARM. ARM does have huge gains of, say, 100% in a generation, but that new generation also takes 2+ years to occur. Intel/x86 manages 2 generations of maybe 10-25% performance gains per generation, figure maybe 30-40% in the same time that it takes ARM to get a 100% gain... but Intel/x86 is also tending to lower TDPs a fair amount in those 2 generations, while ARM is INCREASING TDPs between generations.
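
For a sense of scale, here is a rough sketch of the battery arithmetic behind that worry, using the iPad's 42.5Whr pack from the article; the draw figures are illustrative assumptions, not measurements.

```python
# Battery life is just capacity divided by average draw. The capacity is the
# iPad's 42.5Whr pack from the article; both draw figures below are made-up
# illustrations, not measured numbers.

def battery_life_hours(capacity_whr, avg_draw_watts):
    return capacity_whr / avg_draw_watts

IPAD_BATTERY_WHR = 42.5

print(battery_life_hours(IPAD_BATTERY_WHR, 4.25))  # 10.0h at a light-use average draw
print(battery_life_hours(IPAD_BATTERY_WHR, 12.0))  # ~3.5h with CPU+GPU pinned by a game
```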

that's all great and dandy, but most developers are too busy milking iTunes to give a crap about the android arena with its endless variety of differently specced systems and versions of OS's.

android is worthless for schools/corporations, the openness and myriad of options/vendor lock-ins doesn't help anything in those arenas.

anywho, the gaming's on the ipad, that's all i care about. you can run your suite of benchmark apps all you want all day long if it gives you ewood though

i have no love for apple, or hate for google though, i just go where the games are myself.

I haven't been having any issues finding plenty of games to play on Android which work just fine and look great...