I'm not sure who they contract their manufacturing out to now, but it is a very real possibility that they couldn't get 28nm going in time to hit Apple's production timeline. You have to remember that these chips have probably been in production for several months.

I'm guessing that the "A6" that showed up in the beta code is a die shrink of this chip for the next iPhone. I don't think they'll have an A15 design ready for a July launch.

*Facepalm* Samsung WAS making a 28nm chip for Apple as far back as October. It was going to be the new A6 and an even thinner iPad 3 was supposed to happen. BUT since Apple got their panties in a bunch Samsung told Apple to piss off.

28nm yields are relatively low across the board. For the number of iPads Apple is going to be selling this year, it makes sense that they went with the more dependable 45nm process. It is either that or deal with not being able to make enough iPads to satisfy demand.
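
To see why die size and an immature node interact so badly on yield, here's a standard Poisson yield-model sketch; the defect densities below are invented placeholders, not foundry data.

```python
import math

# Classic Poisson yield model: yield ~ exp(-D * A), with D the defect density
# in defects/cm^2 and A the die area in cm^2. Both D values below are
# made-up illustrative numbers, not real foundry figures.
def die_yield(defects_per_cm2: float, die_area_mm2: float) -> float:
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)

A5X_AREA_MM2 = 163.0  # roughly the A5X die size discussed in the article

print(f"Mature 45nm (assumed D=0.3/cm^2): {die_yield(0.3, A5X_AREA_MM2):.0%}")  # ~61%
print(f"Early 28nm  (assumed D=1.2/cm^2): {die_yield(1.2, A5X_AREA_MM2):.0%}")  # ~14%
```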

Samsung denying components out of spite is a ridiculous idea. They may be competing in the consumer products division, but in terms of components Apple is by far Samsung's largest client.

I'd say you're pretty blind. It is almost as bad as saying that anti-aliasing doesn't make a difference in games.

After a day with the iPad I want this sort of PPI in ALL of my monitors. My desktop monitors are high-end NEC IPS displays, and now all I see are pixels and blockiness in text. Crossing my fingers that yields get good enough to allow this sort of pixel density in desktop and laptop displays.

It is insane that a display of this quality exists in a consumer product that only costs $500.

You can't just switch to a different manufacturing process. Their A5, with both CPU and GPU, is designed and optimized for 45nm. They would have to invest a lot of money to make it work on 32nm or 28nm, and redesign both the CPU and the GPU. It would be stupid to do this with no further change. Even Intel dedicates a whole new generation (the "tick" in its tick-tock cadence) to such a move.

For Apple it would have been idiotic, because ARM's Cortex A15 technology is available now, so they're better off using their knowledge and time to build a new A6 based on the Cortex A15 with a PowerVR Series 6 (Rogue) GPU, built on 28nm. They'd probably have to invest the same money and time, but get much, much more.

You should rather ask why Apple still uses the Cortex A9 and hasn't switched to the A15 yet (look at Qualcomm; it's clearly possible to have A15-class technology ready already).

I've said it before, I'll say it again: to me, everything looks like the A5X is a plan B because the A6 (for whatever reason) was delayed.

Look at, for example, the separate RAM that is being used, rather than PoP (package-on-package) as in the A5. I'm sure that Apple did not want this; they were forced into it by switching to the larger A5X, and were probably not in a position, in time, to integrate double the RAM into the package.

I also am guessing that this *externally driven* RAM is a substantial part of the requirement for a large battery. I find it hard to believe this much larger battery (which supposedly gives us much the same lifetimes for various tasks as the previous iPads) is required for the screen.

People keep seeming to forget that we went through this transition before: the iPhone 3GS had a 1219 mAh battery, the iPhone 4 had a 1420 mAh battery. The iPhone 4 was generally considered to have rather better battery life than the iPhone 3GS, especially in tasks like movie-watching. That seems to indicate that, contrary to the "wisdom" of web commenters, there is no intrinsic reason that a high-DPI screen has to burn power.
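
A quick sanity check on those numbers (a minimal sketch; the 3.7 V nominal cell voltage is my assumption, not a spec-sheet figure):

```python
CELL_VOLTAGE_V = 3.7  # assumed nominal Li-ion voltage, just to get watt-hours

iphone_3gs_mah = 1219
iphone_4_mah = 1420

growth = iphone_4_mah / iphone_3gs_mah - 1
extra_wh = (iphone_4_mah - iphone_3gs_mah) * CELL_VOLTAGE_V / 1000

print(f"Battery growth: {growth:.1%}")       # ~16.5% more capacity...
print(f"Extra capacity: {extra_wh:.2f} Wh")  # ...for 4x the pixels
```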

Sure, it is possible that this particular screen design burns power in a way that the iPhone 4's does not --- but I think the onus is on people who claim this to provide proof, not to simply assert "well of course it does".

This is what I am saying. THINK. What determines the power usage of an LCD display? The backlight. So why should more pixels over the same area require more energy? The total emitted light is the same.

Now in principle there COULD be effects related to more "borders" between pixels resulting in more dead area that doesn't channel light --- aperture effects. In practice:
(a) this seems to have been taken care of in the manufacturing process --- there was an article on it flooding the web about five days ago;
(b) like I said, we have the example of the iPhone retina display. If that doesn't require a substantial boost in power over its predecessor, why should the iPad display be different?

There ARE more backlight LEDs in the iPad 3 screen. But that does not imply that more backlight power is being generated --- they may just be there to create a more even backlight. Certainly my iPad 1, while having a quite acceptable screen, had patches where light would bleed through a pure black image more than in other regions of the screen.

You're assuming the backlight is the same, and not more powerful. With the iPhone, they only had to pack about 614,000 pixels into a small area, but with the iPad they're dealing with five times more pixels on a larger area. They even mentioned that they had to (roughly remembering it) separate the "pixels" from the "signals" in the display and lift the former up so that the signals don't get crossed. That may make it thicker, and make it need a more powerful backlight.

Plus it requires more power from the GPUs to run the display. I doubt the RAM would increase power consumption by anything but a small amount.

The transmissivity (amount of light it lets through) of a screen is a function of the transmissivity of the pixels and the fill factor (fraction of the total area that is actually pixels and not black). In general: higher pixel density = lower fill factor and lower transmissivity of the individual pixels. That is exactly what is going on here; the new iPad screen has significantly lower fill factor and thus, for the same luminance, needs a stronger backlight.

This is the sole reason for the larger battery. The vast majority of power is sucked up by the screen even in the original iPad and iPad 2.
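
To put rough numbers on that (a minimal sketch; the fill factors and transmissivities below are invented placeholders, not measured panel specs):

```python
def backlight_power(target_nits: float, fill_factor: float,
                    pixel_transmissivity: float) -> float:
    """Relative backlight power needed for a target luminance: the light that
    reaches your eye is backlight * fill_factor * pixel_transmissivity, so the
    backlight must scale inversely with that product."""
    return target_nits / (fill_factor * pixel_transmissivity)

# Placeholder values: the higher-density panel has a lower fill factor and
# slightly dimmer individual pixels.
old_panel = backlight_power(400, fill_factor=0.60, pixel_transmissivity=0.08)
new_panel = backlight_power(400, fill_factor=0.45, pixel_transmissivity=0.07)

print(f"Backlight power ratio (new/old): {new_panel / old_panel:.2f}x")  # ~1.52x
```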

You might be right that it was plan B, and that they had to make some unwanted decisions. But let's talk about power consumption.

The new SoC is similar to the old one, except for a second GPU; power consumption of the chip might be 30% higher. RAM does consume some power too, but not that much, or does RAM get noticeably hot?

But the display is the deal breaker. Just take a look at the Engadget post with the iPad screen under the microscope:
http://www.engadget.com/photos/the-new-ipads-lcd-u...

It's pretty obvious that horizontally no added black gap was introduced by the switch to the higher density, but vertically about twice as much black area was added! Additionally, each LC cell consumes power even when turned off, and now they have to control four times the number of cells, so the panel without its backlight will consume at least four times more power (a panel doesn't consume much at all compared to the LED backlight, but it's still an increase). So if you increase the pixel density you get worse transmittance, and thus you have to increase the backlight brightness. With a single row of LEDs they couldn't operate the LEDs in their most efficient region, so they had to add a second row to increase brightness.

Another way to think about it: the new battery is 20 Wh larger, but it delivers the same battery life. So if you think it's because of the RAM and SoC, the two together would have to consume an additional 2 watts. That alone is ridiculous.
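
The arithmetic behind that, spelled out (using the 20 Wh figure and the unchanged ~10-hour runtime rating):

```python
extra_capacity_wh = 20.0  # how much bigger the new battery is
battery_life_h = 10.0     # rated runtime, unchanged between models

# Same runtime from a bigger battery means the average draw went up by:
extra_draw_w = extra_capacity_wh / battery_life_h
print(f"Extra average draw: {extra_draw_w:.1f} W")  # 2.0 W
```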

It's wrong to say it's the SoC only; it's totally wrong to say it's because of the added RAM; it's also wrong to say it's because of the display only. But it's mainly because of the display.

Higher backlight brightness (maybe twice as bright), a faster GPU, and more RAM: all necessary because of the higher resolution.

And placing the RAM at the side of, or over, the chip doesn't really change the power consumption; it's just a space saving, and thus a cost saving.

With Rogue, these mobile SoC GPUs are getting into, and maybe beyond, the ~200 GFLOPS range of the PS3/Xbox 360 GPUs (they said 20x the per-core performance of the 543; the current 543MP4 is about 30 GFLOPS, I think). Do you think the limitation will be elsewhere for actual real-world graphics performance, though? Last I checked these chips still didn't have the memory bandwidth of graphics cards even from 2005, and then there's CPU performance and how large apps/games can be, not to mention controls. With so much potential in Rogue and future SoC chips, I hope the other problems are looked at too.
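
The math behind that ballpark (only the ~30 GFLOPS MP4 figure and the claimed 20x per-core multiplier are inputs; the two-core Rogue configuration at the end is hypothetical):

```python
sgx543mp4_gflops = 30.0          # rough figure for the current 543MP4
per_core = sgx543mp4_gflops / 4  # ~7.5 GFLOPS per 543 core
rogue_per_core = per_core * 20   # the claimed 20x per-core jump

print(f"SGX543 per core:        {per_core:.1f} GFLOPS")
print(f"Rogue per core (20x):   {rogue_per_core:.0f} GFLOPS")
print(f"Hypothetical Rogue MP2: {rogue_per_core * 2:.0f} GFLOPS")  # ~300
```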

A 45nm A5X is a deal-killer for me. The "iPad 3" is essentially an underpowered version of the iPad 2 considering the display's high resolution and lack of CPU/GPU clock increases. The next iPad will benefit from a full node shrink to (presumably) 28nm on BOTH the CPU and the 4G baseband; likely in addition to new CPU (Cortex A15) and GPU architectures. The iPad 3 is shaping up to be a repeat of the iPhone 3G (read: only survives one iOS update before becoming slow enough to impair its usefulness).

This is in addition to the battery problems the iPad 3 is likely to experience: that 45nm A5X is BIG for a mobile SoC, and will be generating a lot of heat. Hot iPad innards = significantly diminished Li-Ion battery lifetime...

I'm not sure about your iPhone 3G comparison. Apple's iOS updates seem to be more RAM-dependent than anything else. A good example would be that iPhoto runs on the iPhone 4 (512MB) but not the original iPad (256MB), even though the latter's SoC is faster. The new iPad's RAM was doubled, while the iPhone 3G's wasn't.

I am very curious to see practical benchmarks. It is possible that the GPU upgrade increased performance for things other than rendering video. Remember that Core Image, Core Video, and other components of iOS/OS X are GPU-accelerated. Applications actually feel a little bit snappier than they do on the iPad 2.

It is a minor difference, but it is there. Again, looking forward to AnandTech's review.

So it's as big as Ivy Bridge; that's a more interesting comparison.

Mobile GPU war: Apple can't be part of such a war, so for one to exist we would need Android phone makers that have their own SoCs to go for huge die sizes, forcing Nvidia, Qualcomm, and everybody else to do the same. But that would push phone prices up, so maybe it would be better to have no such wars before 20/22nm. There is also the matter of heat: a huge GPU could force lower CPU clocks (like it might just be doing right now in Apple's case).

For traditional PCs, consoles, and TVs, obviously, a large GPU could work even before 20/22nm, since cost and heat budgets are less of a problem, but there isn't much of a point in going that way unless you've got the sales volume and the software.

If anything, I would much rather see 2-3x faster storage in phones and tablets.

Even on a device that does not use a swap file? Once an app is launched, which takes on average a couple of seconds, where is all this slow disk I/O that's "bottlenecking" the experience? On the other hand, I can't think of many applications on something like the iPad that aren't using the GPU for *something*.

You can't really compare it to Ivy Bridge, though; IB will be using 22nm transistors, so it will have a much higher density, and even Sandy Bridge is on 32nm while Apple is on 45nm. Heck, even Tegra is on 40nm, so Apple's chips are less dense overall.

That still remains one of the oddest parts of Tegra 3. As pointed out in Anand's own Medfield review, current ARM cores by themselves can be very easily choked by lack of memory bandwidth (handling that much better seems to be a major part of why Medfield did so well). With the Tegra 2 it was somewhat understandable, because it was quite an early part, but at the end of 2011, with a quad-core part and an updated (though still weak) GPU, it was very odd that Nvidia of all companies would stick with a single-channel memory interface when everyone else had left that behind.
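
To put numbers on the bandwidth gap (a sketch; the LPDDR2 speed grade and the 2005-era card specs are my assumptions for illustration):

```python
def peak_bandwidth_gbs(bus_bits: int, transfers_per_sec: float) -> float:
    """Peak DRAM bandwidth: bus width in bytes times effective transfer rate."""
    return bus_bits / 8 * transfers_per_sec / 1e9

single_channel = peak_bandwidth_gbs(32, 1066e6)   # 32-bit LPDDR2-1066 (assumed)
dual_channel   = peak_bandwidth_gbs(64, 1066e6)   # what a dual-channel part gets
gpu_2005       = peak_bandwidth_gbs(256, 1200e6)  # e.g. a 256-bit GDDR3 card

print(f"Single-channel LPDDR2: {single_channel:.1f} GB/s")  # ~4.3
print(f"Dual-channel LPDDR2:   {dual_channel:.1f} GB/s")    # ~8.5
print(f"2005 desktop GPU:      {gpu_2005:.1f} GB/s")        # ~38.4
```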

Tegra 3 was pretty disappointing. I very much hope, as you say here, that Wayne will be a major leap forward and really blow everyone's socks off. It's definitely going to be wicked exciting in 2013, with both Series 6 on the GPU side and heterogeneous big.LITTLE A15/A7 SoCs on the CPU side. We're still on such a strong upward curve in the mobile space; every year is bringing incredible leaps forward and massive competition. Like being back in the early/mid-90s all over again, but even better :).

I've always felt like Tegra was designed for marketability over all else. Every Tegra revision was supposed to be the leader of mobile SoCs, but every time they turned out to be more hot air than performance. Quad + 1 cores is marketable; dual-channel memory to actually feed the cores isn't. An 8-"core" GPU is marketable, but it's handily slaughtered by a year-old Imagination Tech (SGX) chip.

1) They are cheap to make, since the die size is so small. When carriers don't subsidize the device, margins can be small. Apple can make their chips bigger since A) they sell so much volume, and thus can push downward pressure on their marginal costs by buying in bulk, and B) they are the market leader, so they can charge more for their device.

2) Tegra 2 was the Android development platform for Android 3.x, so everybody knows the software and you don't have to pay money to tweak it.

So good marketing combined with cheap manufacturing means you can make your money and sell the device.

So, the next SoC in the iPhone is coming up in six months' time. It definitely won't be the A5X, as it simply won't fit in an iPhone-sized device.

An A5X with a 28nm die shrink? But as someone stated above, that doesn't make much sense, because switching nodes requires tuning and redesign. It would be better if they simply designed the A6 around the new node.

So what will the A6 be? Cortex A15+A7 with Rogue? Sounds great! But neither the A7 nor the A15 will be anywhere near ready in a few months' time, and it would take Apple a month to stock up on parts.

IMHO, 162.94 mm² makes it very unlikely that the A5X will end up in the next iPhone. An A6 makes more sense, maybe even with big.LITTLE A7+A15. It was said that devices with big.LITTLE will be available by the end of the year, so if the next iPhone launches in October like it did last year, it could happen (yes, Apple needs a lot of chips, but they also have the advantage of owning and designing both the chip and the phone, so the timing advantages and disadvantages might cancel each other out).

I mean, what are the alternatives? An A5X die shrink? An A6 with just 2xA15?
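
For what it's worth, the ideal-scaling math on the die-shrink option (a rough sanity check only; real shrinks always fall short of ideal area scaling):

```python
a5x_45nm_mm2 = 162.94  # the die size quoted in the article

# Area scales roughly with the square of the feature-size ratio.
for node_nm in (32, 28):
    ideal_mm2 = a5x_45nm_mm2 * (node_nm / 45) ** 2
    print(f"Ideal {node_nm}nm shrink: ~{ideal_mm2:.0f} mm^2")
# ~82 mm^2 at 32nm and ~63 mm^2 at 28nm: phone-sized on paper, but only after
# paying for the node-switch re-tuning discussed above.
```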

We won't see an A5X in the iPhone 5. The whole point of the extra graphics power is driving the retina display in the iPad 3. It'll either stay with an SGX543MP2 or, more likely, we'll see some 600-series (Rogue) chips.

Maybe it's a "quad-core" in the sense of 2xA7 + 2xA15. ARM said it's also possible to expose all four cores to the OS (the standard arrangement is that the OS sees either the 2xA7 pair or the 2xA15 pair). Apple, with full control over software and hardware, could easily expose all four cores to iOS.