It doesn't matter. ARM has discontinued their 1st-gen Midgard GPUs (Mali-T604 and T658) for all intents and purposes. If you go to the Mali-T604 page, you see this message:

"Did you know about the ARM® Mali™-T624?

Designed for visual computing and using innovative tri-pipe architecture, the ARM® Mali™-T624 GPU Compute is the upgraded version of the Mali-T604. This solution builds upon a track record for high-quality scalable multicore solutions for 2D and 3D graphics. Key APIs supported include OpenGL® ES 1.1, 2.0 and 3.0, DirectX® 11 and OpenCL™ 1.1."

ARM has even removed the Mali-T658 page from their servers. In fact, the only reason the Mali-T604's page is still present on ARM's website is that the Nexus 10 was recently launched. I guess this infuriated Samsung, and they decided to leave Mali for the Exynos 5.

Hummingbird already used PowerVR hardware, so it's not surprising to see them go with that again. As for PowerVR > anything: Adreno, especially the new 3xx series, looks very competitive in terms of performance/power consumption. Getting power consumption numbers for Mali GPUs is a bit more difficult, but for what they are, I find them quite adequate as well. But I guess fanboys have conquered the smartphone hardware space now as well.

Yes, but not necessarily because PowerVR is the best (there aren't many good apples-to-apples comparisons for mobile GPUs out there). Apple spends the most die area of any SoC manufacturer on their GPU; that's why they are the fastest.

"I wonder, why did they not opt for the PowerVR Rogue arch ? That could have been much handy in beating Apple."

Because Apple usually gets the good stuff first? They're typically a version ahead of the competition, hence the 554MP4 in the iPad 4 vs the 54xMPx in the others. It's been this way since the first iPad. Probably Apple's 10% stake in Imagination working for them, although the NovaThors have been touting Rogue for a couple of years but still no production. Issues?

Well that's very disappointing. I was hoping for Mali T628. But even this Mali GPU is disappointing in itself. If they decided to move to PowerVR, they should've at least gone with Series 6. Instead of moving forward with OpenGL ES 3.0, they're moving backwards.

What the hell, Samsung? I fear this may be just the first of Samsung's dumb moves in 2013, and 2012 may have been their peak.

I suspect, as Anand has mentioned before, that the power efficiency of the PowerVR architecture was a key factor in the decision not to use a Mali T6xx. In the power test article, the T604 was pretty power hungry.

I don't think Apple will get Rogue into their next iPad, especially if the rumored March launch is true. The A6X is so much faster than its rivals that they can use the die shrink to 28nm to boost clocks and still easily have the fastest GPU.

It would be nice if they added a row that explicitly states the max shipping frequency for each chip that is used to calculate the GFLOPS rating. Working backwards, the frequencies are: 250MHz (A5), 250MHz (A5X), 533MHz (Exynos 5 Octa), 280MHz (A6X). The 280MHz A6X frequency in particular would be good to get explicitly on paper, since it's been hinted at before but I don't think anyone has explicitly stated it.

And to be most correct, I believe the GFLOPS actually round to 51.2 GFLOPS for the Exynos rather than 51.1 GFLOPS at 533MHz, and 71.7 GFLOPS for the A6X rather than 71.6 GFLOPS at 280MHz.
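The back-calculation above can be sketched in a few lines. This is a rough check, not the article's method: it assumes 32 FLOPs/clock per SGX543/SGX544 core and 64 FLOPs/clock per SGX554 core, figures that are not stated in the comment itself.

```python
# Back-of-the-envelope check of the GFLOPS figures in the table.
# Assumed per-core throughput: SGX543/SGX544 = 32 FLOPs/clock,
# SGX554 = 64 FLOPs/clock (double the ALUs).
def gflops(cores, flops_per_clock, mhz):
    """GFLOPS = cores * FLOPs per clock per core * clock in GHz."""
    return cores * flops_per_clock * mhz / 1000

exynos = gflops(3, 32, 533)   # SGX544MP3 in the Exynos 5 Octa
a6x    = gflops(4, 64, 280)   # SGX554MP4 in the A6X

print(round(exynos, 1))  # 51.2
print(round(a6x, 1))     # 71.7
```

With these assumptions the Exynos comes out at 51.168 and the A6X at 71.68, which matches the rounding argument: 51.2 and 71.7, not 51.1 and 71.6.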

Hasn't happened yet. PVR and Apple have ruled the roost since 2007. Don't hold your breath. Every single halo mobile GPU launched in that timeframe has been one disappointment after another... well, except of course for PVR launches. I'm really tired of waiting for 1) some other device maker to use as many PVR cores as Apple and 2) some other GPU developer (Nvidia!) to get off their butts and make something competitive. I think I'm going to close my eyes and check back in 3 years.... AGAIN.

In that case there's an extra question: how do they count cores? From what I understand, current SoCs with PowerVR have at most 4 PowerVR cores. Mali designs are similar, having at most 8 cores according to Wikipedia. The Tegra 4 has 72 GPU cores of some kind. Are they counting different things? Is the architecture so significantly different?

Yes, they are counting different things. Each PowerVR "core" is really a complete GPU with shader ALUs, texture units, control logic, etc. Each nVidia "core" is really just a shader ALU. That's why Anand counts SIMDs, which is basically the ALU count.

Each SGX543/SGX544 "core" has 4 SIMDs/ALUs, whereas each SGX554 "core" doubles the SIMD/ALU count to 8 per core. As such, the SGX554MP4 in the A6X has 32 ALUs. PowerVR ALUs are also beefier and are able to process 4 MAD instructions each, whereas each nVidia ALU can only do 1 MAD. That works out to the SGX554MP4 being able to process 128 MAD instructions per clock, whereas Tegra 3 can only do 12.

Tegra 4 is reported to use basically the same GPU shader architecture as Tegra 3, just with everything increased 6x. Tegra 4's 72 cores can therefore likely process 72 MAD instructions per clock vs 128 for the SGX554MP4. Clock speeds, memory bandwidth, and other factors will affect real-world performance, but clock-for-clock, the SGX554MP4 in the A6X should be faster than Tegra 4.
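The per-clock arithmetic above can be written out explicitly. These figures come from the comment itself (4 cores × 8 ALUs × 4 MADs for the SGX554MP4; one MAD per "core" for Tegra), not from vendor documentation:

```python
# Per-clock MAD throughput as described in the comment above.
def mads_per_clock(cores, alus_per_core, mads_per_alu):
    return cores * alus_per_core * mads_per_alu

sgx554mp4 = mads_per_clock(4, 8, 4)   # 4 cores x 8 ALUs x 4 MADs each
tegra3    = mads_per_clock(12, 1, 1)  # each "core" is a single 1-MAD ALU
tegra4    = mads_per_clock(72, 1, 1)  # Tegra 3's shader design scaled 6x

print(sgx554mp4, tegra3, tegra4)  # 128 12 72
```

This makes the clock-for-clock claim concrete: 128 MADs per clock for the SGX554MP4 vs 72 for Tegra 4, before clocks and bandwidth enter the picture.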

The problem is always the same and likely to stay: cost. This is reflected in the die area of each mobile chip. The A5X (iPad 3) was ~163mm2 while Tegra 3 was ~80mm2. You are talking about a 2x factor. For twice the cost, Apple can afford to be better. (See this table for the different sizes: http://www.anandtech.com/show/5685/apple-a5x-die-s... )

In discrete GPUs, this would be like comparing a GTX 680 (top of the line, $400) to a Radeon 7870 (upper mid-range, $200). The Nvidia card will trample the Radeon, at the cost of... money.

Of course, in the case of tablets and phones, we do NOT see those costs, as they get hidden in the total price and are often pocketed by the product manufacturer (another interesting link explaining why Apple may have introduced the iPad 4 as a cost reduction: http://blogs.timesofindia.indiatimes.com/WebWise/e... )

So, as GPU capabilities will be capped by cost, it is unlikely that we will see anyone else beat Apple (which, honestly, overspends on graphics power at the moment as a differentiator and an advantage for gaming).

The cost difference between an 80mm2 die and a 160mm2 die is not that huge if you are shipping in the tens of millions. You'd spend that $10 when it significantly improves your $500 device, puts less demand on the battery, and reduces the heat the device gives off.

Having cool-running silicon is very important in these slim devices. That's why Apple burns die area: it means they can design the device they want, rather than the device the chip forces on them.

Having a smaller, higher-clocked, hotter-running chip would probably cost Apple more in working around the constraints or in needing a larger battery.

The GPU of this phone has twice the graphics performance of the iPhone 5, whose GPU runs at 267MHz, while the GPU of the Galaxy S4 I9500 runs at 533MHz. Given that one core of the SGX544 GPU at 200MHz has 7.2 GFLOPS of compute performance, 3 cores of SGX544 at 533MHz have 57.6 GFLOPS, not 51.1 GFLOPS as the specialists of the tech site Anandtech wrote.
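Both numbers can be reproduced, which shows where the disagreement actually lies. A sketch, using the commenter's 7.2 GFLOPS/core at 200MHz figure on one side, and on the other an assumed 32 FLOPs/clock/core (the rate the article's figure implies, not something either party states outright):

```python
# Commenter's method: scale 7.2 GFLOPS per core at 200MHz linearly
# to 3 cores at 533MHz. 7.2 GFLOPS / 200MHz implies 36 FLOPs/clock/core.
commenter_gflops = 3 * 7.2 * (533 / 200)
print(round(commenter_gflops, 1))  # 57.6

# The article's figure instead implies 32 FLOPs/clock/core:
# 3 cores * 32 FLOPs/clock * 0.533 GHz = 51.168, printed as ~51.1/51.2.
article_gflops = 3 * 32 * 533 / 1000
print(round(article_gflops, 1))  # 51.2
```

So the two results are internally consistent; the gap comes down to whether an SGX544 core does 36 or 32 FLOPs per clock.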