According to Primate Labs' own John Poole, the latest version of the app -- which landed on the App Store today -- "features a dramatically improved processor frequency detection algorithm, which consistently reports the A6's frequency as 1.3GHz." In speaking with us, he affirmed that "earlier versions of Geekbench had trouble determining the A6's frequency, which led to people claiming the A6's frequency as 1.0GHz, as it was the most common value Geekbench reported."
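
For the curious, the general idea behind this kind of frequency detection can be sketched pretty simply. The C below is just a common technique, not Primate Labs' actual (unpublished) algorithm: time a long chain of data-dependent ALU operations and work backward from an assumed cycles-per-step figure. DVFS and compiler optimization are exactly the sort of things that make it fragile.

Code:

/* A sketch of one common frequency-detection idea, NOT Geekbench's actual
 * (unpublished) algorithm: time a long chain of data-dependent ALU ops and
 * divide by an assumed cycles-per-step figure. DVFS, compiler optimization,
 * and wide pipelines all skew the result, which is exactly why this kind
 * of detection is hard to get right. */
#include <stdio.h>
#include <stdint.h>
#include <time.h>

int main(void) {
    const uint64_t steps = 100000000ULL;   /* 1e8 serially dependent steps */
    uint64_t x = 88172645463325252ULL;     /* arbitrary nonzero seed */
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (uint64_t i = 0; i < steps; i++)
        x = (x << 1) ^ (x >> 3);           /* each step depends on the last */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("result: %llu (using x keeps the loop from being optimized out)\n",
           (unsigned long long)x);
    /* ASSUMPTION: ~2 cycles per step (two parallel shifts feeding an XOR);
     * check the disassembly on your target before trusting the estimate. */
    printf("estimated clock: ~%.2f GHz\n", 2.0 * steps / secs / 1e9);
    return 0;
}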

I thought this was interesting, as previous reviews by AnandTech showed the CPU clocked at 1.2GHz+ and not the 1GHz espoused by others; this lends some credence to that, I guess. Also note that the Krait S4 and Exynos have roughly 25% better floating-point and integer performance than the A6.

Still, it's interesting how the CPU achieves almost twice the performance of the previous-generation CPU. I suppose it's software optimization combined with good hardware?

Yeah, it's optimization, plus the fact that iOS was made for only specific hardware.

It's like how L4D2 on Linux got more FPS: not because Linux uses less RAM or anything (as it is, Windows 7 idles at 0% CPU usage and doesn't use much RAM), but because the Linux kernel is open source and OpenGL is an open standard, Valve was able to make the Source engine, the graphics driver, and the kernel communicate in some mind-blowingly efficient ways and unlock the true potential of the graphics card.

Basically, optimization is a big deal, but for years it was never treated as one because we were used to "one size fits all" policies from the tech leaders of that era (Microsoft).
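
To put rough numbers on why the draw-call path matters so much, here's a toy cost model in plain C. The overhead figures are invented purely for illustration (this is not real engine or driver code): each draw call pays a fixed submission cost, so thousands of tiny draws get crushed by overhead while one batched draw amortizes it.

Code:

/* Toy cost model, not real engine or driver code: every draw call pays a
 * fixed submission overhead (driver validation, state setup, kernel
 * transitions) plus per-vertex work. The numbers here are invented purely
 * for illustration. */
#include <stdio.h>

int main(void) {
    const double call_overhead_us = 10.0;  /* assumed cost per draw call */
    const double per_vertex_us    = 0.01;  /* assumed cost per vertex */
    const int objects = 5000;              /* small objects drawn per frame */
    const int verts   = 36;                /* e.g. one cube per object */

    /* naive path: one draw call per object */
    double naive = objects * (call_overhead_us + verts * per_vertex_us);

    /* batched path: all objects merged into a single draw call */
    double batched = call_overhead_us + (double)objects * verts * per_vertex_us;

    printf("naive:   %.1f ms/frame\n", naive / 1000.0);   /* ~51.8 ms */
    printf("batched: %.1f ms/frame\n", batched / 1000.0); /* ~1.8 ms */
    return 0;
}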

It has nothing to do with the Linux kernel or it being open source. OpenGL can push more draw calls than DX can, and OpenGL actually runs faster under Windows than it does under OS X.

DX makes programming in low-level assembly a waste of time, and so does OGL. High-level programming languages exist for a reason.

Valve's showcase was less an exercise in capabilities than fodder for the Linux proponents, considering they're now supporting Linux more than they ever have.

L4D2 ran at 303.4fps under OpenGL on Windows vs. 315fps under OpenGL on Linux, a gap small enough to chalk up to margin of error.

This all coming from Valve's Linux team.

Quote:

Originally Posted by DuckieHo

Memory bandwidth is probably the biggest deal. Current ARM designs are (or are close to being) RAM bandwidth-starved, especially for GPU-intensive workloads.
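
A quick way to see what that starvation looks like: a STREAM-style triad, sketched in C below (a rough probe, not the actual STREAM benchmark). It streams through arrays far larger than any cache and counts bytes moved per second; on a bandwidth-starved SoC that figure, not the core clock, is what caps GPU-heavy workloads.

Code:

/* Rough STREAM-style triad probe, not the actual STREAM benchmark: stream
 * through arrays far larger than any cache and count bytes moved per
 * second. Compile with -O2 so the loop actually saturates memory. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    const size_t n = 1 << 23;              /* 8M doubles = 64 MB per array */
    double *a = malloc(n * sizeof *a);
    double *b = malloc(n * sizeof *b);
    double *c = malloc(n * sizeof *c);
    if (!a || !b || !c) return 1;

    /* touch every page up front so faults don't land in the timed region */
    for (size_t i = 0; i < n; i++) { a[i] = 0.0; b[i] = 1.0; c[i] = 2.0; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < n; i++)
        a[i] = b[i] + 3.0 * c[i];          /* triad: two reads, one write */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs  = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double bytes = 3.0 * n * sizeof(double);   /* 24 bytes moved per element */
    printf("triad bandwidth: ~%.2f GB/s\n", bytes / secs / 1e9);

    free(a); free(b); free(c);
    return 0;
}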