I think it's pretty obviously an ARM solution, as that's what they have experience with. That also makes more sense because they have been touting the benefits of using those combinations. Furthermore, the 544MP3 would practically be a step back from the T604, even ignoring the other ARM and Imagination solutions they could use.

I don't think the switching delay matters that much when the goal of these chips is reasonable performance with good battery life, not maximum performance; otherwise you might as well just chuck the A7 cores and run the A15s at full blast.

Yeah, the compromise-y nature of it is important for the whole thing to make sense. In theory, 6W's a lot. In real use, you rarely hit that--you usually just blast 1-2 of the A15s for a few seconds while you load a webpage or app or do some other big chunk of CPU-bound work.

If I'm going to second-guess and play armchair engineer (as DigitalFreak aptly put it), maybe you can imagine other uses for all that die area than going 4+4-core when many workloads still aren't heavily threaded--more cache w/the A15s, more GPU (I bet games on 1080p phone screens can use a lot), something. Apple was OK with dual-core, at least as of the A6(X). On the other hand, I haven't the first clue how other designs perform, and Samsung does, so I should close my mouth. :)

I don't see any mention of an L3 cache, and the L2 caches of the A7 and A15 clusters are not shared. Therefore, it's highly possible the switch goes across main memory, which may add milliseconds of delay (dumping and reloading cache data to and from low-power DDR, powering down and warming up the cores). How much work would the A15/A7 have to do just to even out the performance and energy penalty of switching?

It depends on what is handling the switching. These initial implementations are using a cpufreq driver, with internal core switching being moved from the hypervisor to the kernel in order to switch between pairs (as illustrated above; the heterogeneous mode will come after a good solution is found for the scheduler). The switching times aren't bad because you have cache coherency between the clusters (not shown above), and thus you only need to transfer the active register state.
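To make the idea concrete, here is a toy sketch of the cluster-switching decision the governor has to make. All names and thresholds here are invented for illustration; the real logic lives in the kernel's cpufreq driver, not in Python.

```python
# Toy model of a big.LITTLE cluster-switching governor.
# Thresholds and names are illustrative assumptions, not the real driver.

A7_CLUSTER, A15_CLUSTER = "A7", "A15"

UP_THRESHOLD = 0.85    # migrate to the A15 cluster above this load
DOWN_THRESHOLD = 0.30  # migrate back to the A7 cluster below this load

def next_cluster(current, load):
    """Pick the cluster for the next interval given CPU load in [0.0, 1.0].

    The gap between the two thresholds is a hysteresis band that keeps
    the governor from ping-ponging between clusters. On a real switch,
    only the active register state needs migrating; cache coherency
    between the clusters handles the rest.
    """
    if current == A7_CLUSTER and load > UP_THRESHOLD:
        return A15_CLUSTER
    if current == A15_CLUSTER and load < DOWN_THRESHOLD:
        return A7_CLUSTER
    return current  # inside the hysteresis band: stay put

cluster = A7_CLUSTER
for load in [0.1, 0.9, 0.6, 0.2]:
    cluster = next_cluster(cluster, load)
    print(load, cluster)
```

Note how the 0.6 sample keeps the A15 cluster active: a single threshold would have bounced the workload straight back to the A7s.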

"While it's possible for you to use both in parallel, initial software implementations will likely just allow you to run on the A7 or A15 clusters and switch based on performance requirements." - Imagine future projects such as OUYA based on this baby with all cores enabled :) This would be a perfect HTPC.

Intel had better be prepared; time is ticking. It seems like every generation, ARM CPUs take a big jump in performance.

There's a difference between a more powerful CPU and one that simply has more cores slapped on. If ARM really had a more powerful CPU, then this architecture would only have one smaller CPU that was able to run everything while consuming less energy, rather than needing two in order to save energy. Furthermore, if Intel should be afraid of ARM, then they should be afraid of AMD for making an 8-core processor that outperforms a 4-core processor by a bit.

I'm afraid you are confused in this case... that is the architecture of the cores, being ARMv7... all Cortex-A cores use this architecture: the A5, A7, A8, A9 and A15... so it is correct :) The left column in that table is the A15, the right is the A7.

Any idea why Samsung is using the A7 for LITTLE instead of the A5? If the A7 and A5 are both ARMv7 architecture, it would make more sense to use the A5 instead of the A7, because the A5 is a lower-power core than the A7, and that's the main concept of the LITTLE core, right?

Both say ARMv7-A, which is the instruction set architecture. Both the A7 and A15 processors use the same instruction set, hence they are able to implement the big.LITTLE architecture in the first place.

The architecture is ARMv7-A; the actual chip designs are called A7 and A15. This means both designs understand the same instructions and can thus run the same software (which is needed for quick, transparent switches). ARM is not very good at the naming game yet.

No, they never test it. They haven't even tested if the SoC works at all. And the performance numbers, they are just random numbers. /s

The A7 is only slightly slower than an A9, and Android JB runs smoothly on dual-core A9 SoCs because it makes heavy use of the GPU for rendering. So for a lag-free UI the GPU will be more important. A quad-core A7 will be faster than a dual-core A9! It will handle the usual tasks without any issues at all.

The A7 is supposed to have just slightly less performance than an A9, but with much reduced power requirements. It will most likely take over from the A8 and low-end A9 SoCs for low-to-mid-range phones. Look for dual-core A7s to hit "feature" phones this year.

According to the chart, the quad-core A15 part consumes about 5 W, probably CPU only. If this SoC were put inside a smartphone, the battery would be dead in less than an hour once you also consider the power consumption of the display and PowerVR GPU. This SoC is for tablets, with large enough batteries and a large enough surface to passively cool the 5 W + GPU waste power. Maybe it will find a use in the next Galaxy Note 10 or in a boosted Nexus 10, but never in a smartphone.

The A7 turns the clock back about three years, to the Cortex-A8 days, in terms of DMIPS/MHz. I can easily see many an app, process or thread wanting more. It will be interesting to see where running a web browser lands. It's not going to be pretty if it stays on the A7.
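For rough context, the per-clock figures below are the commonly quoted ballpark DMIPS/MHz numbers for these cores (marketing estimates, not measurements), which is roughly what the comparison above rests on:

```python
# Commonly quoted DMIPS/MHz figures for ARM Cortex cores
# (ballpark marketing numbers; treat with appropriate skepticism).
dmips_per_mhz = {
    "Cortex-A8":  2.0,
    "Cortex-A9":  2.5,
    "Cortex-A7":  1.9,  # back to roughly A8-era per-clock throughput
    "Cortex-A15": 3.5,
}

def dmips(core, mhz):
    """Estimate single-core DMIPS for `core` running at `mhz` MHz."""
    return dmips_per_mhz[core] * mhz

# At the same clock, an A7 trails even an A9 noticeably:
print(dmips("Cortex-A7", 1200))
print(dmips("Cortex-A9", 1200))
```

So a browser thread pinned to an A7 really would be working with roughly 2008-era per-clock throughput, as the comment says.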

All very true. :D My Galaxy Nexus has some "think pauses" (I'm running a custom everything, so not sure if that happens on plain Android). But when that happens I often wonder if it is a CPU issue or a memory one. It mostly happens when starting/switching between memory-intensive apps (big emails, video, browser). Would the noticeable performance increase be bigger from an A15 upgrade or from getting a halfway decent SSD with >200MB/s seq r/w and >30MB/s rnd r/w? :)

That's the idea behind big.LITTLE: in low-demand tasks use the A7, in high-performance tasks use the A15. But the device must be able to handle the A15's power consumption. If the A15 cluster consumes 5 W and you start a game, which will most probably make use of the A15's power, your smartphone battery will be dead in an hour just because of the CPUs. Yes, in standby the battery life will be good, and I never denied this; that's what big.LITTLE is made for. In heavy use, however, this SoC will, with CPU and GPU at full power, most probably consume 10 W. A smartphone battery has <10 Wh, and a smartphone's surface is too small to dissipate 10 W. Conclusion: this SoC won't find a use in a smartphone. It's physically impossible, unless you never make use of the A15 cores, which defies the purpose of this SoC!
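The back-of-the-envelope math behind that "dead in an hour" claim can be spelled out; all figures here are the comment's own assumptions (5 W CPU from the chart, a guessed 5 W GPU, a ~10 Wh battery), not measurements:

```python
# Battery-life estimate under sustained full load.
# Inputs are assumptions from the comment above, not measured data.
cpu_power_w = 5.0   # quad A15 cluster at full tilt (from the chart)
gpu_power_w = 5.0   # assumed: GPU roughly matches the CPU under load
battery_wh = 10.0   # generous smartphone battery (~2700 mAh at 3.7 V)

total_power_w = cpu_power_w + gpu_power_w
runtime_h = battery_wh / total_power_w

print(f"{runtime_h:.1f} hours of runtime at full load")  # -> 1.0 hours
```

Of course, real workloads are bursty rather than sustained, which is exactly the case big.LITTLE is designed around.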

If you assume 3.5 DMIPS/MHz for Cortex-A15, a quad-core A15 running at 2 GHz is 3500*2*4 = 28000 DMIPS. That's quite close to the point in the upper right in the plot, which is actually a little over 5 Watts. Maybe 5.2 W.
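Spelling out that arithmetic (the 3.5 DMIPS/MHz figure is the commonly quoted estimate for the A15, not a measured value):

```python
# Quad Cortex-A15 throughput estimate from the comment above.
dmips_per_mhz = 3.5   # commonly quoted figure for Cortex-A15
freq_mhz = 2000       # 2 GHz
cores = 4

total_dmips = dmips_per_mhz * freq_mhz * cores
print(total_dmips)    # -> 28000.0
```

That 28,000 DMIPS point lining up with the ~5 W mark on the chart is what makes the plot plausible.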

Even in a tablet, the SoC may be prevented from maxing out the CPU and the GPU at the same time. This could be an 8 to 10 W SoC with both the GPU and CPU maxed out.

I don't think the die photo shows the whole die, only the upper half maybe. I'm also no chip designer, but maybe the orange-brown area is used for wires connecting the different parts. The top-left part looks odd: not orange-brown, not structured, but near the RAM interfaces. Maybe they blurred that part because it contains their 'secret CPU switching' part?

I see. Yup, the photo only shows half of the die, which really made it more confusing. I didn't mean wasted as in empty and doing nothing, but more as in containing no active components. And if that were the entire die, the percentage occupied by the orange-brown space would have been huge. Since this is probably just half of the die, it matters a lot less.

I was saying years ago that Intel needed to take an Atom and stick it on the same die as an i-series chip, and essentially do with them exactly what is described in this article. But of course, they didn't do it, and as a result they lost billions in potential mobile chip sales to companies like Apple. Haswell looks like something of an improvement, but you can tell they're still not doing what needs to be done.

90% of the time, a tablet/ultrabook only needs the CPU power of one single Atom core. This is the basic fact that has been ignored by Intel (and AMD) for more than a decade now. But Samsung understands this.

No! Extra devices get shut down if not used. Laptop screens are/were terrible, but that means they are low-resolution TN panels, and TN panels have better transmittance than IPS, just as low-resolution displays have better transmittance than high-resolution ones. I hope you're able to follow: this means that, yes, they are more efficient and don't require such a bright backlight. The idle power consumption of an HDD is higher than that of an SSD, but in return you get more space, and in general the impact is small compared to the CPU and GPU power consumption. I'm sorry, your logic is flawed.

Btw: this is an article about the power consumption of an SoC, and only the SoC! Why do you compare it with the power consumption of a whole system? This SoC consumes less power than an Intel CPU/GPU/chipset combo. That's what matters, nothing else. So don't compare apples with oranges.

The Mali T604 is on the same level as a PowerVR 554 @ 280 MHz, and the PowerVR 544MP3 is slower than both of those GPUs, so it would be very strange to see it inside the Exynos Octa. More likely there will be a Mali T658/678MP4, which is twice as fast as anything on the market today.

But I'd rather have seen a quad-core A7 and a dual-core A15. I feel at least two of the four possible threads should always be on an LP core. Maybe an Exynos 5 Hexa, and an Exynos 5 Octa for the next Chromebook (plus a bigger battery).

That Exynos 5 Hexa could then fit another (power-gated) GPU module, or maybe a small 2D accelerator to allow constant gating of the GPU during normal usage (like the OMAP 4470).

Regardless, the Exynos 5 Dual is too weak a low end (between the low core count and no LP cores) while the 5 Octa is too strong. *Sigh* This time I feel I really could armchair-engineer a better solution.