Not to be outdone by Nvidia's Tegra 4 announcement and Qualcomm's Snapdragon 800-series announcement, Samsung took to the stage today to announce the next processor in its Exynos 5 lineup: the Exynos 5 Octa is an eight-core SoC destined for tablets and high-end smartphones.

Not all of these CPU cores are created equal: four of them are high-performance Cortex-A15 cores, the very same found in the Exynos 5 Dual that powers the Nexus 10 and Samsung's ARM Chromebook. The other four are Cortex-A7 CPU cores—these have the same feature set and capabilities as the A15 cores, but are optimized for power efficiency rather than performance.

This makes the Exynos 5 Octa one of the first (if not the first) products to actually use ARM's big.LITTLE processor switching technology, something we outlined back in October of 2011. The SoC is designed to dynamically split the workload between the high-performance and the high-efficiency CPU cores based on the task at hand—less strenuous activities like browsing an app store or checking e-mail might be done on the A7 cores, for instance, while gaming and number crunching could be handed off to the A15 cores.

The A15 and A7 cores can work in concert with one another, making it theoretically possible to create a device that can set new speed records without devouring your battery (though of course we'll need to get actual devices in for testing to see how well the technology works in practice). This differs from Nvidia's Tegra 3 and Tegra 4 SoCs, which include a low-power Cortex-A9 and a Cortex-A15 (respectively) "companion core" that can only be used when all of the SoC's other CPU cores are powered down. There's nothing to say that one approach is inherently better than the other, but while they're conceptually similar, they're functionally quite different.

The new SoC is also built on a 28nm manufacturing process that promises to be more power-efficient than the 32nm process used by the Exynos 5 Dual.

Samsung didn't name a particular GPU while it was on stage, but did say that it featured "twice the performance of any previous-generation processor." Assuming that "previous generation" includes chips like Samsung's own Exynos 4, we're probably talking about the same Mali-T604 GPU found in the Exynos 5 Dual, which in our Nexus 10 benchmarks turned in performance roughly twice that of the Exynos 4 (though if we find any evidence to dispute that assumption, we'll update this article).


Andrew Cunningham
Andrew wrote and edited tech news and reviews at Ars Technica from 2012 to 2017, where he still occasionally freelances; he is currently a lead editor at Wirecutter. He also records a weekly book podcast called Overdue. Twitter: @AndrewWrites

Edit for clarification after re-reading several ARM white-papers about big.LITTLE. Seems I was mistaken. There is a slim possibility that this would be a true 8-core SoC, where the OS sees and uses all 8 cores simultaneously. I still doubt that the first ever big.LITTLE implementation would jump straight to the AMP variation. But, I guess we'll just have to wait for the in-depth reviews to know for sure.

I still think it's marketing fluff to call this "an 8-core SoC" instead of trumpeting the big.LITTLE setup, or calling it a "powerful, power-saving 4-core SoC" or something along those lines (like how Nvidia trumpets their "companion core" for power savings).

Original post:

Spoiler: show

Repeat after me: THIS IS NOT AN 8-CORE SoC!!!

Yes, there are 8 separate CPU cores in the SoC; however, you cannot use more than 4 of them at once.

The mobile OS only sees 4 cores, period. When it's idle or doing light tasks, the OS uses the A7 cores. When tasks build up and the A7 cores ramp up the frequency, the OS switches completely to the A15 cores. All running tasks migrate from the A7 cores to the A15 cores.

This is exactly how the big.LITTLE setup works. And it all happens transparently. The user never sees more than 4 cores.

You CANNOT run applications on the A7 and A15 cores in parallel...it's one set of four cores or the other.

This is a 4-core SoC. Period.

Calling it anything else is marketing fluff of the worst kind! And not something I'd expect to be parroted by Ars.

It amazes me that GPUs are an after-thought. The primary market for these SoCs is Android devices. One of the areas that Android devices fail to compete with iOS devices is in GPU power. Why constantly ignore that, especially at the same time that so many are looking to make Android gaming devices?

It amazes me that GPUs are an after-thought. The primary market for these SoCs is Android devices. One of the areas that Android devices fail to compete with iOS devices is in GPU power. Why constantly ignore that, especially at the same time that so many are looking to make Android gaming devices?

How easy is it for devs to take advantage of GPU power in Android apps? With everything basically running in a Java VM, it seems like it would be a lot harder to tap into all of the GPU's power the way that something like Infinity Blade does on iOS, since those apps run closer to the metal, from my understanding of how iOS works. How do the available APIs compare?

Funny thing though, there are actually 8 cores on the SOC. SO IT IS AN 8-CORE SoC!11!!!1

I updated the post to cover that. While there are technically 8 separate CPU cores in the SoC, you can never use more than 4 at once. Thus, it's not an 8-core SoC as we all understand the term. It's pure marketing fluff to try to call it an 8-core CPU.

Even the ARM literature on big.LITTLE refers to this setup as "quad-core".

Yes, there are 8 separate CPU cores in the SoC; however, you cannot use more than 4 of them at once.

The mobile OS only sees 4 cores, period.

[...]

Calling it anything else is marketing fluff of the worst kind! And not something I'd expect to be parroted by Ars.

This, while probable, is not necessarily true. big.LITTLE MP (http://www.arm.com/files/downloads/big_ ... _Final.pdf page 7) allows for both the A15 and A7 cores to be online at the same time. This is trickier for a standard SMP scheduler, since cores don't normally differ in performance characteristics, but it's not impossible.

It is not clear from this article or any sources if Samsung supports big.LITTLE MP.

Funny thing though, there are actually 8 cores on the SOC. SO IT IS AN 8-CORE SoC!11!!!1

I updated the post to cover that. While there are technically 8 separate CPU cores in the SoC, you can never use more than 4 at once. Thus, it's not an 8-core SoC as we all understand the term. It's pure marketing fluff to try to call it an 8-core CPU.

Even the ARM literature on big.LITTLE refers to this setup as "quad-core".

I can just see it now, all the whiners returning their Galaxy Note IIIs because all the task managers only show 4 CPU cores in use ... which is 50% of what they paid for! Blargh, this is the worst kind of misleading marketing. And something I'd expect Ars to pick up on and refute.

Funny thing though, there are actually 8 cores on the SOC. SO IT IS AN 8-CORE SoC!11!!!1

I updated the post to cover that. While there are technically 8 separate CPU cores in the SoC, you can never use more than 4 at once. Thus, it's not an 8-core SoC as we all understand the term. It's pure marketing fluff to try to call it an 8-core CPU.

Even the ARM literature on big.LITTLE refers to this setup as "quad-core".

Actually you are incorrect, there is another operating mode, big.LITTLE MP, which allows for all of the processors to be active.

Even if confined to a 4-core at a time mode, you are using all 8 cores at some point during your day, and getting the benefit of the different characteristics of those cores - so it's not just pure marketing fluff to call it an 8-core CPU.

It amazes me that GPUs are an after-thought. The primary market for these SoCs is Android devices. One of the areas that Android devices fail to compete with iOS devices is in GPU power. Why constantly ignore that, especially at the same time that so many are looking to make Android gaming devices?

How easy is it for devs to take advantage of GPU power in Android apps? With everything basically running in a Java VM, it seems like it would be a lot harder to tap into all of the GPU's power the way that something like Infinity Blade does on iOS, since those apps run closer to the metal, from my understanding of how iOS works. How do the available APIs compare?

Dalvik isn't a normal Java VM, it's very fast. It'd be a lot easier for a Java app to unlock GPU performance than CPU performance in any case, because OpenGL calls are OpenGL calls.

At any rate, you can also use the NDK and be 'closer to the metal' anyway.

This is likely a very ignorant question, but don't modern CPUs have the ability to enable/disable portions of them, and also have been able to dynamically alter their speed for quite a few years now? If so, what is the advantage of something like big.LITTLE, especially for a thermal and power constrained case such as that in smartphones and tablets?

It amazes me that GPUs are an after-thought. The primary market for these SoCs is Android devices. One of the areas that Android devices fail to compete with iOS devices is in GPU power. Why constantly ignore that, especially at the same time that so many are looking to make Android gaming devices?

The new Adreno GPUs and even the N10 have good GPUs. Only by Apple unexpectedly releasing an updated iPad did they regain the lead over the N10.

This is likely a very ignorant question, but don't modern CPUs have the ability to enable/disable portions of them, and also have been able to dynamically alter their speed for quite a few years now? If so, what is the advantage of something like big.LITTLE, especially for a thermal and power constrained case such as that in smartphones and tablets?

Yes, there are 8 separate CPU cores in the SoC; however, you cannot use more than 4 of them at once.

The mobile OS only sees 4 cores, period. When it's idle or doing light tasks, the OS uses the A7 cores. When tasks build up and the A7 cores ramp up the frequency, the OS switches completely to the A15 cores. All running tasks migrate from the A7 cores to the A15 cores.

This is exactly how the big.LITTLE setup works. And it all happens transparently. The user never sees more than 4 cores.

You CANNOT run applications on the A7 and A15 cores in parallel...it's one set of four cores or the other.

This is a 4-core SoC. Period.

Calling it anything else is marketing fluff of the worst kind! And not something I'd expect to be parroted by Ars.

Interesting - do you have a link for that? Because the design from ARM certainly does allow for big.LITTLE to run both A15 and A7 cores simultaneously in AMP mode. It's just the first round of silicon that only supported all-big or all-little at a given instant, as you describe. It's not clear from the article which approach Samsung took, and I'm not sure if/when AMP-capable chips will (did?) hit the market.

OS support has a lot to do with it too - switching between the big and small cores can be done with just a power management driver, or transparently by a hypervisor or monitor running under the OS kernel. AMP mode requires enough smarts in the scheduler to make sane decisions about where a particular thread/process should run.

EDIT: I should learn to read the whole thread before replying. Beaten by pretty much every other post

This is likely a very ignorant question, but don't modern CPUs have the ability to enable/disable portions of them, and also have been able to dynamically alter their speed for quite a few years now? If so, what is the advantage of something like big.LITTLE, especially for a thermal and power constrained case such as that in smartphones and tablets?

Yes, and both the A7 and the A15 do this. However, each has a different microarchitectural design (pipeline length, possibly out-of-order execution, etc.), so they have quite different power/performance curves. As the A7 ramps up in frequency/performance, there is only so far it can physically go, regardless of how much power is available. So then you switch to the A15.

Is it possible each is also built with transistors/processes with different optimisations for power consumption vs. flat-out performance? That would further diverge the performance/power curves.

[Bad Car Analogy] It's like asking, just because car engines have the ability to work at different RPMs, why do we need a gearbox? [/BCA]

A promise not to hog the battery? Sure, I've seen the Cortex-A15 power usage levels over at AnandTech; that thing eats up more power than any other SoC out there. Intel, Qualcomm, Apple, and now even AMD with their new SoC have it beat in terms of compute power per watt.

I've also seen the Cortex-A7 benchmarks, and those make it look worse than the Cortex-A9. I'm pretty sure ARM's reference designs have just made their first big foul-up. And just when competition is fiercest, which is to say nothing of theoretical 7-watt TDP Haswell chips, which will probably crush the A15 in higher-end tablets and notebooks. Good thing for ARM that they still have those licensing deals.

This is likely a very ignorant question, but don't modern CPUs have the ability to enable/disable portions of them, and also have been able to dynamically alter their speed for quite a few years now? If so, what is the advantage of something like big.LITTLE, especially for a thermal and power constrained case such as that in smartphones and tablets?

Even when sitting idle, with all cores power-gated, a Cortex-A15 CPU core leaks more energy than a Cortex-A7 CPU core (even when the A7 is running almost full tilt). Ars, AnandTech, and ARM all have graphs showing the power curves for the two cores. There's a small crossover between the A7 running full tilt and the A15 being mostly idle.

Yes, there are 8 separate CPU cores in the SoC; however, you cannot use more than 4 of them at once.

The mobile OS only sees 4 cores, period. When it's idle or doing light tasks, the OS uses the A7 cores. When tasks build up and the A7 cores ramp up the frequency, the OS switches completely to the A15 cores. All running tasks migrate from the A7 cores to the A15 cores.

This is exactly how the big.LITTLE setup works. And it all happens transparently. The user never sees more than 4 cores.

You CANNOT run applications on the A7 and A15 cores in parallel...it's one set of four cores or the other.

This is a 4-core SoC. Period.

Calling it anything else is marketing fluff of the worst kind! And not something I'd expect to be parroted by Ars.

Interesting - do you have a link for that? Because the design from ARM certainly does allow for big.LITTLE to run both A15 and A7 cores simultaneously in AMP mode. It's just the first round of silicon that only supported all-big or all-little at a given instant, as you describe. It's not clear from the article which approach Samsung took, and I'm not sure if/when AMP-capable chips will (did?) hit the market.

OS support has a lot to do with it too - switching between the big and small cores can be done with just a power management driver, or transparently by a hypervisor or monitor running under the OS kernel. AMP mode requires enough smarts in the scheduler to make sane decisions about where a particular thread/process should run.

EDIT: I should learn to read the whole thread before replying. Beaten by pretty much every other post

Yeah, after going back and re-reading several ARM white-papers, I noticed the sections on big.LITTLE MP. Oops! Guess I'll be tasting shoe leather for a while.

I still doubt that the first ever public release of a big.LITTLE architecture would be an MP implementation. But, I guess we'll wait for the in-depth dissections to be sure.

Edit 1: Their press release only mentions big.LITTLE, not big.LITTLE MP. And they had a bunch of ARM folks on stage with them, without mentioning MP.

"There's nothing to say that one approach is inherently better than the other"

Yes, there really is. Tegra 3's switching time from the companion core to the fast ones is a couple of orders of magnitude longer than big.LITTLE's.

"In theory, this gives Kal-El the best of both worlds – good peak performance and attractive idle power. However, the implementation results remain to be seen. The 2ms switch overhead is very high." (http://www.realworldtech.com/kal-el-cores/)

It amazes me that GPUs are an after-thought. The primary market for these SoCs is Android devices. One of the areas that Android devices fail to compete with iOS devices is in GPU power. Why constantly ignore that, especially at the same time that so many are looking to make Android gaming devices?

How easy is it for devs to take advantage of GPU power in Android apps? With everything basically running in a Java VM, it seems like it would be a lot harder to tap into all of the GPU's power the way that something like Infinity Blade does on iOS, since those apps run closer to the metal, from my understanding of how iOS works. How do the available APIs compare?

This is why we have things like RenderScript and OpenCL. RenderScript is Android-only, but it works similarly to OpenCL (additionally, it can potentially target the immense compute power in DSPs like Hexagon). So, in other words, it's not that hard.

This is likely a very ignorant question, but don't modern CPUs have the ability to enable/disable portions of them, and also have been able to dynamically alter their speed for quite a few years now? If so, what is the advantage of something like big.LITTLE, especially for a thermal and power constrained case such as that in smartphones and tablets?

Even when sitting idle, with all cores power-gated, a Cortex-A15 CPU core leaks more energy than a Cortex-A7 CPU core (even when the A7 is running almost full tilt). Ars, AnandTech, and ARM all have graphs showing the power curves for the two cores. There's a small crossover between the A7 running full tilt and the A15 being mostly idle.

There's a difference between idle and hot-plugged. There is a hacky hotplug governor that's been around Linux for ages (traditionally used to remove dead CPUs in servers), but it has been used in the embedded space to completely power down CPUs if the SoC supports it. The problem is that the unplug time is far from deterministic (ranging from 5ms to a few seconds), and it causes a general system freeze so state info can be migrated. In other words, it wasn't designed for the use to which the embedded people have put it. However, work is proceeding to see if these problems can be fixed.

This is likely a very ignorant question, but don't modern CPUs have the ability to enable/disable portions of them, and also have been able to dynamically alter their speed for quite a few years now? If so, what is the advantage of something like big.LITTLE, especially for a thermal and power constrained case such as that in smartphones and tablets?

Yes, and both the A7 and the A15 do this. However, each has a different microarchitectural design (pipeline length, possibly out-of-order execution, etc.), so they have quite different power/performance curves. As the A7 ramps up in frequency/performance, there is only so far it can physically go, regardless of how much power is available. So then you switch to the A15.

Is it possible each is also built with transistors/processes with different optimisations for power consumption vs. flat-out performance? That would further diverge the performance/power curves.

[Bad Car Analogy] It's like asking, just because car engines have the ability to work at different RPMs, why do we need a gearbox? [/BCA]

The ARM PDF that has been linked to a couple times shows this very well.

The graph on page 5 shows that the A7 core running at its fastest speed is slightly faster than the A15 running at its lowest speed, but the A15 uses something like 50-75% more power. The lowest-power/lowest-speed point for the A7 looks to be something like a tenth of the power of the A15 at its lowest power/speed.

I read an article about Intel with a quote that a given CPU architecture can scale its power consumption/performance up and down by about 10 times between the highest and lowest points. You can only lower the clock speed and voltage so much for a given design, and you can only push the clock speed and voltage of a given design up so much. This is part of the reason why the current Core processors top out at a 95-watt requirement (they actually seem to max out at more like 70-80). A processor that could use 140 watts would be really hard to get down to the 10-13 watts they are quoting for their new ULV chips.

Ultimately, this type of big.LITTLE design does make sense, because you want to push the idle/low-power state as low as possible for battery life reasons while still being able to provide higher active performance. It's not the only way to do things; as others have mentioned, Nvidia's Tegra 3 uses a companion core of the same design that was process-optimized to run at lower power/speeds.

I'd bet that this will be a regular big.LITTLE implementation in general usage (Android or Windows tablets/phones), where the OS only sees 4 cores and the SoC handles automatically migrating back and forth between the big and little cores based on the power state the OS is telling it to use. I think this will be the case even if the chip is capable of big.LITTLE MP, which would allow the OS to see all 8 cores. This means the Android and Windows schedulers don't have to be modified to understand the differences in the cores to correctly run the right task on the right core.

The A7 core on its own doesn't look like a bad core at all. It should provide almost A9-level performance while using noticeably less power. That should be plenty of performance for a mid-range smartphone, while hopefully allowing for much better battery life without having to have a frigging huge battery like the Droid MAXX's.

I've also seen the Cortex-A7 benchmarks, and those make it look worse than the Cortex-A9. I'm pretty sure ARM's reference designs have just made their first big foul-up. And just when competition is fiercest, which is to say nothing of theoretical 7-watt TDP Haswell chips, which will probably crush the A15 in higher-end tablets and notebooks. Good thing for ARM that they still have those licensing deals.

It doesn't matter that the A7 benches below the A9 - as soon as you get close to maxing out the A7 your application will migrate to the A15, and that will blow the A9 away. The A7s are there to do the everyday computing for the smartphone at a very low power consumption. The A15s will kick in for games and other intensive applications. I don't know if Samsung's implementation can have all cores on at once, or if all processes need to be migrated to the A15s just because one is high power.

Let's talk about 7W TDP Haswells when they're actually available, not "soon to be available" a year away.

Maybe when Samsung implements this new CPU Android will at last run without lag?

This old myth again? I haven't noticed any lag on my S3. If indeed there is any, it's most likely due to Dalvik, and that's not going to go away anytime soon.

This old excuse again? You must have the Super Special Edition S3, given only to diplomats and celebrities, because my S3 lags like crazy. Or maybe you just have lower standards? Every S3 I've laid my hands on lags exactly the same way mine does. Sure, I get used to it after a while, but then I use a friend's iPhone 5 and my fantasy is destroyed by its buttery smoothness.

Android phone developers had better clue into the fact that "super big bright screen" isn't going to be enough for much longer. They are going to have to do something about the user experience, and that will require taking an honest look at how borked Android is.

Maybe when Samsung implements this new CPU Android will at last run without lag?

This old myth again? I haven't noticed any lag on my S3. If indeed there is any, it's most likely due to Dalvik, and that's not going to go away anytime soon.

This old excuse again? You must have the Super Special Edition S3, given only to diplomats and celebrities, because my S3 lags like crazy. Or maybe you just have lower standards? Every S3 I've laid my hands on lags exactly the same way mine does. Sure, I get used to it after a while, but then I use a friend's iPhone 5 and my fantasy is destroyed by its buttery smoothness.

Android phone developers had better clue into the fact that "super big bright screen" isn't going to be enough for much longer. They are going to have to do something about the user experience, and that will require taking an honest look at how borked Android is.

I owned the iPhone 3GS, 4, and 4S. I still own the 4S as my work phone, and switched to the S3. The S3 is smoking fast and I've never seen lag, whereas I have on my iPhone.

If I have a bunch of background apps suspended, the iPhone becomes particularly slow, but the S3 doesn't.

Android devices had lag early on because the UI wasn't GPU-accelerated in early versions of Android. But this is an issue of the past.

We can believe one anecdotal report or a conflicting one, or we could believe the countless developers and tech reviewers who all cite the S3 as lightning-fast and the lag issues as a thing of the past.

It amazes me that GPUs are an after-thought. The primary market for these SoCs is Android devices. One of the areas that Android devices fail to compete with iOS devices is in GPU power. Why constantly ignore that, especially at the same time that so many are looking to make Android gaming devices?

How easy is it for devs to take advantage of GPU power in Android apps? With everything basically running in a Java VM, it seems like it would be a lot harder to tap into all of the GPU's power the way that something like Infinity Blade does on iOS, since those apps run closer to the metal, from my understanding of how iOS works.

OpenGL is a high level language that describes relatively abstract tasks that can be performed by a GPU or even a CPU. Its not particularly close to the metal in the sense that you are thinking. Your GPU doesn't take the raw API calls and execute them. Instead they're parsed by a driver and essentially compiled into a form that your GPU can understand. This is why two GPUs with vastly different architecture can execute the same code (and in fact even CPUs can execute them in spite of having no graphics hardware at all).

Maybe when Samsung implements this new CPU Android will at last run without lag?

This old myth again? I haven't noticed any lag on my S3. If indeed there is any, it's most likely due to Dalvik, and that's not going to go away anytime soon.

This old excuse again? You must have the Super Special Edition S3, given only to diplomats and celebrities, because my S3 lags like crazy. Or maybe you just have lower standards? Every S3 I've laid my hands on lags exactly the same way mine does. Sure, I get used to it after a while, but then I use a friend's iPhone 5 and my fantasy is destroyed by its buttery smoothness.

Which version of the S3? US (using Qualcomm S4 SoC) or International (using Samsung Exynos SoC)? The performance between the two is very different, as Samsung heavily optimises their version of Android for the Exynos.