There’s no doubt that competition between chip makers is steadily increasing, not only for PC processors but for mobile and other processors as well. The big five are Intel, AMD, nVidia, Qualcomm, and Apple. Each of these companies has a different take on how to evolve its processors, which will make it interesting to see whose strategy lets it rise to the top. My own opinion, however, is simple – choose more cores over better cores.

Before I get into the nitty gritty, please note the wording of the article title. This article is about why I believe more cores will beat better cores, which implies that this will ring true (again, in my opinion) sometime in the future. As for now, with applications still predominantly single-threaded and unaware of multiple cores, better cores are the winners.

Background

Over the years we’ve seen our processors grow from single-core lightweights all the way to eight-core monsters (or 16-core if you include servers). Obviously, having multiple cores is beneficial and lets the system work on more data at the same time than if it had only a single core. But at this point a new question arises – is there a point where it’s more beneficial to stop adding cores and simply make them better? Will having 12 cores instead of 8 make much of a difference? We may feel that 4, 6, or 8 cores reaches the “good enough” plateau as far as core counts go, but we could do a lot better.

Why More Cores Will Be Better

Of course, having more cores and better cores is the best solution, but what if you have to choose? If I were the one choosing, I’d go with more cores. Why? The inspiration for my answer lies in how GPUs work.

GPUs are packed with cores. In fact, some of the latest cards have 2,048 cores to brag about. They have that ridiculous number of cores because it lets them work on lots of data at the same time. With more cores, more data can be crunched in parallel. Yes, GPU cores are only good at one type of work (which is why we still need CPUs), but the same concept can be applied to CPUs as well.

With more cores, more data can be crunched by the CPU, and you get a speedy system that zips through anything you throw at it, provided the software is written to be aware of all your cores. In short, many good cores will eventually be better than a few great cores.
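To make the idea concrete, here is a minimal sketch of what "more cores crunching more data" looks like in practice. The toy workload (summing squares) and the function names are my own illustration, not anything from the article; the point is simply that once a job is split into independent chunks, each core can work on its own piece at the same time.

```python
# Hedged sketch: splitting a CPU-bound job across cores with a
# multiprocessing pool. The workload and chunking scheme are illustrative.
from multiprocessing import Pool

def sum_of_squares(chunk):
    # The per-core unit of work: crunch one chunk of the data.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # Split the data into one roughly equal chunk per worker...
    chunks = [data[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        # ...and let each core crunch its own chunk simultaneously.
        return sum(pool.map(sum_of_squares, chunks))

if __name__ == "__main__":
    data = list(range(1000))
    # Same answer as a serial loop, but the work is spread over the cores.
    print(parallel_sum_of_squares(data))
```

Note that this only pays off because each chunk is independent – exactly the property the article's later discussion of parallel-friendly workloads hinges on.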

The Current Plans of Big Chip Makers

Intel currently seems to be holding to a 4-core limit (6 for its extreme series of products) while making continued refinements to its cores. nVidia, meanwhile, is increasing its core counts. So is Qualcomm with its Snapdragon processors, although somewhat more slowly, as it also makes custom adjustments to the stock ARM designs. Even Apple is gaining cores with its iPhone/iPad processors, but at a very slow rate.

Which strategy is the best? Right now, who knows? Maybe you have an opinion?

Conclusion

What will really happen in the end is something we can only find out with patience. However, as more software becomes able to take advantage of numerous cores, the advantage will eventually shift to those processors that, as a whole, can output the most work. Until then, we’ll just have to be happy with whatever currently works best.

What’s your opinion, more cores or better cores? When do you think we will finally know which choice is better? Any other thoughts? Let us know in the comments!

For anyone interested, go check out Amdahl's Law [http://en.wikipedia.org/wiki/Amdahl's_law]. The true advantage of more cores depends purely on the quality of the code, and a proper parallel program is really a bitch to get right.
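The commenter's point can be made precise with a few lines of arithmetic. Amdahl's Law says the speedup from n cores is capped by whatever fraction of the program must run serially. The function name and the 90%-parallel example figures below are my own illustration:

```python
# Amdahl's Law: speedup = 1 / (serial_fraction + parallel_fraction / cores).
# Even a program that is 90% parallel can never exceed a 10x speedup,
# no matter how many cores you throw at it.

def amdahl_speedup(parallel_fraction, cores):
    """Overall speedup with `cores` cores when only `parallel_fraction`
    of the work can be parallelised."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

for n in (2, 4, 8, 1024):
    print(n, round(amdahl_speedup(0.9, n), 2))
```

Running this shows diminishing returns: going from 4 to 8 cores helps noticeably, but going from 8 to 1024 barely moves the needle because the 10% serial portion dominates – which is exactly why the quality of the (parallel) code matters so much.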

And you can't go about comparing CPUs with GPUs. They are entirely different beasts. GPU cores run small programs called shaders, written in a language close to the GPU's machine language. Each component/object in a 3D environment has some shader applied to it – be it to reflect things, to render the object's surface finish, or to produce some lighting effect. The shader determines how the object ultimately looks on screen.
Because of this, GPUs are good at parallel work: they have a huge collection of specialised cores, each only capable of performing tasks you can define in a shading language. Each shader is separate from its neighbour and works independently. This makes GPU work truly parallel.

Not so with a normal PC. An OS is usually one big program that loads/unloads sub-programs into some execution environment – no program gets direct access to the hardware. Only when software vendors make software (and OSes) that is properly parallel from the word go will we see a true advantage.

Don't for one moment think an eight-core 1.4GHz processor will beat a single-core 4GHz processor in a benchmark today. The software available to us is not nearly as optimised as it should be.

It's not really the quality of the code that's the issue (although it can be) - it's how well suited the job is to parallel work. Graphics processing units have huge amounts of data that need to be processed in real time - but it's just one type of operation. There's little decision-making to be done based on previous data; the GPU is just doing the finishing work on whatever the CPU has decided.
Most programs don't work like this, however. The next state of a program is usually highly dependent on the many, many states that have come before it. If you're driving a single car, the choice of path you take depends on many variables - road conditions, traffic, the signs telling you how to get there. If you're in a busy city, you can't always count on being able to take the same routes to your destination. You may know in advance that you want to turn at one intersection, but until you actually get there, you don't know for sure that it's possible. On top of that, you're driving just a single car - you can't take two roads at once.
If you can find a way to subdivide tasks - tasks that don't require other tasks to come before them - then you can multitask. You can have one thread watching for input while another thread processes input already received, and another works on displaying data that's already been processed. Later threads still need tasks to be completed by earlier ones before they can do anything, but once they have the data, they can work independently while the previous thread is free to deal with the next batch.
More cores allow for more to be processed simultaneously; they don't actually speed up any single task. If there isn't any way to efficiently distribute the workload, then more cores do nothing for you; you're constrained by how much work a single core can do in a given time frame. If you CAN break up the workload, and have each core handle its own small, independent piece, then you can get much more done in the same amount of time. That's why multicore processors seem to be faster.
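The thread pipeline described above - one thread watching input, another processing it, another handling output - can be sketched in a few lines. The stage names, queues, and the uppercase "processing" step are my own toy stand-ins, not anything from the comment:

```python
# Toy two-stage pipeline: a reader thread feeds a processor thread through
# a queue, and the main thread collects the finished results. A None
# sentinel tells each stage that no more work is coming.
import queue
import threading

def reader(lines, raw_q):
    # Stage 1: watch for input and hand it off.
    for line in lines:
        raw_q.put(line)
    raw_q.put(None)  # sentinel: no more input

def processor(raw_q, done_q):
    # Stage 2: process items as soon as they arrive.
    while True:
        item = raw_q.get()
        if item is None:
            done_q.put(None)
            break
        done_q.put(item.upper())  # stand-in for real processing

def run_pipeline(lines):
    raw_q, done_q = queue.Queue(), queue.Queue()
    threading.Thread(target=reader, args=(lines, raw_q)).start()
    threading.Thread(target=processor, args=(raw_q, done_q)).start()
    results = []
    # Stage 3 (main thread): collect output until the sentinel arrives.
    while (item := done_q.get()) is not None:
        results.append(item)
    return results

print(run_pipeline(["more", "cores"]))  # → ['MORE', 'CORES']
```

Each stage only depends on the batch handed to it, so while the processor works on one item, the reader is already free to fetch the next - the "independent piece" property the paragraph above describes.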

Even if the job a program is trying to perform isn't something that can be easily split into subtasks, having multiple cores should still be advantageous when looking at the whole system, as there are multiple programs running at the same time. If they (and their workloads) are more evenly distributed among the cores, then in general each program will have more CPU cycles available, as there are more CPU cycles in total.

Danny Manno

March 20, 2012 at 1:00 am

Take it from someone who has had to study the ins and outs of processors - both CPUs and GPUs, from pipelines to Flynn's taxonomy to RISC and CISC and various architectures and instruction sets: it's not that simple.
Have you ever tried writing multi-core optimised code?

I'm just curious where you got the Task Manager screen shot from, only because the background is the eagle logo of the University of North Texas (which most people don't use unless it's a university computer).

Yes, that is the eagle wallpaper for UNT, but no, it's not a university computer. I'll be attending UNT for my freshman year this coming Fall, and I just found the wallpaper and set it on my personal computer. So I guess I'm an exception. :)

I will prefer fewer but better cores over many cores, because better cores mean less power draw, they're more optimised for specific applications, they're used to their full potential, and they output consistent but smooth performance. I will prefer up to 4 cores, because many tests have shown that it's a sweet spot for many resource-hog applications, and 90-95% of applications out there are still not optimised even for 2 or 4 cores...

For now... wasn't that the point of the article? It's an opinion about speculation. That's what makes choosing a stand difficult at the moment... but if I had to choose one, I would say better cores (with a max limit of 8 cores, servers excluded) will come out on top in the long race as well, simply because we value convenience too much.