Moore's Law is coming to an end

After nearly 50 years, Moore's Law, an observation about the evolution of computers made back in 1965 by Intel co-founder Gordon Moore, is coming to an end. We have been hearing predictions that Moore's Law would end around 2020 at the 7nm node, where manufacturing difficulties are said to become too challenging, but the reality of the situation is different.

The problem with such a hypothesis is that Moore's Law is formulated not just around transistor size and efficiency - it is also about component cost. Here is the exact definition of Moore's Law, as proposed by Moore himself:

The complexity for minimum component costs has increased at a rate of roughly a factor of two per year. Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years.
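Taken literally, Moore's formulation above is a compounding doubling. A quick sketch (the starting figure and the two-year doubling period used here are illustrative assumptions, not numbers from the article) shows how fast such growth compounds:

```python
# Moore's observation, taken literally: the component count at minimum
# cost per component doubles roughly once per period (originally ~1 year,
# later commonly cited as ~2 years). Illustrative numbers only.

def components(start: int, years: int, doubling_period: float = 2.0) -> int:
    """Projected component count after `years`, doubling every `doubling_period` years."""
    return int(start * 2 ** (years / doubling_period))

# A hypothetical 1-billion-transistor chip, projected 10 years out:
print(components(1_000_000_000, 10))  # 2^5 = 32x -> 32,000,000,000
```

Ten years of two-year doublings is a 32x increase, which is why even a modest slowdown in the doubling period compounds into a large long-term difference.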

Hearing about the end of Moore's Law around the same time Intel unveils 14nm Broadwell chips, scheduled to arrive by the end of the year, might seem strange, but there are plenty of reasons to back up such a statement. To understand them, you just have to look at the chart below, shown by ST's Joel Hartmann, which illustrates how cost starts drifting off the trend line at 20nm and below.

Moreover, this is not an isolated finding: the same data is also backed by chip maker GlobalFoundries, among others. Basically, the second chart, by GlobalFoundries, shows how 28nm polySiON manufacturing is the most cost-efficient process at the moment, and that scaling down, or using other technologies like HKMG and FinFET, comes at a much higher cost ratio.

Gordon Moore formulated the law in 1965

In addition, one has to factor in the increase in wafer prices, which zeros out the transistor density gain at sub-28nm nodes. Finally, on top of that, SRAM bit cell size scaling encounters some fundamental challenges below 28nm. Since SRAM bit cell size is also crucial for the final SoC size, it is extremely challenging to reduce it on newer nodes. We recommend you take a look at the source at the bottom of the article for a detailed breakdown of all the technical difficulties encountered when scaling below 28nm.
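The wafer-price argument above boils down to simple arithmetic: what matters economically is cost per transistor, i.e. wafer cost divided by transistors per wafer. A minimal sketch, using made-up figures (not real foundry pricing), shows how a density gain can be cancelled out by a wafer price increase:

```python
# Why rising wafer prices can cancel a node shrink's benefit.
# The metric that matters: cost per transistor = wafer cost / transistors per wafer.
# All numbers below are hypothetical illustrations, not real foundry data.

def cost_per_transistor(wafer_cost: float, transistors_per_wafer: float) -> float:
    return wafer_cost / transistors_per_wafer

old = cost_per_transistor(5_000, 1.0e12)   # hypothetical mature 28nm wafer
new = cost_per_transistor(10_000, 1.9e12)  # hypothetical newer node: ~1.9x density, 2x wafer price

print(new > old)  # True: despite the shrink, each transistor got *more* expensive
```

If the price ratio between nodes grows faster than the density ratio, dimensional scaling stops paying for itself, which is exactly the break from Moore's Law the article describes.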

However, what seems clear even without digging much deeper is that there are valid reasons why 28nm may indeed mark the end of Moore's Law. This would also bring an end to the dimensional scaling that chip makers have relied upon for so long. Silicon crafters will now have to look into different ways to optimize chip performance in the very near future. What will those be? We're sure we'll learn soon, but feel free to tell us your guesses and thoughts right below.

Manufacturing on the 28nm process is the most cost-effective, so manufacturers will tend to stick with the most cost-effective route.

But is that even valid? Tech always becomes affordable, yet we always manage to creep into more advanced and expensive technologies; after a while, these technologies become more widely used and cheaper. I don't see cost affecting Moore's law for a LONG time (20 years minimum). Moreover, graphene will extend Moore's law to about a 2-3nm process.

Nobody ever argued that Moore's law wouldn't end, everyone knows it will. But, that's only because Moore's law relates strictly to circuitry density, not speed. Processor speeds will continue to rise for centuries free from Moore's law.

+1 for you
I was going to say the same. The page is only taking into account the cost TODAY of building below 28nm, but it does not take into account that the more chips are produced on smaller dies, the cheaper it becomes.
I'm sure they will find ways to lower the price of those technologies in order to increase performance, even if that means moving away from silicon to carbon or something else.

AppleHateBoy (unregistered)

If you have been following the graphics/CPU industry, you would know better. CPU performance has, for all intents and purposes, stagnated ever since Sandy Bridge hit at 32 nm. Yes, Ivy Bridge and Haswell decreased power consumption, but performance has remained the same.

GPU makers have for the first time built two generations of products on a single node, 28 nm. NVIDIA and AMD haven't had a new node for almost 2.5 years, and by the time 20 nm arrives, 28 nm will have served for 3 years. This hasn't happened before.

TL;DR: the article is correct in stating that 28/32 nm were the last nodes that followed Moore's law.

Silicon is as cheap as it gets. Graphene, as carbon, is quite cheap in the sense that it IS carbon, but producing it in mass quantities with nearly non-existent defect tolerance makes it extremely expensive for the foreseeable future. Don't just assume graphene will work out.

Now that researchers have had more time to experiment with the material, the rose-colored glasses have come off and many see its many faults. Graphene is already seeing proofs of concept in certain types of applications, but nothing even remotely close to proof that it can be used for digital logic. Even producing graphene with the required purity is painstaking, because just one slightly misplaced atom can drastically alter its properties.

I think it's time for the people who believed some of those air-headed futurists to face reality: thinking that exaflop computing would be in their phones by the end of the decade, and that they could play Crysis 130913 with real-time ray tracing in 8K resolution, beaming it onto drywall with their smartphone's light projector. Psh... the nerve of these people.

That's because Intel wasn't in it ;-) They have gone from 32 nm to 22 nm and will soon be releasing 14 nm with 10 nm design underway. Good enough application processors will never be good enough because there will always be more software functions that require more compute power.

Wrong. Moore's law is about transistor density in the same die area, effectively making transistors cheaper and more economical, while also improving computational power and efficiency as a nice plus.

Qubits and quantum computing are a little different in that a 'finalized' design for them isn't even decided on. Hell, there isn't even a proper quantum computer out. Just D-Wave, which uses quantum annealing and doesn't work the way a traditional one is theorized to.

I don't know who upvoted you, but you should read more about quantum computing first... it's interesting. Moore's law was not about computational power so much as it was about economics. It was somehow reinterpreted into the former by the masses at some unknown point in time, though...

And yes, Moore's law talks about transistor density in the same die area, effectively making transistors cheaper and more economical while also improving computational power and efficiency. But the result of Moore's law is that computational power keeps doubling gradually while still maintaining an affordable price for consumers in the end; that's what I meant in my first comment....

Yes! At some point, quantum barrier-tunnelling effects will wash out any advantage of making the junctions smaller. Quantum computing will revolutionize the handling of data, but junctions will still have a minimum size, due to the fact that, below a certain scale, electrons cannot be stopped from crossing over barriers into places where they are not wanted. The question is: how small can the junctions be made and still serve as an effective barrier against quantum effects?
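The commenter's point about tunnelling can be made concrete with the standard rectangular-barrier approximation, where transmission falls off as exp(-2*kappa*d) with barrier width d. This is a textbook sketch, not a model of any real transistor; the 1 eV barrier height and the widths chosen are illustrative assumptions:

```python
import math

# Electron tunnelling through a thin rectangular barrier (WKB-style
# approximation): T ~ exp(-2 * kappa * d), kappa = sqrt(2*m*phi) / hbar.
# Physical constants are standard; barrier height/widths are illustrative.

HBAR = 1.054_571_8e-34   # reduced Planck constant, J*s
M_E = 9.109_383_7e-31    # electron mass, kg
EV = 1.602_176_6e-19     # joules per electron-volt

def tunnel_probability(width_m: float, barrier_ev: float = 1.0) -> float:
    """Approximate transmission probability through a rectangular barrier."""
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR
    return math.exp(-2 * kappa * width_m)

# Leakage grows exponentially as the barrier thins:
print(tunnel_probability(1.0e-9))  # ~1 nm barrier
print(tunnel_probability(0.5e-9))  # ~0.5 nm barrier: orders of magnitude leakier
```

Because the exponent scales linearly with width, halving the barrier does not double the leakage; it raises it by orders of magnitude, which is why shrinking junctions eventually stops paying off.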

All content (phone reviews, news, specs, info), design and layouts are Copyright 2001-2015 phoneArena.com. All rights reserved.