This is what the death of Moore’s law looks like: EUV rollout slowed, 450mm wafers halted, and an uncertain path beyond 14nm

There have been a number of events over the past few weeks that collectively point to serious problems ahead for the semiconductor market and all the players in that space. While some companies will be impacted more than others, the news isn’t great for anyone. The entire economic structure that was supposed to support both Intel and the major foundries as they moved to next-generation manufacturing technologies, such as 450mm wafers, extreme ultraviolet lithography, and 20nm CMOS, is on the verge of coming apart.

450mm wafers and EUV

18 months ago, TSMC, Intel, and Samsung made headlines when they all poured money into ASML’s efforts to develop 450mm wafers. These were major announcements at the time and they signaled that all of the major logic manufacturers were on the same page when it came to 450mm wafer development. Then, in December, came news that ASML had hit “pause” on this project. Intel’s 450mm installation at Fab D1X is reportedly on hold, as is the company’s Fab 42 in Arizona.

The CEO of Applied Materials, Gary Dickerson, has stated that the 450mm wafer timeline “has definitely been pushed out from a timing standpoint.” That’s incredibly important, because the economics of 450mm wafers were tied directly to the economics of another struggling technology — EUV (extreme ultraviolet lithography). EUV is the follow-up to the 193nm lithography that’s currently used for patterning wafers, but it’s a technology that’s spent over a decade mired in technological problems and major ramp-up concerns.

TSMC’s talk on EUV at SPIE wasn’t kind – Image courtesy of EETimes

One of the single greatest problems is source power. To put this simply — no one, including ASML, has yet demonstrated an EUV tool capable of reaching anything like the necessary power concentrations or of sustaining production volumes. Instead, we’re stuck at the red dot shown above. The enormous costs of shifting to EUV and 450mm wafers were meant to be partly offset by making the jump at the same time.

That might not make sense at first, but remember — EUV was expected to reduce lithography costs by allowing manufacturers to move away from expensive double patterning. The high cost of 450mm wafers and equipment would be offset by superior economies of scale from the larger wafer sizes. 450mm also offered less loss at the edge and better throughput in terms of wafers per hour.
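The edge-loss argument is easy to check with a back-of-the-envelope model. The sketch below uses a standard first-order gross-die approximation; the 100mm² die size and 3mm edge exclusion are illustrative assumptions, not foundry data:

```python
import math

def gross_dies(wafer_diameter_mm: float, die_area_mm2: float, edge_exclusion_mm: float = 3.0) -> int:
    """Rough gross-die estimate: usable wafer area divided by die area,
    minus a first-order correction for partial dies lost at the wafer edge."""
    radius = wafer_diameter_mm / 2 - edge_exclusion_mm
    wafer_area = math.pi * radius ** 2
    die_edge = math.sqrt(die_area_mm2)
    # Edge loss is roughly proportional to circumference / die edge length.
    return int(wafer_area / die_area_mm2 - (2 * math.pi * radius) / (math.sqrt(2) * die_edge))

dies_300 = gross_dies(300, 100)   # a hypothetical 100 mm^2 die on a 300mm wafer
dies_450 = gross_dies(450, 100)   # the same die on a 450mm wafer
print(dies_450 / dies_300)        # ~2.36: more than the raw 2.25x area ratio, thanks to lower edge loss
```

Because the edge-loss term grows with the circumference while usable area grows with the square of the radius, the bigger wafer yields proportionally more good dies — which is exactly the economy-of-scale argument for 450mm.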

EUV was expected to bring lithography costs under control

But if EUV doesn’t debut as a 450mm-only feature, then one of the major reasons to upgrade goes down the tubes — and the more time goes by, the harder it gets for EUV to catch up. Without good EUV delivery, 450mm wafers may never make much sense. Samsung is reportedly balking at taking delivery — it’s not willing to risk losing market share to other memory manufacturers if a 450mm installation drive means less 300mm production.

The death of Moore’s law

At the SPIE Advanced Lithography conference at the end of February, a group of lithography engineers — men and women who have spent their careers pushing the boundaries of Moore’s law — toasted its death. This is the economic reality we predicted 15 months ago — flat scaling at 20nm compared to 28nm, and only marginal improvements predicted thereafter.

The video is meant to be funny, but the point isn’t being argued any more. Moore’s law is no longer expected to deliver improved transistor cost scaling at or below the 20nm node. It’s important to understand how that interacts with the concerns about EUV and 450mm wafers. For decades, it’s been a given that GPU and CPU transistor counts would increase every generation, and that this would be economical because increased density allowed for a cheaper cost per square millimeter and more chips per wafer.

If per-transistor costs rise, higher transistor counts suddenly become a liability. It’s still possible to pack more transistors into each square millimeter, but every increase in density then raises costs instead of reducing them.
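To make that inversion concrete, here’s a toy model with hypothetical numbers (not actual foundry pricing): the new node doubles density, but double patterning pushes per-area cost up by more than 2x, so each transistor ends up costing more than it did on the old node.

```python
def cost_per_transistor(cost_per_mm2: float, transistors_per_mm2: float) -> float:
    """Cost of one transistor given area cost and transistor density."""
    return cost_per_mm2 / transistors_per_mm2

# Hypothetical old node: normalized $1.00/mm^2, 10M transistors/mm^2.
old = cost_per_transistor(cost_per_mm2=1.00, transistors_per_mm2=10e6)
# Hypothetical new node: density doubles, but double patterning
# pushes the per-area cost up 2.2x.
new = cost_per_transistor(cost_per_mm2=2.20, transistors_per_mm2=20e6)

print(new > old)  # True: each transistor now costs *more* on the denser node
```

Once that ratio flips, shrinking stops paying for itself — which is the scenario the lithography engineers were toasting.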

This is where 450mm wafers and EUV were supposed to come in. EUV relieves the need for double patterning and the tremendous additional costs that entails. 450mm wafers offer manufacturers far more area to work with, increasing fab productivity and boosting economics by allowing for much greater efficiencies of production. Improve these factors, and the higher per-transistor costs can be carried for a few more generations.

Instead, TSMC was sharply critical of ASML’s progress on EUV at the SPIE conference, and plans for 450mm wafers may have been delayed another nine years. In this industry, a 2023 roadmap for deployment is a polite way of saying “never,” though ASML itself has been quick to characterize these developments as nothing but a pause [Dutch].

The shrink will happen, lol! What messed everything up? The cooling issue at 22nm will get worse at 14nm, so the industry needs a new savior. The industry didn’t expect to see graphene’s potential, or ideas for real-world use, until much later. Sadly for them, graphene is here, and makers worldwide aren’t going to wait for Samsung, Intel, AMD, IBM, Nvidia, or the ARM allies to make up their minds. Lithography, if I read correctly, will still be required, but change will be needed, and that’s why everything is on hold. If the first generation of those CPUs gets its power via graphene instead of the regular way, new tools will be needed. Since graphene temporarily fixes the cooling issue, I bet the giants see this change long term as mitigating past and present damage, dollar-wise, and as a way to save money in the future.

Taneli123r423r4234

The volume is at low-voltage SoCs with low TDP. Cooling is not a problem; power draw not decreasing significantly from current nodes definitely is.

With so many graphene, nanotubes and optical transistor breakthroughs it is hard to imagine that we already reached the end of moore’s law. Certainly we will have to shift to new technologies but then Moore’s law could continue after a brief transition period.

Joel Hruska

Marcel,

Everyone was betting on 450mm to improve scaling and EUV to improve lithography. Lithography currently accounts for 50% of all cost in manufacturing — it’s a huge driver.

Without improvements in lithography, the benefits of those other technologies are sharply reduced, and they all require years, if not decades, of further study. Furthermore, they do not address the problem of transistor cost on a per-mm² basis.

This is what you have to understand: Up until now, the cost per transistor of a new technology has *always* been cheaper. Always. Now we face a scenario in which the cost of new transistors will be more expensive. That means more transistors = bad, which puts profound pressure on the R&D pipeline in different ways. That’s the antithesis of Moore’s Law.

Marcel Klein

Thank you, Joel, for your clarification. I think I now understand the problem. But allow me a question: Even though Moore’s Transistor Law may be at an end, does that necessarily mean the exponential growth of computing power in general will slow down? Computing power correlates with Moore’s Law, but is it really the only true driver? Many predictions of future developments in AI and robotics depend on the assumption that we won’t slow down.

Joel Hruska

The problem, Marcel, is that the other drivers of Moore’s Law — clock speed and Instructions per Clock — plateaued a long time ago.

This means that the era of meteoric increases is over. Developers can still tweak knobs. Designs will focus on improving the performance of existing transistors, or simplifying layouts, or improving efficiency. These gains will carry us forward, but at a much slower pace.

dc

Not necessarily. It means we aren’t adding more transistors every 18 months AKA Moore’s law. While more is typically better, that isn’t the end of the story. CPU speeds can be dramatically increased using graphene and other tech. That means we won’t have more transistors but clock speed will be a lot faster when these technologies come to fruition.

That has nothing to do with Moore’s law, but it has a lot to do with the speed and power of the CPU. In fact, we may actually see faster/better CPUs in 5 years if we stop trying to add more transistors and go back to working on clock speed by using new materials. Adding more transistors hasn’t had very impressive effects on the personal computer in the past 4 years or so. If we could increase clock speed again, though, then I predict much more impressive performance gains.

Eventually, of course, people will figure out how to add more transistors economically.

Joel Hruska

“While more is typically better, that isn’t the end of the story. CPU speeds can be dramatically increased using graphene and other tech.”

Yeah. That’s just not happening. Graphene is decades away from commercial integration into CPUs.

The “old” version of Moore’s law incorporated Dennard scaling as part of its assumptions and therefore predicted a doubling of transistor density *and* clock speed improvements *and* better power consumption.

You say Intel took its eye off the ball with regard to new materials, but I wonder if you’re aware that strained silicon, high-k metal gate, immersion lithography, III-V semiconductors, 157nm lithography (canceled), and a host of other advances were pioneered and deployed at Intel first.

Graphene is decades away. So are carbon nanotubes. SiGe doesn’t scale well; InGaAs is fragile and has poor manufacturing characteristics; and a host of other problems dog every other alternate substrate.

dc

Well so you say, personally, I think you are way off.

Quenepas

Yeah, whatever Joel said he’s wrong. LeL

Joel Hruska

I don’t say, personally. I’ve read the ITRS reports, spoken to lithography engineers, chatted with the head of Intel’s research fab, followed announcements, and attended events where these issues were discussed.

It takes 15-20 years to move tech from theory to consumer hardware. There’s a lot of theory out there, and no commercialized replacements. We are approaching the point where feature sizes can be described by atomic widths. There is no room to get much smaller or more perfect.

Mainarynox

Yeah I’m beginning to see where Joel is coming from and I can’t say I disagree with him. But god I hope he’s wrong.

Joel Hruska

So do I. That’s what sometimes seems to get lost in all this. :P

This is my industry. This is my career. Who loves an exciting performance story more than a journalist? Digging into these things, finding the hidden story, that’s what we like *doing.*

But I get a bit grumpy when people come back with: “Well, that’s just your opinion.” Because while I can certainly, absolutely, unquestionably be wrong, I’m not in a position to pass judgment on the state of semiconductor research. I go to the people who *are* in those positions, I read what they write in trade journals, I examine predictions, and I look at what’s going on in the entire market. The opinions I channel are the opinions of scientists and professional engineers.

No one will be happier than me if graphene, EUV, III-V’s, and gate-all-around transistors emerge and solve all our problems. I will happily eat the biggest dish of crow anyone can find. But as near as I can honestly tell, that’s just not going to happen.

James Hedman

Who cares about your career? Do something else if it no longer suits you.

Joel Hruska

James, the point of the comment was that I have every interest in seeing a return to the good old days of scaling — not a request that the universe reshape the laws of physics to suit my personal professional choice. :P

James Hedman

Meh on scaling. I just plugged in a 20+ year old 25MHz Mac Quadra 700 that was sitting around in the garage and had been upgraded at some point to Mac OS 8. The text editor was downright snappy in its performance, and the web browser can actually still render a lot of static web pages A-OK.

Which makes me think that faster processors just lead to bloated code.

Joel Hruska

Sure, to some extent. Faster processors also lead to languages that are easier to program and allow for greater flexibility and scope. There’s a reason why we started with machine code, then moved to assembler, then to languages like C. Java, HTML5, Perl, Ruby — these are languages that offer advantages that didn’t exist in the languages of 20-30 years ago. They absolutely trade some overhead for flexibility and ease of implementation.

The thing is, nobody wants to go back to the bad old days when the programs you ran had to be hand-tuned for incredibly specific architectures and couldn’t be recompiled to run on X or Y platform with the push of a button. I am not a programmer, but I’ve had enough programming to know that the modern world, while still cantankerous in spots, allows for a great deal of flexibility that simply didn’t exist decades ago.

Final note? One of the reasons that Intel built an out-of-order execution engine into the Pentium Pro back in the 1990s is that they realized programmers were never going to be great at writing ideal code. Instead of expecting people to do all the work of handing proper instructions to the CPU, they designed a chip that could do its own re-ordering on the fly.

Worked out pretty well.

James Hedman

Well I am a programmer and have been doing so since 1972 and I think it has turned out very badly indeed. Current programming languages suck balls and Java, XML, and C++ are uselessly complex abominations as is the incredibly ugly architecture of Intel processors and their baroque and wasteful multi-pipeline architectures. I hope hitting the Moore’s Law wall puts them out of business.

Software design should drive hardware design, NOT the other way around as we have now.

Give me LISP 1 any day of the week.

PS – Intel started the whole out-of-order execution and multiple-pipeline scheme because they never knew which result a boolean comparison would return or when a loop was going to terminate, so they hedged their bets by blindly executing multiple branches, not because of any programmer’s inability to write “ideal” code.

Joel Hruska

James,

Intel engineers tell that story differently, I’m afraid. I am not equipped to argue it with you, either in age (I was 12 at the time) or experience. I am not a programmer and I’m not an Intel engineer — much less an Intel engineer during the early 1990s.

However, I’d say this: It seems obvious that the concept of superscalar architectures, large caches, and out-of-order execution “won” the CPU race for a reason. I suppose you could argue that they won because they enabled lazy programming. You might even have a point. It’s certainly true that we relied upon clock speed and Moore’s law to deliver performance gains — where was the impetus to optimize when Moore’s law would drop a 40-50% performance gain in our laps 18-24 months later? For that matter, why spend time optimizing for the 486 in 1993 if you knew the Pentium would be out by 1995?

Consoles are an interesting example of how learning the in-depth capabilities of an architecture allows for much-improved performance over time. Compare launch titles on the Xbox 360 (2005) and PS3 (2006) against the games that came out in 2013, and you’ll swear you’re looking at two different generations.

But now, well, you may get your wish. ;) I don’t see anyone rolling back the clock to programming in x86 assembly, but the burden is definitely on software to carry us forward.

James Hedman

I’d say it was more of a case of predatory and monopolistic pricing and being a half step ahead of everyone else in lithography. As for the era before the 1990s: the guys who came up with the idea of a segmented memory architecture and the little-endian/big-endian nonsense should all be burned at the stake.

As a journalist you must have realized by now that the superior architecture doesn’t always win out don’t you? Just look at Windoze. It’s NEVER been any good.

Joel Hruska

“As a journalist you must have realized by now that the superior architecture doesn’t always win out don’t you?”

I started paying serious attention to CPU tech in 1999 and was writing by 2001. So I had a ringside view to that party. ;) Clearly the best man doesn’t always win.

But I disagree that Windows has never been “any” good. I’m willing to cop to a certain amount of familiarity bias, but I’ve owned Macs and learned Linux. Both of them are certainly capable. Neither ever drove me to want to switch, even after I was fluent in them. I know there are OS evangelists; I expected to be wowed by OS X. I wasn’t. It was a different way of doing things with different characteristics, and that’s about it.

But when we talk about things like cache structure, out-of-order execution, and superscalar design, I think there’s a reason we see ARM moving toward these ideas with its higher-end chips. No one seems to have come up with a better way of implementing a CPU design in hardware to create a superior software solution.

James Hedman

Unix is a multi-user, multi-tasking OS. MS Windows is only multi-tasking and was never reliable until NT 3.51. The Mac has always had the best-looking and best-behaving user interface, at least until they started to cruft it up with each new release. The new one is truly atrocious, with a flatness and fatness like some sorry-ass cell phone. I’m seriously thinking of going back to a command-line mail client that doesn’t put everything about my system in the message header.

Beats punch cards I suppose. It doesn’t look like I’ll live long enough to get my flying car though. ;-)

King Rocker

Say what? Dumby, what world do you live in? I’ve hated Microsoft in my time, but I did myself a favor and tried the alternative. Linux, Mac… I came back RUNNING… Oh the Linux is so stable! (really? put a GUI on it and watch it crash every hour) Oh, the Mac is so fast? (yeah, no one has such fast processors… oh wait we’ll switch to Intel and double the power). Wake up and smell the coffee, James.

Shamoy Rahman

That’s why I will be the Graphene Semiconductor Engineer of the future. :D I’m gonna start a kickstarter when I’m 18 years old to fund this huge dream I have to change the world with semiconductors and technology when I’m older.

notpoliticallycorrect

God I hope he’s right. In fact I hope none of the alternatives pan out… We need a hard lesson that nothing continues forever, and computing is the poster boy for perpetual exponential improvement.

I say this as a person who loves computing and computer science. We’ve completely lost our heads in figuring out what the point of all this advancement is. It’s time we thank our lucky stars that scaling was so smooth for so long, and start allocating economic resources sanely. De-emphasizing the ridiculous share of attention taken by the semiconductor industry and putting it into material efficiency might be a start.

That’s the best case scenario from people who believe graphene is the Next Big Thing. Even the people who believe graphene will conquer the semiconductor industry aren’t predicting it to happen in less than 11 years.

disqus_5WNzR6XWBG

Hmm, so as you say within 50 years we’ll get a 30x improvement. You say graphene & other technologies are decades off. So I’d consider the next 50 yrs a “lull”. 50+ years from now, do you think that with things in deep theory today becoming commercialised in the future we might see a resurgence in Moore’s law or whatever is similar/applicable?

Joel Hruska

Impossible to say, but I suspect not.

Specifically, I suspect that whatever alternate means of computing we use (spin, magnetic charge, measuring electron positions) will be confined to huge labs and expensive installations. You can do quantum computing today, but not without a hefty supply of liquid helium.

I think we’ll see a bifurcation in which handheld devices and consumer equipment get faster much more slowly than government equipment and ultra-expensive commercial hardware. No scaling lasts forever. I think we’ve hit the slow-growth phase of computing for the foreseeable future.

But I could be wrong. I am not a futurist; I make no claim to be able to accurately predict what the world of 2064 will look like. I’ll be lucky to still *be* here in 2064.

Moore’s law is only part of the CPU equation. Graphene does not fix Moore’s law. Graphene allows transistors to run faster. It doesn’t let you put more transistors on the die, at least not that I am aware of. Moore’s law will die if we don’t add more transistors every 18 months (doubling the number). That doesn’t mean future CPUs won’t be better, more effective and faster, but it does imply that we have maxed out the number of transistors, at least for now.

http://prettycoolgraphics.blogspot.com/ Emanon Suomynona

Flat 2D chip construction driving Moore’s law was going to hit an end at atomic scales in a few years anyway. Graphene reduces heat generation, making 3D volumetric chips more viable. The only question is whether some alternative low-cost manufacturing can come in time; if it can, then 3D volumetric chips can happen, and reduced costs would come from improving manufacturing tech.

Joel Hruska

3D manufacturing will happen long before graphene.

Paltu

Can someone briefly explain what we mean by “3D manufacturing”, here?

Is it simply extending flat silicon wafers into an extra dimension? I thought the chips already had some depth to them.

Joel Hruska

This gets a little complicated. Obviously the chips aren’t literally 2D; they have depth. There are metal layers within each chip, and there are copper interconnects routed across it.

There are multiple ways to talk about 3D manufacturing, and all are accurate. Each describes a different attribute.

First and simplest is the idea of stacking one chip directly on top of another and connecting them with wires along the outside edges. This is called PoP, or Package-on-Package.

The second is the 3D transistor, or FinFET, in which a raised fin bisects the gate. These 3D transistors are still built on a 2D planar chip. The entire industry is moving to FinFET designs over time; Intel announced its 3D fins in 2011 and debuted them with the launch of Ivy Bridge in 2012.

The third type of 3D manufacturing is the use of through-silicon vias, or TSVs. Remember how, in the first example, those 3D chip stacks had external wiring around the edges to connect them? TSVs move that wiring directly into the stack itself.

Each of these approaches can be called “3D” according to some definitions of the term, but as you can see, they don’t all mean the same thing. Building a chip with 3D transistors (FinFETs) isn’t the same as using TSVs to stack multiple dies directly on top of each other.

VirtualMark

If one thing has remained constant through history, it would be that we make progress. I don’t see that slowing down in the long run – this is merely an obstacle.

Sure, Moore’s law may have ended, but computer performance will continue to improve over time. Just maybe not at the rate we experienced in the 80s and 90s.

Joel Hruska

VirtualMark,

Sure, in the big picture, we’re still going to improve. Best rate I’ve read is that computers will be 30x faster in 50 years. Compare that to the supercomputer performance rankings from 1995 to 2010, where the top supercomputer of 2010 was 15,100x faster than the top computer in 1995.
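The gap between those two rates is easier to feel as compound annual growth. A quick sketch, using the 15,100x and 30x figures quoted above:

```python
def cagr(ratio, years):
    """Compound annual growth rate implied by a total improvement ratio."""
    return ratio ** (1 / years) - 1

past = cagr(15_100, 15)   # top supercomputer, 1995 -> 2010
future = cagr(30, 50)     # projected improvement over the next 50 years
print(f"{past:.0%}/yr vs. {future:.1%}/yr")  # roughly 90%/yr vs. 7%/yr
```

In other words, the historical rate was near-doubling every year; the projected rate is closer to a tenth of that.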

(This graph is running faster than it should be; we aren’t expected to break exaflops until after 2020).

We need those advances to keep occurring if we want to model the human brain, continue our research into aging and fighting cancer, and model the structures of the universe. We need it for things like climate change and a hundred other tasks.

Ulrich Werner

Speaking in Kurzweil’s terms of technology cycles, we are probably seeing the mature end of silicon: the technology is cheap, reliable, and ubiquitous, and it has started to plateau.

For new exponential growth, there need to be new technologies. It’s probably going to be a new Wild West of architectures and chip designs very soon where manufacturers start to produce chips with new designs/technologies for (expensive) niche products to “see what sticks”. The consumer market will probably stagnate for a while, but that is to be expected.

The 30x-in-50-years figure is laughable. If you look at similar estimates from the 60s/70s about the performance of computers 50 years down the road, you can see how wrong they were, and the same will happen now. Projecting existing technology into the future is normally done linearly, and it’s usually a flawed exercise.

Joel Hruska

That 30x in 50 years prediction was made by Intel’s former chief architect and by the current head of DARPA’s exascale computing initiative.

It is a fallacy to assume that because a trend looks a certain way in the past, it will continue to look that way in the future. Furthermore, it’s telling that no one in semiconductor manufacturing, research, or development agrees with you.

No one — not one single researcher or engineer I have ever spoken with — believes that we are on the cusp of the kind of discovery that would be required to drive computing performance at the levels we previously enjoyed. No known technology or group of technologies is forecast to enable such scaling. Not graphene. Not carbon nanotubes. Not quantum annealing. Not HSA, not many-core, not TSX, not 3D transistors, not chip stacking, not TSVs, not III-V silicon, not a switch to SiGe, not silicon photonics.

grand_puba

Joel, remember that “nobody expects the Spanish inquisition”…

Ulrich Werner

And no one (except indeed for Moore) predicted the kind of scaling we saw. I agree with what you say, also none of the researchers I know have a candidate for the next step necessary to uphold Moore’s Law in its present form. But that’s not the point I wanted to make. (NB: It’s also a fallacy to assume that a trend will be linear because it looks linear on small timeframes.)

I’m sorry I sounded arrogant. Mea culpa. I agree that there will be a lull and a temporary stop to Moore’s Law. And it will take probably quite a while until we pick the current pace up again. But just because we cannot see a new technology now, doesn’t mean there won’t be one.

Joel Hruska

I’ll tell you why I’m dubious of this: Because nothing in the history of mankind has ever scaled like semiconductors did.

Consider buildings. For thousands of years, the Great Pyramid is believed to have held the title of tallest structure in the world. We started to surpass it in the Middle Ages with some cathedrals (it depends a bit on how you count “structures” versus “buildings”). We didn’t start seriously punching holes in the record until the late 1800s and the invention of skyscrapers.

In 1947, we built the first transistor. Let’s pretend that’s the equivalent of building the Brooklyn Bridge. By 2014, we had achieved the rough equivalent of building a Brooklyn Bridge to the Moon (a feat which would require a bridge some 200,000x larger than the current model).

I don’t know what the exact ratio is, but transistors are the only technology we’ve ever found to scale like that. Nothing else has. Nothing else does. Food supplies that feed 1 person can’t be scaled upwards to seamlessly feed 200,000 people. You can’t build a building 200,000x larger. You can’t make a room 200,000x cooler or 200,000x hotter. We can’t make a plane 200,000x larger or 200,000x smaller. An electrical system that serves a hamlet of 2,000 can’t just be scaled to feed a nation of 400 million.

Only transistors have obeyed this scaling. And so I’m fundamentally dubious that we’ll find something to continue scaling a technology that — in all of human history — has been utterly unique.

DWisehart

I think Joel is right on here: we have never seen this sort of scaling before. When I heard, in 2000, about the creation of single-electron transistors–transistors you can turn on or off with a single electron–I realized that the day is coming when we will have to use something other than the controlled movement of electron charge if Moore’s law is to continue: http://web.mit.edu/physics/papers/kastner_885.pdf

All of the comments that propose technologies that rely on electron mobility to create logic will give us just another generation or two of advances in user visible performance.

The question to me is, what will be next? I think photonics over quantum computing, but there are a lot of hard questions there: how do you build a photonic capacitor? Charge storage is important in any flavor of current logic gate design. Optical communications is still showing impressive gains in bandwidth and latency. Perhaps someone will figure out a way to create an ALU based on the principles of continuous carrier communications. Whatever the path, if Moore’s Law is to continue we have to look beyond electronics.

Guest

I’m really tired of Disqus eating my posts.

Long story short: Nothing in human history has scaled like transistors. Ever. I don’t see a reason to conclude that this scaling will go on, because it’s an aberration to start with. It’s like being able to build the Brooklyn Bridge in 1947 and then build the Brooklyn Bridge to the moon in 2014.

grand_puba

No, but they landed on the moon 22 years later. They achieved the same objective with a different set of technologies which is exactly the point of the conversation.

Joel Hruska

*laugh* My point is not that the best way to reach the Moon was to build a bridge. My point is that we have no other examples in history of a technology as flexible and incredible as semiconductors. We can’t build steel 200,000x stronger. You can’t squeeze 200,000x performance out of a rocket engine.

When we talk about scaling in any other field, we talk about differences of 10x to maybe 100x. If I told you that we can make steel that’s 10x more resilient, or stronger, or resistant to shattering than the steel we made 150 years ago, you’d believe it. If I told you we’d found some ways to make new compounds 100x stronger than what we had 150 years ago, in some edge cases, you’d believe that.

But if I told you we’d found a way to make steel so strong that a millimeter of modern material could armor a modern battlecruiser more effectively than 10 feet of Krupp armor? You wouldn’t buy it. And that level of scaling would still be “just” 3,048x stronger.

dc

Lots of things are fallacies. Just because something is a logical fallacy doesn’t mean it won’t happen. Humanity went hundreds of thousands of years without metal tools, but then it happened. We have seen accelerated growth in science during the past 200 years. Is it a fallacy to think this will continue? Yes, it is a logical fallacy to point to the past to prove the future. I cannot say absolutely that anything will ever be invented again. It may well be that humanity has advanced as far as it will ever go and no future inventions will ever come to fruition.

Is it probable that it will continue, though? That’s a completely different argument. With the amazing results we have had over the past 100 years, I feel confident that things will continue in stride. My point is not based on absolute logic, but rather on a posteriori, evidence-based knowledge.

Using absolute logic to prove a point, proves very little, unless you can establish absolutely that something will or won’t happen.

Welcome to logic 101.

Moshan

Joel wrote:

“It is a fallacy to assume that because a trend looks a certain way in the past”

And Joel, you yourself are committing a fallacy, the fallacy of the appeal to authority.

Oh, someone must be wrong because someone in a position of authority says so! Of course, this is an attractive fallacy: they must be knowledgeable because they have attained a high position.

The problem with this fallacy is that there are plenty of instances where the conventional wisdom of the intellectual elites was dead wrong.

Darwin’s ideas were mostly rejected for decades and only slowly accepted into the mainstream. As one of his contemporaries said: progress comes one funeral at a time.

There was more or less agreement across the Western economic elite that austerity in Europe was not a bad thing in 2009/2010, and many felt the same way about the U.S., even if Obama (luckily) didn’t. Now more or less everyone agrees that austerity was a horrible thing, even if Europe’s financial difficulties are more complicated than that (no fiscal union, the way the euro was set up, structural issues, etc.).

Furthermore, as Milton Friedman stated, economists are horrible at predicting the future, even Nobel-level economists. And you could try to deflect this by saying this is a problem with economics and doesn’t apply to tech. But look at how people in the 50s overestimated where we’d be in 2000. They then underestimated what progress would look like over the next 20-30 years in the 70s and 80s, because they overlearnt those lessons from the 50s and didn’t want to be overoptimistic.

It is likely that we’ll see a slowdown in Moore’s law over the next 10, maybe 15 years. But trying to predict the next 50 years and saying, as you do, with even a whiff of authority that 30x is a good guess, when it is completely impossible to project this far into the future (it has been attempted over and over and has failed every time), is laughable. If you think This Time Is Different, then you should have a better argument than “oh, I can mention a lot of prominent people who agree with me.”

I could have mentioned a lot of prominent people who agreed with the notion that Galileo was an idiot and a heretic back in the 1600s. Dramatic example? Maybe, but it should help you understand in a more visceral way how merely appealing to authority isn’t a very intelligent thing to do in the long run.

Can I guarantee we’ll see the same kind of progress over the next 50 years? Here’s the honest answer: nobody knows. But to assume that you can give an answer, as you do, is laughable. And no, appealing to authority isn’t going to help you.

le sigh

he very clearly states that it’s a “prediction” and not a definitive answer or absolute truth. you even quoted him saying, “It is a fallacy to assume that because a trend looks a certain way in the past…”. saying that a prediction from someone with extensive insider knowledge of the subject at hand is probably more correct than an opinion based on conjecture isn’t an appeal to authority.

conservativemind12

Austerity worked pretty well in Britain, which now has the fastest-growing economy in the Western world. Austerity is a stupid word anyway; it basically means living within your means, and what is horrible about that? I’ll tell you what’s horrible, though: the cancer of rampant socialism.

bertgoz

I have just come across this old discussion thread while reading the reactions to Intel’s announcement of its new 14nm chip. I would like to add my grain of sand to Joel’s comment about the outlook for the future of the electronics industry.

There may be a candidate that will allow scaling to continue further: memristors. Together with cutting-edge material nanoengineering, memristors should allow continued scaling in geometry and speed, with power scaling down in tandem. Eventually it may be possible to take advantage of the optical properties of some memristive compounds and step into hybrid optical-electrical computing. No wonder HP is betting everything on the development of this technology.

Joel Hruska

Memristors are an awesome technology but are not yet proven in logic. That’s something a lot of people fail to grasp when it comes to the gulf surrounding various uses for a technology.

Basically, the gap between building simple regular structures, communication structures, or entire chips is really really huge. Memristors have the potential to change the industry, but they’re going to make their debut in storage first. We’re still years away from a mainstream *logic* solution in shipping hardware — possibly as much as a decade even in a best-case.

Only 30x in 50 years? Obviously you made that up. Let’s not confuse each other more than necessary. My best guess is that computer chips 50 years from now will be as far beyond today’s chips as today’s are beyond those of 50 years ago! Huge parallelism and very different algorithms will give future computers almost brain-like powers. 50 years in the future is like 100 years in the past.

Joel Hruska

Much more advanced? Sure. My best prediction is still 30x faster in 50 years.

prikko

You don’t think there are going to be major advances in quantum computing within that time frame? Or perhaps in making computers work in a more non-linear fashion, like the brain? (Of course, non-linearity might make computers as fallible as humans, which may not be a good idea at all!)

VirtualMark

Yeah I totally agree, I don’t think that we can ever have enough computing power. Scientists will always have something to model, and the models tend to require more computing power as time goes on.

It’s a shame that things are slowing down, but I’m really holding out hope for a breakthrough technology. My money is on finding a better material than silicon, as it would seem that the speed limit has been reached for this material.

Joel Hruska

Here, you have to understand the complexities of “better.”

We know of many materials with superior performance to silicon: III-V semiconductors, silicon-germanium (SiGe), indium gallium arsenide (InGaAs). And those are just a few. Some of them, like SiGe, are already used in special cases.

The problem is, while all these materials offer superior performance to silicon, there are other major problems that make them unsuited to mass production and huge scaling. If it costs 10x as much for SiGe as silicon and it’s impossible to produce in the same volume, that doesn’t help anyone.

We haven’t found a replacement substrate that’s cheap as sand, easily grown, easily doped, *and* has all the other advantages of silicon. That’s what’s choking growth.

Asdf Ghjk

The best rate you’ve read is 30x faster in 50 years? This NVIDIA presentation shows how exascale can be achieved by the next decade, and that means more than a 30x speed bump. Note that the majority of the perf. gain comes NOT from transistor scaling.

Maybe it isn’t right now, but Intel, NVIDIA and several others have all made predictions of exascale computing by 2020 (actually it was 2018). The process size was projected to be 10nm at the time and maybe that won’t be possible by then, but it was all about reducing overhead and locality to get things down power-wise. Meaning even if process tech plateaus, if the system architects manage to get things done the way they say they are, we will still be somewhere on the exascale order of magnitude by 2020.
This pic is what it is all about:

This (and a several-hundred-page DARPA report) is why I doubt it. Though I should be clear that hitting exascale at *some* power budget is certainly possible; it’s the overhead and power consumption that are crippling.

Best case seems more like 2022.

fairchij

It may be worthwhile to explore the potential offered by this company: POET Technologies (TSXV: PTK). They have achieved monolithic integration of both active and passive photonics and electronics on a single substrate using III-V materials. The work has been funded through US Defense Department SBIR grants (the same start that QUALCOMM had), and they have basically been in stealth mode, working at their UCONN lab and under joint development with BAE Systems. BAE has an interest in the military applications. They have had many successes and are now working with unnamed industry partners.

The key differentiator is that the inventor, Dr. Geoff Taylor, UCONN optoelectronics professor and chief scientist for POET, has designed this technology to be compatible with existing foundries. They have successfully transferred the POET design rules to BAE’s Nashua III-V foundry and are having significant success. Currently they are reducing the electronics side of POET to 100nm at the request of their development partner(s), as a bolt-on replacement for silicon CMOS. A new patent was just issued to POET, which has not been publicly announced, and which reveals among other things the capability of POET to produce single-electron transistors (quantum computing).

The optical thyristor, in addition to its multispectral detection and lasing functions (the reason the US Defense Department has funded this technology), can be configured as memory. It can support SRAM, DRAM, and NVRAM concurrently, which allows for massive simplification at the system level due to the elimination of NVRAM backup/recovery. This memory claims much lower bit error rates than silicon-based memories (several orders of magnitude).

They have not yet named their main commercial partner or the POET Development Alliance members who are assisting them in the preparation of technical development kits.

I believe you will be hearing much more about this company in the very near future.

100nm. It is III-V and will be at least 10x faster than current silicon because of the higher carrier mobility of III-V materials, so it does not have to be small to be fast. The expectation is that they will be able to reduce POET to silicon transistor densities; the optical side, however, requires 500nm, and of course photonics are multitudes faster than electronics. What Geoff Taylor has done is produce a P channel in III-V for the first time, which of course is required for complementary logic. The N channel can operate at speeds in excess of 300GHz; the P channel is much slower, but still significantly faster than silicon at current nodes. III-V materials generate a fraction of the heat, so there is no requirement to throttle down the speed to avoid melting the chip, as there is with silicon. Because they produce minimal heat, they use a small fraction of the energy that silicon uses. This is a very different technology, using quantum wells.

No, a magical heretofore unheard-of technology with supposed enormous improvements and a mesh of photonics and III-V semiconductors will not emerge triumphant and usher in a new world of prosperity and performance scaling.

That doesn’t mean silicon photonics are bad or stupid. Doesn’t mean III-V semiconductors or InGaAs are bad ideas for research. It means that silicon photonics, III-V, and InGaAs do not represent a solution.

According to the company website, POET technology is 10-100x faster, easily integrated into CMOS, flexible, and requires no expensive retrofit.

Do you know how I know that’s BS?

Because if it actually *worked* as advertised, Apple, TSMC, Intel, Samsung, ASML, Applied Materials, GlobalFoundries, IBM, UMC, or the mainland-China conglomerate (forget their name) would have snapped the tech up *already.*

If Intel thought a company already had the secret to advancing Moore’s law and returning to old scaling, they’d pay tens of billions for it. Few prices would be too high; whoever owned that tech would own the next 20-30 years of semiconductor design.

RobVanHooren

funny how Mansfield (AAPL semiconductor guy) vanished onto some “Special Project” within 48 hours of them listing on TSX, and he hasn’t been heard from since. let’s see whether your gut call of BS changes in the next 9-12mos.

I didn’t say the technology was entirely BS. I said the *claims* were BS. As in, the claims that this represents the “solution” to the industry’s problems.

I’m not disputing anyone’s expertise or abilities. I’m really not. This is the gap between what marketing promises and science can deliver.

There are. no. free. rides. The average long-term lag between new technology announcements and deployment in shipping silicon is 15-20 years. It has been 15-20 years for the entire stretch from the 1960s to the present day.

I know for a fact that you can build semiconductors with InGaAs. I know you can build semiconductors with III-V. But building *logic* circuits out of optoelectronics is an entirely different question.

What I *dispute* is the idea that this technology will emerge as a deus ex machina, a magic turnkey solution that just happens to solve everyone’s problems, integrates into existing silicon, and provides a seamless transition to an alternate driver of semiconductor performance.

I dispute this because there is zero evidence to suggest any such miracle approach exists. Anywhere. And I have read the literature on these technologies. Having patents and incredibly intelligent people is not the same as having a seamless solution to the most pressing problems of the modern semiconductor industry.

RobVanHooren

ah, Saul … I understand. Don’t worry; that’s Damascus there, just three or four days up the road.

Joel Hruska

This is not a matter of faith, Rob. It’s a matter of physics. And physics, in the words of Leonard Nimoy, “Is a bitch.”

(Not to mention the fact that no one is building or proposing to build logic circuits out of optoelectronics. Even POET’s technology links don’t mention it).

I’ve read the slide deck. Here’s something you may or may not know — I get pitches like this on a weekly basis. About once a month, someone claims to be on the verge of breaking the semiconductor bottleneck and reinventing a new paradigm for Moore’s law. (POET hasn’t contacted me).

Sometimes I hop on the phone with companies and write up the announcements but always with an eye towards the difficulty of the technology and the size of the claim.

POET is a research firm, not a company planning to bring the technology to market itself. It lists acquisition as one of its paths to monetization, and if it ends up acquired by a foundry or major semiconductor designer, I agree that’ll be a shot in the arm.

In these realms, however, I am a professional skeptic until one of two things happens:

1). A technology shows up on ITRS roadmaps or equivalent documents with strong independent support from multiple vendors as the agreed-upon method for transforming the future of the industry.

2). Someone like Intel, Samsung, or an equivalent acquires the rights to a technology and announces plans for a major rollout.

#1 is better than #2. But the slowdown of EUV and the pause on 450mm wafers are proof that not even #1 is immune to delay.

Joel Hruska

For some reason your comments are being flagged for moderation and I can’t respond to them. Maybe something to do with the link formatting? I can’t tell.

I am not hurt by not being “on the radar.” Trust me. However, I will tell you what I’ve said before: I am a firm believer in the need for long-term solutions, and if POET turns out to have the key to a long-term solution, that’s a wonderful thing.

I will note, however, that being stuck at 100nm is an unacceptable tradeoff for better performance in a microprocessor. You can’t port a modern 20nm chip back to 90nm without a dramatic size *increase*, with knock-on effects for power consumption, device size, signal propagation, and current draw.

It’s not enough to be 10x-100x faster than CMOS. Whatever technology we shift to must enable functionally better chips on equivalent process nodes. In this case, (so long as we’re talking microprocessors), I’d say to call me when POET has hit at least 40nm. Because at that point, it might be worth building high-performance CPUs in a new configuration at a higher node. But 90nm? No one is going back to those sizes for consumer electronics — particularly not when the push is in mobile, where every square millimeter matters.
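To put rough numbers on that size increase, here is a back-of-the-envelope sketch. It assumes (idealistically) that die area scales with the square of the linear feature size; real layouts scale less cleanly, but the trend holds.

```python
# Idealized assumption: die area scales with the square of the linear
# feature size. Real layouts scale less cleanly, but the trend holds.
def relative_area(old_node_nm: float, new_node_nm: float) -> float:
    """Area of the same design ported from old_node_nm to new_node_nm."""
    return (new_node_nm / old_node_nm) ** 2

print(relative_area(20, 90))  # 20.25 -> ~20x the die area back at 90nm
print(relative_area(20, 40))  # 4.0   -> ~4x at 40nm, a far smaller penalty
```

Even under this optimistic model, a 20nm design ported back to 90nm balloons to roughly 20 times the area, which is why a faster-but-larger process only becomes interesting once its node gets reasonably close to the incumbent’s.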

dc

Or maybe CPU performance will improve even faster…..

Moore’s law has become an obsession for Intel which has, to some extent, caused the company to lose interest in finding other ways to make CPUs better. If they can’t add more transistors right now (although I am sure that they will down the road), then they can focus on using new materials which would allow for faster clock speeds. I am sure that Intel intended to do this eventually, but it seemed to keep putting it off and instead focusing on its proven track record of doubling transistor count every 18 months.

VirtualMark

Yeah, that’s what I’m hoping, as it seems that silicon has reached its speed limit. I’d love to see the return of clock speed increases; chips have been stuck at 3-4GHz for years now.

Public Opinion

“If cost per square millimeter holds flat but transistor density rises, that means the cost per transistor has increased.” — That’s not true.

Joel Hruska

You’re right. Fixed that.

Heruka

Exactly right. I’ve explained in my comment below.

Joel Hruska

Heruka, I fixed that phrasing like two hours ago. Check comment above. I’m not sure why you haven’t seen the update.

Wow&wow

Moore’s law should be called Moore’s anticipation; it’s nothing more than one person’s anticipation.
My anticipation about 450mm, ever since people started talking about it, was and still is: “450mm won’t happen.”

Joel Hruska

Moore’s law held steady for something like 50 years. Not a bad run.

pelov lov

Good article, Joel. It highlights the importance of moving to a new model en masse. The reality for the foundries is, whether they’re pure-play or IDM, they all face the same bottlenecks and issues. They all rely heavily on the economies of scale, and consequently they all suffer exponentially when advancements slow down.

“Both companies are moving to FinFETs, even if GF is also doing some work on FD-SOI.”

Wasn’t this canned? Iirc, I thought FD-SOI processes vanished from GloFo roadmaps. They, too, have adopted FinFETs for 20nm and below despite the increase in cost and number of masks that would entail.

Although I stated that IDMs and pure-play foundries are going to suffer for this, I believe that it’s the IDMs that will hurt more. While GloFo and TSMC have the option of expanding outward and increasing capacity // wafer volume, the IDMs are at an inherent disadvantage because competitors don’t want to go to the competition (or can’t, as is the case with Intel). The reality is that there will always be a company out there willing to buy wafers, but IDMs have the added disadvantage of having to ensure that it’s their own chips that are in high demand.

If you’ve been keeping close eyes on developments over the past ~5 years, this isn’t news to you. The writing has been on the wall for a while now. There has been talk of triple and quadruple patterning for 14nm/10nm for at least a couple of years, and hopes were pinned on EUV + 450mm as the savior. Although demand is strong, it isn’t strong enough to warrant a move to 450mm, and certainly not without EUV and a considerable decrease in cost that’s coupled with a significant increase in volume for customers.

Intel closed//paused Fab42 because of weak demand and fluctuating capacity (50-80%? and the mid range of that seems most likely). They still haven’t signed a WSA of any significant volume for 14nm, and 10nm is looking less likely as the days pass. A good portion of their capacity is being given to Atoms that are being subsidized and sold at below-cost just to get rid of them. Given the increase in costs (double/triple patterning) and the stagnation in volume, it’s going to mean higher prices for chips just to keep gross margins from plummeting and R&D money will go down the drain either way.

It seems to me that we’re at a tipping point with respect to foundries and process tech. There’s considerable demand for performance at relatively sane prices. But as traditional silicon approaches its limits, the economies of scale make less and less sense for foundries and their customers. For example, even low-end chip makers like Allwinner and Rockchip would have moved to 20nm if Moore’s law still reigned, but because of the relatively meager improvements 20nm offers at inflated cost, they’ve decided to stay at 28nm. Although EUV and 450mm might have extended silicon’s life a while longer, they would have been but a bandaid on a gaping wound that would sooner or later fester.

I’m actually more interested in the process side now than I have ever been. I think the slowing down of die shrinks and the evening out of processes is going to mean that a lot more emphasis will go on the microarchitecture. No longer can a company just shrink a die and bump clock speeds up by 20%+ and call it an improvement. This is suddenly a whole new ball game.

There’s been no talk of them doing any FD-SOI for other customers, so I’m assuming it’s a specialized deal. Maybe they want to keep a hand in for possible deployment later, but no word on that.

The problems of EUV have been discussed at length before, but Intel’s roadmap predictions and other releases still showed deployment at 10nm. The implication of the current delays is that this could slip further. ASML remains committed to EUV, and to be fair to them, I’m not aware of any other options. Unfortunately, that doesn’t mean EUV automatically happens; it just means no one else has created a cost-effective alternative approach.

The question of whether Intel or the other foundries are at a greater disadvantage in these scenarios depends entirely on how you evaluate several key questions:

1). How much does the PC market shrink?
2). Can Intel maintain a process node advantage and charge commensurately for it?
3). Can Intel exploit its IDM status and the design advantages of same to create meaningfully better products on future nodes compared to its competitors?
4). Can Intel enter the mobile market successfully?
5). Can Intel enter the mobile market successfully and maintain desired margins thanks to the advantages of #2 and #3?

I would argue that the way you answer these five questions essentially determines whether Intel or TSMC/GloFo are better positioned for the current situation.

No one seems optimistic about finding what Google engineers once called a “fundamentally new driver of semiconductor utility.”

pelov lov

I don’t believe anyone’s bitten on STMicro’s FD-SOI. Both STMicro and SOITEC have been waxing poetic about its advantages, and there certainly are some, but even with a plethora of SoC makers there hasn’t been any news of anyone actually using it, or planning to. It appears to be dead in the water.

The road to 10nm involves multiple patterning, so you’re right that there is no cost-effective approach. Whatever we get at 10nm, from Xeons with 3 QPI links down to mass-market Atoms, will surely cost a lot. So much, in fact, that I’m not sure it’ll have mass-market appeal. Either Intel’s gross margins will nosedive or people just won’t buy them. And this applies to everyone making chips at 10nm with triple and quadruple patterning.

– The PC market is likely to shrink until 2018, according to recent estimates. So there’s still a good bit of shrinkage left. (Thanks, Costanza)

– Intel won’t have a choice but to charge a commensurate amount for their 14nm, and in particular their 10nm chips. Costs have increased exponentially on the process side, and they’re footing nearly all of the bill themselves. They’ve actually cut back considerably on their R&D spending as well. Losing Fab42 means they’ve lost volume and consequently flushed R&D down the toilet — about 30k wafers a month of volume.

– This is an interesting one, and one I’d actually agree with. They’re positioned to take advantage of 3D stacking or TSVs before competitors because of the ‘copy exactly’ mantra and the close ties between their chip architects and the bunny suits. But whatever they bring forth in that regard will have low volume and high cost. Broadwell + eDRAM is a perfect example of this.

– No. Their costs, even on lagging nodes, are frankly just too high. Intel has margins to worry about, and mobile is filled with low margin ‘good enough’ alternatives. The volume isn’t there unless you’re Apple or Samsung, and Intel would need both to stem the bleeding of the PC market. Intel’s “contra revenue” Atoms were estimated to be at about $51 per device. This isn’t the path to success.

– Again, I disagree. They’ve got a FinFET + node (half?) advantage on competitors and yet they still can’t produce a product that outperforms its rivals. Samsung is now moving to an in-house architecture and Apple has already made that step. Intel, though they’ve improved with respect to baseband/LTE connectivity, is still no Qualcomm. That leaves them in the second tier, and the margins there aren’t high enough. Furthermore, that process lead isn’t increasing but shrinking and it’s inevitably going to level out. In fact, that’s the conclusion of the article :P

Pure-play foundries have more freedom, particularly in dire times when demand is low. This might sound like a contradiction given that they rely on high volume and high demand, but the reality is that there’s always a certain base level of demand. Samsung has been smarter about this, imo, and they’re taking steps to ensure that they’re flexible by going after the DRAM and NAND markets aggressively. Samsung is losing Apple, but they’re avoiding being a ‘loser’. Intel, on the other hand, has been peddling their 14nm and 10nm nodes to anybody willing to listen. That lure has had no bites.

Joel Hruska

Keep in mind, I didn’t take positions on those questions. I just said the answer depends on them. But I’ll answer you as if I had. ;)

1). I think Intel already charges a commensurate amount for its processors. If they follow their own historic pattern, they won’t ramp 14nm for production until its production costs match the trends they want to see. They have the luxury of doing things that way… for now. Whether they can continue to take such an approach for 10nm is a very open question.

1a). They haven’t “lost” Fab 42 by deciding not to scale it at this point. I agree that it’s not a good sign, but it’s not as if Fab 42 would’ve been in high production in 2014.

2). I don’t think we know how much Crystalwell costs Intel. We know that they’ve certainly demanded a high premium for it. Whether that premium reflects the actual cost or not is open to debate (I’m inclined to think they opted to make an insane killing on the capability rather than being fundamentally crippled by cost structure).

3). I don’t know how much we know about Intel’s lagging node cost. It’s my understanding that Intel doesn’t really have lagging nodes the way TSMC does — it uses older nodes for chipset production, but I’m not sure the company is still running anything above 45nm in any great volume, whereas TSMC is still driving some sales on 130nm.

4). If we assume Intel ships out 14nm at the same time as *companies* start shipping volume on 20nm it implies that Intel has a bit less than a full-node lead. I’d call it 18 months, down from 24 months due to the 14nm slip.

5). You raise interesting questions here with complicated scenarios. Does Bay Trail fail to match its rivals because Intel wasn’t aggressive enough with the core design? Is it because Intel stuck with dual core when it should’ve gone quad for tablets and phones? Is it the lack of an integrated modem?

I suspect that it’s all three, with the mixture depending on the market.

As for Intel “peddling” nodes, the reason I’ve heard for why no one is biting is that Intel’s design rules are extremely strict, and few people are interested. On the other hand, rigid adherence to those design rules is credited as the reason Intel believes its 14nm will deliver superior improvements.

pelov lov

I’ve heard the same regarding Intel’s strict design rules.

If a hypothetical maker wanted to use Intel’s fabs, they’d be making something that would at least somewhat resemble what Intel produces in its own fabs. Copy Exactly works brilliantly for Intel, but neither their fabs, their tools, nor their employees are used to working with third parties. They just weren’t built for it. That philosophy isn’t embedded within them as a company.

Furthermore, they would have to work hand in glove with their competitor, chipzilla. That’s a very dangerous scenario. One that Apple has learned comes with serious consequences and tilts the scale in their competitor’s favor. It’s just bad business, and even worse with Intel’s margins.

Fab42 was meant to be retooled as a 450mm fab in ~2017/2018 after producing 14nm and 10nm chips. So Intel has not only had to reassess their volume (and thus increase in cost), but now they must account for 450mm being stalled indefinitely. Bear in mind the reason that Fab42 was stalled was because they were expecting weak demand. These are multi-billion dollar investments, and every single day they’re not up and running to their full capacity means money is being thrown away.

Joel Hruska

No word on whether Intel canceled the 450mm wafer rollout at Fab D1X. Presumably they’ve got some test capacity still running there. And I agree that with the drop-off in the PC space, pushing Atom into mobile becomes more important except, of course, Atom hasn’t made much *headway* in mobile. Problematic.

I think the fundamental tension here, which goes ungrasped by many people, is that fabs and foundries only invest in new technologies if those new technologies can justify it. It’s not a question of whether FD-SOI + FinFET can work, or whether gate-all-around can work. It’s a question of whether the cost of those improvements will be borne by the market in the absence of per-transistor price scaling.

We know that there are small markets that *will* pay for these improvements, but the size of those markets works against the very cost trends that historically sustained semiconductor growth. Specialty processes feeding specialty spaces never lead to lower prices.

pelov lov

In part, that’s why I’m quite skeptical of FD-SOI and even TSVs and stacking. These are novel approaches to skirting around the complexity issues that traditional die shrinks face. They do add considerable cost, and therefore must be thought of as low volume processes//options available to a small selection of partners at high prices.

You’re spot on regarding the importance of having massive scale and volume for whatever succeeds quadruple patterning when we brush up against quantum tunneling using traditional (costly) methods. Although at that point, even the exotic solution might seem much more favorable.

While there are many ideas floating about, none of them makes economic sense; otherwise they would have had significant investment from the foundry side several years ago. Instead, IDMs and pure-plays alike were banking on EUV and 450mm as the most cost-effective and likely answers, and poured money into both. If neither of those has panned out and funding has been pulled back, how good are the chances for anything else with far less momentum, investment, and research?

Whatever the answer is, it has to trump the untold billions spent on traditional silicon, or complement it better than EUV and 450mm would have. That’s a tall task, with no hope of being answered in the next several years.

Joel Hruska

To clarify — 450mm has pulled back. ASML is still pushing hard for EUV. And there’s still some hope — Chris Mack thinks that taking the focus off litho will encourage scaling in other areas, like etch. 3D transistors will be of use. There’s still the possibility of III-V adoption.

I think part of the question facing foundries below 14nm (because we’re going to get to 14nm no matter what) is whether we still talk about lithography changes at all. Who knows? It’s not impossible to me that we start seeing companies push to improve on some metrics by re-adapting older nodes. Maybe III-V 3D chip structures on 28nm actually makes better sense than trying to push III-V 3D chip structures on 10nm because you don’t have to deal with the same degree of double/quadruple patterning.

magnimus1

I don’t think it is such a foregone conclusion that all nodes below 14nm are dead.

“For decades, it’s been a given that GPU and CPU transistor counts would increase every generation and that this would be economical because increased density allowed for a cheaper cost per square millimeter and more chips per die. If cost per square millimeter holds flat but transistor density rises, that means the cost per transistor has increased — and suddenly, those higher transistor counts aren’t nearly so desirable.”

I think these sentences give the wrong idea, even though what you’re trying to say is generally correct. Wafer cost at the same diameter rises over generations, as clearly indicated in your graph. What this means is that the cost per mm^2 actually increases as more advanced process technologies are ramped up. At the same time, increasing transistor density (because of the very same advanced tech) allows dies to be smaller (assuming same transistor count). Overall, this works out to be a cheaper cost per die as there are more dies per wafer, even with the increased cost per area, which is how scaling is justified. Additional benefits come from the ability to tolerate more random defects, giving better yield -> more profit.

What is happening now is that the cost per mm^2 is rising rapidly every generation because of multi-patterning etc., while transistor density (and thus the number of dies per wafer) is not increasing fast enough to offset this. This is what 450mm is supposed to solve: say, by increasing per-wafer cost 1.5 times for the same process while doubling the number of dies. That should technically allow things to be profitable again, which can then be used to swallow the cost of EUV.

In sum, your last sentence doesn’t convey the correct picture. Wafer cost and therefore cost per area is increasing considerably. Having transistor density increase (and therefore die area decrease) is actually a good thing and can offset this to some extent. The question is whether it can increase enough for each increasingly costly advance.
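To make the arithmetic above concrete, here is a toy calculation in Python. Every wafer cost and die size below is a hypothetical round number, invented purely to show the shape of the argument: density gains shrink the die, but if cost per wafer rises faster, cost per die still goes up, and a larger 450mm wafer at ~1.5x the cost could pull it back down.

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Very rough dies-per-wafer estimate (ignores edge loss and defects)."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area // die_area_mm2)

# All dollar figures are hypothetical, purely illustrative: not real prices.
wafer_cost_28nm = 5000.0    # $ per 300mm wafer on the older node
wafer_cost_20nm = 10000.0   # $ per 300mm wafer with multi-patterning
die_28nm = 100.0            # mm^2 for a fixed transistor budget
die_20nm = 60.0             # same transistors on the denser node

cost_28 = wafer_cost_28nm / dies_per_wafer(300, die_28nm)
cost_20 = wafer_cost_20nm / dies_per_wafer(300, die_20nm)
print(f"28nm die: ${cost_28:.2f}  20nm die: ${cost_20:.2f}")
# Density helps, but if wafer cost rises faster, the 20nm die costs more.

# The 450mm pitch: ~1.5x wafer cost buys ~2.25x wafer area.
cost_20_450 = (wafer_cost_20nm * 1.5) / dies_per_wafer(450, die_20nm)
print(f"20nm die on 450mm: ${cost_20_450:.2f}")
```

With these made-up inputs, the 20nm die on 300mm comes out more expensive than the 28nm one, while the same die on a 450mm wafer comes out cheaper than both, which is exactly the "make things profitable again" role the comment describes.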

Joel Hruska

I replaced that paragraph with this one: “If per transistor costs rise, suddenly higher transistor counts become a liability. It’s still possible to pack more transistors per square millimeter, but that mindset becomes a financial liability — every increase in density increases costs instead of reducing them.”

Does that address your concern? The replacement went through several hours ago, but apparently some folks are seeing the old version.

Heruka

I refreshed and got the update.

However, I still feel the idea is not being conveyed correctly.

It’s not the cost per transistor that matters (for comparison between nodes), it’s the cost per die. These two are obviously interrelated (I’m not denying that fact). The comparison is done by assuming the number of transistors in the die remains the same between nodes, and then seeing if overall die cost has come down. After all, we sell dies, not transistors.

It isn’t that higher transistor counts become a liability, it’s that the same number of transistors can become a liability on modern nodes.

You have said it in a different way – which is not incorrect, but less intuitive.

Sorry for being nitpicky.

Joel Hruska

All the cost comparisons I’ve seen on this topic have focused on one of two things: Cost per sq. mm or cost per transistor. I suspect that’s because the institutions that talk about it know that too many other factors impact die size.

We know, for example, that a GK104 costs Nvidia less money than a GK110. But NV doesn’t want to talk about direct die costs (for obv competitive reasons). So I suspect talking about scaling in terms of transistor cost or sq. mm cost is preferred.

“The use of VO2 as the basis of the optical switch comes as recent reports seemed to throw into question whether the material is really suitable as a replacement for silicon transistors.”

Nope. Optics are not expected to restore Moore’s law scaling. Even if it works, it has to work for billions of transistors, not a few switches.

Francis Short Jr

Read the whole article

Joel Hruska

I did.

dc

Before we all make dire predictions, let’s take a step back and breathe deeply. This article might be accurate, or it might not be. Let’s wait a few months before we declare everything dead. Officially, the company has said it is pausing. Rumors are that it will be a nine-year pause, but at this point that may not be true. Even if it is, I suspect Intel has some contingency plans that they’re keeping under wraps.

pelov lov

EUV + 450mm were the contingency plans.

Zzzz

It doesn’t matter if you go back in time 20 years or 2 months
It doesn’t matter if you go back in time 10 years or 10 months
It doesn’t matter if you go back in time 5 years or 20 months

at any point in time you will find those who claim that Moore’s law is dead…

Joel Hruska

Cost scaling was the critical component of Moore’s law. It’s gone.

Zzzz

Quad-core ARM SoCs go for as low as $10, and I’m quite sure there are $5 dual cores. Processor transistors have never been cheaper…

Yeah, again, you are yet another oracle claiming that Moore’s law is dead. Has history not taught you a lesson? :)

pelov lov

You’re not adhering to Moore’s law. Moore’s law states that the number of transistors doubles for a given amount of money spent (say $20) every two years. Consequently, the more popular definition of processing power doubling every two years originates from the notion that one pays the same $20 on the new(er) process that offers the benefit of double the number of transistors. (Feel free to butt in and simplify my convoluted definition)

Processors have been cheaper, although they’ve been on lagging nodes; I assume your $5-$10 refers to the same lagging nodes. Technically speaking, Moore’s law died at 28nm, when shrinking to 20nm didn’t offer the same density or power/voltage improvements as the shrink to 28nm did. Along with the sub-optimal density and electrical properties, the 20nm node is also *more expensive, in both relative and absolute terms, than the 28nm node*.

This means that after 28nm, a company signing a WSA for 20nm will not see its transistor count doubled for the same amount it paid two years ago for the previous node, 28nm (and in practice the gap was more than two years). Therefore, not only has Moore’s law “slowed”, it’s actually slowing exponentially, due to the exponential increases in cost and the shortfall in density.

There have been advancements made to offset this, like FinFETs, but a ~30% improvement in transistor performance (generally just a decrease in power but increase in complexity due to the poor upward scaling) isn’t anywhere close to keeping Moore’s law alive. 450mm and EUV were invested in heavily by the foundries to keep Moore’s law alive and the economies of scale favorable, but neither of those has panned out.

Your example only works if you were to state that the sub-$10 SoCs will double in transistor count (and consequently performance) at the 20nm node. They won’t.

**Nvidia’s Tegra 3 was a cheap processor in its time, selling for ~$10-$15 at 40nm, while 28nm SoCs sold for more and in lower volume. Not sure where you’re getting $5.
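The cost framing in the comment above can be reduced to a toy calculation. All the relative numbers here are hypothetical, chosen only to show how a node whose cost per mm^2 rises as fast as its density improves buys you no additional transistors per dollar:

```python
# Hypothetical relative numbers, for illustration only:
# (node, cost per mm^2 relative to 40nm, transistor density relative to 40nm)
nodes = [
    ("40nm", 1.0, 1.0),
    ("28nm", 1.2, 2.0),   # density roughly doubles, cost/mm^2 up modestly
    ("20nm", 1.9, 1.9),   # multi-patterning: cost rises as fast as density
]

budget = 100.0  # arbitrary budget units
for name, cost_mm2, density in nodes:
    area = budget / cost_mm2       # how many mm^2 the budget buys
    transistors = area * density   # relative transistor count for that budget
    print(f"{name}: {transistors:.1f} transistors per budget unit")
```

Under these made-up ratios, the 28nm step delivers ~1.67x the transistors for the same money, while the 20nm step delivers exactly what 40nm did: the cost-scaling side of Moore’s law stalls even though density still improves.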

Joel Hruska

This gets tiresome. Transistors are no longer getting cheaper. Period. Don’t want to take it from me?

Get a clue, man. This isn’t just what Intel says. It’s not just what TSMC says. It’s not just what I say. It’s what everyone says, from the lithography tool manufacturers to the independent analysts to Intel’s own engineers. Even Intel, which formerly laid out a roadmap for some limited scaling at 14nm, has been pushed off it by mounting difficulties.

thx1138v2

Early in the 20th century there was concern that the world would run out of coal based on the growth of its use: peak coal, if you will. A magazine sponsored a project where the greatest scientific minds of the time got together to discuss alternatives. All but one got it wrong. The others discussed solar power, wind power, biomass fuels, and conservation. Sounds familiar, doesn’t it? Thomas Edison said not to worry about it, because the Amazon river basin contained enough forest to power the world for 50,000 years.

The one man who got it right, Rear Admiral R. B. Bradford, said man’s ingenuity would solve the problem. And so it did, via something not unknown but little used: petrochemicals.

Expect more of the same.

Matt Menezes

I agree that there will be a lull on the hardware front. This will be the perfect time for software to start improving and catching up to the hardware. For so long, “throw more hardware at it” has been an acceptable workaround due to Moore’s law.

Since clockspeed and IPC improvements have slowed significantly, process-node shrinkage is hitting a wall, and CPUs have become more parallel, software design paradigms need to be rethought. As humans, we think sequentially, but new CPUs necessitate a more parallel approach. If there were a magic compiler switch that could multi-thread any code so that doubling the threads meant halving the processing time, we could see massive boosts in performance. The problem is that, with the way current languages are set up and the way code is generally written, automatic speed improvement from multithreading/adding more cores just doesn’t happen.

If, in the coming years, you can’t just throw more hardware resources at a given piece of software to cheaply gain performance, perhaps placing more resources into software optimization will result in the increase in performance we’ve come to enjoy. It will just come from a different direction.
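The "magic compiler switch" hope runs into Amdahl’s law: any serial fraction of a program caps the speedup no matter how many cores you add. A minimal sketch of that limit:

```python
def amdahl_speedup(parallel_fraction, n_threads):
    """Amdahl's law: best-case speedup when only part of the work is parallel."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_threads)

# Even code that is 95% parallel never reaches the
# "double the threads, half the runtime" ideal.
for n in (2, 4, 8, 64):
    print(f"{n:2d} threads -> {amdahl_speedup(0.95, n):.2f}x")
```

At 2 threads a 95%-parallel program gets close to the ideal 2x, but by 64 threads it is capped near 15x rather than 64x, which is why threading tools alone cannot substitute for process scaling.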

Joel Hruska

Multicore scaling is fairly easy at the dual-core level and gets substantially more difficult after that. I think it’s fair to say that no one has managed to create the tools that would allow software threading to deliver this kind of improvement. We have seen baby steps in this direction with features like Intel’s TSX, and of course compilers have become incrementally better. But it’s a long, slow slog.

If you look at gaming as an example, back in 2005 we see the first dual-core chips start hitting market. By 2008 – 2009 there’s still no real advantage to going quad-core — very few games are using four threads at this point, though there are a few.

It’s not until 2010-2011 that we really start to see four threads making a significant difference in many titles — and scaling above 4 threads is still quite limited.

Now, many-core designs could change these trends in the long run. But it’s a much slower process than the old model.

Dave4321

Battlefield 4 is certainly using more than 2 cores, so there’s no reason why other game makers can’t. As you can see here, i3s are at 95% core usage.

Yeah, this is exactly why I think that, in order to keep seeing the big performance improvements hardware traditionally provided, software will need to pick up the slack. There is no magic bullet to make parallel programming easy. In my world, Visual Studio and .NET have recently given us more tools to write parallel code, but you still have to think the right way from the get-go.

This could also help when TSVs and stacked chips come to the mainstream. It seems to me those technologies will result in even more cores/resources running in parallel — but what good are they if they aren’t properly/fully utilized? If new languages/tools/compilers could be created to easily facilitate good scaling of multi-threaded code from 1 to n cores, we could see some nice speedups.

jburt56

Call Los Alamos!!!

johnwerneken

I would think Intel at least CAN afford to bet huge on ALL the options – and they better lol.

polistra24

I have no fucking idea what all your fucking idiot acronyms mean.

I only know this: On actual desktop computers running actual Windows 7, Moore’s Law has run backwards at warp speed for several years. My reasonably “modern” computer sometimes takes 15 seconds to open a directory and find a file. THIS IS SLOWER THAN FINDING A PHYSICAL PAPER FILE FOLDER IN A PHYSICAL METAL FILING CABINET.

Joel Hruska

If you can’t be bothered to either 1). Google acronyms or 2). Ask for an explanation in a polite fashion, then I am not surprised you have such difficulties finding the solution to a problem with a Windows installation.

Search performance can be boosted in Windows 7 by adding files and locations to the index, ensuring that no other programs are thrashing the disk, upgrading to an SSD if you don’t use one already, ensuring you have an appropriate amount of RAM in the system, uninstalling or deactivating software that chews up a significant amount of memory, and making certain that you aren’t using a 5400 RPM HDD.

Dalbert Onyebuchi

What we cannot achieve with hardware, we can compensate for with more efficient software design. The real estate of chip manufacturing faces serious challenges now. I am still hopeful, though.

fairchij

It may be worthwhile to explore the potential offered by this company, POET Technologies (TSXV: PTK). They have achieved monolithic integration of both active and passive photonics and electronics on a single substrate using III-V. The work has come through US Defense Department SBIR grants (the same start that Qualcomm had), and they have basically been in stealth mode, working at their UConn lab and under joint development with BAE Systems. BAE has an interest in the military applications. They have had many successes and are now working with unnamed industry partners.

The key differentiator is that the inventor, Dr. Geoff Taylor, UConn optoelectronics professor and chief scientist for POET, has designed this technology to be compatible with existing foundries. They have successfully transferred the POET design rules to BAE’s Nashua III-V foundry and are having significant success. Currently they are reducing the electronics side of POET to 100nm at the request of their development partner(s), as a bolt-on replacement for silicon CMOS. A new patent was just issued to POET, which has not been publicly declared, and which reveals, among other things, the capability of POET to produce single-electron transistors (quantum computing).

The optical thyristor, in addition to multispectral detection and lasing functions (why the US Defense Department has funded this technology), can be configured for memory. It can support SRAM, DRAM, and NVRAM concurrently, and allows for massive simplification at the system level through the elimination of NVRAM backup/recovery. This memory claims bit error rates several orders of magnitude lower than silicon-based memories.

They have not yet named their main commercial partner or the POET Development Alliance members who are assisting them in the preparation of technical development kits.

I believe you will be hearing much more about this company in the very near future.

Joel, you have an opportunity to look at POET, and the only reason your name is mentioned is because I liked how you responded to the comments on your article, and your description of yourself.

“This is my industry. This is my career. Who loves an exciting performance story more than a journalist? Digging into these things, finding the hidden story, that’s what we like *doing.*”

I am wrong about your “BS” journalism ….

Joel Hruska

There’s a difference between looking for things that are likely to succeed and serving as a PR mouthpiece for companies looking to make a name for themselves.

I am glad Poet and many other companies are doing research. I am glad they continue to work on finding solutions to difficult problems. But when 3-5 people show up in a forum thread to play evangelical chorus for a given technology, that puts my radar up.

I read Poet’s technological ideas and press deck. It’s mildly interesting. It’ll be a lot more interesting if one of two things happens.

1). They’re acquired by a major vendor who announces a plan to deploy their technology in logic or memory semiconductors.

2). They figure out how to scale it below 40nm.

Until those two things happen, Poet is a cool idea with some specialized applications. And there’s nothing *wrong* with that. But it’s not the fundamental new driver of semiconductor utility.

No one will ever return to building chips on 90nm for consumer electronics, HPC, or mobile phones. Until they solve that bottleneck, it’s of limited interest to a consumer market.

Sure. That’s about 200x too large to be competitive right now. But it’s progress!

TechnoRain

This is amusing to me for a couple of reasons:
First, when I previously confronted “scientists” with the fact that Moore’s law isn’t quite so dependable as a law (how about Moore’s “I noticed a pattern for a short while” theory?), I was laughed out of the conversation. Second, the “law”, or the supposed surety that man created, was hampered by his own poor-performing industry. What then should we call a “law of man’s greed halting and even reversing technological gains”? Politics.

Joel Hruska

The scaling predicted by Moore and Dennard held true for about 50 years. That’s pretty damned impressive by any standard.

notpoliticallycorrect

What’s clear to me here is the desperation people have that *somehow* computing will continue to get exponentially more powerful. These people want a never-ending party… Meanwhile, virtually every other industry suffers from a lack of suitable talent and woefully inadequate public interest.
Mostly this desperation is born out of our world economy centred on infinite growth… And I sincerely wish that the party is on a long pause now, maybe for next 50 years. Then perhaps we’ll deal with the basic problems of the economy, the mind, education, etc. The lesson we’ll probably have, is that this time, things get harder instead of easier.
Till now we’ve been exporting our problems to the experts, so that our lives always got easier. Now we’ll see what happens when you HAVE to know stuff in depth to get more performance.

Ultimately it’s only passion that will lead to the most elegant and beautiful technology, not money. I’m waiting for the day when every problem is allocated enough resources to solve it elegantly, not just those that are “worth” the resources.

Joel Hruska

“And I sincerely wish that the party is on a long pause now, maybe for next 50 years. Then perhaps we’ll deal with the basic problems of the economy, the mind, education, etc.”

Never happened in human history, before or after semiconductors. I wouldn’t hold your breath.
