It is all obvious or trivial except…

The end of Moore’s Law

Interesting piece about how silicon fabs are getting more and more expensive and so we might see Moore’s Law slow down.

I’m not sure he’s entirely right in detail. I think there might be some confusion between cramming the transistors closer together and using bigger wafers to make the chips from. From what I understand, it’s the latter that dramatically raises the costs.

Still, I think there’s one huge thing missing from his piece:

Moore’s law has seen computing power grow exponentially for 40 years – but soon economics, not physics, may stop it.

He’s using the word “economics” there to mean huge costs. But buried in there is the real argument: the huge costs are indeed there, but no one is sure that they can sell enough of the new chips to make those costs worthwhile. That is, the proper economic point is that the costs will be greater than the benefits (or the benefits consumers are willing to pay for, perhaps), and thus this is an investment that we don’t in fact want to make. That is, economics is stopping us from wasting scarce capital.


3 comments on “The end of Moore’s Law”

What affects costs most is yield: the ratio of live chips to dead ones. This is dependent on the size of the chip and the size of the wafer: a flaw in the silicon on a big wafer of small chips knocks out only one of many, while a flaw on a wafer of large chips knocks out proportionally more. The size of the chip is, of course, dictated by the size and number of transistors.
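The yield effect described here can be sketched with the classic Poisson defect model, in which the probability that a die has zero fatal defects falls exponentially with die area. This is an illustration only: the defect density and die sizes below are made-up numbers, and edge losses around the wafer rim are ignored.

```python
import math

def die_yield(defect_density_per_cm2: float, die_area_cm2: float) -> float:
    """Poisson yield model: probability a die contains zero defects."""
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

def good_dies_per_wafer(wafer_diameter_cm: float, die_area_cm2: float,
                        defect_density_per_cm2: float) -> float:
    """Approximate count of working dies on one wafer (ignores edge losses)."""
    wafer_area = math.pi * (wafer_diameter_cm / 2) ** 2
    total_dies = wafer_area / die_area_cm2
    return total_dies * die_yield(defect_density_per_cm2, die_area_cm2)

# Same wafer, same defect density: small dies suffer proportionally
# fewer losses per flaw than large dies do.
small = good_dies_per_wafer(30.0, 0.5, 0.2)  # 300 mm wafer, 0.5 cm^2 dies
large = good_dies_per_wafer(30.0, 4.0, 0.2)  # same wafer, 4 cm^2 dies
print(f"small-die yield: {die_yield(0.2, 0.5):.1%}")  # ~90%
print(f"large-die yield: {die_yield(0.2, 4.0):.1%}")  # ~45%
```

The exponential in the model is why doubling die area more than doubles the fraction of dead chips, which is exactly the cost pressure the comment points at.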

I think Moore’s law is quite specifically about the density of transistors. Those who anticipate its imminent end often fail to predict new techniques.

Ray Kurzweil points out that Moore’s law is part of a bigger picture of increasing computing power, traces exponential growth back to mechanical computers, and considers new technologies like vacuum tubes and silicon transistors the paradigm shifts that broke the limits of their preceding technologies.

So I suppose Moore’s law might come to an end, but I don’t see any reason for computers in general to stop getting more powerful.

As for costs and benefits, I can *imagine* infinite benefits — like being able to live forever in a universe of your own creation — from theoretically possible computers. So I don’t see an endpoint where everyone is content with what they have.

What might cause trouble are what might be called local minima in the cost/benefit curve between here and there. Sort of like space travel not catching on because although you *could* make lots of money mining the asteroids, intermediate technology you need to develop first isn’t cost effective.

I’d like to think that development would just slow down — space flight continues to advance, even if slowly — but I suppose it’s possible we could get stuck in a cost/benefit minimum, which would be disappointing.