Counterpoint: Is Moore's Law really the industry's misfortune?

Since its pronouncement in 1965, Moore's Law has been the defining paradigm of the electronics industry. ("Law" is a misnomer; it's really a conjecture. F = ma is a true law.)

Under its guidance, we relentlessly drive towards ever smaller features, higher densities, larger wafers and more chips per wafer. Yet there's a problem with living under this law and its corresponding roadmap.

True, the implications of Moore's Law have enabled the spectacular growth of the industry, but it has come at an enormous cost, literally and figuratively. Before you start flinging silicon wafers at me, let's step back and take a different perspective.

The R&D needed to advance process technology to the next node is extraordinarily costly, and it is matched by the billion-dollar fabs needed to take it to full production. In turn, vendors need extremely high volumes to pay for all this, while the market opportunities at these very high volumes are fading. How many products exist that will need tens or even hundreds of millions of units of a given IC?

What we have is a circular situation: high up-front costs are paid for by large volumes, but at decreasing per-chip prices. Meanwhile, the low pricing drives the volume that pays for everything. Then the cycle starts again, except the cost of developing the next generation of process technology is ten times higher than the previous one.
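The arithmetic behind that circle can be sketched with a toy model. Every figure below (the up-front cost, the chip price, the 10x node multiplier, the price decline) is an illustrative assumption, not industry data; the point is only to show how the break-even volume explodes as up-front costs grow while per-chip margins shrink:

```python
# Toy amortization model of the Moore's Law cost cycle.
# All dollar figures and multipliers are illustrative assumptions.

def breakeven_volume(upfront_cost, price, unit_cost):
    """Units that must be sold to recover the up-front R&D + fab spend."""
    return upfront_cost / (price - unit_cost)

upfront = 1e9            # assumed $1B for R&D plus fab at the current node
price, unit = 10.0, 4.0  # assumed selling price and marginal cost per chip

for node in range(3):
    vol = breakeven_volume(upfront, price, unit)
    print(f"node {node}: up-front ${upfront/1e9:.0f}B -> "
          f"break-even {vol/1e6:.0f}M chips")
    upfront *= 10   # next node assumed 10x more expensive (per the article)
    price *= 0.7    # per-chip pricing assumed to fall each generation
```

Under these made-up numbers, each turn of the cycle demands a far larger unit volume than the last, which is exactly the shrinking-market squeeze the article describes.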

Since capital costs are so high, the industry loses a fortune when fab utilization and yield aren't close to 100 percent.

At some point, the merry-go-round stops, for three reasons: costs simply get too high to support; the market opportunities shrink; and, most inviolable, the laws of physics dictate that this can't go on forever.

That's the point I am afraid we are reaching. The vaunted, celebrated roadmap has been both our guide and our straitjacket.

The undeniable success of process technology advances based on Moore's Law has squeezed out funding for other approaches, such as biologically based components, analog computing, or integrated electro-optics, to name just a few. The money and opportunities available for significantly different approaches, which may offer a way around or beyond Moore's Law, are minuscule compared to what is spent on conventional process technology.

Our relentless pursuit of semiconductor process improvements has squelched technological diversity, which history shows is a very risky way to grow and survive, offering no pathway out at its endpoint. As an industry, we may now be paying the price.

We may not be able to spend our way past the "Moore's Law ends here" sign and the corresponding implications. After all, there is a law even stronger than Moore's Law, namely, the law of unintended consequences.

I guess at this point, the question that needs to be answered is not whether Moore's Law will continue to hold true or whether it will cease to be a law. The question we should ask is: what specific applications need new technologies, and what is their market size? In other words, if there is a need (i.e., a new market), new technology will be developed irrespective of the wall in front of Moore's Law.

Several years (or was it some decades?) ago, researchers such as Carver Mead showed that "bio-inspired" circuits, e.g., silicon CMOS circuits operated in the sub-threshold region, can be made to mimic biological neurons and perform many difficult tasks orders of magnitude faster than digital CMOS. At a more mundane level, power-efficient analog CMOS implementations of Turbo/Viterbi decoders have also been demonstrated, although not exactly using "bio-inspired" principles.
I wonder why analog designers (and academic committees, paper reviewers and the like) have abandoned potentially viable avenues of original research (bio-inspired or otherwise) in their obsession with making faster, higher-resolution data converters in increasingly expensive (and difficult-to-design-for) technologies just to keep up with Moore's Law, knowing very well (who should understand MOS transistors better than analog designers?) that the party will not last long!

Similar pronouncements of gloom and doom about Moore's Law have been made every five or ten years. Let me see: at five-year intervals over 40 years, that's eight times the pronouncements have been wrong. What are the odds that this time you have it right?

Bill... I think if the premise of your article is to get us to think about the economic benefits of Moore's Law, then you have struck the right chord. The idea of steadily marching toward new technology has its own set of drivers, but the health of an overall industry is not based on technological advances alone. There was always a nice cost benefit to Moore's Law. If that is changing, because of the S-curve that others have brought up, then some creativity should be put into finding ways to improve performance metrics and lower costs. Smarter may be better, as others have also mentioned. One of our great talents is trying to simplify the complex. Maybe posing the right questions will reveal new paths forward.

The "Deep Submicron Wall" and Moore's "Law"
Moore's "Law" (or "conjecture," or whatever one prefers to call it!) only deals with the PHYSICAL aspect of semiconductor technology and the advantages realized through ever-shrinking process geometries. THIS part of it indeed has a practical limit: when there are no longer enough electrons/holes on the base of a conventional transistor to tell it (statistically and reliably) whether it's "on" or "off". That PHYSICAL wall is still several process generations off, but it does form a genuine theoretical limit on process geometries.
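That statistical limit can be made concrete with a rough back-of-envelope count. The geometry (a simple box-shaped region scaling with the node) and the uniform doping concentration below are illustrative assumptions only, not data for any real process:

```python
# Back-of-envelope: how many dopant atoms sit in a transistor's active region?
# Box geometry and uniform doping are simplifying assumptions for illustration.

def dopants_in_region(length_nm, width_nm, depth_nm, doping_per_cm3=1e18):
    """Expected dopant count in a box-shaped active region (1 nm = 1e-7 cm)."""
    volume_cm3 = (length_nm * 1e-7) * (width_nm * 1e-7) * (depth_nm * 1e-7)
    return doping_per_cm3 * volume_cm3

for node in (90, 45, 22, 10):
    n = dopants_in_region(node, node, 10)
    print(f"{node} nm feature: ~{n:.0f} dopant atoms")
```

With these assumed numbers, the count drops from tens of atoms to single digits as features shrink, and since placement is essentially random (Poisson statistics, fluctuations of order the square root of the count), device behavior stops being statistically reliable, which is the wall the comment describes.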
Several other commenters have suggested we're simply pushing the problem into software, and clearly, there is HUGE room for improvement in that domain.
On the physical front, several things can be done to vault the "deep submicron wall". Some of these concepts are already under investigation. First, the move to true 3D structures is well underway. The problem here is that, despite the extra dimension, current tools and development strategies are ill-prepared to take full advantage of the opportunities therein. INTERCONNECT must ALSO fully exploit the 3D realm, and here again, current EDA tools are sorely lacking.
Another area is chemical and/or organic computing, although a COMBINED chemical-electrical system (think: "human brain") may ultimately prove to be the best solution.
For me, coming from an FPGA/reconfigurable computing background, one of the most promising (and short-term) ways "around" the deep submicron wall is through adaptive circuits and interconnect. Some modicum of FIXED functions (memory and PHY interfaces come to mind) is generally required, but BEYOND that, I envision an amorphous "blob" (3D, of course!) of computational and interconnect resources that can sample an incoming data stream, determine from the CONTENT what the processing requirements are, and then automatically and in REAL TIME configure itself to best support those requirements given its available resources. Imagine a video processor that monitors an incoming stream, buffers up a line or two at the input, says "AHA! This is an H.264 High 10 Profile stream," and then automatically configures itself for decompressing that stream. Sweet, eh?!
Taken to the extreme, such amorphous silicon (or OTHER material!) devices could analyze ANY incoming data (video, audio, OBJECT CODE for ANY processor type, or even SOURCE CODE) and automatically create an optimized computational platform for that stream. In the case of SOURCE code, the "device" could even create, as part of the process, its own compiler/assembler to generate the executable form.
Such a "device" could ultimately be taught to "learn", based on previous inputs it has seen and the processing requirements it knows are needed for THOSE inputs, to handle new, different stimuli, much like the way the human brain learns to handle different tasks and even different languages. THIS could help alleviate the much-ballyhooed SOFTWARE "problem".
So, is Moore's Law DEAD? Well, the deep submicron wall will clearly smack it in the head, but INNOVATION and thinking "outside the box" (or "inside the blob," as the case may be!) can keep us moving forward with ever-more-powerful and power-efficient computing machines.
robertcklein@msn.com

Hi,
A storm in a teacup!
What is all the fuss about?
Yes, yes, yes, "Moore's Law will end one day..." I've been hearing that for 20 years now.
But what about the software to feed these little babies?
We are still in the dark ages of software, from CAD to OS; while we have so many gates per square micron, we do not know what to do with them...

I recently threw out some old IEEE magazines from the early 1980s. A couple of the articles were about how fundamental theoretical limits meant that feature sizes would never shrink below 130 nm, or possibly 100 nm at a push. Today, I see more and more reports on carbon nanotubes and graphene appearing. All it means is that in the next 10-15 years (the sci-fi far future) there are going to be a number of major shifts in technology. The world as we know it will not end.

With all due respect, Bill, your article is total BS. Of course Moore's Law (actually an empirical observation turned self-fulfilling prophecy, not a conjecture or a law) will eventually end for the planar silicon process. This is not news; pundits have been predicting its demise for many years. But a straitjacket? On what data do you base that incendiary observation? The dollars spent developing new process nodes have not in any way squeezed funding for the alternatives. Research into all of the alternatives you mentioned is humming along quite nicely, thank you. Do they receive billions of dollars a year in funding? No. But they are all at the conceptual stage and don't justify or require that level of funding yet. Research funding levels are governed by supply and demand like everything else. When (or if) a promising alternative shows up, some forward-thinking entrepreneurs will commercialize it, rejuvenating Moore's Law and getting rich in the process. That's a powerful motivator for creative minds. A better question to ask is: what will happen to our consumer electronics economy if an alternative doesn't emerge in time and Moore's Law does slow down?

Bill, I agree. I would add that all "technology modes" (horse-drawn buggies, steam locomotives, vacuum tubes, semiconductors, chemical rockets, gas-powered cars, bio-fuels, etc.; pick one) tend to follow an S-curve of growth (however measured) up to a "utility plateau", at which point they get replaced by the next technology. Moore's "conjecture" (like it!) defined the lower part of the S-curve (an exponential), but all exponentials come to an end at some point. They stop being exponential, move through linear, then plateau; hence the "S" shape. Now, just don't ask me what's next! All I can conjecture is that our successors will, at some point, find a way to compute and store at the molecular level. "Beam me up, Scotty."
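The S-curve this commenter describes is the classic logistic curve: approximately exponential early on (the Moore's Law regime), roughly linear through the middle, and flat at the "utility plateau". A minimal sketch, with purely illustrative parameters, makes the shape concrete:

```python
import math

# Logistic (S-curve) model of a technology's capability over time.
# ceiling, rate and midpoint are illustrative parameters, not fitted data.

def logistic(t, ceiling=1.0, rate=1.0, midpoint=0.0):
    """Capability at time t: exponential early, linear mid-life, plateau late."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

for t in (-6, -3, 0, 3, 6):
    print(f"t = {t:+d}: capability {logistic(t):.3f}")
```

For t far below the midpoint, the curve is well approximated by a pure exponential, which is why a technology riding the lower half of the "S" looks like it will grow forever right up until it doesn't.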

I design for a semiconductor company, and I have touted rising fab costs and diminishing prices (and also the failure of Moore's Law to provide speed and power benefits in the last few nodes) to the people I work with. No one where I work agrees with you, Bill, except me. Dual-core, quad-core and higher platforms are pushing our fundamental problems into software. It is my strong belief that we have hit a wall, actually multiple walls (cost, power and speed), and without a major reinvention of ourselves, only those of us who can optimize their designs to bring all of these down to an acceptable level will be able to compete in the near future. The next couple of technology nodes will be reserved for a very small club... We have ridden the wave of Moore's Law, creating inefficient designs that could compete simply because in 18 months they would be cheaper, faster and more power-efficient at no design cost. We bet on it, we relied on it, we even designed for it.