Moore’s Law is dead, long live Moore’s Law

This site may earn affiliate commissions from the links on this page. Terms of use.

Moore’s Law turns 50 this coming week — making this an opportune time to revisit Gordon Moore’s classic prediction, its elevation to near-divine pronouncement over the last 50 years, and the question of what, if anything, Moore’s Law can teach us about the future of computing. My colleague David Cardinal has already discussed the law itself, as well as the early evolution of the integrated circuit. To get a sense of where Moore’s Law might evolve in the future, we sat down with lithographer, instructor, and gentleman scientist, Dr. Christopher Mack. It might seem odd to talk about the future of Moore’s Law with a scientist who half-jokingly toasted its death just a year ago — but one of the hallmarks of the “Law” is the way it’s been reinvented several times over the past fifty years.

IBM’s System/360. Photo courtesy of Wikipedia

In a recent article, Dr. Mack argues that what we call “Moore’s Law” is actually at least three different laws. In the first era, dubbed Moore’s Law 1.0, the focus was on scaling up the number of components on a single chip. One simple example can be found in the evolution of the microprocessor itself. In the early 1980s, the vast majority of CPUs could only perform integer math on-die. If you wanted to perform floating point calculations (that is, math on non-integer values), you had to buy a standalone floating point unit with its own pinout and socket on compatible motherboards.

Some of you may also recall that in the early days of CPU cache, the cache in question was mounted to the motherboard (and sometimes upgradeable), not integrated into the CPU die. The term “front-side bus” (which ran from the northbridge controller to main memory and various peripherals) was originally contrasted with the “back-side bus,” which ran from the CPU itself to the CPU cache. The integration of these components on-die didn’t always cut costs — sometimes, the final product was actually more expensive — but it vastly improved performance.

Digital’s VAX 11/780. In many ways, the consummate CISC machine.

Moore’s Law 2.0 really came into its own in the mid-1990s. Moore’s Law always had a quieter partner, known as Dennard Scaling. Dennard Scaling stated that as transistors became smaller, their power density remained constant, meaning that smaller transistors required lower voltage and less current. If Moore’s Law said we could pack more transistors into the same area, Dennard Scaling ensured that those transistors would run cooler and draw less power. It was Dennard Scaling that broke down around 2005, as Intel, AMD, and most other vendors turned away from clock-based scaling in favor of adding more CPU cores and finding other ways to improve single-threaded performance.
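The arithmetic behind Dennard’s observation is worth spelling out. The sketch below is not from the article; it uses normalized, hypothetical values and a single assumed shrink factor purely to show why power density stays constant when dimensions and voltage shrink together:

```python
# Illustrative sketch of classic Dennard scaling (hypothetical values):
# shrink every linear dimension and the supply voltage by a factor k.

def dennard_scale(length, voltage, current, k):
    """Return (length, voltage, current) after one shrink by factor k."""
    return length / k, voltage / k, current / k

length, voltage, current = 1.0, 1.0, 1.0  # normalized units
density_before = (voltage * current) / length**2  # power per unit area

k = 1.4  # roughly one full process node shrink (assumed)
length, voltage, current = dennard_scale(length, voltage, current, k)
density_after = (voltage * current) / length**2

# Power per transistor fell by k^2, but so did its area, so power
# density is unchanged -- Dennard's observation.
print(abs(density_before - density_after) < 1e-9)  # True
```

In practice this regime ended when supply voltages could no longer be reduced in step with feature sizes (leakage current, among other things, got in the way), which is the breakdown the article describes.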

From 2005 through 2014, Moore’s Law continued, but the emphasis shifted to driving down the cost of each additional transistor. Those transistors might not run more quickly than their predecessors, but they were often more power-efficient and less expensive to build. As Dr. Mack points out, much of this improvement was driven by developments in lithography tools. As silicon wafer yields soared and manufacturing output surged, the cost of manufacturing per transistor fell, while the cost per square millimeter fell slowly or stayed about the same.
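That relationship between flat per-area cost and falling per-transistor cost can be made concrete with some back-of-the-envelope arithmetic. The figures below are hypothetical, chosen only to illustrate the mechanism, not taken from the article or any real foundry:

```python
# Hypothetical illustration: if cost per square millimeter holds roughly
# flat while transistor density doubles each node, the cost per
# transistor halves at each step.

cost_per_mm2 = 0.10    # dollars per mm^2; assumed constant across nodes
density = 1_000_000    # transistors per mm^2 at the starting node (assumed)

for node in range(4):
    cost_per_transistor = cost_per_mm2 / density
    print(f"node {node}: {cost_per_transistor:.2e} $/transistor")
    density *= 2       # one full node shrink roughly doubles density
```

Run it and the per-transistor cost drops by half each generation even though the per-area cost never moves — which is why cost scaling could continue even after clock-speed scaling stopped.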

Moore’s Law scaling through the classic era.

Moore’s Law 3.0, then, is far more diverse and involves integrating functions and capabilities that haven’t historically been seen as part of the CPU at all. Intel’s on-die voltage regulator, or the further integration of power circuitry to improve CPU idle and load characteristics, could be thought of as one application of Moore’s Law 3.0, along with some of Nvidia’s deep learning functions, or its push to move camera processing onto the same silicon that powers the rest of the chip.

Dr. Mack points to ideas like nanorelays: tiny moving switches that may not flip as quickly as digital logic, but don’t leak power at all once flipped. Whether such technologies will be integrated into future chip designs is anyone’s guess, and the payoff from the research being poured into them is far from certain. It’s entirely possible that a company might spend millions trying to implement a design more efficiently in digital logic, or to adapt semiconductor principles to other types of chip design, only to find the final product is just incrementally better than the previous part.

The changing nature of Moore’s Law

There’s an argument against this shift in usage that goes something like this: Moore’s Law, divorced from Gordon Moore’s actual words, isn’t Moore’s Law at all. Changing the definition of Moore’s Law changes it from a trustworthy scientific statement into a mealy-mouthed marketing term. Such criticisms aren’t without merit. Like clock speeds, core counts, transistor densities, and benchmark results, Moore’s Law, in any form, is subject to distortion. I’m sympathetic to this argument: when I’ve called Moore’s Law dead in the past, it’s that strict, original formulation I’ve been referring to.

One criticism of this perspective, however, is that the extra layers of fudge were added a long time ago. Gordon Moore’s original paper wasn’t published in The New York Times for public consumption; it was a technical document meant to predict the long-term trend of observed phenomena. Modern foundries remain focused on improving density and cutting the cost per transistor (as much as is possible). But the meaning of “Moore’s Law” quickly shifted from a simple statement about costs and density trend lines to an overarching trend that supposedly governed nearly every aspect of computing.

Even this overarching trend began to change in 2005, without any undue help from marketing departments. At first, both Intel and AMD focused on adding more cores, but this required additional support from software vendors and performance tools. More recently, both companies have focused on improving power efficiency and cutting idle power to better fit into mobile power envelopes. Intel and AMD have done amazing work pulling down idle power consumption at the platform level, but full load CPU power consumption has fallen much more slowly and maximum CPU temperatures have skyrocketed. We now tolerate full load temperatures of 80-95°C, compared to max temperatures of 60-70°C less than a decade ago. CPU manufacturers and foundries deserve credit for building chips that can tolerate these higher temperatures, but those changes were made because the Dennard Scaling that underlay what Dr. Mack calls Moore’s Law 2.0 had already failed.

Transistor scaling continued long after IPC and clock speed had essentially flatlined.

Even an engineering-minded person can appreciate that each shift in the definition of Moore’s Law accompanied a profound shift in the nature of cutting-edge compute capability. Moore’s Law 1.0 gave us the mainframe and the minicomputer. Moore’s Law 2.0’s emphasis on per-transistor performance and cost scaling ushered in the era of the microcomputer in both its desktop and laptop incarnations. Moore’s Law 3.0, with its focus on platform-level costs and total system integration, has given us the smartphone, the tablet, and the nascent wearables industry.

Twenty years ago, the pace of Moore’s Law stood for faster transistors and higher clock speeds. Now it serves as shorthand for better battery life, higher boost frequencies, quicker returns to idle (0W is, in some sense, the new 1GHz), sharper screens, thinner form factors, and, yes — higher overall performance in some cases, albeit not as quickly as most of us would like. It endures as a concept because it stands for something much larger than the performance of a transistor or the electrical characteristics of a gate.

After 50 years, Moore’s Law has become cultural shorthand for innovation itself. When Intel, or Nvidia, or Samsung refer to Moore’s Law in this context, they’re referring to the continuous application of decades of knowledge and ingenuity across hundreds of products. It’s a way of acknowledging the tremendous collaboration that continues to occur from the fab line to the living room, the result of painstaking research aimed at bringing a platform’s capabilities a little more in line with what users want. Is that marketing? You bet. But it’s not just marketing.
