Is Moore's Law Dead? Does It Matter?

Does US national security depend on the continuance of Moore's Law, or is innovation that addresses the realities of the technology business more important to progress and wealth creation?

Most of the time when I hear an end-of-the-world story, I just roll my eyes and move on. However, I heard a message at DAC this year that still has me thinking.

People have been talking about the end of Moore's Law for some time, but those discussions became a lot more urgent and heated at DAC in June. Many reasons have been postulated as to why Moore's Law might end, including our inability to overcome some physical limitation -- perhaps a design issue that prevents the whole chip from being powered up at the same time. More recently, the matter of cost has been raised: it may become so expensive to design a chip at the next node that nobody will be able to afford it. The concern is that, with fewer design starts using the latest technologies and lower chip volumes, manufacturers would not invest in wafer fabs for the next technology.

I am not sure I fully get behind any of these arguments, but if we do stop making these advances what really happens? Is there no room for innovation if monolithically integrated devices cannot get more complicated? I am sure that some companies will be affected by this "crisis" as their commercial lead is contingent on being ahead of the design and fabrication curve rather than having the best design. Such an end may well transform our industry, but then we cannot expect the ride we have been on for 50 years to continue without some kind of change.

Robert Colwell, who works for DARPA, said at DAC that the end of Moore's Law would be a US national security threat. This is based on the assertion that if the US does not stay ahead of the rest of the world in terms of computing power and associated technologies, then the rest of the world will become as capable as the US and be able to do things without the US government finding out -- and they will be able to find out what the US is planning to do.

Similar assertions can be, and are, made about weapons, of course. My first reaction is a political one: why can we not spend more time getting along with people, so that this is just not an issue we care about? OK, so I am idealistic, and I understand that some people may not think this is realistic or pragmatic.

Does innovation die when we cannot create more complex devices? I hope not. I hope that we would find ways to use our knowledge and capabilities in better ways, exploring different architectures instead of accepting those in existence today simply because that is easier and faster. What about biological computing, or computers that operate more like the brain, rather than just accepting that binary arithmetic is the way to go?

So, what happens if the whole world has equal access to technology? Does stability depend on one country having a bigger stick than everyone else?

Here's another thought: maybe the end of Moore's Law would actually help US national security. Why? Because unless there is a major shift in the tide, China will soon eclipse the US as the world's largest economy and will be able to outspend any country on the planet in defense. If there is a technological advantage to be acquired by purchasing faster chips, it would then go to the country with the deepest pockets -- China, to the detriment of the US. We would, in effect, be outspent in the chip race by the Chinese in the same way the Soviets were outspent in the arms race by Uncle Sam.

I'm not saying a richer nation can outspend us on their way to a technological advantage. Somebody could potentially outspend us and gain an advantage in numbers, perhaps, but before you just assume that, take a serious look at the world economics that would imply and ask how realistic that would be, at least any time soon. If their acquisition system is less constipated than ours, that would also help them get an edge sooner. In this forum, I haven't tried to address the question of whether one can spend one's way past Moore's Law and thereby dominate. We largely ride Moore's Law COTS components too, though not exclusively. But historically we've ridden the leading edge better, and that's what we're about to lose, since there won't be a leading edge any more.

The only reason I mentioned all the high tech wonders that non-peer-state folks are essentially getting for free (meaning no investment in the intellectual property development or the manufacturing infrastructure required of such devices) is that that situation has let such potential competitors "in the door". Yes, we are using technology that is derived from the same Moore's Law developments, as well as more exotic stuff that only peer states can reasonably attempt.

Finally, I don't recall saying Moore's Law is "essential to our national security." I'm saying the impending end of it will have implications for the U.S. Dept of Defense, and we can (a) handle those implications in intelligent ways or (b) stick our heads in the sand and simply hope everything turns out all right. The gist of the talk I gave was that I'm actively pushing for (a).

I am not certain that in the 21st Century, the definition of "Moore's Law" is directly driven by (or dependent on) "National Security" concerns and/or "Defense [Offense?]" spending. A few good examples of the fact that Toto is no longer in Kansas can be seen in such "COTS" programs as the SpaceX Dragon spacecraft, DishNetwork/DirectTV/SiriusXM satellites, and even the Atom/SnapDragon processors. Ditto for scientific endeavors such as particle accelerators/colliders vis-a-vis weapons. To find a direct causal relationship between the military budgets and what Google has done to the internet would be a far stretch. In the previous decades, Moore's Law had a military driver but today's autonomous vehicles do not need a centralized conductor that is charting the course ahead. Unless, of course, we can ultimately attribute such 'progress' to John/Jane Q. Public (aka consumers).

Absolutely. If you're saying that an even richer nation in the future will be able to spend more on weaponry than we can afford, and obtain the advantage that way, then the argument would switch over. That makes sense. However that's not what the original contention had been.

My argument is that ever since Moore's Law became a factor, it has created a challenge to our national security. So unless the economic realities change drastically (and they well might, as you suggest), we can't convincingly claim that Moore's Law is essential to our national security. We can't have it both ways.

How can this be good for the US military? As I understand your argument, you're saying that military power will follow the economic power of states very closely. Given that most forecasters predict China will have the biggest economy by far by mid-century or sooner, with the USA running a distant third by some estimates, the US military will also lag significantly behind the militaries of the economic leaders. This can only imply that global US influence will shrink significantly by mid-century.

What I mean is, our military platforms and weapon systems, before Moore's Law was a factor (say, up until the 1960s), were still the best there were, RELATIVE TO what other countries had at the time. That's all that matters, for defense. It's always relative to what your adversaries have. We had the best jets, subs, aircraft carriers, and surface combatants. Mainly because we could afford the R&D to develop them, and the acquisition costs.

Your contention was that an end to Moore's Law is a threat to our national security. My contention is that an end to Moore's Law would instead be a great advantage for our national security, BECAUSE national security is a matter of relative military capabilities. An end to Moore's Law would prevent our enemies from building ever more powerful weapons on the cheap. Instead, we would be back to a situation in which building better weapon systems is difficult and costly. This is an advantage for countries that can afford the expense, and a disadvantage for countries that cannot.

When you say this: "What I'm saying is that back then, non-peer-state folks did not have the ability to instantly and cheaply call each other from a mobile handheld device (smartphones). They didn't have the ability to make IEDs and set them off remotely. They didn't have ground to air missiles with competent seekers."

My response is, PRECISELY! Moore's Law is to blame for this. It hurt our national security, it didn't help it. Suddenly, the playing field was leveled. It gave our adversaries all manner of cheap high tech weapons. An end to Moore's Law would slow down our adversaries' ability to acquire ever more powerful weapons, which in relative terms helps those who can afford the cost of expensive innovation.

Bert22306, I'm not sure what you're referring to by stating that the U.S. "still had the best technology." Perhaps you mean big comms systems, radar systems, and satellites? I'd agree with you on those. (By "smart bombs" I mean laser-guided, not GPS. Laser guided bombs were not available until 1972 or so.) Certainly not in rifles, tanks, radios, and fighters. (Source: Boyd, by Robert Coram, and About Face, by Col. David Hackworth.) But that isn't really the point I'm going after.

What I'm saying is that back then, non-peer-state folks did not have the ability to instantly and cheaply call each other from a mobile handheld device (smartphones). They didn't have the ability to make IEDs and set them off remotely. They didn't have ground to air missiles with competent seekers. Radar was out of the question (as opposed to today, when it is standard equipment on high-end cars). There was no internet to use for wide coordination of multiple people and cheap dissemination of directives and other information. Non-peer-states were not going to create jammers. We have put very powerful technology into the hands of people who wish us ill. I think your basic point is that, so far, Moore's Law may have helped them more than it has helped us, and I think that's probably right.

For various reasons, and despite the acquisition horrors, there are many examples where the U.S. has taken better advantage of the Moore's Law COTS bounty than non-peer-state actors. (And probably some that came out the other way.) We must also worry about peer threats, but I don't see that as COTS-related. My concern is that in general, stationary targets get hit, and the end of exponentially-improving electronics makes it way harder to keep moving.

"We did not have a high-tech military then: no smart bombs, no UAVs, no GPS, no computer-guided anything. Now we do. If our technology stops giving us an advantage, we have a real problem for which there was no counterpart back then."

And yet, we still had the best military technology. It just cost us more, and the potential adversaries had a harder time funding anything that was technically equivalent.

We did have smart bombs before GPS, btw. They were not GPS guided, but they were guided by onboard gyros. More expensive? Perhaps, but it was also more expensive for adversaries. Those who could afford the extra expense had a leg up, and that would be us. My point is that any advantage Moore's Law gives us, in terms of cheap and ever better weapons, is the advantage it gives everyone else too. It makes it that much easier for the bad guys to carry shoulder-launched missiles, to build clever IEDs, to monitor our communications, and on and on.

The relative threat is everyone's concern, and Moore's Law, if it does anything at all with respect to defense, levels the playing field. In defense, that's considered bad, not good.

MIT is correct when looking at progress in hindsight. They are mistaken when extrapolating into the future. All the switch from Moore's Law to Wright's Law does is change an uncertainty in time into an uncertainty in units. There are also blind spots with respect to technology transitions, which Wright's Law has trouble accommodating, as it assumes the units are continuous over the same learning environment. Given that the semiconductor industry purposely creates discontinuities with a completely new process every few years, and that Moore's Law is a cause, not an effect (see my comment, above), it's small wonder that Moore's Law is used in that industry rather than Wright's Law.
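The distinction between the two laws can be sketched with a toy model (the parameters below are made up for illustration and are not from this discussion): Moore's Law forecasts cost as a function of elapsed time, while Wright's Law forecasts it as a function of cumulative units produced, so the uncertainty moves from one axis to the other.

```python
def moores_law_cost(c0, t_years, doubling_time=2.0):
    """Moore's-Law-style forecast: cost per unit as a function of *time*,
    halving every `doubling_time` years (illustrative parameters)."""
    return c0 * 2 ** (-t_years / doubling_time)

def wrights_law_cost(c0, cumulative_units, b=0.3):
    """Wright's-Law-style forecast: cost per unit as a power law of
    *cumulative units produced*; b is a made-up learning exponent."""
    return c0 * cumulative_units ** (-b)

# Same starting cost, two different independent variables:
# with Moore's Law the forecast error lives on the time axis,
# with Wright's Law it lives on the units axis.
print(moores_law_cost(1.0, 4.0))   # two doublings in time -> 0.25
print(wrights_law_cost(1.0, 4.0))  # 4**-0.3, roughly 0.66
```

Note that a process discontinuity (a new node every few years) resets the learning curve that `wrights_law_cost` assumes is continuous, which is the objection raised above.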

Finally, maybe you can make the case that your by-units forecast is more certain than your by-time forecast. That would really inform which one to use. But I have never seen a forecast that was uncertain in only one of those dimensions, so while Wright's Law may provide the theoretical underpinnings for your model, it isn't going to replace current heuristics.