Why CPU Clock Speed Isn’t Increasing

There was once a time when CPU clock speed increased dramatically from year to year. In the 90s and early 2000s, clock speeds climbed at an incredible pace, shooting from 60 MHz Pentium chips to gigahertz-level processors within a decade.

Now, it seems that even high-end processors have stopped increasing their clock speeds. Dedicated overclockers can force the best silicon to around 9 GHz with liquid nitrogen cooling systems, but for most users, 5 GHz is a limit that hasn’t yet been passed.

Intel was once planning to reach a 10-GHz processor, but that remains as out of reach today as it was ten years ago. Why did processor clock speed stop increasing? Will processor clock speed start increasing again, or has that time passed?

Why CPU Clock Speed Isn’t Increasing: Heat and Power

As we know from Moore’s law, transistor size shrinks on a regular basis. This means more transistors can be packed into a processor, which typically means greater processing power. There’s also another factor at play, called Dennard scaling. This principle states that power density stays constant: as transistors shrink, the power needed to run a given area of silicon stays the same even as the number of transistors in that area increases.
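The arithmetic behind Dennard scaling can be sketched in a few lines of Python. The shrink factor and scaling rules below are the idealized textbook ones, not data from any real manufacturing process:

```python
# Sketch of ideal Dennard scaling (illustrative, not real process data).
# Under ideal scaling by a factor k, transistor dimensions, voltage, and
# current all shrink by 1/k, so power per transistor drops by 1/k^2 while
# transistor density rises by k^2 -- power per unit area stays constant.

def dennard_scale(k: float) -> dict:
    """Return the ideal scaling of key quantities for a shrink factor k > 1."""
    power_per_transistor = (1 / k) ** 2  # P = V * I, and both scale by 1/k
    density = k ** 2                     # transistors per unit area
    power_density = power_per_transistor * density
    return {
        "power_per_transistor": power_per_transistor,
        "transistor_density": density,
        "power_density": power_density,
    }

result = dennard_scale(1.4)  # roughly a 0.7x linear shrink per generation
print(result["power_density"])  # stays at 1.0: constant power per unit area
```

This is exactly the bargain that held through the 90s: each generation packed in more transistors at the same power per square millimeter.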

However, we’ve begun to encounter the limits of Dennard scaling, and some are worried that Moore’s law is slowing down. Transistors have become so small that Dennard scaling no longer holds: at very small feature sizes, current leaks through transistors even when they are switched off. Transistors keep shrinking, but the power required to run them no longer falls with them.

Thermal losses are also a major factor in chip design. Cramming billions of transistors onto a chip and switching them on and off billions of times per second creates a ton of heat. That heat is deadly to high-precision, high-speed silicon. It has to go somewhere, and proper cooling solutions and chip designs are required to maintain reasonable clock speeds. The more transistors are added, the more robust the cooling system must be to handle the increased heat.

Increasing clock speed also requires increasing voltage, and since dynamic power scales with voltage squared times frequency, power consumption rises roughly with the cube of the clock speed. So as clock speeds go up, power consumption and heat output climb dramatically, requiring ever more powerful cooling solutions. In the end, power requirements and heat production outpace the gains from clock speed increases.
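A back-of-the-envelope sketch of that cubic relationship, using the standard dynamic-power formula P = C · V² · f with purely illustrative numbers:

```python
# Rough model of dynamic CPU power: P = C * V^2 * f, where C is the switched
# capacitance, V the supply voltage, and f the clock frequency. If voltage
# must rise roughly in proportion to frequency to keep transistors switching
# reliably, power grows with roughly the cube of the clock speed.
# All numbers here are illustrative, not measurements of any real chip.

def dynamic_power(capacitance: float, voltage: float, frequency: float) -> float:
    return capacitance * voltage ** 2 * frequency

C = 1e-9       # effective switched capacitance (illustrative)
base_v = 1.0   # supply voltage in volts
base_f = 3e9   # 3 GHz clock

p1 = dynamic_power(C, base_v, base_f)
# Double the clock and scale voltage along with it:
p2 = dynamic_power(C, base_v * 2, base_f * 2)
print(p2 / p1)  # 8x: doubling the clock roughly octuples the power
```

Doubling performance at the cost of eight times the power (and heat) is why simply cranking the clock stopped being viable.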

Why CPU Clock Speed Isn’t Increasing: Transistor Troubles

Transistor design and composition are also preventing the easy headline clock-speed gains we once saw. While transistors are reliably getting smaller (witness shrinking process sizes over time), they’re not operating much more rapidly. Historically, transistors got faster in part because the gate dielectric (the insulating layer beneath the gate electrode that switches the transistor on and off) kept getting thinner. But since around Intel’s 45nm process, that layer has been on the order of 0.9nm thick, just a few atoms, and it can’t thin out much further. While different transistor materials can allow for faster switching, the easy speed increases we once had are probably gone.

Transistor speed also isn’t the only factor in clock speed anymore. Today, the wires connecting the transistors are a big part of the equation as well. As transistors shrink, so do the wires connecting them, and the smaller a wire’s cross-section, the greater its resistance and the less current it can carry. Smart routing can help reduce travel time and heat production, but a dramatic speed increase might require a change to the laws of physics.
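As a rough illustration of the wire problem: a wire’s delay is proportional to its resistance times its capacitance, and resistance climbs steeply as the cross-section shrinks. The dimensions and material constants below are illustrative, not real interconnect data:

```python
# Sketch of why shrinking wires hurts: wire delay ~ R * C, and resistance
# R = rho * L / (W * H) rises as width W and height H shrink. Shorter local
# wires also lose capacitance, but wires whose length does NOT scale down
# (e.g. wires crossing the whole chip) just get slower.
# Illustrative numbers only.

def wire_rc_delay(rho, length, width, height, cap_per_len):
    resistance = rho * length / (width * height)   # ohms
    capacitance = cap_per_len * length             # farads
    return resistance * capacitance                # seconds (order of magnitude)

rho = 1.7e-8   # resistivity of copper, ohm*m
cap = 2e-10    # capacitance per meter of wire (illustrative)

# A chip-crossing wire before and after a 2x shrink of its cross-section
# (the chip, and hence the wire's length, stays the same size):
before = wire_rc_delay(rho, 0.01, 100e-9, 200e-9, cap)
after = wire_rc_delay(rho, 0.01, 50e-9, 100e-9, cap)
print(after / before)  # 4x slower after the shrink
```

This is why long, chip-spanning wires are a growing bottleneck even as the transistors themselves keep shrinking.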

Conclusion: Can’t We Do Better?

That only explains why designing faster chips is difficult. But these problems with chip design were conquered before, right? Why can’t they be overcome again with sufficient research and development?

Thanks to the limitations of physics and the current transistor material designs, increasing clock speed is not currently the best way to increase computational power. Today, greater improvements in power come from multi-core processor designs. As a result, we see chips like AMD’s recent offerings, with a dramatically increased number of cores. Software design hasn’t yet caught up to this trend, but it does seem to be the primary direction of chip design today.

Faster clock speeds don’t necessarily mean faster and better computers. Computer capability can still increase even if processor clock speed plateaus. Trends in multi-core processing will provide greater processing power at the same headline speeds, especially as software parallelization improves.
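How much parallelization matters can be seen from Amdahl’s law, which caps the speedup from extra cores by the fraction of a program that must run serially. A minimal sketch, with illustrative fractions:

```python
# Amdahl's law: the speedup from adding cores is limited by the fraction of
# the program that must run serially. This is why software parallelization
# matters as much as raw core count. Fractions below are illustrative.

def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Even with 64 cores, a program that is 90% parallelizable tops out near 9x:
print(amdahl_speedup(0.90, 64))  # ~8.8x
print(amdahl_speedup(0.99, 64))  # ~39x: better parallelization, bigger payoff
```

In other words, piling on cores only pays off when software shrinks its serial portions, which is exactly the catch-up the article describes.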

5 comments

We are approaching the limit of how small a transistor can be. What is needed is a revolutionary advance in technology analogous to the vacuum tube to transistor advance. Is the quantum computer the answer or is the next advance in CPUs going to be based on yet undiscovered technology?

“Increase in clock speeds also implies a voltage increase, which leads to a cubic increase in power consumption for the chip.” My understanding is that the slew rate of the signal transitions is a limiting factor for clock speeds. So as the clock gets faster, the voltage must be decreased to reduce the time for the state to change. This is one of the reasons for moving from 5v logic to 3.3v.

It will either be quantum computing, photonic circuitry, or silicon being replaced by graphene, which will allow higher current densities with no increase in heat due to the lower resistance of carbon versus silicon in that application. But even that last will reach a physical limit. Photonics has the ability to use a 3-D architecture that will allow for a more efficient structure as compared to the present 2-D architecture.

It will be interesting to see what will be coming out of the labs over the next decade or so.

“Software design hasn’t yet caught up to this trend, but it does seem to be the primary direction of chip design today.” That’s not true: aren’t contemporary development tools like Visual Studio and Intel’s own offerings designed to tackle the multi-core era? Now if you are saying that developers are not trained enough to tackle the challenge, that’s different, and there are probably some good reasons for that (beyond the idea that they are all “lazy”). Perhaps this is where our advances in machine learning can play a role in helping to optimize application development to better take advantage of all these cores. 🤔