Moore’s Law states that the number of transistors that can be placed inexpensively on an integrated circuit will double every two years, and it’s been the backbone of industry projections for the last twenty years.

But is Moore’s Law at an end? According to NVIDIA chief scientist and vice president Bill Dally, writing for Forbes on the limitations of current CPU technology, the answer is yes, at least for the power-scaling part of Moore’s Law.

Luckily, Dally has a solution: parallel computing.

Going forward, the critical need is to build energy-efficient parallel computers, sometimes called throughput computers, in which many processing cores, each optimized for efficiency, not serial speed, work together on the solution of a problem. A fundamental advantage of parallel computers is that they efficiently turn more transistors into more performance. Doubling the number of processors causes many programs to go twice as fast. In contrast, doubling the number of transistors in a serial CPU results in a very modest increase in performance – at a tremendous expense in energy.
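Dally’s contrast can be sketched with a toy model. The square-root scaling for a single big core below is Pollack’s rule, a commonly cited rule of thumb that is my assumption here, not something from Dally’s article; the linear parallel scaling assumes an ideally parallel workload.

```python
# Toy model (not from Dally's article) contrasting the two ways
# to spend a transistor budget.
import math

def serial_speedup(transistor_factor):
    """Assumed Pollack's rule: one big core's performance grows
    only with the square root of its transistor count."""
    return math.sqrt(transistor_factor)

def parallel_speedup(transistor_factor):
    """Ideal throughput machine: the same transistors spent on
    more small cores scale performance linearly (for parallel work)."""
    return float(transistor_factor)

for factor in (2, 4, 8):
    print(f"{factor}x transistors: serial ~{serial_speedup(factor):.2f}x, "
          f"parallel ~{parallel_speedup(factor):.1f}x")
```

Under these assumptions, doubling the transistor budget buys a serial CPU only about a 1.4x speedup, while the same budget spent on more cores roughly doubles throughput, which is the asymmetry Dally is pointing at.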

On one hand, Dally is clearly a biased source: NVIDIA’s business is built on parallel computing solutions. On the other hand, he has a point: you can’t simply daisy-chain Intel’s multicore processors together and expect the result to make sense in terms of power efficiency; they consume too much energy per instruction.
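There is a caveat hiding in Dally’s “many programs go twice as fast”: the claim only holds for programs whose serial fraction is small, which is what Amdahl’s law makes precise. A minimal sketch, where the 5% serial fraction is an illustrative assumption of mine:

```python
# Amdahl's law: overall speedup on n cores when a fraction
# `serial_fraction` of the work cannot be parallelized.
def amdahl_speedup(cores, serial_fraction):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Even a 5% serial fraction (assumed here for illustration)
# pulls the speedup well below linear as core counts grow.
for cores in (2, 4, 16, 64):
    print(f"{cores} cores: ~{amdahl_speedup(cores, 0.05):.2f}x")
```

With no serial fraction at all, two cores do give exactly 2x; with 5% serial work, 64 cores deliver well under 20x, so throughput machines pay off only on workloads that parallelize well.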

My guess is that Moore’s Law isn’t dead… but it’s dead for now, until energy efficiency can catch up.