…in about ten years or so, we will see the collapse of Moore’s Law. In fact, already we see a slowing down of Moore’s Law. Computer power simply cannot maintain its rapid exponential rise using standard silicon technology. Intel Corporation has admitted this.

It’s true. At the International Supercomputing Conference 2011 last June, Intel architecture group VP Kirk Skaugen acknowledged that Moore’s Law alone would not be enough for the company to ramp up to exascale performance by 2018. But he went on to tout Intel’s tri-gate technology (the company’s so-called “3D” processors) as the solution, which Skaugen claimed translates to “no more end of life for Moore’s Law.”

Moore’s Law, introduced by Intel co-founder Gordon Moore in a 1965 paper, was never a law in any scientific sense — it’s always been more a rule of thumb (that, and “Moore’s Rule” sounds so much less authoritative). And as others have pointed out, given Intel’s dominance in the chip industry for much of the period in which Moore’s Law has applied (or appeared to), there’s a self-fulfilling prophecy angle in which the dominant industry player sets the pace for its own benefit.

Moore also clarified in a 2003 interview that the idea of computer power doubling every 18 months — sometimes mistaken for the basis of Moore’s Law — was advanced by Intel’s David House. While such performance gains were consistent with Moore’s prediction that transistor counts would double every two years, House apparently calculated that transistors would also get faster, so that computing performance would double every 18 months (in a later 2005 interview, Moore admitted “we’re doing a little better than [24 months]”).
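The arithmetic behind the two doubling periods is easy to check. A quick sketch (the function name is mine, not anything from Moore or House):

```python
# Annual growth factor implied by a doubling period given in months.
def annual_growth(doubling_months):
    return 2 ** (12 / doubling_months)

# Moore's transistor-count doubling: every 24 months.
print(round(annual_growth(24), 2))     # 1.41x per year
# House's combined speed-plus-density estimate: every 18 months.
print(round(annual_growth(18), 2))     # 1.59x per year

# The gap compounds: over a decade, 24-month doubling yields
# 2**5 = 32x, while 18-month doubling yields roughly 100x.
print(round(annual_growth(24) ** 10))  # 32
print(round(annual_growth(18) ** 10))  # 102
```

Which is why the distinction matters: over ten years the two rules of thumb predict performance levels that differ by more than a factor of three.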

Despite Intel’s recent advances with tri-gate processors, Kaku argues the company has merely prolonged the inevitable: the law’s collapse due to heat and leakage issues.

“So there is an ultimate limit set by the laws of thermodynamics and set by the laws of quantum mechanics as to how much computing power you can do with silicon,” says Kaku, noting “That’s the reason why the age of silicon will eventually come to a close,” and arguing that Moore’s Law could “flatten out completely” by 2022.

Where do we go once Gordon Moore’s axiom runs out of steam? Kaku hypothesizes several options: protein computers, DNA computers, optical computers, quantum computers and molecular computers. And then he makes a bet:

If I were to put money on the table I would say that in the next ten years as Moore’s Law slows down, we will tweak it. We will tweak it with three-dimensional chips, maybe optical chips, tweak it with known technology pushing the limits, squeezing what we can.

Kaku then invokes parallelism, a concept that’s been around for decades, as another stop-gap measure. But assuming the exponential demand for processing power holds, “Sooner or later even three-dimensional chips, even parallel processing, will be exhausted and we’ll have to go to the post-silicon era,” says Kaku.
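Both parallelism’s appeal and its limits show up in a few lines of code. A minimal sketch (function names and the chunking scheme are mine) that splits a CPU-bound job across worker processes:

```python
from concurrent.futures import ProcessPoolExecutor

def count_primes(bounds):
    # Trial-division prime count over [lo, hi); deliberately CPU-bound.
    lo, hi = bounds
    def is_prime(n):
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))
    return sum(1 for n in range(lo, hi) if is_prime(n))

def parallel_count(limit, workers=4):
    # One chunk per worker: wall-clock time shrinks roughly with the
    # number of cores, but only until the chip runs out of cores,
    # which is exactly why parallelism is a stop-gap, not a cure.
    step = limit // workers
    chunks = [(i * step, limit if i == workers - 1 else (i + 1) * step)
              for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(count_primes, chunks))

if __name__ == "__main__":
    print(parallel_count(10_000))  # 1229 primes below 10,000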

How would a molecular computer work? Imagine molecules in the shape of a valve, says Kaku.

You turn the valve one way and the electricity stops through that molecule. You turn it the other way and electricity flows through that molecule just like a pipe and a valve because that’s what a transistor is, a switch, except this switch is molecular rather than a switch made out of piping.

But molecular computing has mass production issues because — surprise! — molecules are teeny-tiny. Why in the world, then, would Kaku invoke even smaller particle-based computers as a viable alternative?

Because quantum computing could produce the “ultimate computer.” Kaku doesn’t explain why, but I’ll summarize: a classical digital computer sets each bit to either “0” or “1,” but a quantum computer’s bits (qubits) can be “0” and “1” at the same time, a principle called “superposition” that allows certain calculations to run incredibly fast. The problem, and you knew there’d be one, is something called “decoherence.” Kaku explains:
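The “0 and 1 at the same time” idea can be sketched numerically as a pair of complex amplitudes. This is a toy state-vector simulation (my own names throughout), not how physical quantum hardware works:

```python
import math

# A qubit is a pair of complex amplitudes (a, b) for |0> and |1>;
# measuring it yields 0 with probability |a|^2 and 1 with |b|^2.
def hadamard(state):
    # The Hadamard gate rotates a definite bit into an equal superposition.
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def probabilities(state):
    a, b = state
    return (abs(a) ** 2, abs(b) ** 2)

zero = (1 + 0j, 0 + 0j)     # a classical "0"
both = hadamard(zero)       # now "0" and "1" at the same time
print(probabilities(both))  # approximately (0.5, 0.5)

# Applying the gate again interferes the amplitudes back to "0":
print(probabilities(hadamard(both)))  # approximately (1.0, 0.0)
```

Decoherence, in these terms, is the outside world scrambling those carefully maintained amplitudes before the interference step can run.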

Let’s say I have two atoms and they vibrate in unison. If I have two atoms and they vibrate in unison I can shine a light wave and flip one over and do a calculation, but they have to first start vibrating in unison. Eventually an airplane goes over. Eventually a child walks in front of your apparatus. Eventually somebody coughs and then all of the sudden they’re no longer in synchronization. It gets contaminated by disturbances from the outside world. Once you lose the coherence, the computer is useless.

Given that, Kaku says that when Moore’s Law finally collapses by the end of the next decade, we’ll “simply tweak [it] a bit with chip-like computers in three dimensions.” Beyond that, he says “we may have to go to molecular computers and perhaps late in the 21st century quantum computers.”

One thing we haven't done yet is optimize software. We haven't done it because we haven't needed to. We want bigger programs that do more, and we want them now, so we (speaking as a software engineer) just write the code as fast as we can and count on a new processor to make it run as fast as people want. If processors stop getting faster, there are a million ways to make the code run faster, but it takes more time, much more time, to create that well-crafted code.
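A toy illustration of the point (the example and names are mine): the same machine computes the same answer orders of magnitude faster when the code is written with care.

```python
import time
from functools import lru_cache

def fib_naive(n):
    # Exponential time: recomputes the same subproblems over and over.
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_cached(n):
    # Linear time: each subproblem is computed once and remembered.
    return n if n < 2 else fib_cached(n - 1) + fib_cached(n - 2)

for f in (fib_naive, fib_cached):
    start = time.perf_counter()
    result = f(30)
    elapsed = time.perf_counter() - start
    print(f"{f.__name__}: {result} in {elapsed:.4f}s")
```

On a typical machine the cached version finishes thousands of times faster, with no new processor required.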

By offering components with increased speed, density, and reliability at lower cost, POET offers the semiconductor industry a way to push Moore's Law to the next cadence level, overcoming current silicon-based bottlenecks and potentially changing the roadmap for a broad range of applications, such as smartphones, tablets, and wearable computers.

The POET platform is currently the basis for a number of key commercial and military projects now in the delivery pipeline - including optical code division multiple access (OCDMA) devices for avionics systems, combined RF/optical phased arrays, optoelectronic directional couplers, and ultra-low-power random access memory (RAM).

@limpingandroid Finally, someone said it! Thanks. Being a software engineer myself, I know that we can write code that runs several times faster on the same machine without changing a bit of the hardware. But guess what, we don't! Because upgrading the hardware isn't nearly as expensive as having programmers take more time to write quality, high-performance software. There's always a push to instead finish the code ASAP, so managers can get their "well-deserved promotions".

Another thing: I noticed that my Pentium III computer running Windows 98 was no slower than my quad-core computer running Windows 7!! Both get stuck!!! How's that, when the new chip is supposedly many times faster and has at least 100 times more RAM?? Well, a task that a text-based screen or a simple graphics GUI could do easily now has to be dressed up with slick-looking graphics. Looks have become so much more important than utility, even at workplaces, that most of the computing power our processors add gets wasted on managing graphics. Then come sound, heavy data transfer, heavy I/O, all of that. There you have a monster of a machine not a bit more useful than that ignored junk box in your garage.