Report from the Trenches

Soaring at 20,000 feet above the sea doesn't take Dr. Frank far from his passion for reversible computing. He attended an industry-sponsored meeting focused on reversible computing and wrote down his thoughts. You need to read this if you don't want to see your designs come to a grinding halt because of the laws of physics. Dr. Frank gives an honest, down-to-earth assessment of the state of reversible computing, what needs to be done, and what the critics say about it. Onward to zetta!

I'm writing this at 20,000 feet, on my way home from the MARCO-NCN workshop on Nano-Scale Reversible Computing, held yesterday at MIT. This was, as far as I know, the first industry-sponsored meeting to focus specifically on the topic of reversible computing, which is (as I have discussed in my previous blog entries, and as the best-informed physicists and engineers already know) the only possible way to continue improving computer power-performance indefinitely, circumventing various power dissipation limits that we are already beginning to grind up against today. [For the uninitiated, reversible computing refers to the recycling of energy in computing through the use of mechanisms that are almost thermodynamically reversible, dissipating negligible energy, and that must therefore also be logically reversible, mutating bits in-place in an invertible manner, rather than just overwriting them.]
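To make the logical-reversibility requirement concrete, here is a minimal sketch in Python (my own illustration, not anything presented at the workshop). An ordinary overwrite maps two distinct states onto one and so has no inverse; a controlled-NOT updates a bit in place invertibly, and applying it twice recovers the original state:

```python
def overwrite(a, b):
    """Irreversible: b is clobbered, so the prior (a, b) cannot be
    recovered. Landauer's principle charges at least kT ln 2 of
    dissipation per bit erased this way."""
    return a, a

def cnot(a, b):
    """Reversible: b is updated in place to a XOR b. No information
    is destroyed, and the gate is its own inverse."""
    return a, a ^ b

state = (1, 1)
once = cnot(*state)    # -> (1, 0)
twice = cnot(*once)    # -> (1, 1): applying CNOT twice undoes it
assert twice == state  # invertible, hence no forced dissipation

assert overwrite(1, 0) == overwrite(1, 1)  # two states -> one: lossy
```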

For those who don't wish to see computer performance stall in the fairly near future, and who therefore should want to see reversible computing pursued aggressively, my report on the meeting brings mixed news: partly good, but not unequivocally so.

The good part is that there were enough people at the meeting who truly understand the relevant facts of fundamental physics that we managed (I think) to fairly effectively demolish the mistaken impression (which some attendees held prior to the meeting) that the very concept of reversible computing has some clear inconsistency with fundamental physics. The fact is, no such inconsistency has ever been found, despite the large number of very bright people (beginning with von Neumann) who have tried to find one over the last half-century. At the workshop, researchers from Likharev to Lloyd to Lent hammered home, again and again, the point that a wide variety of quantum systems can quite clearly perform reversible transitions between logically distinct states with many orders of magnitude less energy dissipation than the von Neumann-Landauer bound of kT ln 2, which holds rigorously for all irreversible bit operations.
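For a sense of scale, the bound itself is easy to compute; here is a quick back-of-envelope calculation of my own, using nothing beyond the physical constants involved:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # room temperature, K

# Minimum dissipation per irreversible bit operation (Landauer)
E_landauer = k_B * T * math.log(2)
print(f"kT ln 2 at {T:.0f} K = {E_landauer:.3e} J "
      f"= {E_landauer / 1.602176634e-19:.4f} eV")
# -> about 2.87e-21 J, i.e. roughly 0.018 eV per bit erased;
#    reversible transitions are not subject to this floor.
```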

This is not to say that we can yet honestly claim to have proven, as a matter of mathematical certainty, that reversible computers that are extremely efficient in all respects (low-power, high-frequency, compact, cheap, 3D-scalable) can be built. To be fair to our opponents, as well as honest with ourselves, I think we must admit that firmly establishing that "good" reversible computing is possible would at least require a fully detailed and completely realistic physical model of a fully functional reversible computer, something which frankly does not yet exist. The numerous theoretical models of reversible computers that do exist at the moment are either arguably not quite physically complete, or dramatically inefficient in some respect (or both). But we certainly have no good reason yet to think that a good, realistic physical model of a very efficient, highly reversible technology cannot eventually (with continued research and innovation) be found, prototyped, and developed to the point of commercial mass-production.

The bad news is that there were also a number of people at the meeting who seemed strongly enough motivated to suppress reversible computing that I fear they may together have managed to distract much of the audience's attention away from the critical point (which Craig Lent of Notre Dame and I emphasized) that we must continue aggressively pursuing reversible solutions, even if this means that we will probably be forced to abandon the traditional semiconductor scaling path and instead follow a very non-traditional alternative (e.g., 3D integration of highly adiabatic, super-parallelized circuits composed of relatively large, non-leaky FETs driven by high-Q resonant clocks), or even move to a technology not based on the semiconductor field effect at all, such as Lent's quantum-dot cellular automata or Y-junction electron waveguide approaches. The simple physical fact is that the only alternative to pursuing some path toward reversible computing is for computer performance (within any reasonable power constraints) to stop improving within a few orders of magnitude of the maximum levels attainable today. In other words, the only alternative is to throw up our hands, give up, and allow technology to reach a dead end fairly soon.
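To illustrate why large, non-leaky FETs driven slowly by resonant clocks can help, here is a rough sketch of my own of the standard adiabatic-charging argument; the R, C, and V values below are purely illustrative, and the recycling of the clock energy in a high-Q resonator is assumed rather than modeled:

```python
# Charging a load capacitance C through resistance R up to voltage V.
# Conventional switching dumps ~ (1/2) C V^2 as heat every cycle;
# an adiabatic ramp of duration t_ramp dissipates ~ (R*C / t_ramp) C V^2,
# which shrinks without bound as the ramp is slowed.

C = 1e-15   # load capacitance, farads (illustrative)
R = 1e4     # effective channel resistance, ohms (illustrative)
V = 1.0     # supply voltage, volts (illustrative)

E_conv = 0.5 * C * V**2
for t_ramp in (1e-9, 1e-8, 1e-7):
    E_adia = (R * C / t_ramp) * C * V**2
    print(f"t_ramp = {t_ramp:.0e} s: adiabatic ~ {E_adia:.2e} J "
          f"vs conventional {E_conv:.2e} J")
# Even a 1 ns ramp already beats conventional CV^2/2 by ~50x here.
```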

To give the reader a sense of the aforementioned distractions, I want to quickly survey some of the misconceptions that were expressed (or perhaps "smokescreens" thrown up?) by some of our opponents at the meeting:

The myth that "nanoscale" necessarily implies "unreliable," minus the proper physical qualifications of this assertion. For example, a digital storage device that uses potential energy barriers to maintain logical or structural stability is only unreliable if the activation energy required for an undesired state transition is not large compared to the ambient temperature. But, there are plenty of nanoscale structures (e.g. many covalently bonded molecules) which incorporate multi-eV barriers against decay, which is large enough to keep these structures extremely stable even at room temperatures. (If this were not true, then ordinary molecular solids in our everyday environment that are held together by such bonds, such as plastics, would disintegrate spontaneously and rapidly.) Similarly, nanoscale quantum-dot, molecular and electron-waveguide computing schemes can be designed to incorporate comparably high energy barriers preventing undesired electronic transitions, and so can have negligibly low error rates even at room temperature. Even sub-eV barriers could be well tolerated by operating at lower temperatures and with improved shielding from external noise, thereby lowering error rates exponentially. As for manufacturing defects, these generally can be dealt with via manufacturing process improvements, or by post-fabrication processes of device calibration or fault isolation.

The misconception (stemming from the previous myth) that we must necessarily pursue approaches to nanocomputing that have inherent reliability problems, requiring rather inefficient concatenated error-correcting codes to solve, when this is not the case. Instead, we can choose to continue to insist on highly reliable devices. (In fact, von Neumann showed long ago that computing reliably with unreliable components carries an enormous redundancy overhead, so that inherently reliable devices are greatly preferred to extensive error correction, which is why we use extremely reliable devices today.)

Even the insistence that our logic devices must be aggressively "nanoscale" is not really a valid goal in and of itself; it is only worthwhile to the extent that it enables better system-level figures of merit, such as good cost-performance and power-performance. (Personally, I think that these goals can be met even by deep-nanoscale devices, though probably not by nano-FETs.)

The myth that optimal CMOS devices must be nanoscale ones in which leakage power is a huge problem. In fact, the optimal MOSFET size actually becomes larger (since this suppresses leakage) when highly adiabatic logic design is considered as a viable alternative to voltage scaling for reducing switching energy, as sketched below. The optimal device size continues increasing for as long as resonator Q factors can be improved and device cost can decrease for reasons independent of device size (such as equipment depreciation and process improvements). However, it is also true that the gains along this particular alternative path (which can potentially continue making CMOS better almost indefinitely) mount only very slowly.
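As a toy version of that size/leakage tradeoff (my own sketch, with purely illustrative numbers, holding the load capacitance fixed for simplicity): the per-operation energy is roughly the adiabatic loss, which falls as 1/t, plus leakage, which grows as t. Minimizing over the ramp time t shows that a larger, higher-resistance but much less leaky device reaches a lower energy floor:

```python
import math

def optimal_energy(R, C, V, P_leak):
    """Minimize E(t) = (R*C/t)*C*V**2 + P_leak*t over ramp time t.
    With A = R*C**2*V**2 and B = P_leak, the optimum is at
    t* = sqrt(A/B), giving a floor of E_min = 2*sqrt(A*B)."""
    A = R * C**2 * V**2
    return math.sqrt(A / P_leak), 2.0 * math.sqrt(A * P_leak)

# Illustrative only: a 'larger' device with 10x the channel resistance
# but 1000x less leakage still achieves a ~10x lower energy floor.
for label, R, P in (("small, leaky", 1e4, 1e-9),
                    ("large, non-leaky", 1e5, 1e-12)):
    t_opt, E_min = optimal_energy(R, 1e-15, 1.0, P)
    print(f"{label}: t* = {t_opt:.1e} s, E_min = {E_min:.2e} J")
```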

Unjustified insistence that CMOS is the last word in digital logic and the only technology worth considering, together with cavalier dismissal of any alternatives. For example, one self-proclaimed "CMOS bigot" at the workshop dismissed quantum-dot technologies as "fru-fru" without any technical justification. Such attitudes are hardly a basis for sound decision-making about whether a given long-term research investment is worth making. At this early stage, novel post-CMOS technologies are of course nowhere near as mature as CMOS itself, but if they can potentially reach performance levels that CMOS can never attain, then they may nevertheless be well worth pursuing.

I wish that I could have done more at the meeting to dispel these and other misconceptions, but unfortunately I was not invited to present a complete talk, and was reduced primarily to heckling from the audience. I was only given the chance to present a couple of slides, provided by my colleague Erik DeBenedictis of Sandia National Laboratories, who points out that reversible computing will in fact be necessary if we wish to reach the zettaflops performance levels desired for future supercomputing applications of national importance. (This was the topic of a recent workshop on "extreme supercomputing"; see www.zettaflops.org.)

It is always difficult to predict after these meetings what the ultimate outcome will be. Will the most influential people who were present (including some high-up SRC/ITRS folks) take away the key lessons and remember them? Will a new drive towards reversible post-CMOS technologies begin to accelerate? Or will forces that are heavily invested in the status quo continue to successfully squelch any energetic thrusts to upset it, regardless of the enormous efforts that may in fact be required if we are to make substantial future progress? Only time will tell. But in this fight for the very soul of future technology, the stakes are very high, and so I think that the struggle to bring the concept of reversible computing to fruition will likely be won only gradually, over the course of a long, drawn-out series of battles. I, for one, intend to keep fighting the good fight for as long as I can, observing what happens, and reporting on it. So, dear reader, stay tuned for future reports from the trenches.