Are "reversible" computers more energy efficient, faster?

A group of researchers at the Department of Computer and Information Science and Engineering (CISE) at the University of Florida is working to make a reality a radical idea for making computers more energy efficient -- as well as smaller and faster.

Gainesville, FL. --- As the U.S. Congress continues work on a federal energy bill, a group of researchers at the Department of Computer and Information Science and Engineering (CISE) at the University of Florida is working to make a reality a radical idea for making computers more energy efficient -- as well as smaller and faster.

The goal is to re-engineer the integrated circuits that perform all computing operations to re-use, or recycle, most of the large amount of wasted energy they currently throw off in the form of heat. So-called "reversible computing" would not only reduce computer chips' power consumption, it also could boost their speed, because these chips are becoming so fast that the heat they generate limits the speed at which they can operate without overheating and malfunctioning.

The research comes at a time when computers are estimated to consume as much as 10 percent of electricity in the United States, and chips are rapidly reaching the upper limits of their heat tolerance, said Michael Frank, an assistant professor at CISE. "The fastest processors available today dissipate on the order of 100 watts of power in the form of heat," he said, or about as much as a large light bulb. "The main reason you can't run them faster is because they get too hot. If you could make them produce less heat in the first place, you could end up running them faster overall, especially if you want to pack a lot of chips together."

Frank and his collaborator Huikai of the ECE department were recently awarded a $40,000 grant by the Semiconductor Research Corporation, a consortium of major chipmakers, to design a resonant MEMS-based power supply for adiabatic circuits.

Frank, who first worked on reversible computing as a doctoral student at the Massachusetts Institute of Technology, heads UF's Reversible & Quantum Computing Research Group. Among other recent publications and presentations, he presented three papers dealing with topics related to reversible computing this summer, including "Reversible Computing: Quantum Computing's Practical Cousin" at a conference in Stony Brook, N.Y.

Reversible computing, the intellectual seeds of which date back to the early 1960s, means setting up logic operations -- which manipulate the 0s and 1s at the core of digital computation -- so they can be undone or reversed. The process differs from the conventional approach, which performs operations and then discards the intermediate information they produce. For example, when a computer "erases" something, what it does physically is ground one part of a circuit that holds a charge, in effect converting the charge -- and the information it represents -- into heat, Frank said. When chips perform millions or billions of erasing and other operations in a short time, the total amount of heat becomes substantial, limiting both the performance of the chip and the number of chips that can be packed together in a small space, he said.
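The heat cost of erasure that Frank describes has a well-known theoretical floor: the Landauer limit of kT ln 2 joules per erased bit. The following quick calculation (not from the article; the erasure rate is an illustrative assumption) shows both the limit itself and how small the corresponding power floor would be for a chip erasing bits at a very high rate.

```python
import math

# Landauer limit: the minimum heat dissipated per irreversibly erased
# bit is k*T*ln(2), where k is Boltzmann's constant and T is the
# absolute temperature.
k_B = 1.380649e-23      # Boltzmann constant, J/K
T = 300.0               # room temperature, K

landauer_joules_per_bit = k_B * T * math.log(2)
print(f"Landauer limit at 300 K: {landauer_joules_per_bit:.2e} J/bit")

# A hypothetical processor performing 1e18 bit erasures per second
# would, at this theoretical minimum, dissipate only:
erasures_per_sec = 1e18
floor_watts = erasures_per_sec * landauer_joules_per_bit
print(f"Floor for 1e18 erasures/s: {floor_watts:.2e} W")
```

Real chips dissipate many orders of magnitude more than this per logic operation, which is why recycling that energy, rather than discarding it, is where the practical gains lie.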

In fact, unless reversible computing is achieved, computer chips are expected to reach the upper limit of their performance capabilities within the next three decades, effectively halting the rapid advances in speed that have driven the information technology revolution, Frank said. "Reversible computing is absolutely the only possible way to beat this limit," he said.

Reversible computing seeks to configure integrated circuits in such a way that they can use their current state to recover previous states -- in other words, rather than building up and tossing away unwanted information, the chips "uncompute" it fluidly, with little power expenditure or heat generation. Researchers hope to achieve such results by incorporating tiny oscillators, or spring-like devices, in the circuits. In theory, these oscillators could recapture most of the energy expended in a calculation and reuse it in other calculations. The concept is somewhat analogous to hybrid cars now on the market, which take the energy generated during braking and recycle it into electricity used to power the car.

"Rather than throwing away all the circuit's energy constantly, it essentially bounces back and forth, in a more elastic fashion," Frank said.

While he was at MIT, Frank worked on a team that built several simple prototypes of reversible chips. At the Department of Computer and Information Science and Engineering at the University of Florida, he is advancing the field by adapting resonators, oscillator-like devices drawn from microelectromechanical systems, to power computer circuits. Microelectromechanical systems, or MEMS, are tiny mechanical and electronic devices currently found in cell phones, air bag sensors and other products. Frank and other researchers he collaborates with at CISE plan to reconfigure these components, tailoring them to drive reversible logic circuits, he said.

Frank also has been analyzing the extent to which reversible technologies can become more economical than traditional ones for high-performance computing. One of his most recent theoretical studies indicated that reversible machines could potentially become thousands of times faster, more energy efficient, and more cost-effective than other approaches over the course of the next few decades.

My major (past & present) research area concerns the use of low-power techniques in computing. Not only is low power important in portable/embedded devices, where energy supplies may be limited, but also, it is becoming increasingly important for all compact, high-performance systems, as heat dissipation increasingly becomes a limiting factor on performance. For example, you can run today's Pentiums much faster if you cool them aggressively - a fact which is already the basis of some existing businesses.

However, given necessary limitations on cooling technologies, it turns out that with normal computing techniques, raw computer performance is ultimately limited in proportion to a system's outer surface area. This severely limits our ability to pack circuits densely into small-diameter spaces. Unfortunately, compact system designs are vital, not only for increased portability, but also in order to reduce communication delays in aggressive parallel algorithms.
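The surface-area limit above can be made concrete with a rough scaling calculation. Heat can only leave through the outer surface (growing as the square of the machine's size), while device count grows with the volume (the cube), so heat-limited throughput per device falls as the machine gets bigger. All numbers below are assumed, illustrative values, not figures from this text:

```python
# Sketch with assumed numbers: heat-limited throughput of a cubical
# conventional (irreversible) machine scales with surface area, while
# device count scales with volume.
flux_limit = 100.0     # assumed cooling limit, W/cm^2
e_per_op   = 1e-15     # assumed heat per irreversible operation, J
dev_density = 1e9      # assumed devices per cm^3

for side in (1.0, 10.0, 100.0):             # cube edge length, cm
    area   = 6 * side**2                    # outer surface, cm^2
    volume = side**3                        # cm^3
    max_ops = flux_limit * area / e_per_op  # heat-limited ops/s
    devices = dev_density * volume
    print(f"side {side:5.0f} cm: {max_ops:.1e} ops/s across "
          f"{devices:.1e} devices -> {max_ops/devices:.1e} ops/s/device")
```

Because ops/s per device falls as 1/side, dense conventional machines leave most of their hardware idle; this is the scaling penalty reversible techniques aim to remove.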

To enable better performance scaling for parallel machines, and higher performance per watt in energy-limited systems, my colleagues and I have been investigating the theory, design, and application of novel, thermodynamically reversible computing techniques. Recently developed "adiabatic" digital electronics enables most of a system's active electrical energy to be reused from one clock cycle to the next, rather than being dissipated as heat on every cycle.

The technique allows us to buy increased energy efficiency, at the cost of lower transistor-count efficiency. Overall, it permits (1) a portable device to perform much more total computation before its battery runs out, (2) a compact heat-limited system to compute much faster given the same size limit and cooling system, or (3) a large supercomputer to run aggressive parallel algorithms faster, period.

The scaling advantages of the reversible approach have been shown to follow from fundamental principles of physics, and therefore, these advantages will remain, regardless of any future developments in computing technology! (Superconductors, quantum computing, molecular computing, etc.) So this is definitely an important research direction to explore for the long run.

Interestingly, in this long term view, taking optimal advantage of thermodynamically reversible techniques requires not just different electronics, but also changes to all higher levels in computing - different logic designs and CPU architectures; different programming languages and algorithms. The reason is that in the reversible approach, as in the laws of physics themselves, information is a conserved quantity - it cannot be destroyed, although it can be uncomputed by careful, deliberate design. If information is simply discarded, as occurs continually (and implicitly) at all levels in traditional computing, it inevitably becomes heat.
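The idea that information can be "uncomputed by careful, deliberate design" rather than discarded is captured by classic reversible logic gates. The sketch below (my illustration, not code from this research group) uses the Toffoli gate, a standard reversible primitive: it flips a target bit only when both control bits are 1, so an AND result can be computed into a zeroed ancilla bit and later uncomputed by applying the same gate again, with no information ever destroyed:

```python
# The Toffoli (controlled-controlled-NOT) gate: a reversible logic
# primitive. It flips c only when a and b are both 1, and it is its
# own inverse -- applying it twice restores the original state.
def toffoli(a, b, c):
    return a, b, c ^ (a & b)

# Compute AND reversibly into an ancilla bit initialized to 0 ...
a, b, anc = 1, 1, 0
a, b, anc = toffoli(a, b, anc)
print(anc)             # prints 1 -- the AND of a and b

# ... then uncompute: the same gate undoes the operation, recovering
# the cleared ancilla without discarding any information.
a, b, anc = toffoli(a, b, anc)
print((a, b, anc))     # prints (1, 1, 0)
```

Programming in this style, where every intermediate result must eventually be uncomputed rather than overwritten, is exactly the kind of change to logic design, languages, and algorithms described above.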

Therefore, in the long run, a whole new paradigm within computer science will eventually need to develop (supplementing, not supplanting the old one), in order for us to take full advantage of reversible techniques. Fortunately, we do not have to wait for this "paradigm shift" to happen; we can start taking advantage of these techniques in a more limited fashion, for real applications today.
Reversible Computing Trends

Results from a detailed numerical model of cost-efficiency of reversible versus irreversible computers in future generations of technology.

The cost-efficiency of irreversible computing eventually hits a thermodynamic brick wall, and cannot improve further as long as the cost of energy (and/or the heat flux limit of the cooling technology) is fixed. In contrast, the cost-efficiency of reversible computing can continue to improve far beyond this point, limited only by achievable energy leakage rates, which have no known fundamental lower limit. (However, for generating this graph, an arbitrary lower limit of 1 kT/ms/bit-device was assumed.) Note that the advantages of reversible computing could rise to 1,000-100,000x by the 2050s. This model even takes into account the algorithmic overheads of reversibility and the proportionality of energy dissipation to speed in adiabatic processes. These results were first published in Michael P. Frank, "Nanocomputer Systems Engineering," 2003 Nanotechnology Conference & Trade Show, Feb. 23-27, 2003, San Francisco, CA.