"The need for approximate computing is driven by two factors: a fundamental shift in the nature of computing workloads, and the need for new sources of efficiency," said Anand Raghunathan, a Purdue Professor of Electrical and Computer Engineering, who has been working in the field for about five years. "Computers were first designed to be precise calculators that solved problems where they were expected to produce an exact numerical value. However, the demand for computing today is driven by very different applications. Mobile and embedded devices need to process richer media, and are getting smarter– understanding us, being more context-aware and having more natural user interfaces. On the other hand, there is an explosion in digital data searched, interpreted, and mined by data centers."

A growing number of applications are designed to tolerate "noisy" real-world inputs and use statistical or probabilistic types of computations.

"The nature of these computations is different from the traditional computations where you need a precise answer," said Srimat Chakradhar, department head for Computing Systems Architecture at NEC Laboratories America, who collaborated with the Purdue team. "Here, you are looking for the best match since there is no golden answer, or you are trying to provide results that are of acceptable quality, but you are not trying to be perfect."

However, today's computers are designed to compute precise results even when it is not necessary. Approximate computing could endow computers with a capability similar to the human brain's ability to scale the degree of accuracy needed for a given task. New findings were detailed in research presented during the IEEE/ACM International Symposium on Microarchitecture, Dec. 7-11 at the University of California, Davis.

This inability to scale the degree of accuracy to the task at hand is inherently inefficient and saps energy.

"If I asked you to divide 500 by 21 and I asked you whether the answer is greater than one, you would say yes right away," Raghunathan said. "You are doing division but not to the full accuracy. If I asked you whether it is greater than 30, you would probably take a little longer, but if I ask you if it's greater than 23, you might have to think even harder. The application context dictates different levels of effort, and humans are capable of this scalable approach, but computer software and hardware are not like that. They often compute to the same level of accuracy all the time."

Purdue researchers have developed a range of hardware techniques to demonstrate approximate computing, showing a potential for improvements in energy efficiency.

Recently, the researchers have shown how to apply approximate computing to programmable processors, which are ubiquitous in computers, servers and consumer electronics.

"In order to have a broad impact we need to be able to apply this technology to programmable processors," Roy said. "And now we have shown how to design a programmable processor to perform approximate computing."

The researchers achieved this milestone by altering the "instruction set," which is the interface between software and hardware. "Quality fields" added to the instruction set allow the software to tell the hardware the level of accuracy needed for a given task. They have created a prototype programmable processor called Quora based on this approach.
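A hedged sketch of what a quality field might look like follows. The encoding and the truncating adder below are hypothetical illustrations of the general idea, the software annotating each operation with how much accuracy it needs, and are not Quora's actual instruction format or circuit design.

```c
#include <stdint.h>

/* Hypothetical instruction word: alongside the usual opcode and
   operands, a small "quality" field tells the hardware how many
   low-order result bits it may ignore (0 = fully exact). */
typedef struct {
    uint8_t  opcode;
    uint8_t  quality;
    uint32_t src_a, src_b;
} approx_insn;

/* Software model of one possible approximate adder: zero the low
   `quality` bits of each operand so the hardware could skip the
   carry logic for those bit positions and save energy. */
static uint32_t approx_add(uint32_t a, uint32_t b, uint8_t quality) {
    uint32_t mask = ~((1u << quality) - 1u);  /* quality 0 -> all ones */
    return (a & mask) + (b & mask);
}
```

With quality 0 the model is exact (1000 + 2345 = 3345); with quality 4 the low four bits of each operand are dropped and the result (3328) is close but cheaper to produce, which is the trade the quality field lets software request per task.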

"You are able to program for quality, and that's the real hallmark of this work," lead author Venkataramani said. "The hardware can use the quality fields and perform energy efficient computing, and what we have seen is that we can easily double energy efficiency."

In other recent work, led by Chippa, the Purdue team fabricated an approximate "accelerator" for recognition and data mining.

"We have an actual hardware platform, a silicon chip that we've had fabricated, which is an approximate processor for recognition and data mining," Raghunathan said. "Approximate computing is far closer to reality than we thought even a few years ago."