A Google engineer has warned that if the performance per watt of today's computers doesn't improve, the electrical costs of running them could end up far greater than the initial hardware price tag.

That situation wouldn't bode well for Google, which relies on thousands of its own servers.

"If performance per watt is to remain constant over the next few years, power costs could easily overtake hardware costs, possibly by a large margin," Luiz Andre Barroso, who previously designed processors for Digital Equipment Corp., said in a September paper published in the Association for Computing Machinery's Queue. "The possibility of computer equipment power consumption spiraling out of control could have serious consequences for the overall affordability of computing, not to mention the overall health of the planet."

Barroso's view is likely to go over well at Sun Microsystems, which on Tuesday launched its Sun Fire T2000 server, whose 72-watt UltraSparc T1 "Niagara" processor performs more work per watt than rivals. Indeed, the "Piranha" processor Barroso helped design at DEC, which never made it to market, is similar in some ways to Niagara, including its use of eight processing cores on the chip.

To address the power problem, Barroso suggests the very approach Sun has taken with Niagara: processors that can simultaneously execute many instruction sequences, called threads. Typical server chips today can execute one, two or sometimes four threads, but Niagara's eight cores can execute 32 threads.

Power has also become an issue in the years-old rivalry between Intel and Advanced Micro Devices. AMD's Opteron server processor consumes a maximum of 95 watts. Other components also draw power, but Barroso observes that in low-end servers, the processor typically accounts for 50 percent to 60 percent of total consumption.

Fears about energy consumption and heat dissipation first became a common topic among chipmakers around 1999, when Transmeta burst onto the scene. Intel and others immediately latched onto the problem, but coming up with solutions while still providing customers with higher performance has proved difficult. Although the rate of increase in power consumption has slowed somewhat, overall energy requirements continue to grow. As a result, a "mini-boom" has occurred for companies that specialize in heat sinks and other cooling components.

Sun loudly trumpets Niagara's relatively low power consumption, but it's not the only one to get the religion. At its Intel Developer Forum in August, Intel detailed plans to rework its processor lines to focus on performance per watt.

Over the last three generations of Google's computing infrastructure, performance has nearly doubled, Barroso said. But because performance per watt remained nearly unchanged, electricity consumption has also almost doubled.

If server power consumption grows 20 percent per year, the four-year cost of a server's electricity bill will be larger than the $3,000 initial price of a typical low-end server with x86 processors. Google's data center is populated chiefly with such machines. But if power consumption grows at 50 percent per year, "power costs by the end of the decade would dwarf server prices," even without power increasing beyond its current 9 cents per kilowatt-hour cost, Barroso said.
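The arithmetic behind Barroso's comparison can be sketched with a short calculation. This is a back-of-the-envelope illustration, not from the paper itself: it assumes a server running around the clock at a given average draw (the 300-watt starting point below is a hypothetical figure), billed at the 9-cents-per-kilowatt-hour rate the article cites.

```python
KWH_PRICE = 0.09        # dollars per kWh, the rate cited in the article
HOURS_PER_YEAR = 24 * 365

def electricity_cost(initial_watts, annual_growth, years):
    """Total electricity bill over `years` for a server whose average
    power draw grows by `annual_growth` (e.g. 0.20 for 20%) each year."""
    total = 0.0
    watts = initial_watts
    for _ in range(years):
        total += watts / 1000 * HOURS_PER_YEAR * KWH_PRICE  # kWh * price
        watts *= 1 + annual_growth
    return total

# Hypothetical 300 W server: four-year bill at 20% vs. 50% annual growth
print(round(electricity_cost(300, 0.20, 4)))
print(round(electricity_cost(300, 0.50, 4)))
```

The point the calculation makes is that the growth rate, not the starting draw, dominates: at 50 percent annual growth the four-year bill is roughly half again as large as at 20 percent, and the gap widens every additional year.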

Barroso's suggested solution is to use processors that can each execute many threads. His term for the approach, "chip multiprocessor technology," or CMP, is close to the "chip multithreading" term Sun employs.

"The computing industry is ready to embrace chip multiprocessing as the mainstream solution for the desktop and server markets," Barroso argues, but acknowledges that there have been significant barriers.

For one thing, CMP requires a significantly different programming approach, in which tasks are subdivided so they can run concurrently.

Indeed, in a separate article in the same issue of ACM Queue, Microsoft researchers Herb Sutter and James Larus wrote: "Concurrency is hard. Not only are today's languages and tools inadequate to transform applications into parallel programs, but also it is difficult to find parallelism in mainstream applications, and--worst of all--concurrency requires programmers to think in a way humans find difficult."

But the software situation is improving as programming tools gradually adapt to the technology and multithreading processors start to catch on, Barroso said.

Another hurdle has been that much of the industry has been focused on processors designed for the high-volume personal computer market. PCs, unlike servers, haven't needed multithreading.

But CMP is only a temporary solution, he said.

"CMPs cannot solve the power-efficiency challenge alone, but can simply mitigate it for the next two or three CPU generations," Barroso said. "Fundamental circuit and architectural innovations are still needed to address the longer-term trends."

About the author

Stephen Shankland has been a reporter at CNET since 1998 and covers browsers, Web development, digital photography and new technology. In the past he has been CNET's beat reporter for Google, Yahoo, Linux, open-source software, servers and supercomputers. He has a soft spot in his heart for standards groups and I/O interfaces.