Google server hardware revealed

CNET reports on Google revealing its once-secret server design at a data center efficiency summit held this week at Google HQ in Mountain View. The intriguing difference is that they have a backup battery on each 2U server unit and 12-volt-only power supplies that force additional voltage conversion to take place on the motherboard.

That adds $1 or $2 to the cost of the motherboard, but it’s worth it: not only is the power supply cheaper, it can also be run closer to its peak capacity, where it operates much more efficiently. Google even pays attention to the greater efficiency of transmitting power over copper wires at 12 volts compared to 5 volts.
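The 12-volt advantage comes down to resistive (I²R) loss in the wiring: for the same power delivered, a lower voltage means a higher current, and loss grows with the square of current. A minimal sketch with made-up numbers (the 200 W load and 0.05 Ω of wiring resistance are illustrative assumptions, not Google's figures):

```python
# Illustrative only: resistive loss in copper wiring at 12 V vs 5 V
# for the same delivered power. Load and wire resistance are assumed.
def wire_loss(power_w, volts, wire_ohms):
    current = power_w / volts          # same power, so lower voltage -> higher current
    return current ** 2 * wire_ohms   # power dissipated in the copper: I^2 * R

load_w, r_wire = 200.0, 0.05
loss_12v = wire_loss(load_w, 12.0, r_wire)   # (200/12)^2 * 0.05, about 13.9 W
loss_5v  = wire_loss(load_w,  5.0, r_wire)   # (200/5)^2  * 0.05, exactly 80.0 W
print(loss_12v, loss_5v)
```

Whatever the actual numbers, the ratio is fixed: distributing at 12 V instead of 5 V cuts wiring loss by (12/5)², or 5.76×.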

Why then do power supplies continue to be built to produce multiple voltages? The answer is simple: because the standard never changed, and because the actual voltage needs of the many chips in a computer change every year as the chips themselves become more energy efficient. But the changing voltage needs of chips are now met by voltage regulator modules (VRMs) that computer manufacturers put on their motherboards. A VRM takes one of the supply voltages (say, 5V) and transforms it down to the voltage actually needed (say, 1.7V), making the multiple-voltage output capability of power supplies unnecessary.
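A VRM is typically a buck (step-down) converter, and in the idealized lossless case its output is just the input voltage scaled by the switching duty cycle. A toy calculation of that ideal relation (the 1.7 V chip voltage is the example from above; real converters have losses this ignores):

```python
# Ideal buck-converter relation a VRM approximates: V_out = D * V_in,
# so the duty cycle D needed for a target voltage is V_out / V_in.
# Lossless idealization; real VRMs compensate for switching/conduction losses.
def buck_duty_cycle(v_in, v_out):
    return v_out / v_in

print(buck_duty_cycle(5.0, 1.7))    # from a legacy 5 V rail: D = 0.34
print(buck_duty_cycle(12.0, 1.7))   # from a single 12 V rail
```

This is why a single 12 V rail suffices: the VRM simply runs at a different duty cycle to hit whatever voltage this year's chips require.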

The battery on each server acts as a distributed, on-board UPS: it keeps the server running through power sags and for the several minutes it takes backup generators to come online after a blackout. Velcro is used to fasten the components most likely to fail, such as the SATA hard disk drives.

1,160 of these servers are then crammed into a standard freight shipping container, making for an inexpensive, modular approach to managing airflow and data center expansion.

Google has managed to keep many of these details secret since 2005, an Apple-under-Steve-Jobs level of discipline. Google Japan employees I’ve spoken to refuse to divulge where Google’s Japanese data center is, or even to acknowledge that there is one on the islands at all.

The full presentation will be posted to YouTube shortly. In the meantime, here is an attendee’s shaky video of a portion of Google’s presentation, including exterior and interior shots of a very spartan data center, an employee riding a kick scooter to get to a service point, etc.

“I imagine a standard low-voltage distribution system inside buildings having alternate energy supplies like solar,” said Lee Felsenstein, the designer of the Osborne 1 and Sol personal computers. “Google’s proposal would make that a practicality.”

Another example: these are the power adapters just lying around our office. I’m sure most of you have things like this under your desk too. It’s a real hazard: you could electrocute yourself, and if one in a million adapters catches fire and you have a thousand adapters, fires start to be an issue. It’s also a big hassle for the manufacturers, because every one of those devices now ships with an adapter in the box that’s specific to a country, so they have to repackage the boxes and maintain stock for different countries. It’s just silly, and also really inefficient, because guess what? The adapters are effectively subsidized by the devices you buy, so manufacturers provide the cheapest ones possible. So they all suck power.

In a rational world, these efficiency concerns, along with the inevitable growth in point-of-consumption power generation over the next decade, would result in a new worldwide standard for on-premises DC power, a modern denouement to the War of Currents. It’s already happening for low-power devices with the proliferation of the 5V/500mA USB connector.