Google Unveils Its Container Data Center

Four years after the first reports of server-packed shipping containers lurking in parking garages, Google today confirmed its use of data center containers and provided a group of industry engineers with an overview of how they were implemented in the company’s first data center project in the fall of 2005. “It’s certainly more fun talking about it than keeping it a secret,” said Google’s Jimmy Clidaras, who gave a presentation on the containers at the first Google Data Center Efficiency Summit today in Mountain View, Calif.

The Google facility features a “container hangar” filled with 45 containers, with some housed on a second-story balcony. Each shipping container can hold up to 1,160 servers, and uses 250 kilowatts of power, giving the container a power density of more than 780 watts per square foot. Google’s design allows the containers to operate at a temperature of 81 degrees in the cold aisle. Those specs are seen in some advanced designs today, but were rare indeed in 2005 when the facility was built.
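Those density figures can be sanity-checked with a bit of arithmetic. The sketch below assumes a standard 40-foot shipping container (40 ft × 8 ft, about 320 square feet); the article does not state the container size, so the footprint is an assumption.

```python
# Back-of-the-envelope check of the density figures in the article.
# The footprint assumes a standard 40-foot shipping container
# (40 ft x 8 ft); the article does not state the size.

servers_per_container = 1160
container_power_w = 250_000          # 250 kW per container
container_footprint_sqft = 40 * 8    # assumed 40-ft container, ~320 sq ft

density = container_power_w / container_footprint_sqft
per_server = container_power_w / servers_per_container

print(f"{density:.0f} W/sq ft")      # ~781, i.e. "more than 780"
print(f"{per_server:.0f} W/server")  # ~216 W average per server
```

Under that assumed footprint, the numbers line up with the quoted “more than 780 watts per square foot,” and imply an average draw of roughly 216 watts per server.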

Google’s design focused on “power above, water below,” according to Clidaras, and the racks are actually suspended from the ceiling of the container. Chilled air from the below-floor cooling system is pumped into the cold aisle through a raised floor, passes through the racks and is returned via a plenum behind the racks. The cooling fans are variable speed and tightly managed, allowing the fans to run at the lowest speed required to cool the rack at that moment.
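The fan management described here amounts to running each fan at the minimum speed that holds temperature at the setpoint. A minimal proportional controller of that kind might look like the sketch below; the 81°F setpoint comes from the article, but the gain and speed limits are hypothetical, not Google’s actual values.

```python
# Illustrative sketch only: a minimal proportional fan-speed controller
# of the kind the article describes (fans run at the lowest speed needed).
# The gain and min/max speeds are hypothetical, not Google's values.

def fan_speed(exhaust_temp_f, setpoint_f=81.0, gain=0.08,
              min_speed=0.2, max_speed=1.0):
    """Return fan speed as a fraction of maximum."""
    error = exhaust_temp_f - setpoint_f
    # At or below setpoint, idle at minimum speed; above it, ramp up
    # proportionally, clamped to the fan's physical limits.
    speed = min_speed + gain * max(error, 0.0)
    return min(max(speed, min_speed), max_speed)

print(fan_speed(80.0))  # at/below setpoint: minimum speed
print(fan_speed(86.0))  # 5 F over setpoint: proportionally faster
```

Real data center fan control is typically more sophisticated (PID loops, coordination across fans), but the principle is the same: spend only as much fan energy as the current heat load requires.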

“Water was a big concern,” said Urs Holzle, who heads Google’s data center operations. “You never know how well these couplings (on the water lines) work in real life. It turns out they work pretty well. At the time, there was nothing to go on.”

Google was awarded a patent on a portable data center in a shipping container in October 2008, confirming a 2005 report from PBS columnist Robert Cringely that the company was building prototypes of container-based data centers in a garage in Mountain View. Containers also featured prominently in Google’s patent filing for a floating data center that generates its own electricity using wave energy.

Holzle said today that Google opted for containers from the start, beginning its prototype work in 2003. At the time, Google housed all of its servers in third-party data centers. “Once we saw that the commercial data center market was going to dry up, it was a natural step to ask whether we should build one,” said Holzle.

The data center facility, referred to as Data Center A, spans 75,000 square feet and has a power capacity of 10 megawatts. The facility has a Power Usage Effectiveness (PUE) of 1.25, and when the container load is measured across the entire hangar floor space, it equates to a density of 133 watts per square foot. Google didn’t identify the facility’s location, but the timeline suggests that it’s likely one of the facilities at Google’s three-building data center complex in The Dalles, Oregon.
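The facility-level figures hang together arithmetically. PUE is total facility power divided by IT power, so the IT share implied by a 1.25 PUE at the 10 MW capacity is an inference from the stated numbers, not a figure Google reported:

```python
# Rough facility-level arithmetic from the figures in the article.
# The implied IT load is derived from PUE = total power / IT power;
# it is an inference, not a number stated in the article.

facility_sqft = 75_000
facility_power_w = 10_000_000   # 10 MW capacity
pue = 1.25

floor_density = facility_power_w / facility_sqft   # ~133 W/sq ft
implied_it_power = facility_power_w / pue          # IT share at full load

print(f"{floor_density:.0f} W/sq ft")
print(f"{implied_it_power / 1e6:.0f} MW implied IT load")
```

Spreading the full 10 MW over 75,000 square feet reproduces the quoted 133 watts per square foot, and a 1.25 PUE implies roughly 8 MW of that capacity goes to IT load at full utilization.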

Data center containers have been used for years by the U.S. military. The first commercial product, Sun’s Project Blackbox, was announced in 2006. We noted at the time that the Blackbox “extends the boundaries of the data center universe, and gives additional options to managers of fast-growing enterprises.”

It turns out that containers have developed as key weapons in the data center arms race between Google and Microsoft, which last year announced its shift to a container model. Microsoft has yet to complete its first container data center in Chicago.


About the Author

Rich Miller is the founder and editor at large of Data Center Knowledge, and has been reporting on the data center sector since 2000. He has tracked the growing impact of high-density computing on the power and cooling of data centers, and the resulting push for improved energy efficiency in these facilities.


20 Comments

Power above, water below? Water for what? What couplings is he talking about? Is there water cooling involved, and presumably heat exchangers? And they still have to run at 81F? There is some relevant information missing here. Anyone got a link to some better info?

I have read on http://perspectives.mvdirona.com/ that 81F was on the cold aisle, which would make sense according to the latest ASHRAE recommendations, but back in 2005?
Could someone confirm that as well?

Brandon Byers, April 9, 2009 at 9:41 pm

I first heard about this in November 2005 at the WebmasterWorld conference, when Robert X Cringely spoke about it. He also wrote a column about it around that time:
http://www.pbs.org/cringely/pulpit/2005/pulpit_20051117_000873.html
