From no-frills servers to free cooling, the Open Compute Project reveals the many secrets behind Facebook's high-efficiency data centers

Facebook has opened the vault to its data center efficiency secrets like no organization before it via its open source Open Compute Project. The OCP provides detailed specs and guidelines for high-efficiency server components -- from motherboards to chassis -- as well as facilities hardware, including electrical and cooling. All the systems are implemented in Facebook's state-of-the-art data center in Prineville, Ore.

Any organization looking to build a more efficient new data center or to cut costs in an existing facility should take a closer look at the secrets shared by Facebook and OCP contributors.

OCP provides two sets of specs for Intel motherboards, both power-optimized, bare-bones designs free of many watt-wasting features. Version 1.0 [PDF] is a dual-socket (Intel Xeon 5500 or Intel Xeon 5600) motherboard with 18 DIMM slots. The V2.0 specification [PDF] uses two next-gen Xeons per board, doubling compute density. To save energy, unused features such as PCIe lanes, PCI lanes, USB ports, and SATA/SAS ports are all disabled. The BIOS is tuned to minimize system power consumption, and the spec calls for a BIOS setup menu with settings to adjust component speeds and power states. Five thermal sensors monitor the temperatures of the CPUs, the PCH, the inlet, and the outlet; they also support automatic fan-speed control to ensure efficient cooling.
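As a rough illustration of what sensor-driven fan control looks like, the sketch below maps the hottest of the five sensor readings to a fan duty cycle. The sensor names, temperature thresholds, and linear ramp are illustrative assumptions, not details from the OCP spec:

```python
# Illustrative sketch of auto fan-speed control; thresholds and
# sensor names are hypothetical, not taken from the OCP spec.

SENSORS = ["cpu0", "cpu1", "pch", "inlet", "outlet"]

def fan_duty(readings_c, low_c=25.0, high_c=45.0, min_duty=30, max_duty=100):
    """Map the hottest sensor reading (Celsius) to a PWM duty cycle (percent)."""
    hottest = max(readings_c.values())
    if hottest <= low_c:
        return min_duty
    if hottest >= high_c:
        return max_duty
    # Linear ramp between the low and high thresholds
    frac = (hottest - low_c) / (high_c - low_c)
    return round(min_duty + frac * (max_duty - min_duty))

readings = {"cpu0": 38.0, "cpu1": 36.5, "pch": 33.0, "inlet": 22.0, "outlet": 35.0}
print(fan_duty(readings))  # duty cycle scales with the hottest sensor
```

Running the fans only as fast as the hottest component requires is one of the simplest ways a bare-bones board saves watts.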

As with the Intel motherboard, OCP provides two sets of specs for AMD boards. The Version 1.0 spec [PDF] describes a dual-socket AMD Opteron 6100 Series motherboard with 24 DIMM slots. The V2.0 design [PDF] doubles the compute density, supporting two AMD G34 Magny-Cours or Interlagos CPUs per board. The specs for the AMD and Intel boards share similarities, including the BIOS settings for power optimization and internal thermal sensors. Both include a direct interface with the power supply, and both call for a power-up delay to prevent a server's two boards from powering up at the same time and drawing a larger-than-normal current. The CPU VRM is optimized to increase the efficiency of the power conversion system.

OCP's 450W power supply [PDF] is designed to minimize the number of watts that go to waste during the electricity-conversion process. It is a single-voltage (12.5VDC), closed-frame, self-cooling supply with two input connectors. The primary connector accepts 277VAC power, operating at a higher, more efficient voltage level than a traditional 208VAC system. The second connector accepts 48VDC input, supplying power in the event of an AC outage. All told, the supply has an efficiency rate of 94.5 percent.
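The payoff of that 94.5 percent figure is easy to quantify. The sketch below compares the wall draw of the OCP supply at full load against a hypothetical 85-percent-efficient commodity supply (an assumed figure for comparison, not from the spec):

```python
def wall_watts(load_w, efficiency):
    """Watts drawn at the wall to deliver load_w to the server."""
    return load_w / efficiency

load = 450.0  # rated output of the OCP supply

ocp = wall_watts(load, 0.945)          # OCP supply, per the spec
commodity = wall_watts(load, 0.85)     # assumed commodity supply, for comparison

print(f"OCP supply: draws {ocp:.1f} W, wasting {ocp - load:.1f} W as heat")
print(f"85% supply: draws {commodity:.1f} W, wasting {commodity - load:.1f} W as heat")
```

Across tens of thousands of servers, the difference between roughly 26 wasted watts and roughly 79 wasted watts per supply adds up quickly, before even counting the cooling needed to remove that heat.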

The OCP chassis [PDF] serves as a no-frills, highly serviceable housing for up to two custom motherboards and a power supply. It requires no screws and boasts quick-release components, enabling an admin to easily snap motherboards into place. Hard drives and cooling fans slide in via snap-in rails. The chassis is 1.5U tall, providing space for larger heat sinks, which are better at efficiently removing heat from components. The design also allows for larger fans, which use less energy. An OCP server weighs six pounds less than a standard 1U server and can be assembled by hand in less than nine minutes, according to OCP engineering manager Amir Michael.

OCP servers are designed to live in what Facebook dubs "triplet racks" [PDF], composed of three adjoining 42U columns. Each triplet has two switches at the top, and each of the three columns can hold up to 30 servers, for a total of 90 machines per triplet.
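The published numbers are internally consistent: with the 1.5U chassis holding up to two boards, 30 servers per column works out to 15 chassis occupying 22.5U of each 42U column. A quick check:

```python
# Consistency check on the triplet-rack figures cited above.
CHASSIS_HEIGHT_U = 1.5      # OCP chassis height
BOARDS_PER_CHASSIS = 2      # up to two motherboards per chassis
COLUMNS_PER_TRIPLET = 3
SERVERS_PER_COLUMN = 30

chassis_per_column = SERVERS_PER_COLUMN // BOARDS_PER_CHASSIS
rack_units_used = chassis_per_column * CHASSIS_HEIGHT_U
servers_per_triplet = COLUMNS_PER_TRIPLET * SERVERS_PER_COLUMN

print(f"{chassis_per_column} chassis per column, occupying {rack_units_used}U of 42U")
print(f"{servers_per_triplet} servers per triplet")
```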

Facebook did not want to pay for extra network ports, "so the number of servers in each rack was specifically tailored to use the ports on our switches," said OCP's Michael.

As with the OCP servers, the racks are designed with serviceability in mind: Instead of sliding servers in and out on traditional rails, admins mount servers on shelves punched out of sheet-metal walls. Spring-loaded plungers hold the machines in place.

The stand-alone battery cabinets [PDF] provide backup DC power from a series of batteries in the event of an AC outage. A single cabinet feeds two triplet racks via a simple system of cables and power strips. Each cabinet contains an AC/DC rectifier for charging batteries, as well as a battery-health monitor. When a battery needs to be replaced, an alert is sent over the network so that a technician can be dispatched. The battery system is 99.5 percent efficient, according to OCP's Michael; traditional UPS systems have an efficiency of between 90 and 95 percent.
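Those efficiency figures translate directly into heat in the backup-power path. Assuming a hypothetical 100kW IT load (an illustrative figure, not from the source), the losses compare as follows:

```python
def backup_loss_kw(load_kw, efficiency):
    """kW dissipated in the backup-power path at a given load."""
    return load_kw * (1 - efficiency)

load = 100.0  # hypothetical 100 kW IT load, for illustration only

print(f"OCP battery cabinets (99.5%): {backup_loss_kw(load, 0.995):.1f} kW lost")
print(f"Traditional UPS (90-95%): "
      f"{backup_loss_kw(load, 0.95):.1f}-{backup_loss_kw(load, 0.90):.1f} kW lost")
```

At that scale the OCP design dissipates roughly half a kilowatt, versus 5 to 10 kilowatts for a conventional double-conversion UPS.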

Facebook implemented the OCP's high-efficiency electrical system [PDF] in its Prineville, Ore., data center. It's a 48VDC UPS system integrated with a 277VAC server power supply. Among its features is a novel approach to electrical distribution from an on-site substation, which eliminates unnecessary losses from transformations and conversions. Typical energy loss during conversion runs at 21 to 27 percent, according to Facebook; at the Prineville data center the loss is 7.5 percent.
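Put in concrete terms, those loss figures mean that of every kilowatt drawn from the substation, far more reaches the IT load in Prineville than in a typical facility:

```python
def delivered_w(grid_w, loss_fraction):
    """Watts reaching the servers after distribution and conversion losses."""
    return grid_w * (1 - loss_fraction)

grid = 1000.0  # per kilowatt drawn from the on-site substation

print(f"Typical facility (21-27% loss): "
      f"{delivered_w(grid, 0.27):.0f}-{delivered_w(grid, 0.21):.0f} W reach the IT load")
print(f"Prineville (7.5% loss): {delivered_w(grid, 0.075):.0f} W reach the IT load")
```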

Other design elements include diesel backup generators, battery monitoring, server-battery backup, and energy-efficient LED lighting systems. The system is designed to meet or exceed an array of standards, including those from the National Electrical Code, National Fire Protection Code, Institute of Electrical and Electronics Engineers, and Underwriters Laboratories.

Facebook's Prineville data center also serves as a showcase for the OCP's high-efficiency cooling system [PDF], which uses 100 percent air-side economization with an evaporative cooling system. There are neither chillers nor cooling towers. The approach allows for ductless overhead air distribution that can operate in temperature and humidity ranges beyond those prescribed by ASHRAE, according to Facebook.