Slashdot has a great in-depth article about Facebook's solution, called the Disaggregated Rack. Though it may sound like an elaborate torture device, it's actually a clever design that will make Facebook's search system more flexible and efficient. Essentially, Facebook is breaking its computational power down into separate modules that can be easily swapped in and out:

Compute: A server with two processors, 8 or 16 DIMM slots, no hard drive, a small flash boot partition, and a "big NIC" with enough throughput to enable network booting.

RAM Sled: Facebook wants to replace its leaf servers with RAM sleds holding between 128 GB and 512 GB of memory, at $500 to $700 per sled. Only a basic CPU would be needed; each sled would serve 450,000 to 1 million key queries per second.

Storage: Facebook's solution here is based on its Knox storage design (PDF). The I/O demands are low: around 3,000 IOPS, Taylor said. But Facebook only wants to spend $500 to $700 apiece, excluding the cost of the drives.

Flash Sled: Facebook would like between 500 GB and 8 TB of flash, with 600,000 IOPS. Excluding flash costs, Facebook would like each sled to cost around $500 to $700.

Facebook anticipates that Graph Search will initially use 20 compute servers, 8 flash sleds, 2 RAM sleds and a storage sled; in total, that'll provide 320 CPU cores, 3 terabytes of RAM, and 30 TB of flash. The beauty of the set-up is that it'll allow Facebook to easily upgrade in the future: right now, for instance, its RAM-to-flash ratio is 1:10, but it'll have to climb to 1:5 to meet future targets. In other words, Facebook will be able to wheel in more grunt with minimal fuss—and get on with the job it really cares about. [Slashdot]
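The upgrade math above is easy to check. A minimal sketch, using the article's rack totals (the per-unit core count is an assumption back-derived from 320 cores across 20 servers):

```python
# Back-of-the-envelope arithmetic for the initial Graph Search rack.
# Totals come from the article; 16 cores per server is an assumption
# (320 total cores / 20 compute servers).

compute_servers = 20
cores_per_server = 16            # assumed: 320 / 20
total_cores = compute_servers * cores_per_server

total_ram_tb = 3                 # RAM sleds plus compute-server DIMMs
total_flash_tb = 30

# Current RAM-to-flash ratio: 1:10, as the article states.
ratio = total_flash_tb / total_ram_tb
print(f"cores: {total_cores}, RAM:flash = 1:{ratio:.0f}")

# To reach the future 1:5 target while keeping 30 TB of flash,
# the rack would need its RAM roughly doubled:
target_ram_tb = total_flash_tb / 5
print(f"RAM needed for 1:5 at {total_flash_tb} TB flash: {target_ram_tb:.0f} TB")
```

This is exactly the kind of adjustment the modular design makes cheap: hitting the new ratio means wheeling in more RAM sleds rather than replacing whole servers.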