US mulls Linux for world's biggest computer

Linux is in the running to power the world's biggest computer, we learned this week at LinuxWorld Expo. A bid is being prepared to provide the computing power behind the US government-sponsored Project Purple, which will pool a vast server farm across the three leading US research labs and is scheduled to come on stream by the end of 2004.

But this differs from your usual Linux-employed-in-big-lab story in a particularly interesting way, which is orthogonal to the interest in file system architecture raised by our Longhorn story.

"It should not exceed half an acre, or consume six megawatts for the computer and four megawatts for cooling," says Seager, who manages the ASCI Platforms Program.

Objects everywhere

The bid is being prepared with Peter Braam, who gave a cluster file system talk at LinuxWorld this week and explained the approach of the Intergalactic File System:

"ASCI requires a shared file system that writes data through the network at thousands of gigabytes per second," he said. "A problem is that with 50 Petabyte files, spread over many disks, they have to put a little bit of metadata everywhere."
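To put those figures in perspective, some back-of-the-envelope arithmetic. The numbers below are illustrative, taken only from Braam's quote (the drive count is our assumption, not from the bid):

```python
# Rough scale arithmetic for the I/O figures Braam quotes.
PB = 10**15          # bytes in a petabyte (decimal)
GB = 10**9           # bytes in a gigabyte

file_size = 50 * PB              # a single 50-petabyte file
aggregate_bw = 1000 * GB         # "thousands of gigabytes per second"

seconds = file_size / aggregate_bw
print(f"Writing one file: {seconds:.0f} s (~{seconds / 3600:.1f} hours)")

# Spread over, say, 10,000 drives, each drive's share of that bandwidth:
drives = 10_000
per_drive_mb = aggregate_bw / drives / 10**6
print(f"Per-drive share: {per_drive_mb:.0f} MB/s across {drives} drives")
```

Even at a thousand gigabytes a second, a single 50 PB file takes the best part of a day to write, which is why the metadata, and the writes, have to be spread across every disk in the system.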

So the approach adopts object-based storage, or intelligent disk drives. OBSD has won wide industry approval, but not much practical support to date. The premise is that a disk drive is an intelligent computing device (in fact every disk drive has a 32-bit RISC chip and memory comparable to a PDA's), and can be expected to bear more of the workload.
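The shift is essentially an interface change: instead of the host's file system deciding which raw blocks hold what, the host hands the drive named objects and the drive does its own allocation. A minimal sketch of the contrast, with class and method names that are purely illustrative rather than drawn from the OBSD spec:

```python
class BlockDevice:
    """Traditional drive: the host file system decides which blocks hold what."""
    def __init__(self, n_blocks, block_size=512):
        self.blocks = [b"\x00" * block_size] * n_blocks

    def write_block(self, lba, data):
        self.blocks[lba] = data  # the drive knows nothing about files


class ObjectDevice:
    """Object-based drive: the device maps object IDs to storage itself
    and keeps per-object attributes (metadata) alongside the data."""
    def __init__(self):
        self.objects = {}    # object_id -> bytearray
        self.attrs = {}      # object_id -> attribute dict

    def write(self, object_id, offset, data):
        buf = self.objects.setdefault(object_id, bytearray())
        if len(buf) < offset + len(data):
            buf.extend(b"\x00" * (offset + len(data) - len(buf)))
        buf[offset:offset + len(data)] = data
        # Allocation happened inside the drive; no host-side block map needed.

    def set_attr(self, object_id, key, value):
        self.attrs.setdefault(object_id, {})[key] = value
```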

The OBSD spec, devised by Seagate but blessed by other vendors, allows the device to handle metadata. The drive would know when to back up a file, for example, as it would be aware of the file attributes, or replicate the contents to another drive. Software authors would have virtualized direct access to data - yes, that sounds like an oxymoron, but bear with us - without having to go through a file system.
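What "the drive knows when to replicate" might look like in practice is a device-side policy hook that fires on attribute changes. This is an illustrative sketch of the idea only, not the actual OBSD command set:

```python
class SelfManagingDrive:
    """Illustrative: a drive that acts on object metadata by itself."""
    def __init__(self, peer=None):
        self.objects, self.attrs = {}, {}
        self.peer = peer          # another drive to replicate to

    def write(self, oid, data):
        self.objects[oid] = data
        self._apply_policy(oid)

    def set_attr(self, oid, key, value):
        self.attrs.setdefault(oid, {})[key] = value
        self._apply_policy(oid)

    def _apply_policy(self, oid):
        # Because attributes live on the device, it can decide on its own
        # when to replicate -- no host file system in the loop.
        a = self.attrs.get(oid, {})
        if a.get("replicate") and self.peer is not None and oid in self.objects:
            self.peer.objects[oid] = self.objects[oid]
```

Tagging an object with a `replicate` attribute is then enough; the copy happens inside the drive, with no host involvement.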

OEMs have viewed OBSD as overkill, although it becomes a practical necessity when handling compute problems on the scale of Purple, thinks Braam:-

"It's not that these problems haven't been solved - an airline reservation system can handle ten thousand concurrent writes - but file systems have solved them in the wrong way," he said. So in the 'Intergalactic File System' block-level writes will not be handled by the file system but by the device itself. The metadata is all handled by the I/O target.
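The division of labour Braam describes can be sketched like this: the file system keeps only a thin map of which objects make up a file, while each device does its own block-level bookkeeping, so many writers can stripe data straight to the devices in parallel without funnelling through a central metadata server. All names below are our illustration, not IGFS code:

```python
class Device:
    """An I/O target that keeps its own extent metadata."""
    def __init__(self):
        self.extents = {}   # (object_id, offset) -> data

    def write(self, object_id, offset, data):
        # Block allocation and per-extent metadata handled here, on the target.
        self.extents[(object_id, offset)] = data

class ThinFS:
    """A file system reduced to a thin file-to-object map."""
    def __init__(self, devices):
        self.devices = devices
        self.files = {}     # filename -> object_id

    def create(self, name, object_id):
        self.files[name] = object_id  # the only metadata the FS keeps

    def device_for(self, name, stripe):
        # Round-robin striping: clients compute the target themselves
        # and talk to it directly, bypassing any central write path.
        return self.devices[stripe % len(self.devices)]

devices = [Device() for _ in range(4)]
fs = ThinFS(devices)
fs.create("checkpoint.dat", object_id=42)
for stripe in range(8):   # eight writers could do this concurrently
    dev = fs.device_for("checkpoint.dat", stripe)
    dev.write(42, stripe * 1_048_576, b"payload")
```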

Braam's IGFS - it's a nickname right now but could just stick - will underpin one of several contenders, with the procurement process running into the spring. If it's successful, it could mark a significant win for an open source approach to tackling big computer problems. It might also prompt manufacturers to offer object-based disks for the rest of us, although that's another question altogether.

As Braam said in his LWE talk, drive manufacturers measure profit margins in terms of cents not dollars, and are keen to see drives take on more capabilities, although system builders and driver writers have been pretty sniffy about OBSD, as it encapsulates the complicated SCSI command set. It doesn't have to, though, and it will be fun to see if it can gain some momentum. ®