
More servers, more racks, more UPSes, more users -- the reasons for expanding a datacenter are the same everywhere. Today's datacenter projects, however, have the additional component of modernization. Rebuilding takes place for tighter integration, greener power usage, greater redundancy, and especially more control. Datacenter administrators would control individual dust motes in their racks if they could.

Obviously, there's lots of how-to meat in this space, and we used to discuss it endlessly at InfoWorld's annual edit retreats. But actually working this idea into a hands-on lab story was simply too impractical … until Brian called Oliver one fateful day late in 2006. [Editor's note: That's right, 2006. A significant datacenter update and migration is not an overnight process -- especially when most of a continent and half an ocean separate datacenter and key vendors.]

It turned out that the University of Hawaii was putting Brian on a project to turn a weathered old server-and-storage room into a brand-new datacenter for the School of Ocean and Earth Science and Technology (SOEST), housed in the Hawaii Institute of Geophysics (HIG) building. Having the parasitic instinct common to magazine editors, Oliver latched InfoWorld onto the HIG project as deeply and intractably as a tick burrowing into a Labrador's hide.

The goal was simple: Follow the construction of the new HIG datacenter, turning that experience into the golden copy you'll read at the links below. We had a rare opportunity to see a datacenter project from the inside out, and the chance to work with datacenter vendors far and wide to pimp out HIG 319 with some of the glitziest and most functional gear known to datacenter-building man.

To make this project a reality would obviously require Oliver to fly to Honolulu in person for final construction and to do a lot of writing, cable pulling, knee scraping, and recuperating. Especially recuperating.

A datacenter project presents many opportunities to goof up, and we certainly made our share of mistakes. Many of the gotchas were mundane details we thought we had nailed down. Others were last-minute surprises that shouldn't have been. We did get our little project completed, but not on time and certainly not under budget.

Even so, the end result is impressive if we do say so ourselves. Fifteen vendors contributed to pimping out the new SOEST datacenter so that it fairly gleams with techno-wizardry. All of the vendors brought products that could help most datacenters upgrade their capabilities. Some vendors brought products that we can recommend without reservation to anyone building an enterprise datacenter. All told, our little server room benefited from nearly $400,000 worth of thoroughly modern datacenter gear. Read about the solutions and how we used them in the related articles. Or hear about it straight from Brian Chee in our series of short "Pimp my datacenter" videos.

The main goals driving our choice of gear were more efficient use of space through rack and cable management, power savings through more efficient cooling and a more efficient UPS, and higher security including better physical access control. We also aimed to help the SOEST IT staff avoid getting bitten by late-night glitches and driving in at the wee hours of the morning, by designing the new datacenter with remote management in mind.

Today HIG 319 is not only a showcase of 21st-century datacenter management; the benefits we've seen have also spurred the university to make similar improvements in other SOEST datacenters. We'll soon be putting our hard-won pimping skills and thick binder of notes to work on other projects.

Will we do some things differently the next time? Oh, you'd better believe it. Meanwhile, you can learn from our mistakes, and make plans for your own datacenter pimping by following these links.