Businesses could be wasting money expanding datacentres instead of squeezing more servers into existing premises.

A growing firm that fails to redesign its datacentres will rapidly run out of space to house its servers, says David Cappuccio, chief of research for infrastructure teams at Gartner.

"Many times, datacentre managers or facilities teams start with the following assumption: We are out of (or near) capacity in our datacentre, therefore when we build next we will need more space," he writes in the report The Case for the Infinite Datacentre.

"If we have 5,000 square feet today we must need at least 7,500 or more to sustain our growth. The error is that the focus is on square footage, not compute capacity per square foot (or per kilowatt).

"The first mistake many datacentre managers make is to base their estimates on what they currently have, extrapolating out future space needs according to historical growth patterns.

"It sounds like a logical approach, but there are fundamental problems; the first being an assumption that the floor space currently used is being used properly, and the second is a two-dimensional view, or the assumption that usable space is a horizontal construct, rather than a combination of both horizontal and vertical space."

A new approach

Cappuccio gives the example of a firm running a "typical" 1,200-square-foot IT room with 40 42U server racks containing a total of 520 2U servers, each one to two generations old and running at about 60 percent of load capacity.

Within 10 years, assuming a 15 percent compound annual growth rate, the business would need at least 160 racks holding more than 2,000 servers, requiring almost 5,000 square feet of floor space.
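That projection can be reproduced with a few lines of arithmetic. The figures below are derived from the article's own numbers: 520 servers today, 13 2U servers per rack (520 servers across 40 racks), and 30 square feet per rack (1,200 square feet across 40 racks); treat it as a back-of-the-envelope sketch rather than Gartner's actual model.

```python
import math

# Starting point from the article: 520 servers, 15% compound annual growth.
servers_today = 520
cagr = 0.15
years = 10

servers_future = servers_today * (1 + cagr) ** years      # ~2,104 servers
racks_future = math.ceil(servers_future / 13)             # 162 racks at 13 servers/rack
floor_space = racks_future * 30                           # 4,860 sq ft at 30 sq ft/rack

print(round(servers_future), racks_future, floor_space)
```

The output lines up with the report's rounded figures: more than 2,000 servers, roughly 160 racks, almost 5,000 square feet.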

But the need for additional premises is based on the assumption the datacentre should continue to run as it always has, Cappuccio says.

He suggests that rethinking datacentre design could allow this hypothetical company to keep its server farm in existing premises for at least eight years.

"What if we thought both vertically and horizontally? The above assumes things stay at the status quo and the same type of equipment is acquired and the same configuration policies are applied throughout.

"Instead, assume whatever floor size you design was created to allow full use of rack space without the fear of hot spots (and there are many ways to do this without a great deal of expense).

"Taking the same 40 racks, if pushed to 90 percent capacity on average (leaving some room for switches, etc.) and upgrading the existing server base over the next two years to 1U servers, the datacentre would support 1,520 physical servers.
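The rack arithmetic in that passage checks out: at 90 percent utilisation, roughly 38 of each rack's 42U are available for 1U servers, and 38 servers across 40 racks gives the 1,520 figure. A minimal sketch:

```python
# Same 40 racks, pushed to ~90% of 42U, refreshed to 1U servers.
racks = 40
usable_u_per_rack = round(42 * 0.90)     # ~38U, leaving room for switches etc.
servers_1u = racks * usable_u_per_rack   # 1,520 servers in the same footprint

print(usable_u_per_rack, servers_1u)
```

That is nearly a threefold increase over the original 520 servers without adding a single square foot.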

"Now the question becomes, do we build it bigger to support the original target of 2,000 servers, or will a future technology refresh within the next eight years double our capacity yet again?

"Doing some simple spreadsheet exercises and asking these 'what if' questions can yield some startling results when it comes to capacity estimates. The logic works with servers as well as storage, as each device category continues to decrease in size, improve in capacity and performance, and reduce the power consumption per unit of work with each new generation.

"If we were to look at these performance and density trends and make the assumption that the curve will continue, even at a much slower pace, it becomes clear that even small datacentre environments can have significant growth rates (well more than 20 percent CAGR), while maintaining the exact same footprint over the next 15 to 20 years."
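The "what if" spreadsheet exercise Cappuccio describes can be sketched as a simple year-by-year comparison of demand against capacity. The refresh schedule below is hypothetical, assumed only for illustration: the 2U-to-1U upgrade at year two lifts capacity to 1,520, and a further assumed density doubling at year eight lifts it again, all within the same 40 racks.

```python
# Hypothetical capacity-planning sketch (not Gartner's model): demand grows
# at 15% CAGR from 520 servers, while the fixed 40-rack room's capacity
# steps up only at assumed technology refreshes.
def demand(year, base=520, cagr=0.15):
    return base * (1 + cagr) ** year

# year a refresh lands -> room capacity after that refresh (assumed schedule)
refresh_capacity = {0: 520, 2: 1520, 8: 3040}

for year in range(11):
    cap = max(c for y, c in refresh_capacity.items() if y <= year)
    status = "OK" if demand(year) <= cap else "out of capacity"
    print(f"year {year:2d}: need {demand(year):6.0f}, room holds {cap:4d} ({status})")
```

Under these assumptions demand first outgrows the 1,520-server configuration around year eight, which is consistent with the report's claim that the existing premises suffice for at least eight years before the next refresh decision.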

The benefits of experimenting with datacentre design have been demonstrated by Facebook and its partners in the Open Compute Project, under which companies are trialling new ways of designing servers and datacentre cooling, power and networking infrastructure.