Meta

Running out of capacity and need to decide whether to upgrade, build new, or lease? Keep it simple. Here are some tips based on having performed this exercise.

Having gone through the process of determining whether we should build, upgrade, or lease, here are some things to consider if you are facing the same challenge.

Consider hiring a professional data center design engineer. Doing so will allow you to focus on the big picture and help you avoid making any major mistakes.

With or without a professional, you will need to start by documenting/determining two factors:

Square footage: the rack space needed to house current and future equipment, including rack types, sizes, and quantities.

Power load: the current load, plus the projected load of future equipment. Document the electrical characteristics as well: single-phase vs. three-phase, 110 or 220 volt, amperage, and plug types.
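As a rough sketch of the power-load tally, the script below sums per-rack loads from the documented electrical characteristics. The rack names, values, and growth factor are hypothetical, for illustration only; a real audit would use your own inventory.

```python
import math

# Hypothetical rack inventory: (name, phases, volts, amps, power_factor).
# The values are illustrative, not from a real audit.
equipment = [
    ("db-rack-01",  1, 220, 16, 0.95),   # single-phase
    ("app-rack-02", 3, 208, 30, 0.95),   # three-phase
    ("san-rack-03", 1, 110, 24, 0.90),
]

def load_kw(phases, volts, amps, pf):
    """Real power in kW: single-phase P = V*I*PF;
    three-phase P = sqrt(3)*V*I*PF (line-to-line voltage)."""
    factor = math.sqrt(3) if phases == 3 else 1.0
    return factor * volts * amps * pf / 1000.0

current_kw = sum(load_kw(p, v, a, pf) for _, p, v, a, pf in equipment)
growth = 1.5  # assumed 50% headroom for future equipment
print(f"Current load: {current_kw:.1f} kW, planned: {current_kw * growth:.1f} kW")
```

The same per-rack numbers feed directly into the RFP and, later, into cooling and backup-power sizing.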

With this information in hand, you can begin to shop around for leased (colocation or managed) solutions. Find at least three vendors and issue an RFP, and be prepared to answer questions. The range of options will vary from vendor to vendor: shared vs. dedicated space, bandwidth needs, special security requirements, and so on. Many colocation vendors also offer options that could prove valuable to some organizations, such as managed firewalls and remote hands.

We made the decision to build, and we hired a data center design professional who had helped us on several small projects in the past. This turned out to be a great decision. Why? Because going the build route requires you to project manage a construction effort involving contractors, architects, engineers, electricians, plumbers, HVAC techs, and more. This is not to discourage the build route. There are many reasons why building your own data center is justified; in our case, CAPEX vs. OPEX was a big factor, and we owned the property and had the space.

Build/upgrade considerations:
The same space and power requirements used in the RFP for the leased option are also the starting point for building or upgrading your own data center. From the space and power requirements, the cooling, uninterruptible power supply (UPS), and backup (generator) power requirements can be determined. Additionally, there will be ancillary systems: security, fire detection/suppression, and the management systems needed to monitor and administer it all.
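A minimal sketch of how cooling and backup-power figures fall out of the IT load, using the standard conversion 1 kW ≈ 3,412 BTU/hr. The headroom factors are illustrative assumptions only; actual sizing is exactly what the design engineer mentioned above is for.

```python
BTU_PER_KW = 3412.14  # 1 kW of IT load dissipates ~3,412 BTU/hr of heat

def size_support_systems(it_load_kw, ups_headroom=1.25, gen_headroom=1.5):
    """Rough first-pass sizing from total IT load. The headroom
    factors are illustrative, not engineering guidance."""
    return {
        "cooling_btu_hr": it_load_kw * BTU_PER_KW,
        "cooling_tons": it_load_kw * BTU_PER_KW / 12000,  # 1 ton = 12,000 BTU/hr
        "ups_kw": it_load_kw * ups_headroom,
        # The generator must also carry cooling, lighting, etc.
        "generator_kw": it_load_kw * gen_headroom,
    }

print(size_support_systems(24.0))
```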

Lower data center power consumption and increase cooling efficiency by grouping together equipment with similar heat load densities and temperature requirements. This allows cooling systems to be controlled to the least energy-intensive set points for each location.
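The grouping idea can be sketched as a simple bucketing exercise. The rack names, density thresholds, and inlet temperatures below are hypothetical; the point is that each zone's set point is driven by its most temperature-sensitive rack.

```python
from collections import defaultdict

# Hypothetical racks: (name, heat_load_kw, max_inlet_temp_c)
racks = [
    ("web-01", 4.0, 27), ("web-02", 4.5, 27),
    ("gpu-01", 14.0, 24), ("gpu-02", 15.5, 24),
    ("stor-01", 6.0, 25),
]

def density_zone(kw):
    """Bucket racks into low/medium/high heat-density zones
    (thresholds are illustrative)."""
    if kw < 5:
        return "low"
    return "medium" if kw < 10 else "high"

zones = defaultdict(list)
for name, kw, tmax in racks:
    zones[density_zone(kw)].append((name, tmax))

# Each zone's supply-air set point is capped by its most sensitive rack.
for zone, members in sorted(zones.items()):
    setpoint = min(t for _, t in members)
    print(f"{zone}: {[n for n, _ in members]} -> supply-air set point {setpoint} C")
```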

Implement effective air management to minimize or eliminate mixing of air between the cold and hot sections. This includes the configuration of equipment air intake and heat exhaust paths, the location of air supply and air return, and the overall airflow patterns of the room. Benefits include reduced operating costs, increased IT density, and fewer heat-related processing interruptions or failures.

Under-floor and overhead cable management is important to minimize obstructions within the cooling air pattern.

Prevent mixing of hot and cold air by implementing a hot aisle/cold aisle configuration. Create barriers and seal openings to eliminate air recirculation. Supply cold air exclusively to cold aisles and pull hot return air only from hot aisles.

Higher return air temperatures extend the operating hours of air economizers.

Choose an enclosure configuration that supports your cooling method.

If using raised-floor cooling, carefully consider the location of perforated floor tiles to optimize air flow.

Managing a uniform static pressure in the raised floor by careful placement of the A/C equipment allows for even air distribution to the IT equipment.

Finally, having solid documentation of our existing infrastructure was a tremendous help in planning and executing the device (server, comms, storage, etc.) migration. We use a product called Device42 for our data center infrastructure management. We are now in the process of implementing the data center power management module (an add-on to the Device42 core product), which will give us visibility into power utilization and enable us to start optimizing power consumption in our new data center.