The Background of DCIM Tools And Their Role In The Enterprise

DCIM tools now play a critical role in keeping data center operations running reliably and efficiently, so buyers should pay close attention to their deployment.

Data center infrastructure management tools arm administrators with deeper insight into the performance of their IT infrastructure and facilities. As DCIM tools evolve and the market grows, they offer more advanced features that provide an even deeper look into the heart of the data center.

What is DCIM?

The definition of data center infrastructure management (DCIM) can vary greatly. At a high level, DCIM is a software suite for managing data center infrastructure and the resources it uses. DCIM software collects data from IT and facilities, consolidates the data into relevant information and reports it in real time to enable the intelligent management, optimization and future planning of data center resources such as capacity, power, cooling, space and assets.

Capabilities of DCIM tools

DCIM tools range from relatively straightforward power and cooling monitors to highly sophisticated products that reach into every facet of the enterprise. Because the industry grew out of tracking power usage effectiveness (PUE), all DCIM packages include power and cooling monitoring, but their features extend well beyond that.

Capabilities go by many different names, but can be classified in the following categories:

Energy monitoring: This feature includes real-time readouts and historical tracking of power utilization and can provide PUE calculation and tracking if total energy information is available from the facility. DCIM tools allow drill-down to the most detailed levels available from the sensors, from major levels such as UPS input, output and batteries, and cabinet power strips, down to individual outlets or even power draw reported from inside the computing hardware itself.
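The PUE figure mentioned above is a simple ratio. A minimal sketch of the calculation, assuming hypothetical meter readings taken over the same interval:

```python
def pue(total_facility_energy_kwh: float, it_equipment_energy_kwh: float) -> float:
    """Power usage effectiveness: total facility energy / IT equipment energy.

    A PUE of 1.0 would mean every kilowatt-hour entering the facility
    reaches the IT load; real facilities run above that.
    """
    if it_equipment_energy_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_energy_kwh / it_equipment_energy_kwh

# Example: 1,500 kWh into the facility, 1,000 kWh delivered to IT equipment.
print(pue(1500.0, 1000.0))  # 1.5
```

A DCIM tool performs this same division continuously against live meter feeds, which is why the facility-side total must be available for PUE tracking to work.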

Environmental monitoring: This applies to real-time readouts and historical tracking of temperature and humidity via strategically placed sensors throughout the room and in air conditioner supply and return air paths. Deeper data might include sensors on the fronts and backs of cabinets, and even temperatures from multiple locations inside computing equipment. Readings can also be available from individual air conditioners, pumps, chillers, cooling towers and piping fluid flows.
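The monitoring described above typically compares sensor readings against an acceptable band. A minimal sketch with hypothetical cabinet-inlet readings; the 18-27 degrees Celsius band loosely follows common inlet-temperature guidance but is an assumption here:

```python
def out_of_band(readings, low=18.0, high=27.0):
    """Return names of sensors whose inlet temperature falls outside the band."""
    return [name for name, temp_c in readings.items()
            if temp_c < low or temp_c > high]

# Hypothetical readings from front-of-cabinet sensors, in degrees Celsius.
readings = {"rack-A1-front": 21.5, "rack-A2-front": 28.3, "rack-B1-front": 17.2}
print(out_of_band(readings))  # ['rack-A2-front', 'rack-B1-front']
```

A real DCIM platform applies checks like this to every monitored point, including supply and return air paths, and keeps the history so excursions can be traced back in time.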

Asset management: This refers to database management of computing hardware and software resources, integrated with the other monitored aspects of the data center. Database input and updates may be manual or automatic via methods such as radio frequency ID tags, in-rack sensors that report rack position, and internal reporting from the computing hardware. When combined with energy monitoring, asset management capabilities can also include alerts to end-of-lease or end-of-useful-life terms, and total cost of ownership computation.
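The total cost of ownership computation mentioned above combines asset-record data with measured energy use. A hedged sketch; the field names, flat electricity rate and cost components are illustrative assumptions, not any vendor's formula:

```python
def total_cost_of_ownership(purchase_price: float,
                            annual_maintenance: float,
                            avg_power_kw: float,
                            electricity_rate_per_kwh: float,
                            years: int) -> float:
    """Rough TCO: purchase + maintenance + metered energy cost over the asset's life."""
    hours_per_year = 24 * 365
    energy_cost = avg_power_kw * hours_per_year * electricity_rate_per_kwh * years
    return purchase_price + annual_maintenance * years + energy_cost

# A server bought for $8,000, with $500/year maintenance, drawing 0.4 kW
# on average at $0.10/kWh, kept for 4 years:
print(total_cost_of_ownership(8000, 500, 0.4, 0.10, 4))  # 11401.6
```

The point of tying this to energy monitoring is that the power term comes from measured draw rather than nameplate ratings, which materially changes the result.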

Structured cabling management: This includes both database and graphical tracking of network cabling and patching, with either manual or automated data entry. Automated tracking requires special patch cords and panels.

Capacity planning or “what if?” scenarios: These features can examine the impact on power, cooling and equipment cabinet capacities by simulating the installation of new equipment in various locations. Some tools can show the locations where admins can add particular assets in response to a query.
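At its core, a placement query like this checks a proposed asset against each cabinet's remaining headroom. An illustrative sketch, assuming hypothetical per-cabinet power and rack-unit limits:

```python
def placements(cabinets, asset):
    """Return names of cabinets with enough free power (kW) and
    rack units to accept the proposed asset."""
    return [c["name"] for c in cabinets
            if c["power_kw_free"] >= asset["power_kw"]
            and c["rack_units_free"] >= asset["rack_units"]]

# Hypothetical inventory of cabinets and their remaining capacity.
cabinets = [
    {"name": "A1", "power_kw_free": 1.2, "rack_units_free": 4},
    {"name": "A2", "power_kw_free": 3.0, "rack_units_free": 10},
    {"name": "B1", "power_kw_free": 0.5, "rack_units_free": 12},
]
new_server = {"power_kw": 0.8, "rack_units": 2}
print(placements(cabinets, new_server))  # ['A1', 'A2']
```

Commercial tools layer cooling capacity and airflow modeling on top of this, but the basic query shape is the same: filter candidate locations by every constrained resource at once.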

Data center visualization: This refers to both 2D and 3D representations of the data center floor and its equipment, with the ability to select and view individual cabinets, the equipment mounted in them, and in some cases, their usages, software complements and cable connectivity. When combined with a computational fluid dynamics air flow modeling program, it can also reveal hot spots and available cooling capacities.

Event management: This feature automatically logs recurring events so admins can analyze and correct problems before they become disasters. These tools may also provide trending information, such as the timeline of temperature rising in a particular location or the number of times an alarm occurred.
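Trend detection of the kind described above can be as simple as fitting a slope to logged readings. A sketch using a least-squares slope over hypothetical temperature samples; the alert threshold is an assumption:

```python
def slope(readings):
    """Least-squares slope of evenly spaced readings, in units per sample."""
    n = len(readings)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(readings) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, readings))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Hypothetical temperatures logged at one location, one reading per interval.
temps = [22.0, 22.4, 22.9, 23.5, 24.2]
rising = slope(temps) > 0.25  # flag a climb faster than 0.25 degrees/interval
print(slope(temps), rising)  # 0.55 True
```

The value of doing this inside the event manager is that a steady climb gets flagged while it is still a trend, rather than after it trips a hard alarm.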

Workflow optimization: The ability to use predictive tools in combination with work orders to optimize and track equipment movement, additions, changes and software upgrades for maximum operational efficiency. Systems may include a historical database of changes, upgrades and equipment component replacements and repairs that can be useful to maximize uptime.

Centralized, remote monitoring and reporting: An intuitive user interface that provides operational information in an easily understandable form, along with the ability to drill down to any level of detail. Information should also be available remotely via APIs with a high level of security. Remote monitoring can optionally be read-only to prevent unauthorized access to control functions.
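The read-only option above amounts to separating read actions from control actions at the authorization layer. An illustrative sketch; the role names and action lists are assumptions, not any product's API:

```python
# Hypothetical action sets: reads are safe for all roles, controls are not.
READ_ACTIONS = {"get_status", "get_history", "get_alarms"}
CONTROL_ACTIONS = {"set_setpoint", "power_cycle", "ack_alarm"}

def authorize(role: str, action: str) -> bool:
    """Permit reads broadly; restrict control actions to privileged roles."""
    if action in READ_ACTIONS:
        return role in {"viewer", "operator", "admin"}
    if action in CONTROL_ACTIONS:
        return role in {"operator", "admin"}  # viewers stay read-only
    return False  # deny anything unrecognized

print(authorize("viewer", "get_status"))   # True
print(authorize("viewer", "power_cycle"))  # False
```

A deny-by-default rule for unrecognized actions, as in the last line of the function, is the conservative choice when the API surface may grow over time.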

Capabilities vary from vendor to vendor. But above all, a solid DCIM platform in today’s market should meet several expectations. The DCIM platform should be a single integrated software suite that offers centralized monitoring and management of both the facilities and the IT aspects of the enterprise. It should also be modular, enabling users to acquire only what they can initially support and providing the option to expand these capabilities as an organization grows. It should be compatible with any vendor’s equipment, such as UPS systems, liquid flow sensors, network and storage devices and so on. A DCIM system should also provide secure remote access. The information that DCIM tools deliver should be easy to understand, with the opportunity to drill down to granular details.

The data center is one of the most valuable assets any organization can have, and managing it effectively has become a complex task. Improper management can be costly and even catastrophic, especially since DCIM platforms can reach into every part of the enterprise. Organizations currently without a DCIM platform should begin to investigate the capabilities of DCIM tools to stay ahead of the curve, as optimizing computing resources and maintaining reliability in the data center become increasingly critical.