According to the Uptime Institute (an independent division of the research company 451 Group), “the market for Data Center Management Systems, of which Data Center Infrastructure Management (DCIM) systems represent the largest share, will grow by $1.18 Billion by 2020.” What will drive this growth? IT and business executives have realized that millions of dollars in data center energy and operational costs can be saved through improved physical infrastructure planning, aided by state-of-the-art data monitoring and advanced data analytics.

On-premise DCIM tools can proactively identify potential physical infrastructure problems and predict how they might impact specific IT loads by correlating power, cooling, and space resources to individual servers. Through model-based simulation, these tools simplify complex issues such as capacity planning and server placement, factoring in variables such as power utilization, heat dispersion, and network access. They can answer questions such as “What would be the impact if I moved that server?” or “What would happen if this component were to fail?” In the case of a loss of cooling capacity, for example, a simulation can show what happens if the data center temperature rises past a given threshold.
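
To make the idea concrete, here is a minimal what-if sketch. It is purely illustrative, not any vendor’s simulation engine: it assumes each rack has a known power and cooling capacity and simply checks whether a proposed move would exceed either, whereas real tools model airflow, redundancy, and network access in far more detail. All rack names and figures are invented.

```python
# Minimal what-if sketch for server placement (illustrative only; real DCIM
# engines use detailed airflow and power-path models).

RACKS = {
    "A1": {"power_kw": 8.0, "cooling_kw": 9.0, "used_kw": 6.5},
    "B3": {"power_kw": 8.0, "cooling_kw": 9.0, "used_kw": 3.0},
}

def what_if_move(server_kw: float, dest: str) -> str:
    """Report whether moving a server drawing server_kw into rack `dest`
    would exceed that rack's power or cooling capacity."""
    rack = RACKS[dest]
    projected = rack["used_kw"] + server_kw
    if projected > rack["power_kw"]:
        return f"Unsafe: {dest} power would reach {projected:.1f} kW (limit {rack['power_kw']} kW)"
    if projected > rack["cooling_kw"]:
        return f"Unsafe: {dest} cooling would reach {projected:.1f} kW (limit {rack['cooling_kw']} kW)"
    return f"OK: {dest} has {rack['power_kw'] - projected:.1f} kW of power headroom left"

print(what_if_move(2.0, "B3"))  # -> "OK: B3 has 3.0 kW of power headroom left"
```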

In addition to on-premise tools, a new class of “cloud-based DCIM” or “Data Center Management as a Service” (DMaaS) tools is gaining prominence. These tools monitor, gather data, and perform analysis so that data center administrators can understand, at a component level, how their data center is operating. One example is Schneider Electric’s StruxureOn, a cloud-based data center monitoring solution that continuously collects raw machine data from physical infrastructure, looks for patterns, detects anomalies, and can draw conclusions regarding future equipment behavior.
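
The analytics behind commercial DMaaS offerings are proprietary, but the basic pattern-spotting idea can be sketched generically. The toy example below, with made-up telemetry, flags readings that deviate sharply from recent history using a simple rolling z-score; it stands in for much richer models in real services.

```python
# Generic anomaly-detection sketch on device telemetry (illustrative; the
# actual analytics behind tools like StruxureOn are proprietary).
from statistics import mean, stdev

def anomalies(readings: list[float], window: int = 20, threshold: float = 3.0):
    """Flag readings more than `threshold` standard deviations away from the
    trailing window's mean (a simple rolling z-score test)."""
    flagged = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            flagged.append((i, readings[i]))
    return flagged

# Example: a UPS battery temperature series with one sudden spike.
temps = [25.0 + 0.1 * (i % 5) for i in range(40)]
temps[35] = 31.0
print(anomalies(temps))  # -> [(35, 31.0)]
```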

Access to more (lots more) performance data is the new critical success factor

It is now possible for any data center, whether colo, on-premise, or cloud, to capture performance data on a daily basis. The potential also exists to benchmark that data against similar outside data centers.

Past efforts at tracking and benchmarking this data have been both limited and cost-prohibitive. DMaaS tools, however, enable data collection at a much larger scale. By leveraging performance data from a larger pool of data centers, owners and operators can make more informed decisions about which parts of their data center need improvement.

How might such a benchmarking system be deployed? A third party could gather data from multiple data centers and use it to provide anonymized benchmarking information, giving participating data center owners access to more precise, field-tested physical infrastructure performance data.
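
As a hypothetical illustration of what such a service might report, the sketch below ranks one site’s PUE against an anonymized peer set. All figures are invented; a real service would normalize for climate, size, and redundancy tier.

```python
# Hypothetical benchmarking sketch: where does one site's PUE fall among
# anonymized peers? (All values below are made up for illustration.)
def percentile_rank(value: float, peers: list[float]) -> float:
    """Percentage of peer values this value beats (lower PUE is better)."""
    return 100.0 * sum(1 for p in peers if value < p) / len(peers)

peer_pue = [1.25, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 2.1, 2.4]  # anonymized peers
print(f"Better than {percentile_rank(1.5, peer_pue):.0f}% of peers")  # -> 70%
```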

The current and future benefits of big data and predictive simulation

Both on-premise DCIM simulation tools and DMaaS tools improve IT room allocation of power and cooling, provide predictive impact analysis of various IT room components, and leverage historical data to improve future IT room performance.

One benefit of incorporating both on-premise DCIM and DMaaS is the possibility of performing predictive maintenance. The ability to say “all the signs tell us that this UPS will fail within the next 3 months so I’m going to do something about it now” saves money through reduced downtime.
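
How might a tool arrive at such a statement? One simple approach, sketched below with invented battery data, is to fit a trend line to a degrading health metric and extrapolate to a failure threshold; production systems would use far richer models and more signals.

```python
# Sketch of trend-based failure prediction (illustrative only). Fits a
# least-squares line to a degrading health metric sampled monthly and
# estimates when it crosses a failure threshold.
def months_to_threshold(history: list[float], threshold: float) -> float:
    """Extrapolate the linear trend of `history` to reach `threshold`."""
    n = len(history)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(history) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history))
             / sum((x - x_mean) ** 2 for x in xs))
    if slope >= 0:
        return float("inf")  # metric is not degrading
    return (threshold - history[-1]) / slope

# UPS battery capacity (% of nominal), sampled monthly; alarm below 80%.
capacity = [97.0, 95.5, 94.2, 92.8, 91.1, 89.9]
print(f"~{months_to_threshold(capacity, 80.0):.1f} months until threshold")  # ~6.9
```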

New Data Center Prediction and Simulation Tools Cut Costs and Boost Uptime

When it comes to managing data center operations, system administrators often prioritize uptime. Business line executives, on the other hand, accept uptime as a given and often focus on operational cost. One tool that satisfies both requirements is data center infrastructure management (DCIM) software, which has evolved in recent years to become a critical component of both uptime and cost control.

According to the Uptime Institute (a division of the 451 Group), the market for data center infrastructure management systems will grow to $7.5 billion by 2020. Why such growth? Newer management tools are designed to identify and resolve issues with minimal human intervention. By correlating power, cooling, and space resources to individual servers, today’s DCIM tools can, through simulation and prediction, proactively inform IT management systems of potential physical infrastructure problems and how they might impact specific IT loads. In virtualized and dynamic cloud environments, this real-time awareness of constantly changing power and cooling capacities is important for safe server placement.

Modern planning tools can predict the impact of a new physical server on power and cooling distribution. Planning software tools also calculate the impact of moves and changes on data center space, and on power and cooling capacities.

These more intelligent tools also enable IT to inform the lines of business of the consequences of their actions before server provisioning decisions are made. Business decisions that result in higher energy consumption in the data center, for example, will affect carbon footprint and carbon taxes. Chargebacks for energy consumption are also possible with these new tools and can alter the way decisions are made by aligning energy usage to business outcomes.
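
As a rough illustration of how such a chargeback might be computed, the sketch below scales each business line’s metered IT energy by PUE to fold in facility overhead. This PUE-multiplier split is one common convention, not necessarily what any given tool uses, and the figures are invented.

```python
# Hypothetical energy chargeback sketch: attribute data center energy cost
# to business lines by metered IT consumption, scaled by PUE.
def chargeback(it_kwh_by_line: dict[str, float], pue: float,
               rate_per_kwh: float) -> dict[str, float]:
    """Each line pays for its IT energy plus a share of facility overhead,
    which PUE folds in as a simple multiplier."""
    return {line: kwh * pue * rate_per_kwh
            for line, kwh in it_kwh_by_line.items()}

bills = chargeback({"trading": 12_000, "analytics": 8_000},
                   pue=1.6, rate_per_kwh=0.12)
print(bills)  # {'trading': 2304.0, 'analytics': 1536.0}
```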

Below are some examples of the practical uptime enhancing and cost saving advantages of DCIM systems:

They provide an up-front assessment of risk based on calculation-driven simulation, rather than making decisions based only on “gut feel”. By simulating the consequences of power and cooling device failure on IT equipment, they help to identify critical business application impacts.

They help to avoid potential downtime resulting from overloaded branch circuits or hot spots. This is accomplished by generating recommended installation locations for rack-mount IT equipment. The location selection is based on available power, cooling, space capacity, and network ports.

They help operators immediately identify which servers will be affected if a particular rack or UPS fails, avoiding discovery through trial and error. They illustrate the power path, from UPS to rack to individual devices within the rack, and they also measure load and rack capacity.
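
Conceptually, this is a walk over the power-path topology. A minimal sketch, with a hypothetical topology and device names:

```python
# Sketch of power-path impact analysis (illustrative): given a feed topology,
# find every device downstream of a failed UPS or rack PDU.
POWER_PATH = {           # parent -> children (hypothetical topology)
    "UPS-1": ["PDU-A", "PDU-B"],
    "PDU-A": ["rack-101", "rack-102"],
    "PDU-B": ["rack-103"],
    "rack-101": ["web-01", "web-02"],
    "rack-102": ["db-01"],
    "rack-103": ["cache-01"],
}

def affected_by(device: str) -> list[str]:
    """Depth-first walk of the power path below a failed device."""
    downstream = []
    for child in POWER_PATH.get(device, []):
        downstream.append(child)
        downstream.extend(affected_by(child))
    return downstream

print(affected_by("PDU-A"))  # ['rack-101', 'web-01', 'web-02', 'rack-102', 'db-01']
```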

They help provide factual evidence, rather than conjecture, when an operator needs to determine which equipment was moved and when. They achieve this by creating an audit trail for all changes to assets and work orders for a specified range of time, including a record of alarms raised and alarms removed.

They can help save energy costs by indicating which IT and/or cooling assets are being underutilized in the data center or server room. This is accomplished through the identification of excess capacity (either IT or cooling) so that operators can determine which particular assets can either be decommissioned or used elsewhere.

They help the operator analyze whether management’s cost-cutting, energy-saving strategies are actually working. This is possible because they provide a Power Usage Effectiveness (PUE) value on a daily basis and track historical PUE.

They help operators to make informed decisions on which power and cooling sub-systems within the data center to optimize. Besides generating an overall PUE number, they provide a breakdown of how much energy each of the particular sub-systems is consuming.
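
Both of those calculations are straightforward once the energy data is metered. A small sketch with illustrative figures:

```python
# Sketch of daily PUE and sub-system energy breakdown (invented figures).
def pue(total_kwh: float, it_kwh: float) -> float:
    """PUE = total facility energy / IT equipment energy."""
    return total_kwh / it_kwh

day = {"it": 10_000, "cooling": 4_500, "power_distribution": 900, "lighting": 100}
total = sum(day.values())
print(f"PUE = {pue(total, day['it']):.2f}")        # PUE = 1.55
for subsystem, kwh in day.items():
    print(f"{subsystem}: {100 * kwh / total:.1f}% of total energy")
```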

Legacy reporting systems, designed to support traditional data centers, are no longer adequate for new “agile” data centers that need to manage constant capacity changes and dynamic loads. New DCIM tools improve IT room allocation of power and cooling (planning), provide rapid impact analysis when a portion of the IT room fails (operations), and leverage historical data to improve future IT room performance (analysis). For more information, download Schneider Electric White Paper 107 “How Data Center Infrastructure Management Software Improves Planning and Cuts Operational Costs”.

To learn more about DCIM, see Schneider Electric’s StruxureWare for Data Center Solutions.

A Piece of the Efficiency Puzzle: Integrated DCIM and Virtualization Management

If data center efficiency was ever off the map, it’s certainly back on it now. More than ever, companies across the board are being pushed to be “green” not just to save money, but to be good corporate citizens and live up to new expectations.

However, that effort is at odds with a stark reality: data centers are woefully underutilized with respect to compute power. A recent New York Times story on data centers quoted the former CTO of Viridity Software (now part of Schneider Electric) as saying that in a sample of 333 servers in one data center, nearly 75% of them were using less than 10% of their computational power. But even at 10% load, a server still consumes roughly 50% of its maximum power draw. And the net effect is actually even worse, because the rest of the data center infrastructure is similarly inefficient at low loads.
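
A quick back-of-the-envelope using the article’s figures shows how steep the penalty is:

```python
# At 10% load a server still draws ~50% of peak power (per the article), so
# its work delivered per unit of energy is a fraction of the full-load value.
load, power_fraction = 0.10, 0.50
relative_efficiency = load / power_fraction
print(relative_efficiency)  # 0.2 -> roughly 5x more energy per unit of work
                            # than a fully loaded server
```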

Virtualization is part of the answer, and most of our customers have begun – and in many cases completed – initiatives to consolidate servers and decommission old hardware. But even among the most mature data centers with respect to virtualization, more can be done.

One largely untapped way to increase efficiency is to take advantage of the ability, present in most virtualization software, to dynamically shift loads from one physical server to another and dynamically provision hosts; that is, to power on and shut down host servers to meet demand. Done correctly, this would enable a company to load up some servers so that they are highly utilized, then shut down the servers that aren’t needed at any given time. And it would all happen automatically, based on current need.
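
The core capacity calculation is simple, as the sketch below shows with invented numbers: given aggregate VM demand and a safety margin, how many hosts must stay powered on?

```python
# Sketch of demand-based host provisioning (illustrative figures).
import math

def hosts_needed(vm_demand_cores: float, cores_per_host: int,
                 headroom: float = 0.25) -> int:
    """Hosts required to serve demand while reserving `headroom` spare capacity."""
    usable_per_host = cores_per_host * (1 - headroom)
    return math.ceil(vm_demand_cores / usable_per_host)

# 180 cores of aggregate VM demand on 32-core hosts with 25% headroom -> 8 hosts;
# in a 12-host cluster, the other 4 could be powered down until demand rises.
print(hosts_needed(180, 32))
```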

This capability isn’t new, but to date data center operators have been wary of implementing it. The idea of virtual machines moving from host to host on their own is, in essence, too scary: operators worry that they won’t know where a given job is running at a particular point in time, or that a VM might move to a host that is in some sense “unhealthy.”

This worry is not unfounded, because the virtualization software is unaware of many aspects of data center operation. Take, for instance, the act of powering on a server, for which most data centers have a rather lengthy approval process. While this may seem cumbersome, the checks and balances are there for a reason: to ensure the availability of the data center. Before powering on a server, operators need to confirm the availability of sufficient power, cooling capacity, and many more elements that are effectively unknown to the virtualization software.
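
Expressed as code, that kind of gate might look like the following sketch. The zone data is hypothetical, and a real checklist would cover many more conditions (maintenance windows, redundancy state, and so on):

```python
# Sketch of a pre-power-on gate informed by facility data (illustrative):
# refuse to start a host unless its zone has power and cooling headroom.
def safe_to_power_on(host_kw: float, zone: dict) -> bool:
    """Check projected draw against the zone's power and cooling capacity."""
    projected = zone["current_kw"] + host_kw
    return (projected <= zone["power_capacity_kw"]
            and projected <= zone["cooling_capacity_kw"])

zone_a = {"current_kw": 42.0, "power_capacity_kw": 50.0, "cooling_capacity_kw": 45.0}
print(safe_to_power_on(2.5, zone_a))  # True  (44.5 kW fits both limits)
print(safe_to_power_on(4.0, zone_a))  # False (46.0 kW exceeds cooling capacity)
```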

A good but unfortunate example of this comes from a customer I know who was experimenting with dynamic virtualization allocations. In a small area of the production data center, this customer was allowing the virtualization software to switch servers on and off. At one point, when the virtualization software switched on several servers at once, there wasn’t enough cooling in the area, which caused all of the host servers to overheat and shut down.

Data Center Infrastructure Management (DCIM) software offers a solution to the problem. DCIM tools give data center operators information about the physical state of their servers and their surroundings, including power and cooling, maintenance windows, planned changes, and many more outside events that can impact a server.

Indeed, DCIM also makes it possible to calculate which servers would be most beneficial to consolidate and which hosts it makes the most sense to turn on or off. For example, using DCIM tools, you might find that if you shut off a group of servers that are close to one another, you can lower cooling demand in that area, saving even more energy.
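
A toy version of that cooling-aware selection, with invented hosts and zones, might look like this:

```python
# Sketch of cooling-aware consolidation (illustrative): prefer shutting down
# idle hosts that share a cooling zone, so the zone's cooling can be reduced too.
from collections import defaultdict

hosts = [  # (name, cooling_zone, cpu_utilization)
    ("h1", "zone-A", 0.05), ("h2", "zone-A", 0.08),
    ("h3", "zone-B", 0.04), ("h4", "zone-B", 0.70),
]

idle_by_zone = defaultdict(list)
for name, zone, util in hosts:
    if util < 0.10:                      # candidate for shutdown
        idle_by_zone[zone].append(name)

# Report zones with the most shutdown candidates first.
for zone, names in sorted(idle_by_zone.items(), key=lambda kv: -len(kv[1])):
    print(f"{zone}: power off {names}")
# zone-A: power off ['h1', 'h2']   <- whole zone's cooling can be dialed back
# zone-B: power off ['h3']         <- h4 is still busy, so cooling must remain
```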

When DCIM tools integrate with virtualization management software, data center operators get the information they need to implement the kind of automation that can deliver real improvements in server utilization and energy efficiency, without the risk and the worry.