When averages don’t tell the whole infrastructure performance story

By Nick York, Product Manager, Analytics

Statisticians have long since moved past raw averaging as a method for organizing and analyzing data. In some cases it still offers a fair assessment of a dataset, but more often it paints an incomplete picture. For example, in the first few weeks of a baseball season, players’ batting averages are typically either very high or very low: a few good games can send a batting average skyrocketing, while a few bad days at the plate quickly drag it down. That is also why players who appear in too few games are not eligible for awards tied to their batting averages. Averages are very sensitive to the sample size used in their calculation. Whether it’s baseball or an IT team trying to assess performance, relying on averages can be very misleading.
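The sample-size point is easy to see with a quick simulation. This is an illustrative sketch, not data from the post: the .250 "true" skill rate and the at-bat counts are invented for the example.

```python
import random

random.seed(7)

TRUE_RATE = 0.250  # hypothetical hitter whose underlying skill is a .250 average


def batting_average(at_bats):
    """Simulate `at_bats` plate appearances and return the observed average."""
    hits = sum(random.random() < TRUE_RATE for _ in range(at_bats))
    return hits / at_bats


# A handful of early-season at-bats swings wildly around the true rate;
# a full season's worth of at-bats settles close to it.
for n in (8, 40, 400):
    print(f"{n:>4} at-bats: {batting_average(n):.3f}")
```

The small samples can land far from .250 in either direction, while the 400 at-bat sample tends to sit near it, which is exactly why early-season averages (and small monitoring windows) are unreliable.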

This same practice of computing averages has long been standard for performance monitoring in IT. However, IT’s rapid shift to the cloud and the real-time expectations of end users make it more important than ever for companies to manage their IT infrastructures meticulously. Critical application workloads must function properly at all times, especially since so many of these applications are tied directly to operations and business performance. Infrastructure performance management (IPM) that analyzes all of the data inputs – not just averages – makes monitoring less of a guessing game.

Averages often suggest that everything is working properly because the majority of transaction requests go off without a hitch. A few instances of unacceptably high latency can be washed out by the average, yet those are exactly the problem transactions that must be addressed to deliver a positive end-user experience. Averaging hides these anomalies, but the problems the latency spikes cause still exist.
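A hypothetical sketch of how a handful of slow requests can hide inside a healthy-looking average. The request counts and latencies below are invented for illustration; a percentile (here the 99th, via the nearest-rank method) surfaces what the mean conceals.

```python
import math

# 980 fast requests at 20 ms, plus 20 slow outliers at 2000 ms (illustrative).
latencies_ms = [20] * 980 + [2000] * 20

average = sum(latencies_ms) / len(latencies_ms)

# 99th percentile (nearest-rank method): the latency that 99% of requests beat.
rank = math.ceil(0.99 * len(latencies_ms))
p99 = sorted(latencies_ms)[rank - 1]

print(f"average: {average} ms")  # 59.6 ms -- looks healthy
print(f"p99: {p99} ms")          # 2000 ms -- what the slowest users actually feel
```

The average stays under 60 ms even though 2% of users waited two full seconds, which is the kind of anomaly an average-only dashboard never shows.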

Finding more effective methods of managing IT performance is vital in the modern enterprise IT landscape. Businesses are very conscious of optimizing their IT costs, and they balance that need with the ongoing mandate for consistent innovation to improve processes. By understanding true infrastructure performance, not just averages, IT teams ease the burden of outages and other problems, identifying and fixing even the most unusual issues in real time.

When teams look beyond the averages, they gain a deeper understanding of performance issues and can make improvements to eliminate peaks and valleys that might otherwise go unnoticed. Moving from outdated forms of performance monitoring to real-time problem-solving provides a better overall experience for IT teams and end users alike.