As networks become more virtualized, software-defined, and therefore automated, visibility into those networks is undergoing equally dramatic shifts.

An automated data center still must be “monitorable,” but not in the ways we have monitored data centers in the past. What role does visibility play in managing automated data centers, and how can future network performance monitoring and diagnostics (NPMD) solutions help NetOps overcome the associated challenges? In this article we will explore automated data centers, how they are being shaped by machine learning, software-defined networking, and public/private cloud migrations, and the role next-gen NPMD solutions play in delivering visibility.

To be clear, “automated” does not mean lacking human interaction or oversight. In fact, humans drive the automation in response to broader, more complex networks and fewer staff available to manage them. Data center automation is the efficiency gained by adding some level of automation to routine network management processes or procedures. It employs technologies ranging from scripting to network virtualization and software-defined networking, but it is not itself a technology. A human must still identify the key areas for automation and choose the best technologies for each unique situation.

Machine learning is a key technology in driving, and accelerating, data center automation. For example, let’s examine network baselining, one of the most manual operations employed in NetOps today. Baselines come in many forms, with overall utilization and application usage and performance being the big two. Developing baselines involves many steps, including collecting and storing the appropriate data over a statistically significant time frame; extracting and analyzing the data; visualizing the results in simple, actionable formats; comparing current data with baseline data; and determining appropriate courses of action depending on the deviation of current performance indicators versus baselines.
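The collect, baseline, and compare loop described above can be sketched in a few lines of Python. The utilization samples and the two-standard-deviation threshold here are purely illustrative, not a recommendation:

```python
from statistics import mean, stdev

def baseline(history):
    """Summarize historical utilization samples into a simple baseline."""
    return mean(history), stdev(history)

def deviation_check(current, history, n_sigmas=2.0):
    """Flag a current reading that deviates from the baseline by more
    than n_sigmas standard deviations."""
    mu, sigma = baseline(history)
    return abs(current - mu) > n_sigmas * sigma

# Illustrative link-utilization samples (percent), e.g. one per hour.
history = [41, 39, 44, 40, 42, 38, 43, 41, 40, 42]
print(deviation_check(41, history))   # within the baseline -> False
print(deviation_check(75, history))   # well outside it -> True
```

In a real deployment the "determine the course of action" step would hang off that boolean: alert, re-route, or simply log, depending on policy.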

This overall process is quite onerous, and given all of the other responsibilities NetOps carries today, it is often overlooked. Machine learning, with its ability to gather data over time, automatically model a system, and then predict future trends, is exactly what is needed to automate baseline analysis. Though the technology is still evolving, current developments in the market suggest that solid machine learning products for automating baseline analysis are only a few years away.

Any machine learning engine is only as good as the data fed into it, so networks need comprehensive visibility tools that provide high-quality data before machine learning can deliver value. Every network is different, so an algorithm must do a great deal of baselining and learning before it can produce good recommendations, and all of that requires network data, the same data that feeds network visibility solutions.

Flow-centric data is the best data available today to feed the machine learning engine, but it must be more detailed than the typical 5-tuple data that comes from NetFlow. For example, technologies like Cisco Flexible NetFlow (FNF), Cisco Application Visibility and Control (AVC), and Cisco Medianet build on the basic 5-tuple flow data of the past, and there are several network visibility tools that can collect and analyze these data sets, providing both better visibility and better data for machine learning algorithms.
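To make the distinction concrete, here is a sketch of a classic 5-tuple record next to an enriched record. The extra field names are hypothetical stand-ins for the kind of application and performance detail that exports such as FNF and AVC can carry, not the actual export formats:

```python
from dataclasses import dataclass

@dataclass
class FlowRecord:
    """The classic 5-tuple that identifies a flow."""
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str

@dataclass
class EnrichedFlowRecord(FlowRecord):
    """Illustrative extra fields of the kind richer flow exports can
    carry; the names here are hypothetical."""
    application: str = "unknown"  # e.g. identified by the exporter
    latency_ms: float = 0.0       # response-time indicator
    retransmits: int = 0          # TCP health indicator

rec = EnrichedFlowRecord("10.0.0.5", "10.0.1.9", 51514, 443, "tcp",
                         application="crm", latency_ms=12.5)
```

The enriched fields are what let both a dashboard and a machine learning engine reason about application behavior rather than just endpoints and ports.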

Software-defined networking (SDN) is another emerging technology having a significant impact on data center automation. Before SDN, most data center automation took the form of scripting CLI commands for every piece of equipment in the network, a tedious task typically undertaken only where automation was absolutely essential.

But what if there was a control layer in the network that could provide a single, simple, and modern control interface for all of the equipment in the infrastructure layer? Enter SDN. Although the promises sound a bit grandiose, SDN is delivering on them, and industry adoption is proceeding much faster than many imagined.
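A controller's northbound interface reduces a fleet-wide change to one declarative call. The sketch below uses a made-up intent schema to contrast that with per-device CLI scripting; the field names and device IDs are hypothetical:

```python
import json

def build_vlan_intent(vlan_id, name, device_ids):
    """One declarative intent that the controller fans out to every
    device, replacing a hand-written CLI script per box."""
    return {
        "intent": "create-vlan",
        "vlan_id": vlan_id,
        "name": name,
        "devices": device_ids,
    }

intent = build_vlan_intent(120, "storage", ["leaf-1", "leaf-2", "spine-1"])
payload = json.dumps(intent)
# In practice this payload would be POSTed to the controller's
# northbound REST API. Before SDN, the same change meant logging into
# each of the three devices and scripting the VLAN commands on each.
```

The design point is that the intent describes the desired state once; the controller, not the operator, owns the per-device translation.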

In a software-defined network, IP addresses and the number of instances of servers change quickly, which limits traditional methods of network monitoring and makes visualization based on flow data vitally important. And as with the data needed to feed machine learning, depth of flow-based data is essential for SDN network monitoring, requiring much more than the simple 5-tuple data from NetFlow. Flow data provides what’s needed to map out the network, and packet data will still be required for in-depth troubleshooting. More consolidated NPMD tools that use multiple types of data will be better equipped to deal with data center automation based on SDN.
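As a toy illustration of using flow data to map out the network, the following sketch aggregates flow records into a conversation map showing who talks to whom and how much; the addresses and byte counts are made up:

```python
from collections import defaultdict

def map_conversations(flows):
    """Aggregate flow records into a talker map keyed by (src, dst).
    Each flow here is a (src, dst, byte_count) tuple."""
    edges = defaultdict(int)
    for src, dst, nbytes in flows:
        edges[(src, dst)] += nbytes
    return dict(edges)

# Illustrative flow export: source, destination, byte count.
flows = [
    ("10.0.0.5", "10.0.1.9", 1200),
    ("10.0.0.5", "10.0.1.9", 800),
    ("10.0.2.7", "10.0.1.9", 4096),
]
print(map_conversations(flows))
# {('10.0.0.5', '10.0.1.9'): 2000, ('10.0.2.7', '10.0.1.9'): 4096}
```

Because the map is keyed by conversation rather than by device, it stays valid even as server instances appear and disappear, which is exactly the property SDN monitoring needs.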

Unlike machine learning and SDN, which enable data center automation, public/private cloud migration is driving the need for more of it. Just about every enterprise is considering or implementing some level of public/private cloud deployment. To take full advantage of these deployments, network routes are shifting rapidly from hub-and-spoke designs to direct access from remote offices to cloud offerings, whether public or private. Though this might look like a simplification of the network, especially from the user’s perspective, it greatly increases the configuration, monitoring, and management burden on NetOps. Any automation of direct access to cloud resources is tremendously helpful to the network team.

Both direct cloud access by end users and the intercloud operation of applications put a strain on today’s network visibility solutions as well. Direct cloud access from remote offices is driving the need for SD-WAN solutions, which both optimize the end-user experience and reduce the overall cost of provider-delivered network connectivity.

But SD-WAN creates highly dynamic network routes that most legacy network visibility solutions are ill-equipped to handle. Virtualization creates blind spots of its own: a CRM application requesting data from a database entirely within the same virtual environment is invisible to both network and application performance management. Cloud migration therefore demands not only more automation of data center management but also new, modern solutions for network visibility.

Automation requires visibility into the process: is it working as it should?

Data center automation, and the technologies driving it, put increased demands on network visibility, which means multiple types of network data will be required to monitor and troubleshoot effectively. In most cases, enhanced flow-based technologies provide the data needed to monitor and manage these increasingly complex networks. But the specifications for flow-based data are designed with speed and breadth in mind.

Flow-based data can indicate when, and even where, a problem is happening, but for complex issues it lacks the detail needed for troubleshooting; in those cases, IT needs network packets to get to the root cause. As data center automation and the technologies that enable it become more prevalent, network monitoring also needs to be reconsidered, with the goal of reducing tool sprawl and finding a single solution that provides both breadth and depth.

Conclusion

A highly automated data center requires accurate data about the network itself to learn and implement policies correctly, so comprehensive network visibility from the data center all the way to the network edge will be essential for network automation to be successful.

Although there will need to be changes made to the tools and methods of network monitoring as public/private cloud use, software-defined networking and machine learning tools develop further, I believe network monitoring will continue to be a core part of networking in the new automated age.