Network performance monitoring tools that collect traffic flows (e.g., NetFlow, IPFIX, sFlow) provide much greater insight into what is happening in your network. One of the primary reasons for deploying network performance monitoring tools is to gain insight into the quality of the end user's experience and how applications are performing. This key capability is lacking in a network topology mapping tool. Simply put, network topology mapping is not a monitoring tool.

Ulrica de Fort-Menares, vice president of product strategy at LiveAction, raised the subject of dynamic network topology mapping tools, saying that customers often ask her to compare them with network performance monitoring tools. The topic will be discussed at the LiveAction Annual User Conference, taking place in September in San Francisco.

First, let's explain what a dynamic network topology mapping tool is, as described by the LiveAction executive:

A dynamic network topology map provides an interactive, animated visualization of the connections between network elements and end systems. Many network management solutions use discovery capabilities to find what elements exist in the network. Some go a step further by discovering how those elements are connected and assembling them into a dynamic network topology map. Using a combination of protocols such as Cisco Discovery Protocol (CDP)/Link Layer Discovery Protocol (LLDP), SNMP data, and information collected from the command-line interface (CLI), network information can be displayed on the map to drive troubleshooting diagnoses both in real time and historically. Examples of useful diagnostic information include interface errors and router-down and link-down events. By building a model from this information, you can map a traffic path between point A and point B, and the ability to perform path analysis makes troubleshooting more intuitive. For the purposes of this discussion, we can call this type of network management tool a network topology mapping tool. Network topology mapping tools are particularly good for network documentation, and they ease network troubleshooting when you suspect the problem is caused by a topology or configuration change.
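The model-building and path-analysis steps described above can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation: the device names and adjacency pairs are invented, standing in for neighbor relationships a discovery process might collect via CDP/LLDP, and the path lookup is a plain breadth-first search over the resulting graph.

```python
from collections import defaultdict, deque

# Hypothetical CDP/LLDP neighbor adjacencies, as a discovery process
# might collect them. All device names are invented for illustration.
adjacencies = [
    ("edge-sw1", "core-r1"),
    ("core-r1", "core-r2"),
    ("core-r2", "edge-sw2"),
    ("core-r1", "dist-r1"),
    ("dist-r1", "edge-sw2"),
]

# Build an undirected adjacency map: the topology "model".
graph = defaultdict(set)
for a, b in adjacencies:
    graph[a].add(b)
    graph[b].add(a)

def map_path(src, dst):
    """Breadth-first search: return one shortest hop-by-hop path src -> dst."""
    seen = {src: None}  # node -> predecessor
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            # Walk predecessors backward to reconstruct the path.
            path = []
            while node is not None:
                path.append(node)
                node = seen[node]
            return path[::-1]
        for nbr in graph[node]:
            if nbr not in seen:
                seen[nbr] = node
                queue.append(nbr)
    return None  # destination unreachable in the model

# e.g. ['edge-sw1', 'core-r1', 'core-r2', 'edge-sw2'] (ties broken arbitrarily)
print(map_path("edge-sw1", "edge-sw2"))
```

Note that the path comes from the *model*, not from live traffic; if the model is stale or incomplete, the answer is too, which is exactly the distinction drawn later in this article.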

At a glance, network topology mapping tools appear to overlap with network performance monitoring tools. Both discover the network, present a network topology map, collect SNMP and CLI data from network elements, perform path analysis, and are used by network engineers for troubleshooting.

According to De Fort-Menares, what users really want is to compare network topology mapping tools with application-aware network performance monitoring tools before making a decision; she often gets this question from customers.

Gartner offers a good definition of network performance monitoring tools.

The comparison, with De Fort-Menares' comments, is summarized below:

| | Network Topology Mapping Tools | Application-Aware Network Performance Monitoring Tools |
|---|---|---|
| Primary data source | CLI, SNMP, CDP/LLDP | NetFlow, SNMP, packet capture, CLI |
| Data collection approach | Pull, on demand | Push, always-on monitoring |
| Troubleshooting approach | Build a model of how the network is constructed; compare configuration files and show-command output to identify changes that may have caused the problem. | Report observations from the network, reflecting what is truly happening in it. |
| Primary purpose of the topology diagram | Automate network documentation; automatically detect any changes in the network and keep the diagram up to date. | Overlay real application traffic on top of the topology diagram. |
| Path analysis | Typically interrogate the path between a pair of IP addresses using the model built from CLI and SNMP information. | Visualize all the traffic flows over multiple paths. |
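The "push & always-on" row of the comparison refers to exporters streaming flow datagrams at a collector continuously, rather than the collector polling on demand. As a hedged illustration of what a collector's receive side involves, the sketch below decodes only the fixed 24-byte NetFlow v5 packet header (the v5 header layout is documented by Cisco); the sample datagram values are synthetic, and a real collector would also decode the flow records that follow the header.

```python
import struct

# NetFlow v5 packet header layout (24 bytes, network byte order):
# version, record count, sys uptime (ms), unix secs, unix nsecs,
# flow sequence, engine type, engine id, sampling interval.
V5_HEADER = struct.Struct("!HHIIIIBBH")

def parse_v5_header(datagram: bytes) -> dict:
    """Decode the fixed NetFlow v5 header from an exported datagram."""
    (version, count, uptime, secs, nsecs,
     seq, etype, eid, sampling) = V5_HEADER.unpack_from(datagram)
    return {"version": version, "records": count,
            "unix_secs": secs, "sequence": seq}

# An always-on collector would bind a UDP socket (port 2055 by convention)
# and decode each datagram as the exporters push them:
#
#   import socket
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.bind(("0.0.0.0", 2055))
#   while True:
#       data, exporter = sock.recvfrom(65535)
#       header = parse_v5_header(data)
#       ...  # decode the flow records that follow the header

# Demo with a synthetic header (all values invented for illustration):
sample = V5_HEADER.pack(5, 3, 123456, 1_700_000_000, 0, 42, 0, 0, 0)
print(parse_v5_header(sample))
# {'version': 5, 'records': 3, 'unix_secs': 1700000000, 'sequence': 42}
```

The contrast with the "pull" column is that nothing here is requested by the collector: the data arrives whenever the exporters have flows to report, which is what makes the monitoring continuous.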

The LiveAction executive states that "there is a perception that router-based traffic-flow collection and analysis is impractical to turn on at every interface and device in the network, leading to blind spots. In reality, it is not necessary to turn on flow collection everywhere, although the more observation points you have, the better the visibility. It is also increasingly not possible to enable flow collection and analysis at every node due to administrative control issues with managed services and the Internet. A model-based network topology mapping tool is going to have a hard time dealing with this kind of black hole of information, with no CLI or SNMP access to the network elements, whereas a traffic-measurement-centric view is able to stitch together a picture from the disparate parts."
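The "stitching" idea above can be made concrete with a small sketch. The flow records and exporter names below are entirely hypothetical: the point is only that grouping records by their 5-tuple (source, destination, ports, protocol) across whatever observation points do export flows yields a per-flow picture, even when some hops (here, an unmanaged ISP segment) export nothing.

```python
from collections import defaultdict

# Hypothetical flow records from three observation points. Field values and
# device names are invented; an unmanaged ISP hop between "wan-edge" and
# "dc-core" exports nothing, yet the flow is still stitched together.
records = [
    {"exporter": "branch-r1", "src": "10.1.1.5", "dst": "172.16.9.9",
     "sport": 51514, "dport": 443, "proto": 6},
    {"exporter": "wan-edge",  "src": "10.1.1.5", "dst": "172.16.9.9",
     "sport": 51514, "dport": 443, "proto": 6},
    {"exporter": "dc-core",   "src": "10.1.1.5", "dst": "172.16.9.9",
     "sport": 51514, "dport": 443, "proto": 6},
    {"exporter": "dc-core",   "src": "10.2.7.8", "dst": "172.16.9.9",
     "sport": 40022, "dport": 443, "proto": 6},
]

# Group records by 5-tuple: each group lists where that flow was observed.
flows = defaultdict(list)
for r in records:
    key = (r["src"], r["dst"], r["sport"], r["dport"], r["proto"])
    flows[key].append(r["exporter"])

for key, observed_at in flows.items():
    print(key, "seen at:", observed_at)
```

No CLI or SNMP access to the missing hop is needed; the measurement-centric view degrades to fewer observation points rather than to a blind spot in the model.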

After many years in the networking industry, with hands-on experience and several patents, she concludes that "network performance monitoring tools that collect traffic flows (e.g., NetFlow, IPFIX, sFlow) provide much greater insight into what is happening in your network. One of the primary reasons for deploying network performance monitoring tools is to gain insight into the quality of the end user's experience and how applications are performing. This key capability is lacking in a network topology mapping tool. Simply put, network topology mapping is not a monitoring tool!"

To register for the LiveAction User Conference (dinner and a San Francisco Giants ticket included), De Fort-Menares invites you to go to http://liveaction.com/livex/.

Georgiana Comsa is the founder of Silicon Valley PR, a PR agency focused exclusively on the data infrastructure markets. Her decision to found Silicon Valley PR grew out of her own experience as a corporate PR professional working with other agencies: she saw a need for a specialized, rather than general, tech PR firm with the media, analyst, and vendor relationships to benefit its clients. With Silicon Valley PR, companies leverage traditional and digital media relations to generate highly targeted press coverage that contributes to tangible business wins, helping them launch and grow their businesses.
