Marple makes network monitoring much more efficient

August 26, 2017

3 Min Read

Researchers at MIT, Cisco Systems, and Barefoot Networks have developed a new approach to network traffic monitoring that not only provides flexibility over data collection but also reduces the circuit complexity of the router and the number of external analytic servers required for the analysis of data.

The new system has been named Marple and it consists of a programming language enabling network operators to specify a wide range of network-monitoring tasks and a small set of simple circuit elements that can execute any task specified in the language. Simulations using actual data center traffic statistics suggest that, in the data center setting, Marple should require only one traffic analysis server for every 40 or 50 application servers.

Marple attempts to solve one of the thorniest problems of network monitoring. With traditional traffic analysis and monitoring, there is simply too much data to handle: the infrastructure required just to store the enormous volume of network traffic data would by itself make monitoring cost-prohibitive. Marple addresses this and several other problems related to network traffic monitoring and analysis.

The researchers designed the Marple language and the circuitry required to implement Marple queries with one eye on the expressive flexibility of the language and the other on the complexity of the circuits required to realize that flexibility. The idea behind Marple is to do as much analysis as possible on the router itself without causing network delays, and then to send the external server summary statistics rather than raw packet data, yielding huge savings in both bandwidth and processing time.

Marple is designed to individually monitor the transmissions of every computer sending data through a router, a number that can easily top 1 million. The problem is that a typical router has enough memory to store statistics on only 64,000 connections or so. To solve this problem, Marple uses a variation on the common computer science technique of caching, in which frequently used data is stored close to a processing unit for efficient access.

Routers have a cache in which they maintain statistics on the data packets seen from some fixed number of senders, say, 64,000. If the cache is full and the router receives a packet from yet another sender, the 64,001st, it simply evicts the data associated with one of the previous 64,000 senders, shipping it off to a support server for storage. If it later receives another packet from the sender it evicted, it starts a new cache entry for that sender. This approach works only if the newly evicted data can be merged with the data already stored on the server. Merging is trivial for a statistic such as a raw packet count, but far less straightforward if the statistic of interest is, say, a weighted average of the number of packets processed per minute or the rate at which packets have been dropped by the network.
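For the easy case, the evict-and-merge idea can be sketched in a few lines. This is purely illustrative, not Marple's implementation: the cache size, the eviction policy, and all names here are made up, and the statistic is a simple additive packet count, for which merging is just addition.

```python
# Toy sketch of evict-and-merge for an additive statistic (packet counts).
# The router keeps a fixed-size per-sender cache; when it is full, one
# entry is evicted and shipped to a backing server, where it merges by
# simple addition. A real router might hold ~64,000 entries.
CACHE_SIZE = 4

cache = {}          # sender -> packet count (on the router)
server_store = {}   # sender -> packet count (on the analysis server)

def evict_to_server():
    # Illustrative policy: evict an arbitrary entry. Because counts are
    # additive, merging with the server's copy is just addition.
    sender, count = cache.popitem()
    server_store[sender] = server_store.get(sender, 0) + count

def on_packet(sender):
    if sender not in cache and len(cache) >= CACHE_SIZE:
        evict_to_server()
    cache[sender] = cache.get(sender, 0) + 1

for s in ["a", "b", "c", "d", "e", "a", "e"]:
    on_packet(s)

def total(sender):
    # A sender's true count is split between router and server.
    return cache.get(sender, 0) + server_store.get(sender, 0)
```

Whatever entry happens to be evicted, the combined router-plus-server count for each sender is preserved; the hard part, as the article notes, is statistics that do not merge by simple addition.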

The researchers show that merging is always possible for statistics that are "linear in state." "Linear" means that any update to the statistic involves multiplying its current value by one number and then adding another number to that product. The "in state" part means that the multiplier and the addend can themselves be the results of mathematical operations performed on some number of previous packet measurements.