The Network as a Calculator

Jay Perrett, Aria Networks CTO, considers how using artificial intelligence techniques and treating the network as a ‘calculator’ can improve service and network optimization.

As the Chief Technical Officer at Aria Networks I am responsible for encouraging our team to create and see through new ideas that drive the business forward. I tend to think quite a bit about how we approach problems and can spend ages talking with our engineers, managers and customers about work-related issues.

This is my first blog, which really came about because our Marketing Director got fed up with me waxing lyrical to him on the benefits of artificial intelligence (AI) in network modeling, optimization and capacity planning. So, in the interests of stimulating some new thought processes I have put together a series of blog posts.

Before I go much further I’d like to introduce you to DA (Devil’s Advocate) which is something I use when thinking through problems like this. He has a habit of popping up at just the wrong time and throwing a spanner in the works that makes me rethink an idea.

This first post is concerned with using AI to solve network modeling and optimization problems. The thought of using AI for anything other than raising sentient machines or clever avatars may be alien to some, but really AI is all about applying rules to perform some action or solve some problem without the need for (much) human intervention.

There are many forms of AI, and others are much more qualified than I to précis the subject, but the area that interests me most, and the one employed by Aria Networks, is Evolutionary Learning. This is so cool it’s positively freezing.

Evolutionary programming is a branch of AI where the principles of Darwinian evolution are used to evolve a modeling process to solve a particular set of problems. In nature the objective is the 3 S’s: Sex, Sustenance and Successors (I may have just made that up). In computer modeling it could be an objective like working out the solution to a complex equation, grouping data into sets with similar characteristics or, in the case of network modeling, balancing many tens of constraints when placing services across a data network.
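To make that concrete, here is a minimal sketch of the evolutionary idea on a toy problem (maximize the number of 1-bits in a string). The function and variable names are mine for illustration, not anything from a real product:

```python
import random

def evolve(fitness, mutate, seed, generations=200):
    """Generic evolutionary loop: keep the fitter of parent vs. mutant."""
    best = seed
    for _ in range(generations):
        candidate = mutate(best)
        if fitness(candidate) >= fitness(best):
            best = candidate
    return best

# Toy objective: "what good looks like" = as many 1s as possible.
fitness = lambda bits: sum(bits)
# Variation: flip each bit with small probability.
mutate = lambda bits: [b ^ 1 if random.random() < 0.1 else b for b in bits]

random.seed(0)
result = evolve(fitness, mutate, seed=[0] * 20)
```

Notice that `evolve` knows nothing about bits or networks; swap in a different `fitness` and `mutate` and the same loop solves a different problem.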

For this approach to work it’s vital that there is some underlying process to model, and that is where I come to the crux of my post. In network modeling it is valid to simply think of a network as a calculator. Think of it like this:

The inputs are the service requirements and network state

The calculator is the network protocol and the physical rules

The output is the resolved service
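The three bullets above can be sketched as a single pure function. This is only an illustration of the shape of the idea (the names and the stub placement rule are mine, not Aria Networks’ actual model):

```python
def network_calculator(service, network_state):
    """Inputs: service requirements + network state.
    'Calculator': the protocol and physical rules (here, a stub rule).
    Output: the resolved service (here, a chosen path and its cost)."""
    # Stub rule: of the candidate paths with enough spare capacity,
    # pick the one with the lowest total link cost.
    feasible = [
        p for p in network_state["paths"]
        if min(network_state["capacity"][link] for link in p) >= service["bandwidth"]
    ]
    best = min(feasible, key=lambda p: sum(network_state["cost"][link] for link in p))
    return {"path": best, "cost": sum(network_state["cost"][link] for link in best)}

state = {
    "paths": [("A-B", "B-C"), ("A-C",)],
    "capacity": {"A-B": 10, "B-C": 10, "A-C": 2},
    "cost": {"A-B": 1, "B-C": 1, "A-C": 1},
}
resolved = network_calculator({"bandwidth": 5}, state)
```

The point is the signature, not the stub inside: inputs go in, rules are applied, a resolved service comes out, and nothing downstream needs to know how the rules work internally.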

This may or may not seem obvious to you BUT there are two amazing consequences of this hypothesis:

Computational techniques, like AI, can be applied to network modeling without needing to understand the detailed message handling or level of detail required by many current modeling solutions

It is possible to estimate properties like link utilization as a result of service placement WITHOUT having to do a routing exercise

Given that evolutionary programming requires an objective function that defines “what good looks like”, plus functions to modify solution candidates, you don’t have to know how to solve a problem; you just need to be able to describe what a good answer looks like. A good example is Dijkstra’s algorithm for IP routing. This is a very efficient algorithm (though not the only one) for determining the least-cost path through a network, and a programmer could very easily write a program to do so given a service demand and a network. Now suppose the cost of links changes with bandwidth. The problem gets more complicated because it is now circular rather than linear, so our programmer has to change the algorithm to get a good result. Then suppose we need to add delay minimization, load balancing, resource optimization and disjoint paths. Suddenly you have a pretty complex problem, and our poor developer gets his CV polished because he can’t work in an environment of constantly changing requirements.
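For reference, here is the textbook version of Dijkstra’s algorithm on a small graph, in its simplest fixed-link-cost form. It is exactly this hard-coded shortest-path logic that has to be rewritten each time the requirements above change:

```python
import heapq

def dijkstra(graph, src, dst):
    """Least-cost path; graph maps node -> {neighbour: link_cost}."""
    queue = [(0, src, [src])]  # (cost so far, current node, path taken)
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, c in graph.get(node, {}).items():
            if nbr not in seen:
                heapq.heappush(queue, (cost + c, nbr, path + [nbr]))
    return None

g = {"A": {"B": 1, "C": 4}, "B": {"C": 1, "D": 5}, "C": {"D": 1}}
cost, path = dijkstra(g, "A", "D")
```

Elegant for one objective; brittle once bandwidth-dependent costs, delay, load balancing and disjointness start piling on.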

If evolutionary programming were used, all that would need to change is the simple definition of what good looks like. In the first two cases above that definition did not change, as we still wanted least cost; only the “network calculation” process being modeled changed. For the third case we just had to extend the definition of “what good looks like”, with no change to the evolutionary algorithms.

[DA: “ok you’ve not convinced me yet. Isn’t an objective function as hard to write as the code that poor developer wrote before he left?”]

Good question, but in this case the answer is definitely no. The objective function is simply a way to evaluate different solutions and does not assume a particular solution methodology. So in cases 1 and 2 above the same method is used to determine cost: add up the cost on every link, however cost happens to be defined, with no change to the evolutionary generation of the candidate solutions being checked.
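A small sketch of what that separation buys you, with illustrative names of my own choosing: the candidate routes are generated once, and only the scoring function changes between the least-cost requirement and the least-cost-plus-delay requirement.

```python
# Same candidates, different objective functions.
def total_cost(path, costs):
    return sum(costs[link] for link in path)

def cost_objective(path, costs, delays):
    return total_cost(path, costs)  # cases 1 and 2: least cost only

def cost_and_delay_objective(path, costs, delays):
    return total_cost(path, costs) + sum(delays[link] for link in path)  # case 3

costs = {"A-B": 1, "B-C": 1, "A-C": 3}
delays = {"A-B": 5, "B-C": 5, "A-C": 1}
candidates = [("A-B", "B-C"), ("A-C",)]  # generated once, reused for both

best_by_cost = min(candidates, key=lambda p: cost_objective(p, costs, delays))
best_by_both = min(candidates, key=lambda p: cost_and_delay_objective(p, costs, delays))
```

The two objectives pick different winners from the same candidate pool, and neither required touching the code that produces candidates.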

So if you think of the network as a calculator and employ AI techniques, like evolutionary programming, quite complex problems can be computed in a robust way, just as in nature.

To use this paradigm there are three things you must do:

1. Determine what good looks like

2. Generate candidate solutions to which (1) can be applied

3. Generate candidate solutions fast enough to get an acceptable solution in a timely manner

If you think of nature: (1) is staying alive long enough to pass on your genes to the next generation, (2) is having enough variation in the population to offer ways of combating environmental change and predatory threats, and (3) is doing (2) quickly enough to keep abreast of those threats, otherwise you finish up on the endangered species list!

The same rationale applies to network modeling, but (1) is an objective function, (2) is trying alternative routes, and (3) is a timeframe of milliseconds to minutes!
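Putting the three steps together on a toy routing problem (again, all names and numbers here are invented for illustration):

```python
import random

# Step 1: the objective function, "what good looks like" = low total link cost.
costs = {"A-B": 1, "B-C": 1, "B-D": 4, "C-D": 1, "A-D": 6}

def objective(route):
    return sum(costs[link] for link in route)

# Step 2: candidate solutions = alternative routes through the network.
routes = [("A-D",), ("A-B", "B-D"), ("A-B", "B-C", "C-D")]

def random_route():
    return random.choice(routes)

# Step 3: generate candidates fast enough to find a good answer in time.
random.seed(1)
best = min((random_route() for _ in range(50)), key=objective)
```

Even this crude generate-and-select loop finds the cheapest route; a real evolutionary engine replaces the blind `random_route` with informed mutation and recombination, but the three-part recipe is the same.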

Next time I want to think about what the objective function should be measuring. You might be surprised!