My selfish view on what I want “the network” to be

Context

Definitions first: what “network”? In this case, a mid-size enterprise network (serving anywhere from a handful up to 2–4 thousand network-connected physical devices – servers, appliances), or a managed services provider’s network. Yes, not the tens or hundreds of thousands. If you’re wondering why, please visit this entry on my blog.

Also, wishes are often shaped by a point of view. The one for this blog is that of somebody who’s responsible for the infrastructure architecture; call it what you will. Somebody who is looking after the marriage of technology and the business outcomes it produces at a mid-size managed IT services provider.

The wish-list

Warning: here be the unicorns.

Physical

I want the network to be a Borg-like “black box”, with access ports on it.

Just two types of ports – 1GE and 10GE – should suffice for starters.

I want to be able to add more ports by plugging in additional Borg “modules”, in a scale-out fashion.

I want flexibility (within reason – campus scale should do) on the physical placement of individual Borg “modules”, with as little cabling to the rest of the Borg as possible; however, I want “lots” of capacity (WDM – yes, please). Gut feel tells me 1:2 over-subscription on the “mothership uplinks” should probably be fine in many cases.

Ideally, there would also be a Virtual Borg Module, which would operate just like a regular Borg Module, but in reality be a virtual switch, available for a variety of common hypervisor platforms.

Logical

I want to be able to connect my gear to the Borg’s access ports in a standards-based redundant manner. At this point in time, this means LAG/LACP and/or STP (*shudder*, but I have to put up with it for now). I also don’t want any limitations on Borg’s side as to where an individual LAG’s member ports can be – much like Brocade VCS does today.

On each port, I need to somehow identify Service Access Points (SAPs) for individual connectivity services. For now, “whole port = one service”, “VLAN ID = one service”, and “C-VID + P-VID = one service” should do for most situations.
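To make the SAP idea concrete, here is a minimal sketch of how the three SAP flavours above might be modelled. All names (`SAP`, the port naming scheme) are invented for illustration – this is not any real product’s data model:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class SAP:
    """A hypothetical Service Access Point on a Borg access port."""
    port: str                      # e.g. "borg-3/12" (made-up naming)
    vlan_id: Optional[int] = None  # None = the whole port is one service
    c_vid: Optional[int] = None    # inner tag, for C-VID + P-VID (Q-in-Q) SAPs

    def kind(self) -> str:
        if self.vlan_id is None:
            return "port"          # whole port = one service
        if self.c_vid is None:
            return "vlan"          # VLAN ID = one service
        return "qinq"              # C-VID + P-VID = one service

# The three SAP flavours mentioned above:
whole_port = SAP(port="borg-3/12")
single_vlan = SAP(port="borg-3/12", vlan_id=100)
qinq = SAP(port="borg-3/12", vlan_id=100, c_vid=42)
```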

I want to be able to instantiate, via an API call, an L2 connectivity between a given set of SAPs.
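A hypothetical shape for that API call might look like the sketch below. In real life this would presumably be an HTTP POST to a Borg controller; here the controller is faked with a plain in-memory dict so the example is runnable, and every name is made up:

```python
import json
import uuid

def create_l2_service(inventory: dict, name: str, saps: list) -> dict:
    """Hypothetical 'instantiate L2 connectivity between SAPs' call.

    The real thing would talk to the Borg controller over its API;
    here we just record the service in an in-memory inventory."""
    if len(saps) < 2:
        raise ValueError("an L2 service needs at least two SAPs")
    service = {
        "id": str(uuid.uuid4()),
        "type": "l2-segment",
        "name": name,
        "saps": sorted(saps),
    }
    inventory[service["id"]] = service
    return service

borg = {}  # stand-in for the Borg controller's service inventory
svc = create_l2_service(borg, "cust-A-web",
                        ["borg-1/3.vlan100", "borg-7/1.vlan100"])
print(json.dumps(svc, indent=2))
```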

I also want to be able to instantiate, via an API call, a “virtual router”, which then could be “connected” to two or more L2 connectivity segments mentioned above. These “virtual routers” would need to support routing protocols in addition to static routing.

When a “virtual router” is “connected” to an L2 segment, it becomes a part of an overall L2/L3 service for the purpose of service “promises”, described further below.

When instantiating a copy of a “virtual router”, I should be able to ask for “high availability” and/or “optimised placement” options to be enabled. Think of “high availability” as VMware HA, and of “optimised placement” as VMware DRS.

Asking for the “high availability” option would tell Borg to, um, make that particular routing instance highly available – run a second shadow copy of it with synchronised state on a different Borg module or something, and cut over to it quickly should the active one fail. The idea here is to overcome the need for running two router instances and next-hop redundancy protocols.

Asking for “optimised placement” would tell Borg to track the network conversations that this particular router instance facilitates, and shift/distribute the necessary L3 functions to the Borg module or modules where they minimise the utilisation of Borg resources (link capacity, FIB table space, etc.).
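Pulling the last three wishes together, the “virtual router” call might be sketched like so. Again, the controller is a plain dict and all names, flags, and the protocol list are invented for illustration:

```python
import uuid

def create_virtual_router(inventory: dict, name: str, segments: list,
                          high_availability: bool = False,
                          optimised_placement: bool = False) -> dict:
    """Hypothetical 'instantiate a virtual router' call.

    high_availability ~ VMware HA: Borg keeps a synchronised shadow
    copy on a different module and fails over to it quickly.
    optimised_placement ~ VMware DRS: Borg moves the L3 function to
    wherever it minimises link capacity / FIB table use."""
    if len(segments) < 2:
        raise ValueError("a virtual router connects two or more L2 segments")
    vr = {
        "id": str(uuid.uuid4()),
        "type": "virtual-router",
        "name": name,
        "segments": segments,
        # wish-list: routing protocols in addition to static routes
        "routing": {"static": [], "protocols": ["ospf", "bgp"]},
        "options": {
            "high_availability": high_availability,
            "optimised_placement": optimised_placement,
        },
    }
    inventory[vr["id"]] = vr
    return vr

borg = {}
vr = create_virtual_router(borg, "cust-A-rtr", ["seg-web", "seg-db"],
                           high_availability=True)
```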

Borg would need to understand a few concepts related to the connectivity services it provides, expressed in business-y terms, such as “high availability”, “guaranteed capacity”, “low latency”, and such. Let’s call them BCoS (Business Class of Service). I should then be able to flag individual connectivity services, or a particular Class of Service (CoS- or DSCP-based) within them, with a BCoS, as desired, and let Borg take care of it. Behind the scenes, it would be perfectly fine for Borg to forward traffic that belongs to the same set of L2 segment(s) and virtual router(s), but has a different BCoS, via different paths, if it needs to in order to meet the promise.
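As a sketch, flagging a service with a BCoS might be as simple as attaching a promise object to it – either to the service as a whole, or to one DSCP class within it. The BCoS names and their “terms” below are made up purely to illustrate the shape:

```python
BCOS_CATALOGUE = {
    # Hypothetical business classes of service and what Borg promises for each
    "high-availability": {"max_outage_s": 1},
    "guaranteed-capacity": {"min_mbps": 500},
    "low-latency": {"max_rtt_ms": 2},
}

def flag_service(service: dict, bcos: str, dscp=None) -> dict:
    """Attach a BCoS promise to a whole service, or to one DSCP class in it."""
    if bcos not in BCOS_CATALOGUE:
        raise ValueError("unknown BCoS: " + bcos)
    promise = {"bcos": bcos, "terms": BCOS_CATALOGUE[bcos], "dscp": dscp}
    service.setdefault("promises", []).append(promise)
    return service

svc = {"id": "svc-1", "type": "l2-segment"}
flag_service(svc, "low-latency", dscp=46)   # just the EF-marked traffic
flag_service(svc, "guaranteed-capacity")    # the service as a whole
```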

As a side effect of working with BCoS, Borg should be able to provide access to ongoing performance stats for each promise it has made.

When Borg finds itself getting close to becoming incapable of meeting particular availability/capacity/latency promises, I would expect it to suggest a recommended action, such as adding more modules or links, or maybe moving a particular SAP or SAPs to other Borg access ports. Be a grown-up – understand what’s needed of you, and tell me what help you need to get it done.

Naturally, I need to be able to programmatically apply and modify boring things like L2/L3/L4 ACLs on L2 segments, and read stats off them (on the segments rather than on “virtual router” ports – to support pVLAN-like functionality, for example).
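The boring ACL bit might look something like this sketch – once more an in-memory stand-in for the real API, with all field names invented:

```python
def apply_acl(segment: dict, rules: list) -> None:
    """Hypothetical: attach L2/L3/L4 ACL rules to an L2 segment.

    Attaching the ACL to the segment, rather than to virtual-router
    ports, is what would let Borg emulate pVLAN-style isolation."""
    for rule in rules:
        if rule.get("action") not in ("permit", "deny"):
            raise ValueError("each rule needs an action: permit or deny")
    segment["acl"] = {"rules": rules, "stats": {"hits": [0] * len(rules)}}

def read_acl_stats(segment: dict) -> list:
    """Hypothetical: read per-rule hit counters back off the segment."""
    return segment["acl"]["stats"]["hits"]

seg = {"id": "seg-web", "type": "l2-segment"}
apply_acl(seg, [
    {"action": "deny", "proto": "tcp", "dst_port": 23},  # no telnet, thanks
    {"action": "permit", "proto": "any"},
])
```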

By extension of the Borg Virtual Module’s function (providing connectivity to VMs), it would naturally need to be able to work with vMotion-like hypervisor functionality when a VM is moved somewhere else.

The Borg would also need to be capable of taking in a feed of “physical world” constraints, such as shared risk link groups, physical locations of Borg modules and of the servers where virtual Borg modules reside, time-of-day link/power costs, planned maintenance works, etc., and manage itself internally accordingly, always keeping an eye on the business promises it has made.

What about load balancing and security? At this point, I think these functions are better served via separate appliances, whether they are physical or virtual. Something like Embrane’s Heleos could be a natural fit to deliver the necessary functionality here.

P.S. Yes, troubleshooting this thing when it misbehaves will probably be a bloody nightmare. And yes, it would almost certainly be total and complete vendor lock-in; but hey – it’s very much the same with the modern “fabrics”, irrespective of whose badge is on the front.