The drawing is from Chapter 8 of Todd Lammle’s CCNA Study Guide, and I figured it would be an awesome way to weed out network admin wannabes. I’ll just have a router or two fired up and see if the would-be network admin can answer things like “What does show ip int br do?”

Or – better yet – maybe answer how I built such an environment without the use of VirtualBox, VMware Workstation, KVM, or other virtualization tools. In other words, how I can connect my underlying Docker containers on my EC2 instance to the Dynamips emulated routers.

Plumbing…

This article is based on a CentOS 7 minimal EC2 instance with “just-enough” software added to permit me to run Puppet, Docker, and Dynamips/Dynagen. I’m not going to cover my CloudFormation script (although that would make a great article in and of itself – it constructs the networks, IAM policies, auto-scaling groups, launch configurations, and everything else, and then fires off Puppet to apply policy).

Instead, we’re going to cover the meat of the article: attaching a customized network interface to a Docker container.

The Networks – and the “Host” Network!

First – remember that we are emulating a set of networks, so here are the network mappings:

VMnet2 – “Core” network, 10.1.1.0/24.

VMnet3 – “Finance” network, 192.168.10.0/24.

VMnet4 – “Marketing” network, 192.168.20.0/24.

VMnet5 – “Sales” network, 192.168.30.0/24.

VMnet6 – “HR” network, 192.168.40.0/24.

VMnet7 – “Mobile User” network, 172.16.10.0/24.

VMnet8 – “Host” network, 192.168.81.0/24.

Before going further, take a careful look at VMnet8. This corresponds to VMware’s out-of-the-box bridge created in the Windows-based article I wrote some years ago. Also, it just happens to correspond to the default Docker interface, as in:
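The original output isn’t reproduced here, but you can inspect the default Docker bridge yourself (note that a stock Docker install puts docker0 on 172.17.0.1/16 unless you reconfigure it, e.g. via the daemon’s --bip option):

```shell
# Show the address and state of Docker's default bridge
ip addr show docker0
```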

In other words, we could simply have used docker0 as a perfectly valid bridge (with NAT capabilities). But for my purposes, I want to subvert the Docker networking process completely. (As a side note: Kubernetes is a primary area of research for me, so the more I understand about Docker networking internals and container networking as a discipline, the better.)

So let’s first create the bridges to correspond to our networks:

for i in $(seq 2 8) ; do sudo ip link add VMnet$i type bridge ; done
for i in $(seq 2 8) ; do sudo ip link set VMnet$i up ; done

Some of the bridges show state UP and some do not; that is because a bridge only reports UP once it has an active interface attached to it, and at this point only some of our bridges do.
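You can confirm the bridges exist – and see which of them have come up – with something like:

```shell
# List only bridge devices and their operational state
ip link show type bridge
```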

Dynamips / Dynagen Routers

This is still TBD: suffice it to say that we have four emulated routers to handle our networks, and that the CORP router has the IP address 10.1.1.2 on a VLAN’ed interface. That address can only be reached if routing (and all the network plumbing) is set up correctly.

I *promise* I will get another article pushed out on how I set up the four routers – that itself is worth some words. For now, here is what the CORP router has for 10.1.1.2:
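The actual router configuration will appear in that follow-up article; as an illustrative sketch only, a VLAN’ed (dot1Q subinterface) configuration carrying 10.1.1.2 might look like this – the interface and VLAN numbers are assumptions, only the address comes from the text:

```
interface FastEthernet0/0.10
 encapsulation dot1Q 10
 ip address 10.1.1.2 255.255.255.0
```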

Routing and a Host Address

You will notice that we are working with a total of seven networks. In order for us to perform testing, we need the host (really, the AWS EC2 instance running CentOS) to be able to reach each of our managed networks. For this to happen, we need a gateway to the routers we are going to provision. Let’s create an address on our host that can be used to perform NAT:
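A sketch of that host-side plumbing – the concrete values are assumptions for illustration (192.168.81.1 as the host’s address on the VMnet8 “Host” bridge is a plausible choice given the 192.168.81.0/24 mapping, not confirmed from the source):

```shell
# Give the host an address on the "Host" bridge (VMnet8)
sudo ip addr add 192.168.81.1/24 dev VMnet8
# Let the host forward between networks, and NAT traffic leaving VMnet8
sudo sysctl -w net.ipv4.ip_forward=1
sudo iptables -t nat -A POSTROUTING -s 192.168.81.0/24 ! -o VMnet8 -j MASQUERADE
```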

The Docker Container

This article is about how to attach virtual interfaces dynamically to a running Docker container; thus, we need to create a Docker container. Let’s first fire up a simple Web server (it uses thttpd to serve up “Hello, World” – thanks very much to Lars Kellogg-Stedman for the container and for giving me inspiration for this research):
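A minimal sketch of starting such a container – the image name here is a placeholder, not necessarily the exact image used; --net=none keeps Docker from wiring up any networking of its own, since we attach our own interface next:

```shell
# Run the thttpd-based "Hello, World" web server with no Docker-managed
# networking (image name is illustrative)
docker run -d --name web --net=none larsks/thttpd
```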

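A sketch of the attachment step, with assumed names and addresses (the container name web, the address 10.1.1.50, and VMnet2 as the attachment point are illustrative):

```shell
# Find the container's init PID so we can enter its network namespace
PID=$(docker inspect -f '{{.State.Pid}}' web)

# Create a veth pair; put the host end on the Core bridge
sudo ip link add veth-host type veth peer name veth-cont
sudo ip link set veth-host master VMnet2
sudo ip link set veth-host up

# Push the container end into the container's network namespace,
# then configure it from inside via nsenter
sudo ip link set veth-cont netns "$PID"
sudo nsenter -t "$PID" -n ip link set veth-cont name eth0
sudo nsenter -t "$PID" -n ip addr add 10.1.1.50/24 dev eth0
sudo nsenter -t "$PID" -n ip link set eth0 up
sudo nsenter -t "$PID" -n ip route replace default via 10.1.1.2
```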
That code snippet does a lot of work. We create the veth pair, attach it to the container by using nsenter, and then change the default route to use that new interface.

The end result? Our Docker container now has completely customized networking and can communicate both with our new host interface and with the virtual, VLAN-located IP address on the CORP router:
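An illustrative check, assuming the names and addresses sketched earlier: from inside the container’s network namespace, both the host-side bridge address and the CORP router’s VLAN’ed address should answer.

```shell
PID=$(docker inspect -f '{{.State.Pid}}' web)
sudo nsenter -t "$PID" -n ping -c 3 192.168.81.1   # host bridge address (assumed)
sudo nsenter -t "$PID" -n ping -c 3 10.1.1.2       # CORP router's VLAN'ed address
```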