Wednesday, December 14, 2016

In the previous post, I covered how Docker uses Linux virtual interfaces and bridge interfaces to facilitate communication between containers over bridge networks. In this post, I will discuss how Docker uses VXLAN technology to create the overlay networks that are used in swarm clusters, and how you can view and inspect this configuration. I will also discuss the various network types used to facilitate different connectivity needs for containers launched in swarm clusters.

It is assumed that readers are already familiar with setting up swarm clusters and launching services in Docker Swarm. I will also link to a number of helpful resources at the end of the post that provide more details and context around the topics discussed here. As always, I appreciate any feedback.


Docker overlay networks are used in the context of Docker clusters (Docker Swarm), where a virtual network used by containers needs to span multiple physical hosts running the Docker engine. When a container is launched in a swarm cluster (as part of a service), multiple networks are attached to it by default, each facilitating different connectivity requirements.
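
To demonstrate, I first create a user-defined overlay network. A minimal sketch (the subnet is an illustrative assumption; Docker will pick one if it is omitted):

```
# Run on a swarm manager node: create a user-defined overlay network.
docker network create \
  --driver overlay \
  --subnet 10.0.9.0/24 \
  my-overlay-network
```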

I then launch a service with containers running a simple web server that exposes port 8080 to the outside world. The service has 3 replicas, and I specify that it is connected to one network only (my-overlay-network):
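
A sketch of such a command, using nginx as a stand-in image (the service name webapp is the one inspected later; port 8080 on the hosts maps to port 80 in the containers):

```
# Run on a manager node: 3 replicas, attached only to my-overlay-network.
# Published port 8080 on every host forwards to port 80 in the containers.
docker service create \
  --name webapp \
  --replicas 3 \
  --network my-overlay-network \
  --publish 8080:80 \
  nginx
```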

If I then list the interfaces available to any of the running containers, I see three interfaces instead of the single interface we would expect when running a container on a single Docker host:
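
One way to check, assuming the image ships the ip utility (the container ID is a placeholder):

```
# Find a task container belonging to the service on this host.
docker ps --filter name=webapp

# List the interfaces inside it.
docker exec <container-id> ip -o addr show
```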

The container is connected to my-overlay-network through eth2 (as you can tell from the IP address). eth0 and eth1 are connected to other networks. If we run docker network ls, we can see that two extra networks were added: docker_gwbridge and ingress, and from the subnet addresses we can see that those are connected to eth0 and eth1:
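
Representative output (network IDs elided):

```
$ docker network ls
NETWORK ID   NAME                 DRIVER    SCOPE
<id>         bridge               bridge    local
<id>         docker_gwbridge      bridge    local
<id>         host                 host      local
<id>         ingress              overlay   swarm
<id>         my-overlay-network   overlay   swarm
<id>         none                 null      local
```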

The overlay network creates a subnet that can be used by containers across multiple hosts in the swarm cluster. Containers running on different physical hosts can talk to each other over the overlay network (provided that they are all attached to the same network).

For example, for the webapp service we started, we can see that there is one container running on each host in the swarm cluster:
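
Representative docker service ps output (IDs elided; swarm01 and swarm02 appear in the captures below, and swarm03 is an assumed third node):

```
$ docker service ps webapp
ID    NAME      IMAGE  NODE     DESIRED STATE  CURRENT STATE
<id>  webapp.1  nginx  swarm01  Running        Running
<id>  webapp.2  nginx  swarm02  Running        Running
<id>  webapp.3  nginx  swarm03  Running        Running
```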

Docker’s overlay network uses VXLAN technology, which encapsulates layer 2 frames in layer 4 packets (UDP/IP). This allows Docker to create virtual networks on top of existing connections between hosts, whether or not they are in the same subnet. Any network endpoints participating in such a virtual network see each other as if they were connected to the same switch, without having to care about the underlying physical network.

To see this in action, we can run a traffic capture on the Docker hosts participating in the overlay network. In the example above, a traffic capture on swarm01 or swarm02 will show us the ICMP traffic between the containers running on them (VXLAN uses UDP port 4789):
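
A sketch of the capture, assuming the hosts reach each other over eth0 (the interface name is an assumption):

```
# Generate some traffic first, e.g. ping one container from another
# across hosts, then capture the VXLAN-encapsulated packets on the
# underlay interface.
sudo tcpdump -n -i eth0 udp port 4789
```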

The traffic capture above shows that anyone who can see the traffic between the Docker hosts can also see inter-container traffic going over an overlay network. This is why Docker includes an encryption option, which enables automatic IPsec encryption of the VXLAN tunnels simply by adding --opt encrypted when creating the network.
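
For example (the network name is illustrative; --opt encrypted is the flag that turns on the IPsec encryption):

```
# Run on a manager node: an overlay network with encrypted VXLAN tunnels.
docker network create \
  --driver overlay \
  --opt encrypted \
  my-encrypted-network
```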

Repeating the same test with an encrypted overlay network, we only see encrypted packets between the Docker hosts:
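
A capture filtered on ESP, the IPsec protocol that now carries the tunnel traffic (again assuming eth0 as the underlay interface):

```
# Instead of readable VXLAN/ICMP packets, the capture shows only ESP.
sudo tcpdump -n -i eth0 esp
```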

Similar to bridge networks, Docker creates a bridge interface for each overlay network, which connects the virtual tunnel interfaces that make the VXLAN tunnel connections between the hosts. However, these bridge and VXLAN tunnel interfaces are not created directly on the Docker host; instead, they are created in separate containers that Docker launches for each overlay network.

To actually inspect these interfaces, you have to use nsenter to run commands within the network namespace of the container managing the tunnels and virtual interfaces. This has to be run on a Docker host that has containers participating in the overlay network.

Also, you have to edit /etc/systemd/system/multi-user.target.wants/docker.service on the Docker host and comment out MountFlags=slave, as discussed here.
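
Roughly (the namespace name under /var/run/docker/netns is host-specific):

```
# After commenting out MountFlags=slave, reload systemd and restart Docker.
sudo systemctl daemon-reload
sudo systemctl restart docker

# List the network namespaces Docker manages on this host.
sudo ls /var/run/docker/netns

# Enter the overlay network's namespace and inspect its interfaces:
# a bridge plus a vxlan interface (ip -d shows the VXLAN details/VNI).
sudo nsenter --net=/var/run/docker/netns/<namespace> ip -d link show
```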

The second network that the containers were connected to was the ingress network. Ingress is an overlay network, but it is created by default when a swarm cluster is initialized. This network is used to provide connectivity when connections are made to containers from the outside world. It is also where the load balancing provided by the swarm cluster happens.

The load balancing is handled by IPVS, which runs in a container that Docker Swarm launches by default. We can see this container attached to the ingress network (I used the same web service as before, which exposes port 8080, mapped to port 80 on the containers):
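
One way to peek at the IPVS rules is through the hidden ingress namespace (a sketch: the ingress_sbox namespace name assumes a typical swarm setup, and ipvsadm may need to be installed first):

```
# Dump the IPVS rules that load-balance inbound connections on the
# published port across the service's containers.
sudo nsenter --net=/var/run/docker/netns/ingress_sbox ipvsadm -Ln
```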

Finally, there is the docker_gwbridge network. This is a bridge network with a corresponding interface named docker_gwbridge created on each host participating in the swarm cluster. The docker_gwbridge network provides connectivity to the outside world for traffic originating from containers in the swarm cluster (for example, if we ping google.com, that traffic goes through the docker_gwbridge network).
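
This is easy to confirm on any of the hosts:

```
# The bridge interface exists directly on the host...
ip addr show docker_gwbridge

# ...and Docker tracks it as a local-scope bridge network.
docker network inspect docker_gwbridge
```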

I won’t go into details of the internals of this network as this is the same as the bridge networks covered in the previous post.

When launching a container in a swarm cluster, the container can be attached to three (or more) networks by default. First there is the docker_gwbridge network, which allows containers to communicate with the outside world; then the ingress network, which is only used if containers need to accept inbound connections from the outside world; and finally there are the user-created overlay networks that can be attached to containers. The overlay networks serve as a shared subnet for containers launched into the same network, over which they can communicate directly (even if they are launched on different physical hosts).

We also saw that there are separate network namespaces, created by default by Docker in a swarm cluster, that help manage the VXLAN tunnels used for the overlay networks, as well as the load balancing rules for inbound connections to containers.
