In every century people have thought they understood the universe at last, and in every century they were proved to be wrong.
It follows that the one thing we can say about our modern "knowledge" is that it is wrong.

- Isaac Asimov

I don’t assume I know everything. Not even that I know enough.
And probably no more than you do ;-)
I will share some experience and facts from real life that can help us understand IT and Cloud better.
Comments welcome.

- Luca


October 20, 2015

This post is a follow-up to the initial discussion of the DevOps approach based on Linux containers (specifically with Docker).
Here I elaborate on the advantages provided by Cisco ACI (and some other projects in the open source space) when you work with containers.

Policies and Containers

Cisco ACI offers a common policy model for managing IT operations.
It is agnostic: bare metal, virtual machines, and containers are treated the same, with a unified policy language and one clear security model, regardless of how an application is deployed.

ACI models how the components of an application interact, including their requirements for connectivity, security, quality of service (e.g. reserved bandwidth for a specific service), and network services. ACI offers a clear path to migrate existing workloads towards container-based environments without any changes to the network policy, thanks to two main technologies:

ACI Policy Model and OpFlex

Open vSwitch (OVS)

OpFlex is a distributed policy protocol that allows application-centric policies to be enforced within a virtual switch such as OVS.
Each container can attach to an OVS bridge, just as a virtual machine would, and the OpFlex agent ensures that the appropriate policy is established within OVS (the agent communicates bidirectionally with the controller).
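For example, a container can be attached to an OVS bridge with the ovs-docker helper script that ships with Open vSwitch. This is only a minimal sketch (bridge, container and address are made-up examples; the OpFlex agent and controller configuration are not shown):

# create an OVS bridge (on an OpFlex-enabled host this is the bridge the agent programs)
ovs-vsctl add-br br-int
# start a container with no default networking and attach it to the bridge
docker run -d --net=none --name web1 nginx
ovs-docker add-port br-int eth0 web1 --ipaddress=172.16.1.10/24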

The result of this integration is the ability to build and manage a complete infrastructure that spans physical, virtual, and container-based environments.
Cisco plans to release support for ACI with OpFlex, OVS, and containers before the end of 2015.

Value added from Cisco ACI to containers

I will explain how ACI supports the two main networking types in Docker: veth and macvlan.
This can be done already, because it is not based only on OpFlex.

Containers Networking option 1 - veth

veth is the networking mode that relies on virtual bridging, either with a Linux bridge (docker0, the default option with Docker) or with OVS (Open vSwitch, usually adopted with KVM and OpenStack).
As a premise, remember that ACI manages the connectivity and the policies for bare metal servers, VMs on any hypervisor, and network services (LB, FW, etc.) consistently and easily:

On top of that, you can add containers running on bare metal Linux servers or inside virtual machines (different use cases make one of the options preferable, but from an ACI standpoint it's the same):

That means that applications (and the policies enabling them) can span any platform: servers, VMs and containers at the same time. Every service or component that makes up the application can be deployed on the platform that is most convenient for it in terms of scalability, reliability and management:

And the extension of the ACI fabric to virtual networks (with the OpFlex-enabled OVS switch) allows applying the policies to any virtual end point that uses virtual Ethernet, like Docker containers configured in veth mode.

Advantages from ACI with Docker veth:

With this architecture we get two main results:
- consistency of connectivity and services policy across physical, virtual and/or container (LXC and Docker) environments;
- abstraction of the end-to-end network policy for location independence, together with Docker portability (via shared repositories).

Containers Networking option 2 - macvlan

MACVLAN does not provide a network bridge for the Ethernet side of a Docker container to connect to.
You can think of MACVLAN as a hypothetical cable where one end is eth0 in the Docker container and the other end is the interface on the physical switch / ACI leaf.
The hook between them is the VLAN (or the trunked VLAN) in between.
In short, when specifying a VLAN with MACVLAN, you tell a container binding on eth0 on Linux to use VLAN XX (defined as access or trunked).
Connectivity is established when this matches the other side of the cable, i.e. VLAN XX on the switch port (access or trunk).
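To make it concrete, here is a minimal sketch of wiring a container to a VLAN with MACVLAN by hand (interface names, VLAN ID, image and addresses are made-up examples):

# start a container without default networking
docker run -d --net=none --name web1 nginx
# create a VLAN sub-interface on the host NIC facing the ACI leaf
ip link add link eth0 name eth0.100 type vlan id 100
ip link set eth0.100 up
# create a MACVLAN interface on top of it and move it into the container's namespace
ip link add link eth0.100 name mv0 type macvlan mode bridge
pid=$(docker inspect -f '{{.State.Pid}}' web1)
ip link set mv0 netns $pid
# rename and configure it inside the container
nsenter -t $pid -n ip link set mv0 name eth0
nsenter -t $pid -n ip addr add 192.168.100.10/24 dev eth0
nsenter -t $pid -n ip link set eth0 up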

At this point you can match the VLANs with EPGs (End Point Groups) in ACI, to build policies that group containers as end points needing the same treatment, i.e. applying Contracts to the groups of containers:

Advantages from ACI with Docker macvlan:

This configuration provides two advantages (the first one is common to veth):
- it extends the portability of Docker-based applications, thanks to the independence of ACI policies from the server's location;
- it increases network throughput by 5% to 15% (mileage varies; further tuning and tests will provide more detail), because there's no virtual switching consuming CPU cycles on the host.

Intent-based approach

A new intent-based approach is making its way into networking. An intent-based interface enables a controller to manage and direct network services and network resources based on a description of the "intent" for network behaviors. Intents are described to the controller through generalized and abstracted policy semantics, instead of OpenFlow-like flow rules. The intent-based interface allows for a descriptive way to get what is desired from the infrastructure, unlike the current SDN interfaces, which are based on describing how to provide different services. This interface will accommodate orchestration services and network- and business-oriented SDN applications, including OpenStack Neutron, Service Function Chaining, and Group Based Policy.

Docker plugins for networks and volumes

Cisco is working on an open source project that aims at enabling intent-based configuration for both networking and volumes. It will exploit the full ACI potential in terms of defining the system behavior via policies, but it will also work with non-ACI solutions.
Contiv netplugin is a generic network plugin for Docker, designed to handle networking use cases in clustered multi-host systems.
It's still a work in progress and details can't be shared at this time, but... stay tuned to see how Cisco is leading in the open source world as well.

Mantl: a Stack to manage Microservices

Another project that Cisco is delivering targets the lifecycle and the orchestration of microservices.
Mantl has been developed in house, as a framework to manage the cloud services offered by Cisco. It can be used by everyone for free under the Apache License.
You can download Mantl from GitHub, where you will also find the documentation.
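Roughly, getting started looks like this (a sketch only: the repository location and the workflow are assumptions, check the official documentation for the exact procedure):

# get Mantl and its sample configurations
git clone https://github.com/CiscoCloud/mantl.git
cd mantl
# provision the nodes on your provider of choice with Terraform,
# then run the provided Ansible playbooks to configure the stack
terraform apply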

Mantl allows teams to run their services on any cloud provider, including bare metal, OpenStack, AWS, Vagrant and GCE. Mantl uses tools that are industry standard in the DevOps community, including Marathon, Mesos, Kubernetes, Docker, Vault and more.
Each layer of Mantl's stack contributes to a unified, cohesive pipeline, whether you are managing Mesos or Kubernetes clusters during a peak workload or starting new VMs with Terraform. Whether you are scaling up by adding new VMs in preparation for a launch, or deploying multiple nodes on a cluster, Mantl allows you to work with every piece of your DevOps stack in a central location, without backtracking to debug or recompile code to ensure that the microservices you need will function when you need them to.

When working in a container-based DevOps environment, having the right microservices can be the difference between success and failure on a daily basis. Through the Marathon UI, one can stop and restart clusters, kill sick nodes, and manage scaling during peak hours. With Mantl, adding more VMs for QA testing or live use is simple with Terraform — without one needing to piece together code to ensure that both pieces work well together without errors. Addressing microservice conflicts can severely impact productivity. Mantl cuts down time spent working through conflicts with microservices so DevOps can spend more time working on an application.

Key takeaways

ACI integrates Docker without requiring gateways (otherwise required if you build the overlay from within the host), so virtual and physical environments can be merged in the deployment of a single application.

Intent-based configuration makes networking easier. Plugins that bring intent-based configuration to Docker and integrate it with SDN solutions are coming fast.

Microservices are a key component of cloud native applications. Their lifecycle can be complicated, but tools are emerging to orchestrate it end to end. Cisco Mantl is a complete solution for this need and is available for free on GitHub.

References

Much of the information has been taken from the following sources.
You can refer to them for a deeper investigation of the subject:

October 9, 2015

In this post I try to describe the connection between the need for a fast IT, the usage of Linux containers to quickly deploy cloud native applications, and the advantage provided by Cisco ACI in container networking.
The discussion is split in two posts to make it more... agile.
A big thank you to Carlos Pereira (@capereir), Frank Brockners (@brockners) and Juan Lage (@JuanLage), who provided content and advice on this subject.

DevOps – it’s not tooling, it’s a process optimization

I just want to remind you that DevOps is not a product or a technology: it's a way of doing things.

Its goal is to bring down the fences between the software development teams and the operations teams, streamlining the flow of an IT project from development to production.

Steps are:

alleviate bottlenecks (systems or people) and automate as much as possible,

feed information back so problems are solved by design in the next iteration,

iterate as often as possible (continuous delivery).

Business owners push the IT to deliver faster, and application development via DevOps is changing the behavior of IT.

Gartner defined Bimodal IT as the parallel management of cloud native applications (DevOps) and more mature systems that require consolidated best practices (like ITIL) and tools supporting their lifecycle.

One important aspect of DevOps is that the infrastructure must be flexible, provisioned on demand (and disposed of when no longer needed).
So, if it is programmable, it fits this vision much better.

Infrastructure as code

Infrastructure as code is one of the mantras of DevOps: you save the definition of the infrastructure (and the policies that define its behavior) in a source code repository, just as you do with the code of your applications.

In this way you can automate the build and the management very easily.

There are a number of tools supporting this operational model.

One more example of a DevOps tool is the ACI toolkit, a set of Python libraries that expose the ACI network fabric to DevOps as a code library.

The ACI Toolkit exposes the ACI object model to programming languages so that you can create, modify and manage the fabric as needed.
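As a quick, hedged example of how you could start playing with it (the sample script name, URL and credentials are examples; check the repository for the current ones):

# install the toolkit and clone the samples
pip install acitoolkit
git clone https://github.com/datacenter/acitoolkit.git
cd acitoolkit/samples
# list the tenants defined on the APIC controller
python aci-show-tenants.py -u https://apic.mycompany.com -l admin -p password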

Remember that one of the most important advantages of Cisco's vision of SDN is that you can manage the entire system as a whole.

There is no need to configure or manage single devices one by one, as in other approaches to SDN (e.g. OpenFlow).

So you can create, modify and delete the fabric's objects and their relationships.

Linux Containers and Docker

Docker is an open platform for sysadmins and developers to build, ship and run distributed applications. Applications are easily and quickly assembled from reusable and portable components, eliminating the siloed approach between development, QA, and production environments.

Individual components can be microservices coordinated by a program that contains the business process logic (an evolution of SOA, or Service Oriented Architecture). They can be deployed independently and scaled horizontally as needed, so the project benefits from flexibility and efficient operations. This is of great help in DevOps.

At a high level, Docker is built of:
- Docker Engine: a portable and lightweight runtime and packaging tool
- Docker Hub: a cloud service for sharing applications and automating workflows
There are more components (Machine, Swarm) but that's beyond the basic overview I'm giving here.

Docker’s main purpose is the lightweight packaging and deployment of applications.

Processes in a container are isolated from processes running on the host OS or in other Docker containers.
All processes share the same Linux kernel.
Docker leverages Linux containers to provide separate namespaces for containers, a technology that has been present in Linux kernels for 5+ years. The default container format is called libcontainer. Docker also supports traditional Linux containers using LXC.
It also uses control groups (cgroups), which have been in the Linux kernel even longer, to implement auditing and limiting of resources (such as CPU, memory, I/O), and union file systems that support layering of the container's file system.
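To see cgroups at work, note that the resource limits you set on a container translate directly into cgroup settings on the host. A minimal sketch (image, name and values are arbitrary):

# run a container limited to 256 MB of memory and half the default CPU shares
docker run -d -m 256m --cpu-shares 512 --name limited busybox sleep 3600
# the limit shows up in the corresponding memory cgroup on the host
cat /sys/fs/cgroup/memory/docker/$(docker inspect -f '{{.Id}}' limited)/memory.limit_in_bytes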

Containers or Virtual Machines

Containers are isolated, portable environments where you can run applications along with all the libraries and dependencies they need.
Containers aren’t virtual machines. In some ways they are similar, but there are even more ways that they are different. Like virtual machines, containers share system resources for access to compute, networking, and storage. They are different because all containers on the same host share the same OS kernel, and keep applications, runtimes, and various other services separated from each other using kernel features known as namespaces and cgroups.
Not having a separate instance of a guest OS for each VM saves space on disk and memory at runtime, also improving performance.
Docker added the concept of a container image, which allows containers to be used on any host with a modern Linux kernel. Soon Windows applications will enjoy the same portability among Windows hosts as well.
The container image allows for much more rapid deployment of applications than if they were packaged in a virtual machine image.

Containers networking

When Docker starts, it creates a virtual interface named docker0 on the host machine.
docker0 is a virtual Ethernet bridge that automatically forwards packets between any other network interfaces that are attached to it.
For every new container, Docker creates a pair of "peer" interfaces: a "local" eth0 interface inside the container and a uniquely named one (e.g. vethAQI2QT) in the namespace of the host machine.
Traffic going outside is NATted.
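You can observe this on any Docker host (interface names and output will vary):

# start a container and look at the host side of its veth pair attached to docker0
docker run -d --name web nginx
brctl show docker0
# outbound traffic from containers is masqueraded by an iptables rule on the host
iptables -t nat -L POSTROUTING -n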

You can create different types of networks in Docker:

veth: a peer network device is created with one side assigned to the container and the other side is attached to a bridge specified by the lxc.network.link.

vlan: a vlan interface is linked with the interface specified by the lxc.network.link and assigned to the container.

phys: an already existing interface specified by the lxc.network.link is assigned to the container.

empty: creates only the loopback interface (in kernel space).

macvlan: a macvlan interface is linked with the interface specified by the lxc.network.link and assigned to the container. It also specifies the mode the macvlan will use to communicate between different macvlan interfaces on the same upper device. The accepted modes are private, Virtual Ethernet Port Aggregator (VEPA) and bridge; see the sketch after this list.
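With plain LXC, for instance, these options map to entries in the container's configuration file. A minimal sketch for macvlan in bridge mode (file path and parent interface are examples):

# append the network settings to the container's LXC config
cat >> /var/lib/lxc/web1/config <<'EOF'
lxc.network.type = macvlan
lxc.network.link = eth0
lxc.network.macvlan.mode = bridge
EOF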

Docker Evolution - release 1.7, June 2015

Important innovations have been introduced in the latest release of Docker; some of them are still experimental.

Plugins

A big new feature is a plugin system for Engine, the first two available are for networking and volumes. This gives you the flexibility to back them with any third-party system.
For networks, this means you can seamlessly connect containers to networking systems such as Weave, Microsoft, VMware, Cisco, Nuage Networks, Midokura and Project Calico. For volumes, this means that volumes can be stored on networked storage systems such as Flocker.

Networking

The release includes a huge update to how networking is done.

Libnetwork provides a native Go implementation for connecting containers. The goal of libnetwork is to deliver a robust Container Network Model that provides a consistent programming interface and the required network abstractions for applications.
NOTE: the libnetwork project is under heavy development and is not ready for general use.
There are many networking solutions available to suit a broad range of use cases. libnetwork uses a driver/plugin model to support all of these solutions while abstracting the complexity of the driver implementations by exposing a simple and consistent network model to users.

Containers can now communicate across different hosts (Overlay Driver). You can now create a network and attach containers to it.

Example:

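# create a multi-host overlay network named net1 (overlay driver, experimental in Docker 1.7)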
docker network create -d overlay net1

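# run a container and publish it as service "myapp" on network net1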
docker run -itd --publish-service=myapp.net1 debian:latest

Orchestration and Clustering for containers

Real-world deployments are automated; single CLI commands are less used. The most important orchestrators are Mesos/Marathon, Google Kubernetes and Docker Swarm.
Most of them use JSON or YAML formats to describe an application: a declarative language that says what an application looks like.
That is similar to the ACI declarative language, with a high-level abstraction to say what an application needs from the network, and have the network implement it.
This validates Cisco's vision with ACI, very different from the NSXs of the world.
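As a hedged example, this is roughly what deploying an application through Marathon's REST API looks like (field values and the controller URL are made up):

# describe the application declaratively in JSON (values are examples)
cat > myapp.json <<'EOF'
{
  "id": "myapp",
  "cpus": 0.5,
  "mem": 256,
  "instances": 3,
  "container": {
    "type": "DOCKER",
    "docker": { "image": "nginx:latest", "network": "BRIDGE" }
  }
}
EOF
# submit it to the Marathon API (URL is an example)
curl -X POST http://marathon.mycompany.com:8080/v2/apps \
     -H "Content-Type: application/json" -d @myapp.json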

The next post explains the advantage provided by Cisco ACI (and some other projects in the open source space) when you use containers.

References

Much of the information has been taken from the following sources.
You can refer to them for a deeper investigation of the subject: