In every century people have thought they understood the universe at last, and in every century they were proved to be wrong.
It follows that the one thing we can say about our modern "knowledge" is that it is wrong.

- Isaac Asimov

I don’t assume I know everything. Not even that I know enough.
And no more than you ;-)
I will share some experience and facts from real life that can help us understand IT and Cloud better.
Comments welcome.

- Luca

October 20, 2015

This post is a follow-up to the initial discussion of the DevOps approach based on Linux containers (specifically with Docker).
Here I elaborate on the advantage provided by Cisco ACI (and some more projects in the open source space) when you work with containers.

Policies and Containers

Cisco ACI offers a common policy model for managing IT operations.
It is agnostic: bare metal, virtual machines, and containers are treated the same, offering a unified policy language: one clear security model, regardless of how an application is deployed.

ACI models how components of an application interact,
including their requirements for connectivity, security, quality of
service (e.g. reserved bandwidth for a specific service), and network services. ACI offers a clear path to migrate existing
workloads towards container-based environments without any changes to
the network policy, thanks to two main technologies:

ACI Policy Model and OpFlex

Open vSwitch (OVS)

OpFlex is a distributed policy protocol that allows
application-centric policies to be enforced within a virtual switch such
as OVS.
Each container can attach to an OVS bridge, just as a virtual machine would, and the OpFlex agent helps ensure that the appropriate policy is established within OVS (because it is able to communicate bidirectionally with the controller).

The result of this integration is the ability to build and manage a complete
infrastructure that spans across physical, virtual, and
container-based environments.
Cisco plans to release support for ACI with OpFlex, OVS, and containers before the end of 2015.
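To make the idea of a single policy model concrete, here is a minimal Python sketch. All names and structures are invented for illustration (this is not the real APIC object model): the point is that the policy decision depends only on EPG membership and contracts, never on how the endpoint is deployed.

```python
# Hypothetical sketch: one contract, endpoints of three deployment types.
CONTRACTS = {
    ("web", "app"): {"allow": [("tcp", 8080)]},  # web tier may call app tier
}

ENDPOINTS = [
    {"name": "srv01",  "type": "bare-metal", "epg": "web"},
    {"name": "vm-17",  "type": "vm",         "epg": "web"},
    {"name": "ctr-a3", "type": "container",  "epg": "app"},
]

def is_allowed(src, dst, proto, port):
    """Same answer for bare metal, VM or container: only the EPGs matter."""
    rule = CONTRACTS.get((src["epg"], dst["epg"]))
    return bool(rule) and (proto, port) in rule["allow"]

print(is_allowed(ENDPOINTS[1], ENDPOINTS[2], "tcp", 8080))  # True
print(is_allowed(ENDPOINTS[2], ENDPOINTS[1], "tcp", 8080))  # False: no contract
```

A VM in the "web" EPG and a container in the "app" EPG get exactly the same treatment a pair of bare metal servers would.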

Value added from Cisco ACI to containers

I will explain how ACI supports the two main networking modes in Docker: veth and macvlan.
This can already be done, because it is not based only on OpFlex.

Containers Networking option 1 - veth

veth is the networking mode that leads to virtual bridging with br0 (a Linux bridge, the default option with Docker) or OVS (Open vSwitch, usually adopted with KVM and OpenStack).
As a premise, I remind you that ACI
manages the connectivity and the policies for bare metal servers, VMs on any hypervisor and network services
(LB, FW, etc.) consistently and easily:

On top of that, you can add containers running on bare metal Linux servers or inside virtual machines (different use cases make one of the options preferred, but from an ACI standpoint it's the same):

That means that applications (and the policies enabling them) can span any platform: servers, VMs and containers at the same time. Every service or component that makes up the application can be deployed on the platform that is most convenient for it in terms of scalability, reliability and management:

And the extension of the ACI fabric to virtual networks (with the OpFlex-enabled OVS switch) allows applying the policies to any virtual end point that uses virtual Ethernet, like Docker containers configured in veth mode.

Advantages from ACI with Docker veth:

With this architecture we get two main results:
- Consistency of connectivity and services policy across physical, virtual and/or container (LXC and Docker) environments;
- Abstraction of the end-to-end network policy for location independence, together with Docker portability (via shared repositories).

Containers Networking option 2 - macvlan

MACVLAN does not use a network bridge for the Ethernet side of a Docker container to connect to.
You can think of MACVLAN as a hypothetical cable where one side is eth0 in the Docker container and the other side is the interface on the physical switch / ACI leaf.
The hook between them is the VLAN (or the trunked VLAN) in between.
In short, when specifying a VLAN with MACVLAN, you tell a container binding on eth0 on Linux to use VLAN XX (defined as access or trunked).
The connectivity is established when the match happens with the other side of the cable, at VLAN XX on the switch (access or trunk).

At this point you can match VLANs with EPGs (End Point Groups) in ACI, to build policies that group containers as End Points needing the same treatment, i.e. applying Contracts to the groups of containers:
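In code terms, the grouping could be sketched like this. The VLAN IDs, EPG names and contract structure are all hypothetical, chosen only to show that a VLAN identifies a group of containers, and a contract then applies to the whole group:

```python
# Hypothetical VLAN-to-EPG mapping for MACVLAN-attached containers.
VLAN_TO_EPG = {101: "frontend", 102: "backend"}

# One contract between the two groups: frontend may reach the database port.
CONTRACTS = {("frontend", "backend"): {"allow": [("tcp", 3306)]}}

containers = [
    {"name": "web-1", "vlan": 101},
    {"name": "web-2", "vlan": 101},
    {"name": "db-1",  "vlan": 102},
]

def epg_of(container):
    return VLAN_TO_EPG[container["vlan"]]

def allowed(src, dst, proto, port):
    rule = CONTRACTS.get((epg_of(src), epg_of(dst)))
    return bool(rule) and (proto, port) in rule["allow"]

# Both web containers land in the same EPG and share the same treatment.
print(epg_of(containers[0]), epg_of(containers[1]))
print(allowed(containers[0], containers[2], "tcp", 3306))  # True
```

Adding a fourth container on VLAN 101 automatically puts it under the same contract, with no per-container configuration.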

Advantages from ACI with Docker macvlan:

This configuration provides two advantages (the first one is common to veth):
- It extends the portability of Docker-based applications, thanks to the independence of ACI policies from the server's location.
- It increases network throughput by 5% to 15% (mileage varies; further tuning and tests will provide more detail) because there's no virtual switching consuming CPU cycles on the host.

Intent based approach

A new intent-based approach is making its way into networking. An intent-based interface enables a controller to manage and direct network services and network resources based on a description of the “intent” for network behaviors. Intents are described to the controller through generalized and abstracted policy semantics, instead of OpenFlow-like flow rules. The intent-based interface allows for a descriptive way to get what is desired from the infrastructure, unlike the current SDN interfaces, which are based on describing how to provide different services. This interface will accommodate orchestration services and network- and business-oriented SDN applications, including OpenStack Neutron, Service Function Chaining, and Group Based Policy.
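The "what" versus "how" difference can be sketched in a few lines of Python. The intent structure and the derived rules below are invented for illustration (real GBP and OpenFlow schemas differ): the point is that a single declarative intent would otherwise have to be maintained as many per-device rules.

```python
# A declarative intent: what the application needs, not how to wire it.
intent = {
    "groups": ["web", "db"],
    "policy": [
        {"from": "web", "to": "db", "allow": {"proto": "tcp", "port": 5432}},
    ],
}

# The flow-rule view of the same thing: per-switch match/action entries
# that a controller would have to derive and keep in sync itself.
def render_flow_rules(intent, switches):
    rules = []
    for pol in intent["policy"]:
        for sw in switches:
            rules.append({
                "switch": sw,
                "match": {"src_group": pol["from"], "dst_group": pol["to"],
                          "tcp_dst": pol["allow"]["port"]},
                "action": "forward",
            })
    return rules

rules = render_flow_rules(intent, ["leaf1", "leaf2"])
print(len(rules))  # 2: one derived rule per switch, from a single intent
```

With an intent-based interface, the rendering step is the controller's job, not the user's.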

Docker plugins for networks and volumes

Cisco is working on an open source project that aims at enabling intent-based configuration for both networking and volumes. This will exploit the full ACI potential in terms of defining the system behavior via policies, but it will also work with non-ACI solutions.
Contiv netplugin is a generic network plugin for Docker, designed to handle networking use cases in clustered multi-host systems.
It's still work in progress and details can't be shared at this time, but... stay tuned to see how Cisco is leading in the open source world too.

Mantl: a Stack to manage Microservices

And, just so you know, another project that Cisco is delivering is targeted at the lifecycle and the orchestration of microservices.
Mantl has been developed in house, as a framework to manage the cloud services offered by Cisco. It can be used by everyone for free under the Apache License.
You can download Mantl from GitHub and see the documentation here.

Mantl allows teams to run their services on any cloud provider. This includes bare metal servers, OpenStack, AWS, Vagrant and GCE. Mantl uses tools that are industry-standard in the DevOps community, including Marathon, Mesos, Kubernetes, Docker, Vault and more.
Each layer of Mantl's stack contributes to a unified, cohesive pipeline, whether you are managing Mesos or Kubernetes clusters during a peak workload or starting new VMs with Terraform. Whether you are scaling up by adding new VMs in preparation for a launch, or deploying multiple nodes on a cluster, Mantl allows you to work with every piece of your DevOps stack in a central location, without backtracking to debug or recompile code to ensure that the microservices you need will function when you need them to.

When working in a container-based DevOps environment, having the right microservices can be the difference between success and failure on a daily basis. Through the Marathon UI, one can stop and restart clusters, kill sick nodes, and manage scaling during peak hours. With Mantl, adding more VMs for QA testing or live use is simple with Terraform — without one needing to piece together code to ensure that both pieces work well together without errors. Addressing microservice conflicts can severely impact productivity. Mantl cuts down time spent working through conflicts with microservices so DevOps can spend more time working on an application.

Key takeaways

ACI integrates Docker without requiring gateways (otherwise required if you build the overlay from within the host), so virtual and physical environments can be merged in the deployment of a single application.

Intent-based configuration makes networking easier. Plugins enabling intent-based configuration for Docker, and integration with SDN solutions, are coming fast.

Microservices are a key component of cloud native applications. Their lifecycle can be complicated, but tools are emerging to orchestrate it end to end. Cisco Mantl is a complete solution for this need and is available for free on GitHub.

References

Much of the information has been taken from the following sources.
You can refer to them for a deeper investigation of the subject:

October 9, 2015

In this post I try to describe the connection between the need for a fast IT, the usage of Linux containers to quickly deploy cloud native applications, and the advantage provided by Cisco ACI in containers' networking.
The discussion is split in two posts to make it more... agile.
A big thank you to Carlos Pereira (@capereir), Frank Brockners (@brockners) and Juan Lage (@JuanLage), who provided content and advice on this subject.

DevOps – it’s not tooling, it’s a process optimization

I just want to remind you that DevOps is not a product or a technology: it's a way of doing things.

Its goal is to bring the fences down between the software development teams and the operations team, streamlining the flow of an IT project from development to production.

Steps are:

alleviate bottlenecks (systems or people) and automate as much as possible,

feed information back so problems are solved by design in the next iteration,

iterate as often as possible (continuous delivery).

Business owners push the IT to deliver faster, and application development via DevOps is changing the behavior of IT.

Gartner defined the Bimodal IT as the parallel management of cloud native applications (DevOps) and more mature systems that require consolidated best practices (like ITIL) and tools supporting their lifecycle.

One important aspect of DevOps is that the infrastructure must be flexible and provisioned on demand (and disposed when no longer needed).
So, if it is programmable it fits much better in this vision.

Infrastructure as code

Infrastructure as code is one of the mantras of DevOps: you can save the definition of the infrastructure (and the policies that define its behavior) in a source code repository, just as you do with the code for your applications.

In this way you can automate the build and the management very easily.

There are a number of tools supporting this operational model. Some examples:

One more example of a tool for DevOps is the ACI toolkit, a set of Python libraries that expose the ACI network fabric to DevOps as a code library.

The ACI Toolkit exposes the ACI object model to programming languages so that you can create, modify and manage the fabric as needed.

Remember that one of the most important advantages of Cisco's vision of SDN is that you can manage the entire system as a whole.

There is no need to configure or manage single devices one by one, as in other approaches to SDN (e.g. OpenFlow).

So you can create, modify and delete all of the following objects and their relationships:
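As an example of this programmable approach, here is a simplified sketch of the kind of JSON payload a tool like the ACI toolkit posts to the APIC REST API when creating a tenant with an application profile and two EPGs. The class names (fvTenant, fvAp, fvAEPg) follow the APIC object model; the attributes are reduced to the bare minimum for illustration, so treat the shape as indicative, not exhaustive.

```python
import json

def mo(cls, name, children=()):
    """Build a managed-object dict in the APIC attributes/children shape."""
    return {cls: {"attributes": {"name": name}, "children": list(children)}}

payload = mo("fvTenant", "example-tenant", [
    mo("fvAp", "example-app", [
        mo("fvAEPg", "web"),
        mo("fvAEPg", "db"),
    ]),
])

# The whole tenant (with its app profile and EPGs) is one nested document,
# pushed to the controller in a single call rather than device by device.
print(json.dumps(payload)[:60])
```

Because the fabric is managed as a whole, deleting the tenant object tears down everything underneath it in one operation.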

Linux Containers and Docker

Docker is an open platform for sysadmins and developers to build, ship and run distributed applications. Applications are easily and quickly assembled from reusable and portable components, eliminating the siloed approach between development, QA, and production environments.

Individual components can be microservices coordinated by a program that contains the business process logic (an evolution of SOA, or Service Oriented Architecture). They can be deployed independently and scaled horizontally as needed, so the project benefits from flexibility and efficient operations. This is of great help in DevOps.

At a high level, Docker is built of:
- Docker Engine: a portable and lightweight runtime and packaging tool
- Docker Hub: a cloud service for sharing applications and automating workflows
There are more components (Machine, Swarm) but that's beyond the basic overview I'm giving here.

Docker’s main purpose is the lightweight packaging and deployment of applications.

Processes in a container are isolated from processes running on the host OS or in other Docker containers.
All processes share the same Linux kernel.
Docker leverages Linux namespaces, a technology that has been present in Linux kernels for 5+ years, to keep containers separated. The default container format is called libcontainer. Docker also supports traditional Linux containers using LXC.
It also uses Control Groups (cgroups), which have been in the Linux kernel even longer, to implement auditing and limiting of resources (such as CPU, memory, I/O), and union file systems that support layering of the container's file system.

Containers or Virtual Machines

Containers are isolated, portable environments where you can run applications along with all the libraries and dependencies they need.
Containers aren’t virtual machines. In some ways they are similar, but there are even more ways that they are different. Like virtual machines, containers share system resources for access to compute, networking, and storage. They are different because all containers on the same host share the same OS kernel, and keep applications, runtimes, and various other services separated from each other using kernel features known as namespaces and cgroups.
Not having a separate instance of a guest OS for each VM saves space on disk and memory at runtime, and also improves performance.
Docker added the concept of a container image, which allows containers to be used on any host with a modern Linux kernel. Soon Windows applications will enjoy the same portability among Windows hosts as well.
The container image allows for much more rapid deployment of applications than if they were packaged in a virtual machine image.

Containers networking

When Docker starts, it creates a virtual interface named docker0 on the host machine.
docker0 is a virtual Ethernet bridge that automatically forwards packets between any other network interfaces that are attached to it.
For every new container, Docker creates a pair of “peer” interfaces: a “local” eth0 interface inside the container, and one with a unique name (e.g. vethAQI2QT) out in the namespace of the host machine.
Traffic going outside is NATed.

You can create different types of networks in Docker:

veth: a peer network device is created with one side assigned to the container and the other side is attached to a bridge specified by the lxc.network.link.

vlan: a vlan interface is linked with the interface specified by the lxc.network.link and assigned to the container.

phys: an already existing interface specified by the lxc.network.link is assigned to the container.

empty: will create only the loopback interface (at kernel space).

macvlan: a macvlan interface is linked with the interface specified by the lxc.network.link and assigned to the container. It also specifies the mode the macvlan will use to communicate between different macvlan on the same upper device. The accepted modes are: private, Virtual Ethernet Port Aggregator (VEPA) and bridge

Docker Evolution - release 1.7, June 2015

Important innovations have been introduced in the latest release of Docker; some are still experimental.

Plugins

A big new feature is a plugin system for Engine, the first two available are for networking and volumes. This gives you the flexibility to back them with any third-party system.
For networks, this means you can seamlessly connect containers to networking systems such as Weave, Microsoft, VMware, Cisco, Nuage Networks, Midokura and Project Calico. For volumes, this means that volumes can be stored on networked storage systems such as Flocker.

Networking

The release includes a huge update to how networking is done.

Libnetwork provides a native Go implementation for connecting
containers. The goal of libnetwork is to deliver a robust Container
Network Model that provides a consistent programming interface and the
required network abstractions for applications.
NOTE: libnetwork project is under heavy development and is not ready for general use.
There
are many networking solutions available to suit a broad range of
use-cases. libnetwork uses a driver / plugin model to support all of
these solutions while abstracting the complexity of the driver
implementations by exposing a simple and consistent Network Model to
users.
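The driver/plugin idea can be sketched in a few lines. The class and method names below are invented for illustration (libnetwork's real Go interfaces differ): the point is a single, consistent entry point for users, with interchangeable drivers behind it.

```python
# Two toy drivers implementing the same minimal interface.
class BridgeDriver:
    name = "bridge"
    def create_network(self, net_id):
        return f"linux bridge for {net_id}"

class OverlayDriver:
    name = "overlay"
    def create_network(self, net_id):
        return f"vxlan overlay for {net_id}"

# The "network model" registry: users name a driver, never its internals.
DRIVERS = {d.name: d() for d in (BridgeDriver, OverlayDriver)}

def network_create(driver, net_id):
    return DRIVERS[driver].create_network(net_id)

print(network_create("overlay", "net1"))  # vxlan overlay for net1
```

Swapping the backing technology (bridge, overlay, a vendor SDN) changes the driver entry, not the user-facing call.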

Containers can now communicate across different hosts (Overlay Driver). You can now create a network and attach containers to it.

Example:

docker network create -d overlay net1

docker run -itd --publish-service=myapp.net1 debian:latest

Orchestration and Clustering for containers

Real world deployments are automated; single CLI commands are less used. The most important orchestrators are Mesos/Marathon, Google Kubernetes and Docker Swarm.
Most use JSON or YAML formats to describe an application: a declarative language that says what an application looks like.
That is similar to the ACI declarative language, a high level abstraction to say what an application needs from the network, and have the network implement it.
This validates Cisco's vision with ACI, very different from the NSXs of the world.
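As a taste of that declarative style, here is a Marathon-style application definition, built as a Python dict for convenience. The field names follow Marathon's app JSON but are simplified: it says what the application looks like (image, instances, resources), leaving the "how" entirely to the orchestrator.

```python
import json

# A declarative app description: no commands about where or how to run it.
app = {
    "id": "/myapp/web",
    "container": {
        "type": "DOCKER",
        "docker": {"image": "nginx:latest"},
    },
    "instances": 3,   # desired state; the orchestrator converges to it
    "cpus": 0.5,
    "mem": 256,
}

# This is the document you would POST to the orchestrator's API.
print(json.dumps(app)[:40])
```

Scaling the app means changing "instances" and resubmitting the same document, which is exactly the declarative pattern ACI applies to the network.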

Next post explains the advantage provided by Cisco ACI (and some other projects in the open source space) when you use containers.

References

Much of the information has been taken from the following sources.
You can refer to them for a deeper investigation of the subject:

September 6, 2015

It’s been a long time since I did my last post: as promised, I only post information from my experience in the real world and I avoid echoing messages from marketing :-)

I've not been at rest, though: I've worked on customer projects that can't be mentioned publicly (yet).

But I've also been on vacation and I could finally read a great book, “The Phoenix Project”.

It is a novel and a very educational reading at the same time.

I wholeheartedly recommend you read it (though I'm not earning anything from the book) because I enjoyed it a lot and I learned important lessons that deserve to be spread, for our common benefit as an IT community.

You are not required to be an IT professional but, if you are, you will benefit the most and it will recall many familiar stories.

Since I've led some mission critical projects, and both the tragedy and the triumph are still impressed on my skin, this story reminded me of those great moments.

Essentially, The Phoenix Project describes the evolution of IT in a company that, on the verge of a complete failure, pioneers DevOps and revolutionizes the way they work.

The impact on the core business is huge and their strategy creates a gap with the competition thanks to agility and flexibility.

Personal lives are also affected, because the new organization ends the tribal war among Development, Operations, Security and the business stakeholders: they establish respect, trust and satisfaction for all the involved parties.

Of course the DevOps methodology is not a magic wand that makes the miracle for them: it is the outcome of a new way of thinking and working together.

This is a story of people, rather than technology.

If every IT department puts itself in the shoes of the others, instead of finger pointing, they can help each other reach a common goal.

If the whole IT is not a counterpart of the LOBs but is a partner (understanding why they are asked something instead of focusing on how to do it), they can offer a huge value to the company… and be highly rewarded (see the coup de théâtre at the end of the story).

This would stop the “dysfunctional marriage” between two parties that don’t understand each other and suffer from a forced relationship.

In my experience, most business people see IT as the provider of a service that is never satisfactory.

On the other side, IT sees that business people don’t understand the complexity and the effort required and ask for impossible things.

In most cases, they are bound to a traditional way of working and don’t even raise their head to see that they already own what’s needed to win.

They are overwhelmed by current tasks, troubleshooting and budget cuts, so they can’t think strategically.

The great idea, here, is importing the concepts and the experience from Lean Manufacturing into IT.

They start considering the IT organization similar to a production plant and optimizing its organization.

Finding bottlenecks and avoiding rework are the first steps, then automation follows to free the smart guys from the routine work and so the quality skyrockets.

At the end of the story the release of new features required by the business no longer takes months (and high risk at the roll out) but they can deploy 10 project builds per day!

That is not impressive if you think that these days some companies achieve thousands of deployments per day thanks to Continuous Integration and Continuous Deployment.

But it is light years ahead of what most of my customers are doing, though some are exploring DevOps now.

Of course, one organization cannot change overnight.

You shouldn’t see the adoption of DevOps as a single step, and be scared by the effort.

In the book, they learn gradually and improve accordingly: you could do the same.

They go through a process that is made of Three Ways, until they master them all.
A brief description of the Three Ways follows, thanks to Richard Campbell:

The First Way – Systems Thinking

• Understand the entire flow of work

• Seek to increase the flow of work

• Stop problems early and often – Don’t let them flow downstream

• Keep everyone thinking globally

• Deeply understand your systems

First Way Goals

• One source of truth – Code, environment and configuration in one place

• Consistent release process – Automation is essential (one click)

• Decrease cycle times, Faster release cadence

The Second Way – Feedback Loops

• Understand and respond to the needs of all customers (internal and external)

• Shorten and amplify all feedback loops

• With feedback comes quality

Second Way Goals

• Defects and performance issues fixed faster

• Ops and InfoSec user stories appear as part of the application

• Everyone is communicating better

• More work getting done

The Third Way – Synergy

• Consistent process and effective feedback result in agility

• Now use that agility to experiment

• You only learn from failure – So fail often, but recover quickly

Third Way Goals

• Ability to anticipate, even define new business needs through visibility in the systems

• Ability to test and optimize new business opportunities in the system while managing risk

• Joy

You should not think that The Phoenix Project is a technical book: though I’ve learned new things or reinforced concepts I knew already, the value I found in it is motivational.

It really moves you to action, and you want to measure the immediate improvement you can get.

More, you want to partner with other stakeholders to achieve common goals.

The Essence of DevOps

• Better Software, Faster

• Pride in the Software You Build and Operate

• Ability to Identify, Respond and Improve Business Needs

My final take from this story is that everybody in the IT (like in other fields) should:

- take risks and innovate - if you fail, the result will probably not be worse than staying still

- invest time - even at the cost of delaying important targets - to think strategically: the return will overpay the effort

- study what others have done already: learning by examples is much easier

- always try to understand your counterpart before fighting on principle: there could be a common advantage if you shift your perspective

May 23, 2015

Ansible is a radically simple IT automation platform that makes your applications and systems easier to deploy. Avoid writing scripts or custom code to deploy and update your applications: automate in a language that approaches plain English, using SSH, with no agents to install on remote systems.
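As a taste of that plain-English style, a minimal playbook might look like the following sketch. The host group, package and service names are just examples; real playbooks name your own inventory groups and roles.

```yaml
# Minimal example playbook: install and start a web server on a host group.
- hosts: webservers
  become: yes
  tasks:
    - name: Install nginx
      yum:
        name: nginx
        state: present
    - name: Make sure nginx is running
      service:
        name: nginx
        state: started
        enabled: yes
```

Run with something like `ansible-playbook -i inventory site.yml`: Ansible connects over SSH and converges each host to the described state, with no agent to install.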

May 4, 2015

Cisco is investing a lot in OpenStack, as other vendors do these days.
Initiatives include being a Gold member of the OpenStack Foundation, sitting on the board of directors, and contributing to different projects in OpenStack (mainly Neutron, which manages networking, but also Nova and Ironic) with blueprints and code development.

Cisco also uses OpenStack in its own data centers, to provide cloud services to the internal IT (our private cloud) and to customers and partners (the Cisco Cloud Services in the Intercloud ecosystem). We also have a managed private cloud offer based on OpenStack (formerly named Metacloud).

Based on this experience, a CVD (Cisco Validated Design) has been published to allow customers to deploy the Openstack platform on the Cisco servers and network. The prescriptive documentation guides you to install and configure the hardware and the software in such a way that you get the expected results in terms of scale and security. It's been fully tested and validated in partnership with Red Hat.

Another important point is the offer of the Cisco ACI data model to the open source community. The adoption of such a model in Openstack (the GBP, i.e. the Group Based Policy) is a great satisfaction for us.

OpenStack will also be managed by the Stack Designer in Cisco Prime Service Catalog (PSC 11.0), to create PaaS services based on Heat (similarly to what we do now with Stack Designer + UCS Director). Templates to deploy a given Data Center topology will be added as services in the catalog and, based on them, other services could be offered with the deployment of a software stack on top of the OpenStack IaaS. The user will be able to order, in a single request, the end-to-end deployment of a new application.

In this post I will tell you about the main topics in the Cisco-Openstack relationship:

Available Plugins for Cisco products

Plugins exist for the following projects in Openstack: Neutron, Nova, Ironic.

You can leverage the features of the Cisco products while you maintain the usual operations with Openstack: the integration of the underlying infrastructure is transparent for the user.

Networking - Project Neutron

Plugins for all the Nexus switching family
- Tenant network creation is based on VLAN or VXLAN
Plugins for ACI
- Neutron Networks and Routers are created as usual, and the plugin has the role of integrating with the API exposed by the Cisco APIC controller

Network Service Plug-in Architecture (ML2)

This pluggable architecture has been designed to allow for common API, rapid innovation and vendor differentiation:

Based on the delegation of the real networking service to the underlying infrastructure, the Openstack user does not care what networking devices are used: he only knows what service he needs, and he gets exactly that.

Use the existing Neutron API with APIC and Cisco ACI

When the OpenStack user creates the usual constructs (Networks, Subnets, Routers) via Horizon or the Neutron API, the APIC ML2 plugin intercepts the request and sends commands to the APIC API.
Network profiles, made of End Point Groups and Contracts, are created and pushed to the fabric. Virtual networks created in the OVS virtual switch in KVM are matched to the networks in the physical fabric, so that traffic can flow to and from the external world.

Another plugin is the one for the Cisco UCS servers, leveraging the UCS Manager API.
This integration allows you to leverage the single point of management of a UCS domain (up to 160 servers) instead of configuring networking on the single blades or - as in competing server architectures - on the individual switches in the chassis.

An additional advantage offered by UCS servers is the VM-FEX (VM fabric extender) feature: virtual NICs can be offered to the VM directly from the hardware, bypassing the virtual switch in the hypervisor thanks to SR-IOV, gaining performance and centralizing management.

Next picture shows the automated VLAN and VM-FEX Support offered by the Cisco UCS Manager plugin for OpenStack Neutron:

Bare metal deployment - Project Ironic

Plugin for UCS Manager to deploy Service Profiles for bare metal workloads on the UCS blades

Ironic is the OpenStack service which provides the capability to provision bare metal servers. The initial version of the Ironic pxe_cisco driver adds support to manage power operations of Cisco UCS B/C series servers that are UCSM-managed, and provides vendor_passthru APIs.
Users can control the power operations using the pxe_cisco driver. This doesn't require the IPMI protocol to be enabled on the servers, as the operations are controlled via Service Profiles.

The vendor_passthru APIs allow the user to enroll nodes automatically in the Ironic DB. They also provide APIs to get node-specific information such as inventory, faults, location, firmware version, etc.
Code is available on GitHub at https://github.com/CiscoUcs/Ironic-UCS

GBP: Group Based Policy

The most exciting news is the adoption of the GBP (Group Based Policy)
model and API in Neutron, that derives from the way the Cisco APIC
controller manages end point groups and contracts in the ACI
architecture. A powerful demonstration of the Cisco thought leadership
in networking.

The Group Based Policy (GBP) extension introduces a declarative policy
driven framework for networking in OpenStack. The GBP abstractions allow
application administrators to express their networking requirements
using group and policy abstractions, with the specifics of policy
enforcement and implementation left to the underlying policy driver.
This facilitates clear separation of concerns between the application
and the infrastructure administrator.

Two Options for the OpenStack Neutron API

The Neutron user can now select the preferred option between two choices:
the usual building blocks in Neutron (Network, Subnet, Router) and the
new - optional - building blocks offered by GBP.

In addition to support for the OpenStack Neutron Modular Layer 2 (ML2)
interface, Cisco APIC supports integration with OpenStack using
Group-Based Policy (GBP). GBP was created by OpenStack developers to
offer declarative abstractions for achieving scalable, intent-based
infrastructure automation within OpenStack. It supports a plug-in
architecture connecting its policy API to a broad range of open source
and vendor solutions, including APIC.
This means that other vendors could provide plugins for their infrastructure, to use with the GBP API.
While GBP is a northbound API for Openstack, the plugins are a southbound implementation.

In this case the Neutron plugin for the APIC controller has an easier task: instead of translating from the legacy constructs (Networks, Subnets, Routers) to the corresponding ACI constructs (EPGs, Contracts), it will just resend (proxy) identical commands to APIC.
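The two integration paths can be contrasted in a tiny sketch. The function names and the construct mapping below are invented for illustration (the real plugins are far richer): with classic Neutron the plugin must translate, while with GBP the constructs already match ACI's, so it can simply forward them.

```python
def ml2_translate(kind, name):
    """Legacy path: map a Neutron construct onto an ACI construct.
    The mapping shown here is a plausible simplification, not the
    exact one used by the APIC ML2 plugin."""
    mapping = {"network": "EPG", "subnet": "Subnet", "router": "Contract"}
    return {"aci_class": mapping[kind], "name": name}

def gbp_proxy(gbp_request):
    """GBP path: the request already speaks in groups and policies,
    so the plugin re-sends it to APIC essentially unchanged."""
    return gbp_request

print(ml2_translate("network", "net1"))
print(gbp_proxy({"policy_group": "web"}))
```

The proxy path is why GBP support is simpler to maintain: there is no semantic gap to bridge between the northbound API and the controller.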

- The virtual leaf of the fabric extends into the hypervisor (AVS)
- You get immediate visibility of the Health Score for the Fabric, Tenants, Applications

The next picture shows how the fabric is built, using two types of switches: the Spines are used to scale and connect all the Leaves in a non-blocking fabric that ensures performance and reliability.
The Leaf switches hold the physical ports where servers are attached: both bare metal servers (i.e. running an operating system directly) and virtualized servers (i.e. running ESXi, Hyper-V or KVM hypervisors).
The software controller for the fabric, named APIC, runs on a cluster of (at least) 3 dedicated physical servers and is not in the data path: so it does not affect the performance and reliability of the fabric, as can happen with other solutions on the market.

The ACI fabric
supports more than 64,000 dedicated tenant networks. A single fabric can
support more than one million IPv4/IPv6 endpoints, more than 64,000 tenants,
and more than 200,000 10G ports. The ACI fabric enables any service (physical
or virtual) anywhere with no need for additional software or hardware gateways
to connect between the physical and virtual services and normalizes
encapsulations for Virtual Extensible Local Area Network (VXLAN) / VLAN /
Network Virtualization using Generic Routing Encapsulation (NVGRE).

The ACI fabric
decouples the tenant endpoint address, its identifier, from the location of the
endpoint that is defined by its locator or VXLAN tunnel endpoint (VTEP)
address. The following figure shows decoupled identity and location.

Forwarding within the
fabric is between VTEPs. The mapping of the internal tenant MAC or IP address
to a location is performed by the VTEP using a distributed mapping database.
After a lookup is done, the VTEP sends the original data packet encapsulated in
VXLAN with the Destination Address (DA) of the VTEP on the destination leaf.
The packet is then de-encapsulated on the destination leaf and sent down to the
receiving host. With this model, we can have a full mesh, loop-free topology
without the need to use the spanning-tree protocol to prevent loops.

You can attach virtual servers or physical servers that use any network virtualization protocol to the Leaf ports, then design the policies that define the traffic flow among them regardless of the local (to the server or to its hypervisor) encapsulation.
So the fabric acts as a normalizer for the encapsulation and allows you to match different environments in a single policy.

Forwarding is not limited to nor constrained by the encapsulation type or encapsulation-specific ‘overlay’ network:

As explained in ACI for Dummies, policies are based on the concept of EPG (Endpoint Group).
Special EPGs represent the outside network (outside the fabric, meaning other networks in your data center, or possibly the Internet or an MPLS connection):

The integration with the hypervisors is made through a bidirectional connection between the APIC controller and the element manager of the virtualization platform (vCenter, System Center VMM, Red Hat RHEV...). Their APIs are used to create local virtual networks that are connected and integrated with the ACI fabric, so that policies are propagated to them.
The ultimate result is the creation of Port Groups, or their equivalents, where VMs can be connected.
A Port Group represents an EPG.
Events generated by the VM lifecycle (power on/off, vMotion...) are sent back to APIC so that the traffic is managed accordingly.

How Policies are enforced in the fabric

The policy contains a source EPG, a destination EPG and rules known as Contracts, made of Subjects (security, QoS...). They are created in the Controller and pushed to all the leaf switches where they are enforced.
When a packet arrives at a leaf, if the destination EPG is known, it is processed locally.
Otherwise it is forwarded to a Spine, to reach the destination EPG through a Leaf that knows it.

There are 3 cases; the local and global tables in the leaf are used depending on whether the destination EP is known:
1 - If the target EP is known and local (local table) to the same leaf, traffic is processed locally (nothing goes through the Spine).
2 - If the target EP is known and remote (global table), traffic is forwarded to the Spine to be sent to the destination VTEP, which is known.
3 - If the target EP is unknown, traffic is sent to the Spine for proxy forwarding (meaning the Spine discovers the destination VTEP).
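The three cases above can be sketched as a small model; the table contents and return values here are purely illustrative (hypothetical endpoint and VTEP names), not actual switch internals:

```python
def forward(dst_ep, local_table, global_table):
    """Model a leaf's forwarding decision for a destination endpoint (EP).

    local_table:  set of EPs attached directly to this leaf
    global_table: dict mapping known remote EPs to their VTEP address
    """
    if dst_ep in local_table:
        # Case 1: destination is on the same leaf, process locally
        return "local"
    if dst_ep in global_table:
        # Case 2: destination VTEP is known, encapsulate in VXLAN
        # and forward through the Spine to that VTEP
        return "vxlan-to-" + global_table[dst_ep]
    # Case 3: unknown EP, send to the Spine for proxy forwarding
    return "spine-proxy"

# Hypothetical tables: "vm-a" is local, "vm-b" sits behind VTEP 10.0.0.2
print(forward("vm-a", {"vm-a"}, {"vm-b": "10.0.0.2"}))  # local
print(forward("vm-b", {"vm-a"}, {"vm-b": "10.0.0.2"}))  # vxlan-to-10.0.0.2
print(forward("vm-c", {"vm-a"}, {"vm-b": "10.0.0.2"}))  # spine-proxy
```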

You can manage the infrastructure as code.

The fabric is stateless: all of its configuration and behavior can be pushed to the network through the controller's API. The definition of Contracts and EPGs, of PODs and Tenants, every Application Profile, is a (set of) XML documents that can be saved as text.
Hence you can save it in the same repository as the source code of your software applications.

You can extend the DevOps pipeline that builds the application, deploys it and tests it automatically by adding a build of the required infrastructure on demand.
This means that you can use a slice of a shared infrastructure to create an environment just when it's needed and destroy it soon after, returning the resources to the pool.
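As a minimal sketch of this idea (the fvTenant object name follows the APIC object model, but the helper and the environment name are hypothetical), an environment definition is just text that a pipeline can generate, version and push to the controller's API:

```python
import json

def tenant_payload(env_name):
    """Build the JSON body describing a short-lived tenant for one
    pipeline run; the same text can live in the application's repo."""
    return json.dumps(
        {"fvTenant": {"attributes": {"name": env_name}}}
    )

# One disposable environment per build; deleting the tenant afterwards
# returns the resources to the shared pool.
print(tenant_payload("ci-build-42"))
```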

You can also use this approach for Disaster Recovery, simply rebuilding a clone of the main DC if it is lost.

Any orchestrator can drive the API of the controller.

The XML (or JSON) content that you send to build the environment and the policies is based on a standard language. The API is well documented and many samples are available.
You can practice with the API, learn how to use it with any REST client, and then copy the same calls into your preferred orchestrator.
Though some products have out-of-the-box native integration with APIC (Cisco UCSD, Microsoft), any other can easily be used with the approach I described above.
See an example in The Elastic Cloud Project.
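As a hedged sketch of such a call (the controller address and credentials are placeholders, and the exact payload shape should be checked against the APIC REST API documentation), this is the kind of request you would first try in a REST client and then move into your orchestrator:

```python
import json
import urllib.request

APIC = "https://apic.example.com"  # placeholder controller address

def login_request(user, password):
    """Build (without sending) the aaaLogin call that opens an APIC session."""
    body = json.dumps({"aaaUser": {"attributes": {"name": user, "pwd": password}}})
    return urllib.request.Request(
        APIC + "/api/aaaLogin.json",
        data=body.encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = login_request("admin", "secret")
print(req.get_method(), req.full_url)
```

Once authenticated, the same pattern (a POST with an XML or JSON policy document) pushes Tenants, EPGs and Contracts to the controller.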

The concept of Service Graph allows an automated and scalable L4-L7 service insertion. The fabric forwards the traffic into a Service Graph (one or more service nodes pre-defined in a series) based on a routing rule. Using the Service Graph simplifies and scales service operations: the following pictures show the difference from a traditional management of network services.

The same result can be achieved with the insertion of a Service Graph in the contract between two EPGs:

The virtual leaf of the fabric extends into the hypervisor (AVS).

Compared to other hypervisor-based virtual switches, AVS provides
cross-consistency in features, management, and control through Application Policy Infrastructure Controller (APIC),
rather than through hypervisor-specific management stations. As a key
component of the overall ACI framework, AVS allows for intelligent
policy enforcement and optimal traffic steering for virtual
applications.

The AVS offers:

Single point of management and control for both physical and virtual workloads and infrastructure

Optimal traffic steering to application services

Seamless workload mobility

Support for all leading hypervisors with a consistent
operational model across implementations for simplified operations in
heterogeneous data centers

Cisco AVS is compatible with any upstream physical
access layer switch that complies with the Ethernet standard, including
Cisco Nexus Family switches. Cisco AVS is compatible with any server
hardware listed in the VMware Hardware Compatibility List (HCL). Cisco
AVS is a distributed virtual switch solution that is fully integrated
into the VMware virtual infrastructure, including VMware vCenter for the
virtualization administrator. This solution allows the network
administrator to configure virtual switches and port groups to establish
a consistent data center network policy.

The next picture shows a topology that includes Cisco AVS with
Cisco APIC and VMware vCenter with the Cisco Virtual Switch Update
Manager (VSUM).

Health Score

The APIC uses a policy model to combine data into a health score. Health scores can be aggregated for a variety of areas such as for infrastructure, applications, or services.

The
APIC supports the following health score types:

●System—Summarizes the health of the entire network.

●Leaf—Summarizes the health of leaf switches in the
network. Leaf health includes hardware health of the switch including fan tray,
power supply, and CPU.

●Tenant—Summarizes the health of a tenant and the
tenant’s applications.

Health scores allow you to isolate performance issues by drilling down through the network hierarchy to isolate faults to specific managed objects (MOs). You can view network health by viewing the health of an application (by tenant) or by the health of a leaf switch (by pod).

You can subscribe to a health score to receive notifications if the health score crosses a threshold value. You can receive health score events via SNMP, email, syslog, and Cisco Call Home. This can be particularly useful for integration with 3rd party monitoring tools.

Health Score use case:
An application administrator could subscribe to the health score of their application and receive automatic notifications from ACI if the health of that specific application degrades from an infrastructure point of view: a truly application-aware infrastructure.
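The subscription boils down to simple threshold logic; a minimal sketch (the score scale and names are illustrative, and the real notifications arrive via SNMP, email, syslog or Call Home as described above):

```python
def check_health(app_name, score, threshold=80):
    """Return an alert message when an application's health score drops
    below the subscribed threshold, or None while it stays healthy."""
    if score < threshold:
        return "ALERT: health of %s degraded to %d (threshold %d)" % (
            app_name, score, threshold)
    return None

print(check_health("billing-app", 65))  # fires an alert
print(check_health("billing-app", 95))  # healthy: prints None
```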

Conclusion

I hope that these few lines were enough to show the advantage that modern network architectures can bring to your Data Center.
Cisco ACI joins all the benefits of SDN and overlay networks with a powerful integration with the hardware fabric, so you get flexibility without losing control, visibility and performance.

One of the most important aspects is the normalization of the encapsulation, so that you can merge different network technologies (from heterogeneous virtual environments and bare metal) into a single well managed policy model.

Policies (specifically, the Application Network Policies created in APIC based on EPGs and Contracts) allow easier communication between software application designers and infrastructure managers, because they are simple to represent, create/maintain and enforce.

April 8, 2015

Software defined networking (SDN) is a new way of looking at
how networking and cloud solutions should be automated,
efficient, and scalable in a new world where application
services may be provided locally, by the data center, or even
the cloud. This is impossible with a rigid system that’s
difficult to manage, maintain, and upgrade. Going forward,
you need flexibility, simplicity, and the ability to quickly grow
to meet changing IT and business needs.

Software Defined Networking For Dummies, Cisco Special
Edition, shows you what SDN is, how it works, and how you
can choose the right SDN solution. This book also helps you
understand the terminology, jargon, and acronyms that are
such a part of defining SDN.
Along the way, you’ll see some examples of the current state
of the art in SDN technology and see how SDN can help your
organization.

March 25, 2015

Russ is the UCS Director guru in our team and, when I saw an internal email where he explained how to use the UCS Director API from an external client, I asked his permission to publish it.

I believe it will be useful for many customers and partners to integrate UCSD in a broader ecosystem.

This short post explains how to invoke UCS Director workflows via the northbound REST API. Authentication and role are controlled by the use of a secure token. Each user account within UCS Director has a unique API token, which can be accessed via the GUI like so:

Firstly, from within the UCS Director GUI,
click the current username at the top right of the screen. Like so:

User Information will then be presented.
Select the ‘Advanced’ tab in order to reveal the API Access token for that user
account.

Once retrieved, this token needs to be added as an HTTP header for all REST requests to UCS Director. The HTTP header name must be X-Cloupia-Request-Key.

X-Cloupia-Request-Key :
E0BEA013C6D4418C9E8B03805156E8BB

Once this step is complete, the next requirement
is to construct an appropriate URI for the HTTP request in order to invoke the
required UCS Director workflow also supplying the required User Inputs (Inputs
that would ordinarily be entered by the end user when executing the workflow
manually).

Workflow invocation for UCS Director uses Version 1 of the API (JSON). A typical request URL would look similar to this:

http://<UCSD_IP>/app/api/rest?formatType=json
&opName=userAPISubmitWorkflowServiceRequest
&opData={SOME_JSON_DATA_HERE}

A very quick JSON refresher

JSON formatted data consists of either dictionaries or lists. Dictionaries consist of name/value pairs that are separated by a colon. Name/value pairs are separated by a comma and dictionaries are bounded by curly braces. For example:

{"animal":"dog", "mineral":"rock", "vegetable":"carrot"}

Lists are used in instances where a single value is insufficient. Lists are comma separated and bounded by square brackets. For example:

{"animals":["dog","cat","horse"]}

To ease readability, it is often worth
temporarily expanding the structure to see what is going on.

{
  "animals":[
    "dog",
    "cat",
    "horse"
  ]
}

Now things get interesting. It is possible (and common) for dictionaries to contain lists, and for those lists to contain dictionaries rather than just simple elements (dog, cat, horse, etc.).

{
  "all_things":{
    "animals":[
      "dog",
      "cat",
      "horse"
    ],
    "minerals":[
      "Quartz",
      "Calcite"
    ],
    "vegetable":"carrot"
  }
}

With an understanding of how JSON objects are structured, we can now look at the required formatting of the URI for UCS Director. When invoking a workflow via the REST API, UCS Director must be called with three parameters: param0, param1 and param2. 'param0' contains the name of the workflow to be invoked. The syntax of the workflow name must match EXACTLY the name of the actual workflow. 'param1' contains a dictionary, which itself contains a list of dictionaries detailing each user input and the value that should be inserted for that input (as though an end user had invoked the workflow via the GUI and had entered the values manually).

The structure of the UCS Director JSON URI
looks like so:

{
  param0:"<WORKFLOW_NAME>",
  param1:{
    "list":[
      {"name":"<INPUT_1>","value":"<INPUT_VALUE>"},
      {"name":"<INPUT_2>","value":"<INPUT_VALUE>"}
    ]
  },
  param2:-1
}

So, let’s see this in action. Take the
following workflow, which happens to be named ‘Infoblox Register New Host’ and
has the user inputs ‘Infoblox IP:’,’Infoblox Username:’,’Infoblox
Password:’,’Hostname:’,’Domain:’ and ‘Network Range:’.

The correct JSON object (shown here in pretty form) would look like so:
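In case the screenshot does not render, here is a hedged reconstruction of that JSON object: the input names are copied verbatim from the workflow above (trailing colons included, since names must match exactly), while all the values are placeholders.

```json
{
  "param0": "Infoblox Register New Host",
  "param1": {
    "list": [
      {"name": "Infoblox IP:",       "value": "192.0.2.10"},
      {"name": "Infoblox Username:", "value": "admin"},
      {"name": "Infoblox Password:", "value": "secret"},
      {"name": "Hostname:",          "value": "newhost01"},
      {"name": "Domain:",            "value": "example.com"},
      {"name": "Network Range:",     "value": "192.0.2.0/24"}
    ]
  },
  "param2": -1
}
```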

Note once more, that the syntax of the
input names must match EXACTLY that of the actual workflow inputs.

After removing all of the readability
formatting, the full URL required in order to invoke this workflow with the
‘user’ inputs as shown above would look like this:

Now that we have our URL and authentication
token HTTP header, we can simply enter this information into a web based REST
client (e.g. RESTclient for Firefox or Postman for Chrome) and execute the
request. Like so:

If the request is successful, then UCS
Director will respond with a “serviceError” of null (No error) and the
serviceResult will contain the service request ID for the newly invoked
workflow:
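Putting the pieces together, here is a hedged Python sketch of the same invocation; the UCS Director address and the workflow inputs are placeholders, the URL layout follows the param0/param1/param2 structure described earlier, and the response handling follows the serviceError/serviceResult shape shown above.

```python
import json
import urllib.parse

UCSD = "192.0.2.50"  # placeholder UCS Director address
TOKEN = "E0BEA013C6D4418C9E8B03805156E8BB"  # per-user API token from the GUI

def build_workflow_url(workflow, inputs):
    """Construct the userAPISubmitWorkflowServiceRequest URL for a workflow."""
    op_data = {
        "param0": workflow,  # must match the workflow name exactly
        "param1": {"list": [{"name": n, "value": v} for n, v in inputs]},
        "param2": -1,
    }
    query = urllib.parse.urlencode({
        "formatType": "json",
        "opName": "userAPISubmitWorkflowServiceRequest",
        "opData": json.dumps(op_data),
    })
    return "http://" + UCSD + "/app/api/rest?" + query

def parse_response(raw):
    """A null serviceError means success; serviceResult is the new SR ID."""
    resp = json.loads(raw)
    if resp.get("serviceError") is None:
        return resp["serviceResult"]
    raise RuntimeError(resp["serviceError"])

# The X-Cloupia-Request-Key header must accompany every request
headers = {"X-Cloupia-Request-Key": TOKEN}
url = build_workflow_url("Infoblox Register New Host",
                         [("Hostname:", "newhost01"), ("Domain:", "example.com")])
print(url.split("?")[0])  # http://192.0.2.50/app/api/rest
print(parse_response('{"serviceError": null, "serviceResult": 1234}'))  # 1234
```

Sending the built URL with any HTTP client, together with the header above, should return the service request ID to monitor.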

Progress of the workflow can either be
monitored by other API requests or via the UCS Director GUI:

Service request logging can also be
monitored via either further API calls or via the UCS Director GUI:

This concludes the example, which you could easily test on your own instance of UCS Director or, if you don't have one at hand, in a demo lab on dcloud.cisco.com.

It should be enough to demonstrate how simple the integration of the automation engine provided by UCSD is, when you want to execute its workflows from an external system: a front end portal, another orchestrator, or your custom scripts.