Chaining CNI Plugins

CNI (Container Network Interface) has supported chained plugins since version 0.3.0. This feature can solve various use cases while at the same time keeping the container network stack clean. This post explains how chained plugins work at a low level and how the chain can be extended with a custom-made CNI plugin. Whether your container orchestrator supports plugin chaining depends on which Container Runtime, and which version of it, is being used.

Chained Plugins

Chaining CNI plugins is different from simply calling multiple CNI plugins in that each plugin call depends on information created in the previous step. In most cases the information passed along is the container IP.

The following figure, taken from a CNI presentation, illustrates plugin chaining.

Container Runtime Behavior

A detailed description of how a Container Runtime should chain plugins can be found in the CNI spec.

To summarize the ADD part:

For the ADD action, the runtime MUST also add a prevResult field to the
configuration JSON of any plugin after the first one, which MUST be the Result
of the previous plugin (if any) in JSON format (see below). For the ADD action,
plugins SHOULD echo the contents of the prevResult field to their stdout to
allow subsequent plugins (and the runtime) to receive the result, unless they
wish to modify or suppress a previous result. Plugins are allowed to modify or
suppress all or part of a prevResult. However, plugins that support a version
of the CNI specification that includes the prevResult field MUST handle
prevResult by either passing it through, modifying it, or suppressing it
explicitly. It is a violation of this specification to be unaware of the
prevResult field.

The runtime MUST also execute each plugin in the list with the same environment.
For the DEL action, the runtime MUST execute the plugins in reverse-order.
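
As an illustration, the following shell script is a minimal sketch of what a runtime does for the ADD action on a two-plugin chain (bridge, then portmap). The network name, paths, and plugin configurations are assumptions made for this example.

# A sketch of the ADD action on a chain of bridge and portmap.
# CNI_ARGS and capability arguments are omitted for brevity.
CNI_PATH=/opt/cni/bin
NETNS=/var/run/netns/cake

# First plugin: bridge. Its stdout is the CNI Result of this step.
RESULT=$(echo '{"cniVersion": "0.3.1", "name": "cake", "type": "bridge",
  "ipam": {"type": "host-local", "subnet": "10.10.0.0/16"}}' |
  CNI_COMMAND=ADD CNI_CONTAINERID=cake CNI_NETNS=$NETNS \
  CNI_IFNAME=eth0 CNI_PATH=$CNI_PATH "$CNI_PATH/bridge")

# Second plugin: portmap. The runtime injects the previous Result
# into the plugin's configuration JSON as the prevResult field.
echo '{"cniVersion": "0.3.1", "name": "cake", "type": "portmap",
  "prevResult": '"$RESULT"'}' |
  CNI_COMMAND=ADD CNI_CONTAINERID=cake CNI_NETNS=$NETNS \
  CNI_IFNAME=eth0 CNI_PATH=$CNI_PATH "$CNI_PATH/portmap"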

The above script is already getting complicated, and it is still not complete; for example, the CNI_ARGS behavior is missing. For that reason, it is recommended to use the library provided by the containernetworking project. In our simple case, we use the cnitool binary, which is a minimal wrapper around the library calls.
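
Its invocation is straightforward; assuming the plugins live under /opt/cni/bin and the network configuration under /etc/cni/net.d, an ADD looks like this:

CNI_PATH=/opt/cni/bin NETCONFPATH=/etc/cni/net.d cnitool add <network-name> <netns-path>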

Demo Run

In the context of CNI, a container is simply a network namespace.

ip netns add cake

A JSON configuration is required for the Container Runtime to call the CNI plugins in the right order and with the right inputs.
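Such a configuration looks like the following sketch, which assumes a chain of the bridge and portmap plugins; the bridge name and subnet are illustrative:

{
  "cniVersion": "0.3.1",
  "name": "cake",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.10.0.0/16",
        "routes": [{"dst": "0.0.0.0/0"}]
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}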

The interesting part is the "capabilities": {"portMappings": true} entry of the portmap plugin. It means that the Container Runtime is expected to provide runtime arguments named portMappings to the plugin. The format of the runtime arguments is a contract between the Container Runtime and the CNI plugin, without any restriction from the CNI specification.
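
For the portmap plugin, for example, the runtime arguments look like this (the port numbers below are arbitrary):

{
  "portMappings": [
    {"hostPort": 8080, "containerPort": 80, "protocol": "tcp"}
  ]
}

With cnitool, such capability arguments are passed through the CAP_ARGS environment variable; the runtime then injects them into the plugin's configuration JSON under the runtimeConfig key:

CNI_PATH=/opt/cni/bin NETCONFPATH=/etc/cni/net.d \
CAP_ARGS='{"portMappings": [{"hostPort": 8080, "containerPort": 80, "protocol": "tcp"}]}' \
cnitool add cake /var/run/netns/cake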

Conclusion

The tricky part with a custom plugin is that the Container Runtime has to be able to pass custom runtime arguments to it. Such behavior might not be supported out of the box.

CNI plugins in chaining mode seem a good fit for adding custom behavior to the network stack. It is important not to "abuse" the mechanism and to solve only network-related topics with it. CNI chaining combines different CNI plugins, placing the right "responsibility" on each of them and enabling complex yet efficient network stacks to be built.