One of the great benefits of deploying Cisco Container Platform on Cisco HyperFlex is the ability to get persistent storage for containers out of the box with minimal configuration. This works well for containers where you need the data to survive a terminated pod, a restarted VM, or, heaven forbid, a host going down!

With the recent release of CCP 2.2 and HX 3.5, this is super simple and works great! Let's illustrate how all of this works by starting at the bottom and working our way up to an application.

Volumes

Before storage can be used, it must be presented to Kubernetes. This is done through volumes. By default, a pod gets only ephemeral storage: a directory that lives as long as the pod does. When the pod is destroyed or restarted, that ephemeral storage is destroyed along with it and the data is lost.
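As a quick sketch of that default behavior (names here are illustrative, not from the original post), an ephemeral emptyDir volume in a pod spec looks like this:

```yaml
# Hypothetical pod showing an ephemeral volume: the emptyDir is
# created when the pod is scheduled and deleted when the pod dies.
apiVersion: v1
kind: Pod
metadata:
  name: scratch-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: scratch
      mountPath: /scratch    # anything written here is lost with the pod
  volumes:
  - name: scratch
    emptyDir: {}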

Volumes are how a pod specifies what storage to mount. A volume has its own lifetime independent of the pod and can be created or destroyed at any point, depending on the driver. Typically, an administrator creates a volume, and the container then specifies that it requires this volume to be mounted. Let's create a simple NGINX example to see what this looks like. We will use the following YAML file and call it ng1-hostPath.yaml:
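The original file is not shown in the captured text, so here is a hedged reconstruction based on the names used later in the walkthrough (the ng1 deployment, the /ng1 host directory, and an external service IP); replica count, labels, and image tag are assumptions:

```yaml
# Sketch of ng1-hostPath.yaml (reconstructed, not the author's exact file).
# Mounts the /ng1 directory on the worker node into NGINX's web root.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ng1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ng1
  template:
    metadata:
      labels:
        app: ng1
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: ng1-volume
          mountPath: /usr/share/nginx/html
      volumes:
      - name: ng1-volume
        hostPath:
          path: /ng1              # directory on the worker node
          type: DirectoryOrCreate
---
apiVersion: v1
kind: Service
metadata:
  name: ng1
spec:
  type: LoadBalancer   # gives us the external IP used below
  selector:
    app: ng1
  ports:
  - port: 80
```

Create it with kubectl create -f ng1-hostPath.yaml, then kubectl get svc shows the external IP once it is assigned.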

Now, we see that the external IP is 10.99.104.37. Navigating to this IP address in our browser, we get a 403 Forbidden error from NGINX.

This is expected because there is nothing in the directory, and NGINX is configured by default not to let you list the contents of a directory. Let's add something to it. First, find the name of the pod that's running:

kubectl get pods | grep ng1

Take that name and sub it into the command below:

kubectl exec -it ng1-77ccc7dc74-xbbpv /bin/bash
# now you will be on the pod; type this command:
echo "Hello Volume" > /usr/share/nginx/html/index.html

From here we can refresh the webpage and see that the change has been made:

Cool. Now let's delete the entire configuration:

kubectl delete -f ng1-hostPath.yaml

After it is gone, we can recreate the resources from the same YAML file and see that the data was preserved:

kubectl create -f ng1-hostPath.yaml

Refreshing the browser gives us the same cheery, "Hello Volume" message.

We can, in fact, go to the host node we specified and modify the contents of /ng1 directly. We could use a configuration management tool like Chef to keep these /ng1 directories in sync on each of the worker nodes. That way every node could serve the same content.

The problem with those approaches is that we either introduce complexity or points of failure. Lucky for us, Kubernetes provides another way.

Persistent Volumes

As you may have noticed in the preceding configurations, volumes are tied directly to the configuration of the pod. Persistent volumes differ from plain volumes in that they are defined as entities of their own. Volumes had to be pre-deployed by an administrator before a user could employ them. Persistent volumes, on the other hand, can either be pre-deployed (like the volumes of yore) or dynamically provisioned. Whether a persistent volume can be created dynamically depends on the PV plugin. Both volumes and persistent volumes have the idea of plugins; we used the hostPath plugin previously. We can now use the HyperFlex FlexVolume plugin to create a persistent volume for our application.
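To make the distinction concrete, here is a hedged sketch of the pre-deployed flavor: an administrator-created persistent volume and a claim that binds to it. All names, sizes, and the hostPath backing are illustrative, not from the original post:

```yaml
# Hypothetical statically provisioned PV and a claim against it.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /pv-demo
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi     # Kubernetes binds this claim to a matching PV
```

The pod then references only the claim (pvc-demo), never the PV itself, which is what decouples the storage lifetime from the pod.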

Storage Classes

Kubernetes has the concept of a StorageClass. Each StorageClass comes with a provisioner, which is abstracted away from the user. The provisioner is platform dependent; the Kubernetes documentation lists the available provisioners. Think of a storage class as a storage plugin that automatically provisions storage for users who request it. An administrator can create a default storage class so that when persistent volumes are requested they are automatically provisioned. In Cisco Container Platform, this is done in step 2 of cluster creation, where we select which storage class we want. If you choose hyperflex, Kubernetes will use the FlexVolume plugin to communicate with HyperFlex to carve out storage.
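For illustration, a default storage class might look like the sketch below. The provisioner string here is an assumption, not something stated in this post; check the real value on your tenant cluster rather than copying it:

```yaml
# Illustrative StorageClass; the provisioner name is an assumption.
# Run `kubectl get storageclass -o yaml` on your cluster for the
# actual value CCP configured.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hyperflex
  annotations:
    # marks this class as the cluster default, so PVCs that omit
    # storageClassName are provisioned from it
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: hyperflex.io/hxvolume
```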

Using HX Persistent Volumes

The HX Kubernetes guide has lots of info on how the HX persistent volume plugin works. After you install CCP and get a tenant cluster up and running with HyperFlex as the default storage class, you can use it to create persistent volumes, in this case for a MariaDB server.

Let's create a YAML file with the MariaDB information, called mariadb-StatefulSet.yaml, based on another YAML file found on GitHub.
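The file itself is missing from the captured text; here is a hedged reconstruction consistent with the description that follows (storageClassName: hyperflex, a volume claim template, and a statically set password). The image tag, storage size, service type, and password value are placeholder assumptions:

```yaml
# Reconstructed sketch of mariadb-StatefulSet.yaml, not the author's
# exact file. The hard-coded password is for testing only.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mariadb
spec:
  serviceName: mariadb
  replicas: 1
  selector:
    matchLabels:
      app: mariadb
  template:
    metadata:
      labels:
        app: mariadb
    spec:
      containers:
      - name: mariadb
        image: mariadb:10.3
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "password"   # use a Secret in anything beyond a test
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: mariadb-data
          mountPath: /var/lib/mysql
  volumeClaimTemplates:
  - metadata:
      name: mariadb-data
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: hyperflex   # dynamic provisioning via HX
      resources:
        requests:
          storage: 8Gi
---
apiVersion: v1
kind: Service
metadata:
  name: mariadb
spec:
  type: LoadBalancer
  selector:
    app: mariadb
  ports:
  - port: 3306
```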

In this example we use storageClassName: hyperflex to create a persistent volume claim template. The claim template is what stateful sets use: if there were multiple replicas, each one would use the template to create its own persistent volume claim. While you should really put the MariaDB credentials in a Secret, for testing we can statically set the password.

Now we can use the MariaDB client utilities to log in and create a database. First, get the IP address of the database by running:
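The command itself is missing from the captured text; assuming the database was exposed through a LoadBalancer service named mariadb, one way to get it is:

```shell
# Shows the service's external (LoadBalancer) IP; assumes a
# service named "mariadb" exists in the current namespace.
kubectl get svc mariadb
```

With that IP in hand, mysql -h <external-ip> -u root -p connects you to the server.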
