Overview

You can configure your OpenShift Container Platform cluster to use Red Hat Gluster Storage as
persistent storage for containerized applications. Red Hat Gluster Storage
offers two deployment solutions: a containerized storage cluster or a dedicated
storage cluster. This topic focuses mainly on the persistent volume plug-in
solution using a dedicated Red Hat Gluster Storage cluster.

Containerized Red Hat Gluster Storage

Starting with the Red Hat Gluster Storage 3.1 update 3 release, you can deploy
containerized Red Hat Gluster Storage directly on OpenShift Container Platform. Containerized
Red Hat Gluster Storage converged with OpenShift Container Platform addresses the use case
where containerized applications require both shared file storage and the
flexibility of a converged infrastructure with compute and storage instances
being scheduled and run from the same set of hardware.

Container Native Storage Recommendations

OpenShift Container Platform offers container native storage (CNS), which makes it easier for
OpenShift Container Platform users to fulfill their storage needs. With CNS, users and
administrators can run storage and application pods together on the same
infrastructure, sharing the same resources.

Creation Time of Volumes with Container Native Storage

Provisioning storage for the environment can influence the time it takes for an
application to start. For example, if the application pod requires a persistent
volume claim (PVC), allow extra time for the CNS volume to be created and bound
to the corresponding PVC. This affects the time required for an application pod
to start.

Creation time of CNS volumes scales linearly up to 100 volumes. In the latest
tests, each volume took approximately 6 seconds to be created, allocated, and
bound to a pod.

Dynamic storage provisioning and storage classes were also configured and used
when provisioning the PVC.
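
As a sketch of that configuration, a storage class for GlusterFS dynamic
provisioning might look like the following; the class name and Heketi REST URL
are hypothetical placeholders, not values from this topic:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-cns                          # hypothetical class name
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi.example.com:8080"    # hypothetical Heketi REST endpoint
  restauthenabled: "false"                     # assumes Heketi auth is disabled
```

PVCs that reference this storage class are then provisioned automatically,
without an administrator pre-creating PVs.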

Deletion Time of Volumes with Container Native Storage

When you delete a PVC that is used by an application pod, that action triggers
the deletion of the CNS volume that was used by the PVC.

PVCs will disappear immediately from the oc get pvc output. However, the time
to delete and recycle CNS volumes depends on the number of CNS volumes. In the
latest tests, the deletion time of CNS volumes proved to scale linearly up to
100 volumes.

Deletion time does not affect application users. CNS deletion behavior gives
CNS storage administrators a way to estimate approximately how long it takes
for CNS volumes to be removed from a CNS cluster.

Recommended Memory Requirements for Container Native Storage

When planning hardware for a CNS storage environment, follow the planning
guidelines to ensure that you have enough memory.

Dedicated Storage Cluster

If you have a dedicated Red Hat Gluster Storage cluster available in your
environment, you can configure OpenShift Container Platform’s Gluster volume plug-in. The
dedicated storage cluster delivers persistent Red Hat Gluster Storage file
storage for containerized applications over the network. Applications access
storage served from the storage cluster through common storage protocols.

You can also use Heketi to dynamically provision volumes in a dedicated Red Hat
Gluster Storage cluster. See
Managing Volumes Using Heketi in the Red Hat Gluster Storage 3.3 Administration Guide for more information.

This solution is a conventional deployment where containerized compute
applications run on an OpenShift Container Platform cluster. The remaining sections in this
topic provide the step-by-step instructions for the dedicated Red Hat Gluster
Storage solution.

This topic presumes some familiarity with OpenShift Container Platform and GlusterFS:

See the
Persistent Storage topic for details on the OpenShift Container Platform PV framework in general.

The versions of OpenShift Container Platform and Red Hat Gluster Storage integrated must be
compatible, according to the information in
Supported Operating Systems.

A fully-qualified domain name (FQDN) must be set for each hypervisor and Red Hat
Gluster Storage server node. Ensure that correct DNS records exist, and that the
FQDN is resolvable via both forward and reverse DNS lookup.

Red Hat OpenShift Container Platform

All installations of OpenShift Container Platform must have valid subscriptions to Red Hat
Network channels and Subscription Management repositories.

All OpenShift Container Platform nodes on RHEL systems must have the glusterfs-fuse RPM
installed, which should match the version of Red Hat Gluster Storage server
running in the containers. For more information on installing glusterfs-fuse,
see
Native
Client in the Red Hat Gluster Storage 3.3 Administration Guide.

Persistent volumes (PVs) and persistent volume claims (PVCs) can share volumes
across a single project. While the GlusterFS-specific information contained in a
PV definition could also be defined directly in a pod definition, doing so does
not create the volume as a distinct cluster resource, making the volume more
susceptible to conflicts.
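
For illustration, a PV definition using the GlusterFS plug-in might look like
the following sketch. The PV name and capacity are assumptions; HadoopVol is
the Gluster volume referenced later in this topic, and glusterfs-cluster is a
hypothetical endpoints object name:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-pv                 # hypothetical PV name
spec:
  capacity:
    storage: 5Gi                   # assumed size
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster   # endpoints object that lists the Gluster servers
    path: HadoopVol                # Gluster volume name
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
```

Defining the volume as a PV, rather than inline in a pod spec, registers it as
a distinct cluster resource.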

Creating Gluster Endpoints

An endpoints definition defines the GlusterFS cluster as EndPoints and
includes the IP addresses of your Gluster servers. The port value can be any
numeric value within the accepted range of ports. Optionally,
you can create a
service
that persists the endpoints.
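
A minimal sketch of such an endpoints definition, using hypothetical server IP
addresses, together with the optional service that persists it:

```yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster      # hypothetical name; referenced from PV definitions
subsets:
  - addresses:
      - ip: 192.168.122.21     # hypothetical Gluster server IPs
      - ip: 192.168.122.22
    ports:
      - port: 1                # any numeric value in the accepted port range
---
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster      # must match the endpoints object name
spec:
  ports:
    - port: 1
```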

Creating the Persistent Volume Claim

Developers request GlusterFS storage by referencing either a PVC or the Gluster
volume plug-in directly in the volumes section of a pod spec. A PVC exists
only in the user’s project and can only be referenced by pods within that
project. Any attempt to access a PV across a project causes the pod to fail.
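
As a sketch, a PVC requesting GlusterFS-backed storage might look like this;
the claim name and requested size are assumptions:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-claim     # hypothetical claim name
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi        # assumed size; must fit an available PV
```

Once bound, the claim can be referenced by name from the volumes section of
any pod spec in the same project.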

In order to access the HadoopVol volume, containers must match the SELinux
label, and either run with a UID of 592 or include 590 in their supplemental
groups. The OpenShift Container Platform GlusterFS plug-in mounts the volume in the container
with the same POSIX ownership and permissions found on the target Gluster
mount, namely owner 592 and group ID 590. However, the container is not run
with an effective UID of 592 or a GID of 590, even though that is the desired
behavior. Instead, a container’s UID and supplemental groups are determined by
Security Context Constraints (SCCs) and the project defaults.

Group IDs

Configure Gluster volume access by using supplemental groups, assuming it is not
an option to change permissions on the Gluster mount. Supplemental groups in
OpenShift Container Platform are used for shared storage, such as GlusterFS. In contrast,
block storage, such as Ceph RBD or iSCSI, uses the fsGroup SCC strategy and the
fsGroup value in the pod’s securityContext.

Use supplemental group IDs instead of user IDs to gain
access to persistent storage. Supplemental groups are covered further in the
full Volume Security topic.

The group ID on the target Gluster mount in this example is 590. A pod can
therefore define that group ID using supplementalGroups under the pod-level
securityContext definition. Note that securityContext must be defined at the
pod level, not under a specific container, and that supplementalGroups takes
an array of GIDs.
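
A minimal pod sketch using supplementalGroups follows; the pod, container,
image, and claim names are hypothetical, and only the GID 590 comes from the
Gluster mount described above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gluster-pod              # hypothetical pod name
spec:
  securityContext:               # defined at the pod level, not per container
    supplementalGroups: [590]    # array of GIDs; 590 matches the Gluster mount group
  containers:
    - name: app
      image: busybox             # hypothetical image
      volumeMounts:
        - name: gluster-vol
          mountPath: /usr/share/data
  volumes:
    - name: gluster-vol
      persistentVolumeClaim:
        claimName: gluster-claim # hypothetical claim name
```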

Assuming there are no custom SCCs that satisfy the pod’s requirements, the pod
matches the restricted SCC. This SCC has the supplementalGroups strategy
set to RunAsAny, meaning that any supplied group IDs are accepted without
range checking.

As a result, the above pod passes admission and can be launched. However, if
group ID range checking is desired, use a custom SCC, as described in
pod security and custom
SCCs. A custom SCC can be created to define minimum and maximum group IDs,
enforce group ID range checking, and allow a group ID of 590.

User IDs

User IDs can be defined in the container image or in the pod definition. The
full Volume Security topic covers
controlling storage access based on user IDs, and should be read prior to
setting up GlusterFS persistent storage.
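
As a hedged sketch, a pod could request the mount’s owner UID with runAsUser
in the container-level securityContext. The pod, container, image, and claim
names are hypothetical; the UID 592 comes from the HadoopVol example earlier
in this topic:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gluster-uid-pod          # hypothetical pod name
spec:
  containers:
    - name: app
      image: busybox             # hypothetical image
      securityContext:
        runAsUser: 592           # matches the owner UID on the Gluster mount
      volumeMounts:
        - name: gluster-vol
          mountPath: /usr/share/data
  volumes:
    - name: gluster-vol
      persistentVolumeClaim:
        claimName: gluster-claim # hypothetical claim name
```

Whether this UID is accepted at admission depends on the SCC’s runAsUser
strategy and the project’s UID range defaults.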