CephFS driver

The CephFS driver enables manila to export shared filesystems backed by Ceph’s
File System (CephFS) using either the Ceph network protocol or NFS protocol.
Guests require a native Ceph client or an NFS client in order to mount the
filesystem.

When guests access CephFS using the native Ceph protocol, access is
controlled via Ceph’s cephx authentication system. If a user requests
share access for an ID, Ceph creates a corresponding Ceph auth ID and a secret
key, if they do not already exist, and authorizes the ID to access the share.
The client can then mount the share using the ID and the secret key. To learn
more about configuring Ceph clients to access the shares created using this
driver, please see the Ceph documentation (http://docs.ceph.com/docs/master/cephfs/).
If you choose to use the kernel client rather than the FUSE client, the share
size limits set in manila may not be obeyed.
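To illustrate, a guest with the Ceph client packages installed might mount a share along these lines; the auth ID (alice), monitor address, and export path are placeholders that would come from manila's access-allow operation and the share's export location.

```shell
# FUSE client mount (placeholders: auth ID "alice", the share's
# export path, and the usual /etc/ceph config/keyring files).
ceph-fuse /mnt/share \
    --id=alice \
    --conf=/etc/ceph/ceph.conf \
    --keyring=/etc/ceph/ceph.client.alice.keyring \
    --client-mountpoint=/volumes/_nogroup/share-id

# Kernel client alternative; note that the kernel client may not
# enforce the share size limits set in manila.
mount -t ceph 192.168.1.7:6789:/volumes/_nogroup/share-id /mnt/share \
    -o name=alice,secretfile=/etc/ceph/alice.secret
```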

When guests access CephFS through NFS, an NFS-Ganesha server mediates
access to CephFS. The driver enforces access control by managing the
NFS-Ganesha server's exports.
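For example, a guest could mount such a share with a standard NFS client; the server address and export path below are placeholders taken from the share's export location.

```shell
# Placeholder NFS-Ganesha server address and export path.
mount -t nfs 10.0.0.5:/volumes/_nogroup/share-id /mnt/share
```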

A manila share backed by CephFS is only as good as the underlying
filesystem. Take care when configuring your Ceph cluster, and consult the
latest guidance on the use of CephFS in the Ceph documentation
(http://docs.ceph.com/docs/master/cephfs/).

The manila.keyring file, along with your ceph.conf file, needs to be
placed on the server running the manila-share service.
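The manila.keyring file referenced here is produced when the driver's Ceph auth ID is created. As a sketch (the auth ID name and the capability set are assumptions; the capabilities required vary by Ceph and manila release):

```shell
# Hypothetical auth ID "manila"; the capabilities shown are
# illustrative only -- consult the Ceph and manila documentation
# for the exact set your release requires.
ceph auth get-or-create client.manila \
    mon 'allow r' \
    osd 'allow rw' \
    mds 'allow *' \
    -o manila.keyring
```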

Important

To communicate with the Ceph backend, a CephFS driver instance
(represented as a backend driver section in manila.conf) requires its own
Ceph auth ID that is not used by other CephFS driver instances running in
the same controller node.

On the server running the manila-share service, you can place the
ceph.conf and manila.keyring files in the /etc/ceph directory. Make sure
the manila.keyring file is owned by the same user that runs the
manila-share process. Add the following section to the ceph.conf file.
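A minimal sketch of such a section, assuming the driver's auth ID is manila and the keyring was placed in /etc/ceph:

```ini
[client.manila]
keyring = /etc/ceph/manila.keyring
```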

Set driver_handles_share_servers to False, as the driver does not
manage the lifecycle of share servers. To let the driver perform
snapshot-related operations, set cephfs_enable_snapshots to True. For the
driver backend to expose shares via the native Ceph protocol, set
cephfs_protocol_helper_type to CEPHFS.
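Putting these options together, a native-protocol backend section in manila.conf might look like the following sketch; the section name, backend name, auth ID, and paths are placeholders.

```ini
[cephfsnative1]
driver_handles_share_servers = False
share_backend_name = CEPHFSNATIVE1
share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
cephfs_conf_path = /etc/ceph/ceph.conf
cephfs_auth_id = manila
cephfs_enable_snapshots = True
cephfs_protocol_helper_type = CEPHFS
```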

Then edit enabled_share_backends to point to the driver's backend section
using the section name. This example also includes another backend
("generic1"); include whatever other backends you have configured.
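Assuming the driver's backend section is named cephfsnative1 (a placeholder), the result might look like:

```ini
[DEFAULT]
enabled_share_backends = generic1, cephfsnative1
```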

Note

For Mitaka, Newton, and Ocata releases, the share_driver path
was manila.share.drivers.cephfs.cephfs_native.CephFSNativeDriver.

Set cephfs_ganesha_server_is_remote to False if the NFS-Ganesha server is
co-located with the manila-share service. If the NFS-Ganesha
server is remote, set the option to True, and set additional options
such as cephfs_ganesha_server_ip, cephfs_ganesha_server_username,
and cephfs_ganesha_server_password (or cephfs_ganesha_path_to_private_key)
to allow the driver to manage the NFS-Ganesha export entries over SSH.

Set cephfs_ganesha_server_ip to the NFS-Ganesha server's IP address. It is
recommended to set this option even if the NFS-Ganesha server is co-located
with the manila-share service.
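As a sketch, the NFS-related options for a co-located NFS-Ganesha server might look like this; the section name and IP address are placeholders.

```ini
[cephfsnfs1]
cephfs_protocol_helper_type = NFS
cephfs_ganesha_server_is_remote = False
cephfs_ganesha_server_ip = 10.0.0.5
```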

With NFS-Ganesha (v2.5.4 or later) and Ceph (v12.2.2 or later), the driver
(Queens or later) can store NFS-Ganesha exports and the export counter in
Ceph RADOS objects. This is useful for highly available NFS-Ganesha
deployments, which can store their configuration efficiently in an already
available distributed storage system. Set additional options in the NFS
driver section to enable this behavior.

Set ganesha_rados_store_pool_name to the Ceph RADOS pool that stores
Ganesha exports and export counter objects. If you want to use one of the
backend CephFS's RADOS pools, prefer CephFS's data pool over its
metadata pool.
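For instance, assuming a pool named cephfs_data is used for this purpose (the pool name is a placeholder), the relevant options might be:

```ini
# Illustrative only; ganesha_rados_store_enable turns the
# RADOS-backed export storage on.
ganesha_rados_store_enable = True
ganesha_rados_store_pool_name = cephfs_data
```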

Edit enabled_share_backends to point to the driver’s backend section
using the section name, cephfnfs1.
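For example:

```ini
[DEFAULT]
enabled_share_backends = cephfnfs1
```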

Alternatively, the cloud admin can create Ceph auth IDs for each of the
tenants. The users can then request manila to authorize the pre-created
Ceph auth IDs, whose secret keys are already shared with them out of band
of manila, to access the shares.

Following is a command that the cloud admin could run from the
server running the manila-share service to create a Ceph auth ID
and get its keyring file.
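A sketch of such a command, assuming a tenant auth ID of alice and the usual admin credentials (both placeholders):

```shell
# Create (or fetch) the tenant's auth ID and write its keyring.
ceph --name=client.admin \
     --keyring=/etc/ceph/ceph.client.admin.keyring \
     auth get-or-create client.alice > alice.keyring
```

The user can then request manila to authorize the pre-created ID, for example with manila access-allow &lt;share-id&gt; cephx alice.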

A CephFS driver instance, represented as a backend driver section in
manila.conf, requires a Ceph auth ID unique to the backend Ceph Filesystem.
Using a non-unique Ceph auth ID will result in the driver unintentionally
evicting other CephFS clients using the same Ceph auth ID to connect to the
backend.

The snapshot support of the driver is disabled by default. The
cephfs_enable_snapshots configuration option needs to be set to True
to allow snapshot operations. Snapshot support will also need to be enabled
on the backend CephFS storage.

Snapshots are read-only. A user can read a snapshot’s contents from the
.snap/{manila-snapshot-id}_{unknown-id} folder within the mounted
share.
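For example, with a share mounted at /mnt/share (a placeholder path):

```shell
# List the snapshots visible inside the mounted share.
ls /mnt/share/.snap/
```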

To restrict share sizes, CephFS uses quotas that are enforced on the
client side. The CephFS FUSE clients are relied upon to respect quotas.

Mitaka release only

The secret key of a Ceph auth ID required to mount a share is not exposed
to a user by any manila API. To work around this, the storage admin needs
to pass the key to the user out of band of manila, or the user needs to use
a Ceph auth ID and key already created and shared with them by the cloud
admin.

An additional level of resource isolation can be provided by mapping a
share's contents to a separate RADOS pool. This layout is preferable
only for cloud deployments with a limited number of shares needing strong
resource separation. You can do this by setting the share type
specification cephfs:data_isolated for the share type used by the cephfs
driver.
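This could be set with the manila client, assuming a share type named cephfstype (a placeholder):

```shell
# Set the data isolation extra-spec on the share type.
manila type-key cephfstype set cephfs:data_isolated=True
```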