Hook-based approach

In this approach, all the hook scripts that install/configure ceph and the openstack services to use
ceph as the storage backend are present in the DevStack repo itself.

These hook scripts are called by DevStack (as part of stack.sh) at appropriate times to install
& initialize a ceph cluster, create openstack service specific ceph pools, and finally configure
the openstack services to use the service specific respective ceph pool as the storage backend.

Thus, at the end of stack.sh you get a fully working local ceph cluster, with openstack
service specific ceph pools acting as the storage for respective openstack services.

Nova and Glance don't have backend-specific scripts, so they are configured directly by
DevStack's ceph hook script.

With the above localrc, running stack.sh should get you a basic openstack setup
that uses Ceph as the storage backend for the Nova, Glance, and Cinder services.

Plugin-based approach

This is a newer way of configuring DevStack to use ceph.

In this approach, all the plugin scripts that install/configure ceph and the openstack services
to use ceph as the storage backend are present outside of the DevStack repo, in a
plugin-specific repo.

For ceph, the repo is

https://github.com/openstack/devstack-plugin-ceph

Like hook scripts, the plugin scripts are called by DevStack (as part of stack.sh) at
appropriate times to install & initialize a ceph cluster, create openstack service specific
ceph pools, and finally configure the openstack services to use the service specific ceph
pool as the storage backend.

Thus, at the end of stack.sh you get a fully working local ceph cluster, with openstack
service specific ceph pools acting as the storage for respective openstack services.
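To turn the plugin on, localrc just needs DevStack's enable_plugin line pointing at the repo above. A minimal sketch (the exact set of other localrc options you need is setup-specific):

```shell
# In localrc (or the [[local|localrc]] section of local.conf):
# tell stack.sh to fetch and run the ceph plugin's scripts
enable_plugin devstack-plugin-ceph https://github.com/openstack/devstack-plugin-ceph
```

DevStack clones the given repo during stack.sh and calls the plugin's entry points at the same well-defined phases where the hook scripts used to run.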

Changes to the plugin/backend can happen independently of DevStack; as long as the API contract is maintained, both can evolve independently and are guaranteed to keep working.

Changes to the plugin repo can be CI'ed (CI = Continuous Integration) at the plugin repo
itself (instead of in DevStack), thus ensuring that a plugin change doesn't harm/break
DevStack for that plugin/backend. Conversely, this also means that changes to DevStack
don't have to worry about the different plugins, as changes to them are gated by each
plugin's respective CI job(s).

Setting up GlusterFS volume

The GlusterFS native driver of Manila requires a version of GlusterFS that supports SSL/TLS-based authorization. This support was fairly new in GlusterFS as of this writing, hence I used the latest (not yet released) version of GlusterFS from the glusterfs nightly build page. Since the time I wrote this, the feature has become available in GlusterFS 3.6.x and later versions.

Thus, the minimum supported version for this to work is GlusterFS 3.6.

As said above, Manila's GlusterFS native driver uses GlusterFS' SSL/TLS-based authorization feature to allow/deny access to a Manila share. In order for SSL/TLS-based authorization to work, we need to set up SSL/TLS certificates between the client and server. These certificates provide the mutual trust between the client and server, and this trust setup is not handled by Manila; it needs to be done out-of-band of Manila. It's understood that in a real-world setup this will be done by the storage admin/deployer prior to setting up Manila with the GlusterFS native driver.

There are many ways to create SSL/TLS client and server certificates; the easiest is to create a self-signed certificate and use the same one on both sides, which is good enough for a devstack/test setup like ours.

Follow the steps below to create the certificates. Note that it doesn't matter where you create these certificates; what matters is where you put them!

Create a new private key for the glusterfs server called glusterfs.key

Create a new public certificate for the glusterfs server using the above private key, called glusterfs.pem. This will be a self-signed certificate, as it was created using the server's private key instead of a CA (Certificate Authority). This certificate will be issued to client.example.com.

Lastly, we need to create glusterfs.ca, which holds the list of certificates we trust. For our devstack/test-only setup, we just copy glusterfs.pem as glusterfs.ca. This means we trust ourselves, and also any other entity that presents glusterfs.pem as its identity during the SSL/TLS handshake.
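The three steps above map to stock openssl commands; a minimal sketch (the 2048-bit key size and 365-day validity are arbitrary choices for a test setup):

```shell
# Step 1: private key for the glusterfs server
openssl genrsa -out glusterfs.key 2048

# Step 2: self-signed public certificate issued to client.example.com,
# created from the private key above (no CA involved)
openssl req -new -x509 -key glusterfs.key \
    -subj "/CN=client.example.com" -days 365 -out glusterfs.pem

# Step 3: the list of certificates we trust -- for this test-only
# setup, we simply trust our own certificate
cp glusterfs.pem glusterfs.ca
```

On the server, GlusterFS expects to find these as /etc/ssl/glusterfs.key, /etc/ssl/glusterfs.pem and /etc/ssl/glusterfs.ca.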

This completes the server-side setup. These certificates need to be copied to the client system(s) as well, but in Manila's case the client is the Nova VM! Thus we copy these certificates to the client once we create the Nova VM. In the real world, the storage admin/deployer might copy these certificates into a tenant-specific Glance image beforehand, reducing the manual intervention needed to set up gluster on the client.

Setup password-less ssh access between devstack and GlusterFS VMs

The GlusterFS native driver needs password-less ssh access to the GlusterFS VM as the root user in order to configure and tune the GlusterFS volume for Manila.

There are enough articles floating around the internet, if you don't know how to make this happen, so please help yourself :)
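For completeness, the usual recipe looks like this (a sketch; 192.168.122.137 is the GlusterFS VM from my setup, so substitute your own address):

```shell
# On the devstack VM: generate a key pair if one doesn't exist (empty passphrase)
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# Install the public key into root's authorized_keys on the GlusterFS VM
ssh-copy-id root@192.168.122.137

# Verify: this should log in without prompting for a password
ssh root@192.168.122.137 true
```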

Configure Manila to use GlusterFS Native driver

Below is my /etc/manila/manila.conf with the GlusterFS specific changes marked with #DPKS
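If the full file is hard to follow, the backend-specific part boils down to something like this (a sketch, not my complete file; the option names are from the GlusterFS native driver of this era and may differ in your Manila release, and the backend/section names are placeholders):

```ini
[DEFAULT]
# activate our GlusterFS-backed share backend
enabled_share_backends = glusternative   #DPKS

[glusternative]
share_backend_name = GLUSTERNATIVE       #DPKS
share_driver = manila.share.drivers.glusterfs_native.GlusterfsNativeShareDriver   #DPKS
# root@<glusterfs-server>:/<volume> -- the gv0 volume created earlier
glusterfs_targets = root@192.168.122.137:/gv0   #DPKS
```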

If all is well, m-shr will now be started with the GlusterFS native driver, which uses the gv0 GlusterFS volume as the backend. As part of startup, the GlusterFS native driver enables SSL/TLS mode on gv0, which can be verified as below (see lines marked with #DPKS).
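One way to check (a sketch; these are GlusterFS' standard SSL volume options, which the driver is expected to have turned on):

```shell
# On the GlusterFS server: the driver should have set SSL options on gv0
sudo gluster volume info gv0
# In the "Options Reconfigured" output, look for:
#   client.ssl: on
#   server.ssl: on
#   auth.ssl-allow: <CN of the allowed client certificate>
```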

In my case, the devstack VM (which acts as the host system, aka the openstack node) is on the 192.168.122.0/24 network, which is not compatible with the 172.24.4.0/24 public network. In order for external connectivity to work, the public neutron network should use the same CIDR as the host/provider network.

So delete the existing public network and create a new public network that matches our host/provider network.
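With the neutron CLI of this era, that looks roughly like this (a sketch; router1, public and public-subnet are DevStack's default names, and the addresses assume the 192.168.122.0/24 host network from above, with an allocation pool you must adjust to a free range in your network):

```shell
# Detach the router from the old public network, then delete it
neutron router-gateway-clear router1
neutron subnet-delete public-subnet
neutron net-delete public

# Recreate the public network on the host/provider CIDR
neutron net-create public --router:external=True
neutron subnet-create public 192.168.122.0/24 --name public-subnet \
    --gateway 192.168.122.1 --disable-dhcp \
    --allocation-pool start=192.168.122.200,end=192.168.122.220

# Re-attach the router's gateway to the new public network
neutron router-gateway-set router1 public
```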

For external connectivity to work, we need to put eth0 of the devstack host as a port inside neutron's br-ex OVS bridge. The below command does that. IMP NOTE: Run the below command as a single bash command, as shown; doing it any other way would cause you to lose connectivity to your devstack VM/Host! Of course, replace the IP addr and other values with your setup-specific ones.
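The single-command sequence is along these lines (a sketch; 192.168.122.50 stands in for your devstack VM's eth0 address and 192.168.122.1 for your gateway — substitute your values, and run it as one command so connectivity moves from eth0 to br-ex atomically):

```shell
sudo bash -c 'ip addr del 192.168.122.50/24 dev eth0 && \
    ovs-vsctl add-port br-ex eth0 && \
    ip addr add 192.168.122.50/24 dev br-ex && \
    ip link set br-ex up && \
    ip route add default via 192.168.122.1 dev br-ex'
```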

As you can see above, eth0 is added as a br-ex port, and we are now able to ping router1's gateway, scratchpad-vm (GlusterFS server IP: 192.168.122.137), and a public DNS server (8.8.8.8). It's very IMP to get this working; otherwise, stop and fix/debug, and don't proceed until it works!

At the end of this whole exercise, make sure the route entry on your devstack VM/Host looks like the below:
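Roughly, the default route should now go out via br-ex rather than eth0 (a sketch of what route -n shows in a setup like mine; your addresses will differ):

```shell
route -n
# Kernel IP routing table -- the important entries:
#   Destination    Gateway         Genmask         Flags ... Iface
#   0.0.0.0        192.168.122.1   0.0.0.0         UG    ... br-ex
#   192.168.122.0  0.0.0.0         255.255.255.0   U     ... br-ex
```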

It's also OK to create a gluster-nightly repo (as we did on the GlusterFS server) and install glusterfs-fuse from it, so that the client and server glusterfs packages are in sync. This worked for me, so I just continued with it.

Copy the 3 certificate files that we created on the GlusterFS server into the Nova VM. Note: you cannot access the Nova VM from the GlusterFS server, so I copied the 3 files to my devstack VM/Host first and then into the Nova VM as below:
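Concretely, the two hops look something like this (a sketch; 192.168.122.50 stands in for the devstack VM/Host, 10.0.0.3 for the Nova VM's IP and cirros for its login user — substitute your values):

```shell
# Hop 1: GlusterFS server -> devstack VM/Host
scp glusterfs.key glusterfs.pem glusterfs.ca stack@192.168.122.50:

# Hop 2: devstack VM/Host -> Nova VM, then into /etc/ssl
# where the glusterfs client expects to find them
scp glusterfs.key glusterfs.pem glusterfs.ca cirros@10.0.0.3:
ssh cirros@10.0.0.3 'sudo mv glusterfs.key glusterfs.pem glusterfs.ca /etc/ssl/'
```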

NOTE: Having the same certificate files on both client and server is the quickest way to set up mutual trust between the two systems. In a real-world setup, the admin/deployer will create separate client and server certificates and set them up accordingly.

Edit the /etc/hosts file in the Nova VM and add an entry for the GlusterFS server. It seems the GlusterFS client looks up the hostname (even when an IP is provided) during mount, hence the need for this step. Use sudo or sudo -s bash to get a root shell inside the Nova VM.
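For example, with the GlusterFS server name and IP from this setup (scratchpad-vm at 192.168.122.137), run inside the Nova VM:

```shell
# Append the GlusterFS server entry to /etc/hosts
echo "192.168.122.137  scratchpad-vm" | sudo tee -a /etc/hosts

# Sanity check: the name should now resolve locally
getent hosts scratchpad-vm
```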