Many of the upgrade guides for OpenStack focus on in-place upgrades to
your OpenStack environment. Some organizations may opt for a less
risky (but more hardware-intensive) option of setting up a parallel
environment, and then migrating data into the new environment. In
this article, we look at how to use Cinder backups with a shared NFS
volume to facilitate the migration of Cinder volumes between two
different OpenStack environments.

Overview

This is how we’re going to proceed:

In the source environment:

Configure Cinder for NFS backups

Create a backup

Export the backup metadata

In the target environment:

Configure Cinder for NFS backups

Import the backup metadata

Create a new volume matching the size of the source volume

Restore the backup to the new volume

Cinder configuration

We’ll be using the NFS backup driver for Cinder, which means
cinder.conf must contain:

backup_driver=cinder.backup.drivers.nfs

And you need to configure an NFS share to use for backups:

backup_share=fileserver:/vol/backups

Cinder in both environments should be pointing at the same
backup_share. This is how we make backups made in the source
environment available in the target environment – they will both have
access to the same storage, so that we only need to copy the metadata
into the target environment.
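Putting the two settings together, the relevant fragment of
cinder.conf in both environments would look like this (the share path
is just an example):

```
[DEFAULT]
backup_driver = cinder.backup.drivers.nfs
backup_share = fileserver:/vol/backups
```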

After making changes to your Cinder configuration
you will need to restart Cinder. If you are using RDO or RHEL-OSP,
this is:

openstack-service restart cinder

Creating a backup

Assume we have a volume named testvol that is currently attached to
a running Nova server. The output of cinder list looks like:
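For example (IDs abbreviated, and purely hypothetical):

```console
$ cinder list
+--------------+--------+---------+------+--------------+
|      ID      | Status |   Name  | Size | Attached to  |
+--------------+--------+---------+------+--------------+
| 724c...ab1d  | in-use | testvol |  1   | 6c56...b06f  |
+--------------+--------+---------+------+--------------+
```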

Attempting to back up this volume with cinder backup-create will
fail, because the volume is currently attached to a Nova server:

ERROR: Invalid volume: Volume to be backed up must be available
(HTTP 400) (Request-ID: req-...)

There are two ways we can deal with this:

We can pass the --force flag to cinder backup-create, which
will allow the backup to continue even if the source volume is
attached. This should be done with care, because the on-disk
filesystem may not be in a consistent state.

The --force flag was introduced in OpenStack Liberty. If you
are using an earlier OpenStack release you will need to use the
following procedure.

We can make the volume available by detaching it from the server.
In this case, you probably want to shut down the server first:
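A sketch of this second approach, assuming a server named testserver
(the server name and the volume ID placeholder are hypothetical;
substitute your own identifiers):

```shell
# Shut the server down so the filesystem is quiesced
# (server/volume names here are hypothetical).
nova stop testserver

# Detach the volume; cinder list should then report it as "available".
nova volume-detach testserver <volume-id>

# With the volume available, the backup can proceed.
cinder backup-create testvol
```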

Exporting the backup

Now that we have successfully created the backup, we need to export
the Cinder metadata regarding the backup using the cinder
backup-export command (which can only be run by a user with admin
privileges):
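The output might look something like this (the backup ID and the
base64 payload are truncated, hypothetical values):

```console
$ cinder backup-export <backup-id>
+----------------+----------------------------------------+
|    Property    |                 Value                  |
+----------------+----------------------------------------+
| backup_service | cinder.backup.drivers.nfs              |
| backup_url     | eyJzdGF0dXMiOiAiYXZhaWxhYmxlIiwgIm...  |
+----------------+----------------------------------------+
```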

That giant block of text labeled backup_url is not, in fact, a URL.
In this case, the actual content is a base64-encoded JSON string.
You will need to copy the base64 data to your target OpenStack
environment. You can extract just the base64 data like this:
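One way to do that, assuming the tabular output shown by the cinder
client (the base64 value is the fourth whitespace-separated field on
the backup_url row; this is a sketch that may need adjusting for your
client's output format):

```shell
# In practice you would pipe the live command:
#   cinder backup-export <backup-id> | awk '/backup_url/ {print $4}' > metadata.txt
# Here the same extraction is shown on a captured (hypothetical,
# truncated) export:
export_output='| backup_service | cinder.backup.drivers.nfs |
| backup_url | eyJzdGF0dXMiOiAiYXZhaWxhYmxlIn0= |'

printf '%s\n' "$export_output" | awk '/backup_url/ {print $4}' > metadata.txt
cat metadata.txt
```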

Importing the backup

In the target OpenStack environment, you need to import the backup
metadata to make Cinder aware of the backup. You do this with the
cinder backup-import command, which requires both a backup_service
parameter and a backup_url. These are the values produced by the
cinder backup-export command in the previous step.

Assuming that we have dumped the base64 data into a file named
metadata.txt, we can import the metadata using the following
command:
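Assuming the cinder.backup.drivers.nfs service from our earlier
configuration, the import would look something like:

```shell
# backup_service must match the driver reported by backup-export;
# metadata.txt holds the base64 backup_url payload.
cinder backup-import cinder.backup.drivers.nfs "$(cat metadata.txt)"
```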

Creating a new volume

At this point, we could simply run cinder backup-restore on the
target system, and Cinder would restore the data onto a new volume
owned by the admin user. If you want to restore to a volume owned
by another user, it is easiest to first create the volume as that
user. You will want to make sure that the size is at least as large
as the source volume:
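For example, assuming a 1 GB source volume and a hypothetical name
for the new volume:

```shell
# Run as the user who should own the restored volume.
# (Older cinder clients use --display-name; newer ones use --name.)
cinder create --display-name testvol-restored 1
```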