Overview

This charm provides the Cinder volume service for OpenStack. It is intended to
be used alongside the other OpenStack components, starting with the Folsom
release.

Cinder is made up of 3 separate services: an API service, a scheduler and a
volume service. This charm allows them to be deployed in different
combinations, depending on user preference and requirements.

This charm was developed to support deploying Folsom on both
Ubuntu Quantal and Ubuntu Precise. Since Cinder is only available for
Ubuntu 12.04 via the Ubuntu Cloud Archive, deploying this charm to a
Precise machine will by default install Cinder and its dependencies from
the Cloud Archive.

Usage

Cinder may be deployed in a number of ways. This charm focuses on 3 main
configurations. All require the existence of the other core OpenStack
services deployed via Juju charms, specifically: mysql, rabbitmq-server,
keystone and nova-cloud-controller. The following assumes these services
have already been deployed.

Basic, all-in-one using local storage and iSCSI

The api server, scheduler and volume service are all deployed into the same
unit. Local storage will be initialized as a LVM physical volume and a volume
group created on it. Instance volumes will be created locally as logical
volumes and exported to instances via iSCSI. This is ideal for small-scale
deployments or testing.
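
A minimal sketch of this deployment, assuming the core services listed above
are already present and that each cinder machine has a spare block device
(sdb here is illustrative):

    juju deploy cinder
    juju config cinder block-device=sdb
    juju add-relation cinder mysql
    juju add-relation cinder rabbitmq-server
    juju add-relation cinder keystone
    juju add-relation cinder nova-cloud-controller

With Juju 1.x, use 'juju set' in place of 'juju config'. If Juju reports an
ambiguous relation, name the endpoints explicitly (e.g. cinder:shared-db
mysql:shared-db).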

Separate volume units for scale out, using local storage and iSCSI

Separating the volume service from the API service allows the storage pool
to easily scale without the added complexity that accompanies load-balancing
the API server. When local storage on a volume server is exhausted, we can
simply add-unit to expand our capacity. Future requests to allocate volumes
will be distributed across the pool of volume servers according to the
availability of storage space.
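
As a sketch, the API/scheduler and volume services could be deployed as two
applications from the same charm (application names and the block device are
illustrative):

    # API and scheduler only
    juju deploy cinder cinder-api
    juju config cinder-api enabled-services=api,scheduler
    # volume service only
    juju deploy cinder cinder-volume
    juju config cinder-volume enabled-services=volume block-device=sdb
    # relate both applications to the core services, then grow the storage
    # pool by adding volume units
    juju add-unit cinder-volume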

All-in-one using Ceph-backed RBD volumes

All 3 services can be deployed to the same unit, but instead of relying
on local storage to back volumes an external Ceph cluster is used. This
allows scalability and redundancy needs to be satisfied, with Cinder's RBD
driver used to create, export and connect volumes to instances. This assumes
a functioning Ceph cluster has already been deployed using the official Ceph
charm and a relation exists between the Ceph service and the nova-compute
service.
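
Assuming a ceph application is already deployed and related to nova-compute,
the Ceph-backed configuration might look like the following (local storage is
disabled by setting block-device to None):

    juju deploy cinder
    juju config cinder block-device=None
    juju add-relation cinder ceph
    # relate cinder to mysql, rabbitmq-server, keystone and
    # nova-cloud-controller as in the basic example above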

Configuration

Default configuration values should work for most deployments.

Users should be aware of the following options in particular:

openstack-origin: Allows Cinder to be installed from a specific apt repository.
See config.yaml for a list of supported sources.

openstack-origin-git: Allows Cinder to be installed from source.
See config.yaml for a list of supported sources.

block-device: When using local storage, a block device should be specified to
back a LVM volume group. It's important this device exists on
all nodes that the service may be deployed to.

overwrite: Whether or not to wipe local storage of data that may prevent it
from being initialized as a LVM physical volume. This includes
filesystems and partition tables. Use with CAUTION.

enabled-services: Can be used to split the cinder services across separate
service units (see the previous section).
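
A sketch of a deployment configuration file covering these options (the file
name cinder.yaml and the values shown are illustrative):

    cinder:
      openstack-origin: cloud:precise-folsom
      block-device: sdb
      overwrite: "true"
      enabled-services: all

It can then be applied at deploy time with 'juju deploy --config cinder.yaml cinder'.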

HA/Clustering

There are two mutually exclusive high availability options: using virtual
IP(s) or DNS. In both cases, a relationship to hacluster is required which
provides the corosync back end HA functionality.

To use virtual IP(s) the clustered nodes must be on the same subnet such that
the VIP is a valid IP on the subnet for one of the node's interfaces and each
node has an interface in said subnet. The VIP becomes a highly-available API
endpoint.

At a minimum, the config option 'vip' must be set in order to use virtual IP
HA. If multiple networks are being used, a VIP should be provided for each
network, separated by spaces. Optionally, vip_iface or vip_cidr may be
specified.
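
For example, assuming a single network (the VIP shown is illustrative and must
be an unused address on the clustered nodes' subnet):

    juju config cinder vip=10.0.0.100
    juju deploy hacluster cinder-hacluster
    juju add-relation cinder cinder-hacluster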

To use DNS high availability there are several prerequisites. However, DNS HA
does not require the clustered nodes to be on the same subnet.
Currently the DNS HA feature is only available for MAAS 2.0 or greater
environments. MAAS 2.0 requires Juju 2.0 or greater. The clustered nodes must
have static or "reserved" IP addresses registered in MAAS. The DNS hostname(s)
must be pre-registered in MAAS before use with DNS HA.

At a minimum, the config option 'dns-ha' must be set to true and at least one
of 'os-admin-hostname', 'os-internal-hostname' or 'os-public-hostname' must
be set in order to use DNS HA. One or more of the above hostnames may be set.
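
For example (the hostname is illustrative and must already be registered in
MAAS):

    juju config cinder dns-ha=true os-public-hostname=cinder.example.com
    # relate cinder to a hacluster application as in the VIP example above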

The charm will throw an exception in the following circumstances:
- If neither 'vip' nor 'dns-ha' is set and the charm is related to hacluster
- If both 'vip' and 'dns-ha' are set, as they are mutually exclusive
- If 'dns-ha' is set and none of the os-{admin,internal,public}-hostname(s) are set

Network Space support

This charm supports the use of Juju Network Spaces, allowing the charm to be bound to network space configurations managed directly by Juju. This is only supported with Juju 2.0 and above.

API endpoints can be bound to distinct network spaces supporting the network separation of public, internal and admin endpoints.

Access to the underlying MySQL instance can also be bound to a specific space using the shared-db relation.
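
As a sketch, assuming spaces named public-space, internal-space, admin-space
and db-space already exist in the Juju model:

    juju deploy --bind "public=public-space internal=internal-space admin=admin-space shared-db=db-space" cinder

Bindings can also be supplied as part of a Juju native bundle.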

worker-multiplier (float)
The CPU core multiplier to use when configuring worker processes for
Cinder. By default, the number of workers for each daemon is set to
twice the number of CPU cores a service unit has. When deployed in
a LXD container, this default value will be capped to 4 workers
unless this configuration option is set.

ssl_cert (string)
SSL certificate to install and use for API ports. Setting this value
and ssl_key will enable reverse proxying, point Cinder's entry in the
Keystone catalog to use https, and override any certificate and key
issued by Keystone (if it is configured to do so).

prefer-ipv6 (boolean)
If True enables IPv6 support. The charm will expect network interfaces
to be configured with an IPv6 address. If set to False (default) IPv4
is expected.
.
NOTE: these charms do not currently support IPv6 privacy extension. In
order for this charm to function correctly, the privacy extension must be
disabled and a non-temporary address must be configured/available on
your network interface.

os-public-hostname (string)
The hostname or address of the public endpoints created for cinder
in the keystone identity provider.
.
This value will be used for public endpoints. For example, an
os-public-hostname set to 'cinder.example.com' with ssl enabled will
create two public endpoints for cinder:
.
https://cinder.example.com:443/v2/$(tenant_id)s and
https://cinder.example.com:443/v3/$(tenant_id)s

action-managed-upgrade (boolean)
If True enables openstack upgrades for this charm via juju actions.
You will still need to set openstack-origin to the new repository but
instead of an upgrade running automatically across all units, it will
wait for you to execute the openstack-upgrade action for this charm on
each unit. If False it will revert to existing behavior of upgrading
all units on config change.
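
For example (the target cloud archive pocket below is illustrative):

    juju config cinder action-managed-upgrade=true
    juju config cinder openstack-origin=cloud:xenial-ocata
    # then upgrade one unit at a time
    juju run-action cinder/0 openstack-upgrade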

os-admin-hostname (string)
The hostname or address of the admin endpoints created for cinder
in the keystone identity provider.
.
This value will be used for admin endpoints. For example, an
os-admin-hostname set to 'cinder.admin.example.com' with ssl enabled will
create two admin endpoints for cinder:
.
https://cinder.admin.example.com:443/v2/$(tenant_id)s and
https://cinder.admin.example.com:443/v3/$(tenant_id)s

block-device (string)
The block devices on which to create the LVM volume group.
.
May be set to None for deployments that will not need local
storage (eg, Ceph/RBD-backed volumes).
.
This can also be a space-delimited list of block devices to attempt
to use in the cinder LVM volume group - each block device detected
will be added to the available physical volumes in the volume group.
.
May be set to the path and size of a local file
(/path/to/file.img|$sizeG), which will be created and used as a
loopback device (for testing only). $sizeG defaults to 5G
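
Some illustrative settings (device names and the file path are examples only):

    juju config cinder block-device=sdb                 # single device
    juju config cinder block-device="sdb sdc"           # multiple devices
    juju config cinder block-device=None                # no local storage (e.g. Ceph/RBD)
    juju config cinder block-device="/tmp/vol.img|10G"  # loopback file, testing only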

glance-api-version (int)
Newer storage drivers may require the v2 Glance API to perform certain
actions, e.g. the RBD driver requires this to support COW
cloning of images. This option will default to v1 for backwards
compatibility with older glance services.

openstack-origin (string)
Repository from which to install. May be one of the following:
distro (default), ppa:somecustom/ppa, a deb url sources entry,
or a supported Ubuntu Cloud Archive e.g.
.
cloud:<series>-<openstack-release>
cloud:<series>-<openstack-release>/updates
cloud:<series>-<openstack-release>/staging
cloud:<series>-<openstack-release>/proposed
.
See https://wiki.ubuntu.com/OpenStack/CloudArchive for info on which
cloud archives are available and supported.
.
NOTE: updating this setting to a source that is known to provide
a later version of OpenStack will trigger a software upgrade unless
action-managed-upgrade is set to True.

ceph-osd-replication-count (int)
This value dictates the number of replicas ceph must make of any
object it stores within the cinder rbd pool. Of course, this only
applies if using Ceph as a backend store. Note that once the cinder
rbd pool has been created, changing this value will not have any
effect (although the configuration of a pool can always be changed
within ceph itself or via the charm used to deploy ceph).

nagios_context (string)
Used by the nrpe-external-master subordinate charm. A string that will
be prepended to the instance name to set the host name in nagios. So,
for instance, the hostname would be something like 'juju-myservice-0'.
If you are running multiple environments with the same services in them
this allows you to differentiate between them.

os-internal-hostname (string)
The hostname or address of the internal endpoints created for cinder
in the keystone identity provider.
.
This value will be used for internal endpoints. For example, an
os-internal-hostname set to 'cinder.internal.example.com' with ssl
enabled will create two internal endpoints for cinder:
.
https://cinder.internal.example.com:443/v2/$(tenant_id)s and
https://cinder.internal.example.com:443/v3/$(tenant_id)s

ephemeral-unmount (string)
Cloud instances provide ephemeral storage which is normally mounted
on /mnt.
.
Providing this option will force an unmount of the ephemeral device
so that it can be used as a Cinder storage device. This is useful for
testing purposes (cloud deployment is not a typical use case).
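
A testing-only sketch, assuming the ephemeral device is /dev/vdb and is
mounted on /mnt (both values are environment-specific):

    juju config cinder ephemeral-unmount=/mnt block-device=/dev/vdb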