Scale Out Usage

Rolling Upgrades

The ceph-mon and ceph-osd charms have the ability to initiate a rolling
upgrade, which is triggered by changing the source configuration option. To
perform a rolling upgrade, first set the source for ceph-mon and watch
juju status. Once the monitor cluster has upgraded, set the source for
ceph-osd and again watch juju status for output. The monitors and OSDs will
sort themselves into a known order and upgrade one by one. As each server
upgrades, the upgrade code will stop all of the monitor or OSD processes on
that server, apply the update and then restart them. The juju status output
will show which previous server each unit is waiting on.
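
For example, the trusty-firefly to trusty-hammer path could be driven as
follows (a sketch assuming Juju 2.x syntax; Hammer is available from the
trusty-kilo cloud archive as noted below):

    # Upgrade the monitor cluster first
    juju config ceph-mon source=cloud:trusty-kilo
    juju status

    # Once the monitors have finished upgrading, upgrade the OSDs
    juju config ceph-osd source=cloud:trusty-kilo
    juju status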

Supported Upgrade Paths

Currently the following upgrade paths are supported using
the Ubuntu Cloud Archive:
- trusty-firefly -> trusty-hammer
- trusty-hammer -> trusty-jewel

Firefly is available in Trusty itself; Hammer is in the Trusty-Juno (end of
life), Trusty-Kilo and Trusty-Liberty cloud archives; and Jewel is available
in Trusty-Mitaka.

For example, if the current source setting is cloud:trusty-liberty, changing
it to cloud:trusty-mitaka will initiate a rolling upgrade of the monitor
cluster from Hammer to Jewel.
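
In command form (again assuming Juju 2.x syntax):

    juju config ceph-mon source=cloud:trusty-mitaka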

Edge cases

There is an edge case in the upgrade code: if the previous node never starts
upgrading itself, the rolling upgrade can hang forever. If you notice this
has happened, it can be fixed by setting the appropriate key in the Ceph
monitor cluster. The monitor cluster will have keys that look like
ceph-mon_ip-ceph-mon-0_1484680239.573482_start and
ceph-mon_ip-ceph-mon-0_1484680274.181742_stop. Each server looks for the
stop key of the previous server to indicate that it upgraded successfully
and that it is safe to take itself down. If the stop key is not present, the
server waits 10 minutes, then considers the previous server dead and moves on.
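
These keys live in the monitors' config-key store, so a hung upgrade can
usually be unblocked by creating the missing stop key by hand. A rough
sketch, assuming the admin keyring is available on the unit; the key name
below is the illustrative one from above, so substitute the real key name
from your cluster, and note that the stored value is arbitrary since the
charm only looks for the key's presence:

    # Inspect the upgrade keys the charm has recorded
    juju ssh ceph-mon/0 "sudo ceph config-key list"

    # Create the missing stop marker for the stuck previous server
    juju ssh ceph-mon/0 "sudo ceph config-key put ceph-mon_ip-ceph-mon-0_1484680274.181742_stop done"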

Network Space support

This charm supports the use of Juju Network Spaces, allowing the charm to be bound to network space configurations managed directly by Juju. This is only supported with Juju 2.0 and above.

Network traffic can be bound to specific network spaces using the public (front-side) and cluster (back-side) bindings:

juju deploy ceph-mon --bind "public=data-space cluster=cluster-space"

Alternatively, these bindings can be provided as part of a Juju native bundle
configuration, for example:
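
A minimal sketch of the relevant bundle stanza (the charm URL, unit count and
space names are illustrative):

    applications:
      ceph-mon:
        charm: cs:ceph-mon
        num_units: 3
        bindings:
          public: data-space
          cluster: cluster-space

Note that the named spaces must already exist in the underlying provider
before deployment.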

Technical Footnotes

This charm uses the new-style Ceph deployment as reverse-engineered from the
Chef cookbook at https://github.com/ceph/ceph-cookbooks, although we selected
a different strategy to form the monitor cluster. Since we don't know the
names or addresses of the machines in advance, we use the relation-joined
hook to wait for all three nodes to come up, and then write their addresses
to ceph.conf in the "mon host" parameter. After we initialize the monitor
cluster a quorum forms quickly, and OSD bringup proceeds.
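
The resulting monitor section of ceph.conf ends up looking roughly like the
following (the fsid and addresses are illustrative):

    [global]
    fsid = 11111111-2222-3333-4444-555555555555
    mon host = 10.0.0.2 10.0.0.3 10.0.0.4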

See the upstream Ceph documentation for more information on monitor cluster
deployment strategies and pitfalls.

(string)
YAML-formatted associative array of sysctl key/value pairs to be set
persistently. By default we set pid_max, max_map_count and
threads-max to a high value to avoid problems with large numbers (>20)
of OSDs recovering. Very large clusters should set those values even
higher (e.g. the maximum for kernel.pid_max is 4194303).
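
For example, the value is a YAML mapping of sysctl keys to values, along
these lines (the figures are illustrative, not tuned recommendations):

    { kernel.pid_max: 4194303, vm.max_map_count: 524288, kernel.threads-max: 2097152 }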

(int)
Number of OSDs expected to be deployed in the cluster. This value is used
for calculating the number of placement groups on pool creation. The
number of placement groups for new pools is based on the actual number
of OSDs in the cluster or the expected-osd-count, whichever is greater.
A value of 0 will cause the charm to only consider the actual number of
OSDs in the cluster.

(int)
Restrict the rbd features used to the specified level. If set, this will
inform clients that they should set the config value `rbd default
features`, for example:
.
rbd default features = 1
.
This needs to be set to 1 when deploying a cloud with the nova-lxd
hypervisor.

(boolean)
Causes the charm to not do any of the initial bootstrapping of the
Ceph monitor cluster. This is only intended to be used when migrating
from the ceph all-in-one charm to a ceph-mon / ceph-osd deployment.
Refer to the Charm Deployment guide at https://docs.openstack.org/charm-deployment-guide/latest/
for more information.

(string)
Optional configuration to support use of additional sources such as:
.
- ppa:myteam/ppa
- cloud:xenial-proposed/ocata
- http://my.archive.com/ubuntu main
.
The last option should be used in conjunction with the key configuration
option.

(string)
The Ceph secret key used by Ceph monitors. This value will become the
mon.key. To generate a suitable value use:
.
ceph-authtool /dev/stdout --name=mon. --gen-key
.
If left empty, a secret key will be generated.
.
NOTE: Changing this configuration after deployment is not supported and
new service units will not be able to join the cluster.

(boolean)
If True, enables IPv6 support. The charm will expect network interfaces
to be configured with an IPv6 address. If set to False (default) IPv4
is expected.
.
NOTE: these charms do not currently support IPv6 privacy extension. In
order for this charm to function correctly, the privacy extension must be
disabled and a non-temporary address must be configured/available on
your network interface.

(string)
User provided Ceph configuration. Supports a string representation of
a python dictionary where each top-level key represents a section in
the ceph.conf template. You may only use sections supported in the
template.
.
WARNING: this is not the recommended way to configure the underlying
services that this charm installs and is used at the user's own risk.
This option is mainly provided as a stop-gap for users that either
want to test the effect of modifying some config or who have found
a critical bug in the way the charm has configured their services
and need it fixed immediately. We ask that whenever this is used,
that the user consider opening a bug on this charm at
http://bugs.launchpad.net/charms providing an explanation of why the
config was needed so that we may consider it for inclusion as a
natively supported config in the charm.
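
As an illustration of the expected format only (the section and option shown
are assumptions, not recommendations):

    { 'global': { 'debug mon': '1/5' } }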

(string)
The unique identifier (fsid) of the Ceph cluster.
.
To generate a suitable value use `uuidgen`.
If left empty, an fsid will be generated.
.
NOTE: Changing this configuration after deployment is not supported and
new service units will not be able to join the cluster.

(int)
The number of placement groups per OSD to target. It is important to
properly size the number of placement groups per OSD as too many
or too few placement groups per OSD may cause resource constraints and
performance degradation. This value comes from the recommendation of
the Ceph placement group calculator (http://ceph.com/pgcalc/) and
recommended values are:
.
100 - If the cluster OSD count is not expected to increase in the
foreseeable future.
200 - If the cluster OSD count is expected to increase (up to 2x) in the
foreseeable future.
300 - If the cluster OSD count is expected to increase between 2x and 3x
in the foreseeable future.
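
As a rough worked example following the pgcalc approach (the figures are
illustrative): with a target of 100 placement groups per OSD, 30 OSDs,
3 replicas and a single pool holding effectively all of the data, a new pool
would receive approximately:

    (100 * 30) / 3 = 1000  ->  rounded to the nearest power of two: 1024 placement groups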

(string)
Used by the nrpe-external-master subordinate charm.
A string that will be prepended to the instance name to set the hostname
in Nagios. For instance, the hostname would be something like:
.
juju-myservice-0
.
If you're running multiple environments with the same services in them
this allows you to differentiate between them.

(int)
Number of ceph-mon units to wait for before attempting to bootstrap the
monitor cluster. For production clusters the default value of 3 ceph-mon
units is normally a good choice.
.
For test and development environments you can enable single-unit
deployment by setting this to 1.
.
NOTE: To establish quorum and enable partition tolerance an odd number of
ceph-mon units is required.
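
For example, a production monitor cluster is typically deployed with three
units (Juju 2.x syntax):

    juju deploy -n 3 ceph-mon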