ceph-deploy is a tool which allows easy and quick deployment of a
Ceph cluster without involving complex and detailed manual configuration. It
uses ssh to gain access to other Ceph nodes from the admin node and sudo for
administrator privileges on them, and its underlying Python scripts automate
the manual process of Ceph installation on each node from the admin node itself.
It can be easily run on a workstation and doesn’t require servers, databases or
any other automated tools. With ceph-deploy, it is really easy to set
up and take down a cluster. However, it is not a generic deployment tool. It is
a specific tool which is designed for those who want to get Ceph up and running
quickly with only the unavoidable initial configuration settings and without the
overhead of installing other tools like Chef, Puppet or Juju. Those
who want to customize security settings, partitions or directory locations and
want to set up a cluster following detailed manual steps should use other tools,
e.g., Chef, Puppet, Juju or Crowbar.

With ceph-deploy, you can install Ceph packages on remote nodes,
create a cluster, add monitors, gather/forget keys, add OSDs and metadata
servers, configure admin hosts or take down the cluster.

The new command starts the deployment of a new cluster and writes a configuration
file and keyring for it. It tries to copy ssh keys from the admin node to gain
passwordless ssh to the monitor node(s), validates the host IP, and creates a cluster
with a new initial monitor node or nodes for monitor quorum, a Ceph configuration
file, a monitor secret keyring and a log file for the new cluster. It populates the
newly created Ceph configuration file with the fsid of the cluster and the hostnames
and IP addresses of the initial monitor members under the [global] section.
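
For example, a minimal bootstrap of a cluster with three monitor hosts (the hostnames mon1, mon2 and mon3 are placeholders):

ceph-deploy new mon1 mon2 mon3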

If more than one network interface is used, the public network setting has to be
added under the [global] section of the Ceph configuration file. If the public subnet
is given, the new command will choose the one IP from the remote host that exists
within the subnet range. The public network can also be added at runtime using the
--public-network option with the new command as mentioned above.
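
As a sketch, the resulting [global] section might then contain entries like the following (the fsid, hostnames, addresses and subnet are example values):

[global]
fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
mon initial members = mon1, mon2, mon3
mon host = 192.168.10.11,192.168.10.12,192.168.10.13
public network = 192.168.10.0/24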

The install command installs Ceph packages on remote hosts. As a first step it installs
yum-plugin-priorities on the admin and other nodes using passwordless ssh and sudo
so that Ceph packages from the upstream repository get higher priority. It then detects
the platform and distribution for the hosts and installs Ceph normally by
downloading distro-compatible packages if an adequate repo for Ceph has already been
added. The --release flag is used to get the latest release for installation. During
detection of platform and distribution before installation, if it finds the
distro.init to be sysvinit (Fedora, CentOS/RHEL etc.), it doesn’t allow
installation with a custom cluster name and uses the default name ceph for the
cluster.

If the user explicitly specifies a custom repo url with --repo-url for
installation, anything detected from the configuration will be overridden and
the custom repository location will be used for installation of Ceph packages.
If required, valid custom repositories are also detected and installed. In the case
of installation from a custom repo, a boolean is used to determine the logic
needed to proceed with the custom repo installation. A custom repo install helper
is used that goes through config checks to retrieve repos (and any extra repos
defined) and installs them. cd_conf is the object built from argparse
that holds the flags and information needed to determine what metadata from the
configuration is to be used.

A user can also opt to install only the repository, without installing Ceph and
its dependencies, by using the --repo option.
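
As a sketch, installation from a custom repository might look like the following; adding --repo would set up only the repository without installing Ceph itself (the repository URL and hostname are placeholders):

ceph-deploy install --repo-url http://example.com/ceph/rpm-firefly/el7 node1
ceph-deploy install --repo --repo-url http://example.com/ceph/rpm-firefly/el7 node1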

Usage:

ceph-deploy install [HOST] [HOST...]

Here, [HOST] is/are the host node(s) where Ceph is to be installed.

The --release option is used to install a release known as CODENAME
(default: firefly).
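
For example, to install a specific release on two hosts (the hostnames are placeholders):

ceph-deploy install --release firefly node1 node2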

Deploy Ceph mds on remote hosts. A metadata server is needed to use CephFS and
the mds command is used to create one on the desired host node. It uses the
subcommand create to do so. create first gets the hostname and distro
information of the desired mds host. It then tries to read the bootstrap-mds
key for the cluster and deploy it on the desired host. The key generally has a
format of {cluster}.bootstrap-mds.keyring. If it doesn’t find a keyring,
it runs gatherkeys to get the keyring. It then creates an mds on the desired
host under the path /var/lib/ceph/mds/ in /var/lib/ceph/mds/{cluster}-{name}
format and a bootstrap keyring under /var/lib/ceph/bootstrap-mds/ in
/var/lib/ceph/bootstrap-mds/{cluster}.keyring format. It then runs appropriate
commands based on distro.init to start the mds.
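
For example, to create a metadata server on a hypothetical host named node1:

ceph-deploy mds create node1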

Deploy Ceph monitor on remote hosts. mon makes use of certain subcommands
to deploy Ceph monitors on other nodes.

Subcommand create-initial deploys monitors for all hosts defined in
mon initial members under the [global] section of the Ceph configuration file,
waits until they form quorum and then runs gatherkeys, reporting the monitor status
along the way. If the monitors don’t form quorum the command will eventually
time out.

Usage:

ceph-deploy mon create-initial

Subcommand create is used to deploy Ceph monitors by explicitly specifying
the hosts which are desired to be made monitors. If no hosts are specified it
will default to the mon initial members defined under the [global]
section of the Ceph configuration file. create first detects the platform and distro
for the desired hosts and checks if the hostname is compatible for deployment. It then
uses the monitor keyring initially created using the new command and deploys the
monitor on the desired host. If multiple hosts were specified during the new command,
i.e., if there are multiple hosts in mon initial members and multiple keyrings
were created, then a concatenated keyring is used for deployment of monitors. In
this process a keyring parser is used which looks for [entity] sections in
monitor keyrings and returns a list of those sections. A helper is then used to
collect all keyrings into a single blob that will be used to inject them into monitors
with --mkfs on remote nodes. All keyring files are concatenated to be
in a directory ending with .keyring. During this process the helper uses the list
of sections returned by the keyring parser to check if an entity is already present
in a keyring and, if not, adds it. The concatenated keyring is used for deployment
of monitors to the desired multiple hosts.

Usage:

ceph-deploy mon create [HOST] [HOST...]

Here, [HOST] is the hostname of the desired monitor host(s).

Subcommand add is used to add a monitor to an existing cluster. It first
detects the platform and distro for the desired host and checks if the hostname is
compatible for deployment. It then uses the monitor keyring, ensures configuration for
the new monitor host and adds the monitor to the cluster. If the section for the
monitor exists and defines a mon addr, that address will be used; otherwise it will
fall back to resolving the hostname to an IP. If --address is used it will override
all other options. After adding the monitor to the cluster, it gives it some time
to start. It then looks for any monitor errors and checks the monitor status. Monitor
errors arise if the monitor is not added in mon initial members, if it doesn’t
exist in the monmap, or if neither the public_addr nor the public_network keys
were defined for monitors. Under such conditions, monitors may not be able to
form quorum. The monitor status tells whether the monitor is up and running normally.
The status is checked by running ceph daemon mon.hostname mon_status on the remote
end, which provides the output and returns a boolean status of what is going on.
False means a monitor that is not fine even if it is up and running, while
True means the monitor is up and running correctly.

Usage:

ceph-deploy mon add [HOST]
ceph-deploy mon add [HOST] --address [IP]

Here, [HOST] is the hostname and [IP] is the IP address of the desired monitor
node. Please note, unlike other mon subcommands, only one node can be
specified at a time.
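
For example, to add a monitor on a hypothetical host mon4 with an example address of 192.168.10.14:

ceph-deploy mon add mon4 --address 192.168.10.14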

Subcommand destroy is used to completely remove monitors on remote hosts.
It takes hostnames as arguments. It stops the monitor, verifies that the ceph-mon
daemon really stopped, creates an archive directory mon-remove under
/var/lib/ceph/, archives the old monitor directory in
{cluster}-{hostname}-{stamp} format in it and removes the monitor from the
cluster by running the ceph remove... command.
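
For example, to remove the monitor from a hypothetical host mon3:

ceph-deploy mon destroy mon3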

The gatherkeys command gathers authentication keys for provisioning new nodes. It takes
hostnames as arguments. It checks for and fetches the client.admin keyring, the monitor
keyring and the bootstrap-mds/bootstrap-osd keyrings from the monitor host. These
authentication keys are used when new monitors/OSDs/MDS are added to the
cluster.

Usage:

ceph-deploy gatherkeys [HOST] [HOST...]

Here, [HOST] is the hostname of the monitor from where keys are to be pulled.

The disk command manages disks on a remote host. It actually triggers the ceph-disk
utility and its subcommands to manage disks.

Subcommand list lists disk partitions and Ceph OSDs.

Usage:

ceph-deploy disk list [HOST:[DISK]]

Here, [HOST] is the hostname of the node and [DISK] is the disk name or path.

Subcommand prepare prepares a directory, disk or drive for a Ceph OSD. It
creates a GPT partition, marks the partition with the Ceph type uuid, creates a
file system, marks the file system as ready for Ceph consumption, uses the entire
partition and adds a new partition to the journal disk.

Usage:

ceph-deploy disk prepare [HOST:[DISK]]

Here, [HOST] is the hostname of the node and [DISK] is the disk name or path.

Subcommand activate activates the Ceph OSD. It mounts the volume in a
temporary location, allocates an OSD id (if needed), remounts it in the correct
location /var/lib/ceph/osd/$cluster-$id and starts ceph-osd. It is
triggered by udev when it sees the OSD GPT partition type or on ceph service
start with ceph disk activate-all.

Usage:

ceph-deploy disk activate [HOST:[DISK]]

Here, [HOST] is the hostname of the node and [DISK] is the disk name or path.

Subcommand zap zaps/erases/destroys a device’s partition table and contents.
It actually uses sgdisk and its option --zap-all to destroy both GPT and
MBR data structures so that the disk becomes suitable for repartitioning.
sgdisk then uses --mbrtogpt to convert the MBR or BSD disklabel disk to a
GPT disk. The prepare subcommand can then be executed, which will create a new
GPT partition.
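
For example, to wipe a disk named sdb on a hypothetical host node1 before preparing it:

ceph-deploy disk zap node1:sdb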

Subcommand prepare of the osd command prepares a directory, disk or drive for a Ceph
OSD. It first checks against multiple OSDs getting created and warns about the
possibility of more than the recommended number, which would cause issues with the
maximum allowed PIDs in a system. It then reads the bootstrap-osd key for the cluster,
or writes the bootstrap key if it is not found. It then uses the ceph-disk
utility’s prepare subcommand to prepare the disk and journal and deploy the OSD
on the desired host. Once prepared, it gives some time to the OSD to settle and
checks for any possible errors and, if found, reports them to the user.

Usage:

ceph-deploy osd prepare HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]...]
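
For example, to prepare an OSD on a hypothetical host node1 using data disk sdb and a journal partition /dev/sdc1:

ceph-deploy osd prepare node1:sdb:/dev/sdc1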

Subcommand activate activates the OSD prepared using the prepare subcommand.
It actually uses the ceph-disk utility’s activate subcommand with the
appropriate init type based on the distro to activate the OSD. Once activated, it
gives some time to the OSD to start and checks for any possible errors and, if
found, reports them to the user. It checks the status of the prepared OSD, checks the
OSD tree and makes sure the OSDs are up and in.
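
The usage form mirrors that of prepare. As a sketch (the hostname and partition names are placeholders, and the data partition name depends on how the disk was prepared):

ceph-deploy osd activate node1:sdb1:/dev/sdc1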

Subcommand list lists disk partitions, Ceph OSDs and prints OSD metadata.
It gets the osd tree from a monitor host, uses the ceph-disk list output
and gets the mount point by matching the line where the partition mentions
the OSD name, reads metadata from files, checks if a journal path exists,
checks if the OSD is in the OSD tree, and prints the OSD metadata.
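
For example, to list the OSDs on a hypothetical host node1:

ceph-deploy osd list node1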

The config command pushes/pulls the configuration file to/from a remote host. It uses
the push subcommand to take the configuration file from the admin host and write it to
the remote host under the /etc/ceph directory. It uses the pull subcommand to do the
opposite, i.e., pull the configuration file from the /etc/ceph directory of the remote
host to the admin node.
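
For example, to push the local configuration file to two hypothetical hosts and then pull it back from one of them:

ceph-deploy config push node1 node2
ceph-deploy config pull node1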

The uninstall command removes Ceph packages from remote hosts. It detects the platform
and distro of the selected host and uninstalls Ceph packages from it. However, some
dependencies like librbd1 and librados2 will not be removed because they can cause
issues with qemu-kvm.

Usage:

ceph-deploy uninstall [HOST] [HOST...]

Here, [HOST] is the hostname of the node from where Ceph will be uninstalled.

The purge command removes Ceph packages from remote hosts and purges all data. It
detects the platform and distro of the selected host, uninstalls Ceph packages and
purges all data. However, some dependencies like librbd1 and librados2 will not be
removed because they can cause issues with qemu-kvm.
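
For example, to purge Ceph packages and data from two hypothetical hosts:

ceph-deploy purge node1 node2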

The purgedata command purges (deletes, destroys, discards, shreds) any Ceph data from
/var/lib/ceph. Once it detects the platform and distro of the desired host, it first
checks if Ceph is still installed on the selected host; if it is installed, it won’t
purge data from it. If Ceph is already uninstalled from the host, it tries to remove
the contents of /var/lib/ceph. If it fails then probably OSDs are still mounted
and need to be unmounted to continue. It unmounts the OSDs, tries to remove
the contents of /var/lib/ceph again and checks for errors. It also removes the
contents of /etc/ceph. Once all steps are successfully completed, all
Ceph data from the selected host has been removed.

Usage:

ceph-deploy purgedata [HOST] [HOST...]

Here, [HOST] is the hostname of the node from where Ceph data will be purged.

The pkg command manages packages on remote hosts. It is used for installing or removing
packages from remote hosts. The package names for installation or removal are to be
specified after the command. Two options, --install and
--remove, are used for this purpose.
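
As a sketch (the package name htop and the hostname node1 are placeholders):

ceph-deploy pkg --install htop node1
ceph-deploy pkg --remove htop node1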

The calamari command installs and configures Calamari nodes. It first checks if the
distro is supported for Calamari installation by ceph-deploy. An argument connect is
used for installation and configuration. It checks for the ceph-deploy configuration
file (cd_conf) and the Calamari release repo or calamari-minion repo. It relies
on the default for repo installation as it doesn’t install Ceph unless specified
otherwise. An options dictionary is also defined because ceph-deploy
pops items internally, which causes issues when those items need to be
available for every host. If the distro is Debian/Ubuntu, it is ensured that the
proxy is disabled for the calamari-minion repo. The calamari-minion package is
then installed and custom repository files are added. The minion config is placed
prior to installation so that it is present when the minion first starts.
The config directory and calamari salt config are created, and the salt-minion package
is installed. If the distro is Redhat/CentOS, the salt-minion service needs to
be started.

Usage:

ceph-deploy calamari {connect} [HOST] [HOST...]

Here, [HOST] is the hostname where Calamari is to be installed.

The --release option can be used to select a given release from repositories
defined in ceph-deploy’s configuration. Defaults to calamari-minion.