ceph is a control utility which is used for manual deployment and maintenance
of a Ceph cluster. It provides a diverse set of commands that allow deployment
of monitors, OSDs, and placement groups, MDS control, and overall maintenance
and administration of the cluster.

Show the releases and features of all daemons and clients connected to the
cluster, along with counts of them in each bucket grouped by the
corresponding features/releases. Each release of Ceph supports a different set
of features, expressed by the features bitmask. New cluster features require
that clients support the feature, or else they are not allowed to connect to
the cluster. As new features or capabilities are enabled after an
upgrade, older clients are prevented from connecting.
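
The gating logic can be pictured as a simple bitmask check. The sketch below is illustrative only; the feature names and bit positions are hypothetical, not Ceph's actual feature bits.

```python
# Hypothetical feature bits for illustration; real Ceph feature bits differ.
FEATURE_A = 1 << 0
FEATURE_B = 1 << 5

def client_may_connect(client_features: int, required_features: int) -> bool:
    # A client may connect only if it supports every required feature bit.
    return client_features & required_features == required_features

# Once FEATURE_B becomes required after an upgrade, an older client
# that only supports FEATURE_A is prevented from connecting.
required = FEATURE_A | FEATURE_B
old_client = FEATURE_A
```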

Subcommand blocked-by prints a histogram of which OSDs are blocking their peers.

Usage:

ceph osd blocked-by

Subcommand create creates new osd (with optional UUID and ID).

This command is DEPRECATED as of the Luminous release, and will be removed in
a future release.

Subcommand new should instead be used.

Usage:

ceph osd create {<uuid>} {<id>}

Subcommand new can be used to create a new OSD or to recreate a previously
destroyed OSD with a specific id. The new OSD will have the specified uuid,
and the command expects a JSON file containing the base64 cephx key for auth
entity client.osd.<id>, as well as an optional base64 cephx key for dm-crypt
lockbox access and a dm-crypt key. Specifying a dm-crypt key requires
specifying the accompanying lockbox cephx key.

Usage:

ceph osd new {<uuid>} {<id>} -i {<params.json>}

The parameters JSON file is optional, but if provided it is expected to
maintain a form of the following format:
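
A minimal sketch of producing such a file from Python; the key names below follow the documented params format for osd new, and the secret values are placeholders, not real keys.

```python
import json

# Sketch: build a params.json for `ceph osd new`. The values below are
# placeholders; a real file carries base64 cephx and dm-crypt secrets.
params = {
    "cephx_secret": "<base64 cephx key for client.osd.<id>>",
    # The next two keys go together: a dm-crypt key requires the
    # accompanying lockbox cephx key.
    "cephx_lockbox_secret": "<base64 cephx key for lockbox access>",
    "dmcrypt_key": "<base64 dm-crypt key>",
}

with open("params.json", "w") as f:
    json.dump(params, f, indent=4)
```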

Subcommand unlink unlinks <name> from the CRUSH map (everywhere, or just at
<ancestor>).

Usage:

ceph osd crush unlink <name> {<ancestor>}

Subcommand df shows OSD utilization.

Usage:

ceph osd df {plain|tree}

Subcommand deep-scrub initiates deep scrub on specified osd.

Usage:

ceph osd deep-scrub <who>

Subcommand down sets osd(s) <id> [<id>…] down.

Usage:

ceph osd down <ids> [<ids>...]

Subcommand dump prints summary of OSD map.

Usage:

ceph osd dump {<int[0-]>}

Subcommand erasure-code-profile is used for managing the erasure code
profiles. It uses some additional subcommands.

Subcommand get gets erasure code profile <name>.

Usage:

ceph osd erasure-code-profile get <name>

Subcommand ls lists all erasure code profiles.

Usage:

ceph osd erasure-code-profile ls

Subcommand rm removes erasure code profile <name>.

Usage:

ceph osd erasure-code-profile rm <name>

Subcommand set creates erasure code profile <name> with [<key[=value]> ...]
pairs. Add --force at the end to override an existing profile (IT IS RISKY).

Usage:

ceph osd erasure-code-profile set <name> {<profile> [<profile>...]}

Subcommand find finds osd <id> in the CRUSH map and shows its location.

Usage:

ceph osd find <int[0-]>

Subcommand getcrushmap gets CRUSH map.

Usage:

ceph osd getcrushmap {<int[0-]>}

Subcommand getmap gets OSD map.

Usage:

ceph osd getmap {<int[0-]>}

Subcommand getmaxosd shows largest OSD id.

Usage:

ceph osd getmaxosd

Subcommand in sets osd(s) <id> [<id>…] in.

Usage:

ceph osd in <ids> [<ids>...]

Subcommand lost marks osd as permanently lost. THIS DESTROYS DATA IF NO
MORE REPLICAS EXIST, BE CAREFUL.

Usage:

ceph osd lost <int[0-]> {--yes-i-really-mean-it}

Subcommand ls shows all OSD ids.

Usage:

ceph osd ls {<int[0-]>}

Subcommand lspools lists pools.

Usage:

ceph osd lspools {<int>}

Subcommand map finds pg for <object> in <pool>.

Usage:

ceph osd map <poolname> <objectname>
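
As a rough mental model of the mapping (a simplified sketch; Ceph actually hashes the object name with its own rjenkins hash and folds it with a stable modulo against a pg_num bitmask, not CRC32):

```python
import zlib

def pg_for_object(object_name: str, pg_num: int) -> int:
    # Simplified: hash the object name and fold it onto the pool's PG
    # count. Real Ceph uses rjenkins plus stable_mod so PG splits remap
    # objects cleanly; this only illustrates the idea.
    return zlib.crc32(object_name.encode()) % pg_num
```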

Subcommand metadata fetches metadata for osd <id>.

Usage:

ceph osd metadata {int[0-]} (default all)

Subcommand out sets osd(s) <id> [<id>…] out.

Usage:

ceph osd out <ids> [<ids>...]

Subcommand ok-to-stop checks whether the list of OSD(s) can be
stopped without immediately making data unavailable. That is, all
data should remain readable and writeable, although data redundancy
may be reduced as some PGs may end up in a degraded (but active)
state. It will return a success code if it is okay to stop the
OSD(s), or an error code and informative message if it is not or if no
conclusion can be drawn at the current time.

Usage:

ceph osd ok-to-stop <ids> [<ids>...]
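
The check can be pictured roughly as follows (an illustrative toy model, not the actual implementation): every PG must keep enough acting OSDs after the candidates stop.

```python
def ok_to_stop(pgs, stop_osds, min_size=2):
    # pgs: mapping of pg id -> list of acting OSD ids (toy model).
    # True if every PG keeps at least min_size acting OSDs once the
    # candidate OSDs are stopped; otherwise stopping would block I/O.
    stop = set(stop_osds)
    return all(
        sum(1 for osd in acting if osd not in stop) >= min_size
        for acting in pgs.values()
    )
```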

Subcommand destroy marks OSD id as destroyed, removing its cephx
entity’s keys and all of its dm-crypt and daemon-private config key
entries.

This command will not remove the OSD from crush, nor will it remove the
OSD from the OSD map. Instead, once the command successfully completes,
the OSD will be shown as destroyed.

In order to mark an OSD as destroyed, the OSD must first be marked as
lost.

Usage:

ceph osd destroy <id> {--yes-i-really-mean-it}

Subcommand purge performs a combination of osd destroy,
osd rm and osd crush remove.

Usage:

ceph osd purge <id> {--yes-i-really-mean-it}

Subcommand safe-to-destroy checks whether it is safe to remove or
destroy an OSD without reducing overall data redundancy or durability.
It will return a success code if it is definitely safe, or an error
code and informative message if it is not or if no conclusion can be
drawn at the current time.

Usage:

ceph osd safe-to-destroy <ids> [<ids>...]

Subcommand set-require-min-compat-client enforces the cluster to be backward
compatible with the specified client version. This subcommand prevents you from
making any changes (e.g., CRUSH tunables, or using new features) that
would violate the current setting. Note that this subcommand will fail if
any connected daemon or client is not compatible with the features offered by
the given <version>. To see the features and releases of all clients connected
to the cluster, please see ceph features.

Usage:

ceph osd set-require-min-compat-client <version>

Subcommand stat prints summary of OSD map.

Usage:

ceph osd stat

Subcommand tier is used for managing tiers. It uses some additional
subcommands.

Subcommand add adds the tier <tierpool> (the second one) to base pool <pool>
(the first one).

Usage:

ceph osd tier add <poolname> <poolname> {--force-nonempty}

Subcommand add-cache adds a cache <tierpool> (the second one) of size <size>
to existing pool <pool> (the first one).

Usage:

ceph osd tier add-cache <poolname> <poolname> <int[0-]>

--no-increasing is off by default, so increasing the osd weight is allowed
when using the reweight-by-utilization or test-reweight-by-utilization
commands. If this option is used with these commands, it prevents the OSD
weight from being increased even if the OSD is under-utilized.
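
The effect of the flag can be sketched as a clamp on the proposed weight (an illustrative model, not the actual reweight code):

```python
def apply_reweight(old_weight, new_weight, no_increasing=False):
    # reweight-by-utilization proposes new_weight per OSD; with
    # --no-increasing, any weight increase is suppressed and the old
    # weight is kept, even for an under-utilized OSD.
    if no_increasing and new_weight > old_weight:
        return old_weight
    return new_weight
```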