This is largely a copy of the regular Manual Deployment, with FreeBSD
specifics. The difference lies in two parts: the underlying disk format,
and the way the tools are used.

All Ceph clusters require at least one monitor, and at least as many OSDs as
copies of an object stored on the cluster. Bootstrapping the initial monitor(s)
is the first step in deploying a Ceph Storage Cluster. Monitor deployment also
sets important criteria for the entire cluster, such as the number of replicas
for pools, the number of placement groups per OSD, the heartbeat intervals,
whether authentication is required, etc. Most of these values are set by
default, so it’s useful to know about them when setting up your cluster for
production.

Following the same configuration as Installation (ceph-deploy), we will set up a
cluster with node1 as the monitor node, and node2 and node3 for
OSD nodes.

A cache and a log (ZIL) device can be attached to the ZFS pool that backs
an OSD. Please note that this is different from the Ceph journals: cache and
log are totally transparent to Ceph, and help the file system keep the system
consistent and improve performance.
Assuming that ada2 is an SSD:
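
The exact commands depend on how the OSD pool was laid out; a sketch, assuming
the pool is named osd.1 and using example partition labels and sizes:

gpart create -s GPT ada2
gpart add -t freebsd-zfs -l osd.1-log -s 1G ada2
zpool add osd.1 log gpt/osd.1-log
gpart add -t freebsd-zfs -l osd.1-cache -s 10G ada2
zpool add osd.1 cache gpt/osd.1-cache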

As per FreeBSD defaults, extra software goes into /usr/local/. This means
that /etc/ceph/ceph.conf is by default located at
/usr/local/etc/ceph/ceph.conf. The smartest thing to do is to create a
softlink from /etc/ceph to /usr/local/etc/ceph:

ln -s /usr/local/etc/ceph /etc/ceph

A sample file is provided in /usr/local/share/doc/ceph/sample.ceph.conf.
Note that /usr/local/etc/ceph/ceph.conf will be found by most tools;
linking it to /etc/ceph/ceph.conf will help with any scripts found in extra
tools, scripts, and/or discussion lists.

Bootstrapping a monitor (a Ceph Storage Cluster, in theory) requires
a number of things:

Unique Identifier: The fsid is a unique identifier for the cluster,
and stands for File System ID from the days when the Ceph Storage Cluster was
principally for the Ceph File System. Ceph now supports native interfaces,
block devices, and object storage gateway interfaces too, so fsid is a
bit of a misnomer.
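
For example, a fresh fsid can be generated with uuidgen:

uuidgen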

Cluster Name: Ceph clusters have a cluster name, which is a simple string
without spaces. The default cluster name is ceph, but you may specify
a different cluster name. Overriding the default cluster name is
especially useful when you are working with multiple clusters and you need to
clearly understand which cluster you are working with.

For example, when you run multiple clusters in a multisite configuration,
the cluster name (e.g., us-west, us-east) identifies the cluster for
the current CLI session. Note: To identify the cluster name on the
command line interface, specify a Ceph configuration file with the
cluster name (e.g., ceph.conf, us-west.conf, us-east.conf, etc.).
Also see CLI usage (ceph --cluster {cluster-name}).
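
For instance, a hypothetical session against a cluster named us-west (assuming
a matching us-west.conf exists in the configuration directory) could look like:

ceph --cluster us-west status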

Monitor Name: Each monitor instance within a cluster has a unique name.
In common practice, the Ceph Monitor name is the host name (we recommend one
Ceph Monitor per host, and no commingling of Ceph OSD Daemons with
Ceph Monitors). You may retrieve the short hostname with hostname -s.

Monitor Map: Bootstrapping the initial monitor(s) requires you to
generate a monitor map. The monitor map requires the fsid, the cluster
name (or uses the default), and at least one host name and its IP address.
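
A sketch of generating the monitor map with monmaptool; the placeholders for
the host name, IP address, and fsid need to be filled in, and /tmp/monmap is
just an example output path:

monmaptool --create --add {hostname} {ip-address} --fsid {uuid} /tmp/monmap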

Monitor Keyring: Monitors communicate with each other via a
secret key. You must generate a keyring with a monitor secret and provide
it when bootstrapping the initial monitor(s).
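
A sketch of generating the monitor keyring with ceph-authtool, using /tmp as
an example location:

ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'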

Administrator Keyring: To use the ceph CLI tools, you must have
a client.admin user. So you must generate the admin user and keyring,
and you must also add the client.admin user to the monitor keyring.
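
A sketch of generating the administrator keyring and importing it into the
monitor keyring; the paths are examples (on FreeBSD the keyring would normally
live under /usr/local/etc/ceph, reachable through the softlink created above):

ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *'
ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring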

The foregoing requirements do not imply the creation of a Ceph Configuration
file. However, as a best practice, we recommend creating a Ceph configuration
file and populating it with the fsid, the mon initial members and the
mon host settings.
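
A minimal sketch of such a file, with placeholders for the fsid and the
monitor address, and assuming node1 as the sole initial monitor:

[global]
fsid = {your-fsid}
mon initial members = node1
mon host = {ip-address-of-node1}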

You can get and set all of the monitor settings at runtime as well. However,
a Ceph Configuration file may contain only those settings that override the
default values. When you add settings to a Ceph configuration file, these
settings override the default settings. Maintaining those settings in a
Ceph configuration file makes it easier to maintain your cluster.

The procedure is as follows:

Log in to the initial monitor node(s):

ssh {hostname}

For example:

ssh node1

Ensure you have a directory for the Ceph configuration file. By default,
Ceph uses /etc/ceph. When you install ceph, the installer will
create the /etc/ceph directory automatically.
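
Once the monitor map and keyrings are in place, the monitor's data directory
can be populated; a sketch, assuming the monitor is named node1 and the files
were written to /tmp as in the examples above:

ceph-mon --mkfs -i node1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring

How the monitor daemon is then started depends on the init system in use.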

Once you have your initial monitor(s) running, you should add OSDs. Your cluster
cannot reach an active+clean state until you have enough OSDs to handle the
number of copies of an object (e.g., osd pool default size = 2 requires at
least two OSDs). After bootstrapping your monitor, your cluster has a default
CRUSH map; however, the CRUSH map doesn’t have any Ceph OSD Daemons mapped to
a Ceph Node.

Without the benefit of any helper utilities, create an OSD and add it to the
cluster and CRUSH map with the following procedure. To create the first two
OSDs with the long form procedure, execute the following on node2 and
node3:

Connect to the OSD host.

ssh {node-name}

Generate a UUID for the OSD.

uuidgen

Create the OSD. If no UUID is given, it will be set automatically when the
OSD starts up. The following command will output the OSD number, which you
will need for subsequent steps.
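
A sketch, assuming the ceph osd create subcommand, where {osd-uuid} is the
UUID generated in the previous step:

ceph osd create {osd-uuid}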

Add the OSD to the CRUSH map so that it can begin receiving data. You may
also decompile the CRUSH map, add the OSD to the device list, add the host as a
bucket (if it’s not already in the CRUSH map), add the device as an item in the
host, assign it a weight, recompile it and set it.
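
A sketch of the simple form, assuming {id} is the OSD number returned by the
previous step, an example weight of 1.0, and node2 as the host bucket:

ceph osd crush add-bucket node2 host
ceph osd crush move node2 root=default
ceph osd crush add osd.{id} 1.0 host=node2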

Then make sure you do not have a keyring set in ceph.conf in the global
section; move it to the client section, or add a keyring setting specific to
this MDS daemon. Also verify that you see the same key in the MDS data
directory and in the output of ceph auth get mds.{id}.