Installing a Ceph Cluster and Configuring RBD-Backed Cinder Volumes

First Steps

Choosing Your Configuration

The Cisco COI Grizzly g.1 release supports only standalone Ceph nodes; if you are on g.1, follow only those instructions.
The Cisco COI Grizzly g.2 release supports both standalone and integrated deployments. The integrated options allow you to run MONs on control and compute servers, along with OSDs on compute servers. You can also use standalone cinder-volume nodes as OSD servers.
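For the integrated options, the scenario is chosen with toggles in your manifest. A sketch using the two flags shown later in this guide, with values for the MON-on-controller case (that these live in site.pp is our assumption):

# scenario toggles (see the Multi-MON section below)
$controller_has_mon = true
$computes_have_mons = false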

Ceph MON on the Controller and OSD on All Compute Nodes

Uncomment the following in your control server Puppet node definition:
# if !empty($::ceph_admin_key) {
#   @@ceph::key { 'admin':
#     secret       => $::ceph_admin_key,
#     keyring_path => '/etc/ceph/keyring',
#   }
# }
# each MON needs a unique id; you can start at 0 and increment as needed.
# class { 'ceph_mon': id => 0 }
Add the following to each compute server Puppet node definition:
class { 'ceph::conf':
  fsid            => $::ceph_monitor_fsid,
  auth_type       => $::ceph_auth_type,
  cluster_network => $::ceph_cluster_network,
  public_network  => $::ceph_public_network,
}
class { 'ceph::osd':
  public_address  => '10.0.0.3',
  cluster_address => '10.0.0.3',
}
# Specify the disk devices to use for OSD here.
# Add a new entry for each device on the node that Ceph should consume.
# The puppet agent will need to run four times for the device to be formatted
# and for the OSD to be added to the crushmap.
ceph::osd::device { '/dev/sdb': }
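If a node has more than one data disk, add an entry per device. Puppet also accepts an array of resource titles, so the two forms below are equivalent (device names are examples):

# one resource per device
ceph::osd::device { '/dev/sdb': }
ceph::osd::device { '/dev/sdc': }

# or, equivalently, an array of titles
ceph::osd::device { ['/dev/sdb', '/dev/sdc']: }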

Ceph Multi-MON Across Controller(s) and Compute(s), with Some OSD on Compute(s)

Note that you cannot co-locate MONs and OSDs on the same server.

Uncomment the following:

$controller_has_mon = true
$computes_have_mons = false

Uncomment the following in your control server Puppet node definition:
# if !empty($::ceph_admin_key) {
#   @@ceph::key { 'admin':
#     secret       => $::ceph_admin_key,
#     keyring_path => '/etc/ceph/keyring',
#   }
# }
# each MON needs a unique id; you can start at 0 and increment as needed.
# class { 'ceph_mon': id => 0 }
For each additional MON on a compute node, add the following:
# each MON needs a unique id; the controller started at 0, so increment for each additional MON.
# class { 'ceph_mon': id => 1 }
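For example, if the controller's MON took id 0, two compute-node MONs would take the next ids (which node gets which id is up to you):

# on the first compute node carrying a MON
class { 'ceph_mon': id => 1 }
# on the second compute node carrying a MON
class { 'ceph_mon': id => 2 }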
For each compute node that does NOT contain a MON, you can specify the OSD configuration:
class { 'ceph::conf':
  fsid            => $::ceph_monitor_fsid,
  auth_type       => $::ceph_auth_type,
  cluster_network => $::ceph_cluster_network,
  public_network  => $::ceph_public_network,
}
class { 'ceph::osd':
  public_address  => '10.0.0.3',
  cluster_address => '10.0.0.3',
}
# Specify the disk devices to use for OSD here.
# Add a new entry for each device on the node that Ceph should consume.
# The puppet agent will need to run four times for the device to be formatted
# and for the OSD to be added to the crushmap.
ceph::osd::device { '/dev/sdb': }
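Once the agent runs have completed on an OSD node, you can confirm each device joined the cluster; from a node with the admin keyring (e.g. mon0):

ceph osd tree   # each device should appear as an osd.N marked "up"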

Configuring Cinder to Use Ceph

Ceph Node Installation and Testing

If you do not set Puppet to autostart in site.pp, you will have to run the agent manually as shown here.
Regardless of the start method, the agent must run at least four times on each node running any Ceph services in order for Ceph to be properly configured.

First bring up the mon0 node and run:

apt-get update
puppet agent -t -v --no-daemonize   # run at least four times
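Before bringing up the OSD node(s), you can confirm the monitor came up; a quick check from mon0 (this assumes the admin keyring is in place at /etc/ceph/keyring):

ceph mon stat   # should report the MON(s) you configured as being in quorum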

Then bring up the OSD node(s) and run:

apt-get update
puppet agent -t -v --no-daemonize   # run at least four times
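On either node type, a small shell loop covers the required four passes; a minimal sketch:

apt-get update
for i in 1 2 3 4; do
  puppet agent -t -v --no-daemonize
done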

The Ceph cluster will now be up. You can verify this by logging in to the mon0 node and running the 'ceph status' command. The "monmap" line should show 1 or more MONs (depending on the number you configured). The "osdmap" line should show 1 or more OSDs (depending on the number you configured), and the OSDs should be marked as "up".
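With the cluster verified, you can point Cinder at RBD. A minimal sketch for the cinder-volume node, assuming the puppet-cinder module's cinder::volume::rbd class is available and that the RBD pool and client key have already been created (the pool name, user, and UUID below are placeholders, not values from this guide):

class { 'cinder::volume::rbd':
  rbd_pool        => 'volumes',
  rbd_user        => 'volumes',
  rbd_secret_uuid => '00000000-0000-0000-0000-000000000000',  # placeholder; use your libvirt secret UUID
}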