
Installing a Ceph cluster and configuring RBD-backed Cinder volumes

First steps

* Install your build server.
* Run puppet_modules.py to download the necessary puppet modules.
* Edit site.pp to fit your configuration.
* You must define one MON and at least one OSD to use Ceph.
* It is recommended that you zero the first several blocks on each disk that will be used for Ceph OSD data storage. This step is required if you're using disks that have been used in a previous Ceph deployment. The command "dd if=/dev/zero of=/dev/sdX bs=100M count=1" (with "sdX" replaced with an appropriate device name) will suffice; see the example below.
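
For example, to zero the start of a disk that previously held Ceph data (destructive; "sdX" is a placeholder for your actual OSD data device):

<pre>
# Overwrites the first 100 MB of the disk, destroying any existing
# partition table and leftover Ceph metadata on it.
dd if=/dev/zero of=/dev/sdX bs=100M count=1
</pre>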

Choosing Your Configuration

* Cisco COI Grizzly g.1 release supports only standalone Ceph nodes. Please follow only those instructions.
* Cisco COI Grizzly g.2 release supports both standalone and integrated configurations. The integrated options allow you to run MONs on control and compute servers, along with OSDs on compute servers. You can also use standalone cinder-volume nodes as OSD servers.

For all Ceph configurations, uncomment the following in site.pp and change the values as appropriate for your deployment:

class { 'ceph::conf':
  fsid            => $::ceph_monitor_fsid,
  auth_type       => $::ceph_auth_type,
  cluster_network => $::ceph_cluster_network,
  public_network  => $::ceph_public_network,
}
class { 'ceph::osd':
  public_address  => '10.0.0.3',
  cluster_address => '10.0.0.3',
}
# Specify the disk devices to use for OSD here.
# Add a new entry for each device on the node that ceph should consume.
# puppet agent will need to run four times for the device to be formatted,
# and for the OSD to be added to the crushmap.
ceph::osd::device { '/dev/sdb': }

# Each MON needs a unique id; you can start at 0 and increment as needed.
class { 'ceph_mon': id => 0 }

For each compute node that does NOT contain a MON, you can specify just the OSD configuration:

class { 'ceph::conf':
  fsid            => $::ceph_monitor_fsid,
  auth_type       => $::ceph_auth_type,
  cluster_network => $::ceph_cluster_network,
  public_network  => $::ceph_public_network,
}
class { 'ceph::osd':
  public_address  => '10.0.0.3',
  cluster_address => '10.0.0.3',
}
# Specify the disk devices to use for OSD here.
# Add a new entry for each device on the node that ceph should consume.
# puppet agent will need to run four times for the device to be formatted,
# and for the OSD to be added to the crushmap.
ceph::osd::device { '/dev/sdb': }

Ceph MON and OSD on the Same Nodes

This feature will be available in g.3 and later releases. It is not supported in g.2 or earlier.

WARNING: YOU MUST HAVE AN ODD NUMBER OF MON NODES.

You can have as many OSD nodes as you like, but you must have an odd number of MON nodes so they can reach a quorum.

First, uncomment the ceph_combo line in site.pp:

# Another alternative is to run MON and OSD on the same node. Uncomment
# $ceph_combo to enable this feature. You will NOT need to enable
# $osd_on_compute, $controller_has_mon, or $computes_have_mon for this
# feature. You will need to specify the normal MON and OSD definitions
# for each puppet node as usual.
$ceph_combo = true

You will need to specify the normal MON and OSD definitions for each puppet node as usual:

node 'compute-server01' inherits os_base {
  class { 'compute':
    internal_ip => '192.168.242.21',
    #enable_dhcp_agent => true,
    #enable_l3_agent   => true,
    #enable_ovs_agent  => true,
  }
  # If you want to run ceph mon0 on your controller node, uncomment the
  # following block. Be sure to read all additional ceph-related
  # instructions in this file.
  # Only mon0 should export the admin keys.
  # This means the following if statement is not needed on the additional
  # mon nodes.
  if !empty($::ceph_admin_key) {
    @@ceph::key { 'admin':
      secret       => $::ceph_admin_key,
      keyring_path => '/etc/ceph/keyring',
    }
  }
  # Each MON needs a unique id; you can start at 0 and increment as needed.
  class { 'ceph_mon': id => 0 }
  class { 'ceph::osd':
    public_address  => '192.168.242.21',
    cluster_address => '192.168.242.21',
  }
  # Specify the disk devices to use for OSD here.
  # Add a new entry for each device on the node that ceph should consume.
  # puppet agent will need to run four times for the device to be formatted,
  # and for the OSD to be added to the crushmap.
  ceph::osd::device { '/dev/sdb': }
}

Making a standalone OSD node in a combined node environment

Add the following to your puppet OSD node definition in site.pp:

class { 'ceph::conf':
  fsid => $::ceph_monitor_fsid,
}

Deploying a Standalone Cinder Volume OSD node

This option is available in g.1 and newer releases. For each puppet node definition, add the following:

Ceph Node Installation and Testing

If you do not set puppet to autostart in site.pp, you will have to run the agent manually as shown here. Regardless of the start method, the agent must run at least four times on each node running any Ceph services in order for Ceph to be properly configured.

First bring up the mon0 node and run:

apt-get update
run 'puppet agent -t -v --no-daemonize' at least four times

Then bring up the OSD node(s) and run:

apt-get update
run 'puppet agent -t -v --no-daemonize' at least four times
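
If you are running the agent by hand, one way to satisfy the "at least four runs" requirement is a simple loop (a convenience sketch, not part of the original instructions):

<pre>
# Repeat the agent run four times so the Ceph disks get formatted and the
# OSDs are added to the crushmap.
for i in 1 2 3 4; do puppet agent -t -v --no-daemonize; done
</pre>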

The Ceph cluster will now be up. You can verify this by logging in to the mon0 node and running the 'ceph status' command. The "monmap" line should show 1 or more MONs (depending on the number you configured). The "osdmap" line should show 1 or more OSDs (depending on the number you configured), and each OSD should be marked as "up". There will be one OSD per configured disk; e.g., if you have a single OSD node with three disks available for Ceph, you will see 3 OSDs in your 'ceph status' output.
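
The exact test commands are not preserved in this copy; the following is a hedged sketch of verifying the cluster and creating a test volume. The Grizzly-era cinderclient syntax and the RBD pool name "volumes" are assumptions; use the pool configured in your cinder.conf.

<pre>
# On the mon0 node: confirm the monmap/osdmap counts and that the OSDs are 'up'.
ceph status

# From a node with the OpenStack CLI clients: create a 1 GB test volume.
cinder create --display-name test 1
cinder list

# Back on a Ceph node: list the RBD images in the pool backing cinder.
rbd -p volumes ls
</pre>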

This command should return a list of UUIDs, one of which will match the volume ID shown by the cinder commands above. This is your volume.

* For a moment, depending on the speed of your Ceph cluster, "cinder list" will show the volume status as "creating".
* After it's created, the volume should be marked "available".
* Failure states are either "error" or an indefinite "creating" status. If you see either, check /var/log/cinder/cinder-volume.log for errors.
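
A quick way to check on the volume and, if needed, the cinder-volume log (both commands are taken from the text above):

<pre>
# Watch the volume move from 'creating' to 'available'.
cinder list

# If it stays in 'creating' or shows 'error', inspect the cinder-volume log.
tail -n 50 /var/log/cinder/cinder-volume.log
</pre>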

Next, you can attach the volume to a running instance. First, use the "nova list" command to find the UUID of the instance to which you want to attach the volume. Then use the "nova volume-attach [instance id] [volume id] auto" command to attach the volume to the instance.
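
For example (the UUIDs below are placeholders; substitute the values returned by "nova list" and "cinder list"):

<pre>
# Find the instance UUID, then attach the volume.
# 'auto' lets nova pick the device name inside the guest.
nova list
nova volume-attach <instance-uuid> <volume-uuid> auto
</pre>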

You can now log in to the instance, partition the volume, and create a filesystem on it. First we'll need to SSH into the instance. We know its IP address from the "nova list" output above. Use the "quantum router-list" command to find the namespace we'll need to use when calling SSH.
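
The original session output is not preserved here; a hedged sketch of SSHing through the router's network namespace follows. The qrouter-<router-uuid> namespace name, the instance IP, and the login user are placeholders; take them from "quantum router-list", "nova list", and your image.

<pre>
# Run on the node that hosts the quantum router namespace.
# The login user depends on your image (e.g. 'cirros' or 'ubuntu').
sudo ip netns exec qrouter-<router-uuid> ssh <user>@<instance-ip>
</pre>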

Note from the output of "nova volume-attach" above that the volume was attached as device "/dev/vdb". We can treat that as an ordinary hard drive by partitioning it, creating a filesystem on it, and mounting it:
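
The original command listing is not preserved in this copy; the following is a minimal sketch, run inside the instance, assuming a single partition and an ext4 filesystem:

<pre>
# /dev/vdb is the device reported by 'nova volume-attach'.
sudo fdisk /dev/vdb               # interactively create one primary partition (/dev/vdb1)
sudo mkfs.ext4 /dev/vdb1          # create a filesystem on the new partition
sudo mkdir -p /mnt/volume
sudo mount /dev/vdb1 /mnt/volume  # mount it
df -h /mnt/volume                 # verify the new filesystem is mounted
</pre>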