You will also need to uncomment both the path Exec statement and the Puppet node definitions below.

node 'ceph-mon01' inherits os_base {
  # Only mon0 should export the admin keys, so the following
  # if statement is not needed on the additional mon nodes.
  if !empty($::ceph_admin_key) {
    @@ceph::key { 'admin':
      secret       => $::ceph_admin_key,
      keyring_path => '/etc/ceph/keyring',
    }
  }
  # Each mon needs a unique id; you can start at 0 and increment as needed.
  class { 'ceph_mon': id => 0 }
  class { 'ceph::apt::ceph': release => $::ceph_release }
}
# This is the OSD node definition example. You will need to specify
# the public and cluster IPs for each unique node.
node 'ceph-osd01' inherits os_base {
  class { 'ceph::conf':
    fsid            => $::ceph_monitor_fsid,
    auth_type       => $::ceph_auth_type,
    cluster_network => $::ceph_cluster_network,
    public_network  => $::ceph_public_network,
  }
  class { 'ceph::osd':
    public_address  => '192.168.242.3',
    cluster_address => '192.168.242.3',
  }
  # Specify the disk devices to use for OSDs here. Add a new entry for
  # each device on the node that Ceph should consume. The puppet agent
  # will need to run four times for the device to be formatted and for
  # the OSD to be added to the crushmap.
  ceph::osd::device { '/dev/sdd': }
  class { 'ceph::apt::ceph': release => $::ceph_release }
}

Installation process

First bring up the mon0 node and run:

apt-get update
puppet agent -t -v --no-daemonize

Run the puppet agent command at least three times.

Then bring up the OSD node(s) and run:

apt-get update
puppet agent -t -v --no-daemonize

Run the puppet agent command at least four times.
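The repeated agent runs above can be scripted. A minimal sketch, assuming puppet is installed on the node; the '|| true' keeps the loop going even when an individual run exits non-zero, which is common on the early runs before all exported resources have been realized:

```shell
# Run the puppet agent the required number of times.
# Use RUNS=3 on the mon node and RUNS=4 on OSD nodes.
RUNS=4
for i in $(seq 1 "$RUNS"); do
    echo "puppet run $i of $RUNS"
    # '|| true' tolerates non-zero exits from early, not-yet-converged runs
    puppet agent -t -v --no-daemonize || true
done
```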

The Ceph cluster will now be up. You can verify this by logging in to the mon0 node and running the 'ceph status' command. The "monmap" line should show one or more mons (depending on the number you configured). The "osdmap" line should show one or more OSDs (depending on the number you configured), and each OSD should be marked as "up".
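The osdmap check can also be scripted. A hedged sketch: the sample line below is illustrative only and the exact 'ceph status' output format varies by Ceph release; on a live cluster you would capture the real line (e.g. with ceph status piped through grep for "osdmap") instead of hard-coding it:

```shell
# Sample osdmap line standing in for real 'ceph status' output (assumption:
# older releases print a line shaped roughly like this).
status_line='osdmap e10: 1 osds: 1 up, 1 in'

# Pull the total OSD count (the number before "osds:") and the up count
# (the number before "up,") out of the line.
osds=$(echo "$status_line" | awk '{for (i = 2; i <= NF; i++) if ($i == "osds:") print $(i-1)}')
up=$(echo "$status_line" | awk '{for (i = 2; i <= NF; i++) if ($i == "up,") print $(i-1)}')

if [ "$osds" = "$up" ]; then
    echo "all $osds OSDs are up"
else
    echo "only $up of $osds OSDs are up"
fi
```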