
Introduction

Ceph is a distributed object store and file system designed to provide excellent performance, reliability, and scalability. Ceph storage services are usually hosted on external, dedicated storage nodes. Such storage clusters can grow to several hundred nodes, providing petabytes of storage capacity.

Overview
This document covers how to set up Ceph with OpenStack Mitaka on CentOS 7.

Procedure

Let’s assume we are using 3 nodes as Ceph servers; one of them will be the Ceph deployer.

1. For the Ceph deployer, you can execute the following command chain:
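As a minimal sketch, assuming the EPEL and Ceph repositories are already configured on the deploy node, installing ceph-deploy looks like this:

[root@ceph-admin ~]# yum update -y
[root@ceph-admin ~]# yum install -y ceph-deploy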

ENABLE PASSWORD-LESS SSH
Since ceph-deploy will not prompt for a password, you must generate SSH keys on the admin node and distribute the public key to each Ceph node. ceph-deploy will attempt to generate the SSH keys for initial monitors.

1. Generate the SSH keys, but do not use sudo or the root user. Leave the passphrase empty:

[root@ceph-admin ~]# su - cephuser
[cephuser@ceph-admin ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/ceph-admin/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /ceph-admin/.ssh/id_rsa.
Your public key has been saved in /ceph-admin/.ssh/id_rsa.pub.

2. Copy the key to each Ceph Node, replacing {username} with the user name you created with Create a Ceph Deploy User.

Also copy the content of /root/.ssh/id_rsa.pub on ceph-admin into /root/.ssh/authorized_keys on each of the Ceph nodes.
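For example, assuming the Ceph nodes are named ceph1, ceph2, and ceph3 (hypothetical hostnames) and the deploy user is cephuser, ssh-copy-id pushes the key in one step per node:

[cephuser@ceph-admin ~]$ ssh-copy-id cephuser@ceph1
[cephuser@ceph-admin ~]$ ssh-copy-id cephuser@ceph2
[cephuser@ceph-admin ~]$ ssh-copy-id cephuser@ceph3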

3. Modify the ~/.ssh/config file of your ceph-deploy admin node so that ceph-deploy can log in to Ceph nodes as the user you created without requiring you to specify --username {username} each time you execute ceph-deploy. This has the added benefit of streamlining ssh and scp usage. Replace {username} with the user name you created:
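A sketch of such a ~/.ssh/config, again assuming the hostnames ceph1 through ceph3 and the user cephuser:

Host ceph1
   Hostname ceph1
   User cephuser
Host ceph2
   Hostname ceph2
   User cephuser
Host ceph3
   Hostname ceph3
   User cephuser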

TTY
On CentOS and RHEL, you may receive an error while trying to execute ceph-deploy commands. If requiretty is set by default on your Ceph nodes, disable it by executing sudo visudo and locating the Defaults requiretty setting. Change it to Defaults:cephuser !requiretty or comment it out to ensure that ceph-deploy can connect using the user you created with Create a Ceph Deploy User.
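For instance, assuming the deploy user is cephuser as created earlier, the relevant line in sudo visudo would change like this:

# Before:
Defaults requiretty
# After (scoped to the deploy user):
Defaults:cephuser !requiretty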

SELINUX
On CentOS and RHEL, SELinux is set to Enforcing by default. To streamline your installation, we recommend setting SELinux to Permissive or disabling it entirely and ensuring that your installation and cluster are working properly before hardening your configuration. To set SELinux to Permissive, execute the following:

# setenforce 0
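Note that setenforce 0 lasts only until the next reboot. To keep SELinux permissive across reboots, also set SELINUX=permissive in /etc/selinux/config, for example:

# sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config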

PRIORITIES/PREFERENCES
Ensure that your package manager has priority/preferences packages installed and enabled. On CentOS, you may need to install EPEL. On RHEL, you may need to enable optional repositories.
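On CentOS 7 this typically amounts to the following (package names as commonly published; verify for your release):

# yum install -y epel-release
# yum install -y yum-plugin-priorities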

We will set sdb as our journal disk, and the other disks will be used for data.

2. We need to create GPT partition tables, repeating the commands for each data disk:

[root@ceph1 ceph-dash]# parted /dev/sdb
GNU Parted 3.1
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel gpt
Warning: The existing disk label on /dev/sdb will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? yes
(parted) mkpart primary xfs 0% 100%
(parted) quit
Information: You may need to update /etc/fstab.
[root@ceph1 ceph-dash]# parted /dev/sdc
GNU Parted 3.1
Using /dev/sdc
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel gpt
Warning: The existing disk label on /dev/sdc will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? yes
(parted) mkpart primary xfs 0% 100%
(parted) quit
Information: You may need to update /etc/fstab.

3. For the journal, we are going to use a raw/unformatted volume, so we will not format it with XFS and we will not mark it as XFS with parted. However, a journal partition needs to be dedicated to each OSD, so we need to create three different partitions. In a production environment, you can decide either to dedicate a disk (probably an SSD) to each journal or, as done here, to share the same SSD across different journal mount points. In either case, the parted commands for the journal disk will look like the sketch below:
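A sketch of those commands, assuming three OSDs sharing one journal disk and /dev/sdd as a hypothetical device name (adjust it to your layout); the partitions stay raw, with no filesystem type:

[root@ceph1 ~]# parted /dev/sdd
(parted) mklabel gpt
(parted) mkpart primary 0% 33%
(parted) mkpart primary 33% 66%
(parted) mkpart primary 66% 100%
(parted) quit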

Creating a pool requires a pool name, PG and PGP numbers, and a pool type, which is either replicated or erasure; the default is replicated.
1. Let’s create the pools, i.e. volumes, images, backups, vms, rbd. You can use the PG calculator to determine the PG counts.
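A sketch of the pool creation, assuming a PG count of 128 per pool (replace 128 with the values from the PG calculator; the rbd pool usually already exists by default):

# ceph osd pool create volumes 128
# ceph osd pool create images 128
# ceph osd pool create backups 128
# ceph osd pool create vms 128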

The nodes running nova-compute also need to store the secret key of the client.cinder user in libvirt. The libvirt process needs it to access the cluster while attaching a block device from Cinder. Create a temporary copy of the secret key on the nodes running nova-compute:
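A sketch of that step, assuming a compute node named compute1 and a freshly generated UUID (both hypothetical); run the virsh commands on the compute node:

# ceph auth get-key client.cinder | ssh compute1 tee client.cinder.key
# uuidgen
457eb676-33da-42ec-9a8c-9293d545c337
# cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
# virsh secret-define --file secret.xml
# virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml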

CONFIGURE OPENSTACK TO USE CEPH

CONFIGURING GLANCE
Glance can use multiple back ends to store images. To use Ceph block devices by default, configure Glance as follows.
1. On the controller node, edit /etc/glance/glance-api.conf and add the following under the [glance_store] section:
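A minimal sketch of that section, assuming a glance Ceph user and the images pool created earlier:

[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8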

2. If you want to enable copy-on-write cloning of images, also add under the [DEFAULT] section:

show_image_direct_url = True

Note that this exposes the back end location via Glance’s API, so the endpoint with this option enabled should not be publicly accessible.

CONFIGURING CINDER
OpenStack requires a driver to interact with Ceph block devices. You must also specify the pool name for the block device. On your Controller node, edit /etc/cinder/cinder.conf by adding:
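A sketch of those additions, assuming the volumes pool, a cinder Ceph user, and the secret UUID generated above:

[DEFAULT]
enabled_backends = ceph
glance_api_version = 2

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337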

CONFIGURING NOVA TO ATTACH CEPH RBD BLOCK DEVICE
In order to attach Cinder devices (either normal block or by issuing a boot from volume), you must tell Nova (and libvirt) which user and UUID to refer to when attaching the device. libvirt will refer to this user when connecting and authenticating with the Ceph cluster.
On the compute node, add the following to /etc/nova/nova.conf:
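A sketch of the [libvirt] section, reusing the same Ceph user and secret UUID as in the Cinder configuration; to run VM disks out of the vms pool you would additionally set images_type = rbd and the related images_rbd_* options:

[libvirt]
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337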

RESTART OPENSTACK

To activate the Ceph block device driver and load the block device pool name into the configuration, you must restart OpenStack. On CentOS 7, execute these commands on the appropriate nodes:
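For example (service names as packaged by RDO; adjust them to your deployment):

# systemctl restart openstack-glance-api
# systemctl restart openstack-cinder-volume
# systemctl restart openstack-nova-compute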