Helm finds the Kubernetes cluster by reading the local Kubernetes config file; make sure this file is downloaded and accessible to the Helm client.
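If in doubt, you can check which cluster the current config points at with standard kubectl commands (shown here as a quick sanity check):

$ kubectl config current-context
$ kubectl cluster-info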

A Tiller server must be configured and running for your Kubernetes cluster, and the local Helm client must be connected to it. It may be helpful to look at the Helm documentation for init. To run Tiller locally and connect Helm to it, run:

$ helm init
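In Helm v2, helm version reports both the client version and the Tiller (server) version, so it can be used to confirm that the client is connected:

$ helm version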

The ceph-helm project uses a local Helm repo by default to store charts. To start a local Helm repo server, run:
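In Helm v2, helm serve starts a chart repository server on localhost:8879; the usual pairing is to start it in the background and register it under the name local (the URL below is Helm's default):

$ helm serve &
$ helm repo add local http://localhost:8879/charts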

The ceph-osd-device-<name> label is created based on the name value of each entry under osd_devices in our ceph-overrides.yaml.
From the example above we will have the following two labels: ceph-osd-device-dev-sdb and ceph-osd-device-dev-sdc.
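For reference, a sketch of the osd_devices section this refers to (the device paths and zone are illustrative), together with the node labels it implies (replace <nodename> with your node):

osd_devices:
  - name: dev-sdb
    device: /dev/sdb
    zone: zone1
  - name: dev-sdc
    device: /dev/sdc
    zone: zone1

$ kubectl label node <nodename> ceph-osd=enabled ceph-osd-device-dev-sdb=enabled ceph-osd-device-dev-sdc=enabled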

The output from helm install shows the different types of resources that will be deployed.

A StorageClass named ceph-rbd of type ceph.com/rbd will be created, along with ceph-rbd-provisioner Pods. These
allow an RBD to be automatically provisioned upon creation of a PVC. RBDs are also formatted when mapped for the first
time. All RBDs use the ext4 filesystem; ceph.com/rbd does not support the fsType option.
By default, RBDs use image format 2 and layering. You can overwrite the following storageclass defaults in your values file:
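A sketch of such an override, assuming the chart exposes these defaults under a storageclass key as below (check the chart's values.yaml for the authoritative key names):

storageclass:
  name: ceph-rbd
  pool: rbd
  user_id: k8s
  user_secret_name: pvc-ceph-client-key
  image_format: "2"
  image_features: layering

With the StorageClass in place, creating a PVC that references it triggers provisioning; a minimal example (the claim name ceph-pvc and size are hypothetical):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-pvc
spec:
  storageClassName: ceph-rbd
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi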

Kubernetes uses the RBD kernel module to map RBDs to hosts. Luminous requires
CRUSH_TUNABLES 5 (Jewel). The minimal kernel version for these tunables is 4.5.
If your kernel does not support these tunables, run ceph osd crush tunables hammer.
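A quick check of the running kernel version (if it reports something older than 4.5, fall back to hammer tunables as above):

$ uname -r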

Important

Since RBDs are mapped on the host system, hosts need to be able to resolve
the ceph-mon.ceph.svc.cluster.local name managed by the kube-dns service. To get the
IP address of the kube-dns service, run kubectl -n kube-system get svc/kube-dns.
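One way to make that name resolvable on the host (a sketch, assuming a kube-dns ClusterIP of 10.96.0.10 and the default cluster.local domain) is to add kube-dns as a resolver in /etc/resolv.conf:

# /etc/resolv.conf on each host (IP and search domains are illustrative)
nameserver 10.96.0.10
search ceph.svc.cluster.local svc.cluster.local cluster.local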