Introduction to IBM Cloud Private

IBM continues to reinvent itself and is bringing cloud-native development technologies to large enterprises. It is making a serious investment in containers, microservices, and especially Kubernetes, which is an excellent sign for the Kubernetes community.

Over the last ten years, we have grown used to seeing infrastructure giants like Cisco, Dell-EMC, HPE, and IBM watch from a distance as new technologies emerge, then acquire them once they become profitable. Most of the time they are buying revenue to satisfy shareholders instead of investing in their own engineering resources. In that sense, IBM Cloud Private (ICP) is a promising move from IBM.

ICP is based on Kubernetes. I have been playing with ICP for over two months now, since version 1.2, and the user experience seems to be steadily improving. The latest version, ICP 2.1, was announced last week, and it includes a few important changes.

Since I’m not planning to run anything heavy, I’ll be using 3 nodes and will install the Master, Proxy, and Worker roles on all 3 nodes.

Note: My lab hardware consists of an Aparna Systems Orca µCloud 4015 chassis and 3 Oserv8 µServers, each with an 8-core 2.1 GHz CPU, 64 GB DDR RAM, 2x NVMe drives, and 2×10 Gbps integrated networking. This enclosure takes only 4U of rack space. I like it because once it’s installed, no additional cabling is required to add up to 15 servers (or 60 servers in the larger 4060 chassis). The µServers are packaged in a hot-swappable cartridge form factor that is about the size of a 3.5-inch hard disk drive.

Community Edition (CE) is intended for non-production use and includes all primary services like Kubernetes, logging, monitoring, IAM, and access to the catalog. It’s limited to one master node. You can try all the core functionality except for setting up a highly available cluster.

If you plan to open your private cloud securely to provide cloud services, you need to be on the next tier, which is available through your IBM Sales Representative. The Cloud Native package includes everything in CE and additionally Cloud Automation Manager, Microservice Builder, and WebSphere Liberty. Cloud Foundry is also available for this option.

How to install IBM Private Cloud 2.1

We need a few things before we get up and running with ICP 2.1. First, I’ll configure my Ubuntu servers and share SSH keys, so the boot node can access all my other nodes. Then I’ll install Docker and after that ICP. From there, ICP will take care of my Kubernetes cluster installation.

Install the base O/S – Ubuntu

Download your preferred version of Ubuntu. I use Ubuntu Server 16.04.3 LTS.

Install Ubuntu on all servers with default options. I used user/nopassword as username/password for simplicity.

Log in to your Ubuntu host via terminal.

Edit the /etc/network/interfaces file to assign a static IP, and set the hostname. For my setup, I have used:

Hostname  IP
ubuntu36  192.168.20.36
ubuntu37  192.168.20.37
ubuntu38  192.168.20.38
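As an illustration, the interfaces stanza for ubuntu36 might look like the following. Note that the interface name, netmask, gateway, and DNS server here are assumptions for this example; adjust them for your own network:

```
# /etc/network/interfaces -- example for ubuntu36
# (interface name, gateway, and DNS values are assumptions)
auto ens160
iface ens160 inet static
    address 192.168.20.36
    netmask 255.255.255.0
    gateway 192.168.20.1
    dns-nameservers 192.168.20.1
```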

Edit the /etc/hosts file, add your nodes to the list, and make sure you can ping them by the hostname:

cat /etc/hosts

ping ubuntu37
ping ubuntu38
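Assuming the three hosts above, the relevant /etc/hosts entries would look like this on each node:

```
127.0.0.1       localhost
192.168.20.36   ubuntu36
192.168.20.37   ubuntu37
192.168.20.38   ubuntu38
```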

On your Ubuntu host, install the SSH server:

sudo apt-get install openssh-server

Now, you should be able to access your servers using SSH. Check the status by running:

sudo service ssh status

Disable the firewall on your Ubuntu host by running:

sudo ufw disable

Install curl if it’s not already installed:

sudo apt install curl

Repeat the steps above (from the network configuration through installing curl) on all servers.

Now, we need to share SSH keys among all nodes:

Log in to your first node, which will be the boot node (ubuntu36), as root.

Generate an SSH key:

ssh-keygen -b 4096 -t rsa -f ~/.ssh/master.id_rsa -N ""

Add the SSH key to the list of authorized keys:

cat ~/.ssh/master.id_rsa.pub | sudo tee -a ~/.ssh/authorized_keys

From the boot node, add the SSH public key to other nodes in the cluster:

ssh-copy-id -i ~/.ssh/master.id_rsa.pub root@ubuntu37

Repeat for all nodes.

Log in to the other nodes and restart the SSH service:

sudo systemctl restart sshd

Now the boot node can connect through SSH to all other nodes without the password.
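A quick way to verify passwordless access from the boot node, using the hostnames from this setup:

```shell
# Each command should print the remote hostname
# without prompting for a password
ssh -i ~/.ssh/master.id_rsa root@ubuntu37 hostname
ssh -i ~/.ssh/master.id_rsa root@ubuntu38 hostname
```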

Install Docker

To get the latest version of Docker, install it from the official Docker repository.
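The next paragraph refers to a "previous command" that creates the cluster directory, but the command itself is not shown. The following is a sketch of the official-repository Docker install on Ubuntu 16.04, followed by what the ICP 2.1.0 CE extraction step looks like in IBM's documented procedure; treat the exact invocation as an assumption, with the image tag and paths taken from the surrounding text:

```shell
# Install Docker CE from the official Docker repository (Ubuntu 16.04 "xenial")
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install -y docker-ce

# Pull the ICP 2.1.0 CE installer image and extract the cluster directory
sudo docker pull ibmcom/icp-inception:2.1.0
sudo mkdir -p /opt/ibm-cloud-private-2.1.0
cd /opt/ibm-cloud-private-2.1.0
sudo docker run -e LICENSE=accept -v "$(pwd)":/data ibmcom/icp-inception:2.1.0 cp -r cluster /data
```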

The previous command creates the cluster directory under /opt/ibm-cloud-private-2.1.0 with the following files: config.yaml, hosts, misc/storage_class, and ssh_key. These files need to be modified before deploying ICP.

Replace the ssh_key file with the private SSH key you have created earlier.
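For example, using the key generated earlier and the cluster directory path above:

```shell
# Copy the boot node's private key over the placeholder ssh_key file
sudo cp ~/.ssh/master.id_rsa /opt/ibm-cloud-private-2.1.0/cluster/ssh_key
```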

Add the IP addresses of all our nodes to the hosts file in the /opt/ibm-cloud-private-2.1.0/cluster directory. If you plan to run production workloads, I recommend separating the master and worker Kubernetes nodes. Since I want to try high availability with three nodes, my hosts file looks like this:
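A sketch of the hosts file, assuming all three nodes serve the master, proxy, and worker roles as described above (the INI-style group names follow the ICP installer's hosts file format):

```
[master]
192.168.20.36
192.168.20.37
192.168.20.38

[worker]
192.168.20.36
192.168.20.37
192.168.20.38

[proxy]
192.168.20.36
192.168.20.37
192.168.20.38
```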

To be able to configure a Kubernetes failover cluster, set a VIP for the master nodes. The VIPs for the master and proxy nodes are defined in the config.yaml file. Edit config.yaml and add the parameter values as follows:
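A sketch of the relevant config.yaml entries, using the ICP 2.1 VIP parameter names; the interface name and the two VIP addresses here are assumptions for this example:

```
# config.yaml -- VIP settings (interface and addresses are assumptions)
vip_iface: ens160
cluster_vip: 192.168.20.40
proxy_vip_iface: ens160
proxy_vip: 192.168.20.41
```

With the configuration in place, the deployment is started from the cluster directory. In IBM's documented 2.1.0 procedure, the install invocation looks roughly like this:

```
cd /opt/ibm-cloud-private-2.1.0/cluster
sudo docker run -e LICENSE=accept --net=host -t -v "$(pwd)":/installer/cluster ibmcom/icp-inception:2.1.0 install
```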

The last step may take 5 to 10 minutes, and if your deployment is successful, you should be able to access your ICP login screen by visiting https://cluster_vip:8443 (the default username/password is admin/admin).

IBM Cloud Private Login Screen

IBM Cloud Private Dashboard


IBM Cloud Private Catalog

How to Uninstall IBM Cloud Private 2.1 with HA

If you need to clean up your setup, reinstall, or remove ICP for any reason, you have two options. You can either run the uninstall command or forcefully kill all Docker containers at once on all nodes.
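For reference, the uninstall goes through the same inception image. In the documented 2.1.0 procedure it looks like this, run from the cluster directory on the boot node (treat the exact invocation as an assumption):

```
cd /opt/ibm-cloud-private-2.1.0/cluster
sudo docker run -e LICENSE=accept --net=host -t -v "$(pwd)":/installer/cluster ibmcom/icp-inception:2.1.0 uninstall
```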