Overview

OpenShift Enterprise v3 by Red Hat is about building and running next-generation applications. If we look around, we have seen startups in virtually every market segment turning the competitive landscape upside down. Startup companies like Netflix, Spotify and Uber have pushed the incumbents to the brink of extinction and overtaken entire industries in a very short period of time. How have they been able to rival incumbents 100 times their size? The answer is simple: by bringing innovation to market faster, much faster. Complacency and the weight of previous successes make change very challenging for incumbents; it is much easier for a startup to innovate than for an established company carrying a degree of legacy. OpenShift v3 levels the playing field and provides organizations the tooling to rapidly reduce their time-to-market.

In this article we will look at how to set up an OpenShift lab environment and get started on the journey toward faster innovation cycles.

Pre-Configuration Steps

OpenShift requires a master and one or more nodes. In this lab we will configure one master and one node. Install RHEL or CentOS 7.1 on two systems and configure the hostname and network accordingly. On both systems, run the following steps:
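The exact preparation commands were shown in the original post; as a minimal sketch, typical preparation on RHEL/CentOS 7.1 might look like the following (the hostnames and IP addresses here are assumptions for this lab):

```shell
# Assumed lab hostnames -- adjust to your environment
hostnamectl set-hostname ose3-master.lab.com    # run on the master
# hostnamectl set-hostname ose3-node1.lab.com   # run on the node

# Both hosts must resolve each other; DNS is preferred, but for a quick
# lab /etc/hosts works as a shortcut (assumed addresses shown)
cat >> /etc/hosts <<'EOF'
192.168.122.100 ose3-master.lab.com
192.168.122.101 ose3-node1.lab.com
EOF

# Install and enable Docker on both systems
yum -y install docker
systemctl enable docker
systemctl start docker
```

Name resolution between master and node is the step most commonly skipped, and it causes the kind of "Unknown Host" errors discussed in the comments below.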

Configure OpenShift Enterprise v3

Once the installer completes, an OpenShift master and node will exist. Now we can begin with the main configuration. By default OpenShift uses HTTP authentication. This is of course only recommended for lab or test environments; for production environments you will want to connect to LDAP or an identity management system. On the master we can edit /etc/openshift/master/master-config.yaml and configure authentication.
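For a lab, the htpasswd identity provider is the simplest option. A sketch of the relevant section of master-config.yaml, assuming the users file created in the next step lives at /root/users.htpasswd (the provider name is arbitrary):

```yaml
oauthConfig:
  identityProviders:
  - name: htpasswd_auth        # arbitrary provider name
    challenge: true
    login: true
    provider:
      apiVersion: v1
      kind: HTPasswdPasswordIdentityProvider
      file: /root/users.htpasswd
```

After editing the file, restart the openshift-master service so the change takes effect.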

Next we need to create a standard user. OpenShift Enterprise creates the system:admin account for default administration.

#htpasswd -c /root/users.htpasswd admin

Optionally, we can give the newly created admin user OpenShift cluster-admin permissions.

#oadm policy add-cluster-role-to-user cluster-admin admin

Configure Docker Registry

OpenShift uses the Docker registry for storing Docker container images. Any time you build or change an application configuration, a new container image is created and pushed to the registry. Each node can access this registry. You can, and should, use persistent storage for the registry; in this example we will use a host mountpoint on the node. The Docker registry runs as a container in the default namespace, which only OpenShift admins can access.

On the node, create a directory for the registry:

#mkdir /images

On the master, log in using the system:admin account, switch to the default project, and create a Docker registry.
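A sketch of those commands follows; the credentials path matches the /etc/openshift/master layout used earlier in this article, but exact flag names vary between OpenShift versions, so check `oadm registry -h` on your installation:

```shell
# Log in as the cluster admin and switch to the default project
oc login -u system:admin
oc project default

# Create the registry, persisting images on the node's /images mountpoint
oadm registry \
  --credentials=/etc/openshift/master/openshift-registry.kubeconfig \
  --mount-host=/images
```

With --mount-host, the registry pod is tied to the node that owns /images, which is acceptable for a single-node lab but not for production.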

Create Router

OpenShift v3 uses Open vSwitch for its software-defined network. A router is needed for isolation, proxying and load-balancing capabilities. The router, like the Docker registry, runs in a container. Using the command below we can create a router in the default namespace.
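As with the registry, a sketch of the router creation command (flag names may differ slightly by version; see `oadm router -h`):

```shell
# Create the default router in the default namespace
oadm router \
  --credentials=/etc/openshift/master/openshift-router.kubeconfig
```

The router binds to ports 80 and 443 on the node it lands on; note which node that is, because the DNS wildcard in the next section must point at it.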

Configure DNS

OpenShift v3 requires a working DNS environment in order to handle URL resolution. The requirement is to create a DNS wildcard that points to the router; this should be the public IP of the node where the router container is running. In our example we have created a local DNS server that acts as a forwarder for the 192.168.122.0 network. In addition, we have implemented a DNS wildcard that points to our node's public, physical IP, where the router container is running.
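In BIND zone-file terms, a sketch might look like this (the domain, hostnames, and the choice of *.apps as the application subdomain are assumptions; what matters is that the wildcard resolves to the router's node):

```
; A records for the lab hosts (assumed addresses)
ose3-master.lab.com.   IN A 192.168.122.100
ose3-node1.lab.com.    IN A 192.168.122.101

; Wildcard for application routes, pointing at the node running the router
*.apps.lab.com.        IN A 192.168.122.101
```

You can verify resolution with `nslookup anything.apps.lab.com` from both the master and the node before moving on.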

Install and Configure GitLab

In most cases you will probably want to configure a local Git server. This is of course optional: in the example we are using the public GitHub service, but you could easily do the same with an internal GitLab server. You can set up the GitLab server on the OpenShift v3 master. For demos it is recommended to use GitLab, since it is much easier to install and configure.

Once the above steps are complete you can access GitLab by connecting to the host through a browser.

Username: root
Password: 5iveL!fe

Using OpenShift v3

At this point we should have a functioning OpenShift v3 environment and can now build and deploy applications. Here we will see how to deploy a MySQL database, scale it, and build a Ruby hello world application from GitHub.

Deploying MySQL database

Though using the OpenShift CLI or API is certainly possible, at this point let us use the UI. To log in, open a browser and point it at the IP of the OpenShift v3 master, for example: https://ose3-master.lab.com:8443/console/. Create a new project for hosting containers; in OpenShift v3 each project maps to a namespace in Kubernetes. Under the demo project, deploy a MySQL database by selecting “create” or “getting started”. Make sure you add a label; this is explained later.

Once an application is created we see its status in the Overview. Each time an application is deployed there is a deployer container as well as the running container; once the deployment is complete, the deployer container is deleted and only the running container remains. The “oc get pods” command shows us all pods within the namespace. A pod is a Kubernetes construct: one or more Docker containers that share a deployment template. Pods run on nodes, and grouping containers within pods is a way to ensure certain containers are co-located.

For every application deployed, OpenShift will also create a replication controller and service. These are also Kubernetes constructs. A replication controller is used for auto-scaling and determines how many instances of a given pod should exist.

When creating applications it is very important to always define labels. Labels are applied to pods, replication controllers and services. When deleting an application, it is much easier to reference the label, for example, than to delete the individual components manually.

#oc delete all --selector="demo=mysql"

OpenShift v3 also supports auto-scaling. This capability leverages Kubernetes replication controllers. First we need to identify the replication controller using the “oc get rc” command. We can automatically scale our application by changing the number of replicas. In this example we will scale from one to three MySQL databases.

#oc scale --replicas=3 rc mysql-1

Upon scaling MySQL, we can quickly see the results in the UI.

Building Ruby Hello World Application

So far we have seen how to provision application components such as databases or middleware in seconds. We have also observed how effortlessly we can scale these components. In the following example, we will build our own application code in OpenShift v3. OpenShift will provide the Ruby runtime environment and automatically build, as well as launch, a container with our hello world code from GitHub. OpenShift utilizes a technology called “Source to Image” (S2I) that efficiently builds the container. Instead of rebuilding the entire container each time, S2I is able to re-use previous builds and change only the application layer within the container. Docker containers are immutable, so any change always requires creating a new container; without OpenShift and S2I this is a wasteful, time-consuming process.

OpenShift asks us for the application build runtime. In this case we will select Ruby 2.0 since this is in fact a Ruby application.

In the final step we can provide any custom details about the build configuration and of course add a label.

OpenShift will create a container with Ruby 2.0 and our code from GitHub. It will also complete any required build steps. The end result is a complete application build: a running application inside a Docker container. Our application can now be automatically tested using Jenkins or other such continuous delivery tools. If tests pass, it can be automatically rolled out to production. Think about how much faster you can make code available to your customers with OpenShift. By selecting the URL for the Ruby hello world application we can also access the application directly.
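The UI flow above also has a CLI equivalent. A sketch using the well-known openshift/ruby-hello-world sample repository (the application name, label, and image-stream shorthand are assumptions; older releases may require the full builder image name instead of `ruby~`):

```shell
# S2I build: combine the Ruby builder image with source from GitHub
oc new-app ruby~https://github.com/openshift/ruby-hello-world \
  --name=ruby-hello -l demo=ruby

# Expose the service so the router serves it under the DNS wildcard
oc expose service ruby-hello
```

Because we added the demo=ruby label, the whole application can later be removed with a single `oc delete all --selector="demo=ruby"`, just as shown for MySQL earlier.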

Troubleshooting

In this section we will go through some basic troubleshooting steps for OpenShift v3. In order to get logs we first need the pod name; using the “oc get pods” command, we can get a list of pods.
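With a pod name in hand, the pod's logs can be retrieved directly; the pod name below is a placeholder for whatever `oc get pods` returns in your environment:

```shell
# List pods in the current project, then fetch logs for one of them
oc get pods
oc logs mysql-1-abcde   # placeholder pod name from 'oc get pods'
```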

Beyond looking at a pod's logs, we can also access journald for docker, openshift-master and openshift-node. Using the journalctl commands below we can follow the current log messages for the major OpenShift components.

#journalctl -f -l -u docker

#journalctl -f -l -u openshift-master

#journalctl -f -l -u openshift-node

Issue 1: Pod shows as pending and scheduled but never gets deployed on a node.

This problem can occur if the node's Docker image cache gets out of sync. To resolve this issue, perform the following steps on the node:

#systemctl stop docker

#rm -rf /var/lib/docker/*

#reboot

Summary

In this article we have seen how to deploy an OpenShift Enterprise v3 lab environment, and how to use OpenShift to deploy and build applications. This is just the tip of the iceberg, of course. In a world where speed and agility become increasingly important, it is clear that container infrastructure will become the future platform for running applications. You simply can’t argue with being able to start 60 containers in the time it takes to start a single VM. Google deploys over two billion containers a week, and everything you do, from Gmail to search, runs in a container. Containers are enterprise-ready, and it is time to start understanding how to take advantage of this technology. OpenShift Enterprise v3 provides a platform for building and running applications on container infrastructure, enabling organizations to innovate faster and bring that innovation to market sooner. Don’t let your organization be overtaken by the next startup! If you found this article informative or helpful, please share your thoughts.

Well, autoscaling isn't exactly a precise term; it is used rather broadly, from what I have experienced. Maybe an example of what you consider autoscaling would be appropriate? There are many levels where autoscaling applies imho. From the application perspective OpenShift provides autoscaling, in that compute resources and incoming requests are dynamically scaled (more containers running on more container hosts). What is missing in my example is an automated trigger as opposed to a manual one; maybe that is what you mean? This can certainly be done in OpenShift, though I didn't show it. Here is an example using VMs in OpenStack, autoscaling a simple HTTP server based on a trigger… yes, CPU usage is not the best trigger, it is only an example.

Note: there is no support for CentOS or OpenShift Origin, so if you are comfortable working with the community upstream then this is fine. If you expect support and help months or years into your deployment, you really should consider the subscription model and OpenShift Enterprise.

Hi,
I followed the installation steps as mentioned, and when I try to execute the step “oadm policy add-cluster-role-to-user cluster-admin admin” it gives an error saying “Error: couldn’t read version from server: Get https://master.hostname.local:8443/api: Unknown Host. See ‘oadm policy add-cluster-role-to-user -h’ for help.”

Check your DNS: nslookup master.hostname.local. You need working DNS, and you need an A record for the OpenShift master and any nodes, in addition to the wildcard for your application domain in OpenShift. This is also documented in the guide.

; 192.168.100.0/22 – A records
*.devspace.local. IN A 192.168.100.110
master.devspace.local. IN A 192.168.100.110
node01.devspace.local. IN A 192.168.100.111
node02.devspace.local. IN A 192.168.100.112