
It is still a work in progress and not ready for production, but upstream OpenShift Origin already has experimental support for running OpenShift Origin using system containers. The "latest" Docker images for origin, node and openvswitch, the three components we need, are automatically pushed to docker.io, so we can use these for our test. The rhel7/etcd system container image, instead, is pulled from the Red Hat registry.

The Vagrantfile will provision three virtual machines based on the `fedora/25-atomic-host` image. One machine will be used as the master, the other two will be used as nodes. I am using static IPs for them so that it is easier to refer to them from the Ansible playbook, without requiring DNS configuration.
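Such a Vagrantfile could look like the following (a sketch: the box name and the three static IPs come from this post, while the machine names, memory size and other libvirt settings are assumptions):

```ruby
# Vagrantfile sketch: one master and two nodes on static IPs.
Vagrant.configure("2") do |config|
  config.vm.box = "fedora/25-atomic-host"

  { "master" => "10.0.0.10",
    "node1"  => "10.0.0.11",
    "node2"  => "10.0.0.12" }.each do |name, ip|
    config.vm.define name do |machine|
      machine.vm.network "private_network", ip: ip
      machine.vm.provider "libvirt" do |libvirt|
        libvirt.memory = 2048  # assumed; adjust to your host's capacity
      end
    end
  end
end
```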

The machines can finally be provisioned with vagrant as:

# vagrant up --provider libvirt

At this point you should be able to log in to the VMs as root using your ssh key:

for host in 10.0.0.{10,11,12};
do
    ssh -q -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no root@$host "echo yes I could login on $host"
done
yes I could login on 10.0.0.10
yes I could login on 10.0.0.11
yes I could login on 10.0.0.12

Our VMs are ready. Let's install OpenShift!

This is the inventory file used for openshift-ansible; store it in a file named origin.inventory:

The new configuration required to run system containers is quite visible in the inventory file. `use_system_containers=True` is required to tell the installer to use system containers, `system_images_registry` specifies the registry from where the system containers must be pulled.
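For reference, the relevant portion of such an inventory might look like this (a sketch: only `use_system_containers` and `system_images_registry` come from the discussion above; the hostnames, the registry value, and the remaining variables are assumptions):

```ini
[OSEv3:children]
masters
nodes

[OSEv3:vars]
ansible_user=root
# Atomic Host ships Python 3 only (assumed setup for this cluster)
ansible_python_interpreter=/usr/bin/python3
deployment_type=origin

# Run the OpenShift components as system containers
use_system_containers=True
# Registry from where the system containers must be pulled
system_images_registry=docker.io

[masters]
10.0.0.10

[nodes]
10.0.0.10 openshift_schedulable=false
10.0.0.11
10.0.0.12
```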

And we can finally run the installer, using python3, from the directory where we forked openshift-ansible.
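The invocation is the usual openshift-ansible one; as a sketch, assuming the byo playbook entry point of that era (check your checkout for the current path):

```shell
# Run from the openshift-ansible checkout; the playbook path is an
# assumption and may differ between openshift-ansible versions.
ansible-playbook -i origin.inventory playbooks/byo/config.yml
```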

Once the installation completes, we can log in to the cluster and create a project:

$ oc login --insecure-skip-tls-verify=false 10.0.0.10:8443 -u user -p OriginUser
Login successful.
You don't have any projects. You can try to create a new project, by running
oc new-project <projectname>
$ oc new-project test
Now using project "test" on server "https://10.0.0.10:8443".
You can add applications to this project with the 'new-app' command. For example, try:
oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git
to build a new example application in Ruby.
$ oc new-app https://github.com/giuseppe/hello-openshift-plus.git
--> Found Docker image 1f8ec11 (6 days old) from Docker Hub for "fedora"
* An image stream will be created as "fedora:latest" that will track the source image
* A Docker build using source code from https://github.com/giuseppe/hello-openshift-plus.git will be created
* The resulting image will be pushed to image stream "hello-openshift-plus:latest"
* Every time "fedora:latest" changes a new build will be triggered
* This image will be deployed in deployment config "hello-openshift-plus"
* Ports 8080, 8888 will be load balanced by service "hello-openshift-plus"
* Other containers can access this service through the hostname "hello-openshift-plus"
* WARNING: Image "fedora" runs as the 'root' user which may not be permitted by your cluster administrator
--> Creating resources ...
imagestream "fedora" created
imagestream "hello-openshift-plus" created
buildconfig "hello-openshift-plus" created
deploymentconfig "hello-openshift-plus" created
service "hello-openshift-plus" created
--> Success
Build scheduled, use 'oc logs -f bc/hello-openshift-plus' to track its progress.
Run 'oc status' to view your app.

After some time, we can see our service running:

$ oc get service
NAME                   CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
hello-openshift-plus   172.30.204.140   <none>        8080/TCP,8888/TCP   46m
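From inside the cluster, the service should answer on its cluster IP (the IP and port come from the output above; run this on the master, as the cluster IP is not routable from outside):

```shell
# The cluster IP is only reachable from within the cluster network.
curl http://172.30.204.140:8080
```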

Are we really running on system containers? Let’s check it out on master and one node:

(The upstream atomic command has a breaking change, so with future versions of atomic we will need `-f backend=ostree` to filter system containers, as clearly ostree is not a runtime.)
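The check itself can be done by listing the installed containers on each host; only the `-f backend=ostree` filter comes from the note above, the rest is a sketch:

```shell
# On the master or a node: list running containers; system containers
# are the ones backed by ostree rather than a container runtime.
atomic containers list

# With newer atomic releases, filter system containers explicitly:
atomic containers list -f backend=ostree
```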