
Tuesday, June 26, 2018

Habitat and Kubernetes – How We Made that Demo

At ChefConf, we presented our Product Vision & Announcements for 2018. During the opening keynotes, attendees were treated to a live demonstration of Chef's latest developments for Chef Automate, Habitat, InSpec, and the newly launched Chef Workstation. This blog post is the first of three deep-dive peeks under the hood of each of the on-stage demos we presented. We invite you to follow along and get hands-on with our latest and greatest features, and to use them as inspiration to take your automation to the next level! Today Nell Shamrell-Harrington, Senior Software Development Engineer on the Habitat team, takes us on a guided tour of using the Habitat Kubernetes Operator to deploy apps into Google Kubernetes Engine (GKE). Enjoy!

Habitat and Kubernetes are like peanut butter and jelly. They are both wondrous on their own, but together they become something magical and wholesome. Going into ChefConf, the Habitat team knew how incredible the Habitat-Kubernetes integration is, and we wanted to make sure that every single attendee left the opening keynote knowing it too. In case you missed it, watch a recording of the demo presentation here.

We decided to showcase the Habitat National Parks Demo (initially created by the Chef Customer Facing Teams). This app highlights packaging Java applications with Habitat (many of our Enterprise customers heavily use Java) and running two Habitat services – one for the Java app, and one for the MongoDB database used by the app.

GKE requires Role-Based Access Control (RBAC) authorization. This grants the Habitat operator the permissions it needs to manage its required resources in the GKE cluster. In order to deploy the operator to GKE, we used the config files located at examples/rbac.
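If you are following along, the RBAC roles are applied with kubectl before the operator itself. This is a sketch; the rbac.yml file name is an assumption based on the examples/rbac directory layout, so check the operator repository for the exact path:

```shell
# Create the service account, cluster role, and binding the
# Habitat operator needs (file name assumed)
kubectl apply -f examples/rbac/rbac.yml
```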

Then we could deploy the operator using the file at examples/rbac/habitat-operator.yml. This file pulls down the Habitat operator container image from Docker Hub and deploys it to our GKE cluster.

$ kubectl apply -f examples/rbac/habitat-operator.yml

We also needed to deploy the Habitat Updater, which watches Builder for updated packages and pulls and deploys those updates automatically (the real magic piece of the demo).
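Deploying the updater looks much like deploying the operator; a sketch, assuming the updater ships RBAC and deployment manifests of its own (the exact file names may differ in the updater repository):

```shell
# Grant the updater permission to watch and patch deployments,
# then deploy it into the cluster (manifest paths assumed)
kubectl apply -f rbac/rbac.yml
kubectl apply -f habitat-updater.yml
```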

And we could see the running app by heading to the IP address of the GKE service.
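One way to find that address is to list the services in the cluster and read the EXTERNAL-IP column; a sketch, where the service name national-parks is an assumption:

```shell
# Show the LoadBalancer service fronting the app and its external IP
# (service name assumed)
kubectl get service national-parks
```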

So at this point we could easily demo creating a new deployment of the National Parks app – but the real magic would be creating a change to the app and seeing it seamlessly roll through the entire pipeline.

Prior to the demo, I created and stashed some style changes to the National Parks app (which would show up well on a projection screen).
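Kicking off the pipeline was then just a matter of unstashing the change and pushing it to GitHub, where Builder picked it up and built a new package. A sketch; the branch and remote names here are assumptions:

```shell
# Apply the stashed style changes and push them so Builder
# rebuilds the package (remote and branch names assumed)
git stash pop
git commit -am "Update styles for the demo"
git push origin master
```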

Now, all that was left to do was to promote the new build to the stable channel of Builder:
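Promotion is a single hab CLI command, taking a fully qualified package identifier and a channel; a sketch, where the origin, version, and release below are placeholders:

```shell
# Promote the newly built package to the stable channel
# (origin/name/version/release is a placeholder identifier)
hab pkg promote myorigin/national-parks/6.3.0/20180626123456 stable
```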

Now the Habitat Updater came into play. The purpose of the Habitat Updater is to query Builder for new stable versions of Habitat packages deployed within the cluster. Every 60 seconds, the Updater queries Builder and, should there be an updated package, it pulls the Docker container image for that package from Docker Hub and re-creates the containers running that image.

And then all we had to do was revisit the IP address of the load balancer and we could see our changes live!

The whole purpose of this demo was to showcase the magic of Habitat and Kubernetes. Based on reactions to the demo, we succeeded!

Acknowledgements

Although I had the privilege of running the demo onstage, credit for creating this demo must also be shared with Fletcher Nichols and Elliot Davis – two of my fellow Habitat core team members. It takes a village to make a demo and I am so lucky to have the Habitat core team as my village.