A Docker Captain's Blog

Docker | Kubernetes | Cloud

Category: Kubernetes

Last week I attended Dockercon 2018 EU, which took place at the Centre de Convencions Internacional de Barcelona (CCIB) in Barcelona, Spain. With over 3,000 attendees from around the globe, 52 breakout sessions, 11 Community Theatres, 12 workshops, over 100 total sessions, exciting Hallway Tracks & Hands-on Labs/Trainings, paid trainings, a women's networking event, DockerPals and so on, Dockercon brought developers, sysadmins, product managers & industry evangelists together to share their wealth of experience around container technology. This time I was lucky enough to get a chance to emcee the Docker for Developers Track for the first time. Not only that, I conducted a Hallway Track for the OpenUSM project & the DockerLabs community contribution effort. Around 20-30 participants showed up to learn more about this system management, monitoring & log analytics tool.

This Dockercon we had the Docker Captains Summit for the first time, where an entire day was dedicated to Captains. On December 3 (10:00 AM till 3:00 PM), we got a chance to interact with Docker staff and put forward all our queries around Docker's future roadmap. It was amazing to meet all the young Captains who joined us this year, as well as to get familiar with what they have been contributing to during the initial introductory rounds.

This Dockercon, there were a couple of exciting announcements. Three of the new features were targeted at the Docker Community Edition, while two were for Docker Enterprise customers. Here's a rundown of what I think are the 5 most exciting announcements made last week –

#1. Announcement of Cloud Native Application Bundles(CNAB)

Microsoft and Docker captured a great deal of attention with the announcement of CNAB – Cloud Native Application Bundles.

What is CNAB?

Cloud Native Application Bundles (CNAB) is a standard packaging format for multi-component distributed applications. It allows packages to target different runtimes and architectures. It empowers application distributors to package applications for deployment on a wide variety of cloud platforms, cloud providers, and cloud services. It also provides the capabilities necessary for delivering multi-container applications in disconnected environments.

Is it a platform-specific tool?

CNAB is not a platform-specific tool. While it uses containers for encapsulating installation logic, it remains un-opinionated about what cloud environment it runs in. CNAB developers can bundle applications targeting environments spanning IaaS (like OpenStack or Azure), container orchestrators (like Kubernetes or Nomad), container runtimes (like local Docker or ACI), and cloud platform services (like object storage or Database as a Service). CNAB can also be used for packaging other distributed applications, such as IoT or edge computing. In a nutshell, CNAB is a package format specification that describes a technology for bundling, installing, and managing distributed applications that are, by design, cloud-agnostic.

Why do we need CNAB?

The current distributed computing landscape involves a combination of executable units and supporting API-based services. Executable units include Virtual Machines (VMs), Containers (e.g. Docker and OCI) and Functions-as-a-Service (FaaS), as well as higher-level PaaS services. Along with these executable units, many managed cloud services (from load balancers to databases) are provisioned and interconnected via REST (and similar network-accessible) APIs. The overall goal of CNAB is to provide a packaging format that can enable application providers and developers with a way of installing a multi-component application into a distributed computing environment, supporting all of the above types.

Is it open source? Tell me more about the CNAB format.

It is an open source, cloud-agnostic specification for packaging and running distributed applications. It is a nascent specification that offers a way to repackage distributed computing apps.

The CNAB format is a packaging format for a broad range of distributed applications. It specifies a pairing of a bundle definition (bundle.json) to define the app, and an invocation image to install the app.

The bundle definition is a single file that contains the following information (see the sketch after this list):

Information about the bundle, such as name, bundle version, description, and keywords

Information about locating and running the invocation image (the installer program)

A list of user-overridable parameters that this package recognizes

The list of executable images that this bundle will install

A list of credential paths or environment variables that this bundle requires to execute
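To make that concrete, here is a minimal sketch of what a bundle.json can look like. It follows the early draft of the CNAB spec available at the time (field names and structure may differ in later revisions), and all names and values are illustrative:

cat > bundle.json <<'EOF'
{
  "name": "helloworld",
  "version": "0.1.0",
  "description": "An example multi-component application",
  "keywords": ["example", "demo"],
  "invocationImages": [
    { "imageType": "docker", "image": "example/helloworld-installer:0.1.0" }
  ],
  "images": [
    { "name": "web", "image": "example/helloworld-web:0.1.0" }
  ],
  "parameters": {
    "port": { "type": "int", "defaultValue": 8080 }
  },
  "credentials": {
    "kubeconfig": { "path": "/root/.kube/config" }
  }
}
EOF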

What is Docker's future plan for CNAB?

This project was incubated by Microsoft and Docker a year back. The first implementation of the spec is an experimental utility called Docker App, which Docker officially rolled out this Dockercon and which is expected to be integrated with Docker Enterprise in the near future. Microsoft and Docker plan to publicly donate CNAB to an open source foundation, which is expected to happen early next year.

#2. Support for using Docker Compose on Kubernetes.

On the 2nd day of Dockercon, Docker Inc. open-sourced the Compose on Kubernetes project. Docker Enterprise Edition already had this capability starting with Compose File version 3.3, allowing one to use the same docker-compose.yml file for Swarm deployments as well as to target Kubernetes workloads whenever a stack is deployed.

What benefit does this bring to Community Developers?

By making it open source, Docker, Inc. has really paved the way for infinite possibilities around a simplified way of deploying Kubernetes applications. Docker Swarm gained popularity because of its simplified approach to application deployment using a docker-compose.yml file. Now community developers can use the same YAML file to deploy their K8s applications.

Imagine you are using Docker Desktop on your MacBook. Docker Desktop provides the capability of running both Swarm & Kubernetes. You have your context set to a GKE cluster running on Google Cloud Platform. You have just deployed your app using docker-compose.yml on your local MacBook. Now you want to deploy it the same way, but this time on your GKE cluster. Just use the docker stack deploy command to deploy it to the GKE cluster, as sketched below. Interesting, isn't it?
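Here is a minimal sketch of that workflow. The compose file and stack name are placeholders, and the --orchestrator flag assumes a recent CLI (it appeared around Docker 18.09) plus the Compose on Kubernetes controller installed in the target cluster:

cat > docker-compose.yml <<'EOF'
version: "3.3"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
EOF

# Deploy to a local Swarm
docker stack deploy -c docker-compose.yml web

# Deploy the very same file to Kubernetes (e.g. your GKE context)
docker stack deploy --orchestrator=kubernetes -c docker-compose.yml web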

What does the Compose on Kubernetes architecture look like?

Compose on Kubernetes is made up of server-side and client-side components. This architecture was chosen so that the entire life cycle of a stack can be managed. The following image is a high-level diagram of the architecture:

If you're interested to learn further, I would suggest you visit this link.

This controller uses the standard Kubernetes extension points to introduce the `Stack` to the Kubernetes API. You can use any Kubernetes cluster you like, but if you don't already have one available, remember that Docker Desktop comes with Kubernetes and the Compose controller built-in, and enabling it is as simple as ticking a box in the settings.
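Once the controller is running, you can sanity-check that the Stack API has been registered. A quick way to do so (compose.docker.com is the API group Compose on Kubernetes registers; your output may vary):

kubectl api-versions | grep compose
# compose.docker.com/v1beta1
# compose.docker.com/v1beta2

kubectl get stacks   # lists stacks deployed via docker stack deploy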

Check out the latest doc which shows how to make it work with AKS here.

#3. Introducing Docker Desktop Enterprise

The 3rd big announcement was the introduction of Docker Desktop Enterprise. With this, Docker Inc. made a new addition to their desktop product portfolio, which currently includes the free Docker Desktop Community products for macOS and Windows. Docker Desktop is the simplest way to get started with container-based development on both Windows 10 and macOS, with a set of features now available for the enterprise.

How will Docker Desktop Enterprise be different from Docker Desktop Community Edition?

Good question. Docker Desktop has Docker Engine and Kubernetes built-in and with the addition of swappable version packs you can now synchronize your desktop development environment with the same Docker API and Kubernetes versions that are used in production with Docker Enterprise. You get the assurance that your application will not break due to incompatible API calls, and if you have multiple downstream environments running different versions of the APIs, you can quickly change your desktop configuration with the click of a button.

Not only this: with Docker Desktop Enterprise, you get access to the Application Designer, a new workflow that provides production-ready application and service templates that let you get coding quickly, with the reassurance that your application meets architectural standards.

(source: Docker, Inc.)

For those who are interested in Docker Desktop Enterprise – please note that it is expected to be available for preview in January, with General Availability slated for 1H 2019.

#4. From Zero to Docker in Seconds with “docker assemble” CLI

This time, the Docker team announced a very interesting docker subcommand, aptly named "assemble", to the public. Ann Rahma and Gareth Rushgrove from Docker, Inc. announced assemble, a new command that generates optimized images from non-dockerized apps. It gets you from source to an optimized Docker image in seconds.

Here are a few interesting facts around the docker assemble utility:

docker assemble can build an image without a Dockerfile; it is all about auto-detecting the code framework.

It generates Docker images (and a lot more) from your code with a single command and zero effort, which means no Dockerfile is needed for your app as long as you have a config file (a .pom file, for instance).

It can analyze your applications, dependencies, and caches, and give you a sweet Docker image without having to author your own Dockerfiles.

It is built on top of BuildKit and will auto-detect the framework, versions, etc. from a config file (such as a .pom file), automatically add dependencies to the image label, optimize image size, and push.

Docker Assemble can also figure out what ports need to be published and what healthchecks are relevant.

docker assemble builds the app without extra configuration files and without a Dockerfile; just a Git repository to deploy.

Is it an open source project?

It's an enterprise feature for now, not in the community version. It is available for a couple of languages and frameworks (like Java, as demonstrated on the Dockercon stage).

How is it different from buildpack?

Reading through its features, docker assemble might look very similar to buildpacks, as it overlaps with some of what they do. But the huge benefit of assemble is that it produces more than just an image (also ports, healthchecks, volume mounts, etc.), and it's integrated into the enterprise toolchain. docker assemble is sort of an enterprise-grade buildpack to help with digitalization.

Keep an eye on my next blog post to get more detail around the fancy docker assemble command.

#5. Docker-app & CNAB together for the first time

On the 2nd day of Dockercon, Docker confirmed that they are the first to implement CNAB for containerized applications and will be expanding it across the Docker platform to support new application development, deployment and lifecycle management. Initially, CNAB support will be released as part of the docker-app experimental tool for building, packaging and managing cloud-native applications. With this, Docker now lets you package CNAB bundles as Docker images, so you can distribute and share them through Docker registry tools including Docker Hub and Docker Trusted Registry. Additionally, Docker will enable organizations to deploy and manage CNAB-based applications in Docker Enterprise in the upcoming months.

Can I test the preview binaries of docker-app which comes with CNAB support?

Yes, you can find some preview binaries of docker-app with CNAB support here. The latest release of Docker App is one such tool that implements the current CNAB spec. It can be used both to build CNAB bundles for Compose (which can then be used with any other CNAB client), and to install, upgrade and uninstall any other CNAB bundle.

As monolithic systems become too large to deal with, many enterprises are drawn to breaking them down into a microservices architecture. Whenever we move from a monolith to microservices, the application consists of multiple components, i.e. services talking to each other. Each component has its own resources and can be scaled individually. If we talk about Kubernetes, it can become very complex with all the objects you need to handle ― such as ConfigMaps, services, pods, Persistent Volumes ― in addition to the number of releases you need to manage. The below-listed challenges might occur:

1. Manage, Edit and Update multiple k8s configuration

2. Deploy Multiple K8s configuration as a SINGLE application

3. Share and reuse K8s configurations and applications

4. Parameterize and support multiple environments

5. Manage application releases: rollout, rollback, diff, history

6. Define the deployment lifecycle (control operations to be run in different phases)

7. Validate release state after deployment

These can be managed with Kubernetes Helm, which offers a simple way to package everything into one simple application and advertises what you can configure.

Helm is a deployment management tool (and NOT JUST a PACKAGE MANAGER) for Kubernetes. It does the heavy lifting of repeatable deployments, management of dependencies (reuse and share), management of multiple configurations, and the update, rollback and testing of application deployments (releases).

Under this blog post, we will test-drive Helm on top of the Play with Kubernetes platform. Let's get started.

Open the Play with Kubernetes site and click on the Login button to authenticate with your Docker Hub or GitHub ID.

Once you start the session, you will have your own lab environment.

Adding First Kubernetes Node

Click on "Add New Instance" on the left to build your first Kubernetes cluster node. It is automatically named "node1". Each instance has Docker Community Edition (CE) and kubeadm pre-installed. This node will be treated as the master node for our cluster.

Bootstrapping the Master Node

You can bootstrap the Kubernetes cluster by initializing the master (node1) node with the below script. Copy the script content into a bootstrap.sh file and make it executable using the "chmod +x bootstrap.sh" command.
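The post's exact script isn't reproduced here, but on Play with Kubernetes it typically boils down to a kubeadm init plus a pod network, along the lines of this sketch:

# bootstrap.sh (sketch)
kubeadm init --apiserver-advertise-address $(hostname -i) --pod-network-cidr 10.5.0.0/16
mkdir -p $HOME/.kube
cp /etc/kubernetes/admin.conf $HOME/.kube/config
kubectl apply -f https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter.yaml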

When you execute this script, as part of the initialization kubeadm writes several needed configuration files, sets up RBAC and deploys the Kubernetes control-plane components (like kube-apiserver, kube-dns, kube-proxy, etcd, etc.). The control-plane components are deployed as Docker containers.

Copy the kubeadm join command printed at the end of the init output and save it for the next step. This command will be used to join other nodes to your cluster.

docker-app allows you to share your applications on Docker Hub directly. This tool not only makes the Compose file shareable but provides a simplified approach to sharing a multi-service application (not just a Docker image) directly on Docker Hub, as sketched below.
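As a quick illustration, the basic workflow looks like the following sketch (the app name and Hub namespace are placeholders; flag names are per docker-app 0.6.x and may have changed in later releases):

docker-app init myapp                                # scaffolds myapp.dockerapp next to your docker-compose.yml
docker-app render myapp                              # prints the fully resolved Compose file
docker-app push --namespace myhubuser --tag 0.1.0    # shares the application package on Docker Hub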

docker-app 0.6.0 was released two weeks back. A few of the notable features included in this release are –

Support for external files (extra configuration files): when pushing and pulling, all files present in the application folder are included.

Render can now produce output in multiple formats (YAML or JSON).

split and merge now work properly when specifying an image as input and no output.

Command line accepts a trailing slash in the application path.

But one of the most important fixes which arrived with this release was related to Helm charts. This release fixed multiple issues in Helm chart generation.

Logging in Docker EE

By now, you should be able to log in to the Docker EE window using a browser. Upload the license and you should be good to see the UCP console.

Installing Kubectl

Undoubtedly, kubectl has been the favourite command-line tool for K8s users. It works great, but it's painful because you use it to manually run a command for each resource in your Kubernetes application. This is prone to error, because we might forget to deploy one resource, or introduce a typo when writing our kubectl commands. As we add more parts to our application, the probability of these problems occurring increases.

But still, here's a bonus – execute the below script if you really want to use kubectl on the Docker EE platform.

Verifying the Kubernetes Nodes

Adding Worker Nodes

To add worker nodes, go to the Add Nodes section under the Docker Enterprise UI and click on "Add a Node". It will display a command which needs to be executed on the worker nodes. This should be enough to build a multi-node Docker EE Swarm & Kubernetes cluster.

Downloading Client Bundle

In order to manage services using the Docker CLI, one needs to install the client bundle, and Docker Inc. provides an easy way to install it. I put it under the script "install-client-bundle". All you need is to supply the correct username and password plus the UCP URL to make it work.

Create WordPress Helm Package

docker-app helm wordpress will output a Helm package in the ./wordpress.helm folder. The --compose-file (or -c), --set (or -e) and --settings-files (or -s) flags apply the same way they do for the render subcommand.
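For example (a sketch; the settings key is hypothetical and depends on what your application package exposes):

docker-app helm wordpress -e mysql.rootpassword=wordpress101
ls ./wordpress.helm    # inspect the generated Helm package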

Deploy WordPress Application on Kubernetes Cluster

We are now all set to deploy the WordPress application to the Kubernetes cluster using the docker-app deploy command.
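Assuming the wordpress.dockerapp package from the previous step, the deployment itself is a one-liner (a sketch; docker-app targets the orchestrator of your current context):

docker-app deploy wordpress
kubectl get pods    # watch the WordPress pods come up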

Let’s talk about RBAC under Docker EE 2.0…

Kubernetes RBAC (Role-Based Access Control) is a fundamental part of Kubernetes security best practices, along with rolling out TLS certificates / PKI authentication for connecting to the Kubernetes API server and between its components. Kubernetes RBAC is essentially an authorization and access-control specification where you define the actions (GET, UPDATE, DELETE, etc.) that Kubernetes subjects (i.e. human users, software, kubelets) are allowed to perform over Kubernetes entities (i.e. pods, secrets, nodes).
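As a concrete illustration of plain Kubernetes RBAC (not the Docker EE layer discussed next), here is a minimal Role/RoleBinding pair; the resource names and the user jane are placeholders:

cat > pod-reader-rbac.yaml <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
EOF
kubectl apply -f pod-reader-rbac.yaml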

Early this year, Docker EE 2.0 introduced an enhanced RBAC solution for the first time that provided flexible and granular access controls across multiple teams and users. Docker EE leverages the Kubernetes webhook authentication model. This feature enables the validation of all requests by an outside source. With Docker EE we use the control plane’s RBAC controller, eNZi. Each Kubernetes request, whether issued via the CLI or the GUI, is validated against Docker EE’s authN/authZ database, and then rejected or accepted as appropriate.

With Docker EE 2.0, UCP now includes an upstream distribution of Kubernetes. From a security point of view this is the best of both worlds. Out of the box Docker EE 2.0 provides user authentication and RBAC on top of Kubernetes. To ensure the Kubernetes orchestrator follows all the security best practices UCP utilizes TLS for the Kubernetes API port. When combined with UCP’s auth model, this allows for the same client bundle to talk to the Swarm or Kubernetes API.

Early this year, I wrote a blog post which deep-dives into the Docker EE 2.0 architecture. Check it out if you are new and want to get into the nitty-gritty of the Docker Enterprise product.

Logging in to Docker EE, installing kubectl, verifying the Kubernetes nodes, adding worker nodes, and downloading the client bundle follow exactly the same steps described in the earlier section, so complete those first before moving on to the grants below.

In the UCP UI, navigate to the User Management > Grants menu and click the Create button. In the Subject area, choose Service Account and select the kube-system Namespace and tiller Service Account; then hit Next:

In the Role area, select Full Control. Then hit Next. In the Resource area, select Namespaces, flip the switch to enable Apply grant to all existing and new namespaces, then hit Create:

Verify the final grant. It should look similar to what is shown below:

At this stage, if you already have Tiller installed in the cluster, you will need to patch the Tiller deployment to use the tiller service account just created:
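The standard patch from the Helm documentation looks like this:

kubectl --namespace kube-system patch deploy tiller-deploy \
  -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'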

Before we can use Helm with a Kubernetes cluster, we need to install Tiller on it. It's as easy as running the below command:

openusm@master01:~$ helm init --service-account tiller
$HELM_HOME has been configured at /home/openusm/.helm.
Warning: Tiller is already installed in the cluster.
(Use --client-only to suppress this message, or --upgrade to upgrade Tiller to the current version.)
Happy Helming!

The above command installs Tiller (the Helm server-side component) onto your Kubernetes cluster and sets up local configuration in $HELM_HOME (default ~/.helm/). As with the rest of the Helm commands, 'helm init' discovers Kubernetes clusters by reading $KUBECONFIG (default '~/.kube/config') and using the default context.

Istio is a completely open source service mesh that layers transparently onto existing distributed applications. Istio v1.0 was announced last month and is ready for production. It is written entirely in Go and is a fully grown platform which provides APIs that let it integrate into any logging, telemetry or policy system. This project adds a very tiny overhead to your system. It is hosted on GitHub under this link. Istio's diverse feature set lets you successfully, and efficiently, run a distributed microservice architecture, and provides a uniform way to secure, connect, and monitor microservices.

Istio is composed of these components:

Envoy – Sidecar proxies per microservice to handle ingress/egress traffic between services in the cluster and from a service to external services. The proxies form a secure microservice mesh providing a rich set of functions like discovery, rich layer-7 routing, circuit breakers, policy enforcement and telemetry recording/reporting functions.

Note: The service mesh is not an overlay network. It simplifies and enhances how microservices in an application talk to each other over the network provided by the underlying platform.

Mixer – Central component that is leveraged by the proxies and microservices to enforce policies such as authorization, rate limits, quotas, authentication, request tracing and telemetry collection.

Pilot – A component responsible for configuring the proxies at runtime.

Setting up the cluster follows the same steps as earlier: click on "Add New Instance" to create the master node (node1), bootstrap it with the bootstrap.sh script shown previously, and save the kubeadm join command it prints for the worker nodes.

Adding Worker Nodes

Click on “Add New Node” to add a new worker node.

Checking the Cluster Status

Verifying the running Pods

Installing Istio 1.0.0

Istio is deployed in a separate Kubernetes namespace, istio-system. We will verify it later. For now, copy the below content into a file called install_istio.sh and save it. Make it executable and run it to install Istio and related tools.
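The post's original script isn't reproduced here; the following minimal sketch follows the standard Istio 1.0.0 quick start and matches the istio-demo.yaml manifest referenced in the error below:

# install_istio.sh (sketch)
curl -L https://github.com/istio/istio/releases/download/1.0.0/istio-1.0.0-linux.tar.gz | tar xz
cd istio-1.0.0
export PATH=$PWD/bin:$PATH
kubectl apply -f install/kubernetes/istio-demo.yaml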

As shown above, it will enable Prometheus, ServiceGraph, Jaeger, Grafana, and Zipkin by default.

Please note – while executing this script, it might end up with the below error message –

unable to recognize “install/kubernetes/istio-demo.yaml”: no matches for admissionregistration.k8s.io/, Kind=MutatingWebhookConfiguration

The error message is expected.

As soon as the command completes, you should see a long list of ports displayed at the top center of the page.

Verifying the Services

Exposing the Services

To expose the Prometheus, Grafana & Servicegraph services, you will need to delete the existing services and recreate them with NodePort instead of ClusterIP, so that each service can be accessed using the port displayed at the top of the instance page (as shown below).
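For example, for Grafana (the labels, port and nodePort here follow the istio-demo manifests and the 30004 port mentioned below; adjust if your environment differs):

kubectl -n istio-system delete svc grafana
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: istio-system
spec:
  type: NodePort
  selector:
    app: grafana
  ports:
  - port: 3000
    targetPort: 3000
    nodePort: 30004
EOF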

You should be able to access the Grafana page by clicking on port "30004" and the Prometheus page by clicking on port "30003".

You can check Prometheus metrics by selecting the necessary option as shown below:

Under Grafana Page, you can add “Data Source” for Prometheus and ensure that the dashboard is up and running:

Congratulations! You have installed Istio on the Kubernetes cluster. The below-listed services have been installed on the K8s playground:

Istio Controllers and related RBAC rules

Istio Custom Resource Definitions

Prometheus and Grafana for Monitoring

Jaeger for Distributed Tracing

Istio Sidecar Injector (we'll take a look in the next section)

Installing Istioctl

Istioctl is the configuration command-line utility of Istio. It helps to create, list, modify and delete configuration resources in the Istio system.

Deploying the Sample BookInfo Application

Now that Istio is installed and verified, you can deploy one of the sample applications provided with the installation: BookInfo. This is a simple mock bookstore application made up of four services that provide a web product page, book details, reviews (with several versions of the review service), and ratings – all managed using Istio.
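With manual sidecar injection, the deployment looks like this (paths are per the Istio 1.0 release tree downloaded earlier):

kubectl apply -f <(istioctl kube-inject -f samples/bookinfo/platform/kube/bookinfo.yaml)
kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
kubectl get pods    # each pod should show 2/2 once the Envoy sidecar is injected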

Nginx (pronounced “engine-x”) is an open source reverse proxy server for HTTP, HTTPS, SMTP, POP3, and IMAP protocols, as well as a load balancer, HTTP cache, and a web server (origin server). The nginx project started with a strong focus on high concurrency, high performance and low memory usage. It is licensed under the 2-clause BSD-like license and it runs on Linux, BSD variants, Mac OS X, Solaris, AIX, HP-UX, as well as on other *nix flavors. It also has a proof of concept port for Microsoft Windows.

In my last blog post, I showcased how to build a 5-node Kubernetes cluster. In this blog post, we will see how to build our first Nginx application on that cluster.
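As a preview of where we're headed, the whole exercise boils down to a few kubectl commands (on newer clusters, kubectl create deployment replaces kubectl run for this):

kubectl run nginx --image=nginx --replicas=2 --port=80
kubectl expose deployment nginx --type=NodePort --port=80
kubectl get pods,svc    # note the NodePort assigned to the nginx service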

Are you new to Kubernetes? Want to build your career in Kubernetes? Then welcome! You are at the right place. This blog post series brings you tutorials that help you get hands-on experience using Kubernetes. Here you will find a mix of labs and tutorials that will help you, no matter if you are a beginner, SysAdmin, IT Pro or Developer. Yes, you read it right! It's a $0 learning platform. You don't need any infrastructure. Most of the tutorials run on the Play with K8s platform, a free browser-based learning environment. Kubernetes tools like kubeadm, kompose & kubectl are already installed for you. All you need is to get started.

Kubernetes (often abbreviated to K8s) is a container orchestration platform: an open-source system for automating deployment, scaling, and management of containerized applications. It groups the containers that make up an application into logical units for easy management and discovery.

Kubernetes can speed up the development process by making easy, automated deployments, updates (rolling-update) and by managing our apps and services with almost zero downtime. It also provides self-healing. Kubernetes can detect and restart services when a process crashes inside the container. Any developer can package up applications and deploy them on Kubernetes with basic Docker knowledge.

At a minimum, Kubernetes can schedule and run application containers on clusters of physical or virtual machines. However, Kubernetes also allows developers to ‘cut the cord’ to physical and virtual machines, moving from a host-centric infrastructure to a container-centric infrastructure, which provides the full advantages and benefits inherent to containers. Kubernetes provides the infrastructure to build a truly container-centric development environment. K8s provides a rich set of features for container grouping, container orchestration, health checking, service discovery, load balancing, horizontal autoscaling, secrets & configuration management, storage orchestration, resource usage monitoring, CLI, and dashboard.

This is the first blog post in the series, targeted at setting up a 5-node Kubernetes cluster. To get started with Kubernetes, follow the below steps:
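After initializing the master with kubeadm init, each of the remaining four nodes joins the cluster with the command printed by the master. The token and hash below are placeholders (use the ones from your own init output; the hash flag may not be required on older kubeadm versions):

kubeadm join --token <token> 192.168.0.8:6443 --discovery-token-ca-cert-hash sha256:<hash>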

[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[discovery] Trying to connect to API Server "192.168.0.8:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.0.8:6443"
[discovery] Requesting info from "https://192.168.0.8:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.0.8:6443"
[discovery] Successfully established connection with API Server "192.168.0.8:6443"
[bootstrap] Detected server version: v1.8.15
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
Node join complete:
* Certificate signing request sent to master and response received.
* Kubelet informed of new secure connection details.
Run 'kubectl get nodes' on the master to see this machine join.
[node2 ~]$

If you're a developer who has been spending a lot of time developing apps recently, you already understand a whole new set of challenges related to microservices architecture. There has been a shift from bloated monolithic apps to small, focused microservices to speed up implementation and improve resiliency, but the fact is that developers now have to worry about the challenges of integrating services in distributed systems: accountability for service discovery, load balancing, registration, fault tolerance, monitoring, routing, compliance and security.

Let us understand the challenges which microservices bring to developers and operators in detail. Consider a simple 1st-generation service mesh scenario. As shown below, Service (A) talks to Service (B). Instead of talking directly, the request gets routed through Nginx. Nginx finds the route in Consul (a service discovery tool), and automatic retries happen on HTTP 502 responses.

But with the growing number of microservices, the below-listed challenges arise for both developers and operations teams –

How to enable these growing number of microservices to talk to each other?

How to enable these growing number of microservices to load-balance?

How to enable these growing number of microservices to provide role-based routing?

How to implement outgoing traffic on these microservices and test canary deployment?

How to manage complexity around these growing pieces of microservices?

How can operator implement fine-grained control of traffic behavior with rich-routing rules?

In a nutshell, although you could put service discovery and retry logic into the application or networking middleware, service discovery remains tricky to get right.

Enter Istio’s Service Mesh

"Service Mesh" is one of the hottest buzzwords of 2018. As its name suggests, it is a configurable infrastructure layer for a microservices app. It describes the network of microservices that make up such applications and the interactions between them. It makes communication between service instances flexible, reliable, and fast. The mesh provides service discovery, load balancing, encryption, authentication and authorization, support for the circuit-breaker pattern, and other capabilities.

Istio is a completely open source service mesh that layers transparently onto existing distributed applications. Istio v1.0 was announced last month and is ready for production. It is written entirely in Go and is actually a platform, including APIs that let it integrate into any logging, telemetry or policy system. This project adds a very tiny overhead to your system. It is hosted on GitHub. Istio's diverse feature set lets you successfully, and efficiently, run a distributed microservice architecture, and provides a uniform way to secure, connect, and monitor microservices.

The Docker for Mac 18.05.0 CE release went GA last month. With this release, you can now select your orchestrator directly from the UI in the "Kubernetes" pane, which allows "docker stack" commands to deploy to Swarm clusters even if Kubernetes is enabled in Docker for Mac. This is the first time this feature has appeared in any Desktop edition. To try it out, ensure that you are using the Edge release of Docker for Mac 18.05.0 CE. Once you update your Docker for Mac, you can find this new feature by opening the Preferences pane and selecting Kubernetes, as shown below:

Whenever you select your choice of orchestrator, it updates the ~/.docker/config.json file in the backend, as shown below:
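You can inspect the effect from a terminal; the stackOrchestrator key is what recent Docker for Mac builds write (output trimmed and illustrative):

cat ~/.docker/config.json
# {
#   "stackOrchestrator" : "kubernetes",
#   ...
# }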

Docker for Mac is used every day by hundreds of thousands of developers to build, test and debug containerized apps in their local and dev environments. Developers building both docker-compose- and Swarm-based apps, and apps destined for deployment on Kubernetes, now get a simple-to-use development system that takes optimal advantage of their laptop or workstation. All container tasks – build, run and push – run on the same Docker instance with a shared set of images, volumes and containers. With the current release, it is much simpler to install, so you can have Docker containers running on your Mac in just a few minutes.

Docker for Mac provides the docker stack command to deploy your application to both Swarm and Kubernetes. This is very useful for Docker Swarm users, as they can use the same Swarm CLI to bring up Kubernetes workloads. But here is an extra bonus – Docker for Mac now works flawlessly with the Helm package manager.

Why Yet another Package Manager?

Let’s accept the fact that Kubernetes can become very complex with all the objects you need to handle ― such as ConfigMaps, services, pods, Persistent Volumes ― in addition to the number of releases you need to manage. These can be managed with Kubernetes Helm, which offers a simple way to package everything into one simple application and advertises what you can configure.

In case you are completely new to it – Helm is an open source project that enables developers to create packages of containerized apps to make installation much simpler. Helm is the package manager for Kubernetes, and it's the best way to find, share, and deploy software to K8s. The project was initially created by Deis and has since been donated to the Cloud Native Computing Foundation (CNCF).

Users can install Helm with one click or configure it to suit their organization’s needs. For example, if you want to package and release version 1.0, making only certain parts configurable, this can be done with Helm. Then with version 2.0, additional parts can be made configurable.

Helm is built around three core concepts:

Charts – a bundle of information necessary to create an instance of a Kubernetes application

Config – contains configuration information that can be merged into a packaged chart to create a releasable object

Release – a running instance of a chart, combined with a specific config

Architecture of Helm:

Architecturally, it's built on two major components:

Helm Client, a command-line tool with the following responsibilities:

Interacting with the Tiller server

Sending charts to be installed

Upgrading or uninstalling existing releases

Managing repositories

Tiller Server, an in-cluster server with the following responsibilities:

Interacting with the Helm client

Interfacing with the Kubernetes API server

Combining a chart and configuration to build a release

Installing charts and tracking the release

Upgrading and uninstalling charts

Both the Helm client and Tiller are written in Go and use gRPC to interact with each other. Tiller (the server part running inside Kubernetes) provides a gRPC server to connect with the client, and it uses the k8s client library to communicate with Kubernetes. It does not require its own database, as the information is stored within Kubernetes as ConfigMaps.

Installing Helm

Pre-requisites:

Docker for Mac 18.05.0 CE – Edge Release

Enable Kubernetes under Preference Pane UI

To install Helm, you just need a one-liner command on macOS:

[Captains-Bay]? > brew install kubernetes-helm
[Captains-Bay]? > helm
The Kubernetes package manager
To begin working with Helm, run the 'helm init' command:
$ helm init
This will install Tiller to your running Kubernetes cluster.
It will also set up any necessary local configuration.
Common actions from this point include:
- helm search: search for charts
- helm fetch: download a chart to your local directory to view
- helm install: upload the chart to Kubernetes
- helm list: list releases of charts
Environment:
$HELM_HOME set an alternative location for Helm files. By default, these are stored in ~/.helm
$HELM_HOST set an alternative Tiller host. The format is host:port
$HELM_NO_PLUGINS disable plugins. Set HELM_NO_PLUGINS=1 to disable plugins.
$TILLER_NAMESPACE set an alternative Tiller namespace (default "kube-system")
$KUBECONFIG set an alternative Kubernetes configuration file (default "~/.kube/config")
Usage:
helm [command]
Available Commands:
completion Generate autocompletions script for the specified shell (bash or zsh)
create create a new chart with the given name
delete given a release name, delete the release from Kubernetes
dependency manage a chart's dependencies
fetch download a chart from a repository and (optionally) unpack it in local directory
get download a named release
history fetch release history
home displays the location of HELM_HOME
init initialize Helm on both client and server
inspect inspect a chart
install install a chart archive
lint examines a chart for possible issues
list list releases
package package a chart directory into a chart archive
plugin add, list, or remove Helm plugins
repo add, list, remove, update, and index chart repositories
reset uninstalls Tiller from a cluster
rollback roll back a release to a previous revision
search search for a keyword in charts
serve start a local http web server
status displays the status of the named release
template locally render templates
test test a release
upgrade upgrade a release
verify verify that a chart at the given path has been signed and is valid
version print the client/server version information
Flags:
--debug enable verbose output
-h, --help help for helm
--home string location of your Helm config. Overrides $HELM_HOME (default "/Users/ajeetraina/.helm")
--host string address of Tiller. Overrides $HELM_HOST
--kube-context string name of the kubeconfig context to use
--tiller-connection-timeout int the duration (in seconds) Helm will wait to establish a connection to tiller (default 300)
--tiller-namespace string namespace of Tiller (default "kube-system")
Use "helm [command] --help" for more information about a command.

Verify the Helm version. If the server and client versions don't match, you need to upgrade Tiller to deploy applications seamlessly (as shown below):
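For example:

helm version          # compare the Client and Server (Tiller) versions
helm init --upgrade   # upgrades Tiller in-cluster to match the client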

Docker is a full development platform for creating containerized apps, and Docker for Mac is the most efficient way to start and run Docker on your MacBook. It runs on a LinuxKit VM, NOT on VirtualBox or VMware Fusion. It embeds a hypervisor (based on xhyve), a Linux distribution built with LinuxKit, and filesystem & network sharing that is much more Mac-native. It is a Mac-native application that you install in /Applications. At installation time, it creates symlinks in /usr/local/bin for docker & docker-compose and others, pointing to the commands in the application bundle, in /Applications/Docker.app/Contents/Resources/bin.

One of the most amazing things about Docker for Mac is that you just drag & drop the Mac application into /Applications and the Docker CLI works flawlessly. The way the filesystem sharing maps macOS volumes seamlessly into Linux containers, remapping macOS UIDs into Linux, is one of the most anticipated features.

A Few Notable Features of Docker for Mac:

Docker for Mac runs in a LinuxKit VM.

Docker for Mac uses HyperKit instead of VirtualBox. HyperKit is a lightweight macOS virtualization solution built on top of Hypervisor.framework in macOS 10.10 Yosemite and higher.

Docker for Mac does not use docker-machine to provision its VM. The Docker Engine API is exposed on a socket available to the Mac host at /var/run/docker.sock. This is the default location Docker and Docker Compose clients use to connect to the Docker daemon, so you can use docker and docker-compose CLI commands on your Mac.

When you install Docker for Mac, machines created with Docker Machine are not affected.

There is no docker0 bridge on macOS. Because of the way networking is implemented in Docker for Mac, you cannot see a docker0 interface on the host. This interface is actually within the virtual machine.

Docker for Mac now has multi-architecture support. It provides binfmt_misc multi-architecture support, so you can run containers for different Linux architectures, such as arm, mips, ppc64le, and even s390x.

Under this blog, I will deep-dive into the Docker for Mac architecture and show how to access the service containers running on top of the LinuxKit VM.

At the base of the architecture, we have a hypervisor called HyperKit, which is derived from xhyve. The xhyve hypervisor is a port of bhyve to OS X. It is built on top of Hypervisor.framework in OS X 10.10 Yosemite and higher, runs entirely in userspace, and has no other dependencies. HyperKit is basically a toolkit for embedding hypervisor capabilities in your application. It includes a complete hypervisor optimized for lightweight virtual machines and container deployment. It is designed to be interfaced with higher-level components such as VPNKit and DataKit.

Sitting next to HyperKit is the filesystem sharing solution. osxfs is a new shared file system solution, exclusive to Docker for Mac. osxfs provides a close-to-native user experience for bind-mounting macOS file system trees into Docker containers. To this end, osxfs features a number of unique capabilities as well as differences from a classical Linux file system. On macOS Sierra and lower, the default file system is HFS+; on macOS High Sierra, it is APFS. With the recent release, NFS volume sharing has been enabled for both Swarm & Kubernetes.

There is one more important component sitting next to HyperKit, rightly called VPNKit. VPNKit is the part of HyperKit that attempts to work nicely with VPN software by intercepting the VM traffic at the Ethernet level, parsing and understanding protocols like NTP, DNS, UDP and TCP, and doing the "right thing" with respect to the host's VPN configuration. VPNKit operates by reconstructing Ethernet traffic from the VM and translating it into the relevant socket API calls on OS X. This allows the host application to generate traffic without requiring low-level Ethernet bridging support.

On top of these open source components, we have the LinuxKit VM, which runs containerd and a set of service containers, including the Docker Engine. The LinuxKit VM is built from a YAML file. The docker-for-mac.yml contains an example use of the open source components of Docker for Mac. The example has support for controlling dockerd from the host via vsudd and port forwarding with VPNKit. It requires HyperKit, VPNKit and a Docker client on the host to run.

Sitting next to the Docker CE service containers, we have the kubelet binaries running inside the LinuxKit VM. If you are new to K8s, the kubelet is an agent that runs on each node in the cluster. It makes sure that containers are running in a pod. It basically takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy. The kubelet doesn't manage containers which were not created by Kubernetes. On top of the kubelet, we have the Kubernetes services running. We can run either a Swarm cluster or a Kubernetes cluster, and we can use the same Compose YAML file to bring up both clusters side by side.

Peeping into LinuxKit VM

Curious about the VM and what Docker for Mac CE Edition actually looks like?

Below is the list of commands which you can leverage to get into the LinuxKit VM and see the Kubernetes services up and running. Here you go.

How to enter into LinuxKit VM?

Open the macOS terminal and run the below command to enter the LinuxKit VM:
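One well-known way in is to attach to the VM's serial TTY that Docker for Mac exposes (press Enter after connecting; detach by killing the screen session with Ctrl-A then K):

screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty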

Listing out the service containers:

Earlier, ctr tasks ls used to list the service containers running inside the LinuxKit VM, but in a recent release a namespace concept was introduced, hence you might need to run the below command to list out the service containers:
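From inside the LinuxKit VM (services.linuxkit is the containerd namespace Docker for Mac uses for its system services):

ctr -n services.linuxkit tasks ls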