[Containers]

Up and Running with Azure Kubernetes Services

In the world of container orchestration, everyone seems to be talking about Kubernetes. Originally designed by Google, Kubernetes is an open source platform for orchestrating Docker (or any Open Container Initiative) containers across clusters of virtual machines (VMs), with support for deployment, rollbacks, scaling and a host of other features. Administering a Kubernetes cluster can be a complex endeavor, so the Azure team has provided a managed solution called Azure Kubernetes Service (AKS) to make the process considerably easier.

In this article, I’ll demonstrate how to deploy an AKS cluster, create a secure Azure Container Registry (ACR), deploy an ASP.NET Web API application, and expose that application on the Internet via a Kubernetes Ingress and Azure DNS. This is not intended to be an in-depth article on Kubernetes, but rather everything you need to get up and running quickly with an application using the Azure AKS offering.

One of the benefits of Azure AKS is that the control plane, consisting of the master and configuration nodes, is fully managed. The Kubernetes control plane typically comprises at least one master node and one, three or five etcd configuration nodes. As you can imagine, managing these core services can be tedious and costly. With AKS, you can upgrade the core services or scale out additional worker nodes with a click of a button. Additionally, at this time there are no additional charges for these management nodes. You only pay for the worker nodes that run your services.

The code discussed in this article can be found at bit.ly/2zaFQeq. Included in the repository is a basic ASP.NET Web API application, along with a Dockerfile and Kubernetes manifests.

Creating a Kubernetes AKS Cluster

The Azure CLI is used for creating and managing resources in an Azure Cloud subscription. It can be found at bit.ly/2OcwFQb. Throughout this article, I’ll be using it to manage various Azure services. Azure Cloud Shell (bit.ly/2yESmTP) is a Web-based shell that allows developers to run commands without installing the CLI locally.

Let’s get started by creating a resource group to hold the AKS cluster and container registry, with this bit of code:

> az group create --name my-aks-cluster --location eastus2

Once the resource group is created, I create a cluster with a single node. While VMs as small as the B1 burstable images are allowed, it’s suggested that you use at least a 2-core, 7GB memory instance (D-series or greater). Anything smaller has a tendency to be unreliable when scaling and upgrading a cluster. You’ll also need to take into consideration that AKS currently only supports nodes of a single type, so you cannot decide to scale up to a larger VM instance at a later time. Adding nodes of a different type will be supported in the future with multiple node pools, but for now you need to choose a node size that fits the needs of the services that you plan to run.

Sit back and be patient while the cluster is being created, as it often takes upward of 10 to 12 minutes. Here’s the code to kick off the operation:
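A typical invocation looks something like this (the cluster name and VM size here are illustrative; any 2-core, 7GB D-series or larger size fits the guidance above):

```shell
# Create a one-node AKS cluster in the resource group created earlier
az aks create --resource-group my-aks-cluster \
  --name my-aks-cluster \
  --node-count 1 \
  --node-vm-size Standard_DS2_v2 \
  --generate-ssh-keys
```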

Getting Docker images into the AKS cluster requires the use of a Docker registry. Using a public Docker registry is acceptable for open source distributions, but most projects will want to secure application code in a private registry.

Azure provides a secure, managed Docker registry solution with Azure Container Registry (ACR). To set up an ACR instance, run the following command:
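Something like the following will work (the Basic SKU and the enabled admin account shown here are illustrative choices):

```shell
# Create a private container registry with the built-in admin account enabled
az acr create --resource-group my-aks-cluster \
  --name <REGISTRY_NAME> \
  --sku Basic \
  --admin-enabled true
```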

Note that the registry name must be unique across all of the ACR-hosted registry names on Azure.

The Kubernetes CLI

Kubectl is used to interact with an AKS cluster. It’s available for all OSes and has multiple approaches to installation. You can find more information at bit.ly/2Q58CnJ. There’s also a Web-based dashboard that can be very helpful for getting a quick overview of the state of the cluster, but it doesn’t cover every API operation available and you may often find yourself falling back to the kubectl CLI. Even if you’re not a fan of command-line operations, over time you’ll likely grow to appreciate the power of kubectl. In combination with the Azure CLI, any operation can be performed without leaving the shell.

Once kubectl has been installed, credentials can be imported locally to authenticate to the cluster using the following command:
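For the resource group and cluster created earlier, that’s:

```shell
# Merge the cluster's credentials and context into ~/.kube/config
az aks get-credentials --resource-group my-aks-cluster --name my-aks-cluster
```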

Running this command updates ~/.kube/config with your cluster URI, signing authority and credentials. It also adds a context for setting the cluster as the current configuration. The kubectl configuration can hold contexts for multiple clusters, which can easily be switched using the kubectl config command. Additionally, there are open source utilities available to make switching contexts easier (kubectx and kubectxwin).

Once the credentials have been imported, connectivity to the cluster can be tested by listing the running nodes with the kubectl get nodes command. You should see something like this:
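The node names, ages and versions will differ, but the output has this general shape:

```shell
kubectl get nodes

# Illustrative output -- actual names and versions will vary:
# NAME                       STATUS   ROLES   AGE   VERSION
# aks-nodepool1-12345678-0   Ready    agent   10m   v1.11.3
```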

Adding a Container Registry Secret for Deployments

Kubernetes has a secure way to store sensitive data using Secrets. For instance, to prevent ACR credentials from being stored in the source code, a secret should be created and referenced from the Kubernetes deployment manifests. To retrieve the credentials for the ACR service, run the following command:

> az acr credential show --name <REGISTRY_NAME>

Next, use kubectl to generate a special type of secret (docker-registry) that’s designed specifically to store credential tokens provided by Docker. The code that follows will create a secret called my-docker-creds using the credentials that were retrieved from the ACR query. Be aware that the username is case-sensitive and ACR will make it lowercase by default for the built-in admin account:
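A sketch of that command, with placeholders for the values returned by the credential query:

```shell
# Store the ACR credentials as a docker-registry secret in the cluster
kubectl create secret docker-registry my-docker-creds \
  --docker-server=<REGISTRY_NAME>.azurecr.io \
  --docker-username=<REGISTRY_USERNAME> \
  --docker-password=<REGISTRY_PASSWORD> \
  --docker-email=<EMAIL_ADDRESS>
```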

The Dockerfile in the repository uses a multi-stage approach that splits the build into separate build and runtime stages. This significantly reduces the size of the final image by not including the entire SDK in the distribution.
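A multi-stage Dockerfile for an ASP.NET Core Web API follows this general shape (a sketch — the base-image tags and the WebApi.dll assembly name are illustrative):

```dockerfile
# Build stage: the full SDK image restores packages and publishes the app
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /app
COPY . .
RUN dotnet publish -c Release -o out

# Runtime stage: only the published output lands in the smaller runtime image
FROM microsoft/dotnet:2.1-aspnetcore-runtime
WORKDIR /app
COPY --from=build /app/out .
ENTRYPOINT ["dotnet", "WebApi.dll"]
```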

Pushing the Image to the Registry

Docker works on the concept of local images and the containers that execute them. A Docker image cannot be pushed directly to the cluster. Instead, the image must be hosted in a location from which Kubernetes can pull it down to the cluster. The ACR registry is a secure location that allows images to be managed centrally between development, continuous integration and cluster environments.

The image must be built and tagged with the format <REGISTRY>/<REPOSITORY>/<IMAGE>:<TAG> so that Docker will know where to push the image upstream. The repository, which can have any name, provides a way to separate out registry images into logical groups. The following code demonstrates how to build and tag an image before pushing to ACR. While the latest tag is supported, when working with Kubernetes it’s highly advisable to use semantic versioning. It makes managing deployments and rollbacks much easier when you can leverage version numbers. Here’s the code:
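Assuming a repository named my-repo and an image named my-api (both illustrative), the sequence looks like this:

```shell
# Authenticate Docker to the registry, then build, tag and push
# using a semantic version rather than the latest tag
az acr login --name <REGISTRY_NAME>
docker build -t <REGISTRY_NAME>.azurecr.io/my-repo/my-api:1.0.0 .
docker push <REGISTRY_NAME>.azurecr.io/my-repo/my-api:1.0.0
```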

Deploying the Application

Kubernetes uses manifests to describe every object in the cluster. Manifests are YAML files that are managed through the Kubernetes API. A deployment manifest type is used to describe the resources, image source and desired state of the application. Figure 1 is a simple manifest that tells Kubernetes which container to use, the number of desired running instances of the container and labels to help describe the application in the cluster. It also adds the name of the secret to authenticate to ACR when pulling the remote image.
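A deployment manifest along these lines matches that description (the my-api names and image tag are illustrative, but the secret name matches the one created earlier):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
  labels:
    app: my-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
      - name: my-api
        image: <REGISTRY_NAME>.azurecr.io/my-repo/my-api:1.0.0
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: my-docker-creds
```

The manifest is applied to the cluster with kubectl apply -f deployment.yaml.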

Kubernetes uses a concept called a pod to group one or more containers into a logical, scalable instance within the cluster. Typically, you’ll have one container per pod. This allows you to independently scale any service for your application. A common misconception is to put all the services of an application—such as Web Application and Database—in a single pod. Doing this doesn’t allow the Web front end to scale individually from the database, and you’ll lose many of the benefits of Kubernetes as a result.

There’s a common scenario where it’s acceptable to have an additional container in a pod—it’s a concept called a sidecar. Imagine a container that observes your application container and provides metrics or logging. Placing both containers in a single pod provides real benefits in this instance. Otherwise, it’s generally best to keep the ratio of one container per pod until you have a solid understanding of the limitations (and benefits) of grouping containers.

Once the deployment has completed, the status of the application pod can be checked with the following command:
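The command and an illustrative result (pod names are generated, so yours will differ):

```shell
kubectl get pods

# NAME                      READY   STATUS    RESTARTS   AGE
# my-api-5c7f8d9b6-2lqtz    1/1     Running   0          2m
# my-api-5c7f8d9b6-x8rwv    1/1     Running   0          2m
```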

Note that two instances of the pods are running to satisfy the replica set.

Creating a Service

Now that the application Docker container is deployed into the cluster, a Service is required to make it discoverable. A Kubernetes Service makes your pods discoverable to other pods within the cluster. It does this by registering itself with the cluster’s internal DNS. It also provides load balancing across all of the pod replicas, and manages pod availability during pod upgrades. A service is a very powerful Kubernetes concept that’s required to provide availability during rolling, blue/green, and canary deployment upgrades to an application. The following command will create a service for the deployment:
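Assuming the deployment is named my-api and the container listens on port 80 (both illustrative), kubectl expose creates the service:

```shell
# Register a cluster-internal service in front of the deployment's pods
kubectl expose deployment my-api --port=80 --type=ClusterIP
```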

By default, services are only accessible from within the cluster, hence the absence of an external-ip. The kubectl CLI provides a convenience to open a proxy between the local machine and the cluster to interactively check if it’s running, which can be seen in this code:
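kubectl port-forward provides that proxy. Assuming the my-api service name from earlier (the /api/values route is the stock ASP.NET Web API template route and is likewise illustrative):

```shell
# Tunnel local port 8080 to port 80 on the service inside the cluster
kubectl port-forward service/my-api 8080:80

# In a second shell, requests to localhost reach the service:
curl http://localhost:8080/api/values
```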

Adding HTTP Routing

Kubernetes is secure by default and you must explicitly expose services that you wish to access from outside the cluster. This is an excellent design feature from a security perspective, but can be confusing to a first-time user. The most common way to access HTTP-based services inside the cluster is by using a Kubernetes Ingress controller. Ingress controllers provide a way to route requests to internal services based on a hostname and path through an HTTP proxy entrypoint.

Before Ingress was added to Kubernetes, the primary way to expose a service was by using a LoadBalancer service type. This would cause a proliferation of load balancers—one per service—that would each need to be separately managed. With Ingress, every service in the cluster can be accessed by a single Azure Load Balancer, significantly reducing cost and complexity.

AKS provides a convenient add-on to extend the cluster with an Nginx proxy that acts as an Ingress controller for handling these requests. It can be enabled via the Azure CLI with the following command:
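For the cluster created earlier, the add-on (named http_application_routing) is enabled like so:

```shell
# Turn on the HTTP application routing add-on for the existing cluster
az aks enable-addons --resource-group my-aks-cluster \
  --name my-aks-cluster \
  --addons http_application_routing
```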

You should see three new controllers in the list: the Ingress controller, external DNS and the default back end. The default back end provides a response to clients when no route to an existing service can be found. It’s very similar to a 404 Not Found handler in a typical ASP.NET application, except that it runs as a separate Docker container. It’s worth noting that while the HTTP application routing add-on is great for experiments, it’s not intended for production use.

Exposing the Service

An Ingress is a combination of an Ingress controller and an Ingress definition. Each service will have an Ingress definition that tells the Ingress controller how to expose the service. The following command will get the DNS name for the cluster:
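The zone name is stored in the cluster’s add-on profile and can be queried with az aks show:

```shell
# Retrieve the DNS zone created by the HTTP application routing add-on
az aks show --resource-group my-aks-cluster --name my-aks-cluster \
  --query addonProfiles.httpApplicationRouting.config.HTTPApplicationRoutingZoneName \
  -o tsv
```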

The ingress annotation kubernetes.io/ingress.class notifies the Ingress controller to handle this specification, as shown in Figure 3. Using the cluster DNS name resolved earlier, a subdomain will be added for the host along with a “/” root path. Additionally, the service name and its internally exposed port must be added to tell the Ingress controller where to route the request within the cluster.
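An Ingress definition along these lines matches that description (the my-api names are illustrative; <CLUSTER_DNS_ZONE> stands in for the zone name resolved above, and addon-http-application-routing is the class the AKS add-on watches for):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-api
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
spec:
  rules:
  - host: my-api.<CLUSTER_DNS_ZONE>
    http:
      paths:
      - path: /
        backend:
          serviceName: my-api
          servicePort: 80
```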

Wrapping Up

At this point, we have a single-node AKS cluster running alongside an ACR service hosting the application Docker images with secured secrets. This should be a great starting point for exploring the many additional capabilities that Azure AKS Kubernetes can offer. I have a simple philosophy, “Buy what enables you and build what differentiates you.” As you can see, Azure simplifies Kubernetes so that both developers and DevOps professionals can focus on more critical tasks.