In the final part of our look into Azure Container Instances (ACI), we will deploy a microservices application that spans Google Cloud Platform (GCP) and Microsoft Azure. This multicloud architecture is based on the ACI Connector for Kubernetes, which bridges the gap between a full-blown orchestration engine (Kubernetes) and serverless containers (ACI).

Azure Container Instances is designed to be a lightweight serverless environment meant to run single-container workloads. The job of managing a microservices application composed of multiple containers is better handled by an orchestration engine like Kubernetes, Mesosphere DC/OS, or Docker in Swarm mode. A full-fledged container orchestrator handles tasks including scheduling, service discovery, scaling, health monitoring, logging, and much more. It provides end-to-end lifecycle management of microservices.

ACI handles the lifecycle of one instance at a time. It lacks the advanced scheduling capabilities and other functions needed to manage microservices. By bridging the gap between a container orchestrator and ACI, customers get the best of both worlds. To demonstrate this, Microsoft has built the ACI Connector for Kubernetes as a reference implementation. It is possible to build similar connectors for other container management platforms.

When I encountered this project, the first thing that struck me was the integration of a Kubernetes cluster running in Google Cloud with Azure Container Instances. This tutorial gives you a glimpse of what is possible with the ACI Connector for Kubernetes. Please note that this project is experimental and not suitable for production environments.

We will set up a testbed in the US West region spanning both Google Cloud Platform and Microsoft Azure. On the GCP side, we will have a three-node Kubernetes cluster. A Resource Group to hold the ACI instances will be created in Azure.

The microservices application that we deploy was covered in detail in a previous article. It’s a Node.js-based web application that talks to Azure Cosmos DB.

Before proceeding with the next steps, ensure that you have active accounts with GCP and Microsoft Azure. You will also need to download and configure the latest versions of the kubectl, gcloud, and az command-line tools on your machine.

Configuring Google Container Engine

Let’s go ahead and create a standard three-node cluster in GKE. Make sure that it is deployed in the us-west1-a zone:

$ gcloud container clusters create gke-aci --zone us-west1-a

With the Kubernetes cluster in place, let’s point kubectl to GKE:

$ gcloud container clusters get-credentials gke-aci --zone us-west1-a

Verify the setup with the following command. It should show the three nodes of GKE:

$ kubectl get nodes

Configuring Microsoft Azure

In Azure, we will create a Resource Group that holds all the resources that we provision for this project. This Resource Group will hold Container Instances and Cosmos DB. We also need to create an Active Directory Service Principal to enable authentication of ACI from Kubernetes.

Let’s start with the Resource Group. Note that it is also created in the US West region to reduce latency:

$ az group create --name gke-aci --location westus
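The Service Principal mentioned earlier can be created with the Azure CLI. This is a minimal sketch: the name below is a placeholder of my own choosing, and the Contributor role is granted at the subscription scope for simplicity.

```shell
# Create a Service Principal for the ACI Connector to authenticate with Azure.
# The --name value is a placeholder; note the appId, password, and tenant
# fields printed in the JSON output -- the connector will need them.
$ az ad sp create-for-rbac --name aci-connector-demo --role Contributor

# The connector also needs the subscription ID:
$ az account show --query id --output tsv
```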

We will now create an Azure Cosmos DB instance with a MongoDB API endpoint, which will act as the database backend for the application. Replace the connection string placeholder with the actual value shown by the list-connection-strings command:
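A sketch of the provisioning steps; the account name gke-aci-db is a placeholder of mine and must be globally unique:

```shell
# Create a Cosmos DB account with the MongoDB API in our Resource Group.
$ az cosmosdb create --name gke-aci-db --resource-group gke-aci --kind MongoDB

# Retrieve the MongoDB connection string for the application.
$ az cosmosdb list-connection-strings --name gke-aci-db --resource-group gke-aci
```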

Next, deploy the ACI Connector to the Kubernetes cluster, configured with the Service Principal credentials and the Resource Group created above. After a minute or so, check the availability of the ACI virtual node with the kubectl get nodes command.
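For reference, here is a sketch of a connector manifest modeled on the example in the project’s repository. The image tag and environment variable names are assumptions based on the experimental project and may have changed; all the credential values are placeholders to be filled in from the Service Principal created earlier.

```shell
# Write a manifest for the ACI Connector pod. All <...> values are
# placeholders for the Service Principal credentials and subscription.
cat > aci-connector.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: aci-connector
spec:
  containers:
  - name: aci-connector
    image: microsoft/aci-connector-k8s:canary
    env:
    - name: AZURE_CLIENT_ID
      value: <appId>
    - name: AZURE_CLIENT_KEY
      value: <password>
    - name: AZURE_TENANT_ID
      value: <tenant>
    - name: AZURE_SUBSCRIPTION_ID
      value: <subscriptionId>
    - name: ACI_RESOURCE_GROUP
      value: gke-aci
EOF
```

Deploy it with kubectl create -f aci-connector.yaml. Once running, the connector registers itself with the cluster as a virtual node named aci-connector.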

Deploying the Microservices Application

The final step is to create a pod that gets deployed to ACI. We will do this by creating the following file saved as web.yaml. Update the connection string environment variable with the Cosmos DB connection string. The YAML file is available as a Gist on GitHub.
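For reference, here is a sketch of what web.yaml looks like; the image name, environment variable, and port below are placeholders of my own, so use the Gist for the exact definition.

```shell
# Write the pod definition. nodeName pins the pod to the ACI virtual node.
cat > web.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  nodeName: aci-connector
  containers:
  - name: web
    image: <your-web-image>   # placeholder for the Node.js app image
    env:
    - name: DB                # placeholder variable for the connection string
      value: <cosmos-db-connection-string>
    ports:
    - containerPort: 3000
EOF
```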

Notice the additional nodeName parameter in the pod definition, which forces the Kubernetes scheduler to place the pod on the node called aci-connector. Because of this parameter, Kubernetes delegates the pod to the connector, which takes over the job of creating the container instance in ACI.

Deploy the pod with the following command:

$ kubectl create -f web.yaml

Let’s check that the pod has been created. For brevity, we will omit some of the columns in the output:
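Assuming the default namespace, the wide output includes the pod’s IP address and the node it landed on:

```shell
# Show pods along with their IP addresses and assigned nodes.
$ kubectl get pods -o wide
```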

This confirms that the pod is created on the node aci-connector. Make note of the IP address.

Now, let’s verify the same in Azure. We will use the following command to list the container instances:

$ az container list --query [*].name

Interestingly, the Azure CLI also shows the same IP address reported by the Kubernetes CLI:

$ az container show --name web -g gke-aci --query ipAddress.ip

Let’s go ahead and access the application in the browser.

You can easily extend this deployment through the creation of a replication controller and a load balancer. Try removing the nodeName constraint from the replication controller definition and watch how Kubernetes spreads the pods across all the nodes, including ACI.
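As a starting point, here is a hedged sketch of such a replication controller and load-balanced service; the image and port are placeholders carried over from the pod definition:

```shell
# Write a replication controller and a LoadBalancer service for the web app.
cat > web-rc.yaml <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: web
spec:
  replicas: 3
  selector:
    app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: <your-web-image>   # placeholder
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 3000
EOF
```

Create both with kubectl create -f web-rc.yaml. Adding nodeName: aci-connector to the pod template pins every replica to ACI; leaving it out lets the scheduler spread the replicas across the regular nodes.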