Get the public IP of the ingress controller. It may take a few minutes for an IP to be assigned to the service

$ kubectl get services --namespace kube-system --watch

When the IP is assigned, note the address and press Ctrl+C to exit the watch
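If you prefer to script the lookup rather than watch the table, a jsonpath query can pull the external IP directly. This is a sketch: the service name below is a placeholder for whatever your ingress controller's service is actually called.

```shell
# <ingress-service-name> is a placeholder; substitute your ingress controller's service
$ kubectl get service <ingress-service-name> --namespace kube-system \
    --output jsonpath='{.status.loadBalancer.ingress[0].ip}'
```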

Clone this repo

$ git clone https://github.com/rbitia/aci-demos.git

Change to the root folder of the source code

$ cd aci-demos

Replace <myResourceGroup> and <IP Address> with the values created in the previous steps, and replace <appName> with a DNS name of your choice. Then run the following shell script to bind an FQDN to your IP. Note the returned FQDN; you will use it to update your configuration file.
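The script itself is not reproduced here; one plausible sketch, using the Azure CLI's `az network public-ip list` and `az network public-ip update` commands, looks like the following. The `IP` and `DNSNAME` variables stand in for the <IP Address> and <appName> placeholders above.

```shell
#!/bin/bash
# Sketch only: bind a DNS label (FQDN) to the public IP assigned to the ingress.
IP="<IP Address>"     # the external IP noted in the earlier step
DNSNAME="<appName>"   # the DNS label you want for your app

# Find the Azure resource ID of the public IP matching that address.
PUBLICIPID=$(az network public-ip list \
    --query "[?ipAddress=='$IP'].[id]" --output tsv)

# Attach the DNS label and print the resulting FQDN.
az network public-ip update --ids "$PUBLICIPID" --dns-name "$DNSNAME" \
    --query "dnsSettings.fqdn" --output tsv
```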

The connector has been deployed, and with a kubectl get nodes you can see that the ACI Connector appears as a new node in your cluster. Now scale the image recognizer up to 10 replicas using the following command

$ kubectl scale deploy demo-fr-ir-aci --replicas 10
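To confirm the new replicas are landing on the ACI Connector's virtual node rather than the regular VM nodes, watch the NODE column in the wide pod listing (the virtual node's exact name varies by connector version):

```shell
$ kubectl get pods --output wide --watch
```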

Though we are using kubectl, the ACI Connector is dispatching pods to Azure Container Instances transparently, via the ACI connector node.
This virtual node has unlimited capacity and a per-second billing model, making it perfect for burst compute scenarios like this one.
If we wait a minute or so for the ACI containers to warm up, we should see image recognizer throughput increase dramatically.

Check out the dashboard to see throughput increase dramatically...

Here we can see throughput really beginning to pick up, thanks to the burst capacity provided by ACI.
This is powerful stuff: AKS and ACI combine to provide the best of “serverless” computing – invisible infrastructure and micro-billing – all managed through the open Kubernetes APIs. This kind of innovation – the marriage of containers and serverless computing – is important for the industry, and Microsoft is working hard to make it a reality.

Once you've completed the setup, you only need these commands during the live demo:
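A minimal sketch of that command list might look like the following; the scale-down replica count of 1 is an assumption for resetting between demo runs.

```shell
# Burst out to ACI during the demo
$ kubectl scale deploy demo-fr-ir-aci --replicas 10

# (show the dashboard throughput picking up)

# Scale back down to reset for the next run (replica count assumed)
$ kubectl scale deploy demo-fr-ir-aci --replicas 1
```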