If you work with Azure, sooner or later you will end up authoring Azure Resource Manager (ARM) templates. ARM templates can seem painful at first, but once you get the hang of them they are a great way to build out and deploy your Azure infrastructure as code. In this blog post I am not going to go into detail on authoring ARM templates; instead, I am going to list the extensions I use in VS Code to enhance the ARM template authoring experience. Whenever I demo my ARM templates in VS Code, people ask how they can make their VS Code look like mine, so I figured it made sense to write up how I have VS Code configured for ARM templates.

If you are not using VS Code, you should change that and start using it today! I use it for pretty much everything: scripting in PowerShell, coding, any time I need a text editor, and more. I even use it to work directly with Azure via Cloud Shell and to work with Docker containers and Kubernetes clusters. For anyone not familiar with it, here is a quick snapshot: VS Code is an open-source code editor developed by Microsoft that is cross-platform, running on Windows, Linux, and macOS. At a high level, here is what VS Code includes:

Integrates with build and scripting tools to perform common tasks, making everyday workflows faster.

Has support for Git to work with source control systems such as Azure DevOps, Bitbucket, and more.

A large marketplace of third-party extensions.

As you can see there is a ton you can do with VS Code. It is a must-have for anyone doing CloudOps work with Azure. Now let's look at the VS Code extensions I use for ARM templates. I am including the link for each extension I talk about; you can also simply install them right from within VS Code.

The Azure Resource Manager Tools extension provides language support for ARM templates and template language expressions, and can be used to create and edit Azure Resource Manager templates.

VS Code natively supports JSON, and Azure Resource Manager Tools makes VS Code ARM Template aware. One of the biggest benefits it gives me is the ARM Template Outline, which makes it much easier and faster to navigate the sections of an ARM template. Here is what it looks like.

Next up are two extensions that should both be added: Material Theme and Material Theme Icons.

The Material Theme extension gives you some very cool themes and works in combination with the Azure Resource Manager Tools extension to color code your ARM template code. The color coding highlights different parts of the template, such as parameters, variables, and functions, making it much easier to read through all of the code in ARM templates. Here is an example:

Material Theme Icons adds a nice set of icons to your VS Code, extending beyond just ARM templates. Again, this makes it visually easier to navigate around VS Code and ARM templates. I typically use a PowerShell deployment script to deploy ARM templates from VS Code into Azure, and this icon theme makes it easy to tell ARM template files and PowerShell files apart.

Here is what it looks like without and with Material Theme Icons.

The final extension I want to cover is ARM Snippets. This extension was developed by Sam Cogan (@samcogan), a fellow Microsoft MVP. In addition to the marketplace link for this extension, you can find Sam's GitHub repo for it here: https://github.com/sam-cogan/arm-snippets-vscode.

This extension adds snippets to VS Code for creating Azure Resource Manager templates. This is helpful when you are working in VS Code and need to add something to your template, for example a parameter or a resource. You simply type arm and a menu appears with a list of the available snippets. For example, if you want to add a virtual machine, you can type arm-vm and a list of Windows and Linux VM resource snippets will appear. Click on the one you want and it will add the code block for you. This makes authoring templates much faster. This is shown in the following screenshot:

The snippets include:

Skeleton ARM Template (Note: This will load the skeleton of a fresh new ARM template.)

Windows and Linux Virtual Machines

Azure Web Apps

Azure Functions

Azure SQL

Virtual Networks, Subnets and NSGs

Key Vault

Network Interfaces and IPs

Redis

Application Insights

DNS

Virtual Machines

And more…
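As a point of reference, the skeleton snippet expands into roughly the following empty template (a minimal sketch; the exact snippet output may differ slightly):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {},
  "variables": {},
  "resources": [],
  "outputs": {}
}
```

From there you fill in the parameters, variables, and resources sections, which is exactly where the other snippets come in.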

Note that the ARM Snippets extension is derived from the Cross Platform Tooling Samples, a set of templates, snippets, and scripts for creating and deploying Azure Resource Manager templates in cross-platform environments. The samples appear to be updated more often and are worth looking into loading, although they do not have a friendly installer like the ARM Snippets extension does. Here is the link to the Cross Platform Tooling Samples GitHub repo: https://github.com/Azure/azure-xplat-arm-tooling

End
Result:

Below is
a screenshot of what your ARM Templates will look like after loading all of the
extensions mentioned in this blog post into your VS Code.

That wraps up this
blog post. I hope this is helpful to those out there working with ARM Templates
in VS Code. If you have any additional tips to share please add a comment.
Happy authoring!

I recently read a career advice for IT professionals in 2019 article and was reminded again by a friend and fellow MVP on his blog that change is constant in IT.

Part of being an IT professional is keeping an eye on and ramping up on new technology. Change in IT is constant and it is critical to explore new technology so you can bring innovation to your organization and ensure you are ready if the business decides they want to use a specific technology to gain an edge in the market.

With all the excitement around blockchain, I decided to spend time ramping up on Azure's blockchain technology, specifically Azure Blockchain Workbench. Azure Blockchain Workbench is a way for developers and IT pros to get a blockchain network up and running quickly.

Once Azure Blockchain Workbench is up and running, IT pros can administer the network and developers can dive right into building blockchain apps. Most people that have heard of blockchain are familiar with cryptocurrency such as Bitcoin; fewer know of or associate blockchain with smart contracts. Azure Blockchain Workbench powers smart contract technology. A smart contract is a self-executing contract between two or more parties involved in a transaction. Getting started with blockchain can seem intimidating, but with Azure Blockchain Workbench it is not hard. I wrote a white paper that you can use to get started; it takes you beyond cryptocurrency into the world of smart contracts using Azure Blockchain Workbench.

Almost every day, on news websites, the radio, or TV, you can expect to hear some mention of cryptocurrency and, increasingly, something about blockchain.

Blockchain has a strong buzz and yet it is still misunderstood by many. It is an exciting time for technology and blockchain is one of the many reasons why. Blockchain is a public distributed digital ledger. Transactions between parties are processed in an efficient, verifiable and immutable way using cryptography. Transactions are tracked without a central entity such as a bank processing and keeping a record of the transactions. The ledger in a Blockchain is distributed across many nodes in the Blockchain network. Each time a transaction occurs the ledger is reconciled across all the nodes.

The blockchain you typically hear about is related to some cryptocurrency such as Bitcoin, Litecoin, or Ripple. Blockchain goes way beyond this and is a technology being widely explored by enterprises. Here are some examples of blockchain in use within the enterprise: Microsoft's Xbox uses blockchain to deliver royalty statements to game publishers, FedEx uses blockchain for storing shipping records, and 3M is using blockchain for a new label-as-a-service concept. The commonality among those examples is that they are all using blockchain smart contract technology.

A smart contract is a self-executing contract between two or more parties involved in a transaction. A smart contract holds each party in the transaction responsible without the need for a third-party authority. Smart contracts are essentially code running on top of a blockchain that are digitally facilitated, verified, and auto-enforced under the set of terms laid out within the contract.

Unlike blockchain used for cryptocurrency, blockchain used for smart contracts enables more complex scenarios beyond the exchange of digital currency. To illustrate, think about being able to buy and sell cars without a DMV processing the exchange of titles; instead, the title would be verified and transferred digitally.

In today's fast-moving world of technology, it is important to be able to take your solution from idea to MVP, aka 0 to 60, as fast as possible. That is the goal of the Azure Blockchain Workbench (ABW). As shown in the following image, with ABW you can go from idea > consortium blockchain network > code/use pre-built blockchain app > blockchain app ready to use in a short amount of time.

When I first started
with Blockchain I was able to go from nothing to a fully functional Blockchain
app in a couple of hours using ABW. As seen in the previous image ABW is made
up of a combination of Azure services and capabilities. The main services
include:

An App Service Plan with two web apps and two web APIs

An Application Insights instance

An Event Grid Topic

A couple of Key Vaults

A Service Bus Namespace

A SQL Server with a SQL Database

A couple of Azure Storage accounts

Two Virtual Machine scale sets that consist of the ledger
nodes and workbench microservices

Other components leveraged by ABW are Azure Active Directory for identity; Azure Monitor (optional) and a Log Analytics workspace for logging (deployed with Azure Monitor); a mobile app for both iOS and Android; and a REST-based gateway service API to integrate with blockchain apps. Workbench provides the infrastructure needed to build and deploy blockchain applications, so when you deploy ABW it includes everything you need. As of now ABW only supports Ethereum as its target blockchain; Microsoft has plans to add the Hyperledger and Corda blockchains in the future.

ABW is designed to make it
easy for developers to bring Blockchain to the enterprise. ABW is deployed in
the Azure Portal via a solution template. You can deploy Ethereum or attach to
an existing one. After the Blockchain Workbench is deployed developers have the
option to either create a Blockchain app or use one of the Applications
and Smart Contract Samples from a repository maintained by
Microsoft.

These blockchain apps consist of configuration metadata and a smart contract. The configuration metadata file is in JSON format and determines the multi-party workflow; the smart contract is written in a language named Solidity and determines the business logic of the blockchain application itself. Together, the configuration and smart contract make up the blockchain application user experience. The Applications and Smart Contract Samples can be used as-is to take blockchain for a test run, or they can be modified to fit an organization's specific needs. As an example, some of the information you can modify in the configuration is the application name, display name, states, and application roles.
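To make that concrete, here is a trimmed-down sketch of what the configuration metadata looks like, loosely modeled on the Asset Transfer sample (the property names and values here are illustrative, not the full schema):

```json
{
  "ApplicationName": "AssetTransfer",
  "DisplayName": "Asset Transfer",
  "Description": "Tracks the sale of an asset between two parties",
  "ApplicationRoles": [
    { "Name": "Owner", "Description": "The party selling the asset" },
    { "Name": "Buyer", "Description": "The party buying the asset" }
  ],
  "Workflows": [
    {
      "Name": "AssetTransfer",
      "DisplayName": "Asset Transfer",
      "StartState": "Active",
      "States": [
        { "Name": "Active" },
        { "Name": "OfferPlaced" },
        { "Name": "Accepted" }
      ]
    }
  ]
}
```

The roles and states you define here drive who can take which action at each step of the workflow, while the matching Solidity contract implements what each transition actually does.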

As you can see, it is relatively easy to get a blockchain application up and going. Another real benefit of running a blockchain application on Azure is the integration points with many of the other services available on Azure. Here are a few examples. ABW writes a copy of the blockchain's on-chain data from the distributed ledgers to an off-chain SQL database. Developers can connect to this database to work with the blockchain data for any number of scenarios; one of them could be reporting in Power BI. The Workbench has a REST API, Service Bus, IoT Hub, and Event Grid that can be used for integration with other technology such as IoT devices, other systems, and Azure Stream Analytics to further expand the possibilities. With the Blockchain Workbench, developers also have access to one of Azure's automation tools, Logic Apps, opening the door to a world of further automation scenarios.

There is much more to the Azure Blockchain Workbench than can be covered in a single blog post. The main point of this post is to show how a developer can go from 0 to 60 in a short amount of time, with minimal effort, to stand up the scaffolding needed to support a blockchain app. For a deeper dive into the Workbench, I recommend downloading my Blockchain Beyond Cryptocurrency white paper once it is released. Thanks for reading. To get started with the Azure Blockchain Workbench visit this link: https://azure.microsoft.com/en-us/features/blockchain-workbench

CloudSkills.fm is a podcast by fellow Microsoft MVP Mike Pfeiffer, a veteran in the tech space with five books under his belt and numerous courses on Pluralsight. The podcast can be found here: cloudskills.fm. Mike is an all-around good guy, and I was honored to be a featured guest on one of his podcast episodes. The podcast is weekly, with technical tips and career advice for people working in the cloud computing industry, and is geared toward developers, IT pros, and those making the move into the cloud.

On this episode Mike and I talked about managing both the technical and non-technical aspects of your career in the cloud computing industry. We also discussed DevOps topics around Docker, Azure Kubernetes Service, and Terraform, and cloud topics around Azure management, including my five points to success with cloud. You can listen to the podcast here:

I'm very excited that Opsgility recently published a new Azure course by me titled "Deploy and Configure Infrastructure". This course is part of the AZ-300 certification learning path for Microsoft Azure Architect Technologies. More about the AZ-300 certification can be found here: https://www.microsoft.com/en-us/learning/exam-az-300.aspx. The course is over 4 hours of Azure content!

Description of the course:

In this course you will learn how to analyze resource utilization and consumption, create and configure storage accounts, create and configure VMs for Windows and Linux, create connectivity between virtual networks, implement and manage virtual networking, manage Azure Active Directory, and implement and manage hybrid identities.

This year's summit was one of the best MVP summits I have been to since becoming a Microsoft MVP! I focused on Azure, Azure Stack, containers, and orchestration platforms. That's about all I can say about the summit; everything else is under NDA!

On top of all the learning at the summit, it was great connecting with other MVPs and the Microsoft teams. This I can share. Here are some highlights from the summit in pictures:

It was full of cool
stickers starting off with one for the 2019 MVP Summit.

Here are some of the core CDM MVPs in front of Building 92, including Bob Cornelissen, John Joyner, Janaka Rangama, Jakob Svendsen, Sam Erskine, Cameron Fuller, Robert Hedblom, Dieter Wijckmans, and others.

Here I am with Josue
Vidal an MVP from Brazil.

With John Joyner
scoring some OneNote swag.

Hanging with the
CountryCloudboy Kristopher Turner learning about Azure stuff.

With some of the CDM MVPs (too many to name) plus Bradley Borrows and Tracey Cummings from Microsoft.

Another one of the
many stickers. This one is from the monitoring team.

With Eric Berg,
Bradley Borrows, and Sam Erskine.

I was waiting for this set of stickers the entire summit: some cool Azure management stickers. Thanks Joseph Chan, Satya Vel, and team.

Had a chance to meet
the legend Mark Russinovich who also happens to be the CTO of Microsoft Azure!

Lately I have been hearing a lot about a solution named Rancher in the Kubernetes space. Rancher is an open source Kubernetes Multi-Cluster Operations and Workload Management solution. You can learn more about Rancher here: https://www.rancher.com.

In short, you can use Rancher to deploy and manage Kubernetes clusters deployed to Azure, AWS, or GCP, to their managed Kubernetes offerings like GKE, EKS, and AKS, or even to clusters you rolled yourself. Rancher also integrates with a bunch of third-party solutions, for things like authentication (Active Directory, Azure Active Directory, GitHub, and Ping) and logging (Splunk, Elasticsearch, or a Syslog endpoint).

Recently registration opened for some Rancher/Kubernetes/Docker training, so I decided to go. The primary focus was on Rancher, while also covering some good info on Docker and Kubernetes. This was really good training with a lot of hands-on time; however, there was one problem with the labs. The labs had instructions and setup scripts ready to go to run Rancher locally on your laptop or on AWS via Terraform. There was nothing for Azure.

I ended up getting my Rancher environment running on Azure, but it would have been nice to have some scripts or templates ready to go to spin up Rancher on Azure. I did find some ARM templates to spin up Rancher, but they deployed an old version, and it was not clear where in the templates they could be updated to deploy the new version of Rancher. I decided to spend some time building out a couple of ARM templates that can be used to quickly deploy Rancher on Azure and add a Kubernetes host to Rancher. The ARM template I pulled together pulls the Rancher container from Docker Hub, so it will always deploy the latest version. In this blog post I will spell out the steps to get Rancher up and running in under 15 minutes.

The repository consists of ARM templates for deploying Rancher and a host VM for Kubernetes. NOTE: These templates are intended for labs to learn Rancher. They are not intended for use in production.

In the repo ARM Template #1 named RancherNode.JSON will deploy an Ubuntu VM with Docker and the latest version of Rancher (https://hub.docker.com/r/rancher/rancher) from Docker Hub. ARM Template #2 named RancherHost.JSON will deploy an Ubuntu VM with Docker to be used as a Kubernetes host in Rancher.

Node Deployment

Deploy the RancherNode.JSON ARM template to your Azure subscription through "Template Deployment" or another deployment method. You will be prompted for the info shown in the following screenshot:
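If you would rather script the deployment than click through the portal, the Azure CLI can deploy the same template (this is a sketch; the resource group name and location below are placeholders, and the CLI will prompt for any template parameters not supplied):

```shell
# Create a resource group for the Rancher lab (name/location are placeholders)
az group create --name rancher-lab-rg --location centralus

# Deploy the RancherNode.JSON template into the resource group
az group deployment create \
  --resource-group rancher-lab-rg \
  --template-file RancherNode.JSON
```

The same approach works for the RancherHost.JSON template in the next step, deployed into the same resource group.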

Host Deployment

Deploy the RancherHost.JSON ARM template to your Azure subscription through "Template Deployment" or another deployment method. Note that you should deploy this into the same resource group that you deployed the Rancher Node ARM template into. You will be prompted for the info shown in the following screenshot:

After the Rancher
Node and Rancher Host ARM templates are deployed you should see the following
resources in the new Resource Group:

Name – Type

RancherVNet – Virtual network

RancherHost – Virtual machine

RancherNode – Virtual machine

RancherHostPublicIP – Public IP address

RancherNodePublicIP – Public IP address

RancherHostNic – Network interface

RancherNodeNic – Network interface

RancherHost_OSDisk – Disk

RancherNode_OSDisk – Disk

Next, navigate to the Rancher portal in a web browser. The URL is the DNS name of the Rancher Node VM. You can find the DNS name by clicking on the Rancher Node VM in the Azure portal on the overview page. Here is an example of the URL:

The Rancher portal
will prompt you to set a password. This is shown in the following screenshot.

After setting the
password the Rancher portal will prompt you for the correct Rancher Server URL.
This will automatically be the Rancher Node VM DNS name. Click Save URL.

You will then be logged into the Rancher portal and will see the cluster page. From here you will want to add a cluster; doing this is how you add a new Kubernetes cluster to Rancher. In this post I will show you how to add a cluster on the Rancher Host VM. When it's all said and done, Rancher will have deployed Kubernetes to the Rancher Host VM. Note that you could add a managed Kubernetes service such as AKS, but we won't do that in this blog post. I will save that for a future one!

Click on Add Cluster

Under "From my own existing nodes", click on Custom, give the cluster a name, and click Next.

Next, check all the boxes for the Node Options, since all the roles will be on a single Kubernetes node. Copy the code shown at the bottom of the page, click Done, and run the code on the Rancher Host.

In order to run the
code on the Rancher Host you need to SSH in and run it from there. To do this
follow these steps:

In the Azure Portal, from within the resource group click on the Rancher Host VM.

On the Overview page click on Connect.

Copy “ssh ranchuser@rancherhost.centralus.cloudapp.azure.com” from the Connect to virtual machine pop up screen.

Open a terminal, either in Azure Cloud Shell or via something like the VS Code terminal, and paste the "ssh ranchuser@rancherhost.centralus.cloudapp.azure.com" command in.

Running the code
will look like this:

When done, you can run docker ps to see that the Rancher agent containers are running.

In the Rancher portal, under Clusters, you will see the Rancher host being provisioned.

The status will
change as Kubernetes is deployed.

Once it’s done
provisioning you will see your Kubernetes cluster as Active.

From here you can see a bunch of info about your new Kubernetes cluster. Also notice that you can even launch kubectl right from here and start running commands! Take some time to click around to see all the familiar stuff you are used to working with in Kubernetes. This is pretty cool and simplifies the management experience for Kubernetes.
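If you pull the cluster's kubeconfig down from Rancher, the same information is available from your own command line (a quick sketch; it assumes kubectl is installed and pointed at the new cluster):

```shell
# List the nodes Rancher provisioned and confirm they are Ready
kubectl get nodes

# See the system pods Rancher deployed alongside your workloads
kubectl get pods --all-namespaces

# Check the health of the cluster components
kubectl get componentstatuses
```

Everything these commands return is the same data surfaced in the Rancher cluster pages.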

If you want to add
more nodes or need the configuration code again just click the ellipsis button
and edit.

In Edit Cluster you
can change the cluster name, get and change settings and copy the code to add
more VMs to the cluster.

That’s the end of
this post. Thanks for reading. Check back for more Azure, Kubernetes, and
Rancher blog posts.

Azure Policy can be used to enforce rules and effects on resources in your Azure subscriptions. It is part of the Azure governance and management toolbox native to Azure. I actually wrote a blog post all about Azure Policy here as part of my native cloud management in Azure blog series.

In this blog post I want to dig into requiring tags on resource groups via Azure Policy. There is a sample policy ARM template to accomplish this here:

Be sure you add a parameter for every rule. Also, in the example I gave, I removed the "equals": "[parameters('tagValue')]" condition from the rules because I did not want to populate the tag value; I simply needed to require the tag and leave the value open for the person creating the resource to fill in. Here is the full example policy ARM template:
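For context, the rule portion of a require-tag policy for resource groups generally looks like the following sketch (the tagName parameter name is an assumption; note there is no "equals" condition on the tag value, so only the presence of the tag is enforced):

```json
{
  "policyRule": {
    "if": {
      "allOf": [
        {
          "field": "type",
          "equals": "Microsoft.Resources/subscriptions/resourceGroups"
        },
        {
          "field": "[concat('tags[', parameters('tagName'), ']')]",
          "exists": "false"
        }
      ]
    },
    "then": {
      "effect": "deny"
    }
  }
}
```

With the deny effect, any attempt to create a resource group without the required tag is rejected, leaving the tag's value up to whoever creates the resource group.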

Scenario:

So, your team has recently been tasked with developing and running a new application. The team made the decision to take a microservices-based approach and also decided to utilize Docker containers and Azure as the cloud platform. Great, now it's time to move forward, right? Not so fast. There is no question that Docker containers will be used; what is in question is where you will run them. In Azure, containers can run on Azure's managed Kubernetes service (AKS), on an App Service Plan in an App Service Environment (ASE), or on Azure Service Fabric (ASF). Let's look at each of these Azure services, including an overview, pros, cons, and pricing.

The Azure Kubernetes Service (AKS), Azure App Service Environment (ASE), and Azure Service Fabric (ASF) pros and cons charts below are clickable.

Conclusion:

Choose Azure Kubernetes Service if you need more control, want to avoid vendor lock-in (it can run on Azure, AWS, GCP, and on-premises), need the features of a full orchestration system, want flexibility in autoscale configurations, need deeper monitoring, need flexibility with networking, public IPs, DNS, and SSL, need a rich ecosystem of add-ons, will have many multi-container deployments, and plan to run a large number of containers. AKS is also low cost.

Choose Azure App Service Environment if you don't need as much control, want a dedicated SLA, don't need deep monitoring or control of the underlying server infrastructure, want to leverage features such as deployment slots and blue/green deployments, will have a small number of simple multi-container deployments via Docker Compose, and plan to run a smaller number of containers. Regarding cost, running a containerized application in an App Service Plan on ASE tends to be more expensive than running it in AKS or Service Fabric. The higher cost is because with an App Service Plan on ASE you are paying for a combination of resources and the managed service, whereas with AKS and ASF you are only paying for the resources used.

Choose Service Fabric if you want a full microservices platform, need the flexibility now or in the future to run in the cloud and/or on-premises, will run native code in addition to containers, want automatic load balancing, and want low cost.

A huge thanks to my colleague Sunny Singh (@sunnys101) for giving his input and reviewing this post. Thanks for reading, and check back for more Azure and container content soon.

Part of running Kubernetes is being able to monitor the cluster, the nodes, and the workloads running in it. Running production workloads, regardless of PaaS, VMs, or containers, requires a solid level of reliability. Azure Kubernetes Service comes with monitoring provided from Azure, bundled with the semi-managed service. Kubernetes also has built-in monitoring that can be utilized.

It is important to note that AKS is a free service and Microsoft aims to achieve at least 99.5% availability for the Kubernetes API server on the master node side.

But because AKS is a free service, Microsoft does not carry an SLA on the Kubernetes cluster service itself. Microsoft does provide an SLA for the availability of the underlying nodes in the cluster via the Azure Virtual Machines SLA. Without an official SLA for the Kubernetes cluster service, it becomes even more critical to understand your deployment and have the right monitoring tooling and plan in place, so that when an issue arises the DevOps or CloudOps team can address, investigate, and resolve it.

The monitoring service included with AKS gives you monitoring from two perspectives: directly from an AKS cluster, and across all AKS clusters in a subscription. The monitoring looks at two key areas, "Health status" and "Performance charts", and consists of:

Insights – Monitoring for the
Kubernetes cluster and containers.

Metrics – Metric based
cluster and pod charts.

Log Analytics – K8s and Container
logs viewing and search.

Azure Monitor

Azure Monitor has a containers section. This is where you will find a health summary across all clusters in a subscription, including ACS. You also will see how many nodes and system/user pods a cluster has and whether there are any health issues with a node or pod. If you click on a cluster from here, it will bring you to the Insights section on the AKS cluster itself.

If you click on an AKS cluster, you will be brought to the Insights section of AKS monitoring on the actual AKS cluster. From here you can access the Metrics and Logs sections as well, as shown in the following screenshot.

Insights

Insights is where you will find the bulk of useful data when it comes to monitoring AKS. Within Insights you have four areas: Cluster, Nodes, Controllers, and Containers. Let's take a deeper look into each of the four areas.

Cluster

The Cluster page contains charts with key performance metrics for your AKS cluster's health. It has performance charts for your node count with status, pod count with status, and aggregated node memory and CPU utilization across the cluster. Here you can change the date range and add filters to scope down to the specific information you want to see.

Nodes

After clicking on the Nodes tab you will see the nodes running in your AKS cluster, along with uptime, the number of pods on each node, CPU usage, memory working set, and memory RSS. You can click on the arrow next to a node to expand it, displaying the pods running on it.

You will notice that when you click on a node or pod, a property pane is shown on the right-hand side with the properties of the selected object. An example for a node is shown in the following screenshot.

Controllers

Click on the Controllers tab to see the health of the cluster's controllers. Again, here you will see CPU usage, memory working set, and memory RSS for each controller, along with what is running on it. As an example, in the following screenshot you can see the kubernetes-dashboard pod running on the kubernetes-dashboard controller.

The properties of the kubernetes-dashboard pod, shown in the following screenshot, give you information like the pod name, pod status, Uid, labels, and more.

You can drill in to see the container the pod was deployed with.

Containers

The Containers tab is where all the containers in the AKS cluster are displayed. And as with the other tabs, you can see CPU usage, memory working set, and memory RSS. You also will see status, the pod each container is part of, the node it's running on, its uptime, and whether it has had any restarts. In the following screenshot the CPU usage metric filter is used, and I am showing a container that has restarted 71 times, indicating an issue with that container.

In the
following screenshot the memory working set metric filter is shown.

You can also filter the containers shown by using the search-by-name filter.

You can also see a container's logs in the Containers tab. To do this, select a container to show its properties. Within the properties, you can click on View container live logs (preview), as shown in the following screenshot, or View container logs. Container log data is collected every three minutes. STDOUT and STDERR are the log output from each Docker container that is sent to Log Analytics.

Kube-system logs are not currently collected and sent to Log Analytics. If you are not familiar with Docker logs, more information on STDOUT and STDERR can be found in this Docker logging article: https://docs.docker.com/config/containers/logging.
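If you happen to be on the node itself, you can view the same STDOUT/STDERR streams directly with docker logs (a sketch; the container name below is a placeholder):

```shell
# Find the ID or name of the container you are interested in
docker ps

# Show its STDOUT/STDERR output; --tail limits history, -f follows new lines
docker logs --tail 100 -f my-app-container
```

Log Analytics simply gives you the same stream data centralized, searchable, and retained beyond the life of the container.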

Clicking on View container logs will bring you to the Log Analytics log search page with that container's logs shown in the results pane.

Metrics

In the Metrics section you can see metric-based cluster and pod charts that help you see the information that is important to you about your AKS cluster. Note that this service is still in preview, so more functionality and metrics will be added later. Here is a screenshot with a couple of example charts showing pods by phase split by namespace and the total available cores in a cluster.

Currently the only available metric namespace is microsoft.containerservice/managedclusters, the only aggregation available as of now is Sum, and the metrics you can see are:

Within the Metrics section you can pin charts to your Azure dashboard, and you can create an alert based on a condition, such as when pods are in a failed state.

Log Analytics

Log Analytics is used across many Azure services for viewing logs and running searches to analyze and find specific data, identify trends, patterns, issues, and more. In this section you can gain deep insights into your AKS cluster and containers. Here is the log schema collected in Container Insights:

The data types in the ContainerInsights schema are what appear in the Log Analytics search results. When you click on Logs from within the AKS cluster you will see the Log Analytics search page as shown in the following screenshot:

You can use the Filter to narrow down the results of a search. In the following screenshot the ContainerStatus facet is selected. Adding this facet will show any pods that have a terminated status. By clicking Apply & Run, the facet is added to the current query, which is then run, updating the results.

The following screenshot shows what the query
looks like with the ContainerStatus facet with a value of terminated added.

On the Log Analytics search page you can
build queries to pull back specific data. Here are some example queries.
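To give a feel for it, here are a couple of sample queries against the Container Insights tables (the table and column names are from the ContainerInsights schema; treat these as starting sketches for your own queries):

```kusto
// Pull the most recent container log entries
ContainerLog
| sort by TimeGenerated desc
| limit 50

// Count pods by status, surfacing anything not in a Running state
KubePodInventory
| where PodStatus != "Running"
| summarize count() by Name, PodStatus
```

From queries like these you can keep refining with where clauses and summarize operators until you isolate exactly the data you need.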

Also on the Log Analytics search page, you can save queries for later use, copy a link directly to a query for sharing, set up alerts based on conditions, and pin a chart to a shared Azure dashboard, as shown in the following screenshot.

Kubelet Logs

If something goes wrong with a node, a good portion of the troubleshooting can be done using the node monitoring provided in Azure Monitor. If you need to go beyond Azure Monitor, you can utilize the kubelet logs. You can view the kubelet logs from any of the AKS nodes using journalctl. To do this, first SSH to the cluster node you want to see the logs on. Once connected via SSH, run

sudo journalctl -u kubelet -o cat

That will start rolling through the kubelet logs so you can gain further insight into what is happening on the node.

Kubernetes Master Node Logs

In AKS the Kubernetes master node logs are not collected by default. These logs are not collected because Microsoft manages the Kubernetes master nodes, so you typically do not have to worry about troubleshooting them. In the event that there is a need to see logs from any of the master nodes, log collection can be turned on so that they are sent to a Log Analytics workspace.

To enable master node log collection, in the Azure portal navigate to the AKS resource group. NOTE: do not go to the resource group with the name format MC_ResourceGroupName_AKSClusterName_Region. Once in the AKS resource group, click on Diagnostic settings and then click on the AKS cluster.

Then click on Turn on diagnostics.

Configure the diagnostics settings as in the following screenshot to send the logs to a Log Analytics workspace. You will give the diagnostics collection a name, select or create a Log Analytics workspace, and select the master node log categories that you want to collect.

After you save the diagnostics log settings, you should see them set on the AKS resource group, as shown in the following screenshot.

To see the actual logs, go to the Log Analytics workspace that you sent the logs to and run a search query like the one shown in the following screenshot.

You can run one of the following
search queries to see logs from the Kubernetes master nodes:
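For reference, queries along these lines will surface the master node logs once diagnostics are flowing into the workspace (the categories shown are the standard AKS diagnostic log categories; adjust to the ones you enabled):

```kusto
// API server logs from the Kubernetes master nodes
AzureDiagnostics
| where Category == "kube-apiserver"
| project TimeGenerated, log_s

// Controller manager logs
AzureDiagnostics
| where Category == "kube-controller-manager"
| project TimeGenerated, log_s
```

The log_s column holds the raw log line from each master node component, which you can filter further with contains or parse operators.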

In the Kubernetes dashboard you will also find health and performance information that can help identify and troubleshoot issues. The purpose of this blog post was to show the monitoring capabilities available in Azure for AKS, so I will show some of the options available in the Kubernetes dashboard but will not go deep into the monitoring and logging available directly in Kubernetes.

On the overview page inside the Kubernetes dashboard you will see an all-up view of the health and performance of the cluster, services, pods, and more. As you can see in the following screenshot, there is an issue spanning the deployments, pods, and replica sets.

As we drill into the Pods page, we can see that there is a container that is constantly restarting and is in a failed state. That is the cause of the issues shown on the overview dashboard across the deployments and replica sets. We can remove this pod and re-deploy it.