In this blog post I am going to walk through the steps for deploying WordPress to Azure Kubernetes Service (AKS) using the MySQL and WordPress Docker images. Note that this is just one way to do it; another way to deploy WordPress to AKS would be to use a Helm chart, such as the WordPress Helm chart by Bitnami: https://bitnami.com/stack/wordpress/helm. Here are the images we will use in this blog post:

The first thing we need to do is save these files as mysql-deployment.yaml and wordpress-deployment.yaml respectively.

Next, we need to set up a password for our MySQL database. We will do this by creating a secret on our K8s cluster. To do this, launch bash or PowerShell in Azure Cloud Shell as in the following screenshot and run the following syntax:
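The secret can be created with a single kubectl command. A minimal sketch — the secret name mysql-pass matches what the sample WordPress manifests reference, and YOUR_PASSWORD is a placeholder you should replace:

```shell
# Create a secret named mysql-pass holding the MySQL password.
# "mysql-pass" is the name the sample manifests reference; replace
# YOUR_PASSWORD with a strong password of your own.
kubectl create secret generic mysql-pass \
  --from-literal=password='YOUR_PASSWORD'

# Keep in mind that secrets are stored base64-encoded, not encrypted.
# You can reproduce the encoding locally:
echo -n 'YOUR_PASSWORD' | base64
```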

The secret is now created. To confirm, you can run the following syntax to list the secrets:

kubectl get secrets

You also can see the secret in the Kubernetes dashboard as shown in the following screenshot.

Next, the mysql-deployment.yaml and wordpress-deployment.yaml files from the beginning of this post need to be uploaded to your Azure cloud drive storage.

You can also do this in the Cloud Shell as shown in the following screenshot.

Run ls in the shell to make sure the files are on your clouddrive.

You will need your home drive path. Mine was /home/steve. To see what yours is, click on Download; it will show you the path.
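You can also confirm the path directly in the Cloud Shell. A quick check (the /home/steve path above is just my example):

```shell
# Print your home directory -- this is the path prefix you will use
# when referencing the uploaded yaml files.
echo "$HOME"

# The Azure cloud drive is mounted under your home directory; list it
# to confirm the yaml files are there (Cloud Shell only).
ls "$HOME/clouddrive" || echo "clouddrive not mounted (not in Cloud Shell)"
```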

Next create the MySQL Pod and service by running the following syntax.

kubectl apply -f /home/steve/mysql-deployment.yaml

NOTE: You could use kubectl create -f /home/steve/mysql-deployment.yaml instead of apply to create the MySQL pod and service. I use apply because I typically take the declarative object configuration approach. kubectl apply is essentially kubectl create + kubectl replace: to update an object after it has been created with kubectl create, you would need to run kubectl replace.
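To make the difference concrete, here is a rough sketch of the two workflows side by side (file path from my environment):

```shell
# Declarative: apply creates the object if it does not exist and
# patches it when it does, so the same command handles updates too.
kubectl apply -f /home/steve/mysql-deployment.yaml

# Imperative: create only works the first time; after editing the
# yaml you must run replace to push the change.
kubectl create -f /home/steve/mysql-deployment.yaml
kubectl replace -f /home/steve/mysql-deployment.yaml
```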

Note that the mysql yaml file has syntax to create a persistent volume. This is needed so that the database stays intact even if the pod fails, is moved, etc. You can check that the persistent volume claim was created by running the following syntax:
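For reference, the persistent volume claim in the sample MySQL manifest looks roughly like this (the name and size here follow the public WordPress-on-Kubernetes sample and may differ in your copy):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce        # volume is mounted read-write by a single node
  resources:
    requests:
      storage: 20Gi        # AKS satisfies this claim with Azure disk storage
```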

kubectl get pvc

Also, you can run the following syntax to verify the mysql pod is running:

kubectl get pods

Deploying the WordPress Pod and service is the same process. Use the following syntax to create the WordPress pod and service:

kubectl apply -f /home/steve/wordpress-deployment.yaml

Again, check to ensure the persistent volume was created. Use the following syntax:

kubectl get pvc

NOTE: When checking right after you created the persistent volume it may be in a pending status for a while, as shown in the following screenshot:

You can also check the persistent volume using the K8s dashboard as shown in the following screenshot:

With the deployment of MySQL and WordPress we created two services. The MySQL service has a ClusterIP that can only be accessed internally. The WordPress service has an external IP that is attached to an Azure load balancer for external access. I am not going to expand on what Kubernetes services are in this blog post, but know that they are typically used as an abstraction layer in K8s providing access to the pods behind them, following those pods regardless of which node they run on. For more information about Kubernetes services visit this link: https://kubernetes.io/docs/concepts/services-networking/service.
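The difference between the two services comes down to the service type in each manifest. Sketched from the sample manifests (names may differ in your copies):

```yaml
# MySQL: no type specified, so it defaults to ClusterIP --
# reachable only from inside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
---
# WordPress: type LoadBalancer tells AKS to provision an Azure
# load balancer with a public IP in front of the pods.
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: frontend
```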

In order to see that the services are running properly and find out the external IP you can run the following syntax:

kubectl get services (to see all services)

or

kubectl get services wordpress (to see just the WordPress service)
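If you want just the external IP by itself, for example to use in a script, a jsonpath query works (assuming the service is named wordpress as above):

```shell
# Print only the public IP assigned by the Azure load balancer.
# Returns an empty string while the IP is still being provisioned.
kubectl get service wordpress \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```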

You also can view the services in the K8s dashboard as shown in the following screenshot:

Well now that we have verified the pods and the services are running let’s check out our new WordPress instance by going to the external IP in a web browser.

Thanks for checking out this blog post. I hope this was an easy to use guide to get WordPress up and running on your Azure Kubernetes Service cluster. Check back soon for more Azure and Kubernetes/Container content.

Azure Kubernetes Service (AKS) is a fully managed Kubernetes (K8s) offering from Microsoft on the Azure platform. AKS reduces the management overhead of running your own K8s instance while still letting you take full advantage of container orchestration. Microsoft takes care of K8s health monitoring and maintenance. With AKS you only manage the agent nodes while Microsoft manages the master nodes. Also, with AKS you get integration with many Azure services such as load balancers, RBAC, Azure storage, etc.

In this blog post I am going to walk through the setup of an AKS cluster step by step. This serves as an intro to AKS to show how easy it is to get started with Kubernetes in Azure. In a follow-up blog post I will dive into AKS more, showing how to deploy MySQL and WordPress containers on AKS. Before we get into the setup of AKS there are a few things to note:

With the AKS managed service you only pay for the agent nodes within your AKS cluster. There is no cost for the master nodes and the managed service itself is free.

At the time of this blog post AKS only supports Linux containers. There is a workaround for this until Windows nodes and containers come to AKS.

The Kubernetes API server is exposed as a public fully qualified domain name (FQDN). Access to it should be restricted; this can be done using K8s RBAC and Azure Active Directory (AAD).

Deploy AKS

Housekeeping is done, now let's get into the deployment of AKS. The first thing you need to do within the Azure portal is go to Create a resource and search for Kubernetes. Select Kubernetes Service.

Click on create.

You will now see the setup. The setup consists of the sections shown in the following screenshot:

Let's walk through each section.

Basics

Here you need to give your AKS instance a name and select the region, K8s version, DNS prefix, and node size and count.

Authentication

Kubernetes has its own RBAC within its authentication and authorization system. Azure Active Directory (AAD) can be integrated with this for authentication. Once the AAD and K8s integration is set up, AAD users can be used for Kubernetes role-based access control (RBAC) to cluster resources. Select yes to enable RBAC and integration with AAD.

It is recommended to set up your own service principal in AAD. For this blog post I let the deployment create one. The service principal is used by K8s for managing Azure cloud resources attached to the cluster; it interacts with the Azure APIs. For example, when you set up a load balancer service in K8s, AKS creates an Azure load balancer, and the service principal is what is used for authentication to create it.

Networking

In this section you choose what you want for networking with AKS. If you select Basic, AKS will create all needed VNets, subnets, NSGs, etc. AKS clusters cannot use the following ranges: 169.254.0.0/16, 172.30.0.0/16, and 172.31.0.0/16. If you select Advanced you can choose an existing VNet or create a new one, specifying the subnet, IP range, DNS settings, etc. You would select Advanced if you need more control over the virtual networking.

HTTP application routing is used to make application endpoints in the AKS cluster publicly accessible. Enabling this essentially configures an Ingress controller in the AKS cluster. When getting started with AKS I recommend leaving this disabled and doing more research on K8s Ingress controllers here: https://kubernetes.io/docs/concepts/services-networking/ingress, as there are other options for making applications publicly accessible. In the meantime, while getting started with AKS, you can use the LoadBalancer service type for external access to your applications running on AKS.

Monitoring

With AKS you have the option to utilize container monitoring from Azure Monitor. This will give you performance and health monitoring. The monitoring data comes directly from an AKS cluster, or from all your AKS clusters, via Azure Monitor — more specifically, Log Analytics. In the future I plan to post a deeper blog about monitoring AKS.

If you choose to enable this you will need to set up a new Log Analytics workspace or use an existing one.

Tags

You can set tags for the AKS cluster.

Create

After all the sections are completed the new AKS configuration will be validated. After it is validated, click on Create.
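For completeness, the same deployment can be scripted with the Azure CLI instead of the portal. A minimal sketch; the resource group name, cluster name, and node count below are placeholders:

```shell
# Create a resource group to hold the AKS cluster service.
az group create --name myAKSrg --location centralus

# Create the cluster itself; --enable-addons monitoring wires up
# the Azure Monitor container monitoring described above.
az aks create \
  --resource-group myAKSrg \
  --name myAKSCluster \
  --node-count 2 \
  --enable-addons monitoring \
  --generate-ssh-keys

# Pull credentials into your kubeconfig so kubectl can connect.
az aks get-credentials --resource-group myAKSrg --name myAKSCluster
```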

Exploring AKS

After the AKS cluster is created you will see it in Azure under Kubernetes services.

Also, you may notice two new resource groups in your Azure subscription. The first resource group will be the one you created during the AKS creation. This is the resource group that contains the Azure K8s cluster service. If you selected an advanced network configuration during deployment to create a new VNet, you will see that here as well.

You will also see a second resource group with a name format similar to MC_ResourceGroupNAME_AKSClusterNAME_REGION. As shown in the following screenshot I have a resource group named MC_AKS12118RG_AKS12118_centralus. This resource group contains the individual AKS cluster resources such as the nodes.

This resource group also contains supporting Azure services like DNS, public IPs, storage, load balancers, network security groups, and more. Note: do not make changes to the resources in this resource group directly; you should only make changes through the AKS service and K8s itself. For example, when you deploy a new load balancer service in K8s, the corresponding Azure load balancer will automatically be created.

Access Kubernetes Dashboard

Next you can access the K8s cluster via a shell or access the dashboard. Before you can access the dashboard, the service principal account that was created during the AKS deployment will need a ClusterRoleBinding that assigns a dashboard admin role to it. Run the following syntax from the Azure Cloud Shell to do this:
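The documented binding at the time looked roughly like this (a sketch; note that binding cluster-admin is broad, so treat it as a lab-only shortcut):

```shell
# Bind the dashboard service account in kube-system to cluster-admin
# so the dashboard can read and display cluster resources.
kubectl create clusterrolebinding kubernetes-dashboard \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:kubernetes-dashboard

# Then open the dashboard through the Azure CLI (substitute your
# own resource group and cluster names).
az aks browse --resource-group <yourRG> --name <yourAKSCluster>
```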

2018 is almost over and it was an exciting year jam packed full of adventures! In this post I will recap some of the highlights from 2018.

New job

At the start of 2018 I started a new job with Avanade as a Group Manager: Cloud Transformation/DevOps. Who is Avanade? Avanade has 35,000-plus employees and is a global consulting firm focused on the Microsoft platform, owned by Accenture and Microsoft. I have been at the firm for exactly a year. It has been a fun ride so far, working with really smart people on some exciting and very large projects. After I joined, Avanade featured me on a Q&A spotlight blog.

I moved away from System Center related work to working only on Azure, Azure Stack, and DevOps projects. I had been shifting in this direction for a couple of years, but the new job helped me transition to the type of work I wanted to do. I am a firm believer in change, and through change comes growth. I still have my System Center skills, but for me it was time to change my career focus to be challenged and keep growing. Looking back, I would make the same choice over again. You can see this change reflected in the blog topics I have posted and the topics I have presented on this past year. My new role with Avanade has also helped me move my focus to cloud and DevOps.

I highly recommend any of the Microsoft Professional Program tracks. It is excellent training!

Changes in user group involvement

I stepped down from the MN System Center user group board after 6 years. The board is filled with great people and is as strong as ever. I will continue to speak at the UG from time to time when it makes sense, and may even attend some of the meetings. I needed to step down to step up my Azure community focus. I have been leading the MN Azure user group for the past few years, and now I am able to step up my involvement with this UG. More info about the MN Azure user group can be found here: http://mnazureusergroup.com. Some of the key meetings/topics from 2018, in my opinion, are:

As you can see we had some really great speakers both from the community and Microsoft. We do our best to collect the slides from presenters and upload them on the UG site. Past meeting info is here: http://mnazureusergroup.com/category/past-meetings

Key blog posts

With everything else going on I did my best to keep up with new blogs over the year. Same theme with my blog topics being focused on Azure and DevOps. Some of my key blogs from 2018 are:

This was the inaugural conference for Blacks in Technology. BITCon brought together all walks of life in tech, such as professionals, entrepreneurs, influencers, subject matter experts, students, and thought leaders.

Today, as a part of the Azure governance and management announcements at Microsoft Ignite 2018, Azure Blueprints Public Preview was announced. Azure Blueprints are a core part of the cloud governance and management story. They go hand in hand with Management Groups and will take the enterprise management story of Azure up a level. In this blog post I will take a deep dive into Azure Blueprints, explaining what they are, and give an example of how they can be used.

NOTE: This is a long blog post, so I have also published this content as a whitepaper. The whitepaper PDF can be downloaded here.

BLUEPRINTS OVERVIEW

At a high level, Azure Blueprints help you meet organizational cloud standards, patterns, and requirements through governed subscriptions, enabled and enforced by grouping artifacts like ARM templates, Azure policies, RBAC role assignments, and resource groups within a Blueprint.

Blueprints can be used to lay a cloud foundation, as cloud patterns, and group cloud governance frameworks. Blueprints are a one-click solution for deploying a cloud foundation, pattern, or governance framework to an Azure subscription. Think of an Azure Blueprint as re-usable design parameters for cloud that can be shared and used across an enterprise.

Azure architects typically map out and plan the many aspects of a cloud foundation for an organization, such as access management, networking, storage, policy, security/compliance, naming conventions, tagging, monitoring, backup, locations, and more. Now architects can take this designing a step further, build these designs as Azure Blueprints, and then apply them to subscriptions. Blueprints give architects a way to orchestrate the deployment of grouped components to speed up the development and provisioning of new Azure environments while ensuring they meet organizational compliance.

BLUEPRINTS ARE NOT AZURE POLICY

Azure Policy is a service that evaluates resource properties, for existing resources and during deployment, against allow or explicit-deny policies. It is used to ensure resources in an Azure subscription adhere to the requirements and standards of an organization.

Azure policies can exist on their own or be part of an Azure Blueprint. Blueprints do not replace Policy; policies are one of the artifact types that make up a Blueprint.

THE MAKEUP OF A BLUEPRINT

Definition

A Blueprint consists of a Definition. The Definition is the design of what should be deployed; it consists of the name of the Blueprint, the description, and the Definition Location. The Definition Location is the place in the Management Group hierarchy where the Blueprint Definition will be stored, and it determines the level at which assignment is allowed. Currently you must have Contributor access to a Management Group to be able to save a Blueprint Definition to it. A Blueprint can be assigned at or below the Management Group set in its Definition Location. Here is a diagram to visualize Blueprint assignment in relation to the Management Group hierarchy:

Artifacts

The Definition is where Blueprint Artifacts are added. As of right now the following is a list of the Artifact types:

Policy Assignments – Lets you add an Azure Policy. This can be a built-in or custom policy.

Role Assignments – Lets you add a user, app, or group and set the role. Only built-in roles are currently supported.

Azure Resource Manager templates – Lets you add an ARM Template. This does not let you import a parameters file. It does let you pre-set the parameters or set the parameters during assignment of the Blueprint.

Resource Groups – Lets you add a Resource Group to be created as a part of this Blueprint.

In my opinion the ARM Template artifact is the most impactful of the Blueprint artifact types because you can define such a variety of resources with it; it opens the Blueprint to the power of ARM in general. Hopefully in the future we will see more scripting capability, or the ability to load PowerShell scripts, runbooks, and/or Functions.

There are two levels in the Artifacts: the Subscription level and the Resource Group level. A Resource Group artifact can be created in a Subscription, but Resource Group artifacts cannot be added to another Resource Group artifact. An ARM Template artifact can only be created in a Resource Group artifact. Policy Assignment and Role Assignment artifacts can be created at either the Subscription or Resource Group level.

Assignment

After a Blueprint has been built it needs to be applied. Applying a Blueprint is known as Blueprint assignment. The assignment is essentially the “what was deployed” for a Blueprint. This is how the artifacts are pushed out to Azure and used to track and audit deployments in Azure.

Sequencing

When the assignment of a Blueprint is processed the default order of resource creation is:

Role assignment artifacts at the Subscription level

Policy assignment artifacts at the Subscription level

Azure Resource Manager template artifacts at the Subscription level

Resource group artifacts and its child artifacts (role assignment, policy assignment, ARM Templates) at the Resource Group level

When a blueprint includes multiple Azure Resource Manager templates, there may be a need to customize the order in which the Blueprint deploys artifacts during assignment. You customize the artifact deployment sequence by declaring a dependency, either in an ARM template you deploy the Blueprint from or within an ARM Template artifact in the Blueprint. You declare a dependency using the dependsOn property in JSON, which is essentially a string array of artifact names.
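As a sketch, a template artifact that must wait for another artifact named storageTemplate could declare the dependency like this (the artifact names, resource group, and empty template body here are hypothetical placeholders):

```json
{
  "kind": "template",
  "properties": {
    "displayName": "App servers",
    "dependsOn": [ "storageTemplate" ],
    "resourceGroup": "coreRG",
    "template": { }
  }
}
```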

Resource Locking

In cloud environments consistency is key, and naturally Azure Blueprints can also leverage resource locking in Azure. Blueprints have a Locking Mode, which can be applied to None or All Resources and is set during the assignment of the Blueprint. This decision cannot be changed later; if a locking state needs to be removed, you must first remove the Blueprint assignment.

Some Blueprint artifacts create resources during assignment. These resources can have the following states:

Artifacts that become resource groups automatically get the state Cannot Edit / Delete, but you can create, update, and delete resources within them.

The high-level stages of an Azure Blueprint are: create it, assign it to a scope, and track it.

Anatomy of a Blueprint:

Blueprints also have a REST API. I am not covering the REST API in this blog post, as I have not had the opportunity to spend much time working with it yet.

Now let’s look at building and assigning an Azure Blueprint.

BUILD A BLUEPRINT

Now I am going to give an example of building and using an Azure Blueprint in a cloud foundation mock scenario. In my mock scenario I have 3 Azure subscriptions. Each subscription should have a Core Services Resource Group consisting of a core VNet with 3 subnets, an NSG for each subnet, and a web subnet ready for DMZ traffic. For the core VNet, and any additional VNet added to the Core Services Resource Group, I need Network Watcher deployed.

Each subscription also should have a core storage account and blob storage ready for general storage needs. I want a tag applied to any Blueprint assignment labeling it with the assignment name, so it is easy to track. The last requirement is that the CloudOps team automatically be an owner of all core services resources. To accomplish all of this I created the following Blueprint:

Now let’s walk through the parts of creating and assigning the Blueprint. The first step is to create the Blueprint Definition.

In the basics step I give it a meaningful name and meaningful description. I set the Definition Location to the root of my Management groups. Doing this will allow me to assign this Blueprint to all 3 subscriptions in turn creating the core services RG in each subscription.

Next the Artifacts need to be added. Note that when adding an Artifact at the Subscription level you have these options as types:

The Resource Group artifact type is only available at the subscription level, and the ARM template artifact type is only available at the Resource Group level. I added the Resource Group that the core networking and core storage will be deployed into.

Another critical part of managing any cloud is security. In Azure, Microsoft has a service called Security Center. I am going to cover Security Center at a high level in this post, as Security Center itself is a big topic and is frequently changing with new improvements. It provides continuous assessment of your cloud's security posture and gives you a central place to monitor and manage your security. Security Center even covers hybrid cloud, with the ability to extend on-premises. With Security Center you can apply security policies to your cloud workloads and respond to attacks that occur.

Security Center has a “free” tier that can be used with any Azure subscription. In fact if you are running Azure you should at a minimum be utilizing the free tier of Security Center. The tiers are:

Not covered = not monitored by Security Center.

Basic Coverage = subscriptions under this “free” tier are under the limited, free level of Security Center.

Standard Coverage = subscriptions under this “standard” tier have the maximum level coverage by Security Center.

Key features in Security Center are:

– Security policy, assessment, and recommendations / free / Security Center performs continuous assessment and makes recommendations based on the security policies that are set. This is the core feature of Security Center.

– Event collection and search / standard / Security Center can store security events in a Log Analytics (LA) workspace, where they are also available for searching.

– Just in time VM access / standard / Just in time VM access locks down inbound traffic to IaaS VMs. With this feature, users are required to request access to a VM for a specified amount of time. A firewall rule is opened on an NSG allowing the access, and the ports are closed after the allotted access window. This can reduce the attack surface of VMs.

– Adaptive application controls / standard / This feature allows you to choose which applications are allowed to run on your VMs. It uses machine learning to analyze the applications running in the VM, and you then whitelist the ones you want to allow to run.

– Custom alerts / standard / Security Center has a number of default alerts, which fire when a threat or suspicious activity occurs. You can find the list of the default alerts here: security alerts. Security Center also supports custom alerts that you can set up, for which you define the conditions upon which an alert is fired.

It is important to note that Security Center leverages many other Azure services to power its capabilities. Some of these other Azure services include:

Azure Policy

Log Analytics

Logic Apps

Machine Learning

Now that we have looked at key features of Security Center, let's take a tour of it. The best way to navigate Security Center is via the navigation on the left-hand side, and that is how I will break it down. The menu sections are shown in the following table:

When you first click into Security Center you will see the Overview, which is also the first section under "General". Here is a screenshot of the overview pane.

Essentially the overview pane gives you a summary of your security posture, pulling in data from several sections in Security Center. Getting started is where you can launch a 60-day trial of the standard plan. Events brings you to a Log Analytics workspace dashboard, giving you another display and search capability for your security data. Search brings you directly to the Log Analytics search screen, where you can search your security data.

I looked for an existing ARM template that would create multiple Linux VMs. I found only one, which creates them in a scale set. The use case I was working with did not call for a scale set, so I needed a different template.

I found a simple ARM template for creating multiple Windows VMs on Azure here. It had exactly what I needed for my use case but did not cover Linux.

I modified the template and uploaded it to GitHub in case it is helpful to anyone else. The repo has two templates: one for Ubuntu and one for SUSE. When you deploy the template it will need the following parameters:

The ARM template will create an availability set (AS) with N VMs placed in it, network interfaces and a public IP for each VM, along with a VNet and subnet, as shown in the following screenshot:
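The multiple-VM part of such a template hinges on ARM's copy loop. A trimmed, hypothetical excerpt to show the pattern — the real templates in the repo carry full VM, NIC, and availability set definitions, and the parameter names here are illustrative:

```json
{
  "type": "Microsoft.Network/publicIPAddresses",
  "apiVersion": "2018-08-01",
  "name": "[concat(parameters('vmNamePrefix'), '-ip', copyIndex())]",
  "location": "[resourceGroup().location]",
  "copy": {
    "name": "ipLoop",
    "count": "[parameters('numberOfInstances')]"
  },
  "properties": {
    "publicIPAllocationMethod": "Dynamic"
  }
}
```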

When building things in Azure and Azure Stack I tend to create a lot of temporary resource groups. I like to remove these when I am done, and I have been using a PowerShell script for a while to make this easier. I have decided to upload this script, hoping others will find it useful as well. The script is named CleanupResourceGroups.ps1 and can be downloaded here: https://gallery.technet.microsoft.com/Cleanup-Azure-Resource-d95fc34e

The script can be used two ways:

#1 The script can be run using -Like with an expression such as Where-Object {$_.ResourceGroupName -like '*MySQL*'}, in which case the script removes any resource group with MySQL in its name. To use this option, un-comment the code in SECTION 1 (Uses -Like), change MySQL to whatever you want, comment out the SECTION 2 (Interactive RG selection) code, and then run the script.

#2 The script can be run interactively, allowing you to select multiple resource groups to remove. By default the SECTION 2 (Interactive RG selection) code is un-commented, so if you run the script it runs interactively, as shown in the following steps/screenshots.
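The heart of the interactive flow can be sketched in a few lines. This is a simplification of the script, not the script itself, written against the AzureRm cmdlets it uses; Out-GridView supplies the selection dialogs:

```powershell
# Sign in and pick the subscription interactively.
Connect-AzureRmAccount
$sub = Get-AzureRmSubscription | Out-GridView -PassThru -Title 'Select a subscription'
Select-AzureRmSubscription -SubscriptionId $sub.Id

# Multi-select the resource groups to delete, then remove them.
$groups = Get-AzureRmResourceGroup |
    Out-GridView -PassThru -Title 'Select resource groups to remove'
foreach ($g in $groups) {
    # -Force skips the confirmation prompt, so choose carefully.
    Remove-AzureRmResourceGroup -Name $g.ResourceGroupName -Force
}
```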

After running the script it will prompt you to select an Azure subscription.

Next the script will give you a list of resource groups in the subscription you selected. Select the resource groups you want to remove and click Ok.

The script will loop through and remove the resource groups you selected. Note that the script uses -Force, so it will not prompt to confirm that you intend to remove the resource groups. Make sure you want to remove them before running this script.

NOTE: When running this against Azure Stack, ensure you are logged into the Azure Stack environment. For info on how to do this visit: https://bit.ly/2LkvddG

That is it. It is a simple script to make removing many resource groups easier. I hope you find this script useful as I have!

I was recently working on an Azure Automation runbook that provisions an empty resource group in Azure. I was running into an issue when the runbook ran: the variable being used with New-AzureRmRoleAssignment was null. The errors I was receiving are:

You may have some differences, like the connection variable and the name of the Run As connection. The point here is that the Run As connection is what needs to have the proper permissions. You can find this account here to get the name and ApplicationId:
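For context, a runbook typically authenticates with the Run As connection like this — a standard boilerplate sketch; your connection name may differ:

```powershell
# Retrieve the Run As connection created with the Automation account.
$conn = Get-AutomationConnection -Name 'AzureRunAsConnection'

# Log in as the service principal behind that connection. Any cmdlet
# run afterwards (e.g. New-AzureRmRoleAssignment) acts with this
# principal's permissions, which is why the principal itself needs
# the proper AAD and Graph permissions.
Add-AzureRmAccount -ServicePrincipal `
    -TenantId $conn.TenantId `
    -ApplicationId $conn.ApplicationId `
    -CertificateThumbprint $conn.CertificateThumbprint
```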

To give the permissions, go to Azure Active Directory > the directory you are using in this automation > App registrations, and search based on the ApplicationId. Don't forget to select All apps in the drop-down.

Click on Add first and add the AAD and then Microsoft Graph permissions.

After you add the proper permissions make sure you click on Grant Permissions. The permissions are not actually applied until you do this. Once you click on Grant permissions you will see the prompt shown in the screenshot. Click Yes.

Verify the permissions have been added properly. In AAD go to All applications and select All applications. Find your service principal application.

Click on the service principal application's permissions.

Verify the AAD and Graph permissions are listed. If they are, the runbook should be good to go.

It has been a while since I have blogged about non-Microsoft technology. I recently moved to a new house and figured this was a good reason to upgrade my network and wifi equipment. I decided to go with the Ubiquiti Networks UniFi line, a physical hardware and software-defined networking (SDN) combo, which I deployed. After deploying UniFi I realized how bad the previous wifi solutions I have used were, and I wanted to blog about UniFi's solution. Let's jump in. Here is a list, with pictures, of the gear in my setup:

NOTE: I originally also bought the UniFi® Cloud Key. This is basically an embedded server that runs the UniFi Controller software for managing all the network gear. It kept rebooting every 5 minutes and was super-hot. I ended up returning it after talking to tech support. I will either buy one in the future when they fix it or I will just run the UniFi Controller software on my own server.

I decided to go with all UniFi gear because it works seamlessly together. The gear overall has great design, especially the APs, which mount to a wall or ceiling and blend in like smoke detectors. The real star of the UniFi solution, though, is the UniFi Controller software, which gives you centralized management of all of your network gear. With the controller software you can visualize the network in maps, get performance charts with real-time graphs, receive outage notifications and custom alerts, manage updates and schedule tasks, apply mass-configuration changes, get deep insights into metrics, and set up VLANs, multiple wifi networks, access schedules, guest networks, and more. I know this is just my home network, but I am a technical geek and am super excited to have this level of networking in my home. Now let's explore the UniFi Controller software in my setup.

In the UniFi Controller software you can add all of your devices, as the following screenshot shows. You can manage the devices from here — rebooting them, upgrading firmware, locating them, and more. Something cool about locating a device is that clicking Locate makes the device's blue light flash.

One of my favorite features of the UniFi Controller software is the ability to have network maps. You can upload custom floor plans into the UniFi Controller software and then you can place your devices on the map. In my scenario I uploaded maps for 3 floors. This screenshot shows the lower floor with the gateway and switch.

I have a main level map that has one of the AP’s.

I then have an upper map with the second AP. Something else to note about these maps: when an AP is shown, you can display wifi coverage, either 2.4G or 5G.

From the maps section of the UniFi Controller software you can also switch to the topology view. The topology view gives you a tree view of your devices and clients that are connected to devices. In the following screenshot you can see clients that are connected via CAT6 to the 24 port switch and you can see what clients are connected to each wifi AP. Something else shown in the screenshot is properties of a client. You can get device info, stats, and even deep packet inspection.