Latest Blog Posts

There are many ways to create functionality in PowerShell, including basic cmdlets, aliases, and functions. When you use multiple combinations, it's important to understand the precedence. This is best understood by walking through a basic example.

First, just run:

get-process

This will result in processes being displayed as expected.

Now let's create a function called get-process that lists child items.

function get-process { Get-ChildItem }

Now if you run get-process it will show child items, so the function trumps the built-in cmdlet.

Now let’s create an alias so get-process points to get-service.

New-Alias get-process -Value get-service

Run get-process and it shows services, so an alias trumps a function (which in turn trumps the native cmdlet).
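You can confirm this precedence directly with Get-Command, which is a handy check whenever a name resolves to something unexpected:

# Show only what actually runs when you type get-process
Get-Command get-process

# List everything registered under the name; the alias wins over the function,
# which wins over the cmdlet (alias > function > cmdlet > native executable)
Get-Command get-process -All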

I have a demonstration environment where many users have accounts but never log on to AD directly nor look at their demonstration email mailbox. They only use the environment via Azure AD, where they log on using the replicated password hash. Because of this they don't receive password expiry notifications and continue to log on; however, if they try to access something that hooks into AD rather than Azure AD, the logon fails.

They wanted to be emailed about upcoming password expiry at their real email address. To accomplish this, the real email address was stored in extensionAttribute10. I didn't use proxyAddresses as this may contain SIP information. The attribute can easily be set with:

Set AD attribute

$aduser | Set-ADUser -Replace @{extensionAttribute10=$AltEmail}

I had a mailbox for a core process I use. That account has no other rights, so I placed the password in the script, but that's not ideal at all. If this were Azure Automation I could have used a credential object. I could at least have made the password harder to read by creating an encrypted version of it and storing that in the file (it's still reversible, just slightly harder to glance at!), e.g.

Secure password in PowerShell

ConvertFrom-SecureString (ConvertTo-SecureString -AsPlainText -Force 'Password123') # this is run once to generate the value
$securepassword = ConvertTo-SecureString "<the huge value from previous command on one line>" # this is then used in the script going forward

However, the account can't do anything except send email, and access to the script location was highly restricted, so I left the password as text, which was also easier to demonstrate below. In my environment I used the alternate approach above just to make the password a little harder to read at a glance :-). Replace the values with your own email and password.

The script looks for any password expiring in less than 10 days and emails a simple message. Customize as you like! It has a basic HTML block with a placeholder (MESSAGEHOLDER) that is replaced with a custom string for each user.
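The full script isn't reproduced here, but a minimal sketch of the core loop might look like the following; the SMTP server and sender address are placeholder assumptions, and the real script substitutes MESSAGEHOLDER in an HTML block rather than sending plain text:

# Minimal sketch, not the full script: find users whose password expires in
# under 10 days and mail their alternate address from extensionAttribute10
$Users = Get-ADUser -Filter {Enabled -eq $true -and PasswordNeverExpires -eq $false} `
    -Properties extensionAttribute10,'msDS-UserPasswordExpiryTimeComputed'

foreach ($User in $Users)
{
    if (-not $User.extensionAttribute10) { continue }  # no alternate email stored
    $Expiry = [datetime]::FromFileTime($User.'msDS-UserPasswordExpiryTimeComputed')
    $DaysLeft = ($Expiry - (Get-Date)).Days
    if ($DaysLeft -lt 10 -and $DaysLeft -ge 0)
    {
        # Placeholder body; the real script swaps MESSAGEHOLDER in an HTML block
        $Body = "Your password expires in $DaysLeft days ($Expiry)."
        Send-MailMessage -To $User.extensionAttribute10 -From 'notify@contoso.com' `
            -Subject 'Password expiry notice' -Body $Body -SmtpServer 'smtp.contoso.com'
    }
}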

I needed to add the members of a number of groups in one Azure AD tenant to a group in another Azure AD tenant, which would then be given access to a resource. The goal was that the added users would not have to redeem the invite, which is normally required when adding a B2B user. The first step was to invite a user via B2B the normal way; that user redeemed the invite and, in this case, was then made a global admin (another option would have been to enable guests to invite guests). The key point was that this user had the ability to invite people via B2B and could enumerate users in the invited Azure AD instance, which meant invites would not have to be redeemed.

My first version of the script was very simple; however, I soon realized I would have to rerun the script to add new users, so I enhanced it to extract the current members of the group and convert them back to regular email format (when a user is invited to Azure AD, the @ in their address is replaced with _ and the result is placed in a string whose components are separated by #). The script therefore extracts the first part and converts the final _ back to an @. It then only invites people who are not already members.

In the script below replace the group names, Azure AD names and IDs to meet your requirements.
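To illustrate the conversion logic, here is a standalone sketch with a made-up guest UPN:

# A guest UPN looks like 'john_savill.net#EXT#@contoso.onmicrosoft.com'
$guestUPN = 'john_savill.net#EXT#@contoso.onmicrosoft.com'
$mailPart = ($guestUPN -split '#')[0]             # 'john_savill.net'
$email    = $mailPart -replace '_([^_]+)$','@$1'  # replace the final _ with @
$email                                            # 'john@savill.net'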

I recently needed to create an AD site for each MTC (an office), add the IP range assigned to that MTC (which was in a CSV file), and then associate the site with a site link for its region. This is so the Active Directory automatic site coverage feature will enable DCs to populate per-site DNS records for the MTCs, ensuring authentication traffic uses the most optimal DC. The DCs are spread over four regional locations.

The CSV file simply had one or two second-octet numbers for the /16 IP ranges associated with each MTC. The code therefore enumerates each OU and checks whether the MTC can be found in the CSV data for the IP ranges. Then, if the site does not already exist, it is created, added to its regional site link (based on the parent OU name, and for NA whether it's East or West), and the IP ranges for the MTC are assigned.
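The full script isn't shown here, but the core of it uses the ActiveDirectory module's replication cmdlets; a minimal sketch for a single MTC, with placeholder names and octet, might look like:

# Minimal sketch for one MTC; site, link and octet values are placeholders
$MTCName = 'Chicago'
$SiteLink = 'NA-East'   # regional site link, derived from the parent OU
$SecondOctet = 42       # from the CSV, giving the 10.42.0.0/16 range

if (-not (Get-ADReplicationSite -Filter "Name -eq '$MTCName'"))
{
    New-ADReplicationSite -Name $MTCName
    # Add the new site to its regional site link
    Set-ADReplicationSiteLink -Identity $SiteLink -SitesIncluded @{Add=$MTCName}
    # Associate the /16 IP range with the site
    New-ADReplicationSubnet -Name "10.$SecondOctet.0.0/16" -Site $MTCName
}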

A lot of the work I do around Active Directory and Azure AD is for our OneMTC.net environment used by our global Microsoft Technology Centers. It is built around a number of region-based organizational units which then have child OUs for each MTC.

The requirement was to create a number of GPOs for each MTC which could then be modified by the local administrator of the MTC. To do this I created two template GPOs with most of the basic settings, which I then just needed to copy to new, per-MTC GPO instances and link to each MTC's OU. This was very easy with PowerShell and the GroupPolicy module.

I had also already created the GPOs for a couple of MTCs, so I wanted to skip creating the objects for them. In the PowerShell below you can see I have a variable for the top level of the MTC OU structure and an array of the top-level regional OUs. From there I have the names of the GPO templates and an array of the MTCs to skip. At that point I just enumerate the OUs, copy the GPOs, and link each new per-instance GPO to its OU.
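A minimal sketch of that flow, with placeholder OU paths, template names, and skip list, might look like:

# Minimal sketch; OU paths, template names and skip list are placeholders
$MTCRoot = 'OU=MTCs,DC=onemtc,DC=net'
$Regions = @('NA','Europe','Asia')
$Templates = @('MTC-Computer-Template','MTC-User-Template')
$SkipMTCs = @('Chicago','Dallas')   # GPOs already created for these

foreach ($Region in $Regions)
{
    $OUs = Get-ADOrganizationalUnit -Filter * -SearchBase "OU=$Region,$MTCRoot" -SearchScope OneLevel
    foreach ($OU in $OUs)
    {
        if ($SkipMTCs -contains $OU.Name) { continue }
        foreach ($Template in $Templates)
        {
            # Copy the template to a per-MTC GPO and link it to the MTC's OU
            $NewGPO = Copy-GPO -SourceName $Template -TargetName "$($OU.Name)-$Template"
            New-GPLink -Name $NewGPO.DisplayName -Target $OU.DistinguishedName
        }
    }
}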

In this blog I want to walk through a solution I recently architected and implemented along with two other MTC architects, which we needed for two reasons:

To provide insight into the VMs hosted in Azure across the global Microsoft Technology Center environment

To showcase the use of some key Microsoft cloud technologies

The Requirement

The global MTC organization is made up of around 30 offices which each have several Azure subscriptions to host the projects they are working on and environments used in customer activities. Additionally, there are several global, shared Azure subscriptions that host core infrastructure and experiences. These subscriptions are tied to various Azure AD tenants depending on requirements. The primary subscription for each MTC also hosts a virtual network that is part of a global IP space that is connected via one of four regional ExpressRoute circuits to the MTC worldwide VPN that provides connectivity between all MTC offices.

While there is a standard governance and process guide, each MTC has control of its own subscriptions and resources. However, from a central MTC organization perspective, insight into several key factors was required.

Are the VMs registered with the central Log Analytics instance to report inventory and patch state? Log Analytics is part of the Operations Management Suite and is used to accept log information of almost any sort, then provides powerful analytical capabilities to use that information for insight into the environment. A number of solutions are included that provide visibility into best practices, patch status, anti-malware status and much more. For OS instance visibility, Log Analytics uses the Microsoft Monitoring Agent (MMA), which is the same agent used by System Center Operations Manager.

What is the current patch status of the VM? This is provided by information sent to Log Analytics, and to Azure Security Center if registered. Azure Security Center (ASC) provides a central security posture location for Azure resources, including VM health, network health, storage health and more.

Is the VM connected to ExpressRoute? This can be found by checking the virtual network a VM is attached to and whether that virtual network has an ExpressRoute gateway connected.

Does the VM have a public IP, and is it healthy? Public IP existence can be found through the properties of the VM's IP configurations, and the health, which is based on the use of Network Security Groups to lock down communication, comes through ASC.

Is the VM older than 30 days? Object creations are logged in Azure. By default these logs are kept for 60 days, which enables a search for the VM creation event. If the event is not found it means the VM is older than 60 days; if found, the exact age can be determined. The age is useful as short-term VMs do not have the same level of reporting requirements, i.e. they do not have to be registered in OMS.
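As an illustration of the age check (a sketch, not the production code), the activity log can be searched for the VM's creation event using the AzureRM.Insights cmdlets:

# Sketch: look for the VM creation event in the activity log
$Events = Get-AzureRmLog -ResourceId $VM.Id -StartTime (Get-Date).AddDays(-60) |
    Where-Object { $_.OperationName.Value -eq 'Microsoft.Compute/virtualMachines/write' }
if ($Events)
{
    # The earliest matching event approximates the creation time
    $Created = ($Events | Sort-Object EventTimestamp | Select-Object -First 1).EventTimestamp
}
else
{
    $Created = $null  # no event within 60 days, so the VM is older than 60 days
}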

The insight into the health needed to be in a form that provides easy overall visibility while allowing detail to be exposed by drilling down into the data.

The Solution

I started off crafting a solution in PowerShell, through which I can access the full capabilities of Azure Resource Manager via the AzureRM module along with other solutions such as Log Analytics, Azure Security Center and Azure Storage.

If you like to read the end of the book first, below is the final solution; what I will walk through is some of the detail you see in the picture.

The first challenge was the context to run the script under, since multiple Azure AD tenants were utilized and I didn't want to have to manage multiple credentials. Therefore, Azure AD B2B (business to business) was utilized. A single identity in the main Azure AD tenant was created, and a communication was sent to each MTC to add that identity via Azure AD B2B to any local Azure AD tenant instances and to give that account Read permissions on all subscriptions. This enabled a single credential to be used across every subscription, regardless of the Azure AD tenant the subscription was tied to. This same credential was also given rights to the Log Analytics instance all VMs report to, which enables queries to be run.

Now that the access was available, the next step was the actual PowerShell to gather the required information. A storage account was created to store the output of each execution: a basic execution report and two JSON files containing custom objects representing the VM state and the Azure subscription information.

The basic PowerShell flow is as follows:

- Import the ASC and Log Analytics PowerShell modules
- Access the credential that will be used
- Connect to Azure using the credential
- Store a list of every subscription associated with the credential in an array
- Connect to the Azure Storage account to create a context for BLOB storage
- Connect to the Log Analytics workspace and trigger two queries, storing the results in two arrays:
  - A list of all machines reporting to the instance that are hosted in Azure
  - A list of all machines missing patches that are hosted in Azure
- Create two empty arrays that will store custom objects for VM state and subscription information
- For every subscription:
  - List the administrators and write them to the log
  - Retrieve the ASC status for the subscription and store it in an array
  - For every resource group:
    - Find the virtual networks connected to an ExpressRoute gateway and store them in an array
    - For every VM in the resource group:
      - Find the creation time by scanning the Azure operational log; if a creation event is found, save the creation time and whether it is older than 30 days; if no event is found, report the VM as older than 30 days
      - For each NIC, inspect the IP configurations:
        - Is it connected to a virtual network that has ExpressRoute connectivity?
        - Does it have a public IP address, and if so, what is the health of that public IP based on the information previously saved from ASC?
      - Is the VM registered in OMS?
      - Is the VM missing patches, based on information from OMS or ASC?
      - Create a custom object using a hash table with all the desired information about the VM and add it to the VM object array (see the sketch after this list)
  - Add a subscription information custom object to the subscription array
- Upload the three data files generated to the Azure storage account as BLOBs
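As a sketch of the custom-object step above (the property names and variables are illustrative, not the exact schema used):

# Illustrative only: build a VM state object from a hash table and add it to the array
$VMState = [PSCustomObject]@{
    Subscription   = $Sub.Name
    ResourceGroup  = $RG.ResourceGroupName
    VMName         = $VM.Name
    OnExpressRoute = $OnExpressRoute
    PublicIP       = $PublicIPHealth
    InOMS          = $InOMS
    MissingPatches = $MissingPatches
    Created        = $Created
}
$VMObjects += $VMState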

To actually run the PowerShell I used Azure Automation, which not only provided a resilient engine to run the code but also capabilities such as credential objects, which could securely store the identity used, removing any need to hardcode it in the script itself. The schedule capability was used to trigger the runbook (the container for the PowerShell in Azure Automation) to run daily at 11pm.

At this point the Azure Storage account held a report and two JSON files, of which the VM state JSON file was the most useful, since it enabled all information to be queried easily. However, the goal was to have the data more easily digestible, which meant Power BI, and ideally to make it more easily available to everyone, e.g. in Teams, along with a notification that the night's execution was successful.

The solution was to use a Logic App (created by Ali Mazaheri, https://blogs.msdn.com/alimaz), which enables activities to be chained together using various connectors, including Azure Storage, Teams and SharePoint. The Logic App was designed with a recurrence trigger (though it could also trigger based on object creations and other events) and performs the following:

- List the blobs in the azurescan container (a container is like a folder in Azure Storage)
- For each object that is not empty:
  - Get the BLOB content
  - Create a file containing that content in SharePoint
  - Copy the BLOB to an archive BLOB
  - Delete the original BLOB
- Write a message to a Teams channel that the log migration was completed (or send an email, a notification to a phone, etc.)

A great feature of Logic Apps is that they are implemented by adding built-in connectors or your own API apps and Azure Functions, then graphically laying out the flow using conditions, branches and those connectors, passing the output of one connector as the input of the next, in this case with some custom expressions. Below is the key content of the Logic App (as an alternative we could also have used Azure Functions and Event Grid to achieve the same goal).

The final step was the Power BI portion to read in the file from SharePoint and provide a visualization of the data contained in the JSON. David Browne created this powerful dashboard that enabled various visualizations of the data and easy access to change the criteria of the data contained.

The Power BI Service can connect directly to SharePoint Online to read the files. Power Query in Power BI is used to identify the latest data files, convert them from JSON to a tabular format and to clean the data. The data is then loaded into an in-memory Tabular Model hosted by Power BI and configured for daily refresh.

I recently deployed a new WSUS server on Windows Server 2016 but the console would crash; the WSUS engine had crashed, and it turns out the problem is that it runs out of memory. Make sure your WSUS server has at least 8GB of memory, then perform the following:
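The original steps are not reproduced here; as a hedged sketch, the usual culprit is the WsusPool IIS application pool's private memory limit, which can be raised or disabled (this is an assumption about the fix, adjust to your environment):

# Assumption: disable the WsusPool private memory recycling limit (0 = unlimited)
Import-Module WebAdministration
Set-ItemProperty -Path 'IIS:\AppPools\WsusPool' -Name recycling.periodicRestart.privateMemory -Value 0
Restart-WebAppPool -Name WsusPool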

If you leverage the Azure Cloud Shell in the Azure portal, it's a very convenient way to manage Azure resources using PowerShell and the CLI, but you may have also noticed an actual Azure drive, i.e. Set-Location Azure:, which lets you navigate around your Azure resources (this is actually the default location when the Cloud Shell opens). At the top level are subscriptions, and you can then navigate to resource groups, VMs, WebApps and more.

The Azure drive is provided via the Simple Hierarchy in PowerShell (SHiPS) provider which you can see via Get-PSProvider.

The actual functionality is evolving; it's a project on GitHub at https://github.com/PowerShell/SHiPS, but this also means you can run this same provider outside of the Azure Cloud Shell.

You need to ensure you are running the latest version of the AzureRM module, then download and install the provider, add an Azure account, and create the drive:

Add the Azure PS Drive

Update-Module AzureRM
Install-Module AzurePSDrive
Login-AzureRmAccount
Import-Module AzurePSDrive
New-PSDrive -Name Azure -PSProvider SHiPS -Root 'AzurePSDrive#Azure'

You can now navigate to Azure: and enjoy the same features as in the Azure Cloud Shell.
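For example (the subscription name is a placeholder):

Set-Location Azure:
Get-ChildItem                                    # lists your subscriptions
Set-Location '.\My Subscription\ResourceGroups'
Get-ChildItem                                    # lists the resource groups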

Note this is completely different from the Azure Cloud Drive, which is the persistent file storage you have in the Azure Cloud Shell; it is backed by Azure Files and enables data to be saved and used between sessions. Use Get-CloudDrive to see the current configuration; if you wish to change it, simply run Dismount-CloudDrive, then restart the shell and select Advanced options to customize the location.

Azure Automation enables PowerShell (and more) to be executed as runbooks by runbook workers hosted in Azure. Additionally, Azure Automation accounts bring capabilities such as credential objects to securely store credentials, variables, scheduling and more. When a runbook executes it runs in a temporary environment that has no persistent state, so if you want to work with files you need to save them somewhere, for example to an Azure storage account as a blob, before the runbook completes.

You can actually create and use files as normal using the default path within PowerShell during execution; just remember to save the files externally before the script completes.

For example, create a file as usual:

Creating and writing to a file

$todaydate = Get-Date -Format MM-dd-yy
$LogFull = "AzureScan-$todaydate.log"
$LogItem = New-Item -ItemType File -Name $LogFull
" Text to write" | Out-File -FilePath $LogFull -Append

Then before ending the PowerShell, copy it to a blob (as an example storage place):
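The original upload block is missing here; a minimal sketch using the Azure.Storage cmdlets (the storage account name, key variable, and container are placeholders) would be:

# Sketch: upload the log file to a blob container before the runbook ends
$StorageContext = New-AzureStorageContext -StorageAccountName 'mystorageacct' -StorageAccountKey $StorageKey
Set-AzureStorageBlobContent -File $LogFull -Container 'azurescan' -Blob $LogFull -Context $StorageContext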

I recently needed to create a whole set of subnets in a large number of virtual networks of various sizes. I thought some variables would be a great way to quickly create the set of subnets in each virtual network; each virtual network was a /20 in a shared class B IP space, which enables 16 virtual networks per class B. The goal was to show that each subnet didn't need to be a full class C (/24); instead we could use smaller subnets based on the number of hosts actually required. I've included the comments, which explain the subnets created and the number of hosts supported in each.
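A minimal sketch of the variable-driven approach (the class B space, third octet, subnet names, and sizes are placeholders, not the original values):

# Hypothetical sketch of variable-driven subnet creation in a /20 virtual network
$ClassB = '10.1'   # shared class B space
$Octet3 = 16       # this virtual network owns 10.1.16.0/20
$vnet = Get-AzureRmVirtualNetwork -Name 'MTCVNet' -ResourceGroupName 'MTCRG'
# /26 = 64 addresses (Azure reserves 5 per subnet, so 59 usable hosts)
Add-AzureRmVirtualNetworkSubnetConfig -Name 'Servers' -AddressPrefix "$ClassB.$Octet3.0/26" -VirtualNetwork $vnet
# /27 = 32 addresses (27 usable hosts)
Add-AzureRmVirtualNetworkSubnetConfig -Name 'Management' -AddressPrefix "$ClassB.$Octet3.64/27" -VirtualNetwork $vnet
Set-AzureRmVirtualNetwork -VirtualNetwork $vnet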