Microsoft Azure

I recently had to deploy some new VMs and wanted to use PowerShell, join them to a domain and apply the anti-malware extension. Below is the PowerShell I used. You will need to modify the variables to match your own domain and environment.
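Since my original script is not reproduced here, the following is a minimal sketch of the approach; the resource group, network, domain and account values are all placeholders, and it assumes a recent AzureRM module for the simplified New-AzureRmVM syntax.

$rg       = "RG-Servers"            # placeholder resource group
$location = "eastus"
$vmName   = "APP01"
$domain   = "contoso.local"         # placeholder domain

# Create the VM (simplified parameter set; assumes the vnet/subnet already exist)
New-AzureRmVM -ResourceGroupName $rg -Location $location -Name $vmName `
    -VirtualNetworkName "vnet-prod" -SubnetName "servers" -ImageName "Win2016Datacenter" `
    -Credential (Get-Credential -Message "Local admin credential")

# Join the domain using the JsonADDomainExtension
Set-AzureRmVMExtension -ResourceGroupName $rg -VMName $vmName -Name "DomainJoin" -Location $location `
    -Publisher "Microsoft.Compute" -ExtensionType "JsonADDomainExtension" -TypeHandlerVersion "1.3" `
    -Settings @{ Name = $domain; OUPath = ""; User = "$domain\joinaccount"; Restart = "true"; Options = 3 } `
    -ProtectedSettings @{ Password = "<join account password>" }

# Add the IaaSAntimalware extension
Set-AzureRmVMExtension -ResourceGroupName $rg -VMName $vmName -Name "IaaSAntimalware" -Location $location `
    -Publisher "Microsoft.Azure.Security" -ExtensionType "IaaSAntimalware" -TypeHandlerVersion "1.3" `
    -SettingString '{ "AntimalwareEnabled": true }'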

I was recently part of a project to deploy SharePoint and Office Online Server (OOS) to Azure IaaS as part of a hybrid deployment. A requirement was to make the SharePoint available to the Internet in addition to the OOS (enabling editing of documents/previews online).

The deployment was very simple: three VMs were deployed to a subnet that has connectivity to an existing AD:

SQL Server – 10.244.3.68

SharePoint Server – 10.244.3.69, alias record sharepoint.onemtcqa.net

OOS Server – 10.244.3.70, alias oos.onemtcqa.net

The alias records were created on the internal DNS and external DNS, a split-brain DNS. We also had a wildcard certificate for onemtcqa.net which we could therefore use for https for both sites.

Azure has two built-in load balancer solutions (with more available through 3rd party solutions and virtual appliances).

The layer 4 Azure Load Balancer, which supports any protocol and could have been used by configuring the front end with a public IP

The layer 7 Azure Application Gateway, which in addition to providing capabilities like SSL offload and cookie-based affinity also has an optional Web Application Firewall (WAF) to provide additional protection. More information on the Application Gateway can be found at https://docs.microsoft.com/en-us/azure/application-gateway/application-gateway-introduction. The front-end IP can be internal or public and the back end can load balance to multiple targets (like the layer 4 load balancer option).

Because the services being published were HTTP based, it made sense to utilize the Azure Application Gateway, and it provided a great reason to get hands-on with the technology. Additionally, the added protection via the WAF was a huge benefit.

There are a number of settings related to the App Gateway, all of which link to each other in a specific manner to provide the complete solution. A single App Gateway can publish multiple sites, which meant I only needed a single App Gateway instance with a single public IP for both of the sites I needed to publish.

Below is a basic picture of the key components related to an App Gateway that I put together to aid my own understanding! The arrows show the direction of the links, so the Rule links to three other items and really binds everything together.

When deploying the Application Gateway through the portal there are some initial configurations:

The SKU

The virtual network it will connect to; you must specify an empty subnet that can only be populated by App Gateway resources (it should be at least a /29)

The front-end IP; if a public IP is created it must be dynamic and cannot have a custom DNS name

Whether the listener is HTTP or HTTPS, and the port

Note that if using a public IP, because it is dynamic and cannot have a custom DNS name, you can check its actual DNS name using PowerShell and then create an alias on the Internet pointing to that DNS name. Use Get-AzureRmPublicIpAddress and the DnsSettings.Fqdn attribute. For example:
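A quick example (the public IP and resource group names are placeholders for whatever the portal created):

$pip = Get-AzureRmPublicIpAddress -Name "AppGwPublicIP" -ResourceGroupName "RG-AppGw"
$pip.DnsSettings.Fqdn    # returns the auto-assigned name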

The name will be <GUID>.cloudapp.net. I created two alias records, sharepoint and oos, both pointing to this name on the public DNS servers.

Once created, we need to tweak some of the settings created by the portal wizard.

The virtual subnet used for the App Gateway needs its NSG modified, as some additional ports must be opened from the Any source to the Virtual Network (this is in addition to the AzureLoadBalancer default inbound rule). Add an inbound rule to allow TCP 65503-65534 from Any to VirtualNetwork. Note this only needs to be enabled on the NSG applied to the Application Gateway's subnet and NOT on the subnets containing the actual back-end resources. Also ensure the Application Gateway subnet can communicate with the subnets hosting the services.
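As a sketch, the rule can also be added with PowerShell (the NSG and resource group names are placeholders):

$nsg = Get-AzureRmNetworkSecurityGroup -Name "AppGwSubnet-NSG" -ResourceGroupName "RG-AppGw"
Add-AzureRmNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg -Name "Allow-AppGw-Infrastructure" `
    -Direction Inbound -Access Allow -Priority 200 -Protocol Tcp `
    -SourceAddressPrefix "*" -SourcePortRange "*" `
    -DestinationAddressPrefix VirtualNetwork -DestinationPortRange "65503-65534"
Set-AzureRmNetworkSecurityGroup -NetworkSecurityGroup $nsg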

By default, the built-in probe that checks whether a backend target is healthy (and therefore a possible target for traffic) looks for a response between 200 and 399 as healthy (per https://docs.microsoft.com/en-us/azure/application-gateway/application-gateway-probe-overview). For the SharePoint site this won't work, as it prompts for authentication, so we need to create a custom probe on HTTPS that accepts 200-401. This can be done with PowerShell (I'm using the internal DNS name here, which is the same as the external one):
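My exact script is not shown here, but a minimal sketch looks like this (the App Gateway name, probe name and path are assumptions; the 200-401 match is the key part):

$gw    = Get-AzureRmApplicationGateway -Name "AppGw" -ResourceGroupName "RG-AppGw"
$match = New-AzureRmApplicationGatewayProbeHealthResponseMatch -StatusCode "200-401"
Add-AzureRmApplicationGatewayProbeConfig -ApplicationGateway $gw -Name "SharePointProbe" `
    -Protocol Https -HostName "sharepoint.onemtcqa.net" -Path "/" `
    -Interval 30 -Timeout 30 -UnhealthyThreshold 3 -Match $match
Set-AzureRmApplicationGateway -ApplicationGateway $gw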

Open the HTTP Settings object, ensure it is HTTPS, upload the certificate, and choose to use a custom probe, selecting the probe that was just created.

A default listener was created but this can’t be used so instead create a new multi-site listener.

Use the existing frontend IP configuration and 443 port

Enter the hostname, e.g. sharepoint.onemtcqa.net

Protocol is HTTPS

Use an existing certificate or upload a new certificate to use

Open the backend pool and add the internal IP address of the target(s).

The initial default rule that was created should work; it links the listener created, the backend pool and the HTTP setting that was modified.

If you open the Backend health under Monitoring it should show a status of healthy and you should be able to connect via the external name (that points to the DNS name of the public IP address).

Now OOS has to be published, which does not require authentication; that means a different probe must be used, which in turn means a different listener and different targets. Even though it will be a different listener, it's not like old-style listeners where only one can listen on a specific port. A listener here is just a set of configurations, so multiple 443 listeners can share the same frontend configuration (and therefore the same public IP).

Create a new Backend pool with the OOS machines as the target

Create a new multi-site listener that uses the existing frontend IP configuration and port with the OOS public hostname, HTTPS and the OOS certificate (the same certificate if using a wildcard or subject alternative names)

Create a new health probe. Use the OOS internal DNS name, HTTPS and for path use /hosting/discovery

Create a new HTTP setting that is HTTPS, uses the certificate and uses the new health probe

Create a new basic rule that uses the new listener, the new backend pool and the new HTTP setting


Now your OOS should also be available and working! You have now published two sites through a single Application Gateway.

Network Security Groups (NSGs) are a critical component in Azure networking which enable the flow of traffic to be controlled both within the virtual network, i.e. between subnets (and even VMs), and external to the virtual network, i.e. the Internet, other parts of known IP space (such as an ExpressRoute-connected site) and Azure components such as load balancers. Rules are grouped into NSGs and applied to subnets (and sometimes vNICs, however it is easier to manage when applied at the subnet level). Rules are based on:

Source IP range

Destination IP range

Source port(s)

Destination port(s)

Protocol

Allow/Deny

Priority

In place of the IP ranges certain tags can be used, such as VirtualNetwork (known IP space, which includes IP spaces connected to the virtual network, e.g. an on-premises IP space connected via ExpressRoute), Internet (not known IP space) and AzureLoadBalancer. Additionally, through the use of service tags, other Azure services can be included in rules; these tags cover the IP ranges of certain services, for example Storage, SQL and AzureTrafficManager. It is also possible to limit these to a specific region for the service, for example Storage.EastUS as the service tag to enable access only to Storage in East US. This could then be used in a rule instead of an IP range. This is very beneficial as now you can enable only specific machines in a specific subnet to communicate with specific services in specific regions. Without this functionality you would have to try and create rules based on the public IP addresses each service used. More information on service tags can be found at https://docs.microsoft.com/en-us/azure/virtual-network/security-overview#service-tags.
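For example, a sketch of an outbound rule using a regional service tag (the rule name and priority are placeholders):

New-AzureRmNetworkSecurityRuleConfig -Name "Allow-Storage-EastUS" `
    -Direction Outbound -Access Allow -Priority 100 -Protocol Tcp `
    -SourceAddressPrefix VirtualNetwork -SourcePortRange "*" `
    -DestinationAddressPrefix "Storage.EastUS" -DestinationPortRange 443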

Another useful feature is application security groups. Using application security groups you can create a number of groups for the various types of application tiers you have (using New-AzureRmApplicationSecurityGroup), use them in NSG rules (e.g. -DestinationApplicationSecurityGroupId) and then you assign a network interface for a VM to be part of a specific application security group (using the ApplicationSecurityGroup parameter at creation time). Now you don’t have to worry about the actual IP address or subnet of the VM in the NSG rules. The NIC is now part of the application security group and will automatically have the rules applied based on that membership. Imagine you created an application security group for all the VMs in a certain tier of the application and they would all automatically have the correct rules regardless of their IP address or subnet membership.
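To make that concrete, a minimal sketch (the group, rule and NIC names are placeholders, and $subnet is assumed to have been obtained earlier via Get-AzureRmVirtualNetworkSubnetConfig):

$webAsg = New-AzureRmApplicationSecurityGroup -ResourceGroupName "RG-App" -Name "WebTier" -Location "eastus"
$rule   = New-AzureRmNetworkSecurityRuleConfig -Name "Allow-Web-In" -Direction Inbound -Access Allow -Priority 100 `
    -Protocol Tcp -SourceAddressPrefix Internet -SourcePortRange "*" `
    -DestinationApplicationSecurityGroupId $webAsg.Id -DestinationPortRange 443
$nic    = New-AzureRmNetworkInterface -ResourceGroupName "RG-App" -Name "web01-nic" -Location "eastus" `
    -SubnetId $subnet.Id -ApplicationSecurityGroup $webAsg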

On the other side of the equation you have Azure services like Storage and SQL and by default they have public facing endpoints. While there are some ACLs to limit access it can be very difficult/impossible to try and restrict them to only specific Azure IaaS VMs in your environment. For example you may have a storage account or Azure SQL database instance you only want to be accessible from VMs in a specific subnet in a virtual network. This is now possible through a combination of service endpoints and the Azure service firewall capability.

Firstly, on the virtual network, service endpoints are enabled for specific services (e.g. Storage) for specific subnets. This makes that subnet available as part of the firewall configuration for the target service. (Note that if you skip this step it can be done automatically when performing the configuration on the actual service!)

Next on the actual service (which must be in the same region as the virtual network) select the ‘Firewalls and virtual networks’ option, change the ‘Allow access from’ to ‘Selected networks’, ‘add existing virtual network’, select the virtual network and subnets and click Add and then Save. Now the service will only be available to the selected virtual subnets.
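The same flow looks roughly like this in PowerShell (virtual network, subnet and storage account names are placeholders, and the address prefix must match the existing subnet):

$vnet = Get-AzureRmVirtualNetwork -ResourceGroupName "RG-Net" -Name "MyVNet"
$vnet = Set-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "AppSubnet" -AddressPrefix "10.0.1.0/24" `
    -ServiceEndpoint "Microsoft.Storage"
Set-AzureRmVirtualNetwork -VirtualNetwork $vnet
$subnet = Get-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "AppSubnet"
Add-AzureRmStorageAccountNetworkRule -ResourceGroupName "RG-Data" -Name "mystorageacct" -VirtualNetworkResourceId $subnet.Id
Update-AzureRmStorageAccountNetworkRuleSet -ResourceGroupName "RG-Data" -Name "mystorageacct" -DefaultAction Deny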

When you put all these various features together there are now great controls available between VMs in virtual networks and key Azure services to really help lock down access in a simple way.

I recently had a requirement to check the age of VMs deployed in Azure. As I looked it became clear there is no metadata for a VM that shows its creation time. When you think about this it may be logical: if you deprovision a VM (and therefore stop paying for it) and then provision it again, what is its creation date? When it was first created, or when it was last provisioned?

As I dug in I found there is a log written at VM creation; however, by default these logs are only stored for 90 days (unless sent to Log Analytics). BUT, if the VM was created within the last 90 days, I could find its creation date. For example:

Write-Output"- Found VM creation at $($log.EventTimestamp) for VM $($log.Id.split("/")[8]) in Resource Group $($log.ResourceGroupName) found in Azure logs"

# write-output " - ID $($log.Id)"

$vmCreationTime=$($log.EventTimestamp)

#$log

}

}

What if the VM was not created in the last 90 days? If the VM uses a managed disk you can check the creation date of the managed disk. If it is NOT using a managed disk then there is no creation time on a page blob; however, by default VHDs include the creation date and time as part of the file name. Therefore, to try and find the creation date of a VM I use a combination of all three, by first looking for logs for the VM, then looking for a managed disk and finally trying the date in the unmanaged OS disk name.
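As a rough sketch of the managed disk check (resource names are placeholders; the unmanaged case just inspects the VHD blob URI, whose name usually embeds the date and time):

$vm = Get-AzureRmVM -ResourceGroupName $rg -Name $vmName
if ($vm.StorageProfile.OsDisk.ManagedDisk -ne $null)
{
    # managed disk - the disk object records when it was created
    (Get-AzureRmDisk -ResourceGroupName $rg -DiskName $vm.StorageProfile.OsDisk.Name).TimeCreated
}
else
{
    # unmanaged disk - the VHD blob name typically embeds the creation date/time
    $vm.StorageProfile.OsDisk.Vhd.Uri
}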

In this article I want to walk through deploying operating systems in Azure using a custom Windows PE environment and, along the way, cover some basics around PE and OS deployment. Before going any further I would stress I don't recommend this. The best way to deploy in Azure is to use templates, have generic images and then inject configuration into them using declarative technologies such as PowerShell DSC, Chef or Puppet. However, there are organizations with multiple years of custom image development at their core that, at least in the short term, need to maintain it, which was my goal for this investigation: is it even possible to use your own Windows PE based deployment?

My starting point was to get a deployment working on-premises on Hyper-V. Azure uses Hyper-V and at this level there really is nothing special about what Azure does, so my thinking was that if I got a process running on-premises I should be able to take that VHD, upload it to Azure, make an image out of it and create VMs from it (and this proved to be true!). The benefit of this approach was speed of testing and the ability to interact with the Windows PE environment during the development and testing phase, something that is much harder in Azure as there is no console access.

Notice that when I add packages I do this by mounting the boot.wim file that is part of my copied PE environment, performing actions against it, then committing those changes when I unmount it. I'm modifying that boot.wim; this is an important point.
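For reference, the mount/modify/commit pattern looks roughly like this (the paths and the package are examples, not my exact script):

$WinPE_OCs = "C:\Program Files (x86)\Windows Kits\10\Assessment and Deployment Kit\Windows Preinstallation Environment\amd64\WinPE_OCs"
Mount-WindowsImage -ImagePath "C:\WinPE_amd64\media\sources\boot.wim" -Index 1 -Path "C:\WinPE_amd64\mount"
# example package; WinPE-PowerShell also requires WinPE-WMI and WinPE-NetFX to be added first
Add-WindowsPackage -Path "C:\WinPE_amd64\mount" -PackagePath "$WinPE_OCs\WinPE-PowerShell.cab"
Dismount-WindowsImage -Path "C:\WinPE_amd64\mount" -Save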

Once the PE was ready I wanted to quickly test so I built a VHD based on that PE environment.


diskpart

create vdisk file="C:\WinPEPS.vhd" maximum=100000 type=expandable

attach vdisk

create partition primary size=1000

assign letter=V

format fs=ntfs quick

exit

MakeWinPEMedia /UFD C:\WinPE_amd64 V:

diskpart

select vdisk file="C:\WinPEPS.vhd"

detach vdisk

exit

This creates a new VHD file and attaches it to the current OS as drive V:. I then make bootable media from my PE folder to the V: drive and detach the VHD. I then copied this VHD file to a Hyper-V box and created a VM that used it as its boot disk. Sure enough it booted and I was facing a PE environment. The next step was to format the disks and apply an image automatically. My initial thought was "how can I format the disk and apply an OS to the disk if booted from it (PE)?" However, it quickly became obvious that the PE I was booted into wasn't really running from the local disk. Instead, what happens is that on boot the boot.wim file on the PE media is read into a writable RAM disk, which is where the PE actually runs from (the X: drive). Therefore even though the C: drive contained that boot.wim it's not actually being used, so it can be wiped. I therefore created a script that did three things:

Wiped the disk and created the system and Windows partitions

Applied a Windows Server image (1709 Server Core)

Made the disk bootable

To partition the disk I created a text file, parts.txt which contained:


select disk 0

Clean

create partition primary size=350

format quick fs=ntfs label="System"

assign letter="S"

active

create partition primary

format quick fs=ntfs label="Windows"

assign letter="W"

I could then call this with (I would copy this to my Windows PE environment as well):


diskpart /s x:\parts.txt

The WIM file I placed on a file share (this would be an Azure Files share once in Azure), so I had to map the network drive and then apply the image; the complete file became:
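The exact file is not reproduced here, but a sketch of what it contained (the server, share, credentials and image index are placeholders):

net use z: \\fileserver\deploy /user:CONTOSO\deployuser P@ssw0rd
diskpart /s x:\parts.txt
dism /Apply-Image /ImageFile:z:\images\install.wim /Index:1 /ApplyDir:W:\
rem make the applied image bootable
W:\Windows\System32\bcdboot W:\Windows /s S: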

I saved this as autolaunch.bat and added it to the root of my Windows PE boot.wim (by remounting it) along with the parts.txt. I also modified the startnet.cmd found under the Windows\System32 folder of my mounted PE environment to call my autolaunch.bat file, e.g.


Wpeinit

x:\autolaunch.bat

I then unmounted and created a new VHD. I made sure my install.wim was present on my file server as referenced, copied the VHD over to my Hyper-V server, changed the VM to use the new VHD and sure enough it booted, formatted the disks and laid down the image. Note that you are putting a password in the file, which is not ideal. Also note that if your password contains special characters you may have to escape them in the batch file or they won't work correctly; for example, if your password contains % you actually need %% in the string!

The next step was to try this in Azure. I created a storage account in Azure, added an Azure Files share and uploaded the install.wim file to it. I changed the autolaunch.bat to map to the Azure Files share instead of the local file share (along with the path to the WIM file). It therefore became:
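The only real change was the mapping line; for Azure Files it takes the form (storage account name, share and key are placeholders):

net use z: \\<storageaccount>.file.core.windows.net\deploy /user:AZURE\<storageaccount> <storage account key>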

To upload the VHD to Azure and create an image from that uploaded file I used the following PowerShell. This is important: trying to upload from other tools or the Azure portal seems to leave the VHD in a strange state and unusable.
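A sketch of the upload and image creation (account, container and image names are placeholders; the OsState value may need adjusting for a PE boot disk):

$rgImgName        = "RG-Images"
$imageName        = "WinPEImage"
$urlOfUploadedVhd = "https://mystorageacct.blob.core.windows.net/vhds/WinPEPS.vhd"
Add-AzureRmVhd -ResourceGroupName $rgImgName -Destination $urlOfUploadedVhd -LocalFilePath "C:\WinPEPS.vhd"
$imageConfig = New-AzureRmImageConfig -Location "eastus"
$imageConfig = Set-AzureRmImageOsDisk -Image $imageConfig -OsType Windows -OsState Generalized -BlobUri $urlOfUploadedVhd
New-AzureRmImage -ImageName $imageName -ResourceGroupName $rgImgName -Image $imageConfig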

From here I created a new VM from my image, using the PowerShell below. Note I'm enabling boot diagnostics; this allowed me to view the console even though I couldn't interact with it, so I had some idea what was happening.
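Roughly, the creation looked like this (the VM size, NIC and storage account names are placeholders, and it assumes a NIC has already been created):

$image    = Get-AzureRmImage -ImageName $imageName -ResourceGroupName $rgImgName
$nic      = Get-AzureRmNetworkInterface -Name "PEVM01-nic" -ResourceGroupName "RG-PE"
$vmConfig = New-AzureRmVMConfig -VMName "PEVM01" -VMSize "Standard_DS2_v2"
$vmConfig = Set-AzureRmVMOperatingSystem -VM $vmConfig -Windows -ComputerName "PEVM01" -Credential (Get-Credential)
$vmConfig = Set-AzureRmVMSourceImage -VM $vmConfig -Id $image.Id
$vmConfig = Add-AzureRmVMNetworkInterface -VM $vmConfig -Id $nic.Id
# enable boot diagnostics so the console screenshot is viewable in the portal
$vmConfig = Set-AzureRmVMBootDiagnostics -VM $vmConfig -Enable -ResourceGroupName "RG-PE" -StorageAccountName "mydiagstorage"
New-AzureRmVM -ResourceGroupName "RG-PE" -Location "eastus" -VM $vmConfig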

I then jumped over to the portal and via the Support + Troubleshooting section – Boot diagnostics – Screenshot I could see it deploying in Azure (updated about every 30 seconds or so).

This worked! The OS installed and, strangely, I could RDP to it even though I never enabled this; it had the right name and it had the Azure agent installed. What trickery is this? Then it hit me: I never added my own unattend.xml file. All I did was apply a 2016 image to a disk and it rebooted, basically the same as if I had used a template with 2016. The ISO file that Azure automatically creates when deploying a VM, which contains an unattend.xml file and other setup files, still got created, still got attached and was therefore still used. This was good but also bad, as I wanted to use my own unattend.xml file to further prove we could customize.

The next step was to generate my own unattend.xml file and use it. At this point I didn't want to keep having to rebuild the VHD every time I made a script change, so I broke apart the logic so that autolaunch.bat just connected to the Azure Files share, partitioned the disk, then copied down an imageinstall.bat file and executed it. This way I could change imageinstall.bat on the file share whenever I wanted to change the functionality. autolaunch.bat became:
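A sketch of the reduced file (placeholders as before); the image apply and bcdboot steps moved into imageinstall.bat on the share:

net use z: \\<storageaccount>.file.core.windows.net\deploy /user:AZURE\<storageaccount> <storage account key>
diskpart /s x:\parts.txt
copy z:\imageinstall.bat x:\imageinstall.bat
x:\imageinstall.bat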

I created a new VHD with the reduced autolaunch.bat and uploaded to Azure (and created a new image after deleting the old one with Remove-AzureRmImage -ImageName $imageName -ResourceGroupName $rgImgName).

Now I’m jumping over a few steps here but basically I created an unattend file to set a default password, have a placeholder for the computer name, enable auto mount of disks, move the pagefile to D:, enable RDP and required firewall rules and also launch the install.cmd that Azure normally runs. This would install the agent, register with Azure fabric etc. Because I place my unattend.xml in the windows\panther folder it overrides any found on removable media, i.e. the Azure one! My unattend file was:

Now in this file I have a placeholder string for the computer name, TSCAEDPH. I wanted to replace this with the computer name specified on the Azure fabric. How would I get this from inside the guest? Well, Azure has an endpoint at 169.254.169.254 that can be called from within a VM to return basic instance information, so I created a PowerShell script that would find the computer name and update the unattend.xml I had copied to the Panther folder of the deployed image:
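A minimal sketch of what unattendupdate.ps1 does (the applied-image path and the metadata API version are assumptions):

$vmName = Invoke-RestMethod -Headers @{"Metadata"="true"} -Method GET `
    -Uri "http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text"
$unattendPath = "w:\Windows\Panther\unattend.xml"
(Get-Content $unattendPath) -replace "TSCAEDPH", $vmName | Set-Content $unattendPath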

This was saved as unattendupdate.ps1 on the Azure Files share as well which now contained the install.wim, unattend.xml, imageinstall.bat and this ps1 file. Fingers crossed I kicked off a new VM build. It worked. It used my unattend.xml file but still got the Azure agent etc installed. It also still renamed the local administrator account to that specified as part of the VM creation as that happens as part of the Azure install step process which I was now calling from my unattend.xml file.

Now there are some problems here. If Azure changes the structure of their ISO file and the install.cmd, this will break and would have to be re-investigated; however, this is still better than trying to duplicate everything they do manually, which is far more likely to change, and far more often.

So there you go. You can use your own PE in Azure to customize and create deployments including unattend. You can still call the Azure agent install and finalize. But ideally, use images 😉

In this post I want to document the results of a POC (proof of concept) I was engaged in for a very large customer. The customer wanted to create single/multi VM environments in Azure for dev/test/QA purposes. The goal was a single command execution that would create the VM and in this case make it a domain controller, install SQL Server 2012 then install SharePoint 2010. For this scenario I decided to use PowerShell rather than JSON just to demonstrate the PowerShell approach since there are already many JSON templates in the gallery around SharePoint deployment.

To enable this solution the high level workflow would be:

Create a new resource group and in that create a new storage account and virtual network (since each environment was to be isolated and by placing in their own resource group the lifecycle management, i.e. deletion, would be simple)

Create a new VM using the created resources

Execute PowerShell inside the VM via the Azure VM Agent to promote the VM to a domain controller then reboot it

You will notice in the code that I write the AD database to the E: drive. This is because in Azure the OS disk by default has read/write caching enabled, which is not desirable for databases. Therefore for the VM I add two data disks with no caching: one for AD and one for SQL and SharePoint. The code below is what I use to change the drive letter of the DVD device and then initialize and format the two data disks.
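Neither script is reproduced in full here, but sketches of the two pieces follow: first the disk preparation (the DVD's new drive letter is an arbitrary choice), then the DC promotion (the domain name, paths and DSRM password are placeholders).

# Move the DVD device off E: so the data disks can take E: and F:
Get-WmiObject -Class Win32_Volume -Filter "DriveType=5" | Set-WmiInstance -Arguments @{DriveLetter='Z:'}
# Initialize, partition and format the raw data disks
Get-Disk | Where-Object PartitionStyle -eq 'RAW' |
    Initialize-Disk -PartitionStyle MBR -PassThru |
    New-Partition -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -Confirm:$false

# Promote to a domain controller with the AD database on E:
Install-WindowsFeature AD-Domain-Services -IncludeManagementTools
Install-ADDSForest -DomainName "contoso.local" -DatabasePath "E:\NTDS" -LogPath "E:\NTDS" -SysvolPath "E:\SYSVOL" `
    -SafeModeAdministratorPassword (ConvertTo-SecureString "<DSRM password>" -AsPlainText -Force) -InstallDns -Force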

The two pieces of code above would be combined into the first boot PowerShell code (with the disk initialization block before the DC promotion code). Once the reboot has completed firewall exceptions for SQL and SharePoint need to be enabled.
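For example, the exceptions could be created with something like this (the ports are typical defaults and may need adjusting for your builds):

New-NetFirewallRule -DisplayName "SQL Server" -Direction Inbound -Protocol TCP -LocalPort 1433 -Action Allow
New-NetFirewallRule -DisplayName "SharePoint Web" -Direction Inbound -Protocol TCP -LocalPort 80,443 -Action Allow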

Next I need the SQL Server and SharePoint media along with unattended commands to install. I decided to use Azure Files as the store for the installation media. Azure Files presents an SMB file share to the VMs with only the storage account key and name required to access. In my example I place this in the PowerShell script however it could also be injected in at runtime or stored more securely if required. Create a storage account then create an Azure Files share through the portal and take a note of the access key and storage account name.

Into this share I will copy the SQL Server and SharePoint installation files. The easiest way to upload content is using the free Azure Storage Explorer tool from http://storageexplorer.com/.

Now the details of performing unattended installations of SQL and SharePoint are outside the scope of this write-up, as the goal here is more about how to install applications through Azure IaaS PowerShell; however, at a very high level:

To install SQL Server unattended simply requires a configuration file, which can be generated by running through the graphical SQL Server setup; the last page shows the location of the configuration file it will use for installation. Simply copy this file and cancel the installation. Copy the SQL setup structure and the configuration file to the Azure Files share (I place the ConfigurationFile.ini in a separate Assets folder on the share). Then use that configuration file with the SQL setup.exe, for example:
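For instance, assuming the share is mapped as Z: and the folder layout described above:

& "Z:\SQL2012\setup.exe" /ConfigurationFile="Z:\Assets\ConfigurationFile.ini" /IAcceptSQLServerLicenseTerms /Q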

For the SharePoint unattended installation I used the autospinstaller solution which is fully documented at https://autospinstaller.com/ and includes a web based site to create the unattended answer file used by the program. Follow the instructions on the site and copy the resulting structure to the Azure Files share.

To map the share, copy the content, trigger the SQL Server installation from the share, dismount the share and then trigger the SharePoint installation, I use the following (which also adds an account named Administrator, as that was a requirement). I would add the firewall exception creation to this code as the second boot PowerShell file. You will notice I wait 40 minutes at the end for the SharePoint installation to complete. I run the SharePoint install as a separate, asynchronous job because at the end it asks for key presses to continue; this avoids trying to handle that, and after a reboot it will all get cleared up.
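My full secondboot.ps1 is not reproduced here, but a condensed sketch of its shape follows (the storage account name/key, share and paths are placeholders, and the AutoSPInstaller launch file name may differ in your copy):

$storAcct = "<storage account name>"
$storKey  = "<storage account key>"
net use Z: "\\$storAcct.file.core.windows.net\media" /user:AZURE\$storAcct $storKey
# add the required local Administrator account
net user Administrator "<password>" /add
net localgroup Administrators Administrator /add
# copy the SharePoint install structure locally, run SQL setup from the share, then dismount
Copy-Item -Path "Z:\AutoSPInstaller" -Destination "C:\AutoSPInstaller" -Recurse
& "Z:\SQL2012\setup.exe" /ConfigurationFile="Z:\Assets\ConfigurationFile.ini" /IAcceptSQLServerLicenseTerms /Q
net use Z: /delete
# run the SharePoint install as a separate job so its end-of-run key press prompt does not block
Start-Job -ScriptBlock { & "C:\AutoSPInstaller\SP\AutoSPInstaller\AutoSPInstallerLaunch.bat" }
Start-Sleep -Seconds 2400   # wait 40 minutes for the SharePoint installation to complete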

At this point I have a firstboot.ps1 and a secondboot.ps1 file. Upload those files into blobs in a container named scripts in the same storage account as the Azure Files. These files will be used as part of the total VM provisioning process.

The final part is to create the VM using the PowerShell created. In the example code below I create all the resources and use premium storage accounts to maximize performance, however any of these parameters can be changed to meet requirements. In the code replace the <storage account name for assets> with the storage account created to hold the Azure Files and blob content, along with its key. Also change the VM name to something unique, since a public IP name will be generated based on this name. If you will deploy this many times, add some logic to include a random sequence or perhaps the requesting username, and also include that as part of the resource group, storage account etc. names.
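The full script is too long to reproduce here, but the step that ties the uploaded scripts to the VM provisioning is the custom script extension; a sketch (the names and variables are placeholders defined earlier in the full script):

Set-AzureRmVMCustomScriptExtension -ResourceGroupName $rgName -VMName $vmName -Location $location `
    -Name "firstboot" -StorageAccountName "<storage account name for assets>" -StorageAccountKey $assetKey `
    -ContainerName "scripts" -FileName "firstboot.ps1" -Run "firstboot.ps1"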

In this example I give the VM a public IP so it can be accessed externally, and there is no NSG to lock down traffic. In reality you may not want the public IP and may add the environment to existing networks with connectivity to on-premises, so you would connect via private IP, but I added a public IP to handle worst-case connectivity. If you do add a public IP like this example, don't use the administrator account, don't set simple passwords, and make sure you configure NSGs to at least lock down traffic. I talk about NSGs at http://windowsitpro.com/azure/network-security-groups-defined and below is example ARM PowerShell to create and add an NSG to a NIC.
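As an example, a sketch of creating an NSG with a single RDP rule and attaching it to the NIC (names and variables are placeholders):

$rdpRule = New-AzureRmNetworkSecurityRuleConfig -Name "Allow-RDP" -Direction Inbound -Access Allow -Priority 100 `
    -Protocol Tcp -SourceAddressPrefix Internet -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange 3389
$nsg = New-AzureRmNetworkSecurityGroup -ResourceGroupName $rgName -Location $location -Name "$vmName-nsg" -SecurityRules $rdpRule
$nic = Get-AzureRmNetworkInterface -ResourceGroupName $rgName -Name "$vmName-nic"
$nic.NetworkSecurityGroup = $nsg
Set-AzureRmNetworkInterface -NetworkInterface $nic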

I decided to create a brand new walkthrough of creating a VM using the new Azure Portal and using Azure Resource Manager. In this walkthrough I cover Resource Groups, virtual networks, storage accounts, public IPs and Network Security Groups, all while creating and publishing a Minecraft server out to the Internet! Available at https://youtu.be/YuMXm7owGEw and below.

Just finished a brand new 90-minute whiteboarding overview of Azure Infrastructure services, so grab your popcorn, kick back and enjoy. Available at https://youtu.be/jJdXDRi_SCg; watch at up to 1080p to see all the screen detail :-).