A blog about my experience with Clouds, Automation and Virtualization

The recommendation is that virtual machines running on either VMware or Hyper-V should be configured with the High Performance power plan.

Looking at Microsoft Azure VMs, they are set to High Performance by default:

In my Hyper-V lab you can see that Balanced is set, and with the PowerPlan PowerShell module I created you can change it to High Performance.

If you save the following PowerShell functions in the folder C:\Program Files\WindowsPowerShell\Modules\PowerPlan, you can import the module as shown in the screenshot and use it on either a local or a remote server.
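Once the module folder is in place, usage could look like this (the server names are placeholders):

```powershell
# Import the module and query the active power plan locally and remotely
Import-Module PowerPlan
Get-PowerPlan
Get-PowerPlan -ComputerName HypervCore1, HypervCore2
```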

powerplan.psm1


<#
.Synopsis
As WMI has some issues on Core I use powercfg.exe to get the data in this function
.DESCRIPTION
Long description
.EXAMPLE
Get-PowerPlan -ComputerName HypervCore1
.EXAMPLE
Get-PowerPlan -ComputerName Hcore1,Hcore2,Hcore3
.NOTES
Author: Niklas Akerlund 20141202
#>
function Get-PowerPlan
{
    [CmdletBinding()]
    Param
    (
        [Parameter(Mandatory=$false,
                   ValueFromPipelineByPropertyName=$true,
                   Position=0)]
        $ComputerName = "localhost"
    )
    $result = @()
    foreach ($Computer in $ComputerName) {
        $res = Invoke-Command -ComputerName $Computer -ScriptBlock {
            powercfg.exe -l
        }
        foreach ($l in $res) {
            # The active plan is the line powercfg marks with a trailing *
            if ($l.EndsWith("*")) {
                $regex = [regex]"\((.*)\)"
                $ActivePlan = [regex]::Match($l, $regex).Groups[1].Value
                $regex2 = [regex]"\w{1,12}\-\w{1,12}\-\w{1,12}\-\w{1,12}\-\w{1,12}"
                $guid = [regex]::Match($l, $regex2).Groups[0].Value
                $hash = [ordered]@{
                    ComputerName = $Computer
                    ActivePlan   = $ActivePlan
                    Guid         = $guid
                }
            }
        }
        $PowerPlan = New-Object -TypeName PSObject -Property $hash
        $result += $PowerPlan
    }
    $result
}

<#
.Synopsis
As WMI has some issues on Core I use powercfg.exe to set the power plan in this function
#>
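The body of Set-PowerPlan is cut off in this extract; a minimal sketch in the same style as Get-PowerPlan, assuming powercfg.exe -s is used to activate a plan by GUID (the default below is the well-known built-in High Performance GUID), could look like:

```powershell
function Set-PowerPlan
{
    [CmdletBinding()]
    Param
    (
        [Parameter(Mandatory=$false,
                   ValueFromPipelineByPropertyName=$true,
                   Position=0)]
        $ComputerName = "localhost",
        # Well-known GUID of the built-in High Performance plan
        $Guid = "8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c"
    )
    foreach ($Computer in $ComputerName) {
        # powercfg.exe -s activates the power scheme with the given GUID
        Invoke-Command -ComputerName $Computer -ScriptBlock {
            powercfg.exe -s $using:Guid
        }
    }
}
```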

Last week I updated my Azure Stack Development Kit to 1804. With the devkit I have to do a redeploy, and during the deployment it got stuck on creating the ADFS VM, so I did a reset on that one with -rerun and it got into happyland!

After the deployment was successful I logged into the admin portal and found this: the default subscription has two pals now.

Upgrading our multinode Stack did not, however, give the same view.

The release notes in the docs have been updated to clarify this, and they also state that you should not use the new subscriptions yet.

I have been exploring a bit with both Azure and AzureStack, and when you onboard your VMs to Log Analytics and Security Center you are soon notified about countless attempts to log on to your machine if you have made it available through RDP. There is now a better way to take care of this: Security Center JIT access, which opens the port for a given timespan and can also limit access to certain IPs/networks! Sometimes JIT access is not something you can live with, though, and an alternative port could be utilized instead; in that case the following can be applied.

A recent update to the Azure portal has surfaced the option to download the RDP file with an alternative port instead of the standard 3389. That option does not, however:

set the NSG to allow the new port

set the VM's internal RDP service to respond to it

So to be able to connect to the virtual machine we need to update the NSG and also reconfigure the virtual machine to actually listen on the new RDP port.

First I add a rule to the NSG,

and then I utilize the Custom Script Extension to change the RDP listener on the virtual machine.
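Both steps can be sketched roughly as follows, assuming the AzureRM cmdlets of that era; the resource names, rule priority and port 3390 are placeholders:

```powershell
# 1. Add an inbound NSG rule for the alternative RDP port (names are placeholders)
$nsg = Get-AzureRmNetworkSecurityGroup -Name "myVM-nsg" -ResourceGroupName "myRG"
$nsg | Add-AzureRmNetworkSecurityRuleConfig -Name "Allow-RDP-3390" -Priority 310 `
    -Direction Inbound -Access Allow -Protocol Tcp `
    -SourceAddressPrefix "*" -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange 3390 |
    Set-AzureRmNetworkSecurityGroup

# 2. Script delivered to the VM via the Custom Script Extension:
#    repoint the RDP listener and restart the service
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" `
    -Name PortNumber -Value 3390
Restart-Service TermService -Force
```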

Azurestack:

If I am utilizing AzureStack, all of the above can be achieved, but in the portal the connect button will be greyed out. You can still connect, but you need to manually enter the public IP and custom port:

In the agile world we live in, Microsoft has released its new administration tool for servers, formerly known by the project name Honolulu and now officially named Microsoft Windows Admin Center.

I am running it on a Windows Server 2019 (Core) build 17639.

Using the AD module from Patrick Grünauer, I can via PowerShell remoting see useful information from the AD controller in WAC.

To manage a 2016 Hyper-V server with WAC you need to add some features and roles:

Enable Remote Management.

Enable File Server Role.

Enable Hyper-V Module for PowerShell.

And the following OS can be managed by WAC:

Version | Managed node via Server Manager | Managed cluster via Failover Cluster Mgr | Managed HCI cluster via HCI Cluster Mgr (preview)
Windows 10 Fall Creators Update (1709) or newer | Yes (via Computer Management) | N/A | N/A
Windows Server 2019 (insider builds) | Yes | Yes | Yes
Windows Server, version 1709 | Yes | Yes | No
Windows Server 2016 | Yes | Yes | Coming soon
Windows Server 2012 R2 | Yes | Yes | N/A
Windows Server 2012 | Yes | Yes | N/A

Note:

Windows Admin Center requires PowerShell features that are not included in Windows Server 2012 and 2012 R2. If you will manage Windows Server 2012 or 2012 R2 with Windows Admin Center, you will need to install Windows Management Framework (WMF) version 5.1 or higher on those servers.

Type $PSVersionTable in PowerShell to verify that WMF is installed, and that the version is 5.1 or higher.
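A quick way to run that check across the 2012/2012 R2 servers you plan to manage (server names are placeholders):

```powershell
# Check the PowerShell/WMF version on candidate managed nodes
Invoke-Command -ComputerName Server2012R2a, Server2012R2b -ScriptBlock {
    [pscustomobject]@{
        ComputerName = $env:COMPUTERNAME
        PSVersion    = $PSVersionTable.PSVersion
    }
}
```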

“Cluster Sets” is the new cloud scale-out technology in this Preview release that increases cluster node count in a single SDDC (Software-Defined Data Center) cloud by orders of magnitude. A Cluster Set is a loosely-coupled grouping of multiple Failover Clusters: compute, storage or hyper-converged. Cluster Sets technology enables virtual machine fluidity across member clusters within a Cluster Set and a unified storage namespace across the “set” in support of virtual machine fluidity. While preserving existing Failover Cluster management experiences on member clusters, a Cluster Set instance additionally offers key use cases around lifecycle management of a Cluster Set at the aggregate.

Failover Cluster removing use of NTLM authentication

Windows Server Failover Clusters no longer use NTLM authentication, instead exclusively using Kerberos and certificate-based authentication. There are no changes required by the user, or deployment tools, to take advantage of this security enhancement. It also allows failover clusters to be deployed in environments where NTLM has been disabled.

Encrypted Network in SDN

Network traffic going out from a VM host can be snooped on and/or manipulated by anyone with access to the physical fabric. While shielded VMs protect VM data from theft and manipulation, similar protection is required for network traffic to and from a VM. While the tenant can setup protection such as IPSEC, this is difficult due to configuration complexity and heterogeneous environments.

Encrypted Networks is a feature which provides simple-to-configure DTLS-based encryption, using the Network Controller to manage end-to-end encryption and protect data as it travels through the wires and network devices between the hosts. It is configured by the administrator on a per-subnet basis. This enables VM-to-VM traffic within the VM subnet to be automatically encrypted as it leaves the host, and prevents snooping and manipulation of traffic on the wire. This is done without requiring any configuration changes in the VMs themselves.

Windows Defender Advanced Threat Protection

Windows Defender ATP Exploit Guard

If you have not signed up for the Insider program, do so now and start playing with this new release; I am in the process of upgrading my lab!

There is a new (re-released) course on the openedx.microsoft.com site where you can sign up and start learning about Azure Stack, and from the 30th of March also do labs to enhance the learning experience! This lab environment is an awesome opportunity if you do not have access to a multinode or devkit setup and want hands-on experience!

Extract from the site:

You will work your way through the online labs to become familiar with:

The components and architecture of Microsoft Azure Stack

Deploying Microsoft Azure Stack

DevOps using Microsoft Azure Stack

Resources in Microsoft Azure Stack

Managing IaaS in Microsoft Azure Stack

Managing PaaS in Microsoft Azure Stack

Managing updates in Microsoft Azure Stack

Performing monitoring and troubleshooting in Microsoft Azure Stack

Understanding how licensing and billing works in Microsoft Azure Stack

Labs included are (online labs will be available on 3/30/18):

Connecting to Microsoft Azure Stack using Azure PowerShell

Configuring Delegation Using the Azure Stack Administrator Portal

Registering Azure Stack with an Azure Subscription using Azure PowerShell

As I described earlier, I had an eval image in my marketplace that I used to provision servers, and I wanted some of them converted so they could be correctly activated and reconfigured away from eval.

AzureStack uses the Hyper-V feature for VMs called Automatic Virtual Machine Activation (AVMA); as you can see in Device Manager there is a device called Microsoft Hyper-V Activation Component. The VMs should have the appropriate AVMA key on them, and if the host is licensed with the right key the VM will activate automatically.

On this page you can find the keys you need for the different guest operating systems it can be used with! A Windows Server 2016 AVMA host can activate guests that run the Datacenter, Standard or Essentials editions of Windows Server 2016 and Windows Server 2012 R2.

Utilizing the DISM command I can check which edition I had and then use DISM /online /Set-Edition:ServerDatacenter /ProductKey:xxxxxx-xxxx-xxxx-xxx-xxxx /AcceptEula

If you just want to change the key and not the edition, you can utilize slmgr /ipk <AVMA_key> instead of DISM!

If you are working with Azure and want to learn more, there is an opportunity to attend a conference in April that is free of charge and in the center of Stockholm!

I will do a session there in the MVP theater:

Making real world Infrastructure as code in Azure, or how to make an MSP-dinosaur survive in the cloud

The pace of change in today’s IT delivery is incredibly fast, and for a service provider it is about embracing the new or risking ending up like the T-Rex. In this session we review how to automate and create standardized solutions in Azure where management and monitoring are included as a service, and how interaction with customers through Microsoft Teams and bots speeds up change cases and provides quick feedback! Around the clock you can see status and costs, as well as order new services that automatically end up under NOC monitoring when they reach production status.

So now we have come to some interesting parts of my experience with our multinode Stack. In this post I will go through Marketplace management and installation of the App Service RP.

Marketplace

To actually get something into the marketplace for tenants we need to either populate it ourselves with custom images or utilize marketplace syndication. After deploying the Stack you need to register it with Azure, and when you have successfully done that you will get the possibility to download the Azure images that have been made available.

In the PowerShell tools you can find the command to upload a custom image that you might want to make available for your tenants; there is, though, no way to make an image available to just one tenant. I utilized the excellent function Convert-WindowsImage and created a new Insider Windows Server 17093 image for my tenants' marketplace.

Then, using the AzureStack tools, you upload it with Add-AzsVMImage.
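The two steps can be sketched as follows; the ISO path, edition and publisher/offer/SKU values are placeholders, and the parameter names follow Convert-WindowsImage and the AzureStack tools as they looked at the time:

```powershell
# Build a sysprepped, MBR-partitioned VHD (AzureStack wants VHD, not VHDX)
Convert-WindowsImage -SourcePath "C:\ISO\WS-Insider-17093.iso" `
    -Edition ServerDatacenterCore -VHDPath "C:\Images\WS17093.vhd" `
    -VHDFormat VHD -VHDPartitionStyle MBR

# Publish the image to the marketplace with the AzureStack tools
Add-AzsVMImage -Publisher "MyOrg" -Offer "WindowsServerInsider" -Sku "17093-Core" `
    -Version "1.0.0" -OSType Windows -OSDiskLocalPath "C:\Images\WS17093.vhd"
```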

Important: as a Stack admin you are responsible for making sure the latest images are available in your Marketplace; there is no automation magic that will download a new Windows Server image once it has been released in Azure. Keeping your marketplace up to date means tenants can deploy without having to patch and patch and patch before they utilize their systems…

Looking at one example, the SQL VM image I had downloaded from Azure was version 14.1000 and there is now a new one that I need to update to:

App Service RP

Installation of the SQL RP was very much straightforward: just follow the instructions and run it (there is, though, one thing regarding the marketplace above: you will need a Windows Server Core image available for the SQL RP).

For the App Service there is a bit more work. The prerequisites call for a file server: “For production deployments, the file server must be configured to be highly available and capable of handling failures.” And a SQL server: “For production and high-availability purposes, you should use a full version of SQL Server 2014 SP2 or later, enable mixed-mode authentication, and deploy in a highly available configuration.” Luckily you can run these in the default subscription and not in a tenant subscription… but there is still some serious lifecycle management that needs to be handled here, with patching, updates, security and so on for these six servers (AD, FS, SQL).

When you have those prerequisites in place it is time to start the App Service wizard, and there we had our first encounter with problems. I had the superduper SSL cert with everything including SANs, or so I thought…

Coming back to my second post: verify a thousand times over with the certificate department at your company that they do not try to take any shortcuts and miss any critical SANs. In our case, an assumption that a wildcard was enough took this rocket out of orbit for a couple of hours and gave me a bit more grey hair! So make sure you have a SAN name in your certificate that says sso.appservice.<region>.<xx>.domain.yy and you will not get “The certificate dns is invalid: azurestack”.

The next thing we encountered, which showed up a bit later, was that we had deployed using the eval image (this was not too obvious in the wizard, as we had both an eval and a regular image in our syndicated marketplace).

And as you can see in the wizard during app deployment, it just says latest 2016 Datacenter and nothing about eval!

Microsoft and the Azure team have now removed the Eval image from syndication, so if you do not create your own custom Eval image you will not get into this problem and need to mitigate it.

Once the Win 2016 Eval image was removed we could get an ordinary version onto the workers by scaling their scale sets down and up, but we had to fix the controllers manually.

Also make sure that you do not lock down the SQL and file servers' vnet and public IPs with too narrow an NSG; if the app workers and controllers cannot reach the SMB shares and SQL services, your App Service will die and stop responding!

Now we have come to my fourth post in my series on the AzureStack multinode experience; previously I have written about the importance of the network and certificates for a successful deployment.

This post will describe the success path for applying updates and staying compliant with the support cycle: you can only be up to three updates behind or you will be left without support!

AzureStack Updates

When we got our Stack it was deployed with 1709, and during the installation process the OEM engineers who were onsite added 1710 and 1711. When 1712 came out we did the update ourselves. Based on our learnings it is good to have Microsoft support on standby, so start with a support case: for a more successful run of the update pack you will want to check the status of and free space on the infrastructure VMs, and it is only in a break-glass support session, connected to the privileged endpoint on an ERCS node, that you can get help and verify the state. Future update packs will probably address these issues and make the process more stable and resilient, so you will not need a support call, but better safe than sorry!

First, as the documentation describes, upload the update files to the storage account called “updateadminaccount”, where you add a private container with the update:
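If you prefer PowerShell to clicking in the portal, the upload can be sketched like this; the resource group, container name and local path are placeholders, and the cmdlets are the AzureRM/Azure.Storage ones used against the admin ARM endpoint at the time:

```powershell
# Get the context of the updateadminaccount storage account
# (resource group name is an assumption; check your own stamp)
$sa = Get-AzureRmStorageAccount -ResourceGroupName "<update-rg>" -Name "updateadminaccount"

# Create a private container and upload the update package files
New-AzureStorageContainer -Name "update1712" -Permission Off -Context $sa.Context
Get-ChildItem "C:\Updates\1712" -File | ForEach-Object {
    Set-AzureStorageBlobContent -File $_.FullName -Container "update1712" -Context $sa.Context
}
```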

When all the files are uploaded you can go into the Update blade and start: when you highlight the patch, the “Update now” link above lights up and you can press it to start the update process!

The whole process takes about 8-12 hours depending on the size of the update and how many nodes you have in your Stack!

We had one hotfix that needed to be applied after the 1712 update, and the learning from that was that the apply failed and failed and failed without giving a good explanation of why. It turned out we had not RTFM: we had uploaded the whole folder's content, not just the xml, exe and bin files but also a Supplemental Notice.txt (in our defence, the update packs do not normally contain the text file). After removing that file, the retry succeeded without any issues!