You may get an error indicating that your browser is currently set to block JavaScript. If you see this error, make sure you are running PowerShell as the regular AzureStack user (the default user when using the ClientVM) and not with “Run As Administrator”, which runs in a different context. You can also work around the error with either of the following options:

Disable Internet Explorer Enhanced Security Configuration on the Host / ClientVM (whichever machine runs the PowerShell session that opens the AAD login prompt); OR

If you encounter a cookies error when attempting to connect via AAD and AzureRM PowerShell, here is the workaround:

In Internet Explorer, open Internet Options

Click the Privacy tab

Click Advanced

Select Accept for both First-party Cookies and Third-party Cookies

Click OK

Close the browser

Try again

NOTE: You may need to open iexplore.exe directly from the Program Files\Internet Explorer directory.
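If you prefer to script the first option (disabling IE Enhanced Security Configuration), here is a minimal sketch, assuming the standard Active Setup registry keys used by Windows Server; verify the GUIDs on your build before relying on this:

```powershell
# Sketch: disable IE Enhanced Security Configuration (IE ESC) for admins and users.
# These Active Setup key GUIDs are the usual Windows Server locations (an assumption).
$adminKey = "HKLM:\SOFTWARE\Microsoft\Active Setup\Installed Components\{A509B1A7-37EF-4b3f-8CFC-4F3A74704073}"
$userKey  = "HKLM:\SOFTWARE\Microsoft\Active Setup\Installed Components\{A509B1A8-37EF-4b3f-8CFC-4F3A74704073}"
Set-ItemProperty -Path $adminKey -Name "IsInstalled" -Value 0
Set-ItemProperty -Path $userKey  -Name "IsInstalled" -Value 0
# Restart Explorer so the change takes effect in new browser sessions.
Stop-Process -Name explorer -ErrorAction SilentlyContinue
```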

At the end of the deployment, the PowerShell session is still open and doesn’t show any output

This can be caused by the default behavior of a PowerShell console window when it is selected: clicking inside the window puts it into select mode, which pauses output. The POC deployment has actually succeeded, but the script output is paused. Press the ESC key to deselect the window and the completion message should then be shown.

Microsoft.Network related quotas are not applied

In TP1, the quotas you set in the Microsoft.Network service in a plan for your tenants are not enforced.

Testing Site to Site (S2S) gateways

Testing S2S gateways is not a scenario yet in this single-node POC release.

A new image added to the Platform Image Repository (PIR) does not show up in the portal

First, it is important to note that it can take some time (5 to 10 minutes) for the image to show up in TP1 after running CopyImageToPlatformImageRepository.ps1.

Also, if the value for -Offer and/or -SKU contains a space (e.g. "Windows Server 2012 R2 Standard"), the manifest will be invalid and a gallery item will not be created. This is a known issue, and the current workaround is to avoid spaces. For this example, change the SKU from "Windows Server 2012 R2 Standard" to something like "Windows-Server-2012-R2-Standard" or "WindowsServer2012R2Standard".
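As a hypothetical sketch only, an invocation with hyphenated values might look like the following; -Offer and -SKU are taken from the issue above, and any other parameters the script requires are omitted here, so check the script itself for its full parameter list:

```powershell
# Sketch, not a complete invocation: other required parameters are omitted.
.\CopyImageToPlatformImageRepository.ps1 `
    -Offer "WindowsServer" `
    -SKU "Windows-Server-2012-R2-Standard"   # no spaces, so the manifest stays valid
```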

Finally, we’ve seen reports that increasing the number of virtual processors (to 4 or 8) and the memory (to 8 GB) for the xRPVM can resolve this situation. There is also a troubleshooting section later in this document related to the PIR.

The AzureRM.AzureStackStorage PowerShell module cannot be found

This module is called out in some of the Azure Consistent Storage (ACS) scenarios and is preinstalled on the ClientVM machine. If you want to install it on another machine, it is provided as part of the Azure Stack installation files. After mounting the MicrosoftAzureStackPOC.vhdx file on the host machine, you can find the module in the <drive_letter>\Dependencies folder.

During storage account creation in the portal, the public DNS path is listed

In the portal, “.core.windows.net” is listed as the FQDN for a new storage account being created. This is a display issue only and does not affect functionality. It is scheduled to be addressed in a future release.

Templates deployment fails when using Visual Studio

You may get a deserialization error in Visual Studio’s PowerShell output. The workaround is to uninstall Azure PowerShell and then install the latest version. Note that the workaround is a full uninstall/reinstall rather than an in-place upgrade of Azure PowerShell.

POC Deployment fails at “DomainJoin” step

POC deployment fails if your DNS server resolves AzureStack.local to an address external to the POC environment. If you do not control that DNS entry, you can work around the issue by adding an entry in the local hosts file on the POC host machine that points AzureStack.local to the ADVM. To do this, add the following entry to the hosts file under C:\Windows\System32\drivers\etc (local administrator privileges are required):

192.168.100.2 AzureStack.local

Once this is done, re-run the POC deployment script. You do not need to reinstall the host machine.
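The hosts file entry can also be added from an elevated PowerShell prompt; a sketch:

```powershell
# Append the ADVM entry to the hosts file (requires local administrator rights).
# 192.168.100.2 is the ADVM address given above.
Add-Content -Path "$env:windir\System32\drivers\etc\hosts" -Value "192.168.100.2 AzureStack.local"
```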

Account expirations after 30 days running the POC

Some accounts in the AzureStack.local Active Directory on the ADVM do not have “Password never expires” checked. This is something you may want to update proactively in this release.
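As a sketch of a proactive fix, you could run something like the following on the ADVM; the -SearchBase value is illustrative, so adjust it to the container that actually holds the affected accounts, and review the account list before applying the change broadly:

```powershell
# Requires the ActiveDirectory module (available on the ADVM, a domain controller).
Import-Module ActiveDirectory

# Review which accounts would be changed first, then set "Password never expires".
Get-ADUser -Filter * -SearchBase "DC=AzureStack,DC=local" |
    Set-ADUser -PasswordNeverExpires $true
```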

Tips and Common Pitfalls

Deployment fails with an error about a time and/or date difference between the client and server

Please check your BIOS settings to determine if there is an option to synchronize time. We have seen this issue on certain HP servers (e.g. DL380 G9) that use the “Coordinated Universal Time” feature. This is what step #8 in the deployment guide means when it says “Configure the BIOS to use Local Time instead of UTC.”

Error “we received a bad request” when logging on to Azure Stack as a tenant using PowerShell

Please make sure the tenant GUID used in the sign-in URL is the correct one for your AAD directory.

Template deployment fails with a DSC extension error

Make sure the templates you are using reference DSC extension version 2.13 (included with TP1 in the \\sofs\Share\CRP\GuestArtifactRepository directory), or leverage the autoUpgradeMinorVersion option. If you need to explicitly use version 2.13, this may not be possible when nested templates stored on GitHub are being used. In this situation, you can copy the nested template from GitHub, edit it to use version 2.13, and store it in a local blob storage account in your Azure Stack environment.

Error regarding “specific argument was out of the range of valid values” when creating a storage account in PowerShell

Please ensure that all characters in the storage account name are lowercase. This behavior is consistent with Microsoft Azure (public cloud).
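A quick local check of a candidate name before creating the account; the 3-to-24 lowercase letters and digits pattern below reflects Azure's storage account naming convention:

```powershell
$name = "MyStorageAccount"
# -cnotmatch is case-sensitive, so any uppercase letter fails the check.
if ($name -cnotmatch '^[a-z0-9]{3,24}$') {
    $name = $name.ToLowerInvariant()
}
$name   # -> mystorageaccount
```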

The SQL Server VM templates fail to deploy SQL Server

SQL Server requires the .NET Framework 3.5, and the image used in the template must contain that component. The default image provided with TP1 does not include it. To create a new image with this component, refer to this link in the documentation.
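As a sketch, adding the component inside a running Windows Server guest before capturing the image could look like this; the D:\sources\sxs path assumes the installation media is mounted at D:, which varies by environment:

```powershell
# The .NET Framework 3.5 payload is not on disk by default, so point -Source
# at the sources\sxs folder of the mounted installation media.
Install-WindowsFeature -Name NET-Framework-Core -Source "D:\sources\sxs"
```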

Frequent crashes in the ComputeController (CRP.SelfHost) service

This issue can occur when the steps to create and configure the VM NIC for a particular VM fail partway through, leaving a partially configured NIC with no persisted state representing it. Normally, CRP handles these partial failures and tries to recover from them to complete the configuration; however, this specific case hits a known issue in the ComputeController service, which assumes the persisted state always exists for any NIC discovered via Hyper-V. Until this is fixed, one way to unblock your environment is to manually delete that Hyper-V NIC:

Remove-VMNetworkAdapter -VMNetworkAdapter $nic
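A fuller sketch of the cleanup, where "TenantVM01" and "OrphanedNicName" are placeholders for the affected VM and adapter:

```powershell
# Run on the POC host. List the adapters first and identify the orphaned one.
Get-VMNetworkAdapter -VMName "TenantVM01"

# Select the partially configured adapter by name (placeholder value) and remove it.
$nic = Get-VMNetworkAdapter -VMName "TenantVM01" |
    Where-Object { $_.Name -eq "OrphanedNicName" }
Remove-VMNetworkAdapter -VMNetworkAdapter $nic
```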

Frequently Asked Questions

Q: Do I need to delete TiP accounts in Azure Active Directory (AAD) before restarting a POC installation?

A: No, new accounts will be created as needed. It is also possible to share the same AAD for several POC installations.

Q: What are the changes made in Azure Active Directory by the Azure Stack POC deployment script?

A: The script creates two AAD accounts for every installation: one used by TiP to simulate the service admin role, and the other as TiP's tenant admin. The first installation also adds three applications: "AzureStack.local-Api", "AzureStack.local-Monitoring" and "AzureStack.local-Portal". If your security department has concerns about this, you may create a separate directory or even use a separate subscription.

Q: Do I need to format my data disks before starting or restarting an installation?

A: Disks should be in raw format. However, if your Azure Stack installation fails for some reason and you start over by re-installing the OS, you may get an error saying ‘not enough disks’. In that case, check whether the old storage pool is still present and, if so, delete it. To do this, complete the following:

Open Server Manager

Select Storage Pools

See if a storage pool is listed

Right-click the storage pool, if listed, and set it to Read/Write

Right-click the Virtual Disk (lower left corner) and select Delete

Right-click the Storage Pool and select Delete

Launch the Azure Stack script again and verify that the disk verification passes

If it does not, collect logs as described below and post a message for help in this group
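The Server Manager steps above can also be scripted; a sketch using the Storage cmdlets (run as administrator, and double-check that you are deleting the leftover Azure Stack pool and not one you still need):

```powershell
# Non-primordial pools are the user-created ones; the leftover Azure Stack
# pool from the previous installation will be among them.
Get-StoragePool -IsPrimordial $false | Set-StoragePool -IsReadOnly $false
Get-StoragePool -IsPrimordial $false | Get-VirtualDisk | Remove-VirtualDisk -Confirm:$false
Get-StoragePool -IsPrimordial $false | Remove-StoragePool -Confirm:$false
```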

Q: After starting my Microsoft Azure Stack POC host, why are all of my tenant VMs gone from Hyper-V Manager but come back automatically after waiting a bit?

A: As the system comes back up, the Azure Consistent Storage subsystem and resource providers need to re-establish consistency. The time needed depends on the hardware and specs being used, but it may take up to 45 minutes after a host reboot for tenant VMs to come back and be recognized. Note that this would not happen in a multi-system deployment, because you would not have a single box running the Azure Consistent Storage layer, unless you restarted all nodes at the same time, similar to a full restart of an all-up integrated system.

Q: Can I test usage data at this stage with TP1?

A: In TP1, usage data is reported only for Storage resources, but you can expect to see more services going forward. There is a REST API that any Azure Stack subscriber can call to get their own usage data, as well as REST APIs that providers can call to get data for their customers. Alternatively, calling Get-UsageAggregates in PowerShell is easier than calling the API directly. It asks for a start time and an end time and reports the data in either hourly or daily aggregation. This is consistent with Azure, so you can check the online documentation for Azure usage PowerShell for more instructions.
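A sketch of such a call, where the time window values are examples:

```powershell
# Requires the Azure PowerShell modules and an authenticated session
# against your Azure Stack environment.
Get-UsageAggregates -ReportedStartTime "2016-03-01" `
                    -ReportedEndTime "2016-03-02" `
                    -AggregationGranularity Hourly `
                    -ShowDetails $true
```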

Q: Can I use all SSDs for the storage pool in the POC installation?

A: Per the “hardware” section of the requirements page in the documentation, this is not supported in this release but will be improved in a future release.

Q: Can I use Nested Virtualization to test the Microsoft Azure Stack POC?

A: It is possible to deploy Microsoft Azure Stack POC TP1 using Nested Virtualization, and, like some of our customers, we have experimented with Azure Stack deployments on it. We understand it is a way to work around some of the hardware requirements. However, Nested Virtualization is a recently introduced feature and, as documented here, is known to have potential performance and stability issues. Additionally, the networking layer in Azure Stack is more complex than a flat network, and introducing MAC spoofing and other layers, on top of the potential performance impact at the storage layer, adds further complexity. In other words, we are definitely open to feedback about your experience using Nested Virtualization with Azure Stack, but remember that this is not one of the configurations we have thoroughly tested or fully support with this release.

Q: Can I use NVMe data disks for the Microsoft Azure Stack POC?

A: While Storage Spaces Direct supports NVMe disks, the POC supports only a subset of the possible drive types and combinations for Storage Spaces Direct. More specifically, the deployment script does not support NVMe because of the way bus types are discovered. While it is possible to edit the deployment script to make it run, we recommend using the disk/bus type combinations that have been tested for this release. Note that this applies only to the single-host TP1 “POC” installation; we added a comment to clarify this on User Voice. The limitations and architecture used to run the POC on a single box are not reflective of the architecture and capabilities that will be available as we provide releases of Azure Stack that can run on multiple nodes and scale beyond the single-box POC.

Q: I have deleted some virtual machines but still see the VHD files on disk. Is this expected?

A: Yes, this is expected in some cases. Here is how this is designed:

When you delete a VM, VHDs are not deleted. Disks are separate resources in the resource group.

When a storage account gets deleted, the deletion is visible immediately through ARM (portal, PowerShell, ...), but any disks it contains are kept in storage until garbage collection runs. Garbage collection has been updated to run every 2 days in this TP1 release.

So, if you delete a VM and nothing more, the VHDs will stay there, possibly for weeks or months. If you delete the storage account containing those VHDs, they should be deleted the next time garbage collection runs (within a maximum of 2 days, depending on when it last ran).

If you see "orphan" VHDs that have not been touched for more than 2 days, it is important to know whether they are part of the folder for a storage account that was deleted. If the storage account was not deleted, it's normal for them to still be there. If the storage account was deleted less than 2 days ago, it's also normal, because garbage collection may not have run yet. If the storage account was deleted more than 2 days ago then those VHDs should not be there, and this should be investigated.

Example flow:

Day 1 : Create a storage account and VM with VHDs in this storage account
Day 2 : Delete VM – VHDs remain, per design
Day 3 : Delete storage account (directly or via resource group) – which should be allowed since there is no VM still “attached” to the disks in the storage account
Day 3 + 2 (maximum, depending on last garbage collector run) : VHDs should be deleted

Note that this garbage collector enables a scenario where the Storage service administrator can "undelete" a storage account and get all the data back. For more information, see the Azure Consistent Storage/Storage Resource Provider document.

Q: Where can I find the Product Key to enter during Boot from VHD?

You can use the Datacenter key listed here. It can be entered during the first boot from VHD, or after installation if you skipped it during the first boot.

Additional Information

For more tips, tricks and known issues, as well as help, support and feedback, please see our Azure Stack forum here: