Blog Archives

Last week I wrote a post about setting up Windows PE Peer Caching. One of the limitations of that feature is that it only works within the Windows PE portion of a build task sequence. Once you are in the full OS, or for deployments to established clients, Peer Caching is unavailable.

Phil Wilcock, co-founder of 2Pint Software, commented and pointed out that you could use Peer Caching for getting the OS deployed and then use BranchCache within the full OS.

Now, I’ll be honest here. I understand the “textbook” when it comes to BranchCache, but I had never actually set it up. It always seemed to fall under “One day I’m going to have to give that a try.” Well, that day is today. This post will get BranchCache working with Configuration Manager. Once that is done, my next step will be getting 2Pint’s BranchCache for OSD up and running.
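As a rough sketch of the client side, the BranchCache PowerShell module (Windows 8 / Server 2012 and later) can put a machine into distributed cache mode. In Configuration Manager the distribution point is enabled for BranchCache in its properties; the client piece below is just one way to do it (Group Policy is another), and the 10% cache size is an arbitrary example:

```powershell
# Enable BranchCache in distributed cache mode, where peers on the
# same subnet share downloaded content with each other.
Enable-BCDistributed

# Example only: cap the local BranchCache cache at 10% of the disk.
Set-BCCache -Percentage 10

# Confirm BranchCache is enabled and the service is running.
Get-BCStatus | Select-Object BranchCacheIsEnabled, BranchCacheServiceStatus
```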

“Unless you try to do something beyond what you have already mastered, you will never grow.”

Configuration Manager 1511

After completing the first three parts of this series you will have a virtual lab with 4 separate network segments, all connected to and routed through a Windows Server 2012 R2 machine (RTR01) acting as the router. This server also provides Internet access to any virtual machines connected to the 4 network segments. You will also have an Active Directory domain controller (DC01) that provides DHCP and DNS services to the lab.

In Part 4 we are going to build out a Configuration Manager 1511 infrastructure. This will include a Primary site server (CM16) and two Distribution Points (DP16a and DP16b).

RTR01

Routing between subnets and access to the Internet (required for Windows Activation) is handled by RTR01, a Windows server running Routing and Remote Access (RRAS). This should be the first virtual machine built and configured, since machines on the other subnets need it in place to build successfully.

This virtual machine will have 5 network adapters, one on each network. The build will create a basic Windows server. To configure the server you will need to run some PowerShell as well as manually configure RRAS.
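The PowerShell portion of that configuration looks roughly like the following. This is a sketch, not the exact lab script: the adapter names and the 192.168.1.x addressing are placeholder assumptions, and only one of the internal adapters is shown.

```powershell
# Install the Routing role service (part of Remote Access) with its tools.
Install-WindowsFeature -Name Routing -IncludeManagementTools

# Rename the adapters so each network is easy to identify in RRAS.
# 'Ethernet' / 'Ethernet 2' are the default names; yours may differ.
Rename-NetAdapter -Name 'Ethernet'   -NewName 'External'
Rename-NetAdapter -Name 'Ethernet 2' -NewName 'LAN1'

# Give each internal adapter a static address (one subnet shown here;
# the address is an example for this lab, not a requirement).
New-NetIPAddress -InterfaceAlias 'LAN1' -IPAddress 192.168.1.1 -PrefixLength 24
```

LAN routing and NAT are then switched on manually in the Routing and Remote Access console, which is the manual RRAS step mentioned above.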

In this first installment we’ll work on getting the foundation set for building up the lab. We’ll configure the virtual networks, the host networking and get our MDT environment installed and configured. We are going to use a number of tricks that I’ve learned from others.

Setting up a lab can be a pretty time consuming project. A number of people, myself included, have created various hydration kits in an attempt to make it easier. One thing that many of them have in common is that they use the Microsoft Deployment Toolkit (MDT) to generate a large build ISO to be used for building each virtual machine.

Using an MDT build ISO has both advantages and disadvantages. It is portable. It is simple. But it takes massive amounts of disk space and changes are very time consuming.

First off, I want to thank Johan Arwidmark for the core code used in my script. His blog posting can be found here.

Disk space is tight on my development VM host, very tight. You cannot get too many VMs running on just 256GB. So, I decided I’d make the switch to running Hyper-V on Server 2012 R2 and take advantage of Data DeDuplication. Johan speaks highly of it, so I thought I would give it a try.

On my 2 hosts I have 8 VMs running on each and after DeDuplication I have plenty of disk space for more.

They say a picture is worth a thousand words….

I have 8 virtual machines in this folder. With DeDupe I am able to store 270GB of VMs in less than 6GB of space.
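Enabling DeDuplication on the VM volume is only a few cmdlets. A minimal sketch, assuming the VMs live on D: (substitute your own volume) and noting that deduplicating running VM storage on 2012 R2 is officially supported only for VDI scenarios:

```powershell
# Install the Data Deduplication role service.
Install-WindowsFeature -Name FS-Data-Deduplication

# Enable dedup on the VM volume and process files regardless of age
# (the default waits several days before touching a file).
Enable-DedupVolume -Volume 'D:'
Set-DedupVolume -Volume 'D:' -MinimumFileAgeDays 0

# Run an optimization job now, then check the space savings.
Start-DedupJob -Volume 'D:' -Type Optimization
Get-DedupStatus -Volume 'D:'
```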

I put together this script to process the drive. It automatically shuts down any running VMs, processes the DeDuplication and then starts the VMs that it shut down back up.
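The script isn't reproduced here, but its logic can be sketched as follows. This is my reconstruction of the described behavior, not the author's exact code, and D: is again an assumed volume:

```powershell
# Remember which VMs are currently running, then shut them down
# so their VHDX files are closed and can be deduplicated.
$running = Get-VM | Where-Object { $_.State -eq 'Running' }
$running | Stop-VM -Force

# Process the volume and block until the optimization job finishes.
Start-DedupJob -Volume 'D:' -Type Optimization -Wait

# Start only the VMs that were running before.
$running | Start-VM
```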

[Update 2: After much trial and error I have narrowed my upgrade problem down to Hyper-V. If it is enabled on 9926, then the upgrade to 10041 fails for me. So whatever changed with Hyper-V between 9926 and 10041 is the root of the problem.]

I have had quite a difficult time moving from Windows 10 Build 9926 to Build 10041. Prior build updates didn’t give me this much grief.

My system was a pretty simple setup. Hardware-wise it’s an older AMD A8 with 24GB of memory. I have a single 120GB SSD as the boot drive, a pair of 240GB SSDs in a Storage Spaces pool and a single 1TB HDD. On the software side it’s just Windows 10 (9926), Office 2013 and Client Hyper-V.

I had my machine in the Fast Ring so back on 18 March I ran the upgrade to 10041 and it failed.

Video Driver Problem?

Some searching led to reports of people getting this error when upgrading from Windows 8 to 8.1, caused by Nvidia video card drivers. I do have an Nvidia video card, but it was using the built-in driver from build 9926. I thought I would give it a shot anyway and upgrade to the latest drivers direct from Nvidia.

No luck. Still failed with the same error.

Moving to Slow Ring

I rebuilt my system with 9926 and switched to the Slow Ring. I wanted to buy some time to see about finding anything on the errors before attempting to upgrade again.

This time I only had Client Hyper-V and the Storage Spaces pool. I was focused on spinning up some VMs to test the upgrade.

Then on 25 March Build 10041 went out to the Slow Ring and my system ran the upgrade.

And failed with the exact same error.

Starting All Over

At this point I thought I would start over with a clean slate. So I rebuilt my system with 9926. This time I didn’t install Office and Hyper-V, nor did I configure the Storage Spaces. So all I had was a basic install of 9926 and nothing else.

This time the upgrade worked.

So, what was the culprit?

I don’t know. Client Hyper-V? Storage Spaces? I honestly could not say. I may try to rebuild back with 9926 and try the upgrade with one or the other. Maybe I can narrow it down to a single thing that did me in.

Regardless, it doesn’t bode well if a Windows upgrade could not handle features that are native to Windows.

I’ve updated my MDT Lab Builder with the addition of building a virtual router using Windows Server 2012 R2. I referenced this article from Johan when setting it up. You do not need any additional software; the router build uses the same Server 2012 R2 source used to build the other VMs.

The VM (RTR01) will not be joined to the lab domain. I keep it in a workgroup so that it isn’t reliant on the existence of the lab domain and can be used with other projects. The configuration of Routing and Remote Access will allow your lab machines to reach the Internet (as long as your external network has access to the Internet). You can use this VM to explore routing using a Windows Server. You could hang additional virtual switches off of it and configure your own lab network with multiple subnets. Experiment with using DHCP relay on the router. Use Distribution Points in other LANs or even grab an evaluation copy of Nomad Branch from 1E and see how that works.

Ran into this today. I have a PowerShell script that builds a series of VMs as part of a lab build-out. After upgrading my machine to Windows 10, the script no longer works. What is supposed to happen is that it starts building a VM and loops every couple of minutes, checking whether the VM’s state is “Running”. At the end of the VM’s build, MDT shuts the VM down. The script sees that the VM is no longer running, powers it back on and moves on to the next VM.

What happens when I run this on Windows 10 Hyper-V is that the script never notices that the VM has shut down.

As you can see, the VM is powered off, but the Get-VM PoSh cmdlet shows it as still running….
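For context, the polling loop in question looks roughly like this (a reconstruction of the behavior described above, with an example VM name, not the original script). On Windows 10 Hyper-V the `State` property reported here never left “Running”, so the loop spun forever:

```powershell
$vmName = 'DC01'   # example VM name, assumed for illustration

# Poll every two minutes until the VM reports it has stopped.
do {
    Start-Sleep -Seconds 120
    $state = (Get-VM -Name $vmName).State
    Write-Host "$vmName is $state"
} while ($state -eq 'Running')

# MDT has shut the VM down; power it back on and continue the build.
Start-VM -Name $vmName
```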