Blog Archives

How to provide hard statistics on your build sequences.

A while back my boss gave me two goals for our OS deployments: 1) a target of 90% successful builds, and 2) build times as close to one hour as possible. Okay, getting there is one thing, but how do I report on that?

In this first installment we’ll work on getting the foundation set for building up the lab. We’ll configure the virtual networks, the host networking and get our MDT environment installed and configured. We are going to use a number of tricks that I’ve learned from others.

We’ve amassed a very large number of task sequences since migrating to Configuration Manager 2012 and it got me thinking about ways to archive off older sequences so that we can clean house. So I came up with this script.

The script first collects all of the task sequences in your site. Next it iterates through them, writes each sequence to an XML file named after the task sequence ID, and finally creates a CSV index of the TSID, task sequence name, the last modified date of the sequence, and when the sequence was backed up.
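The flow above can be sketched roughly like this. This is a minimal outline, not the full script: it assumes the ConfigMgr 2012 PowerShell module, a placeholder site code and archive path, and note that `Sequence` is a lazy WMI property, so you may need to call `$ts.Get()` before reading it on some console versions.

```powershell
# Sketch of the archive approach (site code "PS1" and C:\TSArchive are placeholders).
Import-Module "$($env:SMS_ADMIN_UI_PATH)\..\ConfigurationManager.psd1"
Set-Location "PS1:"

$archive = "C:\TSArchive"
$index   = @()

foreach ($ts in Get-CMTaskSequence) {
    # Write the sequence XML to a file named after the task sequence ID
    $ts.Sequence | Out-File -FilePath (Join-Path $archive "$($ts.PackageID).xml")

    # Record an index row: TSID, name, last modified, and backup time
    $index += [pscustomobject]@{
        TSID         = $ts.PackageID
        Name         = $ts.Name
        LastModified = $ts.LastRefreshTime
        BackedUp     = Get-Date
    }
}

$index | Export-Csv -Path (Join-Path $archive "TSIndex.csv") -NoTypeInformation
```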

Several weeks ago Johan Arwidmark published an article about creating a Windows 10 ISO using the install.esd file generated from the upgrade process. He also included a PowerShell script to automate the process. His article can be found here.
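The details are in Johan's article, but the heart of the process is exporting the compressed install.esd into a WIM that Windows Setup can use. A hedged sketch of that one step, with placeholder paths and an assumed image index (list the indexes first before exporting):

```powershell
# List the images inside the ESD so you know which index to export
dism /Get-WimInfo /WimFile:C:\ESD\install.esd

# Export the chosen image into install.wim for the ISO's sources folder
# (index 1 is an assumption -- use the index reported above)
dism /Export-Image /SourceImageFile:C:\ESD\install.esd /SourceIndex:1 `
     /DestinationImageFile:C:\ISO\sources\install.wim /Compress:Max /CheckIntegrity
```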

I’ve used this a number of times and it works wonderfully. Call me odd, but I have a set of 4 virtual machines that I use simply to generate the ISOs and installation source files. I have a pair of Windows 10 Professional (32bit and 64bit) and a pair of Windows 10 Enterprise (32bit and 64bit) VMs. I use the Professional SKU in my lab and the Enterprise SKU for testing at work.

I modified Johan’s original script to automate some “branding” of the process. For example, the ISO generated includes the build number, SKU and architecture. When a new build is released to the Fast Ring my VMs update and then I just run this script. The script determines what build, SKU and architecture the VM is running and generates a unique name for the ISO as well as the parent folder that also contains the contents of the ISO.
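The detection piece can be sketched as below. This is an illustrative outline, not the actual script: it pulls build, SKU, and architecture from `Win32_OperatingSystem` on the running VM, and the resulting name format is just an example.

```powershell
# Gather build, SKU and architecture to compose a unique ISO name
$os    = Get-WmiObject Win32_OperatingSystem
$build = $os.BuildNumber                                   # e.g. 10240
$sku   = ($os.Caption -replace 'Microsoft Windows 10 ','') -replace '\s',''
$arch  = $os.OSArchitecture -replace '\D',''               # '64' or '32'

# Example result: Windows10_Enterprise_x64_10240.iso
$isoName = "Windows10_${sku}_x${arch}_${build}.iso"
```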

First off, I want to thank Johan Arwidmark for the core code used in my script. His blog posting can be found here.

Disk space is tight on my development VM host, very tight. You cannot get too many VMs running on just 256GB. So, I decided I’d make the switch to running Hyper-V on Server 2012 R2 and take advantage of Data DeDuplication. Johan speaks highly of it, so I thought I would give it a try.
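Getting DeDuplication going is only a few lines. A sketch, assuming the VMs live on a D: volume (Server 2012 R2 added the HyperV usage type specifically for running VM workloads):

```powershell
# Install the feature, enable DeDupe on the VM volume, then check the savings
Install-WindowsFeature -Name FS-Data-Deduplication
Enable-DedupVolume -Volume "D:" -UsageType HyperV

# After an optimization job has run, see how much space was reclaimed
Get-DedupVolume -Volume "D:" | Select-Object Volume, SavedSpace, SavingsRate
```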

On each of my 2 hosts I have 8 VMs running, and after DeDuplication I have plenty of disk space for more.

They say a picture is worth a thousand words…

I have 8 virtual machines in this folder. With DeDupe I am able to store 270GB of VMs in less than 6GB of space.

I put together this script to process the drive. It automatically shuts down any running VMs, runs the DeDuplication job, and then restarts the VMs it shut down.
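The flow can be sketched like this. It's a minimal outline of the approach rather than the full script, and the drive letter is an assumption:

```powershell
# Remember which VMs are running so only those get restarted afterwards
$running = Get-VM | Where-Object { $_.State -eq 'Running' }

# Graceful shutdown via the guest integration services
$running | Stop-VM

# Run an optimization job against the VM volume and wait for it to finish
Start-DedupJob -Volume "D:" -Type Optimization -Wait

# Bring the same VMs back up
$running | Start-VM
```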

This was an old problem that I first ran into last Spring and I gave up after getting nowhere. I had forgotten all about it until this morning when a friend and fellow SCCM warrior Paul Winstanley wrote and asked me about it as he was getting the same failure. (Check out his writings here and here.)

First, some background…

Back in May 2014 I was having problems getting the Export-CMDriverPackage and Export-CMTaskSequence PowerShell cmdlets working. At the time I was looking for a way to easily move content from our development site to our production site.
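For context, these are the calls in question, roughly as documented (package IDs and paths are placeholders, and parameter names have shifted between ConfigMgr console versions, so check `Get-Help` for yours):

```powershell
# Export a task sequence (optionally with dependencies/content) to a share
Export-CMTaskSequence -TaskSequencePackageId "ABC00123" `
    -ExportFilePath "\\server\share\ABC00123.zip"

# Export a driver package the same way
Export-CMDriverPackage -Id "ABC00456" `
    -ExportFilePath "\\server\share\ABC00456.zip"
```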

Ran into this today. I have a PowerShell script that builds a series of VMs as part of a lab build-out. After upgrading my machine to Windows 10 the script no longer works. What is supposed to happen is that it starts building a VM and loops every couple of minutes looking for the VM’s state to see if it is “Running”. At the end of the VM’s build MDT shuts the VM down. The script sees that the VM is no longer running, powers it back on and moves on to the next VM.

What happens when I run this on Windows 10 Hyper-V is that the script never notices that the VM has shut down.

As you can see, the VM is powered off, but the Get-VM PoSh cmdlet still shows it as running…
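The polling logic that breaks down is essentially this. A simplified sketch with a placeholder VM name, which works as expected on 2012 R2 Hyper-V but never sees the 'Off' state on the Windows 10 host:

```powershell
$vmName = "REF001"   # placeholder reference-build VM

# Wait for the VM to come up and start building
do { Start-Sleep -Seconds 120 }
until ((Get-VM -Name $vmName).State -eq 'Running')

# MDT shuts the VM down at the end of the build; wait for that
do { Start-Sleep -Seconds 120 }
until ((Get-VM -Name $vmName).State -eq 'Off')

# Power it back on and move on to the next VM
Start-VM -Name $vmName
```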

We leverage Collection Variables a lot in SCCM 2012 for our OS deployments. Using them to store things like the domain join account's or the "run as" account's passwords makes maintenance simple and saves us from slogging through the entire task sequence, possibly missing an instance, when the time comes to change the passwords.

But this creates its own hassles, namely making sure that the variables are not only set, but set correctly, on every collection when setting up an OS deployment. What's the saying? If you need to do something twice, automate it?
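One way to automate it is with the ConfigMgr 2012 module's collection-variable cmdlet. A hedged sketch, where the collection name, variable name, and value are all placeholders:

```powershell
# Create a masked collection variable on an OSD collection
# (-IsMask $true hides the value in the console, as you'd want for a password)
New-CMDeviceCollectionVariable -CollectionName "OSD - Windows 10" `
    -VariableName "DomainJoinPassword" -Value "P@ssw0rd" -IsMask $true
```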