
This morning, I arrived at the office to find my Windows 10 Anniversary Update desktop had crashed after rebooting for yesterday’s Windows Updates. No matter what I did, I couldn’t get it to boot. I took to the interwebs and quickly found this link:

*Reset the Device Guard RegKeys (delete the DG regkey node) and then enable Hyper-V in RS1

*Reset the Device Guard RegKeys (delete the DG regkey node) and then upgrade to RS1 while keeping Hyper-V however customers want (ON or OFF are both fine)

*Disable Bitlocker till 8/23

After speaking to a colleague of mine (who I’m guessing would prefer to remain nameless), I found that it is in fact possible to recover from this catastrophe, assuming you have your 48-digit Bitlocker recovery key: go through the Windows 10 Recovery options, enter the recovery key, and boot to a command prompt.

Once you’ve found the drive you want to decrypt (most likely C:), you’ll use the following Bitlocker decryption command:

manage-bde -off C:

You can use the following command to get a view of where things are, both before and after you’ve started decrypting:

manage-bde -status

Update – someone posted via the forum discussion that you can also just disable Bitlocker rather than decrypt the drive, using this command:

manage-bde -protectors -disable c:

Assuming you have a lot of data, and will re-enable in another week, you may prefer to go that route. I’ve not tested this one, but it seems like it should work.
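To put the whole flow in one place, here is the sequence from the recovery command prompt (C: is an assumption; check the status output for your own encrypted volume):

```powershell
# See which volumes are encrypted and what state they're in
manage-bde -status

# Option 1: fully decrypt the OS drive (what I tested; can take a while)
manage-bde -off C:

# Option 2 (untested by me): just suspend the protectors instead of decrypting
manage-bde -protectors -disable C:

# For option 1, re-run this until "Percentage Encrypted" reaches 0.0%
manage-bde -status C:
```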

After you see that the drive is decrypted to 100%, you should hopefully be able to reboot back into Windows. At least, this worked for me and my unnamed colleague.

After you’re back in Windows, I assume you’ll want to keep the drive decrypted until the Windows servicing update mentioned above arrives. Alternatively, I believe you can just disable Hyper-V.

Good luck. Give me a shout if you ran into this, and/or if this helped you.

Some of you out there that checked in on this site regularly or had an RSS feed may have wondered what happened three years ago, and why I never posted again.

Though I can’t say it was 100% of the reason, most of it had to do with the arrival of our second child, who’s now a very lively three-year-old. It’s different with everyone, I know, but this time around, a much more concerted effort was needed for a couple of years to get through the wild times of the baby and toddler stages.

I think I can safely say that the clouds have begun to clear, and I’ll be starting to post again pretty regularly about many of the things I used to post about, such as virtualization, PowerShell, fitness, technology, gadgets, and such. In the meantime, I’m migrating the site from Squarespace to a self-hosted WordPress site.

I’ve gone back and cleaned up the worst of the migration mess (orphan HTML randomly spewed upon pages, missing images, et cetera), but I’m sure I’ll have missed a lot. Most of the info is dated enough that I doubt it’ll matter much other than for posterity, but I did find a few items during the import that I’d like to go back and revisit soon.

For the remaining days of our 12 Days of Hyper-V Tips and Tricks, I’ll be focusing on new features that are coming in Windows 8. I’ve been using Hyper-V since it first shipped, and with each release, more and more of the “must haves” and “nice to haves” have been filling in, to the point that with Windows 8, I’m not looking for much more in my Virtualization solution. Some of my favorite things that are new in Windows 8 are:

Cluster Shared Volume (CSV) 2.0

In-box NIC Teaming

Storage Migration

Concurrent Live Migration

Hyper-V Cmdlets

Hyper-V Replica

Today, we’ll focus on NIC Teaming.

In last year’s MMS/Tech-Ed Hyper-V FAQ Tips and Tricks sessions, we had a few questions about NIC teaming, and Nathan Lasnoski wrote up this response regarding NIC teaming in Windows 2008 R2 SP1, posted here:

“How do I enable Hyper-V NIC teaming?”

Although Microsoft has offloaded this capability to the network card manufacturers, it is a capability that works, assuming you’ve configured the teaming software properly. There are several different types of load balancing configurations (in Broadcom BASP and Intel):

*Smart Load Balancing with Failover: This implementation is sort of like multicast, where all the switch ports have different MAC addresses, and it theoretically can be implemented without any switch changes. We’ve found this relatively easy to configure, but prone to network integration issues.
*Link Aggregation (802.3ad): This implementation aligns with the IEEE 802.3ad (LACP) specification. In this configuration, all adapters receive traffic on the same MAC address, and you’ll need a switch which supports LACP. I’ve seen people have a lot of success with this option.
*Generic Trunking (FEC/GEC) / 802.3ad-Draft Static: This implementation is similar to 802.3ad link aggregation, but instead of integrating with LACP, it uses a trunking mechanism at the switch level, such as EtherChannel. We’ve had success with this on Cisco, HP, and Dell switches. This implementation type has been the predominant option we’ve used because of its ease of configuration and because we’ve experienced very few issues with it. Note that on Intel NICs, this configuration is called “Static Link Aggregation”, as opposed to “IEEE 802.3ad Dynamic Link Aggregation”.

To configure the NIC teaming integration with Hyper-V follow these configuration steps:

*Install Hyper-V role and clear networks

*Install and configure teaming software

*Connect the team to a Hyper-V virtual network

Additional Tips:

We’ve found it useful to enable “VLAN Promiscuous Mode” if the feature is available, as that allows VLAN tagging to work properly.

Make sure to fully test your configuration before moving into production. This is especially true for live migration and access to teams from other networks or VLANs. Also, if you run into issues with virtual machine networking, make sure you aren’t running into an IC or hotfix issue that is unrelated to teaming.
*We have tended to be very careful about offloading features, often disabling them completely

Teaming in Windows 8

In Windows 8, the story changes pretty dramatically. Microsoft is looking to support NIC Teaming natively in a totally vendor-agnostic way. Regardless of your NIC brand, you just run a simple cmdlet (or make a few clicks in a GUI if you prefer) and you have a team. Virtualization MVP Alessandro Cardoso just wrote a great post on NIC teaming here, and Virtualization MVP Didier Van Hoye just wrote a great post on NIC teaming here, so rather than re-post that content, I’ll direct you their way. They are quick and easy reads, and I recommend taking a look.
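As a quick taste, here’s a sketch of what that one-liner can look like (cmdlet names are from the pre-release bits and may change; the NIC, team, and switch names are examples):

```powershell
# Create a switch-independent team from two physical NICs (names are examples)
New-NetLbfoTeam -Name "VMTeam" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# Then bind a Hyper-V virtual switch to the new team interface
New-VMSwitch -Name "VMSwitch" -NetAdapterName "VMTeam" -AllowManagementOS $true
```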

Microsoft also just released a 34-page whitepaper this past week which goes deep into the new feature, which you can find here.

In my early testing, especially on the Beta, I’ve absolutely loved the new feature, and it’s now part of our default build process. You can set it up a couple different ways, depending on what you’re trying to achieve (teaming the guest NICs and host NICs separately, teaming all your NICs together and sharing between the host and guests, et cetera).

There will be more to come on this soon, but for today, know that if you’ve avoided NIC teaming with Hyper-V to date due to the complexities of the 3rd Party implementations, take a fresh look at the new in-box LBFO feature. It may be just what you’re looking for!

Day 6 in our continuing series of Hyper-V FAQs, Tips, and Tricks deals with inbox drivers.
Some might say that this one is obvious and goes without saying, but I’ve run into lots of people that have just installed Hyper-V as is, and begun to deploy workloads, so I think it’s important to bring this one up in case there are those out there that might skip this important step.

There are primarily two places I’ve personally seen inbox drivers wreak havoc on a deployment: Network Interface Cards (NICs) and Host Bus Adapters (HBAs). Especially with NICs, I’ve regularly seen a variety of issues, ranging from poor network performance in the guest and/or parent partition to intermittent network failures in the guest and/or parent partition. A lot of these issues stem from RSS, Offload, Chimney, and other advanced networking features, and it’s almost a guarantee that you’ll see improvements if you update to the latest network drivers. I’ve found this to be especially true with 10 Gigabit network cards like the Intel X520.

We’ve also found issues with the inbox drivers on our HBAs in early Windows 8 testing, in both the Developer Preview and the Beta, and the first thing we do on a new install now is replace the inbox HBA drivers.

Historically, we’ve seen these issues in Windows 2008, Windows 2008 R2, and Windows 8, and I see no reason that this will change.

To reiterate, though it’s generally a good idea to always update inbox drivers on any physical server you deploy, I’ve found that it can be more critical on Hyper-V workloads due to the extra things that are going on around virtualized NICs (and in Windows 8, virtual HBAs as well).

This concludes Day 6 of the series. I’ll be posting the rest from sunny Las Vegas. If you’re heading there this week for MMS 2012, feel free to give me a shout, and don’t forget to check out SV-B313 “Hyper-V FAQs, Tips, and Tricks!” on Wednesday at 4PM.

For those following along, yes we missed a couple days there in our 12 days of Hyper-V FAQs, Tips, and Tricks. In my defense, I was building out a new scale-out production Hyper-V Cluster on Win8, and it was all so exciting, I lost all track of time for a day or two. 🙂

More on Windows 8 soon.

But back to the focus – Tips and Tricks.

The Dynamic VHD topic is one that’s near and dear to my heart, and also another one that’s a bit controversial. Depending on who you ask, the answers can be “it’s fine – use Dynamic VHDs and don’t think twice” to “don’t use them at all in production“. I lean toward the former statement (with some caveats), and I’ll do my best to explain why.

First, I want to state that I strongly disagree with the statement “You should use fixed disks in production, but dynamic disks are OK for test and dev”. This blanket statement, though great as a type of CYA for someone who either doesn’t know the details of a deployment or doesn’t have time to go into them, is far from accurate in many (if not most) of the cases I’ve come across.

I also want to start out by saying that a fixed VHD will always outperform a dynamic VHD (sometimes by 10%, sometimes more), just like a Corvette will always outperform a Ford Fiesta. However, I drive a Ford Fiesta, because it’s a heck of a lot cheaper, it provides 100% of what I need to get where I’m going, and when I drive to work with mostly 40 mph speed limits along the way, I’m never the least bit aware of any limitations in performance. I can take the difference in money saved and apply it to whatever else I want.

Likewise with Dynamic VHD, when you look at the cost of storage, and balance that against performance requirements for your application, more often than not, I think you’ll find that the money you can save by going with Dynamic VHD can significantly outweigh the benefits of using Fixed VHD for the workload.

In my role with Indiana University’s Auxiliary Information Technology Infrastructure team, I support about 300 virtual machines running a variety of workloads, from File, Print, and IIS to SQL, Oracle, SharePoint, and System Center. We run all of these workloads in Dynamic VHD, all on Cluster Shared Volumes, and our performance is acceptable. Could it be 10% (or more) faster? Yes. Would that potentially cost us tens or hundreds of thousands of dollars more in storage (or force us to constantly re-evaluate disk size and try to grow the disk via scripts)? Yes. From a cost/benefit analysis, we chose Dynamic VHD.

From a tips and tricks perspective though, make sure you monitor the disk space where the VHDs live! The number one issue people hit with Dynamic VHD (as well as with snapshots) is that they fail to watch the disk or LUN from the parent, and after humming along for a year, find themselves out of disk space with a bunch of crashed VMs. Make sure you stay ahead of your space requirements on the Hyper-V side. We personally have a Thin Provisioned SAN (Dell Compellent), and I set all our LUNs to 2TB, so we never run into issues from that perspective, but if you aren’t thin provisioning your LUNs, take extra care to monitor your space.
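A quick way to keep an eye on this from the parent is a small scheduled script. This is just a sketch (the 15% threshold is my example, and the property path assumes the FailoverClusters module on Windows 8-era PowerShell):

```powershell
# Warn when any Cluster Shared Volume drops below 15% free space
Import-Module FailoverClusters

Get-ClusterSharedVolume | ForEach-Object {
    $partition = $_.SharedVolumeInfo.Partition
    if ($partition.PercentFree -lt 15) {
        Write-Warning ("{0} is down to {1:N1}% free" -f $_.Name, $partition.PercentFree)
    }
}
```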

I’ll close with my CYA caveat as well – for some scenarios, as Aidan described in his post, you can hit perf issues. It depends a lot on the kind of storage underneath your deployment. I can safely say that we’ve not seen the fragmentation issue that Aidan describes, and I don’t actually think that VHDs would grow like that on disk to begin with, for a few reasons: one is that Windows doesn’t just write chunks of data to the disk starting at the beginning and working to the end, for reasons like the one he mentions. Another is that if you’re using a big virtual SAN array, those blocks are spread across tens of disks anyway.

So, as I’ve said before, I’ll say it again. Test dynamic VHD in your environment. If it performs well, go for it. There are lots of guidance docs recommending against it depending on the scenario, but most of those are for CYA reasons. Choose wisely, but don’t be afraid to try these out and use them in production. I do, and am happy we made the leap.

Warning – this example is based on pre-release Windows 8 code, and is subject to change before shipping. This is just a point in time example that may not work in 6 months. Use at your own risk, and validate in the lab.

New-VM, Set-VM

At the most basic level, you only really need a couple cmdlets to get your VM up and running (and one additional cmdlet to cluster it): New-VM and Set-VM.
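Here’s a minimal sketch of that (these are pre-release cmdlets, and the VM name, path, and switch name are all examples, not a definitive recipe):

```powershell
# Create the VM pointing at an existing VHDX on a Cluster Shared Volume
New-VM -Name "Web01" -MemoryStartupBytes 1GB `
    -VHDPath "C:\ClusterStorage\Volume1\Web01\Web01.vhdx" `
    -SwitchName "VMSwitch"

# Tune it: dynamic memory, memory cap, and vCPU count
Set-VM -Name "Web01" -DynamicMemory -MemoryMaximumBytes 4GB -ProcessorCount 2

# One more cmdlet to make it highly available on the cluster
Add-ClusterVirtualMachineRole -VMName "Web01"
```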

As always, the beauty of PowerShell is that you can take this little snippet and bring it into something much more wondrous and magical with just a little bit of work. In my case, I have a workflow that will do the following:

Create a PS-Session to a host within my cluster

Create a new folder on my Cluster Shared Volume based on the VM parameter

Copy the Gold Image VHD to the Clustered Shared Volume

Mount the VHD on the parent partition

Inject the Computer Name and IP Address, and “IP of the Provisioning Server” information into the VHDx (to create a firewall rule in real-time)

Unmount the VHD

Create the VM (using above code)

Power on the VM

Wait a bit of time for the VM to run through its autounattend and come online

With about 25 lines of PowerShell, you can have your own highly flexible Scripted Cloud solution without installing any additional software! (My script is actually about 300 lines, due to lots of debugging code, error handling, et cetera, but the work is done in about 25 lines.)
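The steps above can be sketched roughly as follows. This is not my production script: there’s no error handling, and every host name, path, and the unattend-editing step are placeholders for illustration:

```powershell
param($VMName = "Web01", $IPAddress = "10.0.0.50")

# 1. Remote into a host within the cluster (host name is an example)
$session = New-PSSession -ComputerName "HV01"

Invoke-Command -Session $session -ScriptBlock {
    param($VMName, $IPAddress)

    # 2-3. Stage a copy of the gold image on the Cluster Shared Volume
    $vmPath = "C:\ClusterStorage\Volume1\$VMName"
    New-Item -Path $vmPath -ItemType Directory | Out-Null
    Copy-Item "C:\ClusterStorage\Volume1\Gold\Gold.vhdx" "$vmPath\$VMName.vhdx"

    # 4-6. Mount the VHDX, inject the unattend settings, dismount
    Mount-VHD -Path "$vmPath\$VMName.vhdx"
    # ... edit unattend.xml on the mounted volume using $VMName/$IPAddress ...
    Dismount-VHD -Path "$vmPath\$VMName.vhdx"

    # 7-8. Create and power on the VM
    New-VM -Name $VMName -MemoryStartupBytes 1GB -VHDPath "$vmPath\$VMName.vhdx"
    Start-VM -Name $VMName

    # 9. Give the autounattend time to finish
    Start-Sleep -Seconds 300
} -ArgumentList $VMName, $IPAddress
```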

It’s been almost a full year since I wrote this post on C-States, but I wanted to post an update and state that it still applies. Though most of the issues have been ironed out these days, in early testing of Windows 8, a few issues have crept up around C-States again, and I’d recommend that if you ever run into odd unexplained performance issues on your deployment, this would be the first thing to double-check. One additional comment I wanted to add that I didn’t really mention in the post last year is that there might be multiple settings in your BIOS, such as one for C1E. You’ll want to disable C1E in addition to C-States.

I had an interesting chat a month or two back with an engineer from a certain related technology company who shall remain nameless, in which he argued that having C-States enabled on your server is like buying a high-performance race car but only driving below 55 MPH to save gasoline (or some metaphor like that, it doesn’t really matter), and that if you want performance, you should disable it. The irritating thing is that there’s a ton of press about energy efficiency and green initiatives and blah blah blah, going on about how well these Intel procs can manage power, sleep, or power down unused cores, but in the end that’s all total marketing: you just disable all of it and run with your amp turned to 11.

In the end, the best way to conserve power is to load each host as full as you can load it, and then power off the spare hosts, rather than spreading your workloads across all your hosts and letting them run at 50%.

This concludes day 2 of our “12 days of Hyper-V Tips and Tricks”. If you’re coming to MMS 2012, be sure to check out SV-B313 -Hyper-V FAQs, Tips, and Tricks!

Have any questions? Feel free to contact me directly, or leave a comment below.

One of the great additions of Windows 2008 R2 SP1 was the introduction of Dynamic Memory (DM) support for virtual machines. Since memory is generally the limiting factor on most stock Hyper-V deployments, Dynamic Memory can significantly increase VM density in many cases, especially those where you configure a VM’s memory based on the “recommended configuration” from a vendor, only to find that the server generally uses about 40-50% of that memory.

Dynamic memory is very simple to use. You just go into the VM settings and flip the radio button from “Static” to “Dynamic”, and then you can adjust “Startup RAM”, “Maximum RAM”, and “Memory Buffer”. Rather than go into those details here, I’ll simply reference this guide.
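For what it’s worth, in the Windows 8 bits the same settings are scriptable. A sketch (the cmdlet is from pre-release code, and the VM name and sizes are examples):

```powershell
# Flip a VM from static to dynamic memory and set the knobs
# (the VM must be powered off to change these)
Set-VMMemory -VMName "App01" -DynamicMemoryEnabled $true `
    -StartupBytes 512MB -MaximumBytes 4GB -Buffer 20
```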

The main things I want to point out from a “FAQs, Tips, and Tricks” perspective is that there are a few cases where you might encounter unexpected behavior:

Software Installation Doesn’t Meet Minimum Memory Requirements

Applications that perform their own memory management

Windows 2008 Standard and Web Edition

Available memory in the parent partition

Software Installation Minimum Memory Requirements

Because of the way Hyper-V sets up the “Startup RAM” on a VM, one scenario you run into on occasion is that software won’t install because the prereq checker doesn’t think it has enough memory to complete the installation (since Windows doesn’t allocate the memory until it’s needed). The quickest and easiest way around this issue is to crank the memory buffer up in the Hyper-V settings to something like 200%, run the installer, and then set the memory buffer back down to the default. This quick workaround doesn’t require a reboot, and takes less than 10 seconds. Your other options are to temporarily disable DM or artificially use up some extra memory in the guest, but I find the first option to be the quickest and easiest.
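On 2008 R2 this is a quick change in the VM settings GUI; on the Windows 8 bits you can script the same bump (a sketch, with an example VM name and pre-release cmdlet):

```powershell
# Temporarily crank the buffer so the prereq checker sees enough free memory
Set-VMMemory -VMName "App01" -Buffer 200

# ... run the installer inside the guest ...

# Then drop the buffer back to the default
Set-VMMemory -VMName "App01" -Buffer 20
```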

Applications that perform their own memory management

You’ll find over time that some applications and some databases (e.g. Oracle and SQL) don’t necessarily seem to perform the way you might expect once you enable DM. Generally, this behavior stems from the fact that memory is configured from within the app or DB, so the app will just keep pushing until it finds it can’t get any more memory or it reaches the max level that’s been set.

In such applications, you can leave DM enabled and receive some benefit, but you’ll want to go in and adjust the in-guest memory settings to line up with the Hyper-V settings. For example, say you have a multi-instance SQL server on a VM. With a bit of testing, you find that a particular database performs fine with 1 GB of RAM, but left to its own devices, the SQL instance will consume 3 GB of RAM. In this case, turning on DM doesn’t necessarily have the desired effect from an “optimal usage” perspective, so you’ll want to go into each instance and configure a memory cap. Once SQL is capped appropriately, the OS will continue to use what it needs from a DM perspective, and all will play nicely.

Windows 2008 Standard and Web Edition

Windows 2008 Standard Edition and Web Server Edition require a special hotfix to enable Dynamic Memory. For some reason, as of the time of this writing, it’s still a bit of a pain to get this hotfix. You have to contact Microsoft Customer Support Services (CSS) rather than download it directly, but it’s worth the headache if you still have quite a few Windows 2008 Standard VMs deployed.

Available memory in the parent partition

In our testing, we found some scenarios where, under heavy load, the parent partition may become starved of resources, and havoc began to ensue. This generally happens if you have “extras” running in the parent. Such extras could include antivirus, System Center agents (OM, VMM, DPM), hardware monitoring (OpenManage), et cetera. If you have a lot of this stuff, you might find that you need to up the memory reserves in the parent partition. For more information on this topic, see the “Troubleshooting” section here.

Hopefully, these tips will help you find your way to a successful deployment with Dynamic Memory and Hyper-V. Have any questions? Feel free to contact me directly, or leave a comment below.

It’s that time of year again! April 16 marks the beginning of this year’s sold-out Microsoft Management Summit. If you’re one of the lucky ones with tickets, I’ll be presenting Hyper-V FAQs, Tips, and Tricks again this year on Wednesday, April 18 at 4PM in Murano 3301 with fellow Virtualization MVP Nathan Lasnoski, and if it’s anything like last year, we’ll probably have a few special guest MVPs join us as well.

In order to ramp up for the big day, I’ll be doing a post a day highlighting Hyper-V FAQs, Tips, and Tricks, and will also begin to talk about some of the exciting things coming in Windows 8 Hyper-V, and how a lot of the tips and tricks when deploying Hyper-V in Windows 2008 R2 become less important or unnecessary in Windows 8. If you have questions, feel free to post them here or contact me directly, and we can either write up a post or discuss them at MMS.

First, a confession: ever since my first try in elementary school, I was never able to jump rope. I remember getting a “U” on my report card for “Unsatisfactory” in the P.E. skills sections where it documented my inability to perform this simple feat of coordination. Fast forward 30 years.

I joined Hoosier CrossFit in late September, and have loved being presented new challenges every week since I joined. When I find myself thoroughly embarrassed on a new workout, I resolve to “work it to death” until I have it right. One such challenge for me is jumping rope.

Last week, one of our workouts included “Death by Pull-ups”, where you perform one pull-up during the first minute (and then let the clock run out for the minute), then two pull-ups during the second minute, three in the third minute, et cetera, and repeat until you can’t make it through the minute with the required number of reps. On that workout, I made it to round 9, for a total of 8 rounds + 4 reps, or 40 total pull-ups.

Today, I decided to try to apply this workout to my nemesis, the jump rope, partly just to work on my form and partly to get my heart really pumping today. The first twenty minutes or so were pretty uneventful, but a great warm-up for the second half. The result: 46 rounds + 46 reps, for a total of 1127 jumps! My heart rate sat between 180 and 190 for the last 10 minutes or so. It definitely made me feel like I’d been through a workout by the end. And best of all, I no longer have to dread jump rope days. Now to get to work on double unders!

Health/Fitness update: I’m weighing in at 152 now, down 41 pounds from a bit over a year ago when I started. I ordered new blood work to check cholesterol, CRP, triglycerides, et cetera, but it seems to be slow going due to the holidays. I’ll post an update with the full year before/after report once that info arrives.

If you resolved to start a new eating or fitness program in the new year (may I suggest Paleo/CrossFit?), here’s hoping you make great progress!