Microsoft's approach is subtly different – as should be expected with a product that's part of Visual Studio, it's focused on helping developers to avoid configuration drift and to perform repetitive system tests during the application development lifecycle, leaving the System Center management products to manage the movement of virtual machines between environments in the virtual infrastructure.

The VSTS approach attempts to address a number of fundamental issues:

Reproduction of bugs. It's a common scenario – a tester files a bug but the developer is unable to reproduce it so, after a few rounds of bugfix ping-pong, the incident is closed with a "no repro" status, resulting in poor morale on both sides. Lab Management allows the definition of test cases for manual testing (marking steps as pass/fail) and includes an action log of all steps performed by the tester. When an error occurs, the environment state can be checkpointed (including the memory, registry, operating system and software state), allowing the issue to be reproduced. A system of collectors is used to gather diagnostic data and, with various methods provided for recording tests (as a video, a checkpoint, an action log or an event log), it's possible to automate the bug management and tracking process, including details, system information, test cases and links to logs/checkpoints – all of this information is presented to the developer within the Visual Studio interface, and the developer has access to the tester's environment. In addition, because each environment is made up of a number of virtual machines – rather than running all application tiers on a single box – so-called "double-hop" issues, whereby the application works on one box but problems appear when it's scaled out, are avoided. In short: Lab Management improves quality.

Environment setup. Setting up test environments is complex but, using Lab Management, it's possible for a developer to use a self-service portal to rapidly create a new environment from a template – not just a single VM, but the many interacting roles which make up that environment: a group of virtual machines with an identity, for example an n-tier web application. These environments may be copied, shared or checkpointed. The Lab Environment Viewer allows the developer to view the various VM consoles in a single window (avoiding multiple Remote Desktop Connection instances) as well as providing access to checkpoints, allowing developers to switch between different versions of an environment and, because multiple environment checkpoints use the same IP address schema, supporting network fencing. In short: Lab Management improves productivity.
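The key idea – an environment as a named group of VM roles that is checkpointed and restored as a single unit, rather than VM-by-VM – can be sketched as follows. This is an illustrative Python model only; the class and method names are my own assumptions, not the Lab Management API:

```python
from dataclasses import dataclass, field

# Hypothetical model of a lab "environment": a group of VM roles with an
# identity, checkpointed together so the tiers stay consistent.
@dataclass
class LabEnvironment:
    name: str
    roles: dict                      # role name -> deployed state, e.g. {"web": "build-41"}
    checkpoints: dict = field(default_factory=dict)

    def checkpoint(self, label: str) -> None:
        # Snapshot every VM in the environment at once.
        self.checkpoints[label] = dict(self.roles)

    def restore(self, label: str) -> None:
        # Roll the whole group back to a named checkpoint.
        self.roles = dict(self.checkpoints[label])

env = LabEnvironment("n-tier-webapp", {"web": "build-41", "app": "build-41", "db": "build-41"})
env.checkpoint("clean")
env.roles["web"] = "build-42-broken"   # a bad deployment to one tier
env.restore("clean")                   # the entire environment reverts together
print(env.roles["web"])                # build-41
```

The point of the group-level operations is that a tester never restores the web tier to one build and the database tier to another – the environment is the unit of versioning.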

Building often and releasing early. Setting up daily builds is complex and Lab Management's ability to provide clean environments is an important tool in the application development team's arsenal. Using VSTS, a developer can define builds including triggers (e.g. date and time, number of check-ins) and processes (input parameters, environment details, scripts, checkpoints, unit tests to run, etc.). The traditional build cycle of develop/compile, deploy, run tests becomes develop/compile, restore environment, deploy, take checkpoint, run tests – significantly improving flexibility and reducing setup times. In short: Lab Management improves agility.
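The revised cycle is just an ordered pipeline with two extra steps around deployment. A minimal sketch in Python (the step functions are hypothetical stand-ins, not VSTS build activities):

```python
# Hypothetical step functions standing in for VSTS build-process activities.
def compile_sources(log):      log.append("compile")
def restore_environment(log):  log.append("restore")     # reset VMs to a clean checkpoint
def deploy(log):               log.append("deploy")
def take_checkpoint(log):      log.append("checkpoint")  # snapshot before tests, for later repro
def run_tests(log):            log.append("test")

# Lab Management build cycle: restore before deploy, checkpoint before test.
LAB_CYCLE = [compile_sources, restore_environment, deploy, take_checkpoint, run_tests]

def run_build(cycle):
    log = []
    for step in cycle:
        step(log)
    return log

print(run_build(LAB_CYCLE))   # ['compile', 'restore', 'deploy', 'checkpoint', 'test']
```

The "restore" step is what makes a daily build repeatable (every build deploys to an identical clean environment), and the "checkpoint" step is what makes a failing test reproducible afterwards.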

From an infrastructure perspective, Lab Management is implemented as a new role in Visual Studio Team System (VSTS), which itself is built on Team Foundation Server (TFS). Lab Management sits alongside Test Case Management (also new in Visual Studio 2010 – codenamed Camano), Build Management, Work Item Tracking and Source Control.

Vishal Mehotra, a Senior Lead Program Manager working on VSTS Lab Management in Microsoft's India Development Center, explained to me that, in addition to TFS, System Center Virtual Machine Manager (SCVMM) is required to provide the virtual machine management capabilities (effectively, VSTS Lab Management provides an abstraction layer on the environment using SCVMM). Whilst it's obviously Microsoft's intention that the virtualisation platform will be Hyper-V, because SCVMM 2008 can manage VMware Virtual Center, it could also be VMware ESX. The use of enterprise virtualisation technologies means that the Lab Management environments are scalable and the VMs may be moved between environments when defining templates (e.g. to take an existing VM and move it from UAT into production). In addition, System Center Operations Manager adds further capabilities to the stack.

Whilst the final product is some way off and the marketing is not finalised, it seems likely that Lab Management will be a separate SKU (including the System Center prerequisites). If you're looking to get your hands on it right now though, you may be out of luck – unfortunately Lab Management is not part of the current CTP build for Visual Studio 2010 and .NET Framework 4.0.

As a result of a query I had about the supportability (or otherwise) of running System Center Virtual Machine Manager (SCVMM) 2008 in a Hyper-V virtual machine, Clive Watson pointed me in the direction of Microsoft knowledge base article 957006, which discusses the support policy for running Microsoft server software in a virtual environment.

For anyone working with Microsoft software on a virtual infrastructure (even a non-Microsoft environment via the SVVP) it looks like a useful article to be aware of.

I’ve been capturing some network data using a computer with Hyper-V installed this evening and it’s worth noting that I needed to sniff a physical network connection to get anything meaningful. Thinking about it, that makes sense (Hyper-V implements a virtual switch – not a hub – so the traffic on each vNIC is isolated until it reaches a pNIC) but it may be something worth remembering.

A few days ago, I came across a couple of blog posts about how VMware Server won’t run on top of Hyper-V. Frankly, I’m amazed that any hosted virtualisation product (like VMware Server) will run on top of any hypervisor – I always understood that hosted virtualisation required so many hacks to work at all that if it saw something that wasn’t the real CPU (i.e. a hypervisor handling access to the processor’s hardware virtualisation support) then it might be expected to fall over in a heap – and it seems that VMware even coded VMware Server 2.0 to check for the existence of Hyper-V and fail gracefully. Quite what happens with VMware Server on top of ESX or XenServer, I don’t know – but I wouldn’t expect it to work any better.

I’ve always been impressed with John Craddock and Sally Storey’s presentations on Active Directory and related topics so, a couple of weeks back, I was pleased to catch up with them as they presented at the inaugural meeting of the Active Directory User Group.

In that session, John and Sally gave a quick overview of the new features in Windows Server 2008 Active Directory as well as the new read-only domain controller (RODC) functionality and, if that whetted your appetite (or if you missed it and think you'd like to know more), it may be of interest to know that John and Sally are running one of their XTSeminars later this month, looking at Windows Server 2008 infrastructure design, configuration and deployment. Topics include:

A few days ago, I was migrating a couple of legacy virtual machines from Virtual Server to Hyper-V. I used Matthijs ten Seldam’s VMC to Hyper-V Import Tool to save some time on the import process, although it is an import tool (not a migration tool), so I did need to move the .VHDs manually.

I realised that I’d forgotten to remove the Virtual Machine Additions before I shut the machines down in Virtual Server but I figured I would be able to do that in Hyper-V, before installing the Hyper-V integration components. Unfortunately, that didn’t work for me and attempting to uninstall the Virtual Machine Additions from the Control Panel Add or Remove Programs applet resulted in the following error message:

Virtual Machine Additions

You can install Virtual Machine Additions only on a virtual machine that is running a supported guest operating system.

“You can only install the Virtual Machine additions from within Hyper-V if the virtual machine is running Windows XP SP3, Windows Vista SP1, Windows Server 2003 SP2 or Windows Server 2008 and the additions are version 13.803, 13.813, or 13.820. If the operating system is different or the additions are different, they can only be removed from Virtual Server or Virtual PC.”

Clicking on the support information in Control Panel’s Add or Remove Programs applet told me that my Virtual Machine Additions were still at 13.552.0.0, so I had to load the VHD under Virtual Server to remove them, before copying the .VHDs back to Hyper-V (I could use the same virtual machine configuration that I had created earlier but had to remove the disks and add them again in order to assign the appropriate permissions).

After starting the VMs, I cancelled the Found New Hardware Wizard and installed the Hyper-V integration services (including an automatic HAL update and a couple of reboots). Because I’d used legacy (emulated) network adapters and allocated static MAC addressing (carrying forward the MAC addresses from Virtual Server), the guest operating systems didn’t notice that the underlying NIC had changed and so it wasn’t necessary to reconfigure TCP/IP settings.

In case you hadn’t noticed, it’s Microsoft’s conference season – PDC this week, WinHEC next, TechEd EMEA the two weeks after that… lots of announcements – and I’m missing them all!

Luckily, last week I got the chance to catch up with Ward Ralston (a Group Technical Product Manager in Microsoft’s Windows Server Product Group) and he gave me the rundown on what to expect from Windows Server 2008 R2.

For those who are not familiar with Microsoft’s release cycles for server operating systems, ever since Windows Server 2003, the company has aimed to release a major update every 4-5 years with an interim second release (R2) in between. Windows Server 2003 and Windows Server 2003 R2 share the same basic code but R2 includes SP1 and new functionality. Similarly, I would expect Windows Server 2008 R2 to include SP2 and it certainly has some goodies for us.

One of the reasons for an interim release is to take advantage of new hardware advances and changes in the overall IT market and one significant point to note is that Windows Server 2008 R2 will be 64-bit only. That’s right – no more 32-bit server operating system – and that is A Good Thing. We all have 64-bit hardware (and have had for some time) but many IT administrators don’t realise it, and install 32-bit operating systems even though driver support is no longer an issue (at least for servers) and most 32-bit applications will run quite happily on a 64-bit operating system.

The main themes for the Windows Server 2008 R2 release are: improved hardware, driver and application support; taking advantage of ever-increasing numbers of logical processor cores and new power management features; improvements around virtualisation, power management and server management; new technologies to lay the foundation for the next version of Windows; and a unified release focus – with the Windows 7 client and Windows Server 2008 R2 providing engineering efficiencies to work “better together”.

There are many new features in Windows Server 2008 R2 and, first of all, is the area of most interest to me – virtualisation. Windows Server 2008 R2 includes the second release of Hyper-V with new features including:

Live Migration to allow virtual machine workloads to fail over between cluster nodes with no discernible break in service. I still argue that this is not a feature that organisations need (cf. want) for their server infrastructure but as the dynamic datacentre and virtual desktop infrastructures (VDIs) become more commonplace, it makes sense to support this functionality with Hyper-V (besides the fact that competitors can already do it!).

A new clustered shared volume file system (codenamed Centipede) which sits on top of NTFS and allows multiple cluster nodes to access the same storage.

Support for 32 logical processors (cores) on the host computer (twice the original limit with Hyper-V), paving the way for support of 8-core CPUs and improved consolidation ratios.

Hot-addition and removal of storage (allowing VHDs and pass-through disks on a SCSI controller to be added to a virtual machine without a reboot).

Second-level address translation (SLAT) – moving past basic Intel VT and AMD-V to take advantage of new processor features (Intel Extended Page Tables and AMD Nested Page Tables), further reducing the hypervisor overhead.

Boot from VHD – using a kernel-level filter to take a virtual hard disk and boot from it on hardware – even without hardware support for virtualisation.

Microsoft also spoke to me about a dynamic memory capability (just like the balloon model that competitors offer). I asked why the company had been so vocal in downplaying competitive implementations of this technology yet was now implementing something similar and Ward Ralston explained to me that this is not the right solution for everyone but may help to handle memory usage spikes in a VDI environment. Since then, I’ve been advised that dynamic memory will not be in the beta release of Windows Server 2008 R2 and Microsoft is evaluating options for inclusion (or otherwise) at release candidate stage. These apparently conflicting statements, within just a few days of one another, should not be interpreted as indecisiveness on the part of Microsoft – we’re not even at beta stage yet and features/functionality may change considerably before release.

Looking at some of the other improvements that we can expect in Windows Server 2008 R2:

On the management front: there is a greater emphasis on the command line with improved scripting capabilities with PowerShell 2 and over 200 new cmdlets for server roles as well as power, blade and chassis management – working with vendors to deliver hardware which is compatible with WS-Management – and new command line tools for migration of Active Directory, DNS, DHCP, file and print servers; Server Manager will support remote connections, with a performance counter view and best practices analyzer (similar to the ones which we have seen shipped for server products such as Exchange Server for a few years now); and a new migration portal will expose step-by-step documentation for migration of roles and operating system settings from Windows Server 2003 and 2008 servers to Windows Server 2008 R2.

Power management was an improvement in Windows Server 2008 and R2 is intended to take this further with features such as core parking to reduce multi-core processor power consumption (only using the power required to drive a workload) as well as centralised control of power policies (allowing servers to throttle down during quiet times, using DMTF-compliant remote management interfaces).

Active Directory Domain Services is improved with: a new management console (with PowerShell integration) to replace the disparate tools that have existed since early NT 5.0 betas; a new AD recycle bin to aid with recovering deleted objects; improved support for offline domain joins (similar to the pre-staging support used in Windows Server 2008 for RODCs); improved management of user accounts and identity services (managed service accounts); and improved authentication assurance in Active Directory Federation Services.

IIS continues to improve with: server core support for ASP.NET; an integrated PowerShell provider (more than 50 new cmdlets); integrated FTP and WebDAV support (previously provided as extensions); new IIS Manager modules (e.g. to support new FTP, WebDAV, request filtering and ASP.NET functionality); configuration logging and tracing (building on IIS 7.0’s feature delegation functionality by providing the ability to centrally log and audit changes made by site managers and web developers); and extended protection and security (channel-binding tokens to prevent man-in-the-middle attacks, hardened accounts to prevent application spoofing, and improved management for custom service accounts).

Scalability and reliability improvements with: improved multi-processor support, reduced Hyper-V overhead and improved storage performance; greater componentisation – server core installations will support more roles and will also support ASP.NET within IIS as Microsoft .NET Framework support will be added (which also allows PowerShell to run on server core installations); DHCP failover, with the ability to pair DHCP servers as primary and secondary servers (based on an IETF draft for the DHCP Failover protocol); and DNS Security, using DNSSec to validate name resolution and zone transfers using PKI to secure DNS records (preventing the interception of DNS queries and return of illegitimate responses from an untrusted DNS server – a real issue with huge potential impact across multiple platforms that was recently highlighted by security researcher Dan Kaminsky).

Finally, whilst there has always been a good, better, best story for integrating the latest client and server releases with Microsoft products, Microsoft is really pushing “better together with Windows 7” with the Windows Server 2008 R2 marketing. New features like Direct Access and Branch Cache are intended to take existing connectivity technologies and couple them in a less complex manner, connecting routed VPNs over firewall-friendly ports with end-to-end IPSec whilst improving branch office performance by caching HTTP and SMB traffic. Read-only DFS improves branch office security (in the same way that read-only domain controllers did for Windows Server 2008). Then there’s more efficient client power management, BitLocker encryption on removable drives and the new DHCP Failover and DNSSec functionality mentioned previously – I’m sure as we learn more about Windows 7 the list will continue to grow.

So, when do we get to use all this Windows Server 2008 R2 goodness? Well, Microsoft is not yet ready to release a beta and, based on previous versions of Windows Server, I would expect to see at least two betas and a couple of CTPs before the release candidates – but the product team is currently not committing to a date – other than to say "early 2010" (which, incidentally, will be 2 years after Windows Server 2008 shipped). They're also keen to point out that, although Windows Server 2008 R2 is being jointly developed with the Windows 7 client operating system, there are no guarantees that the two will release together – maybe they will, maybe they won't – read into that what you like, but some are predicting a late-2009 release for Windows 7 and I would expect the server product to follow a few months after that. No-one needs to get a new server operating system out in time for the holiday season but they do want it to be rock solid.

Of course, at this early stage in product development, there could still be a number of changes before release. Even so, with these new features and functionality, Windows Server 2008 R2 is certainly not just an insignificant minor release.

Microsoft’s virtualisation portfolio is not complete (storage and network virtualisation are not included but these are not exactly Microsoft’s core competencies either); however it is strong, growing fast, and not to be dismissed.

Around about now, Microsoft is due to announce that they have released System Center Virtual Machine Manager (SCVMM) 2008 to manufacturing. For those watching Microsoft's virtualisation strategy unfold, this is an extremely important release – many of the critics of Hyper-V have been concerned about the management tools but SCVMM integrates with other System Center tools to provide a fully-featured management solution for both Hyper-V and VMware ESX – so organisations can manage their physical and virtual workloads as one, whether they are running a Microsoft or a VMware virtualisation platform.

I’ll write separately about the various System Center management products and how they complete the Microsoft Virtualization story but this post looks at some of the features in SCVMM 2008.

Originally released in 2007, SCVMM is a recent addition to the System Center family of management products and provides centralised management for virtual machines whilst integrating fully with other System Center products to allow administrators to use the same interface and common foundation that they use for managing a physical infrastructure in the virtual world.

Built on Windows PowerShell, making the product fully scriptable, SCVMM uses the concept of jobs which are executed against virtual machine hosts and guests for centralised management.

With the 2008 product release, Microsoft has added cross-platform management functionality (Hyper-V, Virtual Server and VMware ESX – note that VMware management does require Virtual Center in order to provide the necessary APIs and does not include non-task-oriented functions, such as cluster creation), integration with Windows Server 2008 failover clusters (including intelligent placement), delegated administration, and performance and resource optimisation (PRO) to provide guidance for administrators on automatic or manual actions when alerts are raised, integrating with the management frameworks provided by leading server hardware providers.

Microsoft’s algorithm for intelligent placement of virtual machine workloads uses the CPU, memory, network and disk requirements for virtual machines to project the required resources and then balance this with the defined resource thresholds for each host, before providing a rating for each host, according to its suitability for servicing a given virtual machine workload. It also takes into account the prospect of cluster node failure, whereas competitive solutions will allow resource overcommitment to artificially increase the consolidation ratio (but may be creating a problem if a node does fail). Through integration with SCOM, SCVMM can be used to discover potential virtualisation candidates and the product also includes the ability to perform physical to virtual (P2V) and unidirectional virtual to virtual (V2V) conversions.
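A placement rating of this kind might be sketched as follows. This is an illustrative Python model under my own assumptions (the resource fractions, threshold scheme and star scale are hypothetical, not Microsoft's actual algorithm):

```python
# host_free and vm_demand are fractions of each host's capacity, per resource.
# threshold is the maximum utilisation a host is allowed to reach.
def rate_host(host_free, vm_demand, threshold=0.8):
    """Return a suitability rating: 0 means placing the VM would push some
    resource past the host's threshold; otherwise score by the scarcest
    remaining resource (the bottleneck dictates suitability)."""
    headrooms = []
    for resource, demand in vm_demand.items():
        free_after = host_free[resource] - demand
        if free_after < 1 - threshold:       # would exceed the utilisation threshold
            return 0
        headrooms.append(free_after)
    return min(headrooms) * 5                # 0..5 "star" scale

hosts = {
    "host-a": {"cpu": 0.6, "memory": 0.5, "network": 0.9, "disk": 0.7},  # free capacity
    "host-b": {"cpu": 0.3, "memory": 0.1, "network": 0.8, "disk": 0.6},
}
vm = {"cpu": 0.2, "memory": 0.25, "network": 0.1, "disk": 0.1}           # projected demand

scores = {name: rate_host(free, vm) for name, free in hosts.items()}
print(max(scores, key=scores.get))   # host-a: host-b would be memory-overcommitted
```

Note how host-b is rated 0 rather than merely low: refusing to overcommit is exactly the conservatism described above, trading a lower consolidation ratio for headroom if a cluster node fails.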

Delegated administration should be a key consideration for infrastructure deployments and SCVMM enables this with a role-based model, including self-service. Templates may be used for rapid provisioning of new virtual machines and the web portal provides a quota system for users to create and destroy VMs, based on administrator-defined rules.

As for how to buy SCVMM – it will be available from November 2008 as a standalone product, or as part of the Server Management Suite Enterprise (SMSE) which allows organisations to use several System Center products to build a complete management solution for the entire infrastructure, both physical and virtual.

Management is clearly a strong element of Microsoft's virtualisation story and SCVMM addresses many of the issues that the basic tools provided with Hyper-V cannot. With the added advantage of the "Windows that you know" – i.e. familiarity for administrators – and, according to Microsoft, a greatly reduced total cost of ownership, SCVMM is not just a perfect companion to Hyper-V: it also provides management tools for legacy virtual infrastructure and finally brings enterprise virtualisation features within the reach of most organisations.
