Friday, January 28, 2011

A nice surprise arrived in my mailbox yesterday. (Thank you, Microsoft, for this recognition :) )

“The Microsoft Community Contributor Award is reserved for participants who have made notable contributions in Microsoft online community forums such as TechNet, MSDN and Answers. The value of these resources is greatly enhanced by participants like you, who voluntarily contribute your time and energy to improve the online community experience for others.”

I spend most of my spare time in the Hyper-V forum, but also in the SQL and the various Virtual Machine Manager forums. I really enjoy solving puzzles, and I try to help people with their questions related to Microsoft Virtualization. It gives me great satisfaction when the questions are answered, and when people come back to ask us further questions.

Thursday, January 27, 2011

I want to present a checklist for what you need to consider if you intend to create a Failover Cluster for Hyper-V.

The key word is common components. If you want a successful implementation, you had better make sure that everything is supported.

Server Hardware:

I've created some clusters by now, and since I've been part of the planning process as well, I have fortunately been able to make it clear that the servers in a cluster should have the exact same configuration and components. This is really basic. You may have cluster nodes with different CPUs, but they should at least be from the same manufacturer. We'll come back to that later.

Remember that the whole idea of a High Availability solution is that if one node fails, the workloads should fail over to a second (or a third) node. That brings us to the RAM. It's a good idea to have enough RAM installed on every node so that the VMs on node 1 can also run on node 2. You should even calculate the size of the RAM so that both hosts are able to run the entire workload, especially if you're planning for an active-active cluster.

So let's assume that the CPUs are identical and so is the amount of RAM. What about the firmware, BIOS settings, NICs, and storage? If you are not familiar with the Cluster Validation wizard, you should spend some time with it. It will pick up on every little detail that may affect the cluster's stability, guide you through whatever you have configured wrong, and at least suggest what to do about the errors/warnings.

One great thing about Cluster Validation is that it will not deny the creation of the cluster even if there are not ‘enough’ NICs installed. This is nice if you intend to use the cluster for testing/training.
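If you prefer the command line, validation can also be run from PowerShell on 2008 R2. A minimal sketch (the node names are placeholders):

```powershell
# Run cluster validation from PowerShell (Windows Server 2008 R2 and later).
# The node names below are placeholders - replace with your own hosts.
Import-Module FailoverClusters

# Run all validation tests against the intended cluster nodes
Test-Cluster -Node "HV-Node1","HV-Node2"
```

When it completes, the cmdlet writes the same HTML validation report as the wizard (a Validation Report .mht file, typically in the current user's Temp folder).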

Network:

Network is the core of everything today - also when it comes to clusters. Communication between the nodes, iSCSI to storage, dedicated NICs for host management, VMs, and so on. Best practice is to assign at least one NIC dedicated to a single purpose. This ensures adequate performance, security, availability and stability. One example is that the Live Migration process relies on a good network configuration and should have a dedicated network for its purpose.

CPU:

In an ideal world the CPUs would be identical. But the ideal is not always possible, and they should at least come from the same manufacturer.

CPUs differ in the way they manage memory, and the available instructions vary. If you do not have identical CPUs, you must enable ‘Migrate to a physical computer with a different processor version’ in the CPU settings on the VM.

Storage (SAN):

If you want HA and a Failover Cluster, you must have some sort of shared storage, and the storage provider must support iSCSI or FC. In addition, SCSI-3 Persistent Reservations, a command set that controls disk arbitration, must be supported. The focus is quite often on the hosts when it comes to CPUs and RAM, but a key to a successful Hyper-V implementation is that the storage is well configured. A VM is nothing but a set of files on a disk, and may be very I/O intensive. And since the VMs are located on shared storage that may be connected through iSCSI, the network throughput must be adequate.

Quorums and the voting:

For your cluster to act as a cluster, there must be some mechanism that identifies the failure of a node and the health of the cluster. For this we have majorities, quorums, and voting.

One thing to note is that you can't combine a mix of Server Core and Full installations of Windows Server Enterprise/Datacenter in the cluster. You could, however, mix Enterprise and Datacenter (the validation would give a warning).

If your budget is low and you are familiar with Hyper-V, you could also use the free Microsoft Hyper-V Server 2008 R2 to build a cluster. This edition runs the Windows kernel and is based on Enterprise/Datacenter, which supports Failover Clustering.

1)Online P2V with SCVMM

Yes, P2V means Physical 2 Virtual. But as you may know already, the source does not need to be a physical machine. With that knowledge, you can figure out that you could run an Online P2V on a machine even if it is a virtual machine (please, do not consider Domain Controllers for this purpose).

-Pros: An easy backup method if you have SCVMM and the bandwidth required for the transfer.

-Cons: Requires bandwidth, storage and some expertise, and it's not free (though you could use the 180-day trial of SCVMM)

2)Windows Server Backup

WSB, which is included in 2008 and 2008 R2 (through both GUI and cmdlets), is fully VSS-aware and can minimize the VM downtime during backup. I personally ignored WSB in 2008 and missed the good old NTBackup. But in 2008 R2, WSB comes closer to NTBackup, and I find it very useful.

-Pros: “No” software cost, since it is included in Windows Server

-Cons: Requires expertise to manage

3)DPM 2010 (or other Enterprise solutions)

DPM is the enterprise backup and recovery solution when it comes to Microsoft products.

It provides an optimized backup and recovery solution for Microsoft-based technology that ensures supportability, reliability, and customer satisfaction with the core operating system or application. DPM is intended to ensure that customers are confident in their Hyper-V deployment, because they are assured of reliable protection and recovery. One of the key benefits of DPM is that it uses only those constructs provided by Hyper-V in order to protect Hyper-V.

-Pros: Designed by Microsoft for Microsoft products. The best protection for Windows Virtualization. Protection of servers running on CSV in Hyper-V R2

-Cons: Not free

4)Export/Import

This one is one of my favorites. An easy way to protect your VMs when you have the possibility of some downtime during the night. Using either the Hyper-V Manager console or the WMI APIs to export a VM is a quite simple, cheap, and effective way to create a backup. I wrote a post about Export/Import in Hyper-V on my blog: http://kristiannese.blogspot.com/2010/12/things-you-should-know-about-import-and.html and it should give you the basics of the procedure.

-Pros: “No” software cost since it is included in Windows Server/Hyper-V, does not require bandwidth if exporting to DAS, flexible recovery, easy to use. Can be scheduled automatically with scripts.

-Cons: Requires storage (yes, it creates a copy of your VM, and includes snapshots as well), the VMs need to be powered off.
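To automate exports (for example from a scheduled task at night), the WMI API can be scripted. A hedged sketch against the 2008 R2 v1 provider; the VM name and export path are placeholders, and the VM should be powered off first:

```powershell
# Sketch: export a powered-off VM through the Hyper-V WMI provider (root\virtualization).
# $vmName and $exportPath are placeholders - adjust to your environment.
$vmName     = "TestVM"
$exportPath = "D:\Exports"
$ns         = "root\virtualization"

# Find the VM and the virtual system management service
$vm  = Get-WmiObject -Namespace $ns -Class Msvm_ComputerSystem `
         -Filter "ElementName='$vmName'"
$svc = Get-WmiObject -Namespace $ns -Class Msvm_VirtualSystemManagementService

# Second argument ($true) = also copy the VM state (VHDs and snapshots),
# not just the configuration
$result = $svc.ExportVirtualSystem($vm, $true, $exportPath)
$result.ReturnValue   # 0 = completed, 4096 = job started asynchronously
```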

5)Manual backup and restore

Remember that VMs are nothing but a set of files on a disk. This means that you can copy, move, duplicate, and protect your VMs as what they are - yes, files. You can move and recover a VM via an entirely manual process.

-Pros: No software cost, easy to use, no bandwidth if using DAS. Can be scheduled automatically with scripts.

-Cons: Requires storage (same as export), expertise (know what you're doing, since the VMs may have snapshots), and the VMs need to be powered off.
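Since a powered-off VM is just files, the manual approach can be as simple as a mirrored copy of the VM folder. A sketch (the paths are placeholders); remember that the folder must include the configuration, the VHDs and any snapshot files:

```powershell
# Sketch: file-level copy of a powered-off VM folder to backup storage.
# The paths are placeholders. /MIR mirrors the directory tree;
# /R and /W limit retries and wait time on locked files.
robocopy "D:\VMs\TestVM" "E:\VM-Backup\TestVM" /MIR /R:2 /W:5
```

Scheduled through Task Scheduler, this gives you the "automatic with scripts" option mentioned above.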

As you can see, the IC provides more services than just the synthetic devices, so let's take a closer look at each of the above.

·Operating System Shutdown

With this enabled, the VM will be shut down from Hyper-V Manager when you select the ‘Shut down’ button. In other words, this allows you to shut down the VM as if you were logged into the console and issued the shutdown command. This is not the same as the ‘Turn off’ button in Hyper-V Manager. (In Norway, we actually call this the ‘Swedish button’..)

·Data Exchange

This is the service that provides a way to exchange management data (FQDN, OSName, OSBuildNumber, OSMajorVersion, OSMinorVersion, etc.) between the Hyper-V host and the VM. WMI can be called to retrieve this information, which gives you the opportunity to use PowerShell to build management capabilities based on the obtainable results.
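As a sketch of what this looks like in practice, the exchanged data can be read from the host with PowerShell and WMI. The VM name is a placeholder, and the sketch assumes the Data Exchange service is enabled and the IC is running in the guest:

```powershell
# Sketch: read a guest's Key-Value Pair (KVP) data from the Hyper-V host.
# "TestVM" is a placeholder name.
$vm = Get-WmiObject -Namespace root\virtualization -Class Msvm_ComputerSystem `
        -Filter "ElementName='TestVM'"

$kvp = $vm.GetRelated("Msvm_KvpExchangeComponent")
foreach ($item in $kvp.GuestIntrinsicExchangeItems) {
    # Each item is a small XML document with Name/Data properties
    $xml  = [xml]$item
    $name = ($xml.INSTANCE.PROPERTY | Where-Object { $_.NAME -eq "Name" }).VALUE
    $data = ($xml.INSTANCE.PROPERTY | Where-Object { $_.NAME -eq "Data" }).VALUE
    "{0} = {1}" -f $name, $data
}
```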

·Heartbeat

This is the service that reports the state of health of the VM to the host. It indicates whether the VM is ‘alive’, and enables management agents to take actions based on the status of the VM.

·Time Synchronization

This is the setting for the system clock. It keeps the VM's clock synchronized with the system clock of the Hyper-V host. You may find it useful to disable this service when the time in the virtual environment should be different from the host time. Also note that a physical machine has a battery-operated clock from which it gets its initial time when it is booted (unlike a virtual machine). So the VM will always get its initial time from the Hyper-V host, regardless of this setting. When the VM is up and running and you disable this service, further synchronization is disabled as well.

·Backup (Volume Snapshot)

When you are using backup features that depend on the Volume Shadow Copy Service (VSS), you need this one enabled. If you uncheck this service, the VM will be placed offline before a VSS backup is taken.

Tuesday, January 18, 2011

When an operating system is supported for Hyper-V, it means that it supports the Integration Components/Integration Services.

This is an optimization technique for a guest operating system: it makes the OS aware that it is running in a virtual machine environment and shapes its behavior accordingly. There are some important and great benefits to the IC. The VSC (Virtual Service Client) is a synthetic device that resides in a child partition. When you install the IC, you also install the VSC, which makes the VM able to communicate over the VMBus with the VSP (Virtual Service Provider).

The VSP, in the parent partition, provides the support needed by the synthetic devices, as requested by the VSCs in the child partitions. In other words: the VSP is how the VMs gain access to the physical devices on the parent.

The IC does not require changes to the OS, and helps reduce the overhead of certain operating system functions such as disk access and memory management.

The Integration Components supports the following services:

1)Synthetic devices (IDE, NIC, SCSI, mouse, video)

2)Time Synchronization

3)Data Exchange

4)Volume Shadow Copy Service

5)OS Shutdown

6)Heartbeat

What if the IC is not installed?

VMs without the IC installed (so-called ‘legacy guests’) do not have access to the VMBus or VSCs. In other words, there is no support for synthetic devices. This affects performance, as the hardware devices are instead emulated in software in the parent partition. Back to performance: VMs without the IC installed perform at a much lower level than VMs with the IC installed. Software emulation of a hardware device may require thousands of instructions, compared to a few instructions directly in the hardware.

In addition, the IC is included in the distribution of Windows Server 2008 R2 and Windows 7, so there is no need to install the IC separately after deployment in Hyper-V.

But to understand Hyper-V in general, I think it‘s important to know the architecture.

Hyper-V is a microkernelized hypervisor (not a monolithic one, where the various hardware device drivers are part of the hypervisor): only the functions that are absolutely required to share the hardware among the virtual machines are contained in the hypervisor.

The reason why Hyper-V supports all the various OSes and hardware devices is that the parent partition runs a Windows Server OS.

Hyper-V requires a parent, a root partition, running Windows Server 2008 R2 x64 (Hyper-V Server 2008 R2 is not a Windows Server, but runs a Windows kernel). The virtualization ‘stack’ runs in the parent partition and has the required direct access to all the hardware devices in order to share them among the child partitions (VMs). When you install the Hyper-V role in Windows Server 2008 R2, the hypervisor is installed between the physical hardware and the Windows kernel at system boot time. This turns the Windows installation into a special guest: the parent. The parent is still the boss when it comes to access to the hardware, but it is responsible for providing additional services to the other partitions (child partitions/VMs).

How does the parent and child partitions communicate?

The VMBus is a communication mechanism (high-speed memory) used for inter-partition communication and device enumeration on systems with multiple active virtualized partitions. If you do not install the Hyper-V role, the VMBus is not used for anything. But when the Hyper-V role is installed, the VMBus is responsible for the communication between parent and child partitions with the IC installed.

Remember that child partitions do not have direct access to the physical hardware on the host. They are only presented with virtual views (synthetic devices). The synthetic devices for the storage, networking, graphics and input subsystems take advantage of the IC when it is installed. The IC is a very special virtualization-aware implementation which utilizes the VMBus directly and bypasses any device emulation layer.

VMs make requests to the virtual devices, and these requests are redirected via the VMBus to the devices in the parent partition, which handles the actual requests. It's important to have supported VMs with the IC installed to have this access to the hardware through the VMBus. The parent partition uses the VSP (Virtual Service Provider) and the VMs use the VSC (Virtual Service Client) when communicating through the VMBus.

In general, those are Microsoft's recommendations, and most of the time I truly respect MSFT's recommendations. But I have never encountered any issues when exporting a DC VM from host A to host B, as long as the VM on host A never comes online again. I guess that's why you want to move the DC VM: to use it on host B, and not as a clone.

From that perspective, I can't see how there would be a USN rollback on a DC that is exported and imported. Exporting a VM does not change the SID etc., since the VM is powered off before the export. Everything is included in the export, so I can't see how it should create a USN rollback.

I have never had issues with this - at all, with this procedure.

But if you could not export/import virtual Domain Controllers, how could you actually move a virtual Domain Controller? You can even run a P2V on a DC, as long as it is an offline conversion. The key rule in both cases: don't start both machines (the source and the ‘new’ VM).

Monday, January 3, 2011

Let's take a (really quick) look at some interesting parts of Windows Azure through the eyes of an IT pro.

If you could answer these questions to yourself, then you know why this is relevant for you and your job-role.

·Affinity groups

·Deployment

·Updates

·Monitoring

·Security

·Backup

·VM Role/Worker Role/Web Role

·Azure Connect

Pay close attention to Affinity Group. Affinity groups are intended to group dependent Windows Azure Services and deploy those in one place if possible.

Think of it as a Failover Cluster: Azure will deploy those services of yours in one place if possible. The benefits here are lower cost (bandwidth within the data center is free of charge, but transactions are still charged) and performance, especially if your services are dependent on each other. Key word: network hops.

Windows Azure will optimize the deployment on services where you have specified two or more hosted services in the same Affinity Group.

Hosted services and storage services can both be located in the same Affinity Group.

What about the deployment?

- Start to get your Apps running locally.

Explore the different stages in deployment: Staging vs. Production, port number and protocol, service definition changes, service configuration changes, affinity, upgrade domains, operating system versions.

If you're using SQL Server and planning to deploy applications on Azure, you should also have a database migration plan from SQL Server to SQL Azure. It is therefore important to know the differences between SQL Server and SQL Azure. And how would you back up your databases in SQL Azure?

When should you use Azure Connect? Would it be suitable for an IT-pro to secure, create, connect, and manage this?

The different roles: Web/Worker/VM - what suits you and your applications? Don't make the mistake of considering the VM Role an IaaS solution; it is rather a new method for deployment.

And last: what about PowerShell? If you're experienced with this scripting tool, then you might find your skills very useful here as well. You can deploy, manage, monitor, and respond to events in Azure with PowerShell - and that's exactly what an IT pro should do, also in the Cloud.

I started answering questions and sharing knowledge in the forums late in August 2010, primarily in the Hyper-V forum.

Earlier, I contributed mostly in the ‘offline’ community, doing some charity work and teaching some great children at the local school. I wanted to contribute in a way that gave the students a better understanding of PCs and infrastructure, rather than just Facebook and online gaming. :) Virtualization was of course the baseline for every session. It was quite successful.

I also participate in an offline user group, where we share knowledge and experience from our work day - everyone with a different background, since we're not living in a city with a high number of citizens.

But I have to say, the forums are really fun to participate in.

I guess I'm a bit addicted to following the threads, reading other great MVPs' contributions, and trying to answer some questions myself.

So to all of my fellows in the MS forums: Happy New Year! I hope we will learn a lot from each other in 2011 as well, and be able to help every single one of you who gives us the opportunity to share our knowledge in a technology that we are so devoted to.