A virtual machine generation, in Microsoft's terms, determines the virtual hardware and functionality presented to the virtual machine. Starting with Windows Server 2012 R2, Microsoft added Hyper-V generation 2 VMs. Generation 2 VMs have a simplified virtual hardware model and support Unified Extensible Firmware Interface (UEFI) firmware instead of BIOS-based firmware. Additionally, the majority of legacy devices are removed from generation 2 virtual machines. The relevance to an operating system such as Red Hat Enterprise Linux is that Microsoft's Hyper-V hypervisor innovation now happens on generation 2 VMs.

MondoRescue is a backup and recovery tool for Linux; it is packaged for various distributions and supports common architectures (i386, x86_64 and ia64). It allows online and offline backups to network storage, local disks, CD/DVD and tape. It supports a large variety of filesystems (including but not limited to ext2, ext3, ReiserFS and XFS) and partition/disk layouts (software RAID, hardware RAID and LVM1/2). During a restore, MondoRescue will also resize partitions depending upon the new disk geometry. Those coming from an HP-UX background may liken MondoRescue to HP Ignite-UX.

The methods MondoRescue uses to archive and restore a machine mean it is well suited for use as a physical-to-virtual or physical-to-physical (P2V/P2P) migration tool.
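For a flavor of how such a migration starts, a full-system archive to ISO images can look like the following; the options are the classic mondoarchive ones (-O create a backup, -i write ISO images, -d destination, -E exclude paths, -s media size), while the destination path and size here are illustrative:

# mondoarchive -Oi -d /backup -E /backup -s 4480m

The resulting images are then restored onto the empty disk of the target (virtual) machine, which is the P2V step.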

One of the issues with Hyper-V is that it does not virtualize the CD device; we therefore rely on the ATA driver in the guest operating system to manage CD-ROMs. What we would like to do is disable the ATA driver for all device types except the CD-ROM in the presence of Hyper-V.
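( A quick way to confirm that a FreeBSD guest really is in the presence of Hyper-V, assuming your release carries the kern.vm_guest sysctl, is:

# sysctl kern.vm_guest
kern.vm_guest: hv

A driver can make the same check in-kernel before deciding to step aside for everything but the CD-ROM. )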

A.M.> Then maybe the possibility of blocking/hiding specific ATA channels or
A.M.> devices could be investigated.

mav@:
>> Unfortunately, the CAM subsystem used for both ATA and SCSI stacks in
>> FreeBSD 9.x and above is mostly unaware of the "NewBus" infrastructure
>> used for driver probe and attachment. That is why you can't replace the
>> driver for a single disk in the same way as you replaced the driver for
>> the whole controller. The highest level present in "NewBus" is the ATA
>> channel. So if the disk and CD-ROM always live on different channels,
>> you can create a dummy driver not for the whole controller (atapciX),
>> but for a single hardcoded ATA channel (ataX).

Microsoft hyperv dev team:

> We have decided to have the DVD and the root device on different channels,
> as that appears to be the simplest way to address this problem. . . .
> . . . dummy driver for a single hardcoded ATA channel. The boot device has to be on the first channel;
> this would require the ISO device to be on the second channel.

Allow the legacy CDROM device to be accessed in a FreeBSD guest, while
still using enlightened drivers for other block devices.
Submitted by: Microsoft hyperv dev team, mav@
Approved by: re@

==

From: vvm To: mav@ ==Thanks for the patch!==

Microsoft hyperv dev team: ==Thanks, Alexander.==

Microsoft hyperv dev team: ==We tested the patch and it seems to work fine. We observed that the CDROM is not accessible on channel 0 but is accessible on channel 1, as intended. The solution is good enough, as the Hyper-V UI in general biases a user to attach the root device to channel 0 and the CDROM to channel 1, and we can further reinforce this for FreeBSD users. I think this solution is good enough for now and we can explore more later.==
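From inside the FreeBSD guest the resulting layout can be double-checked with camcontrol(8); with the root device on channel 0 and the ISO on channel 1, the CD device should be reported on the second ATA bus while the hard disks stay on the enlightened storage path:

# camcontrol devlist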

A.G.> [Karl is asking] if high availability failover scenarios will work for FreeBSD VMs on Hyper-V:
A.G.> if the power plug is pulled from the Hyper-V server,
A.G.> would the FreeBSD VM fail over and restart without any issues on the failover server?

Karl, do you want this behavior:

==you walk up and yank the power cord out of the back of the server... the secondary mirror will take over with zero client downtime== or something else?

Karl, do you use an entry-level fault-tolerant system such as the ftServer 2600 by Stratus Technologies, or an analogous product?

K.P.> – Pulling the power on the active node hosting both VM's (i.e. Windows
K.P.> guest, and FreeBSD guest) – this showed the remaining node trying to bring
K.P.> up the VM's (of which Windows came up OK, and FreeBSD [file system] got corrupted).

A.G.> Yes, it should work.
A.G.> My understanding is that the failover should be agnostic to the guest OS, but there could be some integration component that we might have missed.

( "Windows came up OK" look like because on this VM file system is NTFS )

K.P.> Hyper-V correctly sees the node fail, and restarts both VM's on the
K.P.> remaining node. Windows 7 boots fine (says it wasn't shut down correctly –
K.P.> which is correct) – but FreeBSD doesn't survive.
K.P.>
K.P.> At boot time we get a blank screen with "-" on it (i.e. the first part of
K.P.> the boot 'spinner') – and nothing else.
K.P.>
K.P.> Booting to a network copy of FreeBSD and looking at the underlying virtual
K.P.> disk – it appears to be trashed. You can mount it (but it understandably
K.P.> warns it's not clean) – however, any access leads to an instant panic ('bad
K.P.> dir ino 2 at offset 0: mangled entry').
K.P.>
K.P.> Trying to run fsck against the file system throws up an impressive amount
K.P.> of 'bad magic' errors and 'rebuild cylinder group?' prompts.

To Karl: I asked you about some details . . . Did you see the related e-mail?

Karl is asking if high availability failover scenarios will work for FreeBSD VMs on Hyper-V. He was specifically interested in knowing whether, if the power plug is pulled from the Hyper-V server, the FreeBSD VM would fail over and restart without any issues on the failover server.

My response was that yes, the above scenario should work. Thanks, Abhishek

Evaluating High-Availability (HA) vs. Fault Tolerant (FT) Solutions

10-06-2010 4:09 AM

High Availability Solutions

High availability solutions traditionally consist of a set of loosely coupled servers which have failover capabilities. Each system is independent and self-contained, yet the servers health-monitor each other, and in the event of a failure, applications are restarted on a different server in the cluster pool. Windows Server Failover Clustering is an example of an HA solution. HA solutions provide health monitoring and fault recovery to increase the availability of applications. A good way to think of it is that if a system crashes (like the power cord was pulled), the application very quickly restarts on another system. HA systems can recover on the order of seconds, and can achieve five 9's of uptime (99.999%)... but they realistically can't deliver zero downtime for unplanned failures. They are also flexible in that they enable recovery of any application running on any server in the cluster.

Fault Tolerant Solutions

Fault tolerant solutions traditionally consist of a pair of tightly coupled systems which provide redundancy. Generally speaking, this involves running a single copy of the operating system, and the application within it, consistently on two physical servers. The two systems are in lock step, so when any instruction is executed on one system, it is also executed on the secondary system. A good way to think of it is that you have two separate machines that are mirrored. In the event that the main system has a hardware failure, the secondary system takes over and there is zero downtime.

HA vs. FT

So which solution is right for you? Well, the initial and obvious conclusion most people instantly come to is that 'no' downtime is better than 'some' downtime, so FT must be preferred over HA! Zero downtime is also the ultimate IT utopia which we all strive to achieve. FT is also pretty cool from a technology perspective, so it tends to get the geek in all of us excited and interested.

However, it is important to understand that they protect against different types of scenarios... and the key aspect is to understand which are the most important to you and your business requirements. It is true that FT solutions provide great resilience to hardware faults: if you walk up and yank the power cord out of the back of the server... the secondary mirror will take over with zero client downtime. However, remember that FT solutions are running a common operating system across those systems. In the event of a software fault (such as a hang or crash), both machines are affected and the entire solution goes down. There is no protection from software fault scenarios, and at the same time you are doubling your hardware and maintenance costs. At the end of the day, while an FT solution may promise zero downtime for unplanned failures, in reality that holds only for a small set of failure conditions. With a loosely coupled HA solution such as Failover Clustering, in the event of a hang or blue screen from a buggy driver or leaky application, the application will fail over and recover on another independent system.

I've got a two-node server cluster: WS 2008 R2 x64, Hyper-V and CSV. Everything seems to be working fine, including live migration.

I am currently testing the functionality of the setup; here is my current layout:

Node A: VM 1
Node B: VM 2

When I simulate a host failure on node A, VM 1 transfers over to Node B but reboots the virtual machine before bringing it back up.

Is this normal behavior for clustering with CSV? I have another cluster set up in the same manner but without CSV enabled. It's been a while, but I'm sure that when this was tested, the virtual machine that failed over didn't reboot.

Is this a difference between High availability and Fault tolerance?

If any of you guys can shed some light, it would be of great help…

Thanks

Yes, it works as expected. "High availability" does not mean "no downtime". If you want zero (OK, close to zero) downtime then you need to either configure a guest VM cluster or use your app's built-in clustering features. If your app has none and it's not cluster-aware, consider moving to VMware to use its Fault Tolerance feature (no equivalent for Hyper-V so far).

. . .

As VR38DETT says, what you are seeing is normal. Live Migration, which moves a VM from one host to another, is a planned action. You tell the clustering software to move the machine. This gives the software time to copy the contents of the memory on the currently hosting machine to the memory of the destination machine. In a failover environment, there is no time for the memory on the failed machine to get copied. Therefore, all that can happen is to start the VM with a boot to get the memory loaded on the destination machine. That's a pretty typical definition of high availability.

Now with 2012 coming out, there is a capability the Microsoft engineers have built in called 'Replica'. It keeps a copy of a running virtual machine on another host. However, replication is asynchronous, so the copy is not always up-to-the-second current. But it gets much closer to what you are asking for.

Or, there are third parties, such as Stratus, that create a mirrored environment between two systems in order to keep two copies up to date. As you can imagine, there are additional costs involved in such a solution as this, so you need to make the business case for 100% availability.

And, as VR38DETT says, additional capabilities, like clustering the VMs at the operating system/application level, can provide a different sort of [near] continuous operation. I say [near] because it is definitely dependent upon the software you are running within the VM. For example, if you are running a particular type of SQL Server, you have SQL running on both nodes of a pair of clustered VMs, and if the Hyper-V host fails, SQL will continue operating on the surviving VM. But there may be a very brief period of unavailability while the surviving SQL VM takes ownership and starts serving out requests. Neither the OS nor SQL would have to restart in this environment, but it does take just a bit of time to transfer ownership of the resources.

Q: Are there any fault-tolerant solutions for Hyper-V?

A: Fault tolerance allows a virtual machine (VM) to carry on running without interruption, even in unplanned host failures such as a host crashing. This is different from high availability, part of Hyper-V, which in a host failure moves VMs to another host but has to restart the VMs, incurring a small outage to the VM. This is also different from planned outages, which allow a VM to be moved between hosts with no downtime using technologies such as Live Migration.

This fault tolerance is achieved by the VM running on multiple hosts with changes from the master replicated in real time to the slave. It should be noted that fault tolerance protects only from a crash of the host; any problem within the guest OS isn’t protected by fault tolerance as any guest problem would just replicate to the copy.

Hyper-V doesn't have a built-in fault-tolerant solution, but there are some options from third parties you can evaluate. However, fault tolerance for an application is typically better handled through application-aware solutions or guest clustering, which provide protection from guest OS crashes. (A good discussion of this can be found at this MSDN blog.) The two main third-party solutions are as follows:

2013-09-13: Good “hot news”:
VVM>> OK: when can we use Dynamic Memory hot-add in RHEL?
> RHEL-6.5 will add auto enable of hotplug memory for the balloon driver

–

http://www.phoronix.com/scan.php?page=news_item&px=MTMyMjU
===
The set of six new patches enhances their memory balloon driver to add support for memory hot-add. System memory for Linux guests is dynamically managed at run-time; the driver now implements the Windows Dynamic Memory protocol, which is a combination of ballooning and hot-add for the dynamic balancing of available memory across competing virtual machines.
===
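( On the Linux side, hot-added memory appears as new memory blocks under sysfs, and such blocks can arrive offline; the "auto enable" mentioned above refers to onlining them automatically. Done by hand it looks like this, with the block number purely illustrative:

# grep -l offline /sys/devices/system/memory/memory*/state
# echo online > /sys/devices/system/memory/memory42/state

)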


I believe what Artenmonk meant was something I experienced on WS 2012 (R1) and Centos 6.4. I noticed that dynamic memory was functioning and it seemed to adjust reasonably well. However, when you looked at an idle system, the load via ‘w’ or ‘uptime’ was always 1.00 rather than 0.00, but only when dynamic memory was enabled. Inside the WS2012 host, there did not appear to be any CPU load. I also experienced a few crashes of that VM. Disabling dynamic memory made it function normally and stable. I hope to re-test it with R2 soon.


Thanks for the answer.
Unfortunately, your (SUSE/Novell) bug tracker cannot be opened in IE from Windows Server editions.
I tried to do it from an openSUSE 12.3 LiveCD, but I cannot boot from it in a Hyper-V VM when "Dynamic Memory" is turned on.

See details on
{ this page }

Or ask the SUSE Support Team to write to me by e-mail for details.
==

2013-05-18

OpenSUSE 13.1 Milestone 1 Build0466 on Hyper-V

First impression:

Even the KDE LiveCD includes GParted 0.16.1 (with support for LVM2 PV resize/move).

See
hv_vss_daemon.c
==
An implementation of the host initiated guest snapshot for Hyper-V.
==
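( In effect the daemon quiesces each mounted filesystem while the host takes the snapshot, freezing and thawing it via the FIFREEZE/FITHAW ioctls. Where the fsfreeze(8) utility is available, the manual equivalent looks like this, with an illustrative mount point:

# fsfreeze -f /data
. . . host takes the snapshot . . .
# fsfreeze -u /data

)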

http://www.altaro.com/hyper-v/linux-on-hyper-v/

==
Backup

Hyper-V's built-in method of enabling backup of Windows virtual machines while they're powered on involves the tried-and-true Volume Shadow Copy Service (VSS) that Microsoft has employed since Windows 2000. VSS is triggered at the hypervisor level and it communicates through the integration components with VSS inside the virtual machine. If you're interested, we provided an earlier series that covered this in some detail. Since VSS is a proprietary Microsoft technology, your Linux virtual machines won't have it. As a general rule, Hyper-V backups will not be able to take a live backup of Linux guests. That's because if VSS in the hypervisor is unable to communicate with VSS through integration components, its default behavior is to save the virtual machine, take a VSS snapshot, and then bring the virtual machine back online. Some backup applications, notably Altaro Hyper-V Backup (Disclaimer: this blog is run by Altaro), can override this behavior and back up a Linux guest without interruption. Even with this capability, nothing can escape the fact that Linux does not have VSS. These backups will be taken live, but the backed-up image will be crash-consistent, not application-consistent. If you're not sure what that means, please reference the VSS article linked earlier.
==


Q. Can I back up Linux Hyper-V guest virtual machines (VMs) through Volume Shadow Copy Service (VSS), avoiding the need to stop the VMs?

A. VSS consists of several components, including VSS writers that are provided by application authors to enable a consistent backup to be taken of application and OS data without stopping the application or OS. These backups work by making a VSS request. The VSS writers are notified of an imminent backup, so they make sure data is flushed to disk and further activity is cached, ensuring the data on disk that’s being backed up is in a consistent state and is restorable.

Hyper-V extends this backup functionality by allowing a VSS backup to be taken at the Hyper-V host level. The VSS request is actually passed through the integration services to the OS of Windows VMs, which then notifies the registered VSS writers in the VM of the backup. So backups can be initiated at the Hyper-V host level and VM backups will still be consistent and usable, without actually doing anything in the guest OS.

Certain versions of Linux are also supported on Hyper-V, but Linux OSs don’t support VSS. So a backup taken on the Hyper-V host can’t tell the Linux OS in a guest VM to put itself in a backup-consistent state. To back up a Linux OS, either stop the VM while you take the backup or, if you can’t have downtime, perform the backup from within the Linux VM instead of at the Hyper-V host level.
==
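( Where the Linux VM keeps its data on LVM, one common in-guest approach is a snapshot-based backup; the result is crash-consistent rather than application-consistent, the same caveat noted above for live Linux backups. A sketch, with every volume name, size and path illustrative:

# lvcreate -s -n backsnap -L 2G /dev/vg0/root
# mount -o ro /dev/vg0/backsnap /mnt/snap
# tar -czf /backup/root-backup.tar.gz -C /mnt/snap .
# umount /mnt/snap
# lvremove -f /dev/vg0/backsnap

)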

8.2. Hyper-V

Integrated Red Hat Enterprise Linux guest installation and Hyper-V para-virtualized device support in Red Hat Enterprise Linux 6.4 allow users to run Red Hat Enterprise Linux 6.4 as a guest on top of Microsoft Hyper-V hypervisors. The following Hyper-V drivers and a clock source have been added to the kernel shipped with Red Hat Enterprise Linux 6.4:

a network driver (hv_netvsc)

a storage driver (hv_storvsc)

an HID-compliant mouse driver (hid_hyperv)

a VMbus driver (hv_vmbus)

a util driver (hv_util)

an IDE disk driver (ata_piix)

a balloon driver (hv_balloon)

a clock source (i386, AMD64/Intel 64: hyperv_clocksource)
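A quick way to confirm which of these drivers are active in a running guest is to list the loaded modules (the pattern here is illustrative):

# lsmod | grep -E 'hv_|hyperv'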

Red Hat Enterprise Linux 6.4 also includes support for Hyper-V as a clock source and a guest Hyper-V Key-Value Pair (KVP) daemon (hypervkvpd) that passes basic information, such as the guest IP, the FQDN, OS name, and OS release number, to the host through VMbus. An IP injection functionality is also provided which allows you to change the IP address of a guest from the host via the hypervkvpd daemon.
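( On the guest side, hypervkvpd keeps the key-value pairs it exchanges with the host in small pool files. The path below is the location typically used by the daemon, and the pool number is illustrative:

# strings /var/lib/hyperv/.kvp_pool_0

)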

. . .

Hyper-V Balloon Driver

On Red Hat Enterprise Linux 6.4 guests, the balloon driver, a basic driver for the dynamic memory management functionality supported on Hyper-V hosts, was added. The balloon driver is used to dynamically remove memory from a virtual machine. Windows guests support Dynamic Memory with a combination of ballooning and hot adding. In the current implementation of the balloon driver for Linux, only the ballooning functionality is implemented, not the hot-add functionality.
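( Note: because classic ballooning allocates guest pages to the balloon rather than unplugging them, MemTotal in /proc/meminfo stays constant while free memory shrinks as the host reclaims memory. A simple, purely illustrative way to watch the driver at work:

# dmesg | grep -i balloon
# free -m

)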

I'm new to the Unix world. I was able to install two virtual machines, one hosting an Apache web server and one a MySQL server. The problem is that the Integration Services are flawed:

a) They disable the ability to mount CDROMs in that virtual machine: http://support.microsoft.com/kb/2600152
I'm not sure if the INSMOD work-around will disable the faster disk driver? The command doesn't seem to be specific to the CDROM devices…

b) Worse, I'm unable to install any security fixes to the kernel, because of http://support.microsoft.com/kb/2387594
Unfortunately, the DKMS work-around is based on IC 2.1, when the ISO still contained the source. The 3.4 (and prior) versions are RPMs.

I wonder if there is a way to "uninstall" the integration services, "re-enable" the necessary native drivers, then run the kernel updates, and after rebooting, re-install the 3.4 integration services?

In general, I'm a bit surprised that the lack of CD/DVD support and the inability to run kernel updates haven't bubbled to the top of the priority list after so many months/years, as I would have expected every single user to encounter them.

in order to provide an optimized IDE driver (hv_blkvsc [VVM: hv_blkvsc is RIP; hv_storvsc handles _both_ IDE and SCSI disks after kernel v3.2, or with LIC/LIS v3.4]) for the root [VVM, to all sysadmins: on IDE only the boot loader (GRUB or syslinux) and /boot _need_ to be placed; everything _else_ (including /) is _best_ placed on SCSI. But because the current RHEL does not ship hv_storvsc on its install CD-ROM, we need to use "Mondo Rescue" after a Mondo backup] file system.

WORKAROUND:

To mount an ISO file in the virtual machine, the following command must be run before executing the mount command:

# insmod /lib/modules/$(uname -r)/kernel/drivers/ata/ata_piix.ko

WORKAROUND #2:

Alternatively, copy the ISO file into the virtual machine and mount it using the -o loop option.
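For example (all paths illustrative):

# cp /media/share/install.iso /root/install.iso
# mount -o loop /root/install.iso /mnt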

[ VVM: Bingo! 8-/ But . . .

What about the kernel for a LiveCD? In any case, a patched ata_piix is needed ]

==

I’m not sure if the INSMOD work-around will disable the faster disk driver? The command doesn’t seem to be specific to the CDROM devices…

~

Yes: it does not disable the "faster disk driver", because that driver is hv_storvsc.

Yes #2: this command is not "specific to the CDROM devices". After loading both the unpatched ata_piix and hv_storvsc, you can see the results: run blkid and look at the output of this command; you will see (and in _fact_ have) duplicates(!) of the IDE disks.

[ Cyril Brulebois ]
* Apply patch from Arnaud Patard to include Hyper-V linux kernel udebs
  on cdrom and netboot images for amd64 and i386 (Closes: #690978). This
  is needed after a kernel change on the ata_piix side (cd006086fa in
  mainline).