Staff Member

Somewhat; we're waiting for a stable bug-fix release (a 3.x.1 version) to provide a stable base for our users. It will either be built upon an upstream bug-fix release or, sooner or later, on a pick of currently known "backport candidates" of fixes from our side.

I'm under the impression QEMU 3.x now has a better implementation for performing live storage migration. We've been looking forward to seeing a more reliable KVM solution that can do live migration with local storage.

Staff Member

Hmm, currently we queue all disks needing local-storage migration and process them the same way, one after the other, so theoretically more disks should not really change anything, as the same code paths get hit.
As I did not remember when exactly I last tested a VM live migration with multiple local disks, I re-checked it now: a Debian VM with three disks, the root disk and an additional one on LVM-Thin and another disk on Ceph (RBD), just to make it a bit more complex, all containing a filesystem (ext4 and XFS) and some data (all > 512 MB).
The migration worked here and I had no noticeable interruption in the VM. But, as you said, it may not have worked as well with past versions, and maybe it is a bit storage dependent, though even then one disk or multiple should not really make a difference.

Anyway, if you have a specific setup where you can reproduce this most or even every time with the most current Proxmox VE (PVE 5.3 at the time of writing), it would be great to have the VM config and the backing storage types of both source and target, thanks!
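For reference, a live migration with local disks like the one described above can be triggered from the PVE CLI roughly like this (the node name, VM ID and storage ID are placeholders for your own setup):

```shell
# Live-migrate VM 100 to node "pve2", copying its local disks along.
# --targetstorage is optional; without it the same storage ID as on
# the source is used on the target node.
qm migrate 100 pve2 --online --with-local-disks --targetstorage local-lvm
```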

We've been testing live migration with local storage on different kinds of KVM platforms every now and then over the past 5-6 years, but to be honest, we have barely come across any KVM platform that does this well (not even qemu-kvm-ev), as opposed to VMware or Hyper-V. At one point we even had to put Linux VMs on Hyper-V (I know there are people who don't like to hear this) in order to have reliable live migration between local storage.

After having performed this kind of migration with VMware and Hyper-V numerous times with no major problems, we're looking forward to seeing KVM/QEMU in general catch up, especially if the Proxmox team can help improve this situation.

Live storage migration used to be available in earlier versions of CentOS 6.x, but was then deprecated due to some reliability issues and has not been re-added since 7.x, I think, unless one installs qemu-kvm-ev, which works but is still not great. (We have barely tested other distros, so I can't comment much on that.)
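On plain libvirt (e.g. with qemu-kvm-ev), the feature referred to here is storage-copy migration, invoked roughly like this (domain and host names are placeholders):

```shell
# Live-migrate domain "vm1" to host "dest", copying all non-shared
# disk images over the migration connection. --copy-storage-inc would
# instead copy only incremental changes on top of pre-existing images.
virsh migrate --live --copy-storage-all --verbose vm1 qemu+ssh://dest/system
```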

The general feedback has always been: 'yeah ... it can work ... but not reliable/robust enough ... not recommended'.

People may even say 'why bother, just use shared/distributed storage or do an offline migration'. But even granted that shared storage is in place, there are still times when one may need to phase out an old shared-storage platform, or e.g. geographically relocate the VMs. Surely there are many more use cases for this.

As for the problems with live storage migration we've come across: surely everything has problems, but we often get very inconsistent bugs/results from different KVM platforms.

In relation to Proxmox, we've also experienced quite a lot of bizarre problems; I believe some were probably due to KVM/QEMU and some could be due to Proxmox itself. For example, while testing 5.3.6 or 5.3.7 a few weeks ago, I realised that a small running VM with only one blank qcow2 disk, in a test environment with no other workloads, would end up as multiple qcow2/raw disks on the destination (the number would vary: sometimes 5, 6 or 7) and then become corrupted if the '--targetstorage' flag was not specified, even though the destination had the same storage path (e.g. dir /vmstorage).

Staff Member

The speed is already "unlimited" (as fast as memory copy/move works); QEMU did not artificially limit anything.
The new patch, which advertises all buses as x32 * 16 GT/s, is mostly just cosmetic, but yes, some (stupid) drivers actually do non-ideal things if they check the link speed and it seems too low for them...
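For context, recent QEMU versions expose the advertised link parameters as properties on `pcie-root-port` devices; a sketch of setting them by hand could look like the fragment below (the `x-speed`/`x-width` properties are an assumption about your QEMU build and machine type, so check `qemu-system-x86_64 -device pcie-root-port,help` first):

```shell
# Advertise a x32-wide, 16 GT/s link on a PCIe root port.
# x-speed (in GT/s) and x-width are pcie-root-port properties in
# recent QEMU; availability depends on the QEMU version in use.
qemu-system-x86_64 \
  -machine q35 \
  -device pcie-root-port,id=rp1,x-speed=16,x-width=32 \
  ...
```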

Do you have an actual, specific issue like this, or is this a more general request?
