If you mount a disk from an instant recovery VM into a second VM, then cancel the instant recovery (Stop Publishing), Veeam proceeds to power off the second VM and then unregisters it from vCenter... (I am restoring from a NetApp snapshot, but I assume the backup source does not matter?)

Support says this is expected behavior and that I am not supposed to be using the instant VM for anything else... which kind of makes sense... but would you say this is the best Veeam can do? Would it not be safer to tell the user that Stop Publishing cannot proceed, or at least warn the user before proceeding?

Hello, and sorry to hear about the data loss you have experienced.

It's a fairly common use case to publish a disk from a backup and attach it to another VM; there are a plethora of reasons to do this: scanning the content with a 3rd party tool, getting files out in a more efficient manner, and so on. None of these use cases actually make any changes to the disk, or require those changes to persist. So it makes little sense to prevent users from proceeding with unpublishing the disk, or to annoy them with warnings every single time they do this.

Also, please note that there are actually two options available right next to each other in the menu: "Migrate to Production" and "Stop Publishing". So I would argue there's already a very clear distinction: by choosing to simply stop publishing, you're doing the opposite of persisting data to the production storage. And to be honest, I don't recall any other users confusing these two options (or what they do) before.

I do agree that your use case of "Instant Disk Recovery", where the desired behavior is to persist published disk changes, is a valid use case on its own. In fact, we've even added a dedicated wizard in v10 to simplify such recoveries. So I believe that having a dedicated workflow available in the UI for what you were trying to do through the IR workaround will help other users avoid similar issues in the future.

Hello Gostev. Thanks a lot for taking the time to reply and thanks a lot for the weekly newsletter!

Let me explain my use case and how I found this issue:
- A PowerShell script I wrote runs on a nightly basis to perform a "db refresh"
- It starts an instant restore session of a VM (VM-A), using a storage snapshot
- It then mounts a few VMDKs from VM-A into a target VM, VM-B
- Some files are copied into the target VM
- VMDKs are unmounted
- Instant restore is concluded.
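
For context, the workflow above looks roughly like the following. This is an untested sketch, not the actual script: the VM names, datastore path, and restore-point selection are hypothetical placeholders, and the exact cmdlet parameters depend on your Veeam Backup PowerShell and VMware PowerCLI versions:

```powershell
# Sketch only: assumes the Veeam Backup PowerShell module and VMware PowerCLI
# are loaded and connected; all names and paths below are placeholders.

# 1. Start an instant recovery of VM-A from its latest restore point
$restorePoint = Get-VBRRestorePoint -Name "VM-A" |
    Sort-Object CreationTime | Select-Object -Last 1
$ir = Start-VBRInstantRecovery -RestorePoint $restorePoint -VMName "VM-A_IR"

# 2. Attach a published VMDK to the target VM-B (path is hypothetical)
$vmB  = Get-VM -Name "VM-B"
$disk = New-HardDisk -VM $vmB -DiskPath "[VeeamDS] VM-A_IR/VM-A.vmdk"

# 3. ...copy the needed files out of the mounted disk...

# 4. Detach the disk from VM-B BEFORE stopping the instant recovery;
#    as described in this thread, cleanup may otherwise power off and
#    unregister VM-B because it still holds a disk on the IR storage
Remove-HardDisk -HardDisk $disk -Confirm:$false

# 5. Conclude the instant recovery session
Stop-VBRInstantRecovery -InstantRecovery $ir
```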

This all works great, except that sometimes we get alerts in the middle of the night that we have lost access to VM-B!

After a lot of digging in the logs, I finally figured out how to reproduce the issue... It seems that Veeam, as part of cleaning up after the instant restore, takes it upon itself to power off any VM that happens to be using its storage, and then unregisters it as well. Maybe this behavior should at least be documented? Or is it already?