I had to share this Oracle VM 3.1.1.399 and 3.1.1.478 experience with y’all.

We have a two-server pool with HP G6 blades, EMC FC SAN storage, one 12 GB pool filesystem LUN, and two 1 TB LUNs for the VMs, with a total of 5 tagged VLANs: eth0/eth1 bond0 mode 4 (3 VLANs), eth2/eth3 bond1 mode 4 (2 VLANs). Oracle VM Manager is installed on an OL5U8 VM with the latest patch (.399), and both hosts were patched via a local repo last week.
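For anyone wanting to compare notes, a bonded, VLAN-tagged layout like the one above is usually expressed on OL5-era hosts in ifcfg files roughly like this (device names are from the post; mode 4 is 802.3ad/LACP; the VLAN ID 10 is a made-up placeholder, and your miimon value may differ):

```shell
# /etc/sysconfig/network-scripts/ifcfg-bond0  (mode 4 = 802.3ad/LACP)
DEVICE=bond0
BONDING_OPTS="mode=4 miimon=100"
BOOTPROTO=none
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth0  (eth1 looks the same)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-bond0.10  (one file per tagged VLAN;
# the VLAN ID 10 here is illustrative only)
DEVICE=bond0.10
VLAN=yes
BOOTPROTO=none
ONBOOT=yes
```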

On two occasions Oracle VM Manager has started the same VM (OL5U8 PVM with UEK) at the same time on two hosts! As you can imagine, having the same VM running twice causes some interesting issues :-)

We will be opening an SR lickety-split, but in the meantime I wanted to see if anyone has seen this one before; if so, what's the root cause and fix?

UPDATE:
FYI, we can recreate this by restarting any VM using Oracle VM Manager. The VM is restarted on the original host and also started on the second host. We have a two-server pool.
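A quick way to confirm a guest really is running on more than one server is to compare `xm list` output across the pool members. This is just a sketch; `ovs1`/`ovs2` are made-up host names, and it assumes root SSH to each Oracle VM server:

```shell
# Flag guests that appear on more than one Oracle VM server.
# Host names are hypothetical placeholders -- substitute your pool members.
HOSTS="ovs1 ovs2"

# Strip the `xm list` header line and Domain-0, keep guest names only.
list_domains() {
  awk '$1 == "Name" || $1 == "Domain-0" { next } NF { print $1 }'
}

# Print any guest name seen more than once across the pool.
find_duplicates() {
  sort | uniq -d
}

# Live check (requires root SSH to each server):
#   for h in $HOSTS; do ssh root@"$h" xm list; done | list_domains | find_duplicates
```

Any name this prints is a VM running on two hosts at once, which for a guest on shared storage means its disk is being written from two places.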

We had this issue early on (3.0.2, I think) when we hadn't set up DNS for the OVM hostnames/IPs. The manager was trying to query the destination node during live migration via its unresolvable hostname, which somehow caused the live migration to "crash", which in turn started 2 identical VMs (and resulted in a 400 GB restore of an Oracle 10g database... :( )
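That failure mode is worth a sanity check before any migration: make sure every pool member and the manager resolve from every node. A minimal sketch, assuming hypothetical names `ovs1`, `ovs2`, and `ovmmgr` (replace with your actual hostnames):

```shell
# Verify every pool member's hostname resolves before attempting live
# migration. Names below are made-up placeholders.
NODES="ovs1 ovs2 ovmmgr"

# Succeeds if the name resolves via DNS or /etc/hosts.
resolves() {
  getent hosts "$1" > /dev/null
}

for n in $NODES; do
  if resolves "$n"; then
    echo "OK:      $n -> $(getent hosts "$n" | awk '{print $1; exit}')"
  else
    echo "MISSING: $n  (add to DNS or /etc/hosts on every node)"
  fi
done
```

Run it on each server in the pool, not just one, since each node has to resolve its peers.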

I feel your pain. I've had a Sev2 open on an issue that has spent the last 18 days in "review update". The only way to move an SR like this along is to call and hound someone to death. Maybe get to speak to a duty manager. Even that doesn't help sometimes. I haven't had to reinstall the VM Manager yet, but I can see it coming.

FYI, this bug is BACK with a vengeance! VMs start twice; when you init 0 or stop VMs in Manager with HA enabled, they a) restart on their own wherever they please and/or b) start twice and corrupt their disks!

Disabling HA on all VMs looks like a short-term workaround until we get a patch.