I am attempting a P2V move of an application server from a Veeam backup. We currently have the Veeam agent on the physical server, set to do a whole-system backup. When I attempt the restore with the Veeam Linux recovery ISO on the VM side, it goes through and successfully recovers, but when I attempt to boot, the VM just freezes up. Are there tweaks I have to make in rescue mode that would make this VM work? This is an RHEL 6.9 physical machine.

Another question I have: can this be restored onto a bare VM with nothing on it, or does it already have to have an RHEL 6.9 image installed?

I have had no luck so far and have tried dozens of times, covering every scenario I could think of. I even tried converting the drives to VMDKs and attaching them to the VM. That approach seemed the most time-consuming, so I have been trying to stick with the Veeam recovery ISO.

I've merged your post into the existing thread about restoring to different hardware. Although the thread was initially dedicated to discussing restores to another hypervisor, the issue you are experiencing is likely related to the kernel initramfs image not having the necessary drivers, which can occur during a P2V migration/restore as well.

Here is a KB article with instructions. Please review the thread, try the instructions I've provided, and feel free to ask additional questions should you have any.
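For reference, the general shape of that kind of fix is to boot the recovery environment, chroot into the restored root, and rebuild the initramfs with the hypervisor's storage drivers included. This is only a hedged sketch, not the KB's exact steps: the root LV, boot partition, and driver names below are placeholders for what your system actually reports.

```shell
# Sketch only: chroot into the restored system and rebuild the initramfs
# so it contains the VM's storage drivers. The root LV and boot partition
# names are placeholders; substitute what "lvscan" and "fdisk -l" report.
mount /dev/mapper/vg_root-lv_root /mnt       # restored root filesystem
mount /dev/sda1 /mnt/boot                    # restored /boot
for fs in proc sys dev; do mount --bind /$fs /mnt/$fs; done
# Typical VMware-guest storage modules on EL6: mptbase/mptscsih/mptspi
# for the LSI Logic controller, vmw_pvscsi for paravirtual SCSI.
chroot /mnt dracut -f --add-drivers "mptbase mptscsih mptspi vmw_pvscsi"
```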

My direct issue ended up being the lvm.conf filters. I only recognized it because I had come across a similar issue when I was building out the crash kernel. Once I changed the filters and ran dracut -f, I was able to successfully boot the virtual server.
Other cleanup included getting rid of the bonds and installing VMware Tools. Other than that, it was easy once I got past the first issue.
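For anyone hitting the same wall, the fix looked roughly like this. The "before" filter shown is a hypothetical example of what a physical box with a hardware RAID controller might carry, not a quote from my actual config:

```shell
# Physical hosts often carry a restrictive lvm.conf filter, e.g. one that
# only accepts the hardware RAID devices (hypothetical example):
#     filter = [ "a|/dev/cciss/.*|", "r|.*|" ]
# On the restored VM the disks appear as /dev/sd*, so LVM never sees the
# root VG and boot hangs. Open the filter back up:
#     filter = [ "a|.*|" ]
vi /etc/lvm/lvm.conf    # edit the filter line as above
vgscan                  # confirm the volume groups are visible again
dracut -f               # rebuild the initramfs with the new lvm.conf
```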

Just to add more: I tried creating a diskless VM, then used Veeam to restore the disks as .vmdk files.
The only oddity was that during the restore it indicated:
Disk 0 (2.7 TB): 141.0 MB restored at 38MB/s
Disk 1 (1.1 TB): ...This part hadn't finished yet when I snipped a picture...

The VM seems to be booting OK. I'm still working on cleaning a few things up, so I have booted it into single-user mode off the network until that is done; since it is just a test, I don't want it to conflict with production.
I will have to try the .iso recovery image another time to see if the disk sizing looks more accurate.

I wanted to add a little bit more: I forgot to mention the server is CentOS EL6. Creating this VM will allow me to use it as a test system for updating to the latest kernel and patches, as well as for application modifications, before pushing to production.

I booted off a CentOS .iso and ran the troubleshooter. I was able to delete the sda2 and sda3 partitions, create a 2 TB sda2 partition, and then run dd from /dev/sdb to /dev/sda2.
That at least allowed me to get rid of the second disk.
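For completeness, a sketch of that copy step, using the device names from this post. Double-check yours with fdisk -l first, since dd will overwrite whatever it is pointed at:

```shell
# Clone the second disk's contents onto the new partition on the first
# disk. bs=4M just batches the I/O; conv=fsync flushes before dd exits.
dd if=/dev/sdb of=/dev/sda2 bs=4M conv=fsync
# If the source held an ext filesystem, check it and grow it into the
# larger partition afterwards (if it was an LVM PV, use pvresize instead):
e2fsck -f /dev/sda2 && resize2fs /dev/sda2
```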

Interesting. VM or not, I have found LVM helpful. I work in a development environment, but even in production I would think it makes management easier. I do prefer hardware RAID, though I'm familiar with the arguments from those who prefer MD RAID.

It is sometimes more difficult to deal with the OS issues that crop up from running out of space in / than with application issues that occur when an application runs out of space in its own file system, leaving the OS intact and still functioning correctly. Production probably grows more slowly, so changes are easier to plan, but developers can eat up a lot of space in a very short time for one reason or another.

Plus, with the way our developers work, they test something that suddenly becomes production. They hate downtime, so adding another disk to a VG and extending a file system is faster and easier than taking the VM down to regrow the / partition.
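As a sketch of what that grow looks like in practice (the VG/LV names here are placeholders, and I'm assuming the newly added virtual disk shows up as /dev/sdc):

```shell
# Online grow: none of this requires unmounting or taking the VM down.
pvcreate /dev/sdc                              # init the new disk as a PV
vgextend vg_data /dev/sdc                      # add it to the volume group
lvextend -l +100%FREE /dev/vg_data/lv_data     # grow the LV into the space
resize2fs /dev/vg_data/lv_data                 # grow ext3/ext4 while mounted
```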

I am not familiar with all the fancy features of VMware; ours is a fairly basic environment conjured up from old repurposed servers. vCenter came along a bit after there were already several hosts, and then of course grew as more repurposed servers became hosts.