Just take a backup of the system using something like dar, create the VM as normal, boot a LiveCD in the VM, and restore to the virtual disk. Then adjust /boot/loader.conf, /etc/rc.conf, and /etc/fstab to work with the new device names. Reboot the VM and you're done.
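Roughly, the backup-and-restore step could look like the sketch below. The archive path, slice layout, and device name (da0s1a) are assumptions for illustration; your disk layout will differ:

```sh
# On the physical box: archive the whole system with dar,
# skipping pseudo-filesystems
dar -c /backup/fullsys -R / -P dev -P proc -P tmp

# In the VM, booted from a LiveCD: partition/newfs the virtual
# disk, mount it, then extract the archive onto it
mount /dev/da0s1a /mnt
dar -x /backup/fullsys -R /mnt

# Now edit /mnt/boot/loader.conf, /mnt/etc/rc.conf and
# /mnt/etc/fstab for the new device names before rebooting
```

You'll need some way to get the dar archive into the VM (NFS, a second virtual disk, etc.), but the idea is the same either way.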

You may want to install a GENERIC kernel first, but it's not needed if you edit /boot/loader.conf before booting to enable the needed drivers.
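For example, if your custom kernel lacks the VMware virtual hardware drivers, a couple of lines in /boot/loader.conf can load them as modules at boot. The module names below assume VMware's usual emulated LSI SCSI controller and Intel e1000 NIC; check what hardware your VM actually presents:

```sh
# /boot/loader.conf on the restored system
mpt_load="YES"      # LSI SCSI controller (virtual disk, shows up as da0)
if_em_load="YES"    # Intel e1000 network adapter (em0)
```

Then point /etc/fstab at the da0 slices and rename the NIC entries in /etc/rc.conf (e.g. ifconfig_em0 instead of whatever the physical NIC was called).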

Good to know, thanks. We only use VMWare Server and the odd VMWare Player.

You're welcome. I just completed a 5-day VMware boot camp, and I expect to be certified next week if all goes well (wish me luck).

Since ESXi was released for free (it was $599 before that), if you can get some hardware that will run it, you will see a huge difference in performance and in its ability to handle way more VMs per server. ESXi requires no host OS, and its footprint is 32MB. You can even get Dell machines with it loaded on two EPROMs (one as a backup). I was absolutely amazed at the number of VMs you can run on one box and still get great performance. That was with Windoze in the lab, but with FreeBSD I can easily imagine 50 VMs on a single box.