Context Navigation

Use of virtualisation tools with Mondo Rescue can be quite helpful for development, testing and debugging. It can also help to give an administrator slightly more peace of mind in situations where real restore runs are not possible.

The following describes the use of virtualisation tools available today in conjunction with Mondo Rescue. Note that the emphasis here is on performing restores in a virtual machine (VM); you are, of course, also free to run mondoarchive inside the guest.

QEMU

QEMU is a CPU emulator for a number of different architectures, both for the host and the guest (VM). Performance-wise, QEMU probably lies somewhere between Bochs and VMware.

Getting QEMU

Being GPL'd software (as opposed to VMware), QEMU should come with your distribution; on Debian, for instance, there is a qemu package that works just fine (for me, in sid).

However, if QEMU is not included in your distribution or you want a newer version, you can get it from here: ​http://www.qemu.org/. I have not compiled it myself, so if you have, feel free to add the missing details!

Just ensure that you use gcc 3.x to compile qemu/kqemu, as gcc 4.x is not supported yet. Things may get tricky when you want kqemu (non-GPL) to load with your kernel, as the kernel needs to have been built with the same compiler version; for up-to-date distributions that probably means recompiling the kernel with gcc 3.x.

Setting Things Up

QEMU comes with quite extensive information which includes a detailed manpage.

Getting going really only requires the following two things though:

Create a disk image using qemu-img:

qemu-img create hda.img 8G

This creates a sparse 8 GB raw disk image. It is neither a partition nor a filesystem; it is a whole disk. Sparse means that it initially occupies very little space in the host filesystem and grows as required, up to 8 GB. As a side note, sparse also means that you can create a disk image file the size of your physical hard disk and have Mondo Rescue restore your physical disk onto the disk image QEMU uses, as long as somewhat more than 50% of your physical disk is free. This can be useful to avoid mondorestore erroring out because there is not enough space inside QEMU.

The second prerequisite is to have a Mondo Rescue image or CD ready. You can just use the mondorescue.iso that mondoarchive has created (by default under /root/images/mindi).

Running QEMU

Run QEMU, e.g. like this using the disk image we created beforehand and mondorescue.iso from our last mondoarchive run:
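The original command is missing here; based on the switches discussed in the remarks that follow, the invocation was presumably something like this (paths are assumptions, adjust to your disk image and ISO):

```shell
# Boot the Mondo Rescue ISO in QEMU with 256 MB RAM and our disk image;
# '-boot d' boots off the emulated CD-ROM.
qemu -m 256 -hda hda.img -cdrom /root/images/mindi/mondorescue.iso -boot d
```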

That's it. If all goes well, you get a new window with the familiar Mondo Rescue boot screen.

Some remarks:

'-m 256' gives 256 MB of RAM to the QEMU instance.

'-hda hda.img' could be written as just 'hda.img'. However, I include the switch for clarity's sake. As a side note, this means that only IDE disks are supported by QEMU, no SCSI.

'-boot d' makes QEMU boot off the CD-ROM. Once a restore is finished and you want to check the result, i.e. verify that the restored system is bootable and working, use '-boot c'.

If the task is to boot off a real optical disk, use '-cdrom /dev/hdc' or whatever the correct device for the optical drive is.

Advanced Topics

Networking

Hardware

QEMU emulates a PCI NE2000 NIC, which means you need support for it on your Mondo Rescue restore media. The only way I have gotten this to work is by appending 8390 and ne2k-pci to the NET_MODS variable in mindi (< 2.0.8), which may then look like this:
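The original listing is missing here. The exact stock contents of mindi's NET_MODS variable differ between versions, so the list below is illustrative only; the relevant point is the two appended modules at the end:

```shell
# Illustrative NET_MODS line from mindi; the leading modules are assumed,
# the appended 8390 and ne2k-pci are what makes QEMU's NE2000 NIC work.
NET_MODS="sunrpc nfs nfs_acl lockd 8390 ne2k-pci"
```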

Configuration

QEMU supports networking in various ways. The simplest is 'user networking', which requires no setup (and no command-line switches).

QEMU has a DHCP server built in at address 10.0.2.2 which always issues the IP address 10.0.2.15. Theoretically, adding:

ipconf=dhcp

should make it so that networking is started using busybox's udhcpc. Unfortunately, I have not had any success with this yet. However, adding the following kernel parameter does the trick for me and yields a working network connection:
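The parameter itself is missing from this copy of the page. Given the fixed addresses QEMU's user networking uses (guest 10.0.2.15, gateway 10.0.2.2), a static ip= kernel parameter in the standard nfsroot syntax (ip=client:server:gateway:netmask:hostname:device:autoconf) would presumably look like this; treat it as a reconstruction, not the author's exact line:

```
ip=10.0.2.15::10.0.2.2:255.255.255.0::eth0:none
```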

Remote NFS Server Configuration

Even if all of the above works, access to the NFS share needed for the restore may still fail, because the remote NFS server rejects the mount request as coming from an illegal port. You may find something similar to the following in /var/log/syslog:
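The log excerpt is missing here. An rpc.mountd refusal of this kind typically looks like the following; hostname, PID and port number are made up for illustration, and the exact wording depends on your nfs-utils version:

```
rpc.mountd: refused mount request from 10.1.1.5 for /srv/backups (/srv/backups): illegal port 33012
```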

This is because QEMU does port translation to avoid clashes with host networking activities. The fix is to add the 'insecure' option in the NFS server's exports file, e.g.:

[...]
/srv/backups aurich3(rw,root_squash,sync,insecure)
[...]

As a side note, NFS requests (and all other network traffic from QEMU) will appear to come from the host's IP address because of QEMU's internal network address translation (NAT).

Issues

mondorestore hangs when extracting configuration files from all.tar.gz on the loopback-mounted ISO image located on an NFS server. A workaround is to boot in expert mode and manually mount the first ISO image, e.g.:
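The original command sequence is missing. A sketch of the workaround, under the assumption that the share and image names match the NFS examples used elsewhere on this page (server address, mount points and the exact path of the ISO inside the share are all placeholders):

```shell
# Mount the NFS share, loop-mount the first ISO, and copy its contents
# locally to force the data through before the real restore starts.
mkdir -p /tmp/isodir /tmp/isocopy
mount -t nfs 10.0.2.2:/srv/backups /tmp/isodir
mount -o loop,ro /tmp/isodir/mondorescue-1.iso /mnt/cdrom
cp -a /mnt/cdrom/. /tmp/isocopy/    # the surprisingly slow step
```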

This will take a surprisingly long time. But after it has finished (the copy can be deleted) and /mnt/cdrom has been unmounted, 'mondorestore --nuke' should start OK.

It looks like this may be somewhat fixed in 0.8.1 at least on Debian systems. For 0.8.1, the restore starts but then stalls after a number of files have been unpacked and then continues again after a few minutes and so on.

Doing a restore from an ISO image works fine, though (minus the disk not found issue discussed in more detail under VMware Issues below).

VMware Server

The following was done on a Debian Sid system. It should work with slight modifications on any Linux system.

Getting VMware Server

Note: At the time of this writing (24 May 06), this is still in beta and has an expiry date not too much in the future. This is supposed to change once the final version is out which will be unlimited.

Note: There are other packages like VMware-mui-e.x.p-23869.tar.gz which are not required for a server/console only scenario.

Setting Up VMware Server

Call me a paranoid bastard, but I don't like running reasonably complex Perl scripts that scatter files all over my beloved Debian system. (The uninstall routine is said to work well, but I haven't used it.) So, before the actual installation, we'll set up a chroot environment. You can skip this step if you are more trusting than I am and proceed directly to installation.

Setting Up The Chroot

Note: debootstrap is Debian-specific. I am sure, though, that other distributions have similar tools for bootstrapping a system.

The chroot will potentially be quite large in size depending on the number and sizes of the virtual machines. This needs to be taken into account when choosing the location of the chroot directory.

Setting things up is then quite straightforward:

use debootstrap to create the chroot, e.g. (using a local apt-proxy cache):
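The invocation itself is missing from this copy. Based on the chroot path used later on this page and the packages mentioned in the notes below, it presumably looked roughly like this (the apt-proxy URL is an assumption; a plain Debian mirror works just as well):

```shell
# Bootstrap a sid chroot into ./sid-vmware-chroot, pulling packages
# through a local apt-proxy cache on port 9999.
debootstrap --include=xbase-clients,linux-headers-$(uname -r) \
    sid ./sid-vmware-chroot http://localhost:9999/debian
```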

Note: xbase-clients is definitely more than is strictly required. However, it comes with some potentially helpful tools.
Note: linux-headers-$(uname -r) is the headers package for the running kernel. See below for implications.

allow anyone access to X (needs to be repeated after every reboot):

xhost +

use the following script to set the environment and to launch the console:
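The script itself is missing from this copy. A sketch of what it presumably did, reconstructed from the chroot invocation shown later on this page (the chroot path, user name and bind mounts are assumptions):

```shell
#!/bin/sh
# Launch the VMware console from inside the chroot.
CHROOT=/srv/sid-vmware-chroot

# Make the host's /proc and the X11 socket available inside the chroot.
mount -t proc proc "$CHROOT/proc"
mount --bind /tmp/.X11-unix "$CHROOT/tmp/.X11-unix"

# Start the console as an unprivileged user with the host's display.
chroot "$CHROOT" su andree -c "DISPLAY=:0 vmware"
```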

Note: The largest remaining issue with the chroot is that the virtual machines are not properly stopped when the system goes down. This could be amended by means of an init.d script.

Installing VMware Server

unpack the server package:

tar xvzf VMware-server-e.x.p-23869.tar.gz

change into the server directory and start the installation:

cd vmware-server-distrib
./vmware-install.pl

Note: Leave everything at its default value, but choose only the first networking option, i.e. no NAT and no host-only networking.
Note: Ignore error regarding vmware-cmd.
Note: This should automatically find the kernel headers. If the kernel changes, the installation process will have to be run again with the new kernel's headers.

Running VMware Server

After installation, VMware should be running. If you have installed into a chroot the following should start VMware Console:

chroot ./sid-vmware-chroot/ su andree -c vmware

If you have chosen a normal install, there may be a new icon in your menu. Alternatively, the following command should get you there:

vmware

Lean back and enjoy, or rather, create your first virtual machine. Much of what was said above about disk images and booting off ISOs for QEMU applies here as well.

Issues

Restore works fine; after reboot GRUB starts up OK and the system begins booting, but then it can't find the hard disk. This happens for both Linux and Windows.

The underlying reason appears to be that the IDE controller inside VMware differs from the one in the real machine, and the restored systems are not prepared to deal with the new controller. Funnily enough, kernel 2.4.27 does indeed work, but 2.6.16 does not (both stock Debian). That Windows can't cope is not too surprising; Linux not working is more of a worry.

With regard to Linux, or more specifically Debian Sid, the problem is caused by the chipset driver not being included in the initrd image that is built when the kernel is installed. This can be fixed as follows:

After the restore is finished (but before rebooting), remount the newly restored system to, say, /mnt/target. Note that this may entail mounting multiple partitions on top of each other. Then chroot into /mnt/target, mount /proc and /sys, and run dpkg-reconfigure for all kernel images. An example of a full session may look like this:
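The session transcript is missing from this copy. A sketch of what it presumably contained, following the steps just described; the device name and kernel package version are assumptions (2.6.16 is the kernel mentioned under Issues above):

```shell
# Remount the restored root, enter it, and rebuild the initrd so the
# VMware IDE chipset driver is included.
mount /dev/hda1 /mnt/target
chroot /mnt/target
mount -t proc proc /proc
mount -t sysfs sys /sys
dpkg-reconfigure linux-image-2.6.16-1-686   # repeat for every installed kernel image
umount /sys /proc
exit
```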