I'm working on some software that I'd like to test on various distributions of Linux, to be able to say that I know it works on them. The software interfaces with a hardware device over USB and Bluetooth, and so I'd prefer to use it on real hardware rather than in a virtual machine.

One approach is certainly to download the distros I am interested in and burn them onto CDs/DVDs, or copy the files onto a USB key, and I've used both approaches in the past to good effect. The USB sticks I have handy are only 512 MB, and, while I could shuffle OS installers onto them on demand, I worry I'll start copying files, get distracted, and come back later having made little progress toward getting the OS installed. I'd really like to start it and forget it. (Admittedly, I expect to have to enter a user account, hostname, timezone, etc.)

PXE booting over the LAN makes sense to me.

Most instructions I've seen on doing this have you set up your own DHCP server, either in place of whatever device is currently handing out DHCP addresses or on its own miniature LAN. I would like to leave my existing DHCP server operating*, and I'd also like the computer being installed to be able to get updates from the internet**. I have followed some instructions on using dnsmasq to do this and ran into difficulties. (I don't have the exact steps in front of me right now, and this may be the wrong path altogether. If it is a good way to go, I'd be happy to provide further details.) In looking further, I see there are many possibilities, from Red Hat's kickstart to gPXE.
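For reference, the proxy-DHCP mode that the dnsmasq instructions describe looks roughly like the sketch below. In this mode dnsmasq leaves my router's DHCP server alone and only supplies the PXE boot information. (The subnet, filename, and paths here are placeholders from memory, not my actual configuration.)

```
# /etc/dnsmasq.conf -- proxy-DHCP sketch; the existing router keeps assigning addresses
port=0                         # disable dnsmasq's DNS service
dhcp-range=192.168.1.0,proxy   # respond only with PXE info on this subnet
dhcp-boot=pxelinux.0
enable-tftp
tftp-root=/srv/tftp
pxe-service=x86PC,"Network boot",pxelinux
```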

I should point out that I'm not truly imaging the machines -- I just want to put the different distros on it as they come off the disc, so to speak, and not in a manner tweaked for, say, an enterprise desktop. (As a Mac administrator, I've often set up golden masters, and made a disk image of the hard drive, and then copied it out over the network, or used base OS images and added packages to it. What I'm looking for here is more akin to making the installer available over the LAN, rather than a complete drive image).

I would also like to add that I'm not opposed to having a bootup medium (USB, DVD) that tells the computer, "Here's the server. Ask it which OSes it has installers for." Likewise, it is a plus to be able to get two or three computers going at once (yeah, small scale, but n > 1), which is another reason doing this over the network appeals to me.

Does anyone have recommendations on how I can quickly and easily deploy different Linux distributions to test machines, such that once things are set up, I can start machines installing and come back to them later to test software on them? (It doesn't strictly have to be over the network, but after the one-time setup is figured out, this sure seems like the easiest way to me).

* I'm actually testing on my home network, and my cable modem / wifi base station hands out DHCP addresses. I recognize that I could turn this off and have some other computer do it, but I haven't been running a box at home 24/7 and it seems silly to do so just for DHCP.

** This makes using a smaller, separate network more tricky, unless I bridge the two networks, and, not having two NICs for one box, that sounds like more effort than I hope to expend. True, I could install the OS on a separate network and then move the machine onto the network with live internet access, but that goes contrary to starting it and forgetting it.

4 Answers

Why not use a virtual machine instead? It will be more convenient than DHCP servers and re-imaging servers. Even if you take the PXE route, it will require a lot more effort compared to virtualization. I would suggest you evaluate your requirements and consider the VM approach if feasible.

My concern with that is that the OS in the VM may act differently when connecting to unusual USB or Bluetooth devices than the OS does when running raw on the machine. I have the impression that sometimes, to use a specific device in a VM client, you have to install the driver for it on the VM host! If that is true, it indicates to me that the host OS is communicating with the device, and not the client OS, which doesn't help you know if the client OS has any difficulties in communicating with the hardware device.
– Clinton Blackmore, Mar 15 '11 at 17:32

He already mentioned in his question why virtualization was not appropriate.
– Robin Smidsrød, Mar 30 '13 at 9:23

PXE, as you mention, is what I've had very good luck with for doing what you're talking about. The wrench you throw into the works is that you don't want to change your DHCP server. That means the stock PXE on systems isn't going to work, because it expects the DHCP server to hand out "next server" and "filename" attributes (which the PXE client uses to bootstrap itself).

So, what you'd need to do is grab gPXE or iPXE (gPXE's successor) and customize it to include the "next server" and "filename" attributes, then burn it to USB or CD and boot from that. That lets you supply those attributes without having the DHCP server hand them out. This is documented on the iPXE web page on embedded scripts.
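A minimal embedded script, assuming your boot server is at 192.168.1.50 and serves pxelinux over TFTP (both the IP and the filename are placeholders for your setup), might look like this:

```
#!ipxe
dhcp
chain tftp://192.168.1.50/pxelinux.0
```

You then build a bootable image with that script baked in, e.g. `make bin/ipxe.usb EMBED=boot.ipxe` from the iPXE source tree, and write the result to a USB stick.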

Then you need to set up a TFTP or HTTP server (gPXE/iPXE will support HTTP; stock PXE tends to only support TFTP). That server hands out the files needed for the boot and installation. I like to get all fancy and chain "pxelinux.0" with a menu file. Linux Journal has a good article on PXE menus; page 2 covers the menus. I have menus set up for each distro class (Fedora, CentOS, Ubuntu) plus some utilities (memtest), then those menus have choices for the versions (Hardy, Lucid, 32-bit, 64-bit), then those menus have choices for "install" and "rescue" or maybe "server" versus "desktop".
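As a sketch, a top-level pxelinux menu along those lines might look like the following (`menu.c32` ships with syslinux; the kernel and initrd paths are examples, not canonical locations):

```
DEFAULT menu.c32
PROMPT 0
TIMEOUT 300
MENU TITLE Network Boot

LABEL ubuntu-lucid-64
  MENU LABEL Ubuntu 10.04 (64-bit) installer
  KERNEL ubuntu/lucid/amd64/linux
  APPEND initrd=ubuntu/lucid/amd64/initrd.gz

LABEL memtest
  MENU LABEL Memtest86+
  KERNEL memtest
```

Submenus work the same way: a LABEL whose KERNEL is `menu.c32` with the sub-menu's config file on the APPEND line.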

If you really want to automate the install, so you can pick a menu item and then just let it take over from there, you need to use "preseeding" (Debian-based) or kickstart (Fedora-based), where you tell the installer exactly what to do, including formatting discs (DANGER :-), packages, configuration, etc...
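For example, a bare-bones kickstart file for an unattended Fedora/CentOS install might look like the sketch below (the mirror URL, password, and partitioning are placeholders, and `clearpart --all` really will wipe the disk). You point the installer at it with a `ks=http://...` argument on the APPEND line of the menu entry.

```
install
url --url=http://mirror.example.com/centos/6/os/x86_64/
lang en_US.UTF-8
keyboard us
timezone America/Denver
rootpw changeme
clearpart --all --initlabel
autopart
reboot
%packages
@core
%end
```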

PXE is actually not that hard, and it makes many of these sorts of things just so easy. I've run a PXE server for close to 10 years now, and I rarely burn media. Plus, pulling from a big network server is WAY faster than installing from a CD or DVD; it's not unusual for CentOS installs to take 3 minutes.

Like others suggest, there are tremendous advantages to using PXE + DHCP.

But there are many manual steps to get to a completely "hands off" PXE + DHCP deployment; you need to do some scripting and duct-taping to get where you want.

Instead of trying to put together your own solution, you can use an existing PXE/DHCP automation tool. If you roll your own, you won't be able to take advantage of the new features these open source tools keep adding.

Cobbler, for example, takes care of several manual tasks: DHCP record addition, TFTP configuration, kickstart templates, etc.
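The workflow is roughly the following (names and paths are examples, and exact flags vary by Cobbler version):

```
# import an install tree from a mounted DVD, creating a distro + default profile
cobbler import --name=centos6 --arch=x86_64 --path=/mnt/centos-dvd

# attach a kickstart so the install runs unattended
cobbler profile edit --name=centos6-x86_64 \
        --kickstart=/var/lib/cobbler/kickstarts/centos6.ks

# regenerate the TFTP/PXE configuration
cobbler sync
```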

Another great tool for the same purpose is Foreman.
It takes a different approach to the same problems and has deep integration with Puppet (using Puppet with Foreman is not mandatory, but it helps). You need the Foreman Smart Proxy component to take care of DHCP/TFTP for you. There is a screencast of Foreman provisioning a VM (the same applies to a physical host), and an installation guide.

With everything in place, installing servers becomes a really simple task with Cobbler or Foreman. They have APIs, so you can script the few remaining manual tasks.

It may take time to get all of this set up, but once you do, you are free to concentrate on your main tasks.