Over the past year I’ve been using a home lab for quick, hands-on testing of OpenStack and Rackspace Private Cloud, and a number of people have requested information on the setup. Throughout the next few blog posts I will explain what I’ve got. This serves two purposes: 1) documentation of my own setup as well as 2) hopefully providing information that other people find useful – and not everything is about OpenStack.

This first post is about the tech involved and how it is set up. In subsequent posts I’ll go into further detail and then detail the installation of Rackspace Private Cloud.

The N40L is an incredibly cheap, low-power server with four SATA bays (plus a CD-ROM bay) and a 250GB SATA drive supplied. It has a single dual-core AMD Turion II processor that supports hardware virtualization (AMD-V). It has been superseded by the HP MicroServer N45L and is often found with cash-back deals, meaning these usually come in at under $215/£130.
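Before relying on KVM-based virtualization, it's worth confirming the CPU actually exposes those extensions to the OS (some BIOSes ship with them disabled). A quick check on any Linux box — nothing here is specific to my setup:

```shell
# AMD-V shows up as "svm" and Intel VT-x as "vmx" in /proc/cpuinfo.
if grep -qE 'svm|vmx' /proc/cpuinfo; then
    echo "Hardware virtualization supported"
else
    echo "No virtualization extensions found"
fi

# After a memory upgrade, confirm how much RAM the box actually sees.
free -h
```

If the flags are missing but the CPU is supposed to have them, check the BIOS for a disabled virtualization setting before assuming the hardware lacks support.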

Some caution is needed when choosing memory for these machines: the documentation says they support up to 8GB, but I've read about people successfully running 16GB. Through my own trial I grabbed the cheapest memory I could find and it worked.

When choosing NICs and other PCIe cards, be aware that you need low-profile ones. The NICs I added to mine are low-profile, but the metal backing plates aren't. A quick email to TP-Link customer services will get you some low-profile backing plates free of charge.

Network Attached Storage

I have two QNAP NAS devices. The first (nas / 192.168.1.1) is my main network storage with two drives, running DHCP for my home subnet, DNS for all connected devices, and a caching proxy (primarily to compensate for the slow 6Mbps to 7Mbps ADSL speed I get when installing packages on my servers). The second (nas2 / 192.168.1.2) acts as a TFTP server and proxy for my servers, as well as providing replication/backup for the primary NAS. The reason I run a proxy and TFTP server next to my servers, rather than on the main NAS, is the wireless link between my servers and my router: although the WiFi speeds are OK, serving packages and boot images locally is far more efficient (and there are two floors between my servers and the WiFi router). Powerline adapters? I tried them, but the RCD (Residual Current Device) in my home made them useless.
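To sketch how the DHCP/DNS/TFTP pieces fit together, here's what an equivalent dnsmasq configuration might look like. My QNAPs use their own firmware apps rather than a hand-written config, so this is an illustrative assumption only — the hostnames, MAC addresses, and paths are made up:

```
# dnsmasq.conf sketch: DHCP + DNS + TFTP for a 192.168.1.0/24 home lab
domain-needed
bogus-priv

# DHCP range for the home subnet, 12-hour leases
dhcp-range=192.168.1.100,192.168.1.200,12h

# Pin the lab servers to fixed addresses (placeholder MACs)
dhcp-host=00:11:22:33:44:01,server1,192.168.1.11
dhcp-host=00:11:22:33:44:02,server2,192.168.1.12

# Network boot: point PXE clients at the TFTP server (nas2)
dhcp-boot=pxelinux.0,nas2,192.168.1.2
enable-tftp
tftp-root=/srv/tftp
```

The same split I describe above applies here: the DHCP/DNS half lives with the router, while the TFTP half is best placed on the same switch as the servers it boots.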

Essentially, my network has two parts, separated by two floors of the house and connected using WiFi bridging, all on a single 192.168.1.0/24 subnet. Unmanaged switches connect the servers and NAS devices, so there's nothing highly exciting here, but it's presented for clarity and completeness (and may be useful if you think you'll need to WiFi-bridge two parts of your network together).


Kevin Jackson, the author of OpenStack Cloud Computing Cookbook, is part of the Rackspace Private Cloud Team and focuses on assisting enterprises to deploy and manage their Private Cloud infrastructure. Kevin also spends his time conducting research and development with OpenStack, blogging and writing technical white papers.

They’re great little devices for this purpose. I’ve had Swift running on them to test various features, and I’ve got RPC 4.2.1 on them at the moment. They’re so cheap I need to find an excuse to buy some more!

I am planning on building a small lab environment to learn on a larger scale than just DevStack. I am looking at the Dell R210 and its quad-core, VT-x-capable processor. I would like to create a full-scale cloud solution for myself and friends; we are college students (with some cash) and CIS majors, and we want to get on the bandwagon. We know that the OpenStack movement is the way to go. Any ideas on how to focus our energy? We found some cheap R210s, and we have the facility's blessing, bandwidth, and a drive to learn.

Those will be fine. Key things to note: ensure at least two NICs in each server — one for host/API access, the other for networking (Neutron). You’ll want two servers for an HA controller setup. You’ll then want a handful with more RAM to run the compute/hypervisor nodes, with enough disk to satisfy the number of running instances. That will get you started. At a later date you can add servers with more disks to run Block Storage (Cinder), giving you the flexibility of attaching extra storage to your instances rather than relying on local storage. So I’d concentrate on compute, and size it according to your lab’s needs.
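To illustrate the two-NIC split, here's a hedged Debian/Ubuntu `/etc/network/interfaces` sketch. The interface names and addresses are assumptions, and exactly how the second NIC is consumed varies by deployment tool — Neutron with Open vSwitch typically wants it up but unaddressed:

```
# eth0: host/API (management) network -- static address on the lab subnet
auto eth0
iface eth0 inet static
    address 192.168.1.11
    netmask 255.255.255.0
    gateway 192.168.1.1

# eth1: dedicated to Neutron tenant traffic -- brought up with no IP
# so Neutron/Open vSwitch can attach it to a provider bridge
auto eth1
iface eth1 inet manual
    up ip link set dev eth1 up
    down ip link set dev eth1 down
```

Keeping management and tenant traffic on separate NICs means a misbehaving instance network can't lock you out of the host or the API endpoints.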