Month: March 2019

In the last installment of this series, I discussed setting up the Proxmox VE hosts in VirtualBox. At this stage in the exercise there should be 3 VirtualBox VMs (VBVMs) running in headless mode.

Before you can set up the cluster, storage replication, and high availability, you need to do a bit of housekeeping on your hosts. In this post, I will go over those steps: making sure that the hosts are up to date OS-wise, that the network interfaces are set up and communicating with each other, and that your storage is properly configured. Most of these steps can be accomplished via the Web UI, but using SSH will be faster and more accurate, especially when you use an SSH client like SuperPuTTY or MobaXterm that lets you type in multiple terminals at the same time.

Log in as root@ip-address for each PVE node. In the previous post, the IPs I chose were 192.168.1.101, 192.168.1.102, and 192.168.1.103.

I don’t want to bog this post down with a bunch of Stupid SSH Tricks, so just spend a few minutes getting acquainted with MobaXterm and thank me later. The examples below will work in a single SSH session, but you will have to paste them into 3 different windows instead of feeling like a superhacker.
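The update commands themselves are nothing exotic. Here is a sketch of what to run on all three nodes at once; the "stretch" no-subscription repo line is an assumption based on PVE 5.x being current at the time, so adjust the codename for your version:

```shell
# Disable the enterprise repo -- without a subscription it makes
# every "apt update" fail with a 401 error:
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list

# Add the free no-subscription repo instead:
echo "deb http://download.proxmox.com/debian/pve stretch pve-no-subscription" \
  > /etc/apt/sources.list.d/pve-no-subscription.list

# Update and upgrade the node:
apt update
apt dist-upgrade -y
```

With a multi-paste client, those four commands land on all three nodes in one shot.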

Assuming you see those two disks, and they are in fact ‘sdb’ and ‘sdc’, then you can create your zpool, which you can think of as a kind of software RAID array. There’s way more to it than that, but that’s another post for another day when I know more about ZFS. For this exercise, I wanted to make a simulated RAID1 array, for “redundancy.” Set up the drives in a pool like so:
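A minimal sketch of the pool creation; the pool name "zstore" is my own choice here, not something Proxmox requires, so name yours whatever you like (just use the same name on every node):

```shell
# Double-check that the two extra 64GB disks really are sdb and sdc:
lsblk

# Create a mirrored (RAID1-style) pool from the two disks:
zpool create -f zstore mirror /dev/sdb /dev/sdc

# Confirm the pool is online and both disks are in the mirror:
zpool status zstore
```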

In a later post we will use the zpool on each host for Storage Replication. The PVEVM files for each of your guest machines will be copied to the other hosts at regular intervals, so when you migrate a guest from one node to another it won’t take long. This feature pairs very well with High Availability, where your cluster can determine that a node is down and spin its PVEVMs back up on another node.

Now that your disks are configured, it’s time to move on to Part 3: Building A Cluster Network.

This one network interface is sort of the lifeline for a Proxmox host. It would be a shame if that link got bombed by incessant network traffic. As I discovered (the hard way) one possible source of incessant network traffic is the cluster communication heartbeat. Obviously, that traffic needs to go on its own network segment. Normally, that would be a VLAN or something, but I have some little dumb switches and the nodes have some old quad port NICs, so I wanted to just assign an IP to one port, and plug that port into a switch that is physically isolated from “my” network.
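If you go the dedicated-port route, the change on each node is just a static IP on the spare interface. A sketch, assuming the extra port shows up as enp0s8 and inventing a 192.168.2.x segment for the cluster traffic:

```shell
# Append to /etc/network/interfaces on prox1
# (use .102 and .103 on the other nodes):
auto enp0s8
iface enp0s8 inet static
    address 192.168.2.101
    netmask 255.255.255.0

# Then bring the interface up and confirm the nodes can see each other:
ifup enp0s8
ping -c 3 192.168.2.102
```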

Once a cluster is working, migrating machines happens over the cluster network link. This is OK, but if your cluster network happens to suck (like when some jackass plugs it into a 10-year-old switch) it can cause problems with determining if all the cluster nodes are online. So, now I want to set up an additional interface for VM migration. Migration seems like the kind of thing that happens only occasionally, but when you enable Storage Replication, the nodes are copying data every 15 minutes. Constant cluster chatter, plus constant file synchronization, has the potential to saturate a single network link. This gets even worse when you add High Availability, where there is a constant vote on whether a PVEVM is up and running, followed by a scramble to get it going on another node.
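Proxmox lets you point migration traffic at a specific subnet via /etc/pve/datacenter.cfg. A one-line sketch, assuming the migration segment is 192.168.3.0/24 (pick whatever subnet you assigned to that interface):

```
# /etc/pve/datacenter.cfg -- send migration traffic over the
# dedicated segment, tunneled through SSH:
migration: secure,network=192.168.3.0/24
```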

So, at minimum we will need 3 network interfaces for the test cluster on VirtualBox. I didn’t want to spend a lot of time tinkering with firewall and NAS appliances, so I am leaving the “Prox management on its own network segment” and the “Dedicated network storage segment” discussions out of this exercise. I can’t decide if the management interface for my physical Proxmox cluster should sit on my internal network, or on its own segment. For this exercise, the management interface is going to sit on the internal network. My Synology NAS has 4 network ports, so I am definitely going to dedicate a network segment for the cluster to talk to the NAS, but that won’t be a part of this exercise.

[Virtual] Hardware Mode(tm)

Once you are booted up and VirtualBox is running, you can start building your VBVMs. I recommend building one VBVM to use as a template and then cloning it 3 times. I found that I kept missing important things and having to start over, so better to fix the master and then destroy the clones.

I called my master image “proxZZ” so it showed up last in the list of VBVMs. I also never actually started up the master image, so it was always powered off and the ZZ’s made it look like it was sleeping.

Create proxZZ with the following:

First, make sure that you have created three additional Host Only Network Adapters in VirtualBox. In this exercise you will only use two of them, but it can get confusing when you are trying to match enp0s9 to something, so do yourself a favor and make three. Make sure to disable the DHCP server on all of them.
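You can do this from the GUI, but VBoxManage is quicker. A sketch for Linux/macOS hosts, where the adapters get named vboxnet0 through vboxnet2 (on Windows they show up as numbered "VirtualBox Host-Only Ethernet Adapter" entries instead):

```shell
# Create three host-only adapters:
VBoxManage hostonlyif create
VBoxManage hostonlyif create
VBoxManage hostonlyif create

# Make sure DHCP is off on each one (the command errors harmlessly
# if no DHCP server was ever created for that adapter):
VBoxManage dhcpserver modify --ifname vboxnet0 --disable
VBoxManage dhcpserver modify --ifname vboxnet1 --disable
VBoxManage dhcpserver modify --ifname vboxnet2 --disable
```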

Create a new virtual machine with the following characteristics:

Name: ProxZZ

Type: Linux

Version: Debian 64bit (Proxmox is Debian under the hood.)

Memory Size: 2048MB

Hard drive: dynamically allocated, 32GB in size.

Make sure that you have created 3 total virtual hard disks as follows:

SATA0: 32GB. This will be your boot drive and system disk. This is where Proxmox PVE will be installed. Fixed-size disks are supposed to be faster, but this isn’t even remotely about speed, and my laptop has a 240GB SSD, so I don’t have a ton of space to waste.

SATA1: 64GB, dynamically allocated. This will be one of your ZFS volumes.

SATA2: 64GB, dynamically allocated. This will be your other ZFS volume. Together they will make a RAID1 array.
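For the command-line inclined, the two extra disks can be created and attached with VBoxManage. The file names and the controller name "SATA" are assumptions based on VirtualBox defaults, so match them to what your VM actually shows:

```shell
# Create the two dynamically allocated 64GB ZFS disks (size is in MB):
VBoxManage createmedium disk --filename proxZZ-zfs1.vdi --size 65536
VBoxManage createmedium disk --filename proxZZ-zfs2.vdi --size 65536

# Attach them to the SATA controller on ports 1 and 2:
VBoxManage storageattach proxZZ --storagectl SATA --port 1 --device 0 \
  --type hdd --medium proxZZ-zfs1.vdi
VBoxManage storageattach proxZZ --storagectl SATA --port 2 --device 0 \
  --type hdd --medium proxZZ-zfs2.vdi
```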

While you are in the Storage tab, make sure to mount the Proxmox installer ISO.

You may be tempted to do something clever like unplugging virtual cables or something. Don’t. You will be cloning this machine in a minute and you will have a hard time keeping all of this straight.

Before you finish, make sure that the machine is set to boot from the hard drive first, followed by the CD/Optical drive. This seems stupid, but you will be booting these things in headless mode, and forgetting to eject the virtual CD-ROM is super annoying. So fix it here and stop being bothered with it.
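If you would rather set the boot order from the command line, VBoxManage can do it while the VM is powered off:

```shell
# Boot from the hard disk first, fall back to the DVD drive:
VBoxManage modifyvm proxZZ --boot1 disk --boot2 dvd --boot3 none --boot4 none
```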

When it’s done, it should look something like this:

Once you are sure your source VM is in good shape, make 3 clones of it. Don’t install Proxmox yet. SSH keys and stuff will play a major role in this exercise later, and I am not sure if VirtualBox is smart enough to re-create them when you clone it. I ran into this a few times, so just clone the powered-off VBVM. I called the clones prox1, prox2, and prox3.
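Cloning can also be scripted; a quick sketch using the names above:

```shell
# Make three full clones of the powered-off master and register
# them with VirtualBox:
VBoxManage clonevm proxZZ --name prox1 --register
VBoxManage clonevm proxZZ --name prox2 --register
VBoxManage clonevm proxZZ --name prox3 --register
```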

[Virtual] Software Mode(tm)

Now it is time to start your 3 clones. This can get pretty repetitive, especially if you start the process over a couple of times. While you will appreciate having cloned the servers, I haven’t discovered a simple way to automate building the PVE hosts themselves. In a few iterations of this exercise, I misnamed one of the nodes (like pro1 or prx2), and it’s super annoying later when you get the cluster set up and see one of the nodes named wrong. There is a procedure to fix the node name after you build it, but seriously, just take your time and pay attention.

As you do the install, select your 32GB boot drive and configure your IP addresses.
I went with a sequence based on the hostname:
prox1 – 192.168.1.101
prox2 – 192.168.1.102
prox3 – 192.168.1.103
Like I said before, go slowly and pay attention. This part is super repetitive and it’s easy to make a stupid mistake that you have to troubleshoot later. At some point, I guarantee that you will give up, destroy the clones, and start over 🙂

Send In The Clones

Once your hosts are installed, it’s time to shut them down and boot them again, this time in headless mode. This is where fixing the boot order on ProxZZ pays off. With all 3 VBVMs started up, you are ready for the next stage of the exercise: configuring your hosts.
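Headless boots are another thing VBoxManage makes painless:

```shell
# Start all three nodes without opening a console window for each:
VBoxManage startvm prox1 --type headless
VBoxManage startvm prox2 --type headless
VBoxManage startvm prox3 --type headless
```

From here on out you interact with the nodes over SSH or the web UI, not the VirtualBox console.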

If you have read my previous post about my first foray into Proxmox, you know that the infrastructure of my home network is, as the Irish would say, not the best. I have been tinkering with routers and smart switches, learning about VLANs and subnets and all kinds of other things that I thought I understood, but it turns out I didn’t.

Doing stuff with server and network gear at home is a challenge because the family just doesn’t get Hardware Mode(tm). Hardware means being sequestered in the workshop, possibly interfering with our access to the Internet. I have to wait for those rare occasions when I am: 1) at home and 2) not changing diapers and 3) not asleep and 4) no one is actively using the Internet. I have been putting things in place, one piece at a time, but my progress is, well, not the best.

Part of my networking woes are design. I don’t know how to build a network for a Proxmox cluster, because I don’t know the right way to build a Proxmox cluster. I also can’t spend hours in my basement lab tinkering. I need to be upstairs with the family. So I decided to build a little portable test cluster, on my laptop, using VirtualBox.

The network design at my house looks a bit like a plate of spaghetti, with old, unmanaged switches in random spots, like meatballs. Little switches plugged into big ones. No tiers, no plan, just hearty Italian improvisimo. Last year, when I fired up two Proxmox nodes with no consideration for what might happen… Mamma mia! It took a couple of days before the network completely crashed, and a couple more days to figure out the problem.

The great thing about VirtualBox is that you can build Host Only Networks. A host only network behaves like a physical switch with no uplink. VirtualBox virtual machines (VMs) can talk to each other, and to the physical host without talking to the outside world. This seemed like a decent facsimile of my plan to use a small unmanaged switch to isolate cluster traffic from the rest of the network.

The other great thing about VirtualBox is that you can add lots of network interfaces to a VM in order to simulate network interactions. You can build a router using a Linux or BSD distro and use it to connect your various host only networks to a bridge into your real physical network. I tried that at first, but I am not sure that it’s necessary for this exercise.

And last, but not least, VirtualBox lets you clone a VM. As in, to make a procedurally generated copy of a VM, and then start it up alongside it. This is a great feature for when you are screwing up configs and installs.

It is the combination of these features that allowed me to create a little virtual lab on a PC so I could figure out how to set up all the cool stuff that Proxmox can do, and figure out what kind of network I will need for it.

Phase 1: The plan

The plan for this exercise is to figure out how to use several features of Proxmox VE. The features are as follows:

Online Backup and Restore – Proxmox has the ability to take and store snapshots of VMs and containers. This is a great feature for a home lab where you are learning about systems and you are likely to make mistakes. Obviously, I use this feature all the time.
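For reference, a one-off backup from the command line looks something like this; the VMID 100 and the "local" storage name are assumptions, so substitute your own:

```shell
# Snapshot-mode backup of guest 100 to the "local" storage,
# compressed with LZO:
vzdump 100 --mode snapshot --storage local --compress lzo
```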

Clustering – Proxmox has the ability to run multiple hosts in tandem with the ability to migrate guest VMs and Linux containers from one host to another. In theory, using a NAS as shared storage you can migrate a VM without shutting it down. Since the point of this exercise is to build Proxmox hosts and not NAS appliances, we are going to focus on offline migrations, where you either suspend the guest or shut it down prior to migrating.
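The clustering workflow boils down to a handful of commands. A sketch using the node IPs from this exercise; the cluster name "testcluster" and the VMID 100 are my own placeholders:

```shell
# On prox1, create the cluster:
pvecm create testcluster

# On prox2 and prox3, join it by pointing at prox1:
pvecm add 192.168.1.101

# From any node, check membership and quorum:
pvecm status

# Offline-migrate guest 100 to prox2 (shut the guest down first):
qm migrate 100 prox2
```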

Storage Replication – Proxmox natively supports ZFS, and can use the ZFS Send and Receive commands to make regular copies of your VMs onto the other cluster nodes. Having a recent copy of the VM makes migrations go much faster, and saves you from losing more than a few minutes worth of data or configuration changes. I wish I had this feature working when I was building my Swedish Internet router.
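Replication jobs are managed with the pvesr tool. A sketch, again assuming a guest with VMID 100 (job IDs take the form vmid-number):

```shell
# Replicate guest 100 to prox2 every 15 minutes:
pvesr create-local-job 100-0 prox2 --schedule "*/15"

# Check when each job last ran and whether it succeeded:
pvesr status
```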

High Availability – If you have 3 or more PVE nodes in your cluster, you can set some of your VMs to automatically migrate if there is an outage on the node the VM is hosted on. The decision to migrate is based on a kind of voting system that uses a quorum to decide if a host is offline. I want to use this feature to ensure that my access devices are up and running to support my remote access shenanigans.
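Putting a guest under HA management is a one-liner per VM; VMID 100 is a placeholder here too:

```shell
# Ask the cluster to keep guest 100 running, restarting or
# relocating it if its node goes down:
ha-manager add vm:100 --state started

# See which HA resources exist and where they are running:
ha-manager status
```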

Phase 2: Preparation

To build the lab, you will need the following:

A desktop or laptop computer with 2 or more cores and at least 8GB of RAM. You could probably pull this off with 4GB if you are patient. My laptop has an old dual core i5 and 8GB and it was pretty much maxed out the whole time, so your mileage may vary.

A working OS with a web browser and SSH client. Linux would probably be best, but my laptop was running Win10 Pro. I recommend a tabbed SSH client capable of sending keystrokes to multiple SSH sessions, like MobaXterm.

chris@chrizzle23.com

Husband, Father, Veteran, cypher punk, hacker spacer, gamer, lover of privacy, free speech, and filthy scumm pirates. My opinions are my own and do not reflect those of hive13, Cinci2600, or my current employer.