Recently I managed to acquire an old HP ProLiant DL380 G6 rack server – the kind you would find in a data center or business server rack. For just £125 it came with 2x Intel Xeon processors, 24GB of DDR3 ECC memory, a fully fledged RAID controller and redundant power supplies. It’s exactly the kind of upgrade I needed for my ageing HP N54L micro servers.

With that much processing power on hand, I decided it was time to try out virtualisation. I went with Microsoft Hyper-V because I’m familiar with Windows operating systems. I haven’t really had time to play with this server much yet, but I did get pfSense set up as my router as a virtualised operating system, and it’s working very well.

That gave me a little experience, and more recently I decided to upgrade my web server (the one running this website) to a new machine that supports virtualisation. It has an Intel Core i7-7700, 32GB of RAM and 2x 500GB SSDs. I currently run BetaArchive, this website and several others for friends from that server.

To separate BetaArchive from the rest, I decided to virtualise the whole machine, install two operating systems and give each its own IP address. I had never done this before, so I had to learn how to assign IPs to each virtual machine as well as understand how virtual hard disks work (fixed size, dynamically expanding, and so on). It’s all quite involved, and that’s just the basics of it.
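For anyone wanting to try the same, the Hyper-V side can be done from PowerShell. This is just a sketch with made-up names, paths and sizes – and note the static IP itself is configured inside the guest OS, not by Hyper-V:

```powershell
# Create a dynamically expanding virtual disk (grows as data is written).
# Use -Fixed instead to allocate the full size up front.
New-VHD -Path 'D:\VMs\BetaArchive.vhdx' -SizeBytes 200GB -Dynamic

# Create the VM and attach it to an external virtual switch so it can
# have its own IP address on the network (the address is then set in
# the guest's own network settings).
New-VM -Name 'BetaArchive' -MemoryStartupBytes 8GB -Generation 2 `
       -VHDPath 'D:\VMs\BetaArchive.vhdx' -SwitchName 'External'
```

Fixed disks avoid the small overhead of growing the file on demand, at the cost of claiming all the space immediately.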

One thing I found when doing this on both my home server and web server is that SSDs are an absolute must. Standard mechanical drives just aren’t quick enough, and without SSDs the system runs extremely slowly. That quickly led me to an issue that had me spending a whole day convinced there was a hardware fault in the new web server. When I ran drive benchmarks, the SSDs were not performing the way I expected: reads were mostly OK, but writes were slow. I couldn’t understand it. Suspecting a disk issue, I asked for a replacement, since the second disk seemed to be fine.
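If you want to run that kind of check yourself without installing a benchmark tool, Windows ships one. From an elevated command prompt (drive letter is just an example):

```powershell
# Built-in Windows benchmark: measure sequential write speed on drive C.
winsat disk -seq -write -drive c
# And sequential read, for comparison:
winsat disk -seq -read -drive c
```

Healthy read figures next to much slower write figures were exactly the symptom here.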

The disk was replaced and I saw no change. I then theorised it might be a SATA cable issue, so I asked for that to be changed. Again, no difference. I then asked for the whole server to be swapped out except for the disks and, again, no difference! What?! That made no sense, so it had to be a software issue. For another two hours I was stumped, and then it clicked.

Kicking myself, I checked the drive cache settings. Write caching was disabled. Doh! Turning it on instantly gave the speed boost I was expecting. Why it was off by default I don’t know. Is it always off by default? I’ve honestly never noticed this on other systems I’ve installed SSDs into. I feel sorry for the tech who had to do all of those swaps for me!

Disk I/O problems aren’t caused by virtualisation alone, but they become very noticeable when you run multiple machines at once and they fight over disk I/O. If one machine uses up all of the I/O, the other systems can lock up – not ideal! Thankfully you can restrict the maximum amount of I/O each machine is allowed to use, as well as reserve a minimum that each machine is guaranteed even when another is using a lot of I/O. This stops one from using it all and starving the others.
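On Hyper-V these per-disk limits are set with Storage QoS (Windows Server 2012 R2 and later). A sketch with made-up VM names and numbers – the IOPS figures are normalised to 8KB operations:

```powershell
# Guarantee this VM's disk at least 300 IOPS, and stop it from
# consuming more than 2000, so it can't starve the other VMs.
Set-VMHardDiskDrive -VMName 'BetaArchive' `
    -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 0 `
    -MinimumIOPS 300 -MaximumIOPS 2000
```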

So now that this is set up, everything is running just great. There are other considerations such as RAM and CPU allocation, but you can set minimums and maximums for those as well.
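Those work much the same way. Another sketch, with placeholder values:

```powershell
# Dynamic memory: start at 4GB, never drop below 2GB or grow past 8GB.
Set-VMMemory -VMName 'BetaArchive' -DynamicMemoryEnabled $true `
    -StartupBytes 4GB -MinimumBytes 2GB -MaximumBytes 8GB

# CPU: 4 virtual processors, with 10% of them reserved as a floor
# and usage capped at 75%.
Set-VMProcessor -VMName 'BetaArchive' -Count 4 -Reserve 10 -Maximum 75
```

The VM generally needs to be powered off before changing these settings.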

Hopefully this is something I can get into more and work with at home when I find the time. The biggest issue I have at home is that my disks are 3.5″ but the new server only takes 2.5″, so I will need to use the old one as a storage array. I’ll get around to it. Eventually…