virtualization

Deploying an OpenStack test or development platform can be a daunting task. A traditional installation of an OpenStack infrastructure requires many servers and is quite complex. However, there are a few methods that make this much easier, and possible with a single physical server or virtual machine that has enough resources. Today, we’ll deploy an OpenStack Ocata infrastructure on a single virtual machine (in my case, a VMware ESXi based virtual machine) using DevStack. I’ve found this to be the most stable, repeatable, and reliable method to get an OpenStack infrastructure up as quickly as possible. Keep in mind, this same guide can be used to install almost any release of OpenStack, simply by adjusting one word. More on that later.
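As a quick preview, the "one word" in question is presumably the release branch you check out when fetching DevStack. A minimal sketch, assuming the standard DevStack repository of the Ocata era (substitute another branch name, e.g. `stable/pike`, to target a different release):

```shell
# Clone DevStack and select the Ocata release branch.
# Changing "stable/ocata" to another stable branch selects a different
# OpenStack release -- that one word is the whole version switch.
git clone https://git.openstack.org/openstack-dev/devstack
cd devstack
git checkout stable/ocata
```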

Requirements

For this guide, you will need a server that at least matches the following specs.

Today, I’m going to guide you through the process of creating an iSCSI target / extent on FreeNAS 9. This will also work on previous versions of FreeNAS, such as versions 7 and 8. There are a few different ways to go about creating an iSCSI share. You can dedicate an entire device (hard drive or RAID array) to the iSCSI share, or you can simply create a volume and create multiple iSCSI shares, each of which is simply a file on the volume. This approach works well because you can use part of a volume as an NFS share, part of it as a CIFS share for Windows, and if you want a few separate iSCSI targets, you can just create a single file for each. Let’s get started.

How to create an iSCSI Target / Share on FreeNAS

First, we need to add a volume using your hard drive or RAID array that is connected to your FreeNAS server. If you have already done this, you can skip this step. Let’s get started with the rest.

Log into your FreeNAS web interface, and go to Storage > Volumes > Volume Manager. Fill in a volume name (make sure it starts with a letter, NOT a number, otherwise you will get an error). Add one or more of your Available Disks (by clicking the + sign). Select a RAID type if you wish. In my case, I’m using hardware RAID, so I will leave the default (single drive stripe, i.e., JBOD). Now click Add Volume.

Now that we have added a volume, we can begin the process of creating an iSCSI share. This process requires multiple steps, in the following order:
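For context on what a file-based extent actually is: it is just a large file on the volume that FreeNAS exports as a block device. The guide does everything through the web UI, but conceptually the UI is doing something like the sketch below (the volume name `vol1`, file name, and 100G size are hypothetical examples, not values from this guide):

```shell
# Create a 100 GB sparse file on the volume to serve as an iSCSI extent.
# /mnt/vol1 and the file name are placeholders for your own volume and extent.
truncate -s 100G /mnt/vol1/iscsi-extent1.img

# The file occupies almost no real space until blocks are written to it.
ls -lh /mnt/vol1/iscsi-extent1.img
```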

KVM is an excellent virtualization engine, but it lacks an easy-to-use interface. Kimchi changes that. Kimchi lets you handle the basic management tasks, like creating, starting, and stopping virtual machines, adding iSCSI targets and NFS shares, and much more. The interface is beautiful, and it’s pretty easy to set up. Today, I’ll show you how.

Note: Kimchi requires systemd, so Ubuntu 14.04 LTS will NOT work. You might be able to use 14.10, if systemd is installed. I am using Ubuntu 15.04 for this guide, which uses systemd by default.

How to install KVM on Ubuntu 15.04

First, let’s make sure everything is updated and upgraded. I’m working with a minimal installation of Ubuntu 15.04, with only OpenSSH server installed.
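A sketch of that first step plus the KVM install itself, assuming a stock Ubuntu 15.04 server (the package names are the usual ones for KVM on Ubuntu of that era; verify them against your release):

```shell
# Refresh package lists and upgrade everything already installed.
sudo apt-get update && sudo apt-get -y upgrade

# Install KVM, libvirt, and the common management tooling.
sudo apt-get -y install qemu-kvm libvirt-bin virtinst bridge-utils

# Confirm the CPU supports hardware virtualization (a count > 0 is good).
egrep -c '(vmx|svm)' /proc/cpuinfo
```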

oVirt, in my opinion, is the biggest contender with VMware vSphere. oVirt has the weight and development resources of Red Hat behind it, which has undoubtedly slingshotted it ahead of the rest of the open source virtualization solutions out there. It has almost all of the “out of the box” features vSphere has, and it works extremely well.

There have been two major holdbacks concerning oVirt in the past. First, early on it only supported Fedora. This definitely scared many people away, myself included. That is no longer the case, as it now supports Fedora, RHEL, and CentOS. The second major drawback is the complexity of installation. Overall, the methodology is pretty simple. At a minimum, you need two machines: an oVirt Engine, which is the brains of the operation and powers the web interface, and the oVirt Node, which is the “hypervisor.” Although the overall methodology is simple enough, it can really be a pain to install and get working. But that’s improving as well.

I wrote this guide to help you get your oVirt infrastructure built on CentOS 6.6 easily and quickly. You will need two servers, at minimum. The good news is that one of them, the oVirt Engine, can be virtualized, running on your currently configured hypervisor of choice. As far as specs, you’ll want to stay close to the following.

This is enough to work with and get a good idea of what the oVirt platform is capable of. It’s also a solid foundation that can be grown and expanded to form a production-worthy infrastructure. So, let’s get started.
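To give a flavor of the Engine side of the install, the CentOS 6 procedure of that era boiled down to adding the oVirt release repository and running the setup wizard. A sketch, assuming the oVirt 3.5 release RPM (the exact URL and version number are my assumptions, not from this guide; check the oVirt site for the release matching your CentOS version):

```shell
# Add the oVirt release repository (the 3.5 URL here is illustrative).
sudo yum install -y http://resources.ovirt.org/pub/yum-repo/ovirt-release35.rpm

# Install the oVirt Engine packages.
sudo yum install -y ovirt-engine

# Run the interactive setup wizard to configure the Engine and its web UI.
sudo engine-setup
```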

If you’ve read my other recent posts, you’ve probably noticed I’ve been spending a lot of time with different cloud architectures. My previous guide on using DevStack to deploy a fully functional OpenStack environment on a single server was fairly involved, but not too bad. I’ve read quite a bit about Ubuntu OpenStack, and it seems that Canonical has spent a lot of energy developing their spin on it. So, now I want to set up Ubuntu OpenStack. All of Ubuntu’s official documentation and guides state a minimum requirement of seven machines (servers). However, although I could probably round up seven machines, I really do not want to spend that much effort and electricity. After scouring the internet for many hours, I finally found some obscure documentation stating that Ubuntu OpenStack could in fact be installed on a single machine. It does need to be a pretty powerful machine; the minimum recommended specifications are:

8 CPUs (4 hyperthreaded will do just fine)

12GB of RAM (the more the merrier)

100GB Hard Drive (I highly recommend an SSD)

With the minimum recommended specs being what they are, my little 1U server may or may not make the cut, but I really don’t want to take any chances. I’m going to use another server, a much larger 4U, to do this. Here are the specs of the server I’m using:

Supermicro X7DAL Motherboard

Xeon W5580 4 Core CPU (8 Threads)

12GB DDR3 1333MHz ECC Registered RAM

256GB Samsung SSD

80GB Western Digital Hard Drive

I have installed Ubuntu 14.04 LTS, with OpenSSH Server being the only package selected during installation. So, if you have a machine that is somewhat close to the minimum recommended specs, go ahead and install Ubuntu 14.04 LTS. Be sure to run sudo apt-get update && sudo apt-get upgrade before proceeding.

Let’s Get Started

First, we need to add the OpenStack installer PPA. Then, we need to update apt. Do the following:
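A sketch of those commands, assuming Canonical’s cloud-installer PPA of the time (the PPA name and the `openstack` package name are my best recollection, not taken from this guide; confirm them against the current Ubuntu OpenStack documentation):

```shell
# Add the Ubuntu OpenStack installer PPA (the PPA name is illustrative).
sudo apt-add-repository -y ppa:cloud-installer/stable

# Refresh package lists so the new PPA's packages are visible.
sudo apt-get update

# Install the single-machine OpenStack installer.
sudo apt-get -y install openstack
```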

I’ve always been rather curious about OpenStack and what it can and can’t do. I’ve been mingling with various virtualization platforms for many, many years. Most of my production-level experience has been with VMware, but I’ve definitely seen the tremendous value and possibilities the OpenStack platform has to offer. A few days ago, I came across DevStack while reading up on what it takes to get an OpenStack environment set up. DevStack is pretty awesome. It’s basically a powerful script that was created to make installing OpenStack stupid easy, on a single server, for testing and development. You can install DevStack on a physical server (which I will be doing), or even a VM (virtual machine). Obviously, this is nothing remotely resembling a production-ready deployment of OpenStack, but if you want a quick and dirty environment to get your feet wet, or to do some development work, this is absolutely the way to go.

The process to get DevStack up and running goes like this:

Pick a Linux distribution and install it. I’m using CentOS 7.

Download DevStack and do a basic configuration.

Kick off the install and grab a cup of coffee.

A few minutes later you will have a ready-to-go OpenStack infrastructure to play with.
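The steps above can be sketched as a handful of commands, following the standard DevStack procedure (the `stack` user and the passwords in local.conf are placeholder values you would choose yourself):

```shell
# Create an unprivileged "stack" user; DevStack refuses to run as root.
sudo useradd -s /bin/bash -d /opt/stack -m stack
echo "stack ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/stack

# Then, as the stack user, fetch DevStack:
git clone https://git.openstack.org/openstack-dev/devstack
cd devstack

# A minimal local.conf; the passwords are placeholders.
cat > local.conf <<'EOF'
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=secret
RABBIT_PASSWORD=secret
SERVICE_PASSWORD=secret
EOF

# Kick off the install.
./stack.sh
```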

Server Setup and Specs

I have always been fond of CentOS, and it is always my go-to OS of choice for servers, so that is what I’m going to use here. CentOS version 7, to be exact. Just so you know, DevStack works on Ubuntu 14.04 (Trusty), Fedora 20, and CentOS/RHEL 7. The setup is pretty much the same for all three, so if you’re using one of the other supported OSes, you should be able to follow along without issues, but YMMV.

I recently decided to venture into the Proxmox virtualization world. Being a VCP, I’ve always used VMware-based virtualization for just about everything. I have played around with Xen before, but almost all of my virtualization endeavours have been purely hypervisor “bare-metal” based. When I found out that Proxmox seems to be the best of both worlds, with hypervisor and container based virtualization in one package, I was intrigued. So, I looked for a quick how-to on creating a bootable thumb drive to install Proxmox (I don’t have a CD drive on the server, nor on any of my servers now that I think about it). I’m using OS X as my primary OS, so I was happy to find that the .ISO could be copied to a USB thumb drive with one simple command (works on OS X and Linux):

dd if=pve-cd.iso of=/dev/XYZ bs=1M

I plugged in an 8GB USB thumb drive and needed to figure out what the /dev/ device name was, so I could format the command properly. So, a Google search it was. I felt pretty stupid when I found out that running this single command would give me the info I needed:
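On OS X, the command in question is presumably `diskutil`. A sketch of the full sequence, with two caveats worth knowing (the disk number `disk2` is a placeholder — identify your own thumb drive by its size before writing anything to it):

```shell
# List every disk the system sees; find the thumb drive by its 8 GB size.
diskutil list

# Unmount the drive first -- OS X auto-mounts it, which would block dd.
diskutil unmountDisk /dev/disk2   # /dev/disk2 is a placeholder

# Write the image. Note: OS X's BSD dd wants a lowercase "m" block-size
# suffix, and the raw device (rdisk2) is much faster than disk2.
sudo dd if=pve-cd.iso of=/dev/rdisk2 bs=1m
```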