I frequently find myself needing to get data out of the Openstack command line tools without the pain of awk/sed’ing out the ASCII art.

Thus, to quickly access the raw data, we can query the APIs directly using curl and parse the JSON instead, which is much better 🙂

Authentication

Before we can interact with the other Openstack APIs we need to authenticate to Keystone, Openstack’s identity service. After authenticating we receive a token to use with our subsequent API requests. So for step 1 we are going to create a JSON object with the required authentication details.

Create a file called ‘token-request.json’ with an object that looks like this.


{
    "auth": {
        "identity": {
            "methods": [
                "password"
            ],
            "password": {
                "user": {
                    "domain": {
                        "id": "default"
                    },
                    "name": "tuxninja",
                    "password": "put_your_openstack_pass"
                }
            }
        }
    }
}

Btw, if you followed my tutorial on how to install Openstack Kilo, the authentication details for ‘admin’ are in your keystonerc_admin file.

The token is actually returned in the header of the HTTP response, which is why we need ‘-i’ when curling. Notice we parse the token out of the headers and store the value in an environment variable, $TOKEN.
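The request looks roughly like this (the Keystone endpoint here is my lab controller, adjust host and port to yours). The real curl is shown in the comment; below it the header-parsing step is demonstrated on a captured response so you can see exactly what the awk/tr pipeline does:

```shell
# The real request (endpoint assumed; adjust to your controller):
#   TOKEN=$(curl -si -H "Content-Type: application/json" \
#       -d @token-request.json http://192.168.1.10:5000/v3/auth/tokens \
#       | awk '/X-Subject-Token/ {print $2}' | tr -d '\r')
# The parsing step, shown on a captured response header:
RESPONSE=$'HTTP/1.1 201 Created\r\nX-Subject-Token: gAAAAABdemo123\r\nContent-Type: application/json\r'
TOKEN=$(printf '%s\n' "$RESPONSE" | awk '/X-Subject-Token/ {print $2}' | tr -d '\r')
echo "$TOKEN"
```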

Now we can include this $TOKEN and run whatever API commands we want (assuming admin privileges for the tenant/project).
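For example, querying Nova for the server list is a one-liner along these lines (a sketch: the compute endpoint, port and tenant ID are placeholders from my lab; find your real ones in the service catalog returned by the token request):

```
curl -s -H "X-Auth-Token: $TOKEN" \
    "http://192.168.1.10:8774/v2/<tenant_id>/servers"
```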

I only have 1 VM currently, called spin1, but for the tutorial’s sake, if I had tens or hundreds of VMs and all I cared about was the VM name or ID, I would still need to parse this JSON object to avoid getting all this other metadata.

My favorite command line way to do that without going full Python is using the handy JQ tool.

The first command just takes whatever curl writes to STDOUT and indents and colors the JSON, making it pretty (color gives it a +1 vs. python -m json.tool).

In the second example we actually parse out what we’re after. As you can see it is pretty simple. jq’s query language may not be 100% intuitive at first, but I promise it is easy to pick up if you have ever parsed JSON before. Read up more on jq @ https://stedolan.github.io/jq/ & check out the Openstack docs for more API commands: http://developer.openstack.org/api-ref.html
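Here is the shape of both commands, run against a trimmed-down stand-in for the real Nova servers payload (jq must be installed; the JSON here is illustrative, not the full API response):

```shell
# A minimal stand-in for the real /servers response:
JSON='{"servers":[{"id":"abc-123","name":"spin1"}]}'

# Pretty-print, the equivalent of piping curl output to jq:
echo "$JSON" | jq .

# Pull out just the names:
NAMES=$(echo "$JSON" | jq -r '.servers[].name')
echo "$NAMES"
```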

Pre-requisites

This article is a continuation of the previous article I wrote on how to do a single node all-in-one (AIO) Openstack Icehouse install using Redhat’s packstack. A working Openstack AIO installation using packstack is required for this article. If you do not already have a functioning AIO install of Openstack, please refer to the previous article before continuing on to this article’s steps.

Preparing Our Compute Node

Much like in our previous article, we first need to set up our system and network properly to work with Openstack. I started with a minimal CentOS 6.5 install, and then configured the following:

resolv.conf

sudoers

my network interfaces, eth0 (the 192 network) and eth1 (the 10 network)

Hostname: ruby.tuxlabs.com ( I also setup DNS for this )

EXT IP: 192.168.1.11

INT IP: 10.0.0.2

A local user + added him to wheel for sudo

I installed these handy dependencies

yum install -y openssh-clients
yum install -y yum-utils
yum install -y wget
yum install -y bind-utils

And I disabled SELinux

Don’t forget to reboot afterwards.
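For reference, disabling SELinux is a one-line edit to /etc/selinux/config, which is why the reboot matters (setenforce covers you until then):

```
# Disable SELinux permanently (takes full effect after the reboot):
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# And turn it off for the running system right now:
setenforce 0
```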

To see how I setup the above pre-requisites see the “Setting Up Our Initial System” section on the previous controller install here : http://tuxlabs.com/?p=82

Change CONFIG_COMPUTE_HOSTS to the IP address of the compute node you want to add, in our case ‘192.168.1.11’. Additionally, validate that the IP address for CONFIG_NETWORK_HOSTS is your controller’s IP, since you do not run a separate network node.
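The edit lives in the packstack answer file on the controller (the file name below is an example, yours is datestamped), after which you re-run packstack against it:

```
# Point packstack at the new compute node (answer file name is an example):
sed -i 's/^CONFIG_COMPUTE_HOSTS=.*/CONFIG_COMPUTE_HOSTS=192.168.1.11/' \
    /root/packstack-answers.txt
grep ^CONFIG_NETWORK_HOSTS= /root/packstack-answers.txt   # should show the controller IP
packstack --answer-file=/root/packstack-answers.txt
```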

Fantastic. That command should look familiar from our previous tutorial; it is the standard command for launching new VM instances from the command line, with one exception: ‘--hint force_hosts=ruby.tuxlabs.com’. This part of the command forces the scheduler to use ruby.tuxlabs.com as the hypervisor.
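The boot command in question looked something like this (a sketch: the flavor, image and instance names are from my lab; the --hint flag is the important part):

```
nova boot --flavor m1.tiny --image cirros \
    --hint force_hosts=ruby.tuxlabs.com spin2
```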

Once the VM is building we can validate that it is on the right hypervisor like so.
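One way to do that as admin is to list instances along with the host each one landed on (this relies on the OS-EXT-SRV-ATTR extension being enabled, which it is on a packstack install):

```
nova list --fields name,OS-EXT-SRV-ATTR:host
```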

You can see from the output above that I have two VMs on my existing controller ‘diamond.tuxlabs.com’, and the newly created instance is on ‘ruby.tuxlabs.com’ as instructed. Awesome.

Now that you are sure you setup your compute node correctly, and can boot a VM on a specific hypervisor via command line, you might be wondering how this works using the GUI. The answer is a little differently 🙂

The Openstack Nova Scheduler

The Nova Scheduler in Openstack is responsible for determining which compute node a VM should be created on. If you are familiar with VMware, this is like DRS, except it only happens on initial creation; there is no rebalancing as resources are consumed over time. Using the Openstack Dashboard GUI I am unable to tell nova to boot off a specific hypervisor; to do that I have to use the command line above. (If someone knows of a way to do this in the GUI, let me know. I have a feeling that, if it is not there already, the ability to send a hint to nova from the GUI will be added in a later version.) In theory you can trust the nova-scheduler service to automatically balance the usage of compute resources (CPU, memory, disk, etc.) based on its default configuration. However, if you want to ensure that certain VMs live on certain hypervisors, you will want to use the command line above. For more information on how the scheduler works see: http://cloudarchitectmusings.com/2013/06/26/openstack-for-vmware-admins-nova-compute-with-vsphere-part-2/

The End

That is all for now, hopefully this tutorial was helpful and accurately assisted you in expanding your Openstack compute resources & knowledge of Openstack. Until next time !

A brief introduction of Openstack + My thoughts

Openstack is open-source software for building clouds. It was created in 2010 by people from Rackspace & NASA, but is currently managed by the non-profit Openstack Foundation, which includes members from the who’s who of the technology sector, who have joined forces to continue to invest in & develop Openstack (written in Python) for fun and profit (well, not so much profit). It is definitely fun though :-). In fact I have been having nothing but a blast since the moment I met Openstack in 2012. Admittedly, though, the first time I worked on Openstack it was on the Essex release, and I felt it had a ways to go before it was ready for prime time. At that time it was not only hard to install, but most of the people around it were hard-core Pythonistas (Python developers) rapidly trying to mature Openstack & its ecosystem, which just wasn’t production ready yet. And while the latest release of Openstack, ‘Icehouse’ (which I am covering in this blog), is leaps and bounds better, it can still be quite a PITA (Pain In The Ass) for first timers. In fact I still struggle with the idea of running Openstack in production, because it requires an incredible amount of resources, engineering skill & organizational persistence to do so. Companies willing to embark on this journey thus far have included Mercado Libre, Dell, HP, Redhat, Canonical (the Ubuntu guys), eBay, PayPal and Symantec among many others. These companies were required to beef up their engineering staff & allocate large amounts of resources just to take on this challenge. In addition, their commitments were probably tested over and over again, and their leadership had to respond with tremendous faith when timelines began to slip. However, organizations that complete this journey recognize that the return at the end of the tunnel is a compounding one.
For starters you aren’t locked into a vendor like VMware or AWS (Amazon Web Services), both of which are incredibly expensive; but the compounding return comes from the skills it will build internally and the culture that will be a by-product.

Coming here to Tuxlabs means you have personally accepted this mission to learn Openstack and are in need of some guidance. Don’t worry, you are in good hands. I will show you the light, which can be hard to see through the darkness if you try to take on the documentation all by yourself: http://docs.openstack.org/

So sit back, relax, read, and type exactly as I do. In the end you will have a perfectly functioning Openstack Icehouse cloud and if you decide to bring this into your organization remember with great power comes great responsibility…

You will turn everyone in your organization from POSAs (Plain Old Sys Admins) into Cloud Engineers and Architects who jump into the Openstack Python code base at the sniff of an issue.

Devstack

Devstack is a shell script used to quickly and easily deploy an all-in-one install of Openstack on any machine or VM for the purposes of trying, testing, and developing on Openstack. It is extremely easy to install and get going. If you have not used Devstack already, I recommend trying it first: it is a quick way to get a learning win, and it will help you decide what you want to learn more about and whether or not Openstack is what you are looking for. However, you do not want to run Devstack in production, for starters because production deployments should be multi-node setups, not all cloud services deployed on one machine like Devstack does.

The Book

My favorite Openstack book to date is by far the Openstack Cloud Computing Cookbook written by Kevin Jackson. This book is so great because it gets straight to business, giving you the commands & understanding needed to get Openstack up and running quickly, unlike most other books that bore you with unnecessary details. The first time I installed Openstack Essex I used this book and it worked like a charm; however, that was on Ubuntu, which this book was written for. If you want to install Openstack on Redhat you can do so from scratch, or you can use RDO.

RDO

RDO technically doesn’t stand for anything! http://openstack.redhat.com/Frequently_Asked_Questions#What_does_RDO_stand_for.3F, but I don’t really buy that, and I have found some people expanding the acronym to the ‘Redhat Distribution of Openstack’, which I think sounds incredibly fitting. RDO has a website dedicated to a community of people running Openstack on Redhat, CentOS and Fedora. Because Redhat is still the leader in production Linux deployments in the enterprises of the world, I chose to use it in my own lab environment & for the purposes of this tutorial. However, because I cannot afford the license cost of Redhat Enterprise Linux, I am using the free community release of RHEL known as CentOS, the Community Enterprise Operating System. For the purposes of this tutorial we will be installing CentOS 6.5 with a minimal install on a bare metal system and then installing Packstack according to the instructions on the RDO website. Then we will go a step further.

Getting Started

A production Openstack deployment has a minimum of 3 nodes: one each for the Controller, Network, and Compute functions. Even that would not cover a Highly Available deployment, where you would need the capability to fail over functions to standby nodes during an outage or maintenance. However, for the purposes of this article I will be showing you what is called an all-in-one install, where the Controller, Network, and Compute functions all live on the same node. In future articles we will expand on this knowledge to build a production-capable multi-node deployment of Openstack.

That system should have dual NICs, although I think it is possible to use virtual interfaces for a lab environment if you have to.

Configure the hostname of your controller in DNS. Mine is diamond.tuxlabs.com, but most people go with controller.yourdomain.com. (If you don’t have or use DNS, just make sure you configure your hosts file /etc/hosts with the information.)

The system you use should have CentOS 6.5 installed with a Minimal Install, using the defaults, no extras.

Your brain, keyboard, fingers and a fresh beer and possibly an ice chest with more beer depending on how far the fridge is.

Assumptions

Our home network uses 192.168.1.0/24 and has access to the internet.

It has DHCP enabled, but only for 192.168.1.150-199.

We are not using 10.0.0.0/24 for anything so we can use it for our private network in Openstack.

If these are true we should be able to follow my examples exactly, but if you want to change your networks you will have to substitute them as needed.

Setting Up Our Initial System

First become root or log in as root, then configure the following configuration files to match.

resolv.conf

This assumes our gateway device @ 192.168.1.1 runs DNS (like the Linksys boxes do). If 192.168.1.1 doesn’t run DNS, do not add it to resolv.conf.
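Mine ended up looking like this (the search domain and nameserver are from my lab, substitute your own):

```
# /etc/resolv.conf
search tuxlabs.com
nameserver 192.168.1.1
```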

Configuring Networking

There are many ways to configure Openstack networking and it can be quite complicated, so we are going to use what I consider the simplest method using Openvswitch.

Before we begin, make sure you can ping yahoo.com before we go mucking with the configs. We are going to change how your primary network interface is configured, and then configure two additional interfaces. Do your best to match these configurations exactly; making a mistake here might cause Openvswitch to barf on itself, and that would not be fun to troubleshoot.

The above configuration(s) set eth0 as an OVSPort that is bridged to the interface br-ex. eth1 is configured normally for our private 10 network, and br-ex is configured as an OVSBridge with our actual IP information from / for eth0.
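Concretely, the three files under /etc/sysconfig/network-scripts/ looked roughly like this (a sketch: IPs are from my lab, and I have left out hardware-specific fields like HWADDR; adjust to your system):

```
# ifcfg-eth0
DEVICE=eth0
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
ONBOOT=yes

# ifcfg-eth1
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.0.0.1
NETMASK=255.255.255.0

# ifcfg-br-ex
DEVICE=br-ex
TYPE=OVSBridge
DEVICETYPE=ovs
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
```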

After Restarting Networking ifconfig Output Should Look Like This


[root@diamond tuxninja]# ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 00:15:17:65:F9:98
          inet6 addr: fe80::215:17ff:fe65:f998/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:322092 errors:0 dropped:0 overruns:0 frame:0
          TX packets:187420 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:436592198 (416.3 MiB)  TX bytes:15605772 (14.8 MiB)
          Interrupt:18 Memory:b8820000-b8840000

[root@diamond tuxninja]# ifconfig eth1
eth1      Link encap:Ethernet  HWaddr 00:15:17:65:F9:99
          inet addr:10.0.0.1  Bcast:10.0.0.255  Mask:255.255.255.0
          inet6 addr: fe80::215:17ff:fe65:f999/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 b)  TX bytes:552 (552.0 b)
          Interrupt:19 Memory:b8800000-b8820000

[root@diamond tuxninja]# ifconfig br-ex
br-ex     Link encap:Ethernet  HWaddr 00:15:17:65:F9:98
          inet addr:192.168.1.10  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::a8db:16ff:fed6:f4c4/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:293 errors:0 dropped:0 overruns:0 frame:0
          TX packets:151 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:43799 (42.7 KiB)  TX bytes:15786 (15.4 KiB)

[root@diamond tuxninja]#

For more information on Openstack networking here are some references that I used.

The first command adds the required RDO repo so yum can install packstack. The second installs the openstack-packstack package. The third and final command is the magic: it installs packstack, telling it to deploy all Openstack components to one machine. The --provision-all-in-one-ovs-bridge=n flag tells packstack we are going to be using a single node (although I am still not entirely sure this flag is absolutely necessary), and the final flag tells packstack not to deploy the demo project, because if it does you end up having to delete that project before you can delete the network information and re-create it correctly.
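For reference, the three commands were along these lines (a sketch: the repo URL is illustrative for the Icehouse era, get the current one from the RDO site):

```
# 1. Add the RDO repo (URL illustrative; see openstack.redhat.com):
yum install -y http://rdo.fedorapeople.org/rdo-release.rpm
# 2. Install packstack:
yum install -y openstack-packstack
# 3. Run the all-in-one install, skipping the demo project:
packstack --allinone --provision-all-in-one-ovs-bridge=n --provision-demo=n
```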

Packstack will take about 10 minutes to run. It uses puppet to deploy Openstack and its required configurations. In my experience it works pretty well; however, if you do need to re-install packstack, there isn’t an automated uninstall script included. So someone created one, and here it is in case you need it.


[root@diamond tuxninja]# cat packstack_uninstall.sh
#!/bin/bash
# Warning! Dangerous step! Destroys VMs
for x in $(virsh list --all | grep instance- | awk '{print $2}') ; do
    virsh destroy $x ;
    virsh undefine $x ;
done ;
# Warning! Dangerous step! Removes lots of packages
yum remove -y nrpe "*nagios*" puppet "*ntp*" "*openstack*" \
    "*nova*" "*keystone*" "*glance*" "*cinder*" "*swift*" \
    mysql mysql-server httpd "*memcache*" scsi-target-utils \
    iscsi-initiator-utils perl-DBI perl-DBD-MySQL ;
# Warning! Dangerous step! Deletes local application data
rm -rf /etc/nagios /etc/yum.repos.d/packstack_* /root/.my.cnf \
    /var/lib/mysql/ /var/lib/glance /var/lib/nova /etc/nova /etc/swift \
    /srv/node/device*/* /var/lib/cinder/ /etc/rsync.d/frag* \
    /var/cache/swift /var/log/keystone /var/log/cinder/ /var/log/nova/ \
    /var/log/httpd /var/log/glance/ /var/log/nagios/ /var/log/quantum/ ;
umount /srv/node/device* ;
killall -9 dnsmasq tgtd httpd ;
vgremove -f cinder-volumes ;
losetup -a | sed -e 's/:.*//g' | xargs losetup -d ;
find /etc/pki/tls -name "ssl_ps*" | xargs rm -rf ;
for x in $(df | grep "/lib/" | sed -e 's/.* //g') ; do
    umount $x ;
done

Don’t forget to chmod +x that bad boy to make it executable, so you can run it when you are ready to uninstall.

Now then, getting back to our install. If Packstack was successful you should see something like this.


**** Installation completed successfully ******

Additional information:
 * A new answerfile was created in: /root/packstack-answers-20140802-125113.txt
 * The generated manifests are available at: /var/tmp/packstack/20140802-125113-RzCDrE/manifests
[root@diamond tuxninja]#

Check Out Our Keystonerc_admin File

Just take note of what environment variables it sets. To log in to our GUI we need these credentials, and to use any command line functionality we have to source this file in our shell.


[root@diamond tuxninja]# cat ~/keystonerc_admin

export OS_USERNAME=admin

export OS_TENANT_NAME=admin

export OS_PASSWORD=a57c83f56ccc41f5

export OS_AUTH_URL=http://192.168.1.10:5000/v2.0/

export PS1='[\u@\h \W(keystone_admin)]\$ '

[root@diamond tuxninja]#

Configuring Openstack Networking

Because we did not deploy the demo project, the default network configuration in Openstack should not exist. If it does, log in to the GUI @ http://diamond.tuxlabs.com/dashboard using the credentials from keystonerc_admin above and delete the network configuration under

admin –> routers

admin –> networks

Then you are ready to re-create your network. You could use the GUI to do this, which is pretty straightforward… but since GUIs are dirty, we are going to use the command line.

In order to use any openstack command line utilities, you must first source the keystonerc_admin file so the required environment variables are set in your shell, like so…


[root@diamond ~]# source keystonerc_admin
[root@diamond ~(keystone_admin)]#

Seeing (keystone_admin) in your prompt means you are ready to run commands. Here you go:
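The commands I ran were along these lines (a sketch using the Icehouse-era neutron CLI, with my lab’s names and address ranges; substitute your own networks as needed):

```
# Public (external) network, mapped onto our real 192.168.1.0/24:
neutron net-create public --router:external=True
neutron subnet-create public 192.168.1.0/24 --name public_subnet \
    --disable-dhcp --gateway 192.168.1.1 \
    --allocation-pool start=192.168.1.100,end=192.168.1.149
# Private tenant network on 10.0.0.0/24:
neutron net-create private
neutron subnet-create private 10.0.0.0/24 --name private_subnet
# Router connecting the two:
neutron router-create router1
neutron router-gateway-set router1 public
neutron router-interface-add router1 private_subnet
```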

Copy & pasting this will spit out a lot of messages showing the result set of each command.

The important thing to realize is that your public network has to be configured with an external router. This effectively NATs the 10 & 192 networks, making internet access available to your VMs. When this is configured correctly, your Network Topology diagram under the dirty GUI should resemble this.

The router_gateway always shows DOWN. No idea why, but someone else said it’s a bug 😉

Next, under your project (the admin project), you have to configure your Security Groups under Compute—>Access & Security—>Security Groups. Once there, click Manage Rules for the default security group. Delete what’s there. Add Ingress/Egress rules for ALL ICMP, ALL TCP, and ALL UDP, accepting all other defaults on the form. This will open up your firewall completely.

Time To Restart Openstack

Finally we need to restart Openstack, validating that openvswitch-agent starts.

Note: You can use ‘openstack-service --full-restart’ to restart all openstack services.
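That check looks like this (service name as it appears on my RDO install; yours may differ slightly):

```
openstack-service --full-restart
# Then confirm the agent came back up:
service neutron-openvswitch-agent status
```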

If openvswitch-agent does not start, run back through the above steps and/or consult the references, or email me at tuxninja [at] tuxlabs.com. This can be a real pain to figure out, but I finally have it down pat.

If openvswitch-agent is active we are good to go. Next we have to create an SSH Key.

Creating Our Cloud’s SSH Key

Our Openstack Cloud has no authentication system for our virtual machines by default. Eventually you could configure an LDAP server, and configure your images (or push the required pam configurations via Puppet) so your VMs use LDAP for authentication. But for now, Openstack allows for a post-configuration step after building VMs where it will add your SSH key. It does this using the metadata service, which cloud images look for: if they can reach the metadata service @ http://169.254.169.254, they copy down the public key and install it on the VM for you, allowing you to log in to that VM using the operating system’s default account (i.e. ubuntu, fedora). There are alternatives, such as using guestfish to modify an image’s configuration, or using a post-install cloud-init configuration to specify a password for an account, but the cleanest and simplest way is to use the metadata service after generating & importing the public key into openstack and assigning it to our VM guest.

I don’t enter a passphrase, so I get password-less SSH, because again this is a lab environment and typing passwords sucks.
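Generating the key and importing it into Nova is a sketch like this (the file and keypair names are my choices; the nova step needs your sourced admin credentials, so it is shown as a comment):

```shell
# Generate a password-less key pair for the lab:
ssh-keygen -t rsa -f cloud.key -N "" -q
# Import the public half into Nova (run with keystonerc_admin sourced):
#   nova keypair-add --pub-key cloud.key.pub cloud-key
ls cloud.key cloud.key.pub
```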

Launching A VM

Next you want to launch a VM that we can use to import our SSH key into. Login to the dashboard @ http://diamond.tuxlabs.com/dashboard again using the keystonerc_admin credentials provided. Once inside navigate to Project—>Compute—>Instances and click the Launch Instance button in the upper right corner. You will be presented with a form. Here are the screenshots to guide you through.

Fill out the details tab…

Click the Access & Security Tab…

Then Click The + …

Now in order to fill this out we have to copy and paste the contents (bold part) of the public key we created.

See the status ‘Spawning’? When the virtual machine is done being built you will see the status change to ‘Running’. You can watch the machine boot by clicking the name of the instance and then going to the Log or Console tab.

This is great, our VM is running and it has an IP address configured! But that IP address is only used for Openstack communication, so we still need to associate a floating IP with this system so we can SSH to it. Click on the More dropdown and select Associate Floating IP.

Then click the + …

Click Allocate IP…

And then click associate…

Now you should see your VM has an internal IP (a 10 dot address) and an External IP on the 192.168.1.0/24 network. Now we can SSH to our VM.

We didn’t mention it earlier, but Openstack comes with only one Linux image by default. It’s called CirrOS, and it’s a tiny, minimal cloud image for testing. The login for this operating system is cirros / cubswin:) … which is visible from the console log of the machine once it is fully booted. You could SSH into the machine on 192.168.1.54 using that login and password like so…


[root@diamond tuxninja]# ssh cirros@192.168.1.54
The authenticity of host '192.168.1.54 (192.168.1.54)' can't be established.
Warning: Permanently added '192.168.1.54' (RSA) to the list of known hosts.
cirros@192.168.1.54's password:
$

But that is lame! We imported our SSH key, remember? So how do we use that? Like this…


[root@diamond tuxninja]# ssh -i cloud.key cirros@192.168.1.54

$

Wow, that is cool.

Now what ? Let’s add some more images.

Installing More Operating System Images

Ok: Openstack is installed, networking is working, the metadata service and our keys are working; we are happy campers. But to make this cloud useful we are going to need some real Linux images. There are two ways to install images using glance (the openstack image service).

In the first method we download the image using wget & then run the proper glance command. Here is this approach in action.
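The wget + glance pair looks roughly like this (a sketch: the image URL is a placeholder, grab the current Fedora cloud image link from fedoraproject.org; flags are glance v1 CLI syntax):

```
# Download a cloud image (URL is a placeholder):
wget -O fedora.qcow2 http://<mirror>/fedora-cloud.qcow2
# Register it with glance:
glance image-create --name "Fedora" --disk-format qcow2 \
    --container-format bare --is-public True --file fedora.qcow2
```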

Remember, whenever you run openstack commands you get the status or result set back from the command.

Once you have installed the Fedora and Ubuntu images you should have a real cloud on your hands. Now you can do things like expand your setup for multi-node or get LBaaS working for load balancing requests to your web servers for example. Or if you want to be amazeballs you could install things like Puppet inside of Docker, or Cloud Foundry and build your PaaS (Platform As A Service)!

Final Comments

Openstack can be tricky even for an experienced Sys Admin. While learning Openstack I found it difficult to find tutorials on exactly the setup I was looking for mainly in terms of how the network was being configured, and this often made me second guess myself when I would run into an issue as I learned Openstack. I wanted to write this article to give back and help to educate my brothers and students of life long learning as you embark on your Openstack adventure, Godspeed. Here are some commands of note to help you along your way.

You can do this for every openstack service, but below I am just showing the two most useful to troubleshoot when getting your cloud up and going: nova and neutron, aka the controller/compute and networking logs.


[root@diamond tuxninja]# tail /var/log/nova/*.log

[root@diamond tuxninja]# tail /var/log/neutron/*.log

Want to change which partition Openstack uses on local disk (ephemeral storage) to deploy VMs?

If the majority of your ephemeral (local) disk space is on a partition other than /var (for example, mine was under /home), then you need to change your state path and restart openstack services. Don’t forget to copy any existing files in /var/lib/nova/ to the new location.
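That change is the state_path option in /etc/nova/nova.conf; a sketch of the full move (the /home/nova path is an example):

```
# In /etc/nova/nova.conf, set (example path):
#   state_path = /home/nova
# Then move the existing data and restart:
mkdir -p /home/nova
cp -a /var/lib/nova/. /home/nova/
chown -R nova:nova /home/nova
openstack-service --full-restart
```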

Additionally, I have seen different errors presented referencing vif.py

The fix for this could be several different things. First, verify your network interfaces are configured 100% correctly. Compare your ovs-vsctl output to mine above. Does it look different? Shorter, stuff missing? This error usually means Openstack is having difficulty inserting ports into Openvswitch for whatever reason. The most common cause I found is not restarting networking before restarting openstack networking after making a configuration change to the interfaces. So to resolve this try…