Posts written by Walter Bentley

Since the initial launch of the OpenStack Innovation Center back in July of 2015, much work has been done. I wanted to take a moment to share the current status and some details about its next phases. If you are unfamiliar with OSIC, let me start off with some very quick background information.

Rackspace Private Cloud (RPC) powered by OpenStack has done a great job incorporating and enabling many of the great capabilities natively found within Cinder. With RPC, you gain the ability to leverage either Cinder nodes (commodity hardware that exposes its local storage as block storage to your cloud) or to connect your OpenStack cloud directly to a shared storage solution via Cinder integration drivers. This is where our friends at NetApp come into play. Rackspace and NetApp have formed a unique relationship to improve the Cinder shared storage capability within OpenStack. The two teams worked together to create a repeatable, approved, and tested process for integrating NetApp storage solutions into Rackspace Private Cloud footprints, whether in a Rackspace datacenter or in the customer's datacenter.
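To give a feel for what that Cinder integration looks like under the hood, a NetApp backend is typically wired in through a backend stanza in `cinder.conf`. The hostname, credentials, and backend name below are placeholders, and the exact options depend on your NetApp storage family, protocol, and driver version:

```ini
# Hypothetical backend stanza in /etc/cinder/cinder.conf -- all values are placeholders
[DEFAULT]
enabled_backends = netapp-backend

[netapp-backend]
volume_backend_name = netapp-backend
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = nfs
netapp_server_hostname = netapp.example.com
netapp_login = admin
netapp_password = secret
```

With a stanza like this in place, restarting the cinder-volume service lets the scheduler place new volumes on the NetApp backend by its `volume_backend_name`.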

So you have spent months convincing your leadership to go with OpenStack. Finally, the keys to the cloud are turned over to you as the Cloud Operator, and you look over at your co-workers and say, “Now what?” The next questions are usually something like: How do we best administer this cloud? The cloud is supposed to be easier, right?

As technology has evolved, infrastructure and application monitoring have changed positions. Not so long ago, monitoring was an afterthought when rolling out a new application or standing up a new rack of servers. More recently, I have observed monitoring become one of the first considerations, to the point where it actually appears in the initial project plan.

This evolution, while overdue in my mind, is a step in the right direction…not just for the System Admin who gets the 2 AM email alert or the application owner who sadly reports a 97% SLA on his app to his leadership every month. Truly knowing how your application is affecting your infrastructure is one of the keys to a successful cloud.

With monitoring now in an elevated position, that leaves you to think: what should I use for monitoring? There are plenty of software solutions on the market, many of which solve different problems.

Last week I had the privilege to attend the OpenStack Super Bowl, aka the OpenStack Summit, in Vancouver. It was incredible just to be around so many other folks who also believe strongly in OpenStack.

So in between sessions, I stumbled across a friendly competition sponsored by Intel called Rule the Stack. It was a competition to see who could build a fully functioning OpenStack cloud the fastest on six (6) physical servers. My coworker had mentioned it to me a week earlier, but, frankly, I forgot about it. I was focused on my two workshops and did not have extra time to plan. Anyone who knows me knows I love a challenge and never turn one down. Yes, of course I had to sign up and give it a go.

Before going much further, I want to fully disclose that I did not win the main prize in any way :D. I watched the SUSE guys do it in 6 minutes (which is a whole other discussion). Despite knowing I would not 'win' the competition, I went for it anyway. For me, it was not about winning but about solving this real-life puzzle in a repeatable way. The Intel guys appreciated my determined nature and awarded me the 'Most Determined' participant.

When dealing with OpenStack, one of the challenges is designing an architecture that can scale horizontally and making decisions based on the commodity hardware presented to you. Holding true to the foundation OpenStack was originally built on, an open cloud platform can run on any hardware (OEM, commodity, or Open Compute). This competition pushes you to make all of those decisions.

Again, this struck a chord in my heart because this is what I do for a living and because I believe the approach we take with RPC (Rackspace Private Cloud) makes those decisions very easy to solve.

The quick breakdown of the competition is:

- You are provided with six (6) physical nodes in three (3) different configurations. Two node types had the same processor and memory but a different number of drives. The third node type had a different processor, more memory, and a TPM module (more details can be found on the Intel site above).
- You had to build using the Kilo release of OpenStack.
- The process for building out your configuration was yours to decide. You could connect to the local network where the servers were attached via your own laptop or via laptops provided.
- There were opportunities for bonuses, shaving time from your final clock time, and penalties could be given for unconfigured nodes or nodes that were not optimized for use.

As soon as I saw the node configurations, I knew exactly how I wanted it set up. Keep in mind, I was not aiming for the fastest build but, rather, the most complete, flexible, real-life design. Despite HA not being a requirement (although I am attempting to have that rule changed for Tokyo, wink wink), my reference architecture did include a dual-server control plane. I also decided to include dual Cinder nodes and, of course, dual compute nodes. My complete reference architecture is outlined below.

The next step was to determine how to utilize the four (4) VLAN networks that were part of the provided specs. RPC called for three individual network bridges and a management network. Each node had two NICs, and the first NIC was bound to VLAN 11. I went back and forth on this decision for a while but finally settled on a layout that worked.

At this point, I am all ready to go, but there is still one last decision to make: how do I lay down the base OS on these servers? Again, not totally concerned with speed, I wanted an approach that was repeatable, flexible, and covered the most ground possible without requiring post-install configuration. After a quick poll of my team, two contenders came to the forefront: Cobbler or MaaS. I'm not going to say which one turned out to be the more complete option or how I did it, as it could be my secret weapon for Tokyo. What I will say is you would be shocked at which one turned out to be the best option.

So everything is prepped and the clock starts. Let’s just say the first time around was not pretty at all. Did I give up then? Of course not! I just signed up to try again. The second attempt was a bit better, but I literally ran out of time before the next participants arrived (at that point I was still building at the 2-hour mark). Yes, the third attempt was running perfectly, and, yet again, I was stopped because the previous participants had cut into my time slot a bit. My fourth, and final, attempt did the trick. That last attempt would have come in right under 1 hour and 30 minutes, but, unfortunately, I was literally being kicked out by security at the end of the session day on Thursday.

Pro tip: Sign up early and do not wait until the end of the Summit, as you will not be allowed a big enough time slot to finish.

All in all, it was a great experience and one that I plan to repeat in Tokyo. Special thanks go out to the Intel staff on hand in Vancouver - they were the best and very supportive and accommodating. Just a great set of guys! Congrats to the SUSE team, who I have to assume were the winners in Vancouver. The best thing about all of this is that you get another chance to step up and show off your stuff in order to be crowned “Ruler of the Stack”.

As the look and feel of the cloud evolves, matures, and edges toward mainstream adoption, enterprise solution architects, developers, and infrastructure engineers face the challenge of determining which technologies to consume. Should I go with something that requires vendor licensing? Or should I look to open source technologies, such as OpenStack? And if you do decide that OpenStack solves your technology needs, how do you best lay out its pros and cons to your senior leadership?

Those of us who have ever had to stand in front of a Director/CTO/CIO and figuratively 'fight' for a particular technology or product understand completely that this task is not for the faint of heart. I can remember very vividly holding index cards with bullet points in my hands as I attempted to lay out all the reasons why OpenStack should be the company's next major infrastructure shift. Being prepared for this conversation is critical to the overall enterprise architecture, so you need to articulate clearly why OpenStack is the best choice. You can never be too prepared. There will always be questions that you, as a technology advocate, will not even think of. In my opinion, being prepared is key. So let’s start on our technology layer cake.

Recently I had the pleasure of hosting a webinar covering the evolution of OpenStack. No matter how many times I review the history of OpenStack, I manage to learn something new. Just the idea that multiple companies, each with distinct ideas, can come together to make what I consider to be a super platform is amazing. Whether you think OpenStack is ready for prime time or not, it is hard to deny the power and disruptive nature it has in the current cloud market.

While this blog post may seem trivial on the surface, it packs some very interesting information on how flexible the Rackspace Cloud Files product can be. While executing another customer project, the age-old question was raised: “Where are we going to put the database backups?” Back in the day, this question really had only one solution. In the current age of the cloud, you have a few options. Since I like to live life on the edge…I raised my hand and said Cloud Files.

For those of you not familiar with Cloud Files, the easiest way to describe it is shared object storage. In OpenStack lingo, you could also call it shared Swift. Cloud Files is an API-enabled object storage capability found on the Rackspace Public Cloud platform. In this post, we will walk you through how easy it is to store something as simple as database backups in Cloud Files using simple automation, fronted by Ansible of course (my orchestration drug of choice). I promise this post will be short and sweet.
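Since Cloud Files speaks the Swift API, the upload step can be sketched in a few lines of Python with the `python-swiftclient` library. This is a minimal sketch, not the exact code from the project: the endpoint, credentials, container, and paths are all placeholders, and in practice this would be driven from Ansible.

```python
# Sketch of pushing a database backup to Rackspace Cloud Files via the
# Swift API. All names, credentials, and paths here are placeholders.
from datetime import datetime, timezone


def backup_object_name(db_name, when=None):
    """Build a timestamped object name so backups never overwrite each other."""
    when = when or datetime.now(timezone.utc)
    return "backups/{}/{}.sql.gz".format(db_name, when.strftime("%Y%m%d-%H%M%S"))


def upload_backup(conn, container, db_name, backup_path):
    """Upload one backup file into the given Cloud Files container."""
    conn.put_container(container)  # idempotent: no-op if it already exists
    with open(backup_path, "rb") as fh:
        conn.put_object(container, backup_object_name(db_name), contents=fh)


# Example usage (requires `pip install python-swiftclient` and real credentials):
#
#   from swiftclient.client import Connection
#   conn = Connection(
#       authurl="https://identity.api.rackspacecloud.com/v2.0/",  # placeholder
#       user="my-username", key="my-api-key", auth_version="2",
#   )
#   upload_backup(conn, "db-backups", "appdb", "/var/backups/appdb.sql.gz")
```

The timestamped object names give you a simple retention scheme for free: each backup lands under its own key, so pruning old backups is just a matter of listing the container and deleting objects older than your cutoff.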

So after being asked to do what I considered to be an easy thing, I soon realized that it was not :(. Rather, it was easy to do manually, just not easy to automate. I figured others could benefit from my discoveries. Before getting started, please note these instructions are for RHEL, Fedora, and CentOS. Some minor modifications would be needed to accommodate Ubuntu, but the same concepts apply.

In the newest release of Rackspace Private Cloud (RPC v9.0), we made changes to the reference architecture for improved stability. These changes included a different approach for deploying the cloud internally, which may also interest anyone looking into running Rackspace Private Cloud. The decision to use Ansible going forward was based on two major considerations: ease of deployment and flexible configuration. Ansible made it very easy for Rackspace to simplify the overall deployment and give users the ability to reconfigure the deployment as needed to fit their environments. Are you familiar with Ansible? If yes…skip the next paragraph; if not, please read on.

Recently I embarked on a customer project where they wanted to dynamically create (automate) a complete application stack, starting from the base server provisioning all the way up to the application deployment. One piece of the puzzle, previously seen as something to be done manually but now considered part of the stack, is the load balancer and any configuration related to load balancing.

There was an earlier blog post from Jesse Keating on Rolling Deployments with Ansible and Cloud Load Balancers, which also covered automating the creation of load balancers with Ansible. The major difference with this post is the additional capability to not only create the load balancer but also create a DNS record to associate with it, enable SSL termination, and add an SSL certificate to that load balancer.
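To make the shape of that playbook concrete, here is an illustrative sketch using the Rackspace (`rax_*`) modules that shipped with Ansible at the time. Treat every specific as an assumption: the domain, file paths, and credentials file are placeholders, and the module names and parameters should be verified against the Ansible rax module documentation for your version.

```yaml
# Illustrative only -- names, domain, and paths are placeholders, and the
# rax_* module parameters should be checked against your Ansible version.
- name: Create a load balancer with an associated DNS record and SSL termination
  hosts: localhost
  tasks:
    - name: Create the cloud load balancer
      rax_clb:
        credentials: ~/.rackspace_cloud_credentials
        name: web-lb
        port: 80
        protocol: HTTP
        region: DFW
        wait: yes
        state: present
      register: clb

    - name: Add a DNS record pointing at the load balancer VIP
      rax_dns_record:
        credentials: ~/.rackspace_cloud_credentials
        domain: example.com
        name: www.example.com
        data: "{{ clb.balancer.virtual_ips[0].address }}"
        type: A

    - name: Enable SSL termination with a certificate on the load balancer
      rax_clb_ssl:
        credentials: ~/.rackspace_cloud_credentials
        loadbalancer: web-lb
        secure_port: 443
        private_key: "{{ lookup('file', 'ssl/key.pem') }}"
        certificate: "{{ lookup('file', 'ssl/cert.pem') }}"
        state: present
```

The key idea is the `register`/`{{ clb... }}` handoff: the load balancer task records its result, and the DNS task reads the VIP address out of that result, so the two resources stay linked without any hardcoded IPs.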

One of the HOTest new projects within the previous release of OpenStack is the Heat project. Heat is described as a mainline project within the OpenStack Orchestration program because Heat alone is not the complete orchestration capability being developed by the community; my gut tells me we have more orchestration-based projects coming soon. Setting some groundwork on what capabilities Heat provides is important, and this is covered in two quick topics: what is orchestration, and what is a stack?
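To ground those two topics before we dig in: a stack in Heat is simply the set of resources described by a template. A minimal HOT (Heat Orchestration Template) sketch looks something like the following, where the image and flavor names are placeholders you would swap for values from your own cloud:

```yaml
# Minimal illustrative HOT template -- image and flavor names are placeholders
heat_template_version: 2013-05-23

description: Launch a single server as a stack

parameters:
  key_name:
    type: string
    description: Name of an existing Nova keypair

resources:
  my_server:
    type: OS::Nova::Server
    properties:
      image: ubuntu-14.04
      flavor: m1.small
      key_name: { get_param: key_name }

outputs:
  server_ip:
    description: First IP address of the server
    value: { get_attr: [my_server, first_address] }
```

Feeding a template like this to Heat creates the stack, and Heat then owns the lifecycle of everything in it: updating the template updates the resources, and deleting the stack tears them all down together.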