
Over the last few days, I have had the super awesome opportunity to deal with the WordPress XML-RPC attack. Yes, yes, it has been a while since it happened, but apparently I have been pretty lucky not to have been used as a patsy, until now. I started off doing the first thing that is usually necessary with any vulnerability: patching WordPress. That was easy enough, but I noticed that I was still being attacked. Cripes… what's a boy to do?

Back in May, I had the opportunity to attend OpenStack Summit 2014, held in Atlanta, GA at the Georgia World Congress Center. This was my first OpenStack conference and I was excited about all the sessions. The most impactful part of my trip actually happened on the first day, during the opening keynote, when Jonathan Bryce, Executive Director of the OpenStack Foundation, interviewed a few major customers. They spoke about how they use OpenStack and what it has meant for their business. He interviewed people from Wells Fargo, Sony, and a few others while going over the successes of the past year and the Havana release. The interview that stuck out the most to me was Chris Launey from The Walt Disney Company.

Chris Launey is Director of Cloud Services and Architecture at Disney, out of Seattle, Washington. The first question Jonathan asked him was what he was doing at The Walt Disney Company, and Chris's answer was quite enlightening. He pointed out how easy it is today for someone at home to create a website: they can register for a WordPress site, buy a domain name, set up DNS, and license some clip art, all in about 20 minutes with a credit card. They would have everything they needed to get a web presence up and running. On the other hand, if you need to do the same thing at work, you have to open a ticket, fill out some forms, have your manager respond to an email, and possibly sign some documents, all in the hope that you get what you actually requested. He went on to say that we as technologists empower people more at home than we do when they come to work, and that struck a nerve with me.

I thought about it all day, through every session, as I listened to some really great discussions about providing services to users. I thought about what we do today and how we struggle with this exact problem that Chris mentioned. We fill out forms or tickets, call people on the phone, set up gates so someone can approve tasks, or schedule meetings to discuss the possibility of attempting an idea. It's easier to just do it on my laptop or look for other ways to get it done. As technologists, we look for ways to make tasks easier for our customers: we create APIs, update UI and UX for ease of use, and design around flexibility. But when we need to do these things for ourselves, we forget about it or push it aside.

During the entire week, this idea really stuck with me. As I checked email or looked at incoming tickets, I kept thinking about how this could be fixed, and then it struck me: we need to think about everything “As A Service”. Every platform that we design or implement should be approached with the goal of providing a service. We should not be creating platforms for just a single application; we should be thinking about how to implement them on a larger scale. I can almost guarantee that if you are thinking about a particular setup, someone pretty close to you probably is as well.

What does “As a Service” really entail? Well, I see it accomplishing three primary goals.

The first is that it needs to be scalable. The service cannot fall over after just a few transactions. If you work at a web company or any large-scale company, you have probably dealt with something like this; either way, scalability should be a primary goal no matter what. Scalability means a lot of different things to different people. I think the best example of a scalable application is ElasticSearch: take a look at how easy it is to scale ElasticSearch and you will see what I am talking about.

The next goal is that it should support multi-tenancy. Multi-tenancy is the idea that a single piece of software can support multiple clients at the same time. It can be thought of as virtually partitioning an application so that it appears to each client as its own application or service. One way to think about this: whether your users are internal, external, or both, they should not have any idea who the other customers are on the platform.

The final goal when I think about “As a Service” is authentication. This one is a bit more tricky, but it plays a very important part when dealing with multi-tenancy and scalability. Depending on your audience, authentication may be necessary when dealing with customer data, but it should be considered no matter what. Find different ways to authenticate; not everything has to be a username and password. PKI certificates are a great way to authenticate users to an application and can definitely limit the load on your platform when handling lots of requests. Additionally, look at using a token-based system where tokens expire after a certain amount of time. Combining the two, as Keystone can within OpenStack, is a great idea and can support a very high load while serving the application.

“Thinking As A Service” should be a new way for everyone to plan out their infrastructure. While this does not directly solve the how-do-I-get-a-website-set-up-at-work-quickly problem, it is a major step in the right direction. Teams can quickly deploy new automation tasks and websites that allow users to get their work done faster. Just think about the amount of time and effort that could be saved with integration across all these services. Today, OpenStack provides us virtual compute resources, but soon it will be able to provide physical compute resources as well. Platforms such as a message bus, a logging/search application, or Database as a Service (MongoDB/MySQL), available anywhere for deployment and both scalable and multi-tenant, could relieve much of the development time.

The possibility of making it easier for ourselves and our teammates is there. We just need to “Think As A Service”.

At the moment, I'm sitting in my hotel room in San Jose after a decent morning relaxing and enjoying a great day outside. While it's a bit chilly at home (66 degrees or so), it's about 80 degrees here, so who am I to complain?

I am in San Jose to attend 2013's Startup School. It is put on by Y Combinator and has been going on for the last five years, it seems. This is the first time I applied and was accepted, so I am super excited. This year's lineup of speakers doesn't look to disappoint at all.

One of the build systems I have been spending a lot of time working with is Cobbler. No, Cobbler is not that tasty dessert. It is an installation server that started out at Red Hat and has since been open sourced; you can learn more about it at the Cobbler home page. It can make a system install very quick and easy, and it can install operating systems other than just Red Hat Linux. We use it for FreeBSD, VMware, and a few others.

Cobbler relies on templated kickstart scripts to ease the burden of building lots and lots of boxes. Kickstart scripts are instructions that the operating system installer uses to configure a system in a particular manner. They can handle everything from complex tasks like determining the right sector to start a partition on, down to setting a root password. It is only natural to have Puppet be one of the tasks that Cobbler completes during a build.
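For illustration, a few directives from a typical kickstart file might look like the fragment below. The partition layout and password are placeholders, not values from any real build:

```
# wipe existing partition tables and lay out the disk explicitly
clearpart --all --initlabel
part /boot --fstype=ext3 --size=200
part / --fstype=ext3 --size=8192
# set the root password (placeholder value)
rootpw changeme
```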

Easy Stuff

Getting Puppet installed on the system is the easy part. All you have to do is point the machine at the correct repository and install the Puppet agent; it will grab all the necessary RPMs and install any dependencies. This can be done during the %post section of the build, or by adding Puppet as a package that is installed with everything else. You then ensure that the puppet master is correctly set in puppet.conf, and you are set.
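A minimal sketch of those %post steps, assuming a yum-based system; the master name `puppet.example.com` is a placeholder, and the sketch writes to a scratch file so it can run outside a real build (the real target would be /etc/puppet/puppet.conf):

```shell
#!/bin/sh
# Install the agent -- in a real kickstart %post this is simply:
#   yum -y install puppet
command -v yum >/dev/null 2>&1 && yum -y install puppet

# Point the agent at the puppet master.  PUPPET_CONF defaults to a
# scratch file here so the sketch is safe to run anywhere; a real
# build would append to /etc/puppet/puppet.conf instead.
PUPPET_CONF="${PUPPET_CONF:-$(mktemp)}"
cat >> "$PUPPET_CONF" <<'EOF'
[agent]
server = puppet.example.com
EOF
echo "agent configured in $PUPPET_CONF"
```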

Harder Stuff

The more difficult part comes when you are continuously building new systems. Adding a server that has never been built before is simple (if autosigning is set to true), but what happens when you want to re-provision a server? That is, you want to reinstall the operating system using the same hostname as before. That is where the trouble begins. You can manually go to the puppet master and revoke and remove the certificate by hand. That may work for one or two servers, but not for tens or even hundreds of servers at a time, or automatically during a continuous integration workflow. There has to be a better way… and there is.

Pretty simple solution

The solution is actually pretty simple: during every build, use the hostname that is being set to always attempt to revoke and delete the certificate. You attempt this for every server during every build; it does not matter if the server has never been built before. This ensures that the certificate is removed and the server can successfully grab the necessary information from the puppet master.

The Code Snippet

The way I do this in the build system is to create a snippet within Cobbler, which I then add to every kickstart that I plan on using.
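A sketch of what such a snippet might look like, assuming a Puppet 3.x master whose certificate_status REST endpoint has been opened to build hosts in auth.conf; the master hostname and port below are placeholders:

```shell
#!/bin/sh
# Hypothetical Cobbler %post snippet: clear any old certificate for
# this hostname before the agent checks in for the first time.
PUPPET_MASTER="puppet.example.com"   # placeholder -- your CA host
FQDN="$(hostname -f 2>/dev/null || hostname)"
URL="https://${PUPPET_MASTER}:8140/production/certificate_status/${FQDN}"

# Call 1: revoke the certificate.  "|| true" because the call fails
# harmlessly when this host has never been built before.
curl -k -s --connect-timeout 5 -X PUT -H "Content-Type: text/pson" \
     --data '{"desired_state":"revoked"}' "$URL" || true

# Call 2: delete the (now revoked) certificate from the master.
curl -k -s --connect-timeout 5 -X DELETE -H "Accept: pson" "$URL" || true
```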

This code makes two calls: the first revokes the certificate, and the second actually deletes it from the server. You cannot simply make a delete call; you will receive an error and the certificate will still be there on the master. Be sure to place this snippet near the top of the %post portion of your build, before any other Puppet tasks.

So, go try out Cobbler and add this snippet to your builds to make sure your certificates are deleted from your hosts. This will get hosts connected to Puppet up and running much quicker.

Trying to upgrade VMware Tools can sometimes be a difficult task. There are multiple ways to upgrade it, just as there are multiple ways to install it. At my company, the best way we have found is to use Cobbler and RPM to install VMware Tools.

We sync with VMware's package repository daily to ensure we always have the latest Tools package. Once those repos are available, we include them in our kickstart scripts after checking whether the host is a VMware guest or not.
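A sketch of how that check might look in a kickstart, assuming dmidecode is available in the install environment; the repo URL is a placeholder for wherever you mirror VMware's packages:

```shell
#!/bin/sh
# Only set up the VMware Tools repo when the host is a VMware guest.
is_vmware_guest() {
    # dmidecode needs root; VMware guests report "VMware Virtual Platform"
    dmidecode -s system-product-name 2>/dev/null | grep -qi vmware
}

REPO_DIR="${REPO_DIR:-/etc/yum.repos.d}"
if is_vmware_guest; then
    # $basearch is left literal for yum to expand at install time
    cat > "$REPO_DIR/vmware-tools.repo" <<'EOF'
[vmware-tools]
name=VMware Tools
baseurl=http://mirror.example.com/vmware-tools/rhel/$basearch
enabled=1
gpgcheck=0
EOF
fi
```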

Now that you have VMware Tools installed, imagine you are happily installing Tools on all your VMs, and one day you decide to upgrade to the latest version of ESX and that Tools needs to be updated as well.

RHEL5

Let's start off with RHEL5. You decide to use the normal:

yum update vmware\*

Unfortunately, you will see this error instead:

...
Transaction Check Error:
file /lib/modules/2.6.18-8.el5/extra/vmware-tools-vmxnet3/vmxnet3.ko from install of kmod-vmware-tools-vmxnet3-1.0.48.0-2.6.18.8.el5.3.x86_64 conflicts with file from package kmod-vmware-tools-vmxnet3-1.0.47.0-2.6.18.8.el5.3.x86_64
file /lib/modules/2.6.18-8.el5/extra/vmware-tools-vmxnet/vmxnet.ko from install of kmod-vmware-tools-vmxnet-2.0.9.2-2.6.18.8.el5.3.x86_64 conflicts with file from package kmod-vmware-tools-vmxnet-2.0.9.1-2.6.18.8.el5.3.x86_64
file /lib/modules/2.6.18-8.el5/extra/vmware-tools-pvscsi/pvscsi.ko from install of kmod-vmware-tools-pvscsi-1.0.3.0-2.6.18.8.el5.3.x86_64 conflicts with file from package kmod-vmware-tools-pvscsi-1.0.2.0-2.6.18.8.el5.3.x86_64

Well, that just makes me a very sad panda. I had tried upgrading partial packages, but nothing would get this to install, until now. RHEL5's solution is actually very easy:

yum install yum-kmod

The ‘yum-kmod’ package helps yum handle any package that involves kernel modules. Once you install it, you can run the update again on your host, and VMware Tools will install properly.

RHEL6

With RHEL6, the upgrade is not as easy as it is on RHEL5. First, there is no yum-kmod package (more on this in a second) to help facilitate the install of kernel modules. If you try to update VMware Tools via yum, you will see a similar set of transaction check errors, this time from different packages.

Not a fun time in upgrade-ville, to say the least. The reason we see different packages than we did on RHEL5 is that yum in RHEL6 already has the yum-kmod functionality built in. So the problem isn't directly with the kernel modules, but with the way VMware did its packaging for these RPMs. Hopefully ESX 5.1 fixes this issue, but I haven't tried it yet; I'll update everyone once I get around to upgrading our 5.1 VMs.

The solution for this is a bit more brute force. Instead of having a package help us, we have to remove the troubling packages first, upgrade VMware Tools, and then reinstall the missing packages. It's not very hard, but it can freak a few people out since we have to uninstall a few packages.

This starts off by removing VMCI and VSOCK from VMware Tools. In truth, most people are not using VMCI or VSOCK very much, as the solution never really took off as much as VMware would have liked. Even so, you are only removing that functionality for a moment while upgrading everything else, before putting it all back. The core functionality of VMware Tools remains; VMCI and VSOCK are just extra functionality that can be used if desired but is not required.
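The whole dance might look something like this; the package names are assumptions based on VMware's OSP naming, so confirm them with `rpm -qa 'vmware-tools*'` on your own host first:

```shell
#!/bin/sh
# Remove / upgrade / reinstall sequence for RHEL6.
# Package names below are assumed -- verify them on your host.
EXTRA_PKGS="vmware-tools-vmci vmware-tools-vsock"

if command -v yum >/dev/null 2>&1; then
    yum -y remove $EXTRA_PKGS       # drop the troubling packages
    yum -y update 'vmware-tools*'   # upgrade the rest of the Tools RPMs
    yum -y install $EXTRA_PKGS      # put the extra functionality back
else
    echo "yum not found; commands shown for illustration only"
fi
```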

Conclusion

Updating VMware Tools isn't that difficult at all, but it did take some banging of my head on the desk before I figured out what was really going on. I hope VMware fixes a few of these problems. The kmod issue with RHEL5 is easy to fix, and that is fine by me, but having to uninstall and reinstall packages in order to upgrade them just isn't as "nice and clean" as it should be. Hopefully VMware gets around to updating the RPMs and getting us back to simple "yum update" functionality.