dev += ops

With the proliferation of virtualization and cloud infrastructure, our traditional ideas and practices around “The System” are not keeping pace. We pay for every IOP, cycle, and byte our workloads consume.
It used to be that a system sat idle in one of its dimensions a fair amount of the time. That idleness let us build “fat” management tools to ensure things worked.

What follows is a rant that has been percolating in me for a couple of years now. I have attempted to organize it in a way that is comprehensible. Mind you, this post started in November of 2012, and I have been slowly adding to it since.

Where I rant about things

Waste

All the management tools seem to be wasting heaps of resources, especially the most popular (and my favorites), Chef and Puppet. Things like Ansible and CFEngine 3 have much more desirable overhead for config management. Other monitoring tools in common use carry crazy overhead. Most Nagios plugins are not resource minded. Things like collectd are.

In the same vein, most of what makes up a “distro” is waste in the modern datacenter.

I/O

Most of the tools do some file and state caching, but they still tend to tear the shit outta file I/O when run. On some boxes running Chef (Puppet before that) with heaps of files to stat, the runs go nuts with tiny I/O. On stuff like EC2, that is money.
Instead, why can’t we plug these things into the OS’s file event notifications? See my hacked chef-inotify stuff for a non-functional experiment.
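To sketch the shape of what I mean (this is not my chef-inotify code; it assumes inotifywait from inotify-tools, and the watched path and converge command are just illustrative):

```shell
#!/usr/bin/env bash
# Let the kernel tell us when managed files change instead of stat()ing
# them all on every run. Requires inotifywait from inotify-tools.
# Watched path and converge command are illustrative placeholders.
inotifywait --monitor --recursive \
  --event modify,create,delete,move /etc |
while read -r dir event file; do
  echo "drift detected: ${event} on ${dir}${file}"
  chef-client --once    # converge only when state actually changed
done
```

No stat storms, no tiny-I/O thrash; the run only happens when something actually drifted.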

Disk

Most distros put heaps of shit on disk that we don’t need in the modern datacenter. Most documentation on every VM you build is pretty useless. Man pages are great, I love them and use them almost every day, but I don’t need them on every one of a thousand VMs eating up hundreds of MB (oh whoop, a couple gigs). Yea, but that’s a couple gigs we pay for!

Generally it doesn’t stop with docs. The mentality today is that disk is cheap. In many ways it is, but there are other costs to disk, especially when it comes to IaaS images. Time to provision is something most people care about.

CPU

All the monitoring and management things tend to carry a heap of crazy overhead. Mainly because individually it’s not an issue, but collectively they add up to a nuisance.

I dunno how to solve this really. It’s a necessary design trade-off between being able to automate/monitor easily with whatever tools you may partially know, and thrashing your system :D

Memory

Again, in our IaaS infra, as a consumer and a provider these resources matter. As a provider, that’s capacity I can’t sell. As a consumer, that’s money out of my pocket. Yet it is treated as tho it’s nothing.

Chef/Puppet trade heaps of memory for ease of use.

90% of the time the daemon isn’t doing anything except chewing on memory

Granted, I run Chef on a cron schedule so it only eats what it needs for a run, and fork mode solves some of the leaking issues it used to have. Fundamentally tho, these issues are a side-effect of using Ruby or any other high-level language that wants to treat memory like shit.

I’ve got no constructive ideas here other than: write your shit in a systems language when it’s meant to be a systems service. (Not really constructive, more curmudgeon, but yea.)

Supervising the Supervisors

So yea, Chef and Puppet etc. like to watch services for state. Most distros today implement service supervision one way or another (upstart/systemd/launchd). Yet our frameworks aren’t plugging into these systems for service management in a direct way. Instead they are executing shell scripts to interrogate state. Never mind the fact that they are polling said state when some of those supervisors support subscriptions for notifications.
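A rough sketch of the difference on a systemd box (the unit name and bus match rule here are illustrative, not anything Chef or Puppet actually does today):

```shell
# What CM tools effectively do today: shell out and poll state on every run.
systemctl is-active nginx

# What they could do instead: subscribe once on the system bus and have
# state changes pushed to them, with no polling loop at all.
dbus-monitor --system \
  "type='signal',sender='org.freedesktop.systemd1',member='PropertiesChanged'"
```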

Init scripts are imperative. Supervisors and CM want to be declarative systems. I think some issues arise from the conflict between these approaches.

Our service infra should not poll.

Our Config Management should subscribe to services.

Init scripts should die

Maybe the two systems really could be merged in some way; they are both doing so much of the same things.

Notifications > Poll

srsly

Package Managers are not Config management

This one was hot on the Chef list recently. A package manager is something we did before we had awesome ways to manage all our configuration state. They are also a way to deliver a binary with its configuration set up in a way that should mean the binary will run. Unfortunately they have their own opinions on this, and frequently those clash with the goals of the config management framework.

Packages are not CM

CM can be a packager tho.

The network is the computer

Sun was right. Look at where we are today with infra and platform services. I want my systems to be aware of one another in a deep way, not via some overarching orchestration framework. I don’t need a distro for my laptop; I need one for my datacenter that isn’t a pile of shit.

I want my distro to have only what it needs to do the thing it’s doing, and nothing more. When those needs change it should be able to get to the new state without issue. Possibly by talking to its neighbors.

Would be nice to see promise theory baked in

Torrent as a basic service

Dep Hell

Everyone who has used any packager or any programming language knows this. Config management can help in some ways by building abstractions over the dependencies and obfuscating the pain to a point. Even the management tools in this scenario fall victim to dep hell. A good example from Chef land: I need to pull in an apt cookbook to manage a yum based system. Now I understand why, and the root cause within Chef and the metadata that leads to this. I also understand that this is not really an issue when it comes to how the recipes are evaluated, but it still makes me upset/uneasy.

Source-based distros get around this in a heavy-handed way, but it is my experience that they tend to have fewer issues than binary distros. Tho this is anecdotal. As is this entire rant I am calling a blog post.

there has to be a better way

Too Much central Bullshit

Everyone likes to build central management of things. Central monitoring. Central aggregation of logs. This just doesn’t play well in the scale game. It leads to management and scaling pains.

there has to be a better way

Everything is shoe-string

Scripts on Scripts on Scripts all glued together with python/ruby does not a system make. :\

there has to be a better way

My Ideas

Aside from the ideas in the rant, I have these other rough ideas rolling around in my noggin, and I feel it’s good to get them out. Again, mostly ranting and notes to self.

Hackers are Smart

Maybe they can teach us stuff

When you look at many of the botnets out there, they have some really admirable qualities. I mean, here are complex networks of machines working together with pretty minimal C&C, across very hostile and turbulent environments. Why aren’t we using more of this shit in datacenters for good, not evil!
Look at these qualities:

One Config Framework to Rule Them All

Leverage the service supervisor’s notification system: s6 or systemd (both have a service-bus arch)

Notifications on state change

Registration of new services

Init -> Calls CM for service actions

Canonical file changes should notify the CM.

CM doesn’t even stat files unless FS event has been triggered

This makes our basic OS so much smarter than stupid init scripts

Merge the Packager and The CM ?

Everyone is CI pimpin’, so why can’t we just build the whole fucking OS (especially if it’s tiny), described in CM?

Chef already has the basic resources (ark) to be a packager and produce binaries that could be disted out. The whole thing could sit on GitHub.

In this scenario I imagine Chef (or something like it) describing exactly what a system should be, i.e. packages and package deps. The entire enchilada, but this enchilada is pretty small to be fair. The entire distro is then assembled from a stage2 bootstrap up to “base”. This process is built publicly on Travis or something akin to it. No releases. Just the CI distro.

Brew Rocks

Can’t we just do that for everything? Or kinda. I mean, just make it super simple and easy to write recipes. Something like brew + chef, or some wacky way to define a package’s build flags and deps, but integrated in the config management that sits at the center of this new un-distro.

Ok Ok binaries have a place.

First I want to give huge props to the guys at RiotGames. Berkshelf is shaping up to be an awesome tool in the chef toolbox.

Second I want to say that Tim Dysinger’s chef-box was what gave me the idea. This is largely borrowed from the concept he started there, but with Berkshelf, the Vagrant Chef provisioner, and cookbooks instead of custom scripts.

This is a tiny git repo that you can check out, run Vagrant in, and have a working chef-server and client up in no time. I am using this right now as a skeleton for testing and developing other things. It uses Berkshelf and Vagrant with minor special sauce for managing client/validation keys and knife configs.

Why?

Ease

I needed to get a quick Chef server + 1 client up for testing some cookbooks. I can’t do this in test-kitchen, and test-kitchen uses librarian (all my infra is Berkshelf).

Complex Test Scenario

In working to build datacenter infra over and over on a CI pipeline, I need to model complex multi-node interactions in my dev and testing.

Education

I also wanted to have an easy thing that I can use for teaching other people Chef without the hassle of getting a server installed and a knife client created.

Fun

I wanted to see how Berkshelf integrates with Vagrant, and this seemed like a simple project.

Getting Started

Before you move forward make sure the following prerequisites are satisfied.

Pull down the git source

Install the gems required

Run Vagrant

Usage

If you’re familiar with Vagrant, all the Vagrant stuff is the same. If you’re not familiar with Vagrant, go get familiar.

Basic Vagrant stuff

Note that vagrant destroy will remove the client from the chef-server as well as delete the node from the chef-server.

Login To Client

bundle exec vagrant ssh client1

Delete client

bundle exec vagrant destroy client1

Knife stuff

The chef/knife.rb file sets up a relative configuration that should work inside your server VM as well as in the base directory on your host. So you can run all your knife commands, and they will talk to your server VM.

The server install run_list runs a recipe that generates the knife credentials and stores them in /vagrant/chef, as well as copying the validation.pem there so that subsequent clients can register with the server. When you remove a client via vagrant destroy, the client and node will be removed from the server.

Spinning more clients

You can mod the Vagrantfile to spin up as many clients as you wish. I will probably add some stuff down the road to make this easier.

TODO

I may work out some simple externalization of run_lists and box type/url. This would provide a simple primitive for building up your own test framework around vagrant.

Knife-XAPI

Just finished making a knife plugin (my first) and gem (also my first) that enables knife XAPI support. Right now it only supports guest create, but I plan on adding more commands in the near future.

You can install it with gem:


gem install knife-xapi

Now you can spin up a guest on a Xen API host with knife. This is basic usage:

Here I am using the default XenServer 5 template with -T to install a CentOS 5 based VM, and setting the VM boot arguments with -B to tell Anaconda where to get the kickstart and how to set up the netinstall’s networking.
The -C switch gives the guest 4 CPUs, -M allocates 4 gigs of mem, and -D creates the root disk (xvda) at 5GB. This particular kickstart
is a minimal CentOS install. It kicks off my kickstart install, and in ~3 minutes I have a fresh machine installed via knife.
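Put together, the invocation looks something like this (a sketch from the description above; the guest name, template name, and kickstart URL are placeholders, not my real values, so double-check the plugin’s help output for exact option spellings):

```shell
# Illustrative knife-xapi guest create; names and URLs are made up.
knife xapi guest create "cent5-test" \
  -T "CentOS 5 example template" \
  -C 4 \
  -M 4g \
  -D 5g \
  -B "ks=http://example.com/kickstarts/cent5-minimal.ks ksdevice=eth0"
```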

TODO

This is the stuff I want to add to this plugin:

Fix SSL issues

Guest Destroy

Guest List

Network List

Network Create

Network Destroy

SR Management

I may or may not include this in the knife command:

VM Metrics

Host Metrics

VIF/VBD Metrics

Bugs

Right now the big one is that the XML::RPC client gem does not support a way for me to ignore the self-signed SSL certs that XenServers ship with. So https:// will only work on properly signed API endpoints.

Source

You can grab, poke, comment, and help me make this better!
Code’s up on GitHub

About 2 years ago I wrote a hacky Ohai plugin to push all sysctl values into node data, and then a crappy recipe that would run an exec if they didn’t match the values in node data. A few months back I re-wrote all this to just use an LWRP that sets those values, and a recipe that pulls them from attributes, so you can see which nodes have huge pages enabled or whatnot.

Today I took some time to clean up the readme and release that code for others to hopefully find useful.

Posterous just doesn’t support markdown very well, and every time I felt like writing a blog entry I would get frustrated. So after a couple of people expressed their pleasure with tools like Jekyll on GitHub Pages, I decided to give it a go.

I love this setup. I tried jumping in with Octopress without the RTFM. Having never set up GitHub Pages before, I mucked up by creating a project site vs a user site, but once I fixed that (by creating a spheromak.github.com repo) everything clicked into place.

Pulling old Posterous posts down into Jekyll was simple, and now I’m off and running with a real markdown blog. Backed by git, and I am happier for it.

Why doesn’t XenServer enable shadow passwords or authconfig? What year is this, 1996? My guess is that it’s so the hashed root password can be read without privs, for things like XenCenter to edit the passwd via XAPI. Tho I don’t see any password-manipulation API hooks. I am really curious why this isn’t set.

I have been manually (well, Chef does it) converting the dom0 passwords to shadow by detecting if it’s a XenServer and running pwconv. There haven’t been any repercussions in the last 3 years.

Until now.

Now, moving up to Xen 6 in my test pool, I’m running into all sortsa auth issues requiring me to single-user my upgraded hosts, pwconv, and passwd them. Blargh.
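For reference, what my recipe effectively does boils down to something like this (the inventory-file check and awk test are my own sketch here, not the actual recipe code):

```shell
#!/usr/bin/env bash
# If this looks like a XenServer dom0 and root's hash is still sitting in
# the passwd file (field 2 is a real hash rather than 'x'), shadow it.
# Takes an optional passwd path so the check itself is testable.
passwd_file="${1:-/etc/passwd}"

if [ -f /etc/xensource-inventory ] && \
   awk -F: '$1 == "root" && $2 != "x" && $2 != "*" { found = 1 }
            END { exit !found }' "$passwd_file"
then
  pwconv    # move the hashes out of passwd into /etc/shadow
fi
```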

If you don’t already use Vagrant, you should. If you are doing any dev you should use it! If you’re an ops person then you should use it, and tell your developers about it. If you are a nerd you should use Vagrant. Nuf said.

Arch is also pretty sweet. Arch Linux has a system approach that rings totally right with me (I feel another post about Arch in here someplace). Mainly, Arch lets you do what you want how you want, which is awesome!

Anyhow, I have a little pet project that I was originally gonna do an LFS build up on, but realized Arch already had a lot of what I needed: systemd support, and a not-ancient/screwed-up Ruby install.

I was using lazyweb to solve this problem. Google’s results had solutions where people were calling all sorts of commands and parsing different files to detect system types. I didn’t want to do a syscall. I wanted this solution to be as platform independent as possible. I only needed to know if it was linux/solaris/osx/bsd etc., not the version or something special. Simple enough. 1 minute of RTFM’ing the bash manual turned up $OSTYPE and $HOSTTYPE.
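The whole “detection” then collapses into one bash case statement on $OSTYPE (the bucket names are just my own labels):

```shell
#!/usr/bin/env bash
# $OSTYPE is set by bash itself (e.g. "linux-gnu", "darwin12",
# "solaris2.11", "freebsd9.0"), so no commands get forked and no
# files get parsed.
case "$OSTYPE" in
  linux*)   os="linux"   ;;
  darwin*)  os="osx"     ;;
  solaris*) os="solaris" ;;
  *bsd*)    os="bsd"     ;;
  *)        os="unknown" ;;
esac
echo "$os"
```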

Man I love simple solutions!

P.S. First post!

P.P.S. Posterous’s syntax support sucks. It took longer to figure out than this post did to write.