Present Perfect

2016-10-9 9:31 pm

Over the past year I’ve chipped away at setting up new servers for apestaart and managing the deployment in puppet, as opposed to the by now years-old manual single-server configuration that would be hard to replicate if the drives fail (one of which did recently, making this more urgent).

It’s been a while since I was good enough at puppet to love and hate it in equal parts, but mostly manage to keep a deployment of around ten servers under control at a previous job.

Things progressed an hour or two at a time, here and there, and accelerated when a friend in our collective launched a new business for which I wanted to make sure he had a decent redundancy setup.

I was saving the hardest part for last – setting up Nagios monitoring with Matthias Saou’s puppet-nagios module, which needs exported resources and storeconfigs working.
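For the uninitiated, exported resources are puppet’s “double at” pattern: every node exports a description of itself into the central store, and the Nagios host collects them all. A minimal sketch using the stock nagios_host type (puppet-nagios wraps considerably more than this):

```puppet
# On every monitored node: export a host definition into storeconfigs.
@@nagios_host { $::fqdn:
  ensure  => present,
  address => $::ipaddress,
  use     => 'generic-host',
}

# On the Nagios server only: collect every exported nagios_host.
Nagios_host <<| |>>
```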

Even on the previous server setup based on CentOS 6, that was a pain to set up – needing MySQL and ruby’s ActiveRecord. But it sorta worked.

It seems that for newer puppet setups, you’re now supposed to use something called PuppetDB, which is not in fact a database on its own as the name suggests, but requires another database. Of course, it chose to need a different one – Postgres. Oh, and PuppetDB itself is in Java – now you get the cost of two runtimes when you use puppet!

So, to add useful Nagios monitoring to my puppet deploys, which without it are quite happy to be simple puppet apply runs from a local git checkout on each server, I now need storeconfigs, which needs puppetdb, which pulls in Java and Postgres. And that’s just so a system that handles distributed configuration can actually be told about the results of that distributed configuration and create a useful feedback cycle allowing it to do useful things with the observed result.
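For reference, the master-side switch that drags in that whole chain is just two settings (option names from the Puppet 3/4 era this setup targets):

```ini
# puppet.conf
[master]
storeconfigs         = true
storeconfigs_backend = puppetdb
```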

Since I test these deployments on local vagrant/VirtualBox machines, I had to double their RAM because of this – the puppetdb java server alone reserves 192MB out of the box.
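That default lives in the service’s environment file; on EL-family systems, trimming it for a test VM is a one-line change (path and value as shipped by the puppetdb package, to the best of my recollection):

```shell
# /etc/sysconfig/puppetdb – the JVM heap reservation puppetdb starts with
JAVA_ARGS="-Xmx192m"
```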

But enough complaining about these expensive changes – at least there was a working puppetdb module that managed to set things up well enough.

It was easy enough to get the first host monitored, and apart from some minor changes (like updating the default Nagios config template from 3.x to 4.x), I had a familiar Nagios view working showing results from the server running Nagios itself. Success!

But runs from the other VMs did not add any exported resources, and I couldn’t find anything wrong in the logs. In fact, I could not find /var/log/puppetdb/puppetdb.log at all…

fun with utf-8

After a long night of experimenting and head scratching, I chased down a first clue in /var/log/messages saying puppet-master[17702]: Ignoring invalid UTF-8 byte sequences in data to be sent to PuppetDB

I traced that down to puppetdb/char_encoding.rb, and with my limited ruby skills, I got a dump of the offending byte sequence by adding this code:
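(The patch itself didn’t survive the night; this is a reconstruction, with names invented, of the kind of dump I added to char_encoding.rb:)

```ruby
# Stand-in for the data the scrubber was about to mangle – a string with a
# UTF-8 non-breaking space in it, treated as raw bytes:
payload = "Webalizer\xC2\xA0config".b

# Dump the raw bytes to a path with my name in it, so it's easy to grep for:
File.open('/tmp/thomas-payload.bin', 'wb') { |f| f.write(payload) }
```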

(I tend to use my name in debugging to have something easy to grep for, and I wanted some verification that the File dump wasn’t triggering any errors.)

It took a little time at 3AM to remember where these /tmp files end up thanks to systemd, but once found, I saw it was a json blob with a command to “replace catalog”. That could explain why my puppetdb didn’t have any catalogs for other hosts. But file told me this was a plain ASCII file, so that didn’t help me narrow it down.

Searching the manifests for non-ASCII bytes turned up a few UTF-8 candidates. Googling around, I was reminded of how terrible UTF-8 handling was in ruby 1.8, and saw that puppet recommended using ASCII only in most manifests and files to avoid issues.
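A hunt like that can be a single GNU grep invocation; here’s a sketch against a throwaway tree mimicking the situation:

```shell
# Recreating the hunt: one ASCII file, one file with a UTF-8 non-breaking
# space hiding in a comment.
mkdir -p modules/webalizer/templates
printf 'plain ascii manifest\n' > modules/webalizer/init.pp
printf '# comment with a sneaky\xc2\xa0space\n' > modules/webalizer/templates/webalizer.conf.erb

# List files containing any byte outside the ASCII range (GNU grep;
# -P for perl-style regexes, -l to print just the filenames):
grep -rlP '[^\x00-\x7F]' modules/
# → modules/webalizer/templates/webalizer.conf.erb
```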

It turned out to be a config from a webalizer module:

webalizer/templates/webalizer.conf.erb: UTF-8 Unicode text

While it was written by a Jesús with a unicode name, the file itself didn’t have his name in it, and I couldn’t obviously find where the UTF-8 chars were hiding. One StackOverflow post later, I had nailed it down – UTF-8 spaces!

I have no idea how that slipped into a comment in a config file, but I changed the spaces and got rid of the error.
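For the record, the fix boils down to a single sed run (assuming the offending characters were non-breaking spaces, U+00A0 – the usual suspect, which is 0xC2 0xA0 in UTF-8; GNU sed understands \xNN escapes):

```shell
# Demo file with one UTF-8 non-breaking space between the words:
printf 'PageType\xc2\xa0htm*\n' > webalizer.conf.erb

# Replace every non-breaking space with a plain ASCII space, in place:
sed -i 's/\xc2\xa0/ /g' webalizer.conf.erb
cat webalizer.conf.erb
# → PageType htm*
```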

Puppet’s error was vague, did not provide any context whatsoever (Where do the bytes come from? Dump the part that is parseable? Dump the hex representation? Tell me the position in it where the problem is?), did not give any indication of the potential impact, and in a sea of spurious puppet warnings that you simply have to live with, is easy to miss. One down.

However, still no catalogs on the server, so still only one host being monitored. What next?

users, groups, and permissions

Chasing my next lead turned out to be my own fault. After turning off SELinux temporarily, checking all permissions on all puppetdb files to make sure that they were group-owned by puppetdb and writable for puppet, I took the last step of switching to that user role and trying to write the log file myself. And it failed. Huh? And then id told me why – while /var/log/puppetdb/ was group-writeable and owned by puppetdb group, my puppetdb user was actually in the www-data group.

It turns out that I had tried to move some uids and gids around after the automatic assignment puppet does gave different results on two hosts (a problem I still don’t have a satisfying answer for, as I don’t want to hard-code uids/gids for system accounts in other people’s modules), and clearly I did one of them wrong.
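For what it’s worth, the hard-coding I’d rather avoid looks like this (a sketch; the uid is invented):

```puppet
# Pin the account's ids so they match on every host – works, but means
# carrying overrides for accounts that other people's modules manage.
group { 'puppetdb':
  ensure => present,
  gid    => 498,
}
user { 'puppetdb':
  ensure => present,
  uid    => 498,
  gid    => 'puppetdb',
}
```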

I think a server that for whatever reason cannot log should simply not start, as this is a critical error if you want a defensive system.

Given the name of the command (replace catalog), I felt certain this was going to be the problem standing between me and multiple hosts being monitored.

The problem was a few levels deep, but essentially I had code creating fragments of vimrc files using the concat module, and was naming the resources with file content as part of the title. That’s not a great idea, admittedly, but no other part of puppet had ever complained about it before. Even the files on my file system that store the fragments, which get their filenames from these titles, were happily stored with a double quote in their names.
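The offending pattern, reconstructed (names invented):

```puppet
# A fragment whose title – and therefore on-disk filename – embeds the
# content itself; fine until the content contains a double quote.
$line = 'map Q :qa<CR>'
concat::fragment { "vimrc-${line}":
  target  => '/root/.vimrc',
  content => "${line}\n",
}
```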

So yet again, puppet’s lax approach to specifying the types of variables at any of its layers (hiera, puppet code, ruby code, ruby templates, puppetdb) in any of its data formats (yaml, json, bytes for strings without encoding information) triggers errors somewhere in the stack without informing whatever triggered that error (i.e., the agent run on the client didn’t complain or fail).

Once again, puppet has given me plenty of reasons to hate it with a passion, tipping the balance.

I couldn’t imagine doing server management without a tool like puppet. But you love it when you don’t have to tweak it much, and you hate it when you’re actually making extensive changes. Hopefully after today I can get back to the loving it part.

2014-7-16 5:01 am

It’s two weeks shy of a year since the last morituri release. It’s been a pretty crazy year for me, getting married and moving to New York, and I haven’t had much time throughout the year to do any morituri hacking at all. I miss it, and it was time to do something about it, especially since there’s been quite a bit of activity on github since I migrated the repository to it.

I wanted to get this release out, combining all of the bug fixes since the last release, before I tackle one of the most asked-for issues – not ripping the hidden track one audio if it’s digital silence. There are patches floating around that hopefully will be good enough so I can quickly do another release with that feature, and there are a lot of minor issues still floating around that should be easy to fix.

But the best way to get back into the spirit of hacking and to remove that feeling of it’s-been-so-long-since-a-release-so-now-it’s-even-harder-to-do-one is to just Get It Done.

2014-6-29 10:09 pm

It’s been very long since I last posted something. Getting married, moving across the Atlantic, enjoying the city, it’s all taken its time. And the longer you don’t do something, the harder it is to get back into.

So I thought I’d start simple – I updated mach to support Fedora 19 and 20, and started rebuilding some packages.

Get the source, update from my repository, or wait until updates hit the Fedora repository.

2013-7-30 10:23 pm

The 0.2.1 release contained a bug causing “rip offset find” to fail. That’s annoying for new users, so I spent some time repenting in brown paper bag hell, and fixed a few other bugs besides. Hence, my bad.

I can understand that you didn’t all mass-flattr the 0.2.2 release – you tried it and you saw the bug! Shame on me.

Well, it’s fixed now, so feel free to pour in your flattr love if you use morituri! Just follow this post to my blog and hit the button.

2013-7-15 9:02 am

I finally managed to set aside a few hours this weekend to fix some smaller issues in morituri and put out a new release. (For those who don’t know, morituri is an accurate CD ripper for Linux)

Life’s been a little busy lately and my spare time hacking has been suffering. But I’m happy I got a nice stretch of hacking hours in on morituri, and hope to repeat it in the next few weeks to knock out some more complicated issues, like tackling the reports of problems with latest pycdio releases.

The most important change is probably the filtering of non-FAT-safe and other special characters, which I ended up doing a lot like sound-juicer does, because I trust Ross to have looked at this in detail.

In addition, after curiously reading Lionel Dricot’s posts about Flattr, I decided to get a little more serious about trying Flattr again (I had only flattr’d about 4 things so far due to lack of content). I integrated Flattr in my wordpress install, upgrading it in the process, and installed the chrome extension which should give me many more options to flattr other people’s content – for example, github repos.

So if you like morituri, go to this post on my website and click the Flattr button you see at the bottom of this post or on the morituri homepage!

I don’t expect to get rich off it, but I think it’s a nice way of showing you appreciate someone’s work.