I woke up this morning and the site was down, dud dum de dum

This time I’ll dump here what I’ve done to use Ceph-backed storage volumes with DC/OS and REX-Ray.

I considered using the rexray/rbd plugin, but I find it more flexible to talk to Ceph via the S3 interface. If you would like to go the RBD way, consider this blog post instead. If you don’t have Ceph, give Minio a go; it’s easy to set up Minio in DC/OS.

I wanted to use the rexray/s3fs docker managed module / plugin, the same way I did for EFS, but at the moment it doesn’t support setting a custom endpoint (it only allows AWS S3, not Minio for example). So I am using the rexray binary / service instead.
I have followed this gist and tuned the setup to match my needs.

Notes / gotchas:
* the S3 endpoint needs to be provided both in s3fs.endpoint and in s3fs.options.url
* setting libstorage.integration.volume.operations.mount.rootPath to “/”, because the default “/data” doesn’t exist in a freshly created volume and fails to be created (at least for me; perhaps solvable in a different way) – may be related to this issue in rexray
* setting libstorage.integration.volume.operations.remove.force = true, because of this issue in rexray
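Putting the notes above together, the REX-Ray config might look roughly like this (a sketch only – the endpoint URL and credentials are placeholders, and key names follow the s3fs driver docs, so double-check against your REX-Ray version):

```yaml
libstorage:
  service: s3fs
  integration:
    volume:
      operations:
        mount:
          # default "/data" fails on freshly created volumes, see note above
          rootPath: /
        remove:
          force: true
s3fs:
  accessKey: AKIAXXXXXXXX        # placeholder
  secretKey: XXXXXXXXXXXXXXXX    # placeholder
  endpoint: https://ceph-rgw.example.com
  options:
  - url=https://ceph-rgw.example.com   # endpoint repeated here, see note above
  - use_path_request_style
```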

Note: Marathon doesn’t allow mounting the same volume across different applications, and using the rexray service instead of the docker plugin also restricts the mount to a single instance. See the ticket here.

The goal is to enable data sharing between containers with persistent volumes. The volumes will be created by specifying them in the DC/OS (Marathon) app definition. For that to be possible, we have to prepare the DC/OS agents by installing the docker REX-Ray plugin on them [1].

We need to allow REX-Ray to manage volumes in AWS. To do that, we will set up an IAM policy and a user with that policy attached. Of course, do it the way that fits your setup best, using roles, groups, etc.; I’ll leave that up to you. We could also attach a role to the EC2 instance and skip using credentials in the REX-Ray plugin configuration. I’ll skip the AWS IAM setup details for the sake of brevity.

The important part is the permissions to include in the policy. Here is the policy definition I used. Note that it also covers EBS permissions, in case I’d like to use EBS:
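The original policy embed isn’t reproduced here; as an illustration, a policy along these lines should cover both EFS and EBS use (the exact action list is an assumption – trim it to what your setup actually needs, and scope the resources tighter than “*” if you can):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["elasticfilesystem:*"],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:CreateVolume",
        "ec2:AttachVolume",
        "ec2:DetachVolume",
        "ec2:DeleteVolume",
        "ec2:CreateSnapshot",
        "ec2:DeleteSnapshot",
        "ec2:CreateTags",
        "ec2:CreateNetworkInterface",
        "ec2:DescribeVolumes",
        "ec2:DescribeSnapshots",
        "ec2:DescribeInstances",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeSubnets",
        "ec2:DescribeNetworkInterfaces"
      ],
      "Resource": "*"
    }
  ]
}
```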

Step 2. Install docker plugin

The following is how you install the plugin manually, passing configuration in the form of environment variables. Read on to see how I automated that with Chef.

The variables are:
EFS_ACCESSKEY and EFS_SECRETKEY – credentials of the user you created in step 1
EFS_SECURITYGROUPS – a space-delimited list of security groups that you use to allow traffic to/from your networks. If you are creating a new, dedicated security group for use with EFS, allowing traffic on port 2049 is enough.
EFS_TAG – a custom string
NOTE: I’m using --alias rexrayefs because DC/OS allows only alphanumeric characters in the driver name and will refuse a name with a slash in it.
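A manual install along these lines (a sketch – credentials, security group and tag values are placeholders):

```shell
docker plugin install rexray/efs \
  --alias rexrayefs \
  --grant-all-permissions \
  EFS_ACCESSKEY=AKIAXXXXXXXX \
  EFS_SECRETKEY=XXXXXXXXXXXXXXXX \
  EFS_SECURITYGROUPS="sg-0123abcd" \
  EFS_TAG=rexray
```

You can verify the plugin is enabled afterwards with `docker plugin ls`.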

Here’s a Chef recipe and an example attributes hash I use to deploy the plugin.

Here’s an example DC/OS (marathon) JSON app definition that mounts a volume, and echoes timestamps to a file on it.
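The original JSON embed isn’t shown here; a minimal app definition along these lines should work (a sketch – the app id, volume name and mount path are placeholders, and the driver alias matches the one used above):

```json
{
  "id": "/volume-test",
  "instances": 1,
  "cpus": 0.1,
  "mem": 64,
  "cmd": "while true; do date >> /mnt/test/dates.log; sleep 5; done",
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "alpine",
      "parameters": [
        { "key": "volume-driver", "value": "rexrayefs" },
        { "key": "volume", "value": "testvolume:/mnt/test" }
      ]
    }
  }
}
```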

Start multiple containers to see how they share the volume.

Note: In DC/OS UI, when I look under “volumes” for my app, it reads “unavailable” for some reason while it works just fine.

Step 5. Unused volume prune

AFAIK, DC/OS will not clean up after apps that no longer use a volume. Get yourself familiar with the docker volume prune command. I’m planning to put a cron job in place to run it.
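Such a cron job could be as simple as (a sketch – schedule and log path are arbitrary; note this removes all unused local volumes on the agent, so make sure that’s what you want):

```shell
# /etc/cron.d/docker-volume-prune
30 3 * * * root docker volume prune --force >> /var/log/docker-volume-prune.log 2>&1
```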

[1] Notes:
– I’m on DC/OS 1.9.0. There’s a REX-Ray service (dcos-rexray) delivered with it, and the rexray binary shipped with DC/OS is 0.3.3 (old). I’m not sure what this is useful for, as we’re not going to use this service or binary; we will use the REX-Ray docker plugin. The DC/OS manual says that a REX-Ray volume driver is provided with DC/OS; in my installation it wasn’t.
– the docker version needs to be > 1.13

For those looking to move from Cloudflare to AWS services: I did that migration a few months ago. I don’t have it fresh in my mind, so this is just a brain dump; I hope it’s helpful anyway, at least as a source of keywords for further searching.

Cloudflare provides a DNS service together with a CDN and additional DoS protection. Their services are integrated and you buy the whole suite; there’s no way to use the CDN service alone and keep DNS at another provider – you will have to take DNS with you as well.

So we wanted to move all the things to AWS (not my decision, and mainly driven by cost; if you want my opinion, consider Fastly for CDN and Dyn for DNS).

Breaking it down into components, we have three migrations to do: CDN, DoS protection and DNS, in the following order:

CDN: set up AWS CloudFront distribution(s)

Cloudflare’s CDN is pretty dumb, and so is CloudFront, so setting up the CloudFront distributions should be rather straightforward. I’m not going to go through this part in detail. In most cases you would need two of them: one for static and one for dynamic objects.

DoS protection: set up AWS WAF

Cloudflare’s DoS protection is just a req/s rate-based IP address blacklist. We can do the same or better using WAF, with an AWS Lambda function processing access logs from CloudFront (I followed the tutorial from AWS).

To start with, I manually copied the recently blacklisted IPs to WAF.

DNS: migrate records to Route53

We had tens of domains with tens of records each, so I wanted to automate the process. Please forgive my coding skills; the following is the quick and dirty script I used to migrate the DNS records.
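The original script embed isn’t reproduced here; a rough sketch of the same idea – export each zone from the Cloudflare API in BIND format and import it into Route53 (here with the cli53 tool, which is my substitution; credentials and the domain list are placeholders):

```shell
#!/bin/bash
# Export zones from Cloudflare and import them into Route53 via cli53.
# CF_EMAIL / CF_API_KEY and the domain list are placeholders.
set -e
for domain in example.com example.org; do
  # Look up the Cloudflare zone id for this domain
  zone_id=$(curl -s "https://api.cloudflare.com/client/v4/zones?name=${domain}" \
      -H "X-Auth-Email: ${CF_EMAIL}" -H "X-Auth-Key: ${CF_API_KEY}" \
      | jq -r '.result[0].id')

  # Export the zone in BIND format
  curl -s "https://api.cloudflare.com/client/v4/zones/${zone_id}/dns_records/export" \
      -H "X-Auth-Email: ${CF_EMAIL}" -H "X-Auth-Key: ${CF_API_KEY}" \
      > "${domain}.zone"

  # Create the Route53 hosted zone (if missing) and import the records
  cli53 create "${domain}" || true
  cli53 import --file "${domain}.zone" "${domain}"
done
```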

One manual task remains after running the script: reconfigure the zones, manually changing A records to “alias” type records pointing to the respective CloudFront resources. I tried doing that with the script, but for some reason it didn’t work.

Recently, I have been given an opportunity to share some of my use cases for varnish logs at Dublin DevOps Meetup. Below, you’ll find the slides.

In short: what is missing from NewRelic RUM regarding website performance data, and what you can get from your ELK stack, is enough granularity and flexibility for thorough debugging of edge cases, such as an extremely slow client (or bot, or DoS, etc.) skewing averages, a large image slowing down page loads, a badly cached object, and more.

Another contributor to page slowness is 3rd-party assets. Here, as well, RUM is not enough to find the cause. You can manually track it down using developer tools or webpagetest.org, but of course every Ops engineer wants a graph or two! If you don’t know sitespeed.io yet, get yourself familiar with this excellent tool. It’s super easy to get it up and running using docker (I’m planning to write a how-to post), but if you don’t have time, you can pay speedcurve.com to run sitespeed.io for you.

In modern web stacks, your website sits behind some front end cache and/or a CDN. You collect logs from your backend servers (apache, nginx, etc) and track visits to the page with Google Analytics or similar tools, but you might be missing visibility from your cache or you may not yet be shipping cache logs to your ELK stack. If either is the case, read on.

There are not many CDNs that let you collect logs directly from the edge cache (Fastly, for example, does). Check with your CDN provider how to get the access logs shipped to your ELK; if your provider can’t do that, maybe consider choosing a better CDN. In a typical web stack, your border-cache Varnish servers are a good place for access-log collection: every request (or every cache miss on the CDN) goes through them, so visibility is better than on the backend servers, where only requests that were cache misses on both the CDN and the border Varnish get recorded. So how do we do it?

How to ship logs from Varnish to Logstash

I want logs structured in JSON format so that Logstash can digest them easily – no grok needed to parse classic NCSA-formatted log lines. I couldn’t find a way to achieve that using the std.syslog VMOD, but I found it quite easily doable with varnishncsa. Varnishncsa does not directly support writing logs in JSON, but it’s easy to customize the format. Here’s an example init config that will make varnishncsa talk JSON to Logstash.

Varnishncsa will only write to a file or to stdout, so we have to pipe the logs from stdout to logger, which delivers them to the local syslog daemon (rsyslogd, syslog-ng, etc.), from where they are shipped to the central log server. Note the sed search-and-replace trick: it prevents messing up the field mapping in Elasticsearch by sending strings (which happens when a “-” is printed for a missing value) to fields that are meant to be numerical.
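The pipeline described above might look roughly like this (a sketch – field names are illustrative, the sed expression is a crude stand-in for the “-” to 0 replacement, and format specifiers vary between Varnish versions, so check the varnishncsa man page for yours):

```shell
varnishncsa -F '{ "@timestamp": "%{%Y-%m-%dT%H:%M:%S%z}t", "client_ip": "%h", "request": "%r", "status": %s, "bytes": %b, "duration_usec": %D, "berespms": %{Varnish:time_firstbyte}x }' \
  | sed 's/": -/": 0/g' \
  | logger -t varnish -p local0.info
```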

Note that you can add any header to the log, including headers you add in Varnish VCL, so for example you can attach a specific cookie value (session cookie, A/B-testing cookie, etc.) to log entries.

In the example above, I also log the TTLB (time to last byte) as duration_usec, and the backend response time as berespms. Having such performance stats collected from the border cache helps identify web performance bottlenecks better.

Coming into the office this morning, I was struck by the sight of one of my colleagues with a couple of managers surrounding him. Looking at their faces, it was clear to me that they had stayed up all night in the office. They were not fighting any fire, just finishing a release (sadly, we do releases at night at my current job). It was the release we do yearly, the one that took the whole night last year as well, and the year before, and…
Last year I was driving that release, and together with the other guys we came up with ideas about things that could have been automated or prepared earlier, to save us time, let us all get some sleep, avoid morning debugging (which is no better than debugging while drunk) and, most importantly, shorten the time the site was under maintenance.
But we didn’t get support from the managers to spend a day or two automating those little things. Let’s say I understand the managers’ point of view: they are chasing due dates while working with limited resources.

But why don’t they want to go to sleep and leave the job to engineers they trust?

Another day, while I was heading home, on my way between my desk and the office exit, passing near the QA desks, I got caught up in discussing some issue. I cleared with them that the issue wasn’t urgent; it had actually been there for some time already and could definitely wait until the next day.
A minute later, as I was about to leave, my manager stopped while passing by, as he noticed the crowd.
He stopped even though he wanted to catch his train. He stopped not knowing what we were discussing, just out of a generic “what’s up” need to know. As we told him the story, I watched his face turn from tired to energized, excited. He would have taken off his coat and happily assisted the investigation and/or firefighting.

But why didn’t he prefer to leave that with his trusted engineers and go home?

I have actually seen that happen many times. Here’s a meme from the devopsreactions tumblr that reminds me of those moments when I’m firefighting some issue while my manager is assisting me.

What makes some engineers energized when things are falling apart, more excited by firefighting than by building resilient, self-healing, automated systems? And why is it the opposite for me and many other engineers?

At my previous job, we used to joke that we don’t sleep because we’re holding the dresser to keep it from falling (context: there was a ridiculous article, which went viral via one Polish tabloid, about a guy who claimed he was actually holding his dresser to keep it from falling whenever trucks passing nearby shook his house).
Our dresser was, of course, the website. And it was funny because it really wasn’t. We hated those moments when we didn’t have enough time to focus on improving reliability because we were busy firefighting emergencies.

What makes some engineers energized when busy holding the dresser from falling, while others get their energy from productive work on stabilizing the stack, so they can leave the office early and get a good night’s sleep?
And why do some managers trust their engineers and leave releasing code to them, while others attend every release and hover over every move?

My checklist for when a new service is ready for production.
I know, nothing new, no discovery here. But it happens that even within one team, different engineers will put different weight on different items, or have different lists altogether. So here’s mine.

I won’t mention testing here; it belongs more to the development phase.

If you run a blog or another web service on a single (virtual) machine, hosted somewhere or maybe on a Raspberry Pi at home, and you want to set up some monitoring for it, read on for what I think should work for you. I assume you don’t want to spend too much money, and that your webserver is not your playground for experimenting with new technologies or setups. You just want the blog up and running; for playing with Riemann and Sensu and whatnot, you have your lab at work, right?

1. APM

Let’s start with monitoring application performance. There are a number of players on the market, NewRelic and AppDynamics being very well known. Both offer free plans with limited data retention and limited functionality (e.g. RUM is not available in the free plans).

2. Uptime monitoring

Of course you want to know when your service goes down. Again, there are a number of services you can choose from. One well-known and cool option is Pingdom, whose free plan allows you to set up only one check, but the check can be of the transaction type, with which you can easily create a scenario-based test. Another service I like very much is UptimeRobot, which allows you to create 50 checks in its free plan.

3. System metrics

Setting up anything like even the quite lightweight Ganglia for a single VM seems like overkill to me. So, since we already have, say, NewRelic doing APM on the server, why not use it to collect CPU, RAM and I/O metrics as well?

It gives you some basic overview graphs, as well as nice visualizations of per-process memory and CPU consumption, top-like rankings and graphs.

NewRelic top5 processes by memory consumption

4. Log collection and analysis

You want to make sense of your logs, but you don’t have enough resources to set up an ELK stack. Logentries and Loggly (and probably other players too) have free plans with data retention and volume limitations.

That’s it! Setting all that up shouldn’t take much time, it’s all free, and it doesn’t eat too many resources on your server, so WordPress will not starve 😉

While for most daily operations – searching for a subset of nodes with a certain run_list element, environment or attribute in general – knife node search is just enough, it’s not sufficient when it comes to making modifications. That’s where knife exec comes in: it lets you execute some Ruby against Chef in a knife one-liner.
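For example, a one-liner along these lines finds matching nodes and modifies an attribute on each of them (a sketch – the search query and the nginx attribute are made up for illustration):

```shell
# Set a normal attribute on every web node in the production environment
knife exec -E 'nodes.find("chef_environment:production AND role:web") do |n|
  n.normal["nginx"]["worker_processes"] = 4   # illustrative attribute
  n.save                                      # persist the change to the server
end'
```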

In an automated infrastructure, for me, installing from source is not an option. So I build deb packages and tell Chef to install those. To avoid external dependencies, packages are installed from a local repo.

One day I will learn proper Debian packaging… but until that happens I’ll keep enjoying building packages using FPM.

This post is a quick cheat sheet for using FPM to build .deb packages. First, we need to install FPM.
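FPM is a Ruby gem, so installing it and building a first package can look like this (a sketch – the package name, version and file layout under ./build are placeholders):

```shell
# Install FPM
gem install fpm

# Package everything under ./build (laid out as it should land on /)
# into mytool_1.0.0_amd64.deb
fpm -s dir -t deb \
    -n mytool -v 1.0.0 \
    --description "mytool packaged with FPM" \
    -C ./build \
    usr/local/bin/mytool
```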