Yay metrics! I have sort of a love-hate relationship with metrics. That is, I hate them, but they like to come pester me. That said, having metrics is a useful way of knowing what is going on in your various systems and if the services you are responsible for are actually doing things they are supposed to be doing.

Generically, metrics collection breaks down into three smaller categories:

System stats collection and storage

Log collection and storage

Using data to answer questions

In order to keep things manageable, however, this post will cover how to get metrics data into one place. In later posts, we will handle storing metrics data in the TICK stack, log collection with the ELK stack, and using the collected data to answer some questions.

Today’s metrics collection

When I was getting my footing in IT, the state of the art was syslog-ng, with logwatch & grep, or if you could keep your logs under 500MB/day, Splunk. Metrics collection was done with Cacti, and service status was watched by Nagios. All of these tools still exist, but in the world of 2004, they were… well, they had not evolved yet.

Today, we have new tools, techniques, and methods of handling data that can be more effective. The ELK Stack (Elasticsearch, Logstash, Kibana), along with rsyslog for log shipping, provides centralized log collection and storage, along with an interface that makes things easy to query.

TICK Stack (Telegraf, InfluxDB, Chronograf, Kapacitor), like ELK, provides a set of tools to collect time-series data and do interesting things with it: alerting, anomaly detection, dashboards, and so on. To get data into the TICK stack, there are a number of tools that collect system-level statistics. My tool of choice is Netdata. It allows for creative architectures, store-and-forward logging, and integration with a large number of backends. The default dashboard is also pretty slick.
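As a sketch of that backend integration (the hostname and port here are assumptions, and InfluxDB's OpenTSDB-compatible listener must be enabled on its side):

# Ship netdata metrics to InfluxDB via its OpenTSDB-compatible listener
sudo tee -a /etc/netdata/netdata.conf <<'EOF'
[backend]
    enabled = yes
    type = opentsdb
    destination = influxdb.lab:4242
    update every = 10
EOF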

If you have Vagrant with VirtualBox or libvirt and would like to play along, the lab that accompanies this post can be found here. To start the lab, run vagrant up and go fetch a coffee.

Netdata - System stats collection

Netdata is a sort of all-in-one system stats package. It can pull metrics from just about everything, and its architecture lets you mix and match components as needed: collecting data via its built-in statsd server, proxying data between instances, keeping a local cache, and so forth.

For this exercise, we will be configuring Netdata to operate in Proxy mode. That is, each netdata agent will collect system metrics and ship them upstream to the proxy node.
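A minimal sketch of that setup, using a placeholder proxy hostname and API key (neither is from the lab; generate your own key with uuidgen):

# On each collecting node: stream metrics upstream to the proxy
sudo tee -a /etc/netdata/stream.conf <<'EOF'
[stream]
    enabled = yes
    destination = proxy.lab:19999
    api key = 00000000-0000-0000-0000-000000000000
EOF

# On the proxy node: accept streams for that API key
sudo tee -a /etc/netdata/stream.conf <<'EOF'
[00000000-0000-0000-0000-000000000000]
    enabled = yes
EOF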

As you read, keep in mind that the goal here, as in the aforementioned posts, is not to be practical. Rather, this is a “because I can” project, the conclusion of which will be to run the miners inside containers backed by Kubernetes. But that is for another time.

The Gear

For this project, I reused my Kubernetes / OpenFaaS cluster, and added some ASICs. Here’s a reminder of the parts:

Summary

This was a fun one. To make this a bit more stable, I likely need to relocate the “cluster” to my server cabinet for better cooling; the little USB keys get painfully hot. Another item on the to-do list is to have cgminer run inside a container, and then on K8s.

As I spend quite a bit of time (an understatement, I assure you) standing up and tearing down different virtualized lab environments, I wanted to spend a little less time on it overall. Thus, in addition to tuning some parameters at runtime, I spent some time benchmarking the differences between virtualization engines, filesystems, and I/O schedulers.

Before we begin, let’s get a few things out of the way:

TL;DR - The winner was libvirt/KVM with ext4 in the guest, XFS on the host, and the noop I/O scheduler.
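If you want to replicate the scheduler piece at runtime, it's a one-liner per device (vda is an assumption; check /sys/block for your disks):

# Show available schedulers, then switch the device to noop
cat /sys/block/vda/queue/scheduler
echo noop | sudo tee /sys/block/vda/queue/scheduler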

Test Workload

Rationale:
To be honest, this is the workload I spend the most time with, be it standing one up to replicate a customer issue, testing an integration, building a solution, and so on. Any time I save provisioning is time I can spend doing work.

Further, the build process for an all-in-one is quite extensive and encompasses a wide variety of sub-workloads: haproxy, rabbitmq, galera, lxc containers, and so on.

In this post, we’re going to set up a cluster of Raspberry Pi 2 Model Bs with Kubernetes. We are then going to install OpenFaaS on top of it so we can build serverless apps. (Not that there aren’t servers, but you’re not supposed to care, or some such.)

Update!

The prior release of this post ended up duplicating a lot of the work that Alex Ellis published here.

In fact, once I’d hit publish, my post was already out of date. So, to get Kubernetes and OpenFaaS going on the Pi, start there. What follows here, then, are the changes I made to the process to fit my environment.

Requirements

For my lab cluster, I wanted the environment to be functional, portable, isolated from the rest of my network, and accessible over wifi. The network layout looks a bit like this:

Networks:

Green - Lab-Net - 10.127.8.x/24

Blue - K8s Net - 172.12.0.0/12

Build

The build has two parts, hardware and software. Rather than provide you with a complete manifest of the hardware, we’ll summarize it so we can spend more time on the software parts.

Next, you will need to create some supporting files for Packer. Specifically, we need to create a JSON file that tells Packer what to build, a customization script to install our additional packages, and a user-data.yml for the flash process.

The files I used can be found here, and are placed into the /vagrant/ folder of the VM.

Note: If you are using Raspberry Pi 3s with built-in wifi, the user-data.yml I supplied will have them all connect to your network, rather than forcing them to communicate through the master node.

Finally, we’re ready to build the image. To do that, from inside the Test VM, run the following commands:
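The gist of it (the template filename here is an assumption based on the files linked above; the ARM image builder wants root for its loopback mounts):

cd /vagrant
sudo packer build raspbian.json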

Burning the image to the cards

Now that you have a custom image, it’s time to put it on the cards. To do that, with the flash command installed, the following command will burn each SD card and set its hostname:

for i in {01..07}; do ~/Downloads/flash --hostname node-$i --userdata ./user-data.yml ./raspbian-stretch-modified.img; done

Place the SD cards into your Raspberry Pis and boot! This will take a few minutes.

Setting up networking

From the network diagram earlier, you’ll have seen that we’re using node-01 as both the Kubernetes master, as well as the network gateway for the rest of the nodes. To supply the nodes with connectivity, we need to configure NAT and DHCP. The following commands will do this for you:
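A sketch of what those commands amount to, assuming wlan0 is the wifi uplink and eth0 faces the other nodes on Lab-Net (interface names and the lease range are assumptions):

# Enable forwarding and masquerade lab traffic out the wifi uplink
sudo sysctl -w net.ipv4.ip_forward=1
sudo iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE

# Hand out leases to the downstream nodes
sudo apt-get install -y dnsmasq
printf 'interface=eth0\ndhcp-range=10.127.8.50,10.127.8.150,12h\n' | sudo tee /etc/dnsmasq.d/lab.conf
sudo systemctl restart dnsmasq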

This post is a bit belated, and will honestly still be more raw than I would like, given the time elapsed. That said, I’d still like to share the keynote & session notes from one of the best-run tech conferences I’ve been to in a long while.

Before we get started, I’d like to make some comments about KubeCon itself.

First, it was overall one of the better-organized shows I’ve been to, ever. Registration & badge pickup was more or less seamless: one person at the head of the line to direct you to an open station, along with a friendly and detailed explanation of what and where things are… just spot on.

Right next to registration was a small table with what looked like flair of some manner; on closer inspection, however, it contained what might be the most awesome bit of the conference: communication stickers.

From the event site:

Communication Stickers

Near registration, attendees can pick up communication stickers to add to their badges. Communication stickers indicate an attendee’s requested level of interaction with other attendees and press.

Keynote Summaries

What follows are my raw notes from the keynotes. You’ll note that these are from day 1’s afternoon keynotes. This isn’t because the rest weren’t interesting; these were just the ones I found most interesting.

Overall

About a year ago now, I began an adventure into the world of home coffee roasting, in the hope of maybe one day backing out of tech and into coffee. While that is a story for another time, here is some of the hard-won knowledge from my first year, in the hopes that it will help someone else.

Home Roasting

I’ve broken this post into a few sections, in the order I wish I’d taken.

Training Materials

Roasters

Beans

Training Materials

Before you grab a roaster, spend some time reading about how it all works. Watch a bunch of videos on how other folks are doing it, the different machines, etc. To that end, here are a few jumping-off points:

Reading material

Most anything by Scott Rao is great. The book mentioned here, however, is specific to roasting. It breaks down the styles of different machines, the roasting process, and the chemical changes that happen inside the beans as roasting happens. Along the way, Scott provides a world of guidance that will both help you get started and fine-tune your process.

Video material

Sweet Maria’s (the place I source beans from) is a sort of full-service home roasting shop. To that end, they have their own YouTube channel which has some decent guides: https://www.youtube.com/user/sweetmarias

In-Person Materials

This will largely depend on your area & willingness to spend, but there are a number of coffee-roaster training classes available. The ones I found near us were few, far between, and more expensive than I was ready for at the time.

Roasters

Roasting coffee can be done with most anything where you can apply heat and some movement to the beans while they, well, roast. This means you can get there with anything from a frying pan and wooden spoon, to something commercial grade. At the home roaster / small scale batch roaster end, you have three basic types:

Air Roasters

The most common here is an air popcorn popper. If you go this route, be sure to choose one with high wattage, strong plastic, and an easy-to-bypass thermal cutoff. Coffee beans roast at a much higher temperature than popcorn does, so the higher wattage will help get you there faster, and the plastic / cutoff help ensure the machine can withstand the roasting process more than once*.

* I didn’t learn this lesson the first time, or the second, or the third. While it can be done, most machines from big-box stores just won’t stand up to the punishment.

Also in the air range, you have things like the SR-700, which, while having a small capacity, allows for manual and computer control. It also has a decently sized community of home roasters who share roast profiles for various beans. Overall, this makes the on-ramp from “I want to roast” to “my first good cup” much quicker.

Electric

After air roasters, the next step up is some flavor of electric roaster. Electric roasters are generally of the ‘drum’ type, with an honorable mention of the Bonaverde, which is a pan-style machine that takes you from green beans to a pot of coffee. In this realm, the Behmor 1600+ is my roaster of choice, and the one I use currently.

It has the largest capacity of any home / hobby coffee roaster, allowing you to roast up to a pound at a time. Additionally, it has a number of baked-in roast profiles, and the ability to operate as a fully manually controlled roaster. While I wish it had better / more instrumentation, I’ve learned to live without, working by sight, smell, and such.

Electric roasters, however, have a few eccentricities:

They all require ‘clean’ and consistent power. Home power outlets may vary from one to the next in the actual voltage coming out of the outlet; further, said voltage may vary as other things on the same circuit add or remove load. All of this has a direct impact on the ability of the roaster to heat the beans. To this end, I’ve added a sine-wave-generating UPS.

In the electric realm, I’m watching the Aillio R1, which, like the SR-700, offers complete computer control; like the Behmor, offers a huge capacity; and, like higher-end gas machines, lets you roast back to back with less overhead / cooldown time.

Gas

There are a number of gas roasters aimed at the home market; however, they usually require a gas source of some sort (stove at the small end, propane tank or natural gas line at the mid/high end) and some way to vent the smoke and other fumes.

Given the requirements for running a gas machine, I did not research these as deeply for a starter machine. However, the greater capacities and control offered by a gas powered roaster will have me looking at them if/when I expand.

Beans

It used to be that sourcing beans was much harder. There was only one shop in my town that would sell them, and you basically got whatever selection they had. Then Sweet Maria’s came around and changed the game for home roasters. There are now a double handful of dedicated shops like Sweet Maria’s, and, well, even Amazon will ship you some via Prime.

My recommendation here is to grab a green bean sampler, one that’s larger than you think you’ll need, and use it to learn your chosen roaster and to get a really good feel for a particular bean/varietal/region.

Atlanta VMUG Usercon - Containers: Beyond the hype

Herein lie the slides, images, and Dockerfile to review this session as I presented it.

Behind the scenes

Because this is a presentation on containers, I thought it only right that I use containers to present it. Besides being a neat trick, it let me emphasize the change in workflow, as well as the usefulness of containers.

I used James Turnbull’s “Presenting with Docker” as the runtime to serve the slides, then mounted three volumes to provide the slides, images, and a variant of index.html so I could live-demo a change of theme.

Viewing the presentation

Assuming you have Docker installed, use the following commands to build and launch the presentation:
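In broad strokes, it looks like this (the image tag, port, and in-container paths here are assumptions; the “Presenting with Docker” repo has the real ones):

# Build the presentation image from the repo checkout
docker build -t vmug-containers .

# Serve it, mounting slides, images, and the themed index.html over the defaults
docker run -d -p 8000:8000 \
  -v "$PWD/slides:/presentation/slides" \
  -v "$PWD/images:/presentation/images" \
  -v "$PWD/index.html:/presentation/index.html" \
  vmug-containers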

In openstack-ansible versions 15.1.7 and 15.1.8, there is an issue with the version of shade and the keystone db_sync steps not completing properly. This is fixed in 15.1.9; however, if you are running one of the aforementioned releases, the following may help.

Symptom:

Keystone reports a 500 error when attempting to operate on the credential table.

--- edit out this section ---
BEGIN
IF NEW.encrypted_blob IS NULL THEN
SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'Credential migration in progress. Cannot perform writes to credential table.';
END IF;
IF NEW.encrypted_blob IS NOT NULL AND OLD.blob IS NULL THEN
SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'Credential migration in progress. Cannot perform writes to credential table.';
END IF;
END */;;
--- end edits ---
--- add this to the first line ---
USE keystone;
--- end addition ---
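Applied end to end, the workaround amounts to dumping the keystone database, making the edits above, and loading the result back. A rough sketch, with credentials and filenames left to your environment:

# Dump keystone, strip the trigger block shown above, add USE keystone; to line 1
mysqldump keystone > keystone.sql
$EDITOR keystone.sql

# Load the edited dump back in
mysql < keystone.sql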

Resources

Console logs are critical for troubleshooting the startup process of an instance. These logs are produced at boot time, before the console becomes available. However, when working with cloud-hosted instances, accessing them can be difficult. OpenStack Compute provides a mechanism for accessing the console logs.

Getting Ready

To access the console logs of an instance, the following information is required:

openstack command line client

openrc file containing appropriate credentials

The name or ID of the instance

For this example, we will view the last 5 lines of the cookbook.test instance.
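Using the openstack client, that looks like the following:

# openstack console log show --lines 5 cookbook.test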

How it works…

The openstack console log show command collects the console logs, as if you were connected to the server via a serial port or sitting behind the keyboard and monitor at boot time. The command will, by default, return all of the logs generated to that point. To limit the amount of output, the --lines parameter can be used to return a specific number of lines from the end of the log.

Rescuing an instance

OpenStack Compute provides a handy troubleshooting tool with rescue mode. Should a user lose an SSH key, or otherwise be unable to boot and access an instance, say, due to bad iptables settings or a failed network configuration, rescue mode will start a minimal instance and attach the disk of the failed instance to aid in recovery.

Getting Ready

To put an instance into rescue mode, you will need the following information:
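As with the console log recipe, this means the openstack command line client, an openrc file containing appropriate credentials, and the name or ID of the instance. With those in hand, putting our cookbook.test example into rescue mode looks like this:

# openstack server rescue cookbook.test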

Note: When in rescue mode, the disk of the rescued instance is attached as a secondary disk. In order to access the data on it, you will need to mount it.

To exit rescue mode, use the following command:

# openstack server unrescue cookbook.test

Note: This command will produce no output if successful.

How it works…

The openstack server rescue command provides a rescue environment with the disk of your instance attached. First, it powers off the named instance. Then it boots the rescue environment and attaches the disks of the instance. Finally, it provides you with the login credentials for the rescue instance.

Accessing the rescue instance is done via SSH. Once logged into the rescue instance, you can mount the disk using mount <path to disk> /mnt.
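For example, inside the rescue instance (the device path here is an assumption; use lsblk to find the secondary disk on your system):

# lsblk
# mount /dev/vdb1 /mnt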

Once you have completed your troubleshooting or recovery, the unrescue command reverses this process: first stopping the rescue environment and detaching the disk, then booting the instance as it was.