Scott's WeblogThe weblog of an IT pro focusing on cloud computing, Kubernetes, Linux, containers, and networking

It should come as no surprise to anyone that I’m a huge supporter of Spousetivities, and not just because it was my wife, Crystal Lowe, who launched this movement. What started as the gathering of a few folks at VMworld 2008 has grown over the last 11 years, and this year marks the appearance of Spousetivities at an entirely new conference: Oktane 2019!

Oktane is the conference for Okta, a well-known provider of identity services, and the event is happening in San Francisco from April 1 through April 4 (at Moscone West). This year, Okta is bringing Spousetivities in to add activities for those traveling to San Francisco with conference attendees. Planned activities include:

A wine tour in Sonoma/Napa with private transportation (lunch is included, of course!)

A walking food tour of San Francisco combined with a bus tour of the city and tickets to Beach Blanket Babylon

A whale watching tour

…and more!

If you’re attending Oktane19 and are bringing along a spouse, domestic partner, family member, or even just a friend—I’d definitely recommend signing them up for Spousetivities. What’s particularly cool about the activities at Oktane is that some activities—the wine tour and the walking tour—are available on Sunday, March 31, for folks arriving into San Francisco early. Nice!

It’s been a little while now since I published my 2018 project report card, which assessed my progress against my 2018 project goals. I’ve been giving a fair amount of thought to the areas where I’d like to focus my professional (technical) development this coming year, and I think I’ve come up with some project goals that align both with where I am professionally right now and where I want to be technically as I grow and evolve. This is a really difficult balance to strike, and we’ll see at the end of the year how well I did.

Without further ado, here’s my list of 2019 project goals, along with an optional stretch goal (where it makes sense).

Make at least one code contribution to an open source project. For the last few years, I’ve listed various programming- and development-related project goals. In all such cases, I haven’t done well with those goals because they were too vague, and—as I pointed out in previous project report cards—these less-than-ideal results are probably due to the way programming skills tend to be learned (by solving a problem/challenge instead of just learning language semantics and syntax). So, in an effort to align my desire to increase open source contributions along with a desire to improve my programming/development skills, I’m setting a goal to make at least one code contribution to an open source project this year. For the purposes of this goal, I will count “infrastructure-as-code” contributions (Ansible, Terraform, etc.) as one-fourth of a code contribution. Contributions/commits to my own Polyglot project do not count. (Stretch goal: Make three code contributions to open source projects.)

Add at least three new technology areas to my “learning-tools” repository. Established a few years ago, my “learning-tools” repository contains tools and tutorials for learning new technologies. It’s gotten a bit stale over the last couple of years, so this year I want to add at least three new technology areas to this repository. I have a few ideas about some of the technology areas I’d like to add, but I’m going to leave this open so as to account for directional changes over the course of the year. These contributions/commits do not count against my previous project goal. (Stretch goal: Add five new technology areas to the “learning-tools” repository.)

Become more familiar with CI/CD solutions and patterns. In 2018 I focused the majority of my energy on becoming more fluent in Kubernetes (and I did reasonably well, though there is still plenty to learn). In 2019, I need to “move up the stack” a bit and increase my knowledge and experience with CI/CD solutions and usage patterns, particularly in containerized environments. I know that this goal is rather vague, but at this point I’m not really sure how I can make it more specific, measurable, and concrete.

Create at least three non-written content pieces. I’ve been blogging for a long time (14 years as of May 2019), and previous attempts at other forms of content creation have not been quite as successful. This year, I’m going to try again, but without specifying what type of content (only that it is non-written content). It could be a presentation published via Slideshare or SpeakerDeck, a video tutorial published on YouTube, or a graphic/diagram posted somewhere. Audio content created for the Full Stack Journey podcast will not count against this project goal. (Stretch goal: Create five pieces of non-written content.)

Complete a “wildcard project” (if applicable). As I’ve done in previous years, I’m going to allow room for a “wildcard project.” It’s difficult, if not impossible, to completely chart where my career or projects will take me, so I use the “wildcard project” as a means of addressing that variability. I won’t grade myself negatively if I don’t complete one.

So there’s my list of 2019 project goals. I’ve tried to take the lessons learned from previous years to make this year’s goals as specific and measurable as possible (where possible), and to align these goals with each other and with the larger trends in my career and the industry. Time will tell how effective I was with that alignment.

Feel free to hit me up on Twitter if you have questions or comments about these project goals. I’d certainly love to hear your feedback!

vpnc is a fairly well-known VPN connectivity package available for most Linux distributions. Although the vpnc web site describes it as a client for the Cisco VPN Concentrator, it works with a wide variety of IPSec VPN solutions. I’m using it to connect to a Palo Alto Networks-based solution, for example. In this post, I’d like to share how to set up split tunneling for vpnc.

Split tunneling, as explained in this Wikipedia article, allows remote users to access corporate resources over the VPN while still accessing non-corporate resources directly (as opposed to having all traffic routed across the VPN connection). Among other things, split tunneling allows users to access things on their home LAN—like printers—while still having access to corporate resources. For users who work 100% remotely, this can make daily operations much easier.

vpnc does support split tunneling, but setting it up doesn’t seem to be very well documented. I’m publishing this post in an effort to help spread information on how it can be done.
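For context, a basic vpnc configuration file (commonly found at /etc/vpnc/config.conf or similar; all the values below are placeholders, not real settings) looks something like this:

```
IPSec gateway vpn.example.com
IPSec ID example-group
IPSec secret example-group-secret
Xauth username your-username
```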

All this information, naturally, has to reflect the correct configuration for your particular VPN setup; this part is reasonably well documented in various vpnc tutorials. If you stop here, you’ll have a “regular” vpnc connection that routes all traffic across the VPN.

To do split tunneling, add this line at the end of your configuration file:

Script /etc/vpnc/custom-script

You can use whatever filename you want there (and put it wherever you want in the file system, although I prefer keeping it in /etc/vpnc). In the file you specified, add these contents:
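The script contents look something like this (a sketch based on vpnc’s CISCO_SPLIT_INC convention; adjust the network values for your environment):

```shell
#!/bin/sh

# Number of networks to route across the VPN
CISCO_SPLIT_INC=1

# Network 0: 10.0.0.0/8
CISCO_SPLIT_INC_0_ADDR=10.0.0.0
CISCO_SPLIT_INC_0_MASK=255.0.0.0
CISCO_SPLIT_INC_0_MASKLEN=8

# Hand off to the standard script packaged with vpnc, which sets up
# the routing based on the values above
. /etc/vpnc/vpnc-script
```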

The CISCO_SPLIT_INC value specifies how many networks are going to be configured to route across the VPN. In this example, there is only a single network being routed across the VPN. That network is provided by the CISCO_SPLIT_INC_0_ADDR, CISCO_SPLIT_INC_0_MASK, and CISCO_SPLIT_INC_0_MASKLEN entries, and in this case equates to 10.0.0.0/8.

If you have multiple/non-contiguous networks, then specify how many networks on the CISCO_SPLIT_INC line, and then repeat the lines above for each network, incrementing the number for each section. For two non-contiguous networks, you’d have a series of CISCO_SPLIT_INC_0_* lines (for the first network) followed by a set of CISCO_SPLIT_INC_1_* lines (for the second network).
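As an illustration, using two hypothetical networks (10.0.0.0/8 and 172.16.0.0/12), the relevant portion of the script would look like this:

```
CISCO_SPLIT_INC=2

# Network 0: 10.0.0.0/8
CISCO_SPLIT_INC_0_ADDR=10.0.0.0
CISCO_SPLIT_INC_0_MASK=255.0.0.0
CISCO_SPLIT_INC_0_MASKLEN=8

# Network 1: 172.16.0.0/12
CISCO_SPLIT_INC_1_ADDR=172.16.0.0
CISCO_SPLIT_INC_1_MASK=255.240.0.0
CISCO_SPLIT_INC_1_MASKLEN=12
```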

The last line is important—this ties back to the script that comes packaged with vpnc to set up all the routing and such, as modified/directed by the values specified in your custom script. This allows you to customize the behavior of split tunneling on a per-connection basis.

Once you have your custom script in place, you can connect using sudo vpnc /etc/vpnc/config.conf (as normal). Once the connection is up, you can use ip route list to see that only the specified networks are being routed across the VPN. All other traffic still uses your local gateway.

Note that this solution does not address custom DNS resolver configurations. If you need to be able to resolve both corporate hostnames and hostnames on your home LAN, additional steps are needed. I’ll try to document those soon (once I’ve had a chance to do some additional testing).

I recently had a need to do some “advanced” filtering of AMIs returned by the AWS CLI. I’d already mastered the use of the --filters parameter, which let me greatly reduce the number of AMIs returned by aws ec2 describe-images. In many cases, using filters alone got me what I needed. In one case, however, I needed to be even more selective in returning results, and this led me to some (slightly more) complex JMESPath queries than I’d used before. I wanted to share them here for the benefit of my readers.

What I’d been using before was a command that looked something like this:
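The command looked roughly like this (the owner ID and exact filter values here are representative, not exact):

```shell
# 099720109477 is Canonical's AWS account ID
aws ec2 describe-images --owners 099720109477 \
  --filters "Name=name,Values=*ubuntu-xenial-16.04*" \
            "Name=architecture,Values=x86_64" \
            "Name=root-device-type,Values=ebs" \
            "Name=virtualization-type,Values=hvm" \
  --query 'sort_by(Images, &CreationDate)[-1].ImageId'
```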

The part after --query is a JMESPath query that sorts the results, returning only the ImageId attribute of the most recent result (sorted by creation date). In this particular case, this works just fine—it returns the most recent Ubuntu Xenial 16.04 LTS AMI.

Turning to Ubuntu Bionic 18.04, though, I found that the same query didn’t return the result I needed. In addition to the regular builds of 18.04, Canonical apparently also builds EKS (Amazon’s managed Kubernetes offering) versions and special minimal versions of Ubuntu 18.04 AMIs. It was one of those AMIs (the EKS-related one, in fact) that was getting returned by my command. So how do I go about filtering out images in a more granular fashion?

After reading through a few blogs (see here, here, and here—great work by their respective authors, thank you!), I finally stumbled upon the syntax that would work:
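Reconstructed here (again, the owner ID and exact filter values are representative), the working command looks something like this:

```shell
# 099720109477 is Canonical's AWS account ID
aws ec2 describe-images --owners 099720109477 \
  --filters "Name=architecture,Values=x86_64" \
            "Name=root-device-type,Values=ebs" \
            "Name=virtualization-type,Values=hvm" \
            "Name=name,Values=*ubuntu-bionic-18.04*" \
  --query 'sort_by(Images[?Name!=`null`] | [?contains(Name, `ubuntu-eks`) == `false`] | [?contains(Name, `minimal`) == `false`], &CreationDate)[-1].ImageId'
```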

The first part of it uses --filters to show only those AMIs that are for the x86_64 architecture, using an EBS volume as the root device type, using hardware virtualization, with the string “ubuntu-bionic-18.04” somewhere in the name. This is pretty much identical to the Xenial version I showed earlier.

The JMESPath query is more complex, however. The first part removes entries that don’t have a value for the Name attribute. This is to prevent an error with JMESPath trying to evaluate results with no Name attribute and failing.

The next two portions check for results that contain either “ubuntu-eks” or “minimal” in the Name attribute. If either of those strings is present, the query returns a true result, and those AMIs are not included in the final set of results (because the command selects only false results, i.e., items that do not contain those strings).

Finally, it selects the most recent AMI (using the [-1] syntax, since things are sorted by the creation date) and then returns only the value of the ImageId attribute.

You’ll note the command is using JMESPath’s pipe (|) operator, which just takes the result of the previous query as its input.

This got me the result I was seeking: the AMI ID for the latest Canonical build of Ubuntu Bionic 18.04, not any of the “special” builds that are also offered. Obviously, I can extend this command to exclude other variations as needed.

Hopefully, providing this example will be useful to readers in the event one of you needs to do the same kind of slightly more “advanced” filtering of results from an AWS CLI command. This example was for AMIs, but you could equally apply the same concepts to any number of other resources as well.

Welcome to Technology Short Take #111! I’m a couple of weeks late on this one; I wanted to publish it earlier, but work has been keeping me busy (lots and lots of interest in Kubernetes and cloud-native technologies out there!). In any event, here you are—I hope you find something useful!

Networking

Daniel Dib has a great article on how network engineers need to evolve. The network isn’t going away, it’s just changing.

I referenced part 1 of Ajay Chenampara’s series on the Ansible network-engine command parser back in Technology Short Take 102 (July of last year). I’m not sure how I missed that part 2 was published only 2 days later, so I’m rectifying that now. Go check out part 2.

I’m not sure I would refer to using kubeadm to bootstrap a Kubernetes cluster as “the hard way,” but if you’re looking for a fairly detailed tutorial on using kubeadm to bootstrap a Kubernetes cluster, this post by Yair Etziony has quite a bit of information on the process.

Operating Systems/Applications

Oriol Tauleria has a write-up on how to lay out Terraform code to accommodate a project as it scales. I like some of the ideas Tauleria presents and hope to be able to implement some of them soon in my own project(s).

Maish Saidel-Keesing lays out his thoughts on the death of Docker. In the past, I might have felt the same way. However, Docker’s recent (seeming) pivot to focus on a paid desktop product might change things a pretty fair amount. Let’s face it, Docker’s hold wasn’t on the back-end systems—it was on the developers who valued the workflow. Focusing on a paid desktop solution caters to that audience. Given that containerd seems to be winning on the back-end, this allows Docker to remain influential in the container space, in my opinion.

Recent Posts

A few days ago I was talking with a few folks on Twitter and the topic of using VPNs while traveling came up. For those who travel regularly, using a VPN to bypass traffic restrictions is not uncommon. Prompted by my former manager Martin Casado, I thought I might share a few thoughts on VPN options for road warriors. This is by no means a comprehensive list, but hopefully something I share here will be helpful.

Over the last few weeks, I’ve noticed quite a few questions appearing in the Kubernetes Slack channels about how to use kubeadm to configure Kubernetes with the AWS cloud provider. You may recall that I wrote a post about setting up Kubernetes with the AWS cloud provider last September, and that post included a few snippets of YAML for kubeadm config files. Since I wrote that post, the kubeadm API has gone from v1alpha2 (Kubernetes 1.11) to v1alpha3 (Kubernetes 1.12) and now v1beta1 (Kubernetes 1.13). The changes in the kubeadm API result in changes in the configuration files, and so I wanted to write this post to explain how to use kubeadm 1.13 to set up a Kubernetes cluster with the AWS cloud provider.

On a recent customer project, I recommended the use of Heptio Contour for ingress on their Kubernetes cluster. For this particular customer, Contour’s support of the IngressRoute CRD and the ability to delegate paths via IngressRoutes made a lot of sense. Of course, the customer wanted to be able to scrape metrics using Prometheus, which meant I not only needed to scrape metrics from Contour but also from Envoy (which provides the data plane for Contour). In this post, I’ll show you how to scrape metrics from Envoy using the Prometheus Operator.

Welcome to Technology Short Take #110! Here’s a look at a few of the articles and posts that have caught my attention over the last few weeks. I hope something I’ve included here is useful for you also!

Welcome to Technology Short Take #109! This is the first Technology Short Take of 2019. It may be confirmation bias, but I’ve noticed a number of sites adding “Short Take”-type posts to their content lineup. I’ll take that as flattery, even if it wasn’t necessarily intended that way. Enjoy!

I just finished reading Cindy Sridharan’s excellent post titled “Effective Mental Models for Code and Systems,” and some of the points Sridharan makes immediately jumped out to me—not for “traditional” code development, but for the development of infrastructure as code. Take a few minutes to go read the post—seriously, it’s really good. Done reading it? Good, now we can proceed.

In December 2016, I kicked off a migration from macOS to Linux as my primary laptop OS. Throughout 2017, I chronicled my progress and challenges along the way; links to all those posts are found here. Although I stopped the migration in August 2017, I restarted it in April 2018 when I left VMware to join Heptio. In this post, I’d like to recap where things stand as of December 2018, after 8 months of full-time use of Linux as my primary laptop OS.

Over the last five years or so, I’ve shared with my readers an annual list of projects along with—at the year’s end—a “project report card” on how I fared against the projects I’d set for myself. (For example, here’s my project report card for 2017.) Following that same pattern, then, here is my project report card for 2018.

In early 2017 I kicked off an effort to start using Linux as my primary desktop OS, and I blogged about the journey. That particular effort ended in late October 2017. I restarted the migration in April 2018 (when I left VMware to join Heptio), and since that time I’ve been using Linux (Fedora, specifically) full-time. However, I thought it might be helpful to collect the articles I wrote about the experience together for easy reference. Without further ado, here they are.

I’ve been working on migrating off macOS for a couple of years (10+ years on a single OS isn’t undone quickly or easily). I won’t go into all the gory details here; see this post for some background and then see this update from last October that summarized my previous efforts to migrate to Linux (Fedora, specifically) as my primary desktop operating system. (What I haven’t blogged about is the success I had switching to Fedora full-time when I joined Heptio.) I took another big step forward in my efforts this past week, when I rebuilt my 2011-era Mac Pro workstation to run Fedora.

This is a liveblog of the KubeCon NA 2018 session titled “Hardening Kubernetes Setup: War Stories from the Trenches of Production.” The speaker is Puja Abbassi (@puja108 on Twitter) from Giant Swarm. It’s a pretty popular session, held in one of the larger ballrooms up on level 6 of the convention center, and nearly every seat was full.

This is a liveblog from the day 1 (Tuesday, December 11) keynote of KubeCon/CloudNativeCon 2018 in Seattle, WA. This will be my first (and last!) KubeCon as a Heptio employee, and I’m looking forward to the event.