plip blog – https://blog.plip.com

Laser cut Spirograph and gift box
Thu, 07 Dec 2017 07:12:01 +0000 – https://blog.plip.com/2017/12/06/laser-cut-spirograph-and-gift-box/
For E’s birthday two months ago I made him a laser cut spirograph. It works pretty well and I really enjoyed making it! I tried downloading some existing plans, but their gears didn’t mesh well. I ended up using Inkscape to draw the gears myself. That tutorial makes them look pretty for an illustration, but these are real gears and they mesh well! Not perfect, but well enough. For the case I started with a basic box-joint box from the ever awesome MakerCase, and then used Inkscape to add the flourishes.

Poor man’s debugging high load on an Apache server
Thu, 30 Nov 2017 07:49:20 +0000 – https://blog.plip.com/2017/11/29/poor-mans-debugging-high-load-on-an-apache-server/

Hopefully you have a fancy solution to monitor your logs, but if you don’t, then I have a little treat for you. At work today I was troubleshooting why our web server would spike up to a load average of over 200 (I’d never actually seen a quad core server go much over 40 – neat!). At first I stumbled around looking for non-obvious causes, and then I just looked at the traffic.

I was pretty sure a single IP was slamming our server at midnight UTC. To verify this I wrote a quick bash script to log the load averages, the count and name of the top process, and finally, the count and IP of the top visitor over the prior minute. It’s 13 lines of actual code and more than double that in comments.
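The original 13-line script wasn’t preserved in this copy of the post, but a sketch of what it describes might look like this (the Apache access log path is an assumption – adjust for your layout):

```shell
#!/bin/bash
# Sketch of the one-minute load logger described above.
# The access log path is an assumption.
ACCESS_LOG=${ACCESS_LOG:-/var/log/apache2/access.log}

# 1-, 5- and 15-minute load averages
LOAD=$(cut -d' ' -f1-3 /proc/loadavg)

# Count and name of the most common process
TOP_PROC=$(ps -eo comm= | sort | uniq -c | sort -rn | head -1 | awk '{print $1, $2}')

# Count and IP of the busiest client over roughly the last minute of log lines
TOP_IP=$(tail -n 1000 "$ACCESS_LOG" 2>/dev/null | awk '{print $1}' \
  | sort | uniq -c | sort -rn | head -1 | awk '{print $1, $2}')

echo "$(date -u '+%Y-%m-%dT%H:%M:%SZ') load=[$LOAD] top_proc=[$TOP_PROC] top_ip=[$TOP_IP]"
```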

By executing this in a crontab entry every minute, we get nice, easy-to-read logs (and easy for a computer to parse):
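Assuming the script is saved at, say, /usr/local/bin/load-snapshot.sh (the path is hypothetical), the crontab entry would be:

```shell
# m h dom mon dow  command
* * * * * /usr/local/bin/load-snapshot.sh >> /var/log/load-snapshot.log 2>&1
```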

While having only 2 of its 24 RAM slots full, we can make do with this – besides, filling it up to 1.5TB of RAM would cost $14,400 – ouch! Specifically, I think we can make this into a nice host for lots and lots of virtual machines (VMs) via a modern day hypervisor.

I’m guessing a techie at Canonical got a hold of a marketing person at Canonical; they realized that with LXD baked into Ubuntu, anyone comfy on the command line could run 4 steps to get an instant container. This meant that while there are a lot more complexities to getting LXD running in a production environment (LAN bridge, ZFS pools, ZFS caches, JBOD on your RAID card, static IPs dynamically assigned – all covered below), it was indeed intensely satisfying to have these 4 steps work exactly as advertised. Sign me up!
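The post doesn’t spell out those 4 steps, but the widely advertised LXD quickstart of that era was essentially this (the container name is arbitrary):

```shell
sudo apt install lxd             # already included on Ubuntu 16.04 server
sudo lxd init                    # accept the defaults
lxc launch ubuntu:16.04 first    # pulls the image and starts a container
lxc exec first -- bash           # you're now root inside the container
```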

While I will still take up my friend’s offer to run through an OpenStack install at a later date, I think I’m good for now.

This write-up covers how I learned to provision my particular bare metal, discovered the nuances of ZFS, and finally deployed VMs inside LXD. Be sure to see my conclusion for next steps after getting LXD set up.

Terms and prerequisites

Since this post will cover a lot of ground, let’s get some of the terms out of the way. Experts may want to skip this part:

CIMC – Cisco Integrated Management Controller – the out-of-band management system that runs on Cisco server hardware. It allows a remote console over SSL/HTML.

Stéphane Graber – Project leader of LXC, LXD and LXCFS at Canonical as well as an Ubuntu core developer and technical board member. I call him out because he’s prolific in his documentation and you should note if you’re reading his work or someone else’s.

KVM – “Keyboard/Video/Mouse” – the remote console you can access in a browser via the CIMC. Great for doing whole-OS installs or for when you bungle your network config and can no longer SSH in.

KVM – The other KVM ;) This is the Kernel Virtual Machine hypervisor, a peer of LXC. I only mention it here for clarification. Henceforth I’ll always be referring to the Keyboard/Video/Mouse KVM, not the hypervisor.

Now that we have the terms out of the way, let’s give you some homework. If you have any questions, specifically about ZFS and LXD, you should spend a couple hours on these two sites before going further:

Stéphane Graber’s LXD 2.0: Blog post series – These are the docs I dream of when I find a new technology: concise, from the horse’s mouth, easy to follow and, most of all, written for the n00b but helpful for even the most expert.

Aaron Toponce’s ZFS on Debian GNU/Linux – while originally authored back in 2012, this series of posts on ZFS is canon and totally stands the test of time. Every other blog post or write-up I found on ZFS that was worth its salt referenced Aaron’s posts as evidence that they knew they were right. Be right too – read his stuff.

Prep your hardware

Before starting out with this project, I’d heard a lot of bad things about getting Cisco hardware up and running. While there may indeed be hard-to-use proprietary jank in there somewhere, I actually found the C220M4 quite easy to use. The worst part was finding the BIOS and CIMC updates, as you need a Cisco account to download them. Fortunately I know people who have accounts.

After you download the .iso, scrounge up a CD-R drive, some blank media and burn the .iso. Then, plug in a keyboard, mouse and CD drive to your server, boot from it with the freshly burned disk, and upgrade your C220. Reboot.

Then plug the first NIC into a LAN that your workstation is on and that has DHCP. The CIMC will grab an address and show it to you on the very first boot screen.

You see that 10.0.40.51 IP address in there (click for large image)? That means you can now control the BIOS via a KVM over the network. This is super handy: once you know this IP, you can point your browser at the C220 and never have to use a monitor, keyboard or mouse directly connected to it. Handy times! To log into the CIMC the first time, the default credentials are admin/password (of course ;).

The upgraded CIMC looks like this:

I’ve highlighted two items in red: the first, in the upper left, is the secret, harder-to-find-than-it-should-be menu of all the items in the CIMC. The second is how to get to the KVM.

For now, I keep the Ubuntu 16.04 install USB drive plugged into the C220. Coupled with the remote access to the CIMC and KVM, this allows to me to easily re-install on the bare metal, should anything go really bad. So handy!

While you’re in here, you should change the password and set the CIMC IP to be static so it doesn’t change under DHCP.

Prep your Disks

Now it’s time to set up the RAID card so that two disks form a RAID1 array for my Ubuntu boot drive, and the rest show up as JBOD for ZFS use. During the boot process, wait until the RAID card prompts you and hit ctrl+v. Then configure 2 of your 6 drives as a RAID1 boot drive:

And then expose the rest of your disks as JBOD. The final result should look like this:

Really, it’s too bad that this server has a RAID card. ZFS wants to talk to the devices directly and manage them as closely as possible, so having the RAID card expose them as JBOD isn’t ideal. The ideal setup would be a host bus adapter (HBA) instead of the RAID adapter – a Cisco 9300-8i 12G SAS HBA for my particular hardware. So, while I can get it to work OK, and you often see folks set up their RAID cards as JBOD just like I did, it’s sub-optimal.

Install Ubuntu 16.04 LTS + Software

F6 to select boot drive

As there are plenty of guides on how to install Ubuntu Server at the command line, I won’t cover this in too much detail. You’ll need to:

ZFS

ZFS is not for the faint of heart. While LXD can indeed run on just about any Ubuntu 16.04 box with default settings that just work – I now run it on my laptop regularly – getting a tuned ZFS install was the hardest part for me. This may be because I have more of a WebDev background and less of a SysAdmin background. However, if you’ve read up on Aaron Toponce’s ZFS on Debian GNU/Linux posts, and you read my guide here, you’ll be fine ;)

After reading and hemming and hawing about which config was best, I decided that getting the most space possible was most important. This means I’ll allocate the remaining 6 disks to the ZFS pool and not have a hot spare. Why no hot spare? Three reasons:

This isn’t a true production system and I can get to it easily (as opposed to an arduous trip to a colo)

The chance of more than 2 disks failing at the same time seems very low (though best practice says I should have different manufacturer’s batches of drives – which I haven’t checked)

If I do indeed have a failure where I’m nervous about either the RAID1 or RAIDZ pool, I can always yank a drive from one pool to another.

Now that I have my 6 disks chosen (and 2 already running Ubuntu), I reviewed the RAID levels ZFS offers and chose RAIDZ-2, which is, according to Aaron, “similar to RAID-6 in that there is a dual parity bit distributed across all the disks in the array. The stripe width is variable, and could cover the exact width of disks in the array, fewer disks, or more disks, as evident in the image above. This still allows for two disk failures to maintain data. Three disk failures would result in data loss. A minimum of 4 disks should be used in a RAIDZ-2. The capacity of your storage will be the number of disks in your array times the storage of the smallest disk, minus two disks for parity storage.”

In order to have a log and cache drive (which can be the same physical disk), I’ll split things so 5 drives store data and 1 drive with two partitions stores my log and cache. Read up on those log and cache links to see how these greatly improve ZFS performance, especially if your data drives are slow and you have a fast cache drive (SSD or better).

To find the devices which we’ll use in our pool, let’s first see a list of all devices from dmesg:
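The listing itself was elided from this copy; either of these gets you the same device inventory:

```shell
dmesg | grep -E '\[sd[a-f]\]'    # the kernel's disk attach messages
lsblk -o NAME,SIZE,TYPE,MODEL    # or a cleaner tabular view
```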

So now we know that sdb, sda, sdc, sde, sdd and sdf are up for grabs. The hard part with ZFS is understanding the full scope and impact of how to set up your disks. Once you decide that, things become pleasantly simple. Thanks ZFS! Setting up our RAIDZ-2 pool of 5 drives with compression enabled takes just these two commands:
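The two commands were elided from this copy of the post; a sketch using the device names above, where lz4 is my assumption for the compression algorithm:

```shell
sudo zpool create lxd-data raidz2 sda sdb sdc sdd sde
sudo zfs set compression=lz4 lxd-data
```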

Note that there’s no need to edit the fstab file – ZFS does all this for you. Now we need to create our log and cache partitions on the remaining sdf disk. First, find out how many blocks there are on your drive – 468862128 in my case – using parted:
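The parted session was elided from this copy; a sketch, where the even split between log and cache partitions is my assumption:

```shell
sudo parted /dev/sdf unit s print              # reports total sectors: 468862128 here
sudo parted /dev/sdf mklabel gpt
sudo parted /dev/sdf mkpart log 2048s 234431063s
sudo parted /dev/sdf mkpart cache 234431064s 100%
```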

OK – almost there! Now, because Linux can mount these partitions with different device letters (e.g. sda vs sdb), we need to use IDs instead in ZFS. First, however, we need to find the label-to-device map:
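A sketch of finding that mapping (the wwn IDs shown match the ones used below):

```shell
ls -l /dev/disk/by-id/ | grep sdf
# ... wwn-0x5002538c404ff808-part1 -> ../../sdf1
# ... wwn-0x5002538c404ff808-part2 -> ../../sdf2
```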

Great – now we know that wwn-0x5002538c404ff808-part1 will be our log and wwn-0x5002538c404ff808-part2 will be our cache. Again, ZFS’s commands are simple now that we know what we’re calling them. Here’s how we add our cache and log:
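The commands were elided from this copy; based on the device IDs above they would be along these lines:

```shell
sudo zpool add lxd-data log wwn-0x5002538c404ff808-part1
sudo zpool add lxd-data cache wwn-0x5002538c404ff808-part2
sudo zpool status lxd-data    # verify the log and cache vdevs appear
```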

Looking good! Finally, we need to keep our ZFS pool healthy by scrubbing regularly. This is how ZFS self-heals and avoids bit rot. Let’s do this with a once-per-week cron job:

0 2 * * 0 /sbin/zpool scrub lxd-data

Networking

Now, with much thanks to Jason Bayton’s excellent guide, we know how to have our VMs get IPs on the LAN instead of being NATed. Right now my NIC is enp1s0f0 and gets an IP via DHCP. Looking in /etc/network/interfaces I see:
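The file listing was elided from this copy. Following Jason Bayton’s approach, the DHCP stanza on the NIC gets replaced with a bridge, something like this – the static address and gateway here are assumptions for my 10.0.40.0/24 LAN:

```
auto enp1s0f0
iface enp1s0f0 inet manual

auto br0
iface br0 inet static
    address 10.0.40.50
    netmask 255.255.255.0
    gateway 10.0.40.1
    dns-nameservers 10.0.40.1
    bridge_ports enp1s0f0
```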

This will allow the VMs to use br0 to natively bridge up to enp1s0f0 and either get a DHCP IP from that LAN or be assigned a static IP. In order for this change to take effect, reboot the host machine. Jason’s guide suggests running ifdown and ifup, but I found I just lost connectivity and only a reboot worked.

When you next login, be sure you use the new, static IP.

Set file limits

While you can use your system as-is to run containers, you’ll really want to update it per the LXD production recommendations. Specifically, you’ll want to allow a lot more headroom when it comes to file handles. Edit sysctl.conf:
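The additions were elided from this copy; per the LXD production-setup notes, the inotify limits are the big ones. Append something like this to /etc/sysctl.conf, then apply with sysctl -p:

```
fs.inotify.max_queued_events = 1048576
fs.inotify.max_user_instances = 1048576
fs.inotify.max_user_watches = 1048576
```

The same notes also suggest raising the nofile limits in /etc/security/limits.conf.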

LXD

Now that we have our bare metal provisioned, our storage configured, file limits increased and our network bridge in place, we can actually get to the virtual machine part – the LXD part – of this post. Heavily leveraging Jason Bayton’s still excellent guide, we’ll initialize LXD by running lxd init. This goes through the first-time LXD setup and asks a number of questions about how you want to run LXD. This is specific to 2.0 (2.1+ asks different questions). You’ll see below, but the gist of it is that we want to use our ZFS pool and our network bridge:

lxd init
-----
Name of the storage backend to use (dir or zfs) [default=zfs]:
Create a new ZFS pool (yes/no) [default=yes]? no
Name of the existing ZFS pool or dataset: lxd-data
Would you like LXD to be available over the network (yes/no) [default=no]? yes
Address to bind LXD to (not including port) [default=all]:
Port to bind LXD to [default=8443]:
Trust password for new clients:
Again:
Do you want to configure the LXD bridge (yes/no) [default=yes]?

Note that we don’t accept the ZFS default and instead specify our own ZFS pool, lxd-data. When you say “yes” to configuring the bridge, you’ll then be prompted with two questions. Specify br0 when prompted. Again, refer to Jason Bayton’s guide for thorough screenshots:

Would you like to setup a network bridge for LXD containers now? NO
Do you want to use an existing bridge? YES
Warning: Stopping lxd.service, but it can still be activated by:
lxd.socket
LXD has been successfully configured

Success! LXD is all set up now.

Hello world container

Whew! Now that all the hard parts are done, we can finally create our first container. In this example, we’ll create a container called nexus and put some basic limits on it. Remember, do not create these as root! One of the massive strengths of LXD is that it’s explicitly designed to be secure, and running containers as root would remove a lot of that security. We’ll call lxc init and then pass some raw commands to set the IP to .52 in our existing /24 subnet on the bridge. If you run lxc list, our new container shows up. That all looks like this:
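Those commands were elided from this copy; a sketch using LXD 2.0-era raw.lxc keys, where the gateway address is an assumption:

```shell
lxc init ubuntu:16.04 nexus
lxc config set nexus raw.lxc "lxc.network.0.ipv4 = 10.0.40.52/24
lxc.network.0.ipv4.gateway = 10.0.40.1"
lxc start nexus
lxc list
```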

Finally, we want to limit this container to 4 CPUs, 4GB of RAM and 20GB of disk. Like before, these commands are not run as root. They take effect on the container in real time and don’t require a restart. Go LXD, go!
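A sketch of those limit commands, following the syntax from Stéphane Graber’s LXD 2.0 resource-control post:

```shell
lxc config set nexus limits.cpu 4
lxc config set nexus limits.memory 4GB
lxc config device set nexus root size 20GB   # needs a root disk device on the container
```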

You’re all done! From here you can trivially create tons more containers. You can let them get an ephemeral IP via DHCP on the bridge or easily set a static IP. If you have a resource-intensive need and don’t set the limits above, the container will have access to the full host resources – that’d be 40 cores, 64GB RAM and 1TB of disk. If you create more VMs without resource limits, LXC will balance resources so that each container gets its fair share. There are a LOT more configuration options available. As well, even the way I declared a static IP can likely be done via dnsmasq (see this cyberciti.biz article), but I had trouble getting that to work on my setup, hence the raw calls.

Next steps

Now that you’ve bootstrapped your bare metal, dialed in your storage back end and deployed your LAN bridge, you should be all set, right? Not so fast! To make this more of a production deployment, you really need to know (and practice!) your backup and restore procedures. This will likely involve the snapshot command (see post 3 of 12) and then backing those snapshots up off of the ZFS pool. Speaking of the ZFS pool, our setup as-is doesn’t have any alerting if a disk goes bad. There are many solutions out there for this – I’m looking at this bash/cron combo.

Having a more automated system to provision containers, integrated with a more DevOps-y setup, makes sense here. For me this might mean using Ansible, but for you it might be something else! I’ve heard good things about cloud-init and Juju charms. As well, you’d need a system to monitor and alert on resource consumption. Finally, a complete production deployment would need at least a pair, if not more, of servers so that you can run LXD in a more highly available setup.

Given this is my first venture into running LXD, I’d love any feedback from you! Corrections and input are welcome, and praise too of course if I’ve earned it ;) Thanks for reading this far on what may be my longest post ever!

Punk Rock Band Name: Desiccated Ceiling Rabbit
Fri, 21 Jul 2017 18:35:55 +0000 – https://blog.plip.com/2017/07/21/punk-rock-band-name-desiccated-ceiling-rabbit/

I was over at my friends’ house and they have kids. One of their kids’ favorite games is to take their sticky, stretchy toys (like these, but rabbit shaped) and throw them up on the ceiling. Their son has a patented lick-and-stuff-under-your-armpit-for-5-seconds technique which imbues just the right amount of moisture. It’s amazing.

However, before I saw his patented technique, and before even knowing they had the toys, I noticed something stuck up on the ceiling and asked what it was.

Desiccated Ceiling Rabbit

was the answer I got back!

Dog Poop Reminder
Wed, 05 Jul 2017 19:08:24 +0000 – https://blog.plip.com/2017/07/05/dog-poop-reminder/

This is a truly free idea, like my other one, but not like the category at large, which has a lot of open source stuff (also free!).

Often when I’m out for a trail run or a hike with the fam, I see bags of dog poop. Like, right there on the side of the trail: someone saw their dog poop, pulled a bag out of their pocket, and put the stinky poop in the bag. Super nice of them! Then, for some reason, they put the bag of poop down on the side of the trail. (Now that I’ve mentioned this to you, I’m sure you’ll see a lot of these.)

But why leave it trailside?!? Why not take it to a trash can and throw it away? Presumably because they don’t want to hike for 2 hours holding a bag of poop. So they think, “I’ll just leave it here and remember to pick it up on the way back.” Then they forget.

So, what they need is a reminder! This would start out as a super simple app for your phone. You launch the app and it immediately opens your camera. You take a pic of yer dog’s poop, and off you go. When you took that picture, the app also noted your GPS coordinates. Then, when you get within, say, 100ft of the poop, an alarm goes off – “PICK UP YOUR DOG’S POOP!” – and shows you the picture you took to help you remember where/which poop it is.

Subsequent versions could implement:

Time out reminder – This would go off N minutes after you leave the poop – say 4 hours – assuming you didn’t go back and pick it up. This alert could also be tied to whether you ever returned to the GPS poop spot.

Multi-dog poop awareness – When the app launches it shows you a picture of all your dogs. You pick the dog that just pooped and then take a pic. This way you can have multiple poops in one trip and know which dog did what.

Poop Analysis – The app could alert you if your dog hasn’t pooped on your walks or something. And, you know, if you wanna be like all the cool kids, you could do real time poop analysis using TensorFlow. As well, if you do the same walk every day, you could heat map where your dog is most likely to go.

Poop Points – I’m not sure what they’d be worth, but you could social network this bad boy and people could get poop points for picking up other people’s dogs’ poop. Certainly the gamification might draw people, but otherwise, I dunno what you’d redeem these points for.

Go forth and make this app! The idea is on me, for free.

Otherwise, if you’re a dog owner, please, please pick up that poop.

Punk Rock Band Name: bad pig
Thu, 29 Jun 2017 19:03:36 +0000 – https://blog.plip.com/2017/06/29/punk-rock-band-name-bad-pig/

Just after getting cleaned up in the bath, my son was going on and on about a book he read at school. Likely “The Three Little Wolves and the Big Bad Pig“, but I could be wrong.

Anyway, at one point he said you should write “bad pig” all over your body. To which I mentally added, “and go on stage half naked to perform in your band titled,”:

bad pig

Punk Rock Band Name: super resting bitch face
Sat, 24 Jun 2017 19:03:27 +0000 – https://blog.plip.com/2017/06/24/punk-rock-band-name-super-resting-bitch-face/

I was chatting with my buddy the other night and he mentioned something about someone, describing them as,

super resting bitch face

My response was, unsurprisingly, “awesome punk rock band name!” This post also helps me clear out my inbox to get closer to inbox zero.

Howto: Sympa 6.2 on Ubuntu 17.04
Sun, 04 Jun 2017 23:17:20 +0000 – https://blog.plip.com/2017/06/04/howto-sympa-6-2-on-ubuntu-17-04/

This post is a continuation of my last post on the topic, “Howto: Sympa 6.1 on Ubuntu 16.04“. It should come as no surprise that this is about installing Sympa on the most recent version of Ubuntu to get the most recent version of Sympa (at the time of this writing). That’d be Sympa 6.2.16 and Ubuntu 17.04. The steps only vary a little between the two, but here they all are for completeness.

Assumptions/Prerequisites

Like the last post, this one assumes you have root on your box, that you have Apache2 installed, that you’re running a stock Ubuntu 17.04 install, that you want to run Sympa on your server, and that you’ll be using Postfix as the list’s MTA. It also assumes you have a DNS entry (A record) for the server, as well as either an MX record pointing to the A record or no MX record so the MX defaults to the A record. If this doesn’t apply to you, caveat emptor!

To recap, that’s:

Apache 2 installed and working

Postfix as MTA

Ubuntu 17.04 server

Existing DNS entry

Run all commands as root

I was also using this server solely to serve Sympa mail and web traffic, so if you have a multi-tenant/multi-use server, it may be more complicated.
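The step-by-step commands were lost from this copy of the post; since Ubuntu 17.04 packages Sympa 6.2.16, the core of it is presumably along these lines (see the Sympa 6.1 on 16.04 post for the full walkthrough):

```shell
sudo apt install postfix sympa   # Postfix as MTA, Sympa 6.2.16 from the 17.04 archive
```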

Sympa should now be up and running at lists.example.com! All mail in and out should work so you can run your own list server. Please report any problems so I can keep this post updated and accurate – thanks!

4k Mac Plus
Mon, 17 Apr 2017 16:17:47 +0000 – https://blog.plip.com/2017/04/17/4k-mac-plus/

The amazing Archive.org just posted a Mac OS 7 emulator. See “Early Macintosh Emulation Comes to the Archive“. Given my first computer was a “Fat Mac“, seeing this operating system brought to life brought back so many memories. Thank you archive.org!! They even support full screen, so you can run a 512×342 resolution emulator on a 4k monitor in a browser under Ubuntu ;)