Thu, 09 Nov 2017

I recently took a new job at a pretty large enterprise, in one of the security groups. I happened to join just as they kicked off an annual CTF contest, so I signed up on day one and started gleefully hacking about. I was doing pretty well, holding a spot in the top ten among some fifty competitors, and had fun factoring some weak RSA keys, implementing a login timing attack, and recovering files encrypted by a poorly written ransomware.

Then I came to a challenge that required implementing and mining a blockchain. Given a particular genesis block, difficulty requirements, and a hash algorithm, you had to provide a chain of at least four valid blocks to retrieve the flag.

Specifically, the genesis block looked like this.

{
    "identifier": "000102030405060708090A0B0C0D0E0F",
    "nonce": 3754873684,
    "data": "Genesis Block for CTF contest, all block chains must start with this block. This is equivalent to Big Bang, time didn't exist before this",
    "previous_hash": null
}

And the difficulty for each block was pre-assigned in the following order: 8 for the genesis block, then 4, 4, 5, 6, 7, 9, 11, 13, and 16 for the remaining blocks. In this case difficulty is defined as the number of leading zeros in the resulting hash.

The content of the data field was dealer's choice, so I populated it with a poem I'm fond of and implemented the miner over my lunch break. The result produced about 100k hashes per second on my macbook, randomly hashing the block with a different nonce value until hitting upon one which produced a hash that satisfied the difficulty requirement. It completed the required four blocks before I had finished eating. I submitted the chain, collected my flag, and went back to work.
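The core loop is simple enough to sketch in Python. This is the general technique rather than my actual miner; the CTF's exact serialization and hash-input format are assumptions on my part, and I'm counting leading zero hex characters:

```python
import hashlib
import json
import random

def mine(block, difficulty):
    # Keep trying random nonces until the block's SHA-256 digest
    # starts with `difficulty` leading zero hex characters.
    while True:
        block["nonce"] = random.getrandbits(32)
        digest = hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()
        ).hexdigest()
        if digest.startswith("0" * difficulty):
            return block, digest

block = {
    "identifier": "000102030405060708090A0B0C0D0E0F",
    "nonce": 0,
    "data": "anything you like goes here",
    "previous_hash": None,
}
mined, digest = mine(block, 2)  # low difficulty, so it returns quickly
```

Each extra zero of difficulty multiplies the expected work by sixteen, which is why the later blocks get so brutal.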

But one thing kept bothering me: even though it took only four blocks to collect the flag, the instructions provided difficulty values for a ten block chain. What would happen if I submitted all ten blocks? Bonus points? A hidden challenge? My weight in dogecoins?

I stopped working on all the remaining challenges and focused on completing the chain. Since the new gig is a Go heavy shop and my Go game is weak, I decided to port the miner to Go for a compiled-language speedup. The first step was to see if I could replicate the hashing algorithm, which turned out to be pretty straightforward with the standard library crypto/sha256 package.

I can see why Python people like Go, it's pretty intuitive. This bought me about an 8x speedup: my macbook was churning out 800k hashes per second, which ripped through the next few blocks rapidly, up until it hit block 7 with a difficulty rating of 11. And there it stayed for quite a while.

Recalling the early days of bitcoin mining, I decided that a mining pool seemed like a good approach. I have nerdy friends, they have computers, by our powers combined we can do a thing, right?

So I implemented a quick and dirty web app to loosely coordinate a distributed pool of miners. I made sure the miners would generate the same identifier field for a given block, where previously it had also been random, so the fleet would be working on the same problem. Then I had them poll the server every thirty seconds to see if anyone else had mined a block. Stumbling about on my new Go legs, it took about a day and a half to get the server and the miner working in tandem.

On Friday the 3rd I started soliciting friends to run the miner.

I was testing the pool with my work macbook, my personal laptop, and my desktop. All told these combined to produce about 2.4 million hashes per second; there seemed to be an upper limit around the 800k mark per machine that didn't vary much with CPU speed. As friends came on board the pool hashrate began to climb. Slowly.

The ones with beefy gaming PCs complained that their many cores remained idle while mining, so I set about learning how to use Go's concurrency to make use of them. Eventually I got it working, after faffing about with channels and worker pools and whatnot. The code is here, and it's hideous and no doubt rife with bugs. I haven't had to deal with pointers since the 90s, and my approach is roughly on par with one a friend once half-jokingly suggested: "Keep adding asterisks and ampersands until it works".

Anyway, it's ugly but functional: the concurrent miner spawned a dedicated goroutine for each available core on the system and did a great job pinning the entire CPU at 100% usage.

We went from this.

To this.

Somewhere around this point, on the evening of Nov 4th, I decided to start graphing the hashrate of the mining pool.

We were doing better, but for the next two days the entire pool churned and churned on block 7, getting nowhere. I decided to stand up a bunch of cheap Google Cloud compute instances to help the effort. My friend Adam had a bunch of cores sitting around from a screw up with a cloud provider ages ago, so he fired up a dozen miners in there. At its peak the pool hit nearly 50 million hashes per second. By then I'd had to fortify the web app with more workers and a proper MySQL database rather than just SQLite.

Some time around 8 am on Sunday the 5th, a Google instance in South America with miner id 662F4F146E1504EA mined the elusive block 7, and the whole fleet rolled over to working on block 8 with difficulty level 13. At this point we had three days to go till the end of the CTF contest. People had been suggesting it for a while and I'd been hoping to avoid it, but I finally decided to cave and take a serious look at implementing a GPU accelerated miner.

I managed to get a hash actually computed on my GPU, but writing an algorithm to run at GPU-scale parallelism is pretty far from the sort of software I normally write. I was just beginning to see how I could divide the work up, and just skirting the outlines of how I might get meaningful answers back out, but with time running short and brain cells in short supply I sputtered to a stop a day before the contest ended.

Feeling pretty burnt out, I threw up my hands in defeat. What I did learn was a fair bit about Go, who my most competitive friends are, and way more than I ever needed to know about how sha256 works. For the curious, here's my tl;dr.

Remember these puzzles?

That's pretty much how sha256 works. When you fire it up, the innards are arranged in a standard configuration hand crafted from the finest artisanal entropy (the fractional parts of the square roots of the first eight primes, as it happens). Then the input data is padded to ensure its size is a multiple of 512 bits, and it is shoved through the system 512 bits at a time. Each block of bits turns the crank and slides the configuration around in a consistent but input-dependent way, ensuring that if even one input bit is altered the sliders wind up in wildly varied final states.
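You can see that avalanche behaviour for yourself in a couple of lines of Python, with hashlib standing in for the hand-cranked version:

```python
import hashlib

h1 = hashlib.sha256(b"Genesis Block for CTF contest").hexdigest()
h2 = hashlib.sha256(b"genesis Block for CTF contest").hexdigest()  # one bit flipped ('G' -> 'g')

# Count how many of the 256 output bits differ between the two digests.
diff = bin(int(h1, 16) ^ int(h2, 16)).count("1")
print(diff, "of 256 bits changed")  # typically around half
```

One flipped input bit scrambles roughly half the output bits, which is exactly why mining is nothing but brute force.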

And that's it. See? Cryptographically secure hashes aren't that hard. Now, whatever you do, don't go read FIPS-180. That way lies total madness.

I hope the CTF is this much fun next year.

https://rob.salmond.ca/chaining-blocks-for-disappointment-and-failure/

Mon, 02 Oct 2017

I recently set up urlwatch to alert me if some web pages I'm interested in are changed. It has a nice pushbullet integration and is pretty easy to set up. Too easy, in fact. Pro tip: after configuring your preferred notification service and setting enabled: true, you're done. I spent a while faffing about thinking there had to be more to it. There isn't.

What I found, however, is that one of the pages I was monitoring had a dynamically generated <script> tag in it which was triggering spurious notifications I wanted to suppress. There didn't seem to be an obvious way to ignore particular tags, so I created a simple hook to do this.

This adds a new filter type called ignore which accepts a CSS selector as a parameter. It then uses the magical BeautifulSoup HTML parser to find all the elements which match the selector and remove them before returning the remaining HTML.
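The heart of it is only a few lines of BeautifulSoup. Here's a sketch of the selector-stripping part; the hook-registration boilerplate varies between urlwatch versions so I've left it out, and the function name is my own:

```python
from bs4 import BeautifulSoup

def strip_selector(html, selector):
    # Remove every element matching the CSS selector and return
    # the remaining HTML for urlwatch to diff as usual.
    soup = BeautifulSoup(html, "html.parser")
    for element in soup.select(selector):
        element.decompose()
    return str(soup)

page = "<html><body><p>stable</p><script>var cb = 1510000000;</script></body></html>"
cleaned = strip_selector(page, "script")
```

In the real hook this logic lives in a FilterBase subclass in hooks.py, and the watched URL's filter config names the new filter type with the selector as its value; check your urlwatch version's hook docs for the exact class and method signatures.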

Urlwatch then does its normal comparison against the previous run to see if anything has changed and carries on as usual.

To use the filter, update your config like so, altering the CSS selector to suit your needs.

https://rob.salmond.ca/simple-tag-ignore-hook-for-urlwatch/

Mon, 18 Sep 2017

I've been job hunting recently, and in keeping with tradition that means I've been working on some coding homework assignments. For one company which I was particularly hoping to impress, I got a bit showy and put together a nice containerized environment to work in. I learned most of the techniques for this approach working with a large dev team who built extensive tooling around their containerized dev environment to support over a dozen custom apps and at least as many supporting service containers.

In the weeks since submitting this project I've had a couple friends mention that they would like to learn more about working in docker, so I've extracted the good bits and put them in a public repo. Here I'll describe how it works and how to use it.

The Flask App and Configuration

I reached for Flask to build the web app as that's my go-to framework and I wanted to build a strong submission; go with what you know, as they say. I have replaced the business logic from the actual assignment so as not to provide reference material for future applicants, but the structure is the same.

If you're looking to learn Flask there are better projects out there. I recommend Overholt, an oldie but a goodie; Cookiecutter Flask, which is more geared towards full web apps than APIs; or my favourite, Flusk, a clean, fairly modern, and well organized Flask boilerplate.

The only thing worth mentioning with respect to docker is the way configuration is handled. There is a bit of extra logic in there to deal with the MySQL replicas (more on that below), but basically it just grabs the value of any environment variables prefixed with HELLO_ and hangs them off the Flask config object. I took this approach because environment variable injection is the baseline approach for passing config into a running container, both in docker-compose and in practically every container orchestration system. This gives us an easy on-ramp to move from dev to prod.
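The pattern boils down to a few lines. This is a sketch of the idea rather than the actual project code; the HELLO_ prefix is from the repo, but the function name and the prefix-stripping are my assumptions:

```python
import os

def config_from_env(prefix="HELLO_"):
    # Collect prefixed environment variables into a config mapping,
    # e.g. HELLO_CACHE_HOST=memcached:11211 -> {"CACHE_HOST": "memcached:11211"}.
    return {
        key[len(prefix):]: value
        for key, value in os.environ.items()
        if key.startswith(prefix)
    }

os.environ["HELLO_CACHE_HOST"] = "memcached:11211"
cfg = config_from_env()
```

In the Flask app the result would then be applied with something like app.config.update(cfg), and the same environment variables map straight onto a Kubernetes or ECS task definition later.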

If you find yourself baking config files into your containers, or having some script in your container fetch a config file from somewhere, you're gonna have a bad time. In that case just go straight to making your app Consul or etcd aware and be done with it.

To make use of the read-only replica I used the flask-replicated extension, which is a bit naive in that it uses the HTTP method rather than the database operation to decide which database to execute the query on. For example, if you had some user.last_accessed_on datetime field that got updated on every page view this wouldn't cut it, but it gets the job done for this simple app.

The Dockerfile

One thing of note regarding the Dockerfile is the separation of the requirements.txt file (the python version of a Gemfile or a package.json file) from the rest of the app in terms of layers. Since each ADD statement creates a new layer in the image, and minimizing the number of layers is best practice, this may seem counterintuitive.

The idea here is to speed up build times. The build process will only rebuild those layers which have been modified since the last build; however, it must then rebuild any layers built upon the modified layer. By placing the requirements.txt layer above the layer for the rest of the app code we ensure that rebuilding that layer (and the subsequent apt-get install ... pip install ... layer) only happens when the requirements change. Without this separation, every single line of code we changed would mean a tedious rebuild of those layers.
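In Dockerfile terms the ordering looks roughly like this. It's an illustrative sketch, not the repo's actual file; the base image, paths, and entrypoint are invented:

```dockerfile
FROM python:3.6

# Requirements first: this layer (and the pip install below) is
# only rebuilt when requirements.txt itself changes.
ADD requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt

# App code last: day-to-day edits only invalidate this cheap layer.
ADD . /app
WORKDIR /app
CMD ["python", "run.py"]
```

The same trick works for any language's dependency manifest: copy the lockfile, install, then copy the source.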

The good news is that you don't need to rebuild the container every time you change the code. Next we'll look at how to hack in this environment.

The docker-compose and override files

This is where much of the development magic happens. Using docker-compose we can stand up all the dependencies our app(s) rely on: in this case two MySQL containers in a master/replica configuration (courtesy of Tao Wang) and a memcached container.

There are a few things worth highlighting here. First is the use of the healthcheck and restart directives. These will let you know if your services become unreachable for some reason and try to restart them in case they stop. Useful in dev, where trying weird stuff is a common occurrence.

Next, and more important, is the use of explicit app level configuration for connecting to the services in the supporting containers: in this specific case by providing environment variables for the memcached host and MySQL URI strings, but this could be any app level config.

When linking containers via the docker-compose depends_on mechanism the Hello app could simply default to looking for the hostname master or memcached, which would resolve to the correct container. However, the pattern of using code level dev-default values, be they service dependencies or feature flags or basically anything that might be different in production, creates a minefield of unknown unknowns when it comes time to ship your containers.

By explicitly specifying these configurations during development we have a roadmap to follow when we deploy to Kubernetes or ECS or whatever else down the road. Believe me when I say that reverse engineering this sort of config without a guide sucks.

Finally we should look at the docker-compose override file. By default docker-compose will parse the main docker-compose.yml file and then update the config it finds there with any additions or changes it finds in docker-compose.override.yml. We can leverage this mechanism to provide a nice developer experience by setting up the primary docker-compose file with the assumption that every container in the stack will behave normally (that is, start running the app it hosts) when it comes up. Then we can use the override file to knock out any container we care to hack on: we replace the command directive so that rather than running the app it just keeps the container running indefinitely, and add a volumes directive so that rather than using the source baked into the container it reads our local copy on the host OS, letting us hack with our preferred editor.
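A minimal illustration of the override pattern; the service name, paths, and compose version here are made up for the example and the real repo's files will differ:

```yaml
# docker-compose.override.yml
version: "2.1"
services:
  hello:
    # Don't start the app; just keep the container alive so we
    # can exec in and run things by hand.
    command: sleep infinity
    # Mount the host checkout over the baked-in source.
    volumes:
      - .:/app
```

Delete or rename the override file and the same stack boots in its "everything just runs" configuration.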

This turns the container into our dev system, hooked to all the dependency containers, isolated from our host OS, and fully loaded with all the libraries our app depends on.

We can then hop into the running container to interact with our code as we update it, using the docker exec -it <container_id> /bin/bash command. Or, in this case, we can use the make target built for just this purpose and instead run make dev.

The Makefile

The Makefile provides a lot of convenience tools for interacting with the dev environment. This could be done with any similar tool like rake or grunt or yarn or whatever the cool kids are using now.

Some useful patterns are things like the DB migration target, make setup-db. This starts up the db containers, then manually runs the app container with the necessary parameters to link with the databases and execute the initial migrations. This could be done with yet another docker-compose override file, but those grow numerous quite quickly. Note that this pattern is the reason the docker network is created externally (by make setup) rather than implicitly by docker-compose: it lets us link stand alone containers to those running in docker-compose.
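The shape of that target is something like this. It's an illustrative sketch only; the image, network, service, and variable names are all invented:

```makefile
setup-db:
	docker-compose up -d master replica
	# One-off app container on the shared external network so it can
	# reach the databases, running migrations instead of the app.
	docker run --rm --network hello_net \
		-e HELLO_MASTER_URI="mysql://root@master/hello" \
		hello-app python manage.py db upgrade
```

Because the network exists outside docker-compose, the ad-hoc `docker run` container and the compose-managed databases can resolve each other by name.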

Other handy dev targets are make testdata for generating canned API calls to our app and make nuke to completely blow the environment away when we inevitably screw it all up.

Final Thoughts

As usual this is mostly an exercise in capturing my thoughts for future reference but hopefully someone besides future me will find this helpful. I intend to use this repo as boilerplate for new projects so it should see at least a bit of upkeep here and there as I hack on stuff. I have also been tinkering with redeploying some of my personal projects in containers so I will likely have a follow up post sooner or later about the trip from dev to prod.

Also, I got the job so I guess I must have done something right!

https://rob.salmond.ca/developing-in-docker/

Fri, 20 Jan 2017

This Christmas my brother and his wife sent me an ORBNext, a very nerdy gift somewhat akin to a Philips Hue but more hackable by virtue of being built on the Electric Imp platform. The most interesting part of it is the "blink up" technology, which involves an app on your phone pulsating the screen brightness while the imp sits atop it reading the pulses with a photosensor. The result is a device with no screen and no buttons which can be connected to your wifi with less hassle than a chromecast, which is pretty cool.

The mobile app has some basic functionality built in: set the colour based on the temperature or the price of your favourite stock symbol, that sort of thing. It also has IFTTT integration, a handy service I've used for many years. Unfortunately it's not quite as useful as I'd like. For example, if you trigger a "blink the light" event by some action, the light just keeps blinking until you intervene, which I'm not thrilled about.

But then if you got a problem with a hackable light you go ahead and you hack it.

I noticed that the response time between my pressing a button on the mobile app and the light updating itself was extremely fast, so fast that I assumed the app had to be communicating with the light directly over wifi. I fired up ettercap and ran a MITM attack between my phone and the light (MITM on a light, this is the world we live in?) but found no traffic going between them.

Next I ran it between the light and my router, looking for its control channel. I found that it established a TCP connection to imp02b.boxen.electricimp.com:31314 but couldn't make heads or tails of the binary protocol in use, so I moved on to the app. Unfortunately the mobile app was communicating via HTTPS, so I switched over to mitmproxy so I could see inside the encrypted traffic.

Here's what I found.

The app was doing a nice simple POST of some straightforward form data to control the light. During setup each unit produces a unique device code which is used to identify it; this is the URI being addressed.

It seemed like it'd be easy to replicate, so I set about curling to see if I could do it, but it ended up getting late and I found myself frustrated by countless 404 responses. I replicated every header, the user agent, even went so far as to script up a request that would copy the lower case 'h' in the host: header, all with no success.

My request looked absolutely identical and still wouldn't work. I gave up and went to bed.

As is often the case, when I looked at the problem with fresh eyes today the answer jumped out at me. The Content-Length in my spoofed request was way off, lots more data than the app was sending. I compared the payloads in hex and found the problem.

The trick, it turns out, is to send the payload with an application/x-www-form-urlencoded header but not actually URL encode it. I couldn't figure out how to make cURL or my HTTP client library of choice do something so stupid, but of course urllib2 has no such qualms.
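With the modern urllib (urllib2's Python 3 descendant) the same trick looks like this: hand Request raw bytes and set the content type yourself, and nothing ever urlencodes them. The URL, device code, and form field below are placeholders, not the real ORBNext API:

```python
import urllib.request

# Placeholder endpoint and payload: the real device code and field
# format are specific to each unit and aren't reproduced here.
url = "https://example.com/device/DEVICE_CODE/color"
payload = b"colors=[[255,0,0]]"  # sent as-is, deliberately NOT urlencoded

req = urllib.request.Request(
    url,
    data=payload,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
# urllib.request.urlopen(req) would send it; the body and
# Content-Length go out exactly as constructed above.
```

The brackets and commas in the payload would normally be percent-encoded by any well-behaved form encoder, which is exactly what the server apparently doesn't want.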

And since it works from anywhere on the internet, I can now activate disco mode when I'm not even home!

https://rob.salmond.ca/a-quick-and-dirty-orbnext-hack/

Tue, 17 Jan 2017

I watch a lot of youtube. Like, a lot. Far more than TV, netflix, or movies. I'm always looking for ways to find good stuff to watch and new channels to subscribe to, since youtube recommendations haven't been worth a damn in quite some time.

Since I'm often mentioning interesting things I've watched, I've had a few friends recently ask me for channel suggestions, so here we go. My first listicle! In no particular order, here are a bunch of youtube channels that I find interesting.

After experimenting with using nothing but chromebooks for a while to see how well I could operate with just a shell, browser, and cloud, I treated myself to a top of the line thinkpad last year and decided to give a tiling window manager a try, and installed i3wm.

I've become fairly proficient with it and am quite happy with the user experience, but having two external monitors at home for easily moving workspaces around means I really miss them when I'm out and about.

I also recently spent the holidays visiting family and, as usual, performed a bit of tech support. I couldn't figure out what was up with the iPad I gave my grandad last Christmas, so this week I went out and got him a new one. Having it sitting around today, and having plans to meet a friend for an afternoon of hacking at a local coffee shop, I wondered if I could somehow use the iPad as an external monitor.

Turns out you can!

The trick is to use xrandr to define a virtual display device, and then xvnc with the -clip flag restricting the shared viewport to the size of the virtual display to make it remotely visible. Then any old vnc client on the tablet will do the rest.
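The incantation is roughly this. It's a sketch only: output names, mode geometry, and the clip offset all depend on your hardware and layout, and I'm showing x11vnc (which I know supports -clip) since your VNC server's flags may differ:

```shell
# Generate a modeline for the tablet's resolution and register it
# as a new mode (copy the numbers cvt prints into --newmode).
cvt 1024 768 60
xrandr --newmode "1024x768_60.00" <modeline numbers from cvt>

# Attach the mode to a spare output and place it beside the panel.
xrandr --addmode VIRTUAL1 "1024x768_60.00"
xrandr --output VIRTUAL1 --mode "1024x768_60.00" --right-of eDP1

# Share only the virtual display's region; the +X+Y offset must
# match where xrandr placed it in the combined framebuffer.
x11vnc -clip 1024x768+1920+0
```

Point the tablet's VNC client at the laptop's IP and i3 treats the virtual output like any other monitor for moving workspaces around.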

It works pretty great at home, but out and about there are a few problems to figure out. First, many public wifi hotspots will dynamically create a small /31 network for each client which joins, to prevent hostile users from sniffing / spoofing / whatever-ing the other folks. In that situation the tablet won't be able to connect to the VNC server, so I had to do a little network hopping before I could use it.

Another issue is latency. One network I tried, despite having ping times in the 10-20ms range to Google's DNS servers, was producing numbers over 1 second between devices on the network. Using the bloated, uncompressed VNC protocol it took 15-20 seconds for a window moved onto the tablet to appear.

Also, configuration is a pain in the ass. The VNC client I installed on the tablet has a bookmarking feature that lets you save hostnames and credentials for commonly accessed servers, but of course if you're on a strange network the IP of the laptop will change, meaning it's a bit less seamless to get started.

To work around all this I'm planning to grab a low profile USB wireless adapter to set up an ad-hoc network between the tablet and the laptop. And of course my own tablet, since I'll be shipping this one off to grandpa this week.

https://rob.salmond.ca/tablet-as-external-monitor-with-i3wm/

Mon, 05 Dec 2016

I recently had a need to move some data from sqlite to mysql and didn't find a solution that suited me. There are some shady looking proprietary apps that do this, lots of janky sed scripts to munge a sqlite dump into mysql format, and I think the mysql workbench might do it, but I wasn't prepared to wrestle with that thing.

I wanted a simple tool for a simple task so I wrote one and called it datahoser.

It's built atop the SQLAlchemy reflection system; it creates databases and tables, inserts rows, and has a simple but thorough verification step after the data has been copied.

It's not quite ready for prime time as it relies on an unreleased SQLAlchemy bugfix and could use a bit of tidying up, but it functions as advertised. It also has some examples of how to convert between non-native data types in case your source database uses a type not available in the target DB. In theory it should copy data to or from any RDBMS supported by SQLAlchemy, though I've only tried it on sqlite and mysql so far.
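The reflection-based approach is compact enough to sketch. This is the general technique rather than datahoser's actual code, written against the SQLAlchemy 1.4/2.0 API (older versions differ, notably around Row._mapping):

```python
from sqlalchemy import MetaData, create_engine

def copy_database(src_url, dst_url):
    # Reflect every table from the source database, recreate the
    # schema in the target, and bulk-insert the rows.
    src = create_engine(src_url)
    dst = create_engine(dst_url)

    meta = MetaData()
    meta.reflect(bind=src)       # discover tables and columns from src
    meta.create_all(bind=dst)    # create matching tables in dst

    with src.connect() as read, dst.begin() as write:
        for table in meta.sorted_tables:  # parents before children
            rows = [dict(r._mapping) for r in read.execute(table.select())]
            if rows:
                write.execute(table.insert(), rows)
```

The real tool adds type conversion hooks and a verification pass on top, but reflection is what lets one script work across any pair of supported backends.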

I'll throw it up on pypi when I'm able. If you end up using it, lemme know how it goes.

In unrelated news my blog turned ten years old yesterday. I still agree with my original assessment. Accelerando is a hell of a book.

https://rob.salmond.ca/sqlite-to-mysql-with-less-jank/

Not really, it was actually just my wallet.

Last week I visited San Francisco for the first time. I was there for work, and those obligations consumed the bulk of my time, but I arranged to spend a couple extra days in town to play tourist. Friday was the first day I had free, so I did the typical stuff. Walked the Embarcadero. Stuck my nose in touristy shops. Ate greasy food. Checked out Pier 39. Took a photo of the golden gate bridge.

Walked up and took a look at Lombard street, said "fuck that" and went elsewhere.

That evening I met an old colleague for drinks at the Adler Museum cafe in North Beach, a fantastic dive. I also drank some wretched stuff the locals drink called Fernet; it was awful. Stay away from it.

After he went on his way, and another coworker who had spent the afternoon wandering about with me headed to the airport to fly home, I set about tackling one of my favourite things to do in any city, but especially a new city: finding a bar to knock back some drinks and make friends.

I wandered into Buddha Bar in Chinatown, ordered a cocktail and a shot, and struck up a conversation with a couple visiting from Florida. We tried to make sense of a crazy game called "Liar's Dice" that the bartender was teaching anyone who seemed interested. After they left, another group of folks sat down next to me and started playing. We chatted. I ordered more shots and cocktails. They invited me to join them in crashing some party at a nearby hotel.

I accepted. The night was starting to get interesting!

I recall walking some distance to this party. It turned out to be a fancy halloween party at a really nice hotel. I was very underdressed but it didn't seem to matter. The details of this party are somewhat obscured at this point by the copious rounds I'd been sharing with my new friends. I recall talking to a woman dressed as a bird and a guy in a gorilla suit buying me a glass of some scotch that seemed ludicrously expensive.

At some point I decided to say goodnight and head back to my AirBNB. Outside the hotel was a queue of cabs, I grabbed one and left.

Without my wallet.

In a brilliant stroke of luck, at some point in the evening I had bought a drink with my credit card and slid it into my hip pocket instead of back into my wallet. When the cabbie realized I had no cash to pay him he took me to an ATM at a grocery store to try to get a cash advance. My Canadian card wouldn't play ball with the American ATM.

The cabbie left me there.

I'm not exactly sure why I went back downtown at that point, maybe I was hoping to find the hotel and try to locate my wallet. Maybe I could get into the office and crash on a couch. At any rate I had no idea where I was or where my AirBNB was and for some reason I aimed for the biggest buildings I could see and started walking.

For two hours (more on how I know that later).

Somehow I wound up on Russian Hill back in North Beach, then I wandered downtown. My phone was dead by this point. I was sobering up and exhausted, I'd walked almost 20km that day. I started to try to decide by what criteria I should select a doorway to pass out in.

I spotted a guy standing around outside a fast food joint with some friends looking at his phone and in a moment of desperation I approached him and asked if he could look up directions to where I thought my AirBNB was. He graciously did so and informed me it would be a fifty minute walk back to Potrero hill.

He then did an amazing thing and hailed me an Uber and sent me safely back to sleep the night off. I recall only that his name was Sean, I owe you big time man, thank you! I reached out to Uber to see if they can figure out who this generous soul was so I can throw him a few bucks for saving my tail. If they work it out I'll update this post.

Hungover the next morning I set about calling and cancelling all my cards. I also cracked open my laptop to check in for my flight home that evening but found I wasn't able to. I double checked the booking.

I'd missed my flight home. It was actually booked for the night before. No idea how I bungled that one.

In the middle of calling the bank back to desperately ask them to un-freeze my now locked accounts so that I could try to get a new flight, the host of the AirBNB knocked on the door to tell me I'd stayed far past check out and that she needed to clean up for the next guests.

It is fair to say that at this point I was freaking out.

I apologized and threw on some clothes, grabbed my bags and was out the door in minutes. I wandered to a nearby diner to sit down and make a phone call and ask somebody for a really big favour.

I called my mother. "Hi Mom, I'm in trouble. I'm trapped in a foreign country with no money. Want to buy me a flight home?".

The conversation evolved from there, but we hit a snag trying to purchase the flight; some anti-fraud thing was mucking things up.

I called my brother. "Hey bro, I'm in trouble ..."

He and his wife sorted me out and I made it home later that day. Thanks you guys, I owe you big time!

After a quick change and a shower at home I headed out to a Halloween party to tell the story of the unfortunate and mysterious night out. Lots of questions came up that I didn't have answers to. Who were the people who'd brought me to the party? What hotel was it in? Where had the cab dumped me?

I realized today that I might have some photos in my phone from that night, possibly even geotagged. I checked but all I had were a few blurry shots of the Buddha Bar.

It occurred to me though that Google might know where I'd been, and it turns out it did. Google location services was enabled on my phone, so there's a detailed log of my movements.

Here I am walking from Buddha Bar to the fancy party, as it happens my memory of that was pretty accurate. The Fairmont San Francisco is a beautiful hotel!

Here's me about an hour later in a cab, turns out he did take me to where I was staying and then mere blocks away to try to get some cash.

After he abandoned me there (I do feel bad about burning him on the ride), if I'd known where I was I could have walked back in minutes. Instead, here's what I did for the next two hours.

Yes I did wander mindlessly through the infamous Tenderloin in the wee hours of the night despite being repeatedly told to stay out of it. Nothing interesting happened though, another stroke of good luck.

It turns out I actually retraced my steps fairly faithfully. I never did find my wallet; I called the Fairmont today, and no luck there either. Ah well, I got a wild story to tell at least.

If you want to see what Google knows about where you've been, you can try it out for yourself.

https://rob.salmond.ca/i-left-my-heart-in-san-francisco/

Fri, 21 Oct 2016 16:59:21 GMT

"People still carry around macbooks and winbooks in their bags but they use them as dumb terminals to talk to FLOSS powered VMs, running in FLOSS powered containers, with FLOSS powered backends, routed on FLOSS firmware, analyzed and developed with FLOSS toolchains, and obsessively checked by people who are carrying around pocket superpowers; either GNU Linux powered Android devices or BSD powered iOS devices.

FLOSS is everywhere."

https://rob.salmond.ca/floss-has-won/

Mon, 17 Oct 2016 19:20:00 GMT

tl;dr I built a thing to track the Vancouver seabus and here it is.

A couple years ago I took my first trip across the harbour to North Vancouver to view an apartment; a few weeks later I moved in and started commuting by seabus to work. It's a great commute and I much prefer it over the skytrain, which somehow manages to regularly bring out the worst in people. At times it can also be very pretty.

Running every fifteen minutes during business hours, most of the time it's not a big deal if you arrive at the terminal just to watch it pull away. But if time is tight, or during the off hours when it's running every thirty minutes, literally missing the boat sucks. Cabbing from Waterfront to Lonsdale is about thirty bucks and takes longer than the boat, and if the bridges are busy, forget about it. This is why you'll sometimes see folks at Waterfront crouching near the turnstiles to get a look at the seabus countdown timer, or sprinting down the gangway hoping to catch it.

Accurate-ish.

The countdown timer is a lie, however, as is the schedule, which suggests boats will be departing every quarter hour on the quarter hour. The timer just counts down from fifteen minutes and then restarts, having no relationship whatsoever to the whereabouts of the boats. With traffic being busy in the harbour, and weather and tides being a small but real factor, there are a few minutes of wiggle room on those quarter hour departures which can make or break your trip.

So like any self respecting geek I set about building a wildly over-engineered solution to a problem that amounts to a minor annoyance at best.

Because obviously!

At first I thought I might make use of the Translink API, which provides access to location data for buses and trains. My then boss and his fiancé had recently hacked up an app for their smart watches which worked out well, so it seemed a good place to start. Turns out seabus data isn't available though, so that was a non-starter.

Next I turned to aprs.fi, an app I stumbled across and then used to completely derail productivity one afternoon when I shared it around the office, sending everyone to stare out the window watching boats go by and comparing them to the app.

They do offer an API, however the time resolution is too low for a trip that takes only twelve minutes, so I tracked down and contacted Lee Woldanski, a local HAM radio operator who operates the Bowen Island APRS station relaying vessel telemetry to aprs.fi.

He was kind enough to do some digging into the receiver and offer me advice on collecting the data. Specifically he suggested I get my hands on an RTL-SDR, a cheap USB TV tuner device which had been hacked into a generic RF tuner. That turned out to be great advice, thanks Lee!

I'd heard of them but never played with one so I picked one up and started screwing around with it. After a bunch of reading, experimenting, and sitting around by the water with a laptop I was eventually able to receive AIS beacons.

My setup uses rtl_fm to tune to the necessary frequency and pipes the output over a fifo into aisdecoder. These are both running on a raspberry pi situated in a window of my previous employer's office, which has clear line of sight to both seabus terminals. The pi is connected by VPN to a server, to which aisdecoder relays the decoded AIS beacons via UDP. This aisdecoder guide was instrumental in getting everything working, as was this calibration guide on using kalibrate-rtl.

Sophisticated hardware!
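The receiving end of that UDP relay is just a small socket listener. Here's a stdlib-only sketch of roughly what that looks like; the port number is only an illustrative assumption (aisdecoder sends to whatever destination you configure), and the `max_messages` knob exists purely to make the loop stoppable:

```python
import socket

def run_listener(host="127.0.0.1", port=10110, handler=print, max_messages=None):
    """Receive relayed AIS sentences over UDP, one datagram per sentence,
    and hand each one to a handler. In production the loop runs forever;
    max_messages is here so the function can be exercised in tests."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    try:
        seen = 0
        while max_messages is None or seen < max_messages:
            data, _addr = sock.recvfrom(1024)  # AIS sentences fit well under 1 KiB
            handler(data.decode("ascii", errors="replace").strip())
            seen += 1
    finally:
        sock.close()
```

In practice the handler would decode and persist the sentence rather than print it, but the shape of the loop is the same.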

Now that I was receiving the beacons I had to actually do something with them. Turns out they're a bit cryptic at first blush.

/* these are all seabus beacons, can you tell? */
!AIVDM,1,1,,A,14eHnRUPA:G<MCTL<uQ`j6nP0D35,0*49
!AIVDM,1,1,,A,14eH07@00hG<T4FL=gE1JiBr06qd,0*1E
!AIVDM,1,1,,A,14eHnRUOhcG<M;4L<tlHko380HDs,0*7A
!AIVDM,1,1,,A,14eH07@00TG<T:pL=h01UQCT0@M8,0*76

Fortunately this clever guy Kurt Schwehr, who does some kind of crazy marine science when he's not working at Google or consulting with JPL, wrote libais to do all the hard stuff for me. So I set about building a listener to process and store the decoded AIS beacons as they came in. Using libais it came in at just over a hundred SLOC, thanks Kurt!
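libais handles the hard part, unpacking the armoured payload itself, but the outer NMEA framing — the comma-separated fields and the trailing `*XX` checksum — is simple enough to deal with in plain Python. A stdlib-only sketch, using one of the beacons above:

```python
from functools import reduce

def nmea_checksum(body: str) -> str:
    """NMEA checksum: XOR of every character between the leading '!' and the '*'."""
    return format(reduce(lambda acc, ch: acc ^ ord(ch), body, 0), "02X")

def parse_aivdm(sentence: str) -> dict:
    """Split an !AIVDM sentence into its fields and verify the checksum."""
    body, _, checksum = sentence.lstrip("!").partition("*")
    if nmea_checksum(body) != checksum.upper():
        raise ValueError("checksum mismatch")
    fields = body.split(",")
    # fields: talker, fragment count, fragment number, message id,
    #         channel, armoured payload, fill bits
    return {"channel": fields[4],
            "payload": fields[5],
            "fill_bits": int(fields[6])}

# e.g. parse_aivdm("!AIVDM,1,1,,A,14eHnRUPA:G<MCTL<uQ`j6nP0D35,0*49")
# yields the payload and fill-bit count you'd then hand to libais.
```

This only covers single-fragment sentences; multi-fragment messages need reassembly, which is another reason to let libais do the heavy lifting.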

Also extremely helpful during this process was this exhaustive reference to the AIVDM and AIVDO data sentence formats assembled by none other than Eric S. Raymond, thanks Eric!

Once the unpacked and decoded data started coming in I went way off on a data analysis tangent, but eventually I got back around to building stuff and started work on a web app to display realtime updates. Until recently it was very proof-of-concept, employing some poorly performing database queries and client-side polling every minute to check for updated data. Also it didn't have a cool domain until my friend Conor, whose citizenship allows him access to .us domains, picked up seab.us for me. Thanks Conor!

Over the last couple weeks I've added caching for the expensive queries and swapped out polling for push updates via websockets, which should not only improve performance server side but also be easier on my phone battery since I'm often checking it on mobile.

I've also been enjoying the recent addition of on-board wifi to some (but not all) of the boats in the seabus fleet so I've tried to document which boats do and don't have it and indicate that on the map. If I've made a mistake there (or anywhere else!) please tweet me or file a bug.

Next I want to build a simple model from the data I've collected to provide estimated arrival times based on current position. More on that, geopandas, jupyter, and other stuff to come ... once I figure it out!
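The crudest version of that model is just dead reckoning: remaining distance along the route divided by speed over ground, both of which come straight out of the AIS position reports. A sketch under that assumption (the distance and speed figures below are made up for illustration):

```python
def eta_minutes(remaining_nm: float, speed_knots: float) -> float:
    """Dead-reckoning ETA in minutes.

    AIS position reports carry speed over ground in knots, so keeping
    distances in nautical miles keeps the units honest:
    knots = nautical miles per hour.
    """
    if speed_knots <= 0:
        raise ValueError("vessel is not underway")
    return 60.0 * remaining_nm / speed_knots

# e.g. 1.6 nm of harbour left at 11 knots is just under nine minutes out
```

A real model would have to account for docking, loading, and the dwell time at each terminal, which is where the collected historical data comes in.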

https://rob.salmond.ca/seabus-tracking/

Mon, 25 Apr 2016 15:05:25 GMT

I recently had an opportunity to present a case study on a small project I was involved in to the attendees of DevOps Days Vancouver. At work we had a need to improve the setup time for a somewhat involved test scenario, which gave me a great excuse to play with Consul, something a well informed colleague of mine had been raving about.

I had a lot of fun both playing with Consul and talking about my experience with it at the conference. Here's my jam (5 min ignite format).

https://rob.salmond.ca/devops-days-vancouver/

Sat, 28 Nov 2015 23:52:36 GMT

I recently had a need to get a list of EC2 instance IDs by instance name using boto3. Most of the examples I found just make an unfiltered call to describe_instances() and iterate over the results, but I wasn't thrilled with that approach. The docs do have some example code, but it wasn't obvious to me how to do what I wanted and it took a few tries to work out the approach.
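The shape of the approach: pass a Filters argument to describe_instances() so the filtering happens on the AWS side, then flatten the nested Reservations/Instances structure into a plain list of IDs. A sketch — the tag pattern "web-*" is a made-up example, and the boto3 import is deferred so the pure flattening helper can be used (and tested) without AWS credentials:

```python
def ids_from_response(response: dict) -> list:
    """Flatten a describe_instances() response into a list of instance IDs."""
    return [instance["InstanceId"]
            for reservation in response["Reservations"]
            for instance in reservation["Instances"]]

def instance_ids_by_name(name_pattern: str) -> list:
    """Look up instance IDs whose Name tag matches a pattern, e.g. "web-*".

    The tag:Name filter matches on the value of the Name tag, and EC2
    filters accept * and ? wildcards.
    """
    import boto3  # deferred so the helper above works without boto3 installed

    ec2 = boto3.client("ec2")
    ids = []
    # Paginate, since describe_instances() caps results per call.
    for page in ec2.get_paginator("describe_instances").paginate(
            Filters=[{"Name": "tag:Name", "Values": [name_pattern]}]):
        ids.extend(ids_from_response(page))
    return ids
```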

https://rob.salmond.ca/filtering-instances-by-name-with-boto3/

Sat, 07 Feb 2015 20:04:01 GMT

I internet pretty hard, surfing far and wide and going to great lengths to find the goods. I write scripts and tools to sort through mountains of links from various sources. I subscribe to, and this is no exaggeration, hundreds of RSS feeds and hundreds of subreddits. I put the internet to work for me and I want to share the fruits of my labour.

I often post links on Facebook or Twitter which I think will have a broad appeal but I've decided to create a space to share writing that probably won't be of interest to everyone. This is mostly long form writing, essays, blog posts, and articles covering a range of topics from social issues I'm interested in to things like philosophy, art, politics, and yes some seriously nerdy technology stuff as well.

Because they're long and sometimes challenging pieces, with each link I'm including a quote or two that I found evocative or thought-provoking, to provide a hint of what you're getting into.

So, lovers of the written word and folks who enjoy having their imaginations pricked and their preconceptions prodded, I invite you to read over my shoulder, and I genuinely hope you enjoy: Rob is reading.

https://rob.salmond.ca/i-am-reading/

Mon, 15 Dec 2014 07:06:11 GMT

Santa seems to have grown weary of watching me get my earbuds torn from my head by passing doorknobs and cupboard fixtures, so he very generously provided me with a nice set of bluetooth headphones this year. They paired flawlessly to my phone before I'd even descended the subway on my way back from Best Buy Santa's workshop, but when I got them home they flat out refused to pair to my Chromebook.

The official state of Bluetooth audio on ChromeOS seems to be a bit dubious; it's implied that "certain devices" are supported, but it seems to be uncertain exactly which devices or to what extent. I did find a post in the chromebook-central Google group from someone who seemed to have my specific model of headphone, and chromebook, and bluetooth issue. The advice offered wasn't very helpful.

I haven't the faintest idea what is happening behind the scenes when a device is selected for pairing in the ChromeOS UI, but the output of bluetoothd looks like this (spoiler: shit ain't happenin).

Fortunately, as usual, if you bash around in bash long enough you can make the aforementioned shit happen. This will probably not work on a stock, non-rooted chromebook.

First, if you haven't already removed failed pairing attempts, do so. YMMV (your MAC may vary).

I dunno if mine is showing Trusted: yes because it's cached somehow, so if you don't get that right away you may need to run [bluetooth]# trust <mac address>. At this point you should be able to connect.

From here on a regular linux distro you'd need to start screwing with the audio settings to select the now-connected headphones as the default audio device, but the cras audio server seems to work this out on its own. If you see the Norse king's initials or whatever on the right side of your volume bar, you're in business.

I figured I'd need to do this every time I wanted to connect but it seems to persist through powering off the headphones as well as rebooting the lappy. If you're not as lucky and want to script these steps this forum post seems like a good place to start.
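If you do end up scripting it, one option is to feed the same command sequence to bluetoothctl on stdin, which it accepts non-interactively in recent BlueZ versions. A hypothetical sketch along those lines — the MAC address is a placeholder, and the exact sequence (remove, pair, trust, connect) is an assumption based on the steps above, so adjust to taste:

```python
import subprocess

def pairing_commands(mac: str) -> list:
    """The bluetoothctl command sequence, one command per line of stdin."""
    return [f"remove {mac}",   # clear any failed pairing attempts first
            f"pair {mac}",
            f"trust {mac}",    # may be redundant if the trust is already cached
            f"connect {mac}",
            "quit"]

def pair_headphones(mac: str) -> None:
    """Drive bluetoothctl by piping the command sequence to its stdin."""
    script = "\n".join(pairing_commands(mac)) + "\n"
    subprocess.run(["bluetoothctl"], input=script, text=True, check=True)

# usage (placeholder MAC): pair_headphones("AA:BB:CC:DD:EE:FF")
```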

https://rob.salmond.ca/chromeos-bluetooth-audio-jiggle-the-handle/

Fri, 12 Dec 2014 08:36:19 GMT

I really don't know much about Angular but I've been playing with it a bit lately, specifically paying attention to the Google Maps UI directive project. As the directive is being rapidly iterated on, and docs and example code are a little behind that work, folks like myself who are new to the framework as well as this UI project may be a bit baffled.

I did find enough there between the official docs and some of the (now broken but pointed in the correct direction) suggestions found tucked away in closed issues to piece it together. The project seems lively enough that better docs are likely to show up soon. Until then for those looking for something to paste into a project and start hacking, here's a gist that works with Angular 1.3.5 and angular-google-maps 2.0.11.