I’m reading the n-th article where someone mentions TDD (test-driven
development) as a magic word that means “doing testing” or something else, and I
thought I’d write down a few things as a note. There have been nearly
infinity plus one articles and discussions about testing and TDD. I don’t mean
to pile on the old, dead and old-dead horse here, but I keep hearing
language that makes me want to pull out a tobacco pipe near a fireplace and puff
“Well, actually …”, and that’s not great without helpful context and more
detail.

So, let me TLDR this real quick. There is no right or wrong way to test if
you’ve tried many different types and flavors and have your own personal
preference. There probably is a wrong way to test your project if you have
never had time, don’t care that much or someone sent you this link and this is
just another opinion
man.

The TLDR here is:

Every project is different and has different testing needs and opportunities.

Testing increases confidence and reduces “whoops” moments.

There’s no such thing as no testing. When you say you don’t have time for
tests, you mean automated tests. Everyone tests, you are just doing manual
and slow (over the project) testing.

Testing is a spectrum and TDD does not mean “I am doing testing”.

There are many flavors of testing and even though TDD is usually the most
dogmatic form of testing, it’s not “best”.

The Spectrum

No Testing

Me: Do you have tests?
Someone: Hehe, I know. We didn’t really have
time for this project. Look, I joined late and …

No one does “No Testing” but people think they do. This is typing the code,
never running it and shipping it to production or scheduling it to run for real.
You never observe it running. No one does this but they think they have no tests
when they only have manual tests.

Think about this with “hello world”. You would type hello world code, save it
and put it somewhere as production. You would dust off your hands, say “another
job well done!” and go home. No one does this. From here on out, this isn’t
what I mean by testing vs no testing. By testing, I mean automated testing
and that includes your local machine.

Pros:

You are infallible and you type the correct solution perfectly

You don’t waste any time by checking what you did at all

You are a physical manifestation of Ship It!

Cons:

As a perfect being, you feel alienated from your fellow man

You are an impossible construct

No one does No Testing. What they mean is Manual Testing. And this is the
point about time. They didn’t have time then for automated tests and they
are running manual tests now. Do they have time for manual tests now? Maybe.
I’m fine with it as long as it’s an informed decision and it’s not causing bugs/outages/delays.

Manual Testing

This is what most people do when they don’t have a test suite. You type hello
world, run it, look at the output.

The only way to verify correctness is with your fragile and tired human eyes

You are definitely going to be running ./some_program a lot

Some Partial Testing

This is when there are tests but maybe only small coverage or one type (like
unit tests only).

There’s a huge inflection point between partial testing and manual testing. The
manual testing project has never had time, doesn’t deeply care (or deeply know) and
has had little positive experience (if any) with testing. There is a huge gap
here in culture and minds. It could be developer minds, it could be boss minds,
who knows. This is the mind-gap where you have to explain what testing is. This
is the mind-gap where you try to tell stories and impart wisdom. This is where
you try to explain the feelings of having tests at your back when you are
trying to refactor or understand legacy code. Sometimes, you might sound crazy
or religious.

Cutting the team some slack, maybe there are constraints. Usually there are
constraints. Constraints can keep a project or team from making their tests
better. Maybe the domain, language, history or some other constraint is keeping
the tests from becoming better. But there is a test suite and someone cares or
understands at least some aspect of testing.

Maybe people are trying their best. But I would argue that partial-test teams
haven’t deeply felt tests helping them stay on track and ship quality projects.
If they can explain this blog post then I believe them. If they can’t, they
haven’t had time yet and maybe they will. It’s not their fault, but they also
aren’t treating testing as a required tool of the trade.

Pros:

At least you tried to write tests

Less developer time (in the short term)

Maybe you don’t waste effort trying to automate testing some horrible legacy
GUI with a hack

Minimal tests run fast

Cons:

Major aspects of the system aren’t covered

Confidence isn’t really there

Maybe it’s testing theater, maybe you give up on your tests

If you have a UI or web frontend, that bit probably breaks a lot with unit
tests only

Lack of testing ecosystem understanding and strategies to enable more coverage

Excuses can live here too. And some products are hard to test. But have these
options been tried?

I/O and Third-Party APIs (can be faked or the seams designed away with
dependency injection)

GUIs and Mobile device specifics (could be mocked?)

The product is actually on the moon, how can we simulate the moon? (decrease gravity?)
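To make the dependency-injection idea above concrete, here is a minimal sketch in Go (all names here are hypothetical, not from the original post): instead of calling a third-party API directly, the code under test depends on an interface, so a fake can be substituted and no network is needed.

```go
package main

import "fmt"

// Uploader is the seam: production code depends on this interface,
// not on a concrete third-party client.
type Uploader interface {
	Upload(name string, data []byte) error
}

// fakeUploader records calls in memory so tests need no network.
type fakeUploader struct {
	saved map[string][]byte
}

func (f *fakeUploader) Upload(name string, data []byte) error {
	f.saved[name] = data
	return nil
}

// Backup is the code under test; a real client or the fake can be
// injected interchangeably.
func Backup(u Uploader, name string, data []byte) error {
	return u.Upload(name, data)
}

func main() {
	fake := &fakeUploader{saved: map[string][]byte{}}
	if err := Backup(fake, "report.txt", []byte("hello")); err != nil {
		panic(err)
	}
	fmt.Println(string(fake.saved["report.txt"])) // prints "hello"
}
```

The design choice is the interface itself: once the seam exists, the real client and the fake are interchangeable, which is exactly what makes I/O-heavy code testable.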

I would argue some of these things can be mitigated, though you really need to
reach for a lot of tooling and language features to fake some of this stuff.
And maybe it’s hard to test everything. But that’s why you don’t need 100%
test coverage. But yes, some projects are hard to test. Some code is hard to
test too, but sometimes that can be fixed and the developer learns a ton about
refactoring, their language and design.

I had a project where I thought AWS services were hard to test and my app
was breaking in weird ways every time I pushed it out. Then I researched a bit,
found some tricks, and my app wasn’t so different between my laptop tests and public reality.

Complete Practical Testing

Some form of complete testing where units are tested and the system is tested
(like UI testing or regression tests).

Engineers on the team have a culture of including tests with work estimates and
expectations. The organization either supports this kind of work explicitly or
implicitly. It doesn’t really matter. This is a purely practical decision and
there is limited value in “abstract testing values”. Tests exist, that’s good
enough.

This probably means the project runs many different test suites, where
regression tests are run occasionally but some other group of tests is run more
often as code happens.

This is trickier than it seems. It means that git commits, pushes, CI and
other tools all have this culture baked in. You aren’t going to run integration
tests all the time, and everyone has to know that to be as quick as they can be.
Scripts to separate tests have to exist. You can’t just run all tests that
match under test/*.

Where this ends is in philosophy and world view. There’s no perceived value (at
least as far as schedule, job, work, too busy etc) in doing anything
differently. As long as tests exist, that’s good enough. It stops production
bugs.

Pros:

Probably good enough for a lot of projects.

Production bugs are caught beforehand as long as the feature is correct.

Pragmatic and religion-free.

Cons:

Requirements might be misunderstood or misremembered. Implementation comes
first and the test comes later, how good are your notes?

Test First

It doesn’t matter how good your notes are here. The thing you don’t do is write any code in src or lib.
You don’t even start. You don’t even spike. You write a test. Hopefully your
test is in a reaction to a conversation you just had with someone who is paying
you. Hopefully your quickness to writing a test captures a conversation in an
executable format that is checked in and lives and acts. Compare this with an
email or a Slack message which sits and rots.

Pros/Cons: I don’t know many projects doing purely this. I guess the pro is not
being religious about letting tests drive the design.

Test Driven Development

You let the tests lead design decisions. This is hands-off-the-wheel dogmatic
stuff. You limit your thinking really. Requirements are captured early and
tests are written first. But more so, you let go of the wheel. If something is
hard to test, that means the design is wrong. If a test can’t be written then
you don’t start the feature. If you can’t start the test, you ask someone a
question. See how testing is the thing?

You don’t really need to do TDD to have confidence before deployment. But it’s
trying to fight human behavior. Almost every step is trying to fight some
historically problematic behavior (except for manual and no testing).

You probably need a tight feedback loop, tooling and automation to make this
happen. It’s also not the best way to test just because it’s at the end of
this list.

Cons:

If you are completely lost, it doesn’t work

It can be more annoying at the start of the project if working bottom up

Pros:

It can be a creative tool where a design emerges, the problem is solved and by
definition the code is easy to test

It limits getting distracted while working on a feature - this is subtle and
non-obvious from the outside

Enablers and Multipliers

Automation

Let’s say your flow is something like this:

Work on feature

Write tests

Run tests (old ones and new ones)

Ship it!

While working, you might just run your tests or tests that are relevant to your
feature/work. But before you ship it to production, you’re going to make sure
you don’t break everything right? So, just have a tool that runs your tests.
Have the tool tell you when it passes. Don’t deploy until it passes. Call this
continuous integration (CI).

Now something else happens when you have continuous testing. You can have
continuous deployment. So, tests passed and you have a build that those tests ran
against, right? Then that build is good to go! Why throw it away? Why deploy
from your laptop? Deploy it! This is continuous deployment (CD). Note that
you don’t need to do CI or CD but testing is enabling you to do so.

Tests still work. Looks good? Maybe cleanup tests and try one more thing?

Tests still work. Wow, this came out nice and I know I didn’t break
anything.

Would you have refactored and tried one more thing if you didn’t have tests?

Boss: “Hey, right now our calculator only has numbers on it. Could we put a
plus sign on there and call it addition?” You: “Sure!”

You go to the repo and start work. You add some code to handle the addition, in
this contrived world life is simple.

Add a function/method called add that takes two numbers

Write a test for the happy path (numbers like 1 and 2)

Write a few tests for failing paths (strings plus strings, not allowed)

You don’t need to cover all permutations, in your head you probably have an
idea of what bad data is, do that
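The steps above could be sketched like this in Go (a hypothetical sketch; the post never names a language, and Add accepts interface{} here purely so we can model the “strings are not allowed” failing path):

```go
package main

import (
	"errors"
	"fmt"
)

// Add is a hypothetical calculator function. It accepts interface{} so we
// can demonstrate the failing path for non-numeric input.
func Add(a, b interface{}) (int, error) {
	x, ok := a.(int)
	if !ok {
		return 0, errors.New("not a number")
	}
	y, ok := b.(int)
	if !ok {
		return 0, errors.New("not a number")
	}
	return x + y, nil
}

func main() {
	// Happy path: numbers like 1 and 2.
	if sum, err := Add(1, 2); err != nil || sum != 3 {
		panic("happy path failed")
	}
	// Failing path: strings (even emoji) plus strings, not allowed.
	if _, err := Add("🎉", "oops"); err == nil {
		panic("expected an error for non-numbers")
	}
	fmt.Println("all checks passed")
}
```

In a real project these checks would live in a test file run by your test tool, not in main; the point is that the happy path and a few bad-data paths are captured as executable checks.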

Ready to deploy? Great! Oops. Someone noticed that you forgot to add a test
in case an emoji character shows up. Ok, write a test for that. What is your
confidence like now? You have a lot of edge cases covered. Is it going to
work?

Executable Documentation

How does this code work? What is it supposed to do? Maybe you have code
comments (but they bit-rot), maybe you have language features like annotations,
typespecs or something. Maybe you don’t. But how do you use the calculator
from the previous example? Can it handle numbers that are strings? It can’t!
Did you write that down? Can you yourself remember in a few months? Tests
really aren’t docs but they are executable and they can stand in as docs,
especially as API usage. So until you write docs (and maybe you won’t), tests
can act as capability documentation, a la “What was the developer thinking?”

Hooks and Addons

You have this testing habit now, why not add some libraries?

When you run your tests:

Notify a channel in Slack that master has failed (broken build)

Pass a random seed in to see if tests, modules or files interfere with each
other

See if race conditions happen

See how much of your code executes (coverage)

Load dummy data in to test against (fixtures and test factories)

Notice all these enablers and multipliers happening now that you have a test
step.

Wrapping Up

Let me head back to the start of the spectrum. “We don’t have tests.”

Compare the workflow where someone just types a program, copies it to a
server and then closes their laptop. It seems insane from the very dogmatic
land of TDD, because that point of view is beyond just “having tests”. But that
doesn’t mean TDD is “best”. There is a knowledge gap between each of the
points on the spectrum. I’m ok with ignoring parts of the spectrum for a
project if it’s understood. If you can explain this blog post to me then we’re
cool; I’ll chalk it up to semantics. If you cannot, then I feel like there is a
blind spot, and any pain points the project is feeling are fair game to improve.
If you cannot explain the spectrum to me, then the term TDD is being misused as
“testing” and I’d like to explain and help because there is some wisdom to be
shared.

There’s very little right or wrong in all this. I am trying to communicate that
“no tests” does not mean “no testing”. You are probably doing manual testing.
And for certain projects, who cares? Do you care? Do you see yourself hating
the manual testing? Then automate it. Are you manually typing in a test user
and password, clicking a login button and then clicking around to see if
anything broke? Does that annoy you even to talk about it? Then automate it.
Are you hand-crafting test data and sharing it with colleagues? Then automate
the creation of your test data.

There’s no such thing as no testing, and TDD is not required, although I have
enjoyed TDD or Test-First quite a bit when I’ve had the opportunity to use it
as an approach. It’s not required to go all the way to TDD because testing is
a spectrum.

Gb is a fantastic tool for Golang that lets you define dependencies, but more importantly (to me) it lets
you work out of a normal[1] src directory wherever you want. You don’t have to mess with $GOPATH and you
don’t have to put your own creations next to libraries. You could even code directly in Dropbox if you
wanted to be super lazy about source control and sharing. Overall, I really like gb for projects. It’s more
normal compared to other languages and I don’t have to make Go the exception to my project backups / paths /
scripts / everything.

But I think examples are lacking. The gb docs are great, I’m not saying that. I just wanted to walk through
growing a project from small to medium to large and see how organization changes. First, we’ll start by
building a fake calculator with no working pieces so it doesn’t need a lot of organization. Then as we add
features, we’ll pretend that it needs lots of separation and structure for future expansion and work.

You’ll need to install gb with go get. You probably already have it installed and you
know how to google so I’ll just skip that stuff.

I’m going to use the terms small / medium / large but please note that doesn’t mean stupid / insignificant /
important. These size terms are just for labeling and explanation, don’t read anything else into it. If you
make a small project, it’s not “stupid”, just as a large project is not automatically “important”[2].

Minimum GB

First, a gb project is really just a directory with a src directory in it. Of course, nothing will work
without some files for it to build. You’ll get the same error even if you make gb_project/src
(which gb looks under for source files).

Gb wants a subdirectory for a package under src to tell it what to build. For our examples let’s make a pretend calculator.
Our working directory is going to be pretend_calculator. This can be anywhere. Under your home, tmp or
Desktop. Put it wherever you want. Just assume we’re in pretend_calculator as the project root after this point.

$ mkdir -p pretend_calculator/src/calculator

Let’s write minimal code for this to build.

// src/calculator/calculator.go
package main

func main() {}

$ cd pretend_calculator
$ gb build
calculator # showing us gb built the pkg, I'm going to omit this output from here on out

So our project tree looks like this:

.
├── src
│   └── calculator
│       └── calculator.go

When you gb build, it will create a binary ./bin/calculator that doesn’t print anything (not surprising,
our main is empty). This project layout isn’t that great because the main is really a cmd. If we wanted to
add more than one executable, we’d have to change where the main() is and rename a few directories and files.
So this isn’t great even if we’re building an equivalent of Hello World; it’s hard to tell where func main()
is if you just look at the filesystem.

So let’s make this more obvious. Let’s create the start of a simple gb project with a command entry point.

Small Gb Project Example

In this case, we want some actual code that runs something. We’ll have everything in one file under cmd/.
Later, we’ll move some code out to a package as the project examples grow in size. The cmd folder in gb
projects tells gb to build binaries named after the file or the package. It’s the executable we’re going
to run from ./bin.

Now this is a bit tricky. If you name your source file src/cmd/calculator.go then you’ll get a binary
called cmd. So what I’d do is name it something like src/cmd/calculator/main.go just to show that this is
where the main lives for this binary. You can name the file something other than main.go but it needs to be
in a subdirectory. The gb docs are a bit vague in their example
tree output describing this. Also, note that binaries will always show up in ./bin, so I’m skipping that
output in the tree listings.

So this is a nice layout for a small CLI app with not too much logic that would be ok to put into a single
file under cmd. If I wanted to break it apart more where the entry point (the main) and the app logic and
functions were separated and kept organized, I’d use the medium project layout which we’re going to
talk about next.

You could also just add functions to main.go to keep that file clean and then later move the functions
around to different packages.
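The post doesn’t show the small layout’s file here, so this is a hypothetical sketch of what src/cmd/calculator/main.go might contain at this stage, with everything in one file:

```go
// src/cmd/calculator/main.go (hypothetical sketch for the small layout)
package main

import "fmt"

// add lives right in main.go for now; later we'll move it into a package.
func add(a, b int) int {
	return a + b
}

func main() {
	fmt.Println(add(1, 2))
}
```

After `gb build`, this would produce ./bin/calculator because of the src/cmd/calculator directory name.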

Medium Gb Project Example

Let’s move some of the app logic to another file and package. This can be super confusing and yet it’s the
most common thing to do (in my opinion) when working with Go projects. We’re going to make an add function in
a new file and a new package called calculator. Note that this package name is sort of arbitrary; it doesn’t
need to be your project folder name or anything. Packages are subfolders under src. This will be clearer in
the next gb project examples.

Note that we would be planning on putting all functions into src/calculator/calculator.go here. If we
wanted to only put the Add function into src/calculator/add.go, we could do that. In the context of a
medium sized Go project, we might not want to do that.

Also note that the main.go needs to import calculator. This refers to the package we created. If we want
sub-packages and more sub-division, we can do that but we’ll get to that in a bit.

Large-ish Gb Project Example

Just a reminder, my label of large is very arbitrary.

Ok, now what if we want to add more functions and packages? We can continue to do so across files and
packages. Let’s add subtraction and the concept of memory storage (you know, the MR button?).

Adding subtraction is the same as addition. We just add a Subtract function to
src/calculator/calculator.go with a capital letter to export it. It’s the same as Add. We could split this
out to different files if we wanted. Maybe that’s more interesting. We’ll do that in the next example.

Let’s add memory storage. We need to create a struct to store stuff in. So our memory.go code is going to
have a struct initializer in it. The function naming is just Go convention, nothing here is specific to gb.

// src/calculator/memory.go
package calculator

type memory struct {
	register int
}

func NewMemory() memory {
	return memory{
		register: 0,
	}
}

// MR means memory recall, it returns the contents of a number in memory
func (m *memory) MR() int {
	return m.register
}

// MS means memory store, it stores a number (normally would be the screen)
func (m *memory) MS(number int) {
	m.register = number
}

We only export the NewMemory function to keep people from creating structs themselves.
Using this struct in main.go for the command goes like this:
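The main.go snippet appears to be missing here, so this is a self-contained sketch of the usage (the memory type is inlined so the sketch runs standalone; in the gb project it lives in src/calculator/memory.go as shown above):

```go
package main

import "fmt"

// memory is inlined here so the sketch runs standalone; in the gb project
// it lives in src/calculator/memory.go.
type memory struct {
	register int
}

func NewMemory() memory {
	return memory{register: 0}
}

func (m *memory) MR() int {
	return m.register
}

func (m *memory) MS(number int) {
	m.register = number
}

func main() {
	mem := NewMemory()
	mem.MS(42)            // MS = memory store
	fmt.Println(mem.MR()) // MR = memory recall; prints 42
}
```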

Larger Gb Example

Let’s split out every function into a file to make the project very easy to navigate. Intuition should drive
file search. Add and subtract will go in their own files. We’ll add the concept of a tape to display
information that will also have the opportunity to save state that will make the memory feature more realistic
to how physical calculators work.

I said we’d break out functions into intuitive files. Let’s put Add() into add.go

// src/calculator/tape/tape.go
package tape

import "fmt"

// represents an empty memory instead of using nil which does not communicate well
const emptyRegister = 0

// For simplicity's sake, the calculator tape is essentially the entire electronics
// of this fake calculator. A tape probably wouldn't care about current/previous
// values for undo functionality.
type tape struct {
	lastNumber    int
	CurrentNumber int
}

func NewTape() tape {
	return tape{
		lastNumber: emptyRegister,
	}
}

func (s *tape) Clear() {
	s.CurrentNumber = emptyRegister
}

// Updates the internal state of the tape
func (s *tape) Update(number int) {
	s.lastNumber = s.CurrentNumber
	s.CurrentNumber = number
}

// Displays the current number
func (s *tape) Display(message string) {
	fmt.Printf("| %-22s|%7.1f|\n", message, float32(s.CurrentNumber))
}

// Just print a blank line like the calculator tape is advancing
func (s *tape) Advance() {
	fmt.Printf("|%31s|\n", "")
}

// Roll the tape back, behaves kind of like one-time undo
func (s *tape) Rollback() {
	s.CurrentNumber = s.lastNumber
	s.lastNumber = emptyRegister
}

func formatNumber(number int) string {
	if number == emptyRegister {
		return " "
	}
	return fmt.Sprintf("%d", number)
}

It’s very similar to the last examples, just more code. We have some types and structs in this file
but you can see that any Capitalized anything is expected to be used externally. The package has no hierarchy
but later when we use it, we’ll need to alias it.

The changes to the last project are simpler than they seem. All we did was:

We made a directory called src/calculator/function. The package is now calculator/function.

We split Add and Subtract into files named add.go and subtract.go. We don’t explicitly need to care
about this when importing.

Each of these new files has package function at the top. You can’t declare package calculator/function at the
top. Doing that won’t even pass go fmt, it will error.

Memory.go stays the same, it’s in the root calculator package just because.

Our main file has expanded dramatically as we try to exercise the new packages and files we’re making.
We need to give calculator/function an alias (in this case fn) to use a hierarchical package.
The alias name is very arbitrary. We’re still using Memory out of the calculator package so we need to import
that explicitly like we were before. If you wanted to break memory out, you’d follow what we did with add &
subtract.

Wrap Up

I hope this was interesting. I’ve been wanting an article like this to exist ever since I started using gb as
a tool. I’ve found example gb projects on GitHub that were useful, but believe me, I’m blogging
all this for myself as a future reference. Shoot me a note on Twitter if you liked this or would like to see
something else; it’s nice to know who’s reading.

[1] Nothing is normal

[2] I prefer better/worse over good/bad. Here that would be smaller/larger, which is awkward in this
case. The only rock we have to stand on in C.S. is metrics; everything else is opinion (like
this very statement!).

I wrote about setting up a Dev Log about two years
ago. At this point, I’ve been using this setup for two years so I thought I’d write a little bit about it as
a follow up. After all, I hate uninvolved advice.

What Have I Learned

I’ve learned that the dev log works as a pensieve. It’s a dumping ground for code snippets
and dreams. I found it a good outlet for frustrations too. But most importantly, it’s
like an archeology site. Let me give you the best payoff of the dev log as a small story.

SSL Only Mystery

We use an external API for mobile stats tracking. It will track installs and other things
from the app store. It’s wired up to our own API through a webhook. This webhook has a
URL configuration. Originally, it was something like http://api.example.com/... and it had
a payload and other such details. We hadn’t received data from them in many, many months.
I started looking at this but basically had no context other than this.

Of course, the first question in debugging is what has changed. So what did change? I
didn’t think anyone had messed with the config in months because essentially this service was a
set-and-forget kind of thing. We also hadn’t received Android or iOS data on the same day. Too
suspicious, too convenient. So, I knew the date it stopped working. Let’s go to the dev log!

What did we find on the day that it stopped working:

Switch to temporary SSL redirects by default

Later, there are some clues that we were working on making the temporary redirects into
permanent redirects. There are notes all around this timeframe that we were working on making
the site SSL-only. Ah.

Change the webhook to https and bam, we start getting rows in the database of payloads.
The URL didn’t jump out to me as wrong. It’s obvious now but the dev log helped trigger
some clues around this. The clues were also in the git log but not the surrounding context
that we were working on making the site SSL-only.

It especially didn’t seem wrong since redirects are supported. It turns out the service we are using
doesn’t handle redirects (or at least seems not to). Just looking at the URL as http:// doesn’t
seem wrong at all. But with the dev log context, it does. This is what changed.

The Surprise of the Double Me

Just as when you don’t have a pair and you need to be the “high level person” and the
“low level person” all at once, I’ve found that my complaints and frustrations come off
TO MYSELF as whining. This is amazing. Let me say this again.

Logging frustrations in my dev log comes off as “whining” to myself later.

I still think this is good if it’s a healthy outlet. It’s not good if it lets you polish your whining so it
can be delivered as a pithy zinger to an unsuspecting listener. The dev log is about capturing your thoughts.
Be careful what your thoughts are, you might get what you want. I still like to capture task changes as this
represents time lost or spent. Maybe this sounds like whining in the log but that’s ok.

Do Not

Don’t tag or organize your thoughts into an ontology or fancy structure. The idea is to get in and get out.
One friend is good enough with org-mode that he was able to structure his log more than me. That’s fine.
Make it your own. But don’t start making per-project logs, I think it would just self-destruct under ceremony
burdens. The dev log is something I write in during context switches. Get in and get out. See my previous
post for shortcut keys.

I however would leave clues for myself like LEARNED: or TIL which could be used for retros. Or PR: S3
refactor if I opened a pull request. The idea is to capture what you have been doing or what your time is
being spent on. I capture interruptions or helping someone too. That’s a great thing to jot down when you
first come back to your desk or switch from Slack.

Helped Dumbledore with Docker

Two Years Later

10435 lines of text and two years. My intention or goal was never length. It was always the pensieve.
Reading back on it is a massive log of bugs, TILs, tech gotchas and a frustration heat-sink. There are
face-palm mistakes, logs of miscommunications, “this library doesn’t do that” notes and details.

2017-02-28 - Tuesday

Trying to do a deploy, S3 goes down hard and breaks the Internet.

There are rabbit hole results with fantastic details right before you come back from the rabbit hole:

Envconsul won’t work for us because of our combination of unicorn zero downtime deployment configs, how we
want to handle ENV restarts and a limitation of Go. Envconsul won’t work because it does in fact restart
the app correctly if -once is passed and you -QUIT the worker. But since -once has been passed, you can’t
reload the environment.

https://github.com/hashicorp/envconsul/issues/52

This is the detail I wanted to capture so I can chunk it later as “we can’t use envconsul” and then I can just
text search for this later. This is how it actually worked many times.

The concrete example I’m going to use is my previous blog post
about Slop where I demonstrated how to use the slop
gem. The code in that post is slightly contrived and certainly not clean but I think it demonstrates how to
test CLI scripts which suffer from some testability problems (how do you capture STDOUT?). The thing that
it does not demonstrate is long term maintenance problems that happen after it’s written once for a blog
post.

Code review aside, this desire to have a binary CLI was inspired by a very real work situation where we had
a CLI utility and, not surprisingly, it was damaged by some gem and dependency problems. Mainly, if I use
(consume as a user) the slop gem, it’s in my bundle. If my list of gems grows forever, eventually I might
want to develop another gem that uses slop as a dev. So now I need to use RVM’s gemsets or gem_home or
otherwise keep my gems and projects sandboxed. Because (as it did happen) pry uses slop, and when pry stayed
behind it caused slop problems between projects. Distributing this gem to our team was problematic because
different people used different gem isolation tools.

So … uh … what if I just want a CLI? Why can’t I just live and die in /usr/local/bin like “normal” unix-y
utilities do?

Golang to the Rescue?

So for the past few years I’ve experimented with Go as a tool in the toolbelt for the above problem. It has
fast compile times, can cross-compile to other cpu types and you can get a binary even for a web service.
Shipping a binary for an api service sounds pretty neat! However, it lacks high-level density (usually called
expressiveness). So without starting a language war, what if I want something in-between loose shell scripts
and strict compiled C (not that I’m specifically talking about shell or C)?

Ruby is so close to shell script sometimes and then you can drop into the “real stuff” for the heavy lifting
and then just continue in happy script land. I feel like a lot of shell script problems align with this flow.
Looping over images and doing mass conversion for example. It’s just a little bit of heavy algorithm
surrounded by a lot of shell stuff, which is great. So Ruby has been fine in that way. But then not fine
for it to live in $PATH.

Go as an experiment has been fine while I’ve sought a panacea for $PATH. Go has a lot of interesting things
in it and I’m not giving up on it. But porting isn’t real. Rewriting is real. Porting Ruby to Go is a
rewrite. You really need to go back to requirements / thinking and you will feel tempted to refactor. It’s
closer to rewriting I mean. It works the other way too. I’ve seen “Java in Ruby” in a lot of libraries.

There’s no such thing as porting. Only rewriting.

I’ll show otherwise later.

What Sharing Ruby is Like

So if I make a hello world CLI in Ruby called utility, how do I share it?

Here I list the dependencies that are implied in the top box. In Ruby there are many.
Many times they aren’t listed or described. If you are a Ruby dev, you just know that things start with
bundle exec; you probably have it aliased. If you aren’t, you are confused and probably don’t use the thing
because the README didn’t work.

Maybe this above in the middle is the source code I’m trying to share. Scripts can be committed with file
permissions so the chmod on the left isn’t entirely needed. What is definitely needed is some path setup
which may or may not already be configured. I suppose you could put utility into /usr/local/bin but then
it’s like an oddball exception. brew list won’t show it and it’ll never be updated. You’ll just have to
remember you installed utility as a one-off? Uhh …

Basically it boils down to this:

“Please install a dev environment” vs “Please use a package manager.”

You can see that on the left I’m basically asking a user to install a dev environment for a Ruby program. And
then as time progresses, what happens to that dev environment? Does it bit-rot? Does homebrew break it?

And maybe you might say “just gem publish”. Phusion used to do this for passenger. And logstash. But then
they stopped. Using rubygems to distribute ruby code is sometimes done but then sometimes it’s frowned upon.
I’m not sure exactly why and I don’t have a source although codegangsta kinda hints at
it.

This isn’t a ruby problem. The same thing happens with node & python. But when I run into a utility written in
Go, I breathe a sigh of relief.

It’s written in golang. Woo! This should be easy to install and run.

Worst case it’s a go get. Sometimes it’s a brew install. I think these mechanics keep people from
packaging ruby utilities into homebrew. I know there are packages that help with this like Phusion’s
tool and FPM but I just
don’t see that a lot. Most of the time the README just says gem install but they skip all the context that
I diagrammed up there. Even my own projects blow up on me. Sometimes I have to reset bundler and ruby (OSX
upgrades). Then I’m missing a gem.

The fix: gem install slop. I had already done bundle before but cleaning out gems, upgrading homebrew,
upgrading to Sierra or switching from rbenv/chruby/rvm and back and forth can leave this script “broken”.

So, what to do? I just want a command in my path. Do I have to switch languages?

The Enthusiast Trench is a metaphor for a topic/hobby/community/pastime that can’t easily be observed
and understood by outsiders without a similar amount of interest or involvement from the curious party.

There isn’t just one Enthusiast Trench. There are many trenches and they are easy to find if you are walking
on the surface of the earth. It’s like the concept of rabbit holes but rabbit holes or rabbit holing
is usually a pejorative about wasting time. Enthusiast Trenches are about interest, enthusiasm and the
hidden nature of the payoff in these things until you spend enough time to appreciate them. At that point, you are
in the trench and now you are unable to explain to outsiders what you have learned and witnessed in
the Enthusiast Trench. The trench in The Enthusiast Trench metaphor isn’t a pejorative. It isn’t related to dirt or
digging. Enthusiast Trenches aren’t good or bad.

Anything whose fun can’t be explained is probably an Enthusiast Trench.
When a person has to resort to metaphors, they are trying to think of things that surface people
have seen and use those as stand-ins for things they have seen underground in the Trench.

If you listened to someone talk about why they built a life-sized Lost in Space blinking computer replica
they might tell you “it was fun” but if you probe “why” then they are going to have a rough time explaining
it. The raw answer in their head is probably something like:

I didn’t think I’d be able to get the neon bulbs refresh time to be precise enough to look like the
original Burrows props. But, after I did some tests and talked with some friends that I met (and have
become good friends with since), I knew I could get the full scale version working. Then it was just a matter
of time …

The Trench isn’t this project or this person. It’s the whole community of people doing projects like this.
The Trench hides the real “why” behind a time and interest wall.

A community where mods, hacks or extensions are plentiful is a strong indicator of an
enthusiast trench. The important thing about Enthusiast Trenches is not deciding whether something
is or isn’t one. It’s that they can’t be easily appreciated from the outside.

I can think of a lot of examples but some of the biggest trenches are the ones that are abstract and not
physical. Photography is one but it can be demonstrated physically (maybe not the process but the product).
The abstract trenches are really tricky. So, naturally, being a software person I can think of a lot of
software trenches.

Examples

A working irc client in minecraft using mods.

A raid-proof base in Rust (a survival/building game), designed in an external CAD program with mods.

A development board with the PCB shape of a Lego minifig

These examples pictured above are easily demonstrable because they are visual or physical. Abstract things are not.

Libs

This is true for software libraries in every language I can think of. Maybe I’m not in some of these communities.
Maybe I haven’t been in the communities for a long time. I might ask the question “what are modern
libraries to use in Java these days”? This is like calling down to someone in the trench after you have
left. People are extending tunnels that can’t easily be explained.

Maybe software libraries aren’t purely fun. But people can be enthusiastic about them because they are amazing in
their eyes. If you are an outsider, you won’t be able to see the fun in the interior tunnels of their trench.

Fear of Missing Out

There is definitely a relation to the fear of missing out (FOMO).
You could feel bad about not being in all trenches and many times I do.
I don’t want to encourage FOMO. I don’t want to give FOMO any more fuel.
I don’t really have a solution to FOMO and really that’s a different topic.

I follow the Cities: Skylines subreddit but I don’t play the game. I
know people are having fun. I sort of understand the game mechanics and the game loop. But there are a lot of mods and deep
mechanics I don’t get. This is true of a lot of games with “mods”. The community is digging its own
trenches from within a trench by extending the game. But I really don’t grok the fun.

Sometimes, I just let the weight of the trenches flow over me and appreciate the complexity.
Like looking at a landscape from really far away. It’s beautiful because it’s missing the details.

If you are working on a gem that uses slop itself (your gem uses slop) then
you might run into this error when adding pry. Because the latest published
pry gem uses slop 3.6 but you are probably using slop 4. Slop 4 and 3 aren’t
the same API.

Resolving dependencies...
Bundler could not find compatible versions for gem "slop":
In snapshot (Gemfile.lock):
slop (= 4.2.1)
In Gemfile:
my_cool_gem_im_working_on was resolved to 0.2.0, which depends on
slop (~> 4.2)
pry (= 0.10.1) was resolved to 0.10.1, which depends on
slop (~> 3.4)
Running `bundle update` will rebuild your snapshot from scratch, using only
the gems in your Gemfile, which may resolve the conflict.

This is true for pry 0.10.2 too. There are two options I’ve found that work:

Update Pry

tl;dr Do this

Install 0.10.3 or newer. Make sure your bundle is resolving to that exact version.
This means

# your Gemfile
gem "pry", "= 0.10.3"

in your Gemfile. If you are working on a gem and
don’t really have a Gemfile but have a gemspec file then put this dev dependency in your gemspec.

# your .gemspec file
spec.add_development_dependency "pry", '= 0.10.3'

Install From Master

You could also install pry from github master. This might show up as 0.10.3 depending on when
you are reading this. Version numbers only increment when pry does a release. I found
that pry git master did not have this issue.

Now the problem here is, if you are working on a gem yourself, you don’t have a Gemfile.
Afaik, you can’t install a gem from github source via a gemspec (that wouldn’t make sense
because you are going to distribute a gem!). But perhaps you want pry
temporarily in your gemspec like this:

# your_gem.gemspec
spec.add_development_dependency "pry", '=0.10.3'

Here’s how you can install a gem from source in a gemspec temporarily.

# do what you want here but I clone into a global place called ~/src/vendor
mkdir -p ~/src/vendor
cd ~/src/vendor
git clone https://github.com/pry/pry
cd pry
gem build pry.gemspec
# it will spit out a pry gem with a version on it
gem install pry-0.10.3.gem # or whatever `.gem` file is created

Now we have pry 0.10.3. Bundler doesn’t care that it came from pry master. So when it
picks up on the spec.add_development_dependency it will install the version
you already have. BUT BIG PROBLEM: you probably don’t want to commit this
because people will get the same error you got on bundle install if
that version doesn’t resolve. As far as I can tell, this pry version
works with slop so perhaps you just want to use 0.10.3 and be done with this.
I just wanted to illustrate how you can manipulate bundler.

Pry Vendored Slop

The reason this is happening is because of the slop namespace.
Pry fixed this in a commit associated with that issue. It’s fixed because they inlined
the gem as Pry::Slop so now Slop (your version) doesn’t conflict/activate.

Slop 4

I had an older post about ruby and slop but that’s with Slop 3 which is basically locked to Ruby 1.9.
No doubt, this post will bitrot too so please pay attention to the post date. The current ruby
is about 2.3.0, slop 4.3 is current, it’s 2016 and the US election cycle is awful.

It’s ok that you need help

I think the most confusing thing about slop is that it has great examples and documentation but
when you try to break this apart in a real app with small methods and single responsibilities
some things sort of get weird. I think this is because of exception handling as logic control
but I’m not sure enough to say slop is doing something wrong that makes this weird.

I refer back to MY OWN BLOG quite often for slop examples, so it’s ok that you need help.

I disagree with -h here for hosts. I think -h should always be help. This is especially true
when switching contexts. When I switch to java or node or go or python, I have no idea
what those communities’ standards are. I rely on what unix expects: dash aitch.
I disagree also with this example because figuring out how to handle -h for help
is the most confusing thing about using slop because you have to use exceptions
as flow control (sort of an anti-pattern).

Thoughtbot has an excellent and much desired article on getting Docker + Rspec + Serverspec wired up but I couldn’t find anything about images generated from Packer. Packer generates its own images and so we can’t just build_from_dir('.'). Our images are already built at that point. We’re using Packer to run Chef and other things beyond what vanilla Docker can do.

The fix is really simple after I was poking around in pry looking at the serverspec API.

First of all, what am I even talking about? Serverspec is like rspec for your server. It has matchers and objects like

describe file('/etc/passwd') do
it { should exist }
end
describe file('/var/run/unicorn.sock') do
it { should be_socket }
end

So although we have application tests of varying styles and application
monitors, serverspec allows us to test our server just like an integration test
before we deploy. I had previously tried to go down this route with test
kitchen to test our chef recipes but it was sort of picky about paths.
Additionally, going with serverspec and docker doesn’t even require Chef. Chef
has already been run at this point! What this means is fast tests. Just
booting a docker image and running a command is fast.

# single test
$ time bundle exec rspec
1.415 total

Nice!

So how does this work? Well, like I said the thoughtbot article is really good but I wanted to add to the
‘net about packer specifically. The critical piece to make Serverspec work with a Docker image
created from Packer is in your spec itself (spec/yer_image_name/yer_image_name_spec.rb).

# spec_helper and a lot of spec/ came from `serverspec-init`
require 'spec_helper'
require "docker"

describe "yer_packer_image" do
  before(:all) do
    image = Docker::Image.get("yer_package_image")
    set :os, family: :debian
    # this can be set per spec
    # describe package('httpd'), :if => os[:family] == 'redhat' do
    #   it { should be_installed }
    # end
    set :backend, :docker
    set :docker_image, image.id
  end

  it "has bash installed" do
    expect(package("bash")).to be_installed
  end
end

See that image = Docker::Image.get("yer_package_image") bit in the before block? This
is the difference between build my image (what the thoughtbot article uses)
and run an existing image. Since packer builds the image, we can just reuse
the one we have from our local store. Then later, set :docker_image, image.id sets
the image to use during the test. It knows about docker because of require "docker" from
serverspec. I’ll mention what versions of these gems I’m using at the time of this post
since this might bit-rot.

The path structure is arbitrary above. We have a project we’re currently
working on that I’ll explain in another blog post or talk. The only specifics
about this file structure is that typically you’d want to do something like
require 'spec_helper' but if you are building an image from a subdirectory
and then running tests from another nested subdirectory then you’ll need to
require_relative 'spec_helper'. I actually don’t know why this isn’t the
default anyway.

But like I said, running tests with Packer as a post processor doesn’t do
anything. You could run it with PACKER_DEBUG or something but I don’t like any
of that. I’ll be following up with a more complete workflow as we figure this
out. So you don’t need to do this last bit with the post-processors. I just
wanted to leave a breadcrumb for myself later.

Sidekiq Enterprise has a rate limiting feature. Note that this is not throttling. The perfect use case is the exact one that’s mentioned in the wiki: limit outbound connections to an API. We had a need for this between two of our own services. I spiked a little bit and I thought the behavior was interesting so I thought I’d share.

At one point a while back, I had a config file outside a rails app
and what I wanted was something like this:

Given this mapping definition /order/:meal/:cheese
How can I turn these strings into parsed hashes?
/order/hotdog/cheddar -> {meal:'hotdog', cheese:'cheddar'}

I knew that something in Rails was doing this. I just didn’t know what.
I also didn’t know what assumptions or abstraction level it was working at.

Journey into Journey

The gem that handles parsing the routes file and creating a tree is journey.
Journey used to be (years ago) a separate gem but is now integrated into
action_dispatch which itself is a part of actionpack. So to install it you
need to gem install actionpack (or use bundler) but to include it in your
program you need to require 'action_dispatch/journey'. If you have
any rails 4+ gem installed on your system, you don’t need to install
anything. Action pack comes with rails.
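Journey itself does a lot more (it builds a tree and does optimized matching), but the core idea can be sketched in plain Ruby with a named-capture regex. This is a toy of my own, not Journey’s API:

```ruby
# Toy route matcher: turn a pattern like "/order/:meal/:cheese" into a
# named-capture regex, then pull a params hash out of a concrete path.
# An illustration of the idea only, not Journey's implementation.
def compile_route(pattern)
  source = pattern.gsub(/:(\w+)/) { "(?<#{Regexp.last_match(1)}>[^/]+)" }
  Regexp.new("\\A#{source}\\z")
end

def match_route(pattern, path)
  md = compile_route(pattern).match(path)
  md && md.named_captures.transform_keys(&:to_sym)
end

match_route('/order/:meal/:cheese', '/order/hotdog/cheddar')
# returns {meal: "hotdog", cheese: "cheddar"}
```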

cd /usr/local
# find some sha you want, I want mysql 5.6.26
git log -S'5.6.26' Library/Formula/mysql.rb
git checkout -b mysql-5.6.26 93a54a
brew install mysql
# oh no! the CDN doesn't have 5.6.26 anymore!
# Homebrew pukes with a 404 error. :( :( :(
# make homebrew's cache folder
mkdir ~/Library/Caches/Homebrew
# google for the tarball (the url doesn't matter as long as you trust it)
wget http://pkgs.fedoraproject.org/repo/pkgs/community-mysql/mysql-5.6.26.tar.gz/733e1817c88c16fb193176e76f5b818f/mysql-5.6.26.tar.gz -O ~/Library/Caches/Homebrew/mysql-5.6.26.tar.gz
brew install mysql
# This installs older versions of dependencies.
# You probably don't want to install old versions just for fun.
# Like, this will install some version of cmake for mysql 5.6.26 but
# idk what happens when you flip back to master and install
# something else that requires cmake.
# You can delete the branch when it's done.
cd /usr/local
git checkout master
git branch -d mysql-5.6.26
# I assume you can use a newer version of cmake (or other deps)
# after the binary is built but I don't know.

I have this problem a lot at work. I’m cranking on stuff, figuring things out
day to day but if someone asks me what I’ve done, I have no clue. Being put
on the spot sucks. When something sucks, it’s a problem. Put it on the
tool sharpening list.

So what can we do? It’s pretty easy, just keep a diary. But there are some specifics that I’ve worked out because
I’ve had Lists of Lists™ before. I’ve learned that Lists of Lists™ do not work.

I want to:

Keep it simple.

Have it be easy to use, non disruptive.

Actually use it. Something that I’m not going to hate, throw away or give up on.

A Nice Setup

iTerm allows you to launch a terminal with a global hot key and run a command.
What’s better is that it stays out of your way when you click away.

iTerm Setup

Configure a new profile in iTerm. Set a command to run vim.

Make the profile pop up with a hot key.

Voilà!

Combine this with a quick vim script to insert the date headers (including knowing what weekends are),
it’s pretty nice.

Vim Setup

(completely optional)

Here’s a shortcut that will add a header like # 3000-12-25 - Thursday at the top of the file.
Assign it to a shortcut and hit that at the beginning of the day. Put this in your .vimrc or .vimrc.local
depending on how you have vim setup.

" Insert the date at the top of a development log.
nmap <leader>N ggi# <C-R>=strftime("%Y-%m-%d - %A")<CR><CR><CR>

Now, in command mode, hit ,N for next date. It will jump you to the top and start today’s entry.
It’s fast, it’s nice and it stays out of your way. You’ll do this all the time so this is important.

1 # 2017-01-26 - Thursday
2
3 █

Awesome Things This Does

No more remembering during standups

During standups or retros, I can convert this quickly into a summary:

What I worked on

What I’m waiting on

Whatever your format is, your log is what you did and you won’t forget stuff.

As a bonus, after using this log for a while, it also can show you how hard you’ve been working and keep
yourself from being too hard on yourself. That thing you really tried hard on that you forgot where you
left it, maybe you chunked it as a failure when it was not a failure. Maybe you left yourself enough detail
to show:

I could keep going on this experiment but the point was proven.
I ran into a limitation beyond my control.
I tried many different options and approaches but the technology isn’t ready or something else is up.

As time goes on, this chunking effect is more dramatic. Wait until you forget how hard you tried.

No more forgetting that cool command you typed

Sometimes I browse my history to find that curl that worked. But which one?
In my dev log, I’ll just paste in a command or the thing that actually worked.
Maybe I was debugging something because I forgot something silly. Writing that
down is like a tiny “hurrah” but also a breadcrumb to future me about what the hold-up was.

Weekend Me

I don’t think about work on the weekend. Monday me hates this. With a dev log, this
isn’t a problem. I just review Friday and that’s enough to jog my memory.

Advice Time

I’ve been using this for a year and it’s been amazing. I’ve done it every day.
So let me hand out some advice.

Don’t create multiple files. If you work in multiple teams, don’t try to organize
your thoughts into teams. Just split it up by day. Embrace the chaos. This is
quick. Hit key, brain dump, hit key, keep working. If you hate this and it keeps
you from logging then change this advice. I think most people would hate having
to categorize work into separate files.

Don’t worry about tagging or searching. I only tag things like TIL so it jumps out.
Not even for retrieval. Text search works fine. I have 7500 lines from 1.5 years of
content and I can find anything just with vim text search.

Make it yours. If you don’t want to call it dev_log.md, call it something else.

Whatever you hate about this blog post, change it. The real idea is: solve a problem for you,
in my case and most people’s on my team it has been remembering what you did and remembering your wins.

I couldn’t find this information anywhere so I’m writing it. If you installed mysql (and I mean MariaDB) through homebrew
then you might find some trouble when trying to set your timezone to UTC or GMT.

I gave a lightning talk at pdxruby recently.
I was trying to explain the gotchas but was doing live coding in
pry and it wasn’t enough time for me to figure out some nice succinct
take-aways. My bigger point was something like “our industry seems to keep
forgetting certain things”. This is not to say Yer Doin It Wrong. I just
think it’s interesting that some things keep coming up because they are very rare.

How to generate an SSL cert

Encoding and utf-8

Database salts

HTTP and RFCs - I personally have forgotten or misremembered something

Even if you’ve done it many times, you haven’t done it recently (like just now)
so we all forget. This theme is interesting! Different teams, people, states
and projects … some common patterns maybe? Many times with these hard
subjects, I often come across as “wrong!” and that’s not what I’m trying to do.
I just want to point out where the key things are so that you can remember
where to look to google some more or trigger your memory.

So, this encoding thing. Ruby 2.x changed lots of things. First, your source
file is utf-8. Your strings are utf-8 by default. There’s more to it than
that but it’s all pretty much utf-8 now. There’s also no iconv in stdlib
anymore. It’s just .encode off the string class (we’ll get to that in a
second).

Your Encoding Friends

Open up pry (if you don’t have pry, gem install pry). It’s all you’ll need.
If you do ls Encoding, you’ll see a list of encodings that Ruby supports.
You get this for free in every process. You don’t need to do anything special.
You’ll notice that "".encoding is => #<Encoding:UTF-8>. That inspected
Encoding:UTF-8 bit is coming from that list.

There are also shorthand versions of these encoding names that you can use, but I like using
the constants where I can because they’re namespaced with Encoding, which is more
intention-revealing. So let’s write a file as utf-8 so I can explain the shorthand thing.

File.open('/tmp/awesome.txt', 'w:utf-8') { |file| file.puts "awesome" }

This is pretty straight-forward. It creates a file with awesome in it, encoded
in utf-8.

You can’t say ‘w:latin-1’ here. That’s another name for iso-8859-1 but latin-1 doesn’t work
here for the file writing mode.

You can write a few modes in different encodings and the bytes come out
exactly the same. There’s a historical reason for this. EBCDIC begat ASCII begat ANSI
(sort of) begat Unicode. All along the way, the lowest bytes stayed backwards compatible.

This is also why English speaking programmers are surprised by encoding
errors because you can get away with a lot by sticking with these low order bytes
and remaining ignorant (slightly strong word but intended in its opportunity sense).
It’s only when “weird” data comes in that we have to think about encoding right?

Here’s another friend. If you do Encoding::BINARY.to_s you’ll get
‘ASCII-8BIT’. This is the same as saying “I don’t know”. It’s not
the same as Encoding::ASCII. You can tell because .to_s says
‘US-ASCII’. So .to_s can be handy here.
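Those friends are easy to poke at in pry; here’s a stdlib-only check of the same facts:

```ruby
# Encoding constants are names over the same registry; .to_s reveals
# which actual encoding a constant points at.
Encoding::BINARY.to_s  # "ASCII-8BIT" -- the "I don't know" encoding
Encoding::ASCII.to_s   # "US-ASCII"

# The low-order bytes are identical across these encodings, which is
# why plain English text rarely blows up.
'awesome'.encode('US-ASCII').bytes == 'awesome'.encode('UTF-8').bytes  # true
```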

There is a method called .encode. This takes the place of Iconv
in the stdlib. It works just like the unix command iconv. It
takes one encoding and converts the bytes into another. This
isn’t the same as .force_encoding as we’ll see in a second.
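The contrast is easiest to see in the bytes. A tiny sketch of my own using é (hex C3 A9 in utf-8, E9 in latin1):

```ruby
s = "é"                          # utf-8 string, bytes [195, 169]

# .encode converts the data: the byte sequence actually changes.
encoded = s.encode('ISO-8859-1')
encoded.bytes                    # [233]

# .force_encoding only relabels: the bytes stay exactly the same.
relabeled = s.dup.force_encoding('ISO-8859-1')
relabeled.bytes                  # [195, 169]
```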

Now this is where culture/language trickiness comes in.

Lucky

All these things are the same bytes because we (sort of) got lucky on our
history, where ASCII came from (A is for American) and kind of how computer
keyboards and alphabets work. Someone had a good counter argument to this
statement at the meetup and I agree. What I mean is, some of this is
a bit culturally sensitive
and complicated.

What I really mean is:

English works well on a keyboard

Keyboards are the fastest input device

ASCII was invented by English speakers

UTF-8 is extended ASCII

English was invented before the computer

So, world, I’m sorry (empathy not apology).

What Encoding Is

Take this string "\x20". It’s a space character. If you look at man ascii
you’ll see that hex 20 is “ “ in ASCII. You might recognize this from %20 in
URLs. The \x bit means hex, and URL encoding is hex too, so that 20 is the
same 20. If I pick something higher in the codepage like "\xC3", things are
going to get weird. “\xC3” by itself isn’t valid utf-8. And that’s fine until
I try to do something with it. If I print it, it’s nothing. Puts just gives me
the normal newline.

puts "\xC3"
=> nil

If I combine it with \x20, that’s not valid. ASCII space is at the top of
the UTF-8 codepage. I can’t just make up stuff. Or maybe I can and get lucky.
But in this case, it prints the unknown utf-8 symbol: <?> If I try something else,
just a different error message shows up:

We could do this all day and not flip a bit. It’s just not modifying the byte
sequence and that’s really what the data is.

So that’s the happy path with ASCII. It just sort of luckily works
because of history and other things that are complicated. The more
complicated path involves a few things. First, what happens when
Ruby loses control of the encoding it knows about and finally
what happens when non-ASCII things start happening.

This is the Korean word for wizard. I don’t know Korean btw. It’s just an
easy alphabet and I think it’s neat.

wizard = "마법사"
wizard.bytes
=> [235, 167, 136, 235, 178, 149, 236, 130, 172]

Nothing in .bytes is going to be over 255 because bytes are 8-bit.
You’ll never, ever see .bytes return anything over 255. So what’s the deal?
Why are there more bytes there? Is it because Korean has more letters
inside each of those characters? No, that guess doesn’t make sense when I do
this with a single “character”:

"ㅅ".bytes
=> [227, 133, 133]

It’s because utf-8 is dynamic. ASCII fits in 1 byte. If we encode this to
Encoding::UTF_16, it has four bytes. What we think of as a letter is
irrelevant. It’s bytes and codepoints in an encoding scheme. ASCII/English
just happens to be lucky at the top of the number chart.

So let’s turn that single character into utf-16 (Java’s default).

"ㅅ".encode('utf-16').bytes
=> [254, 255, 49, 69]

But that doesn’t mean we should. And … if we force this the wrong way, we’ll
have a bad time. Ruby won’t change the bytes if you do .force_encoding. But
it will if you .encode, as you can see. It depends what you are trying to
do.

Next, I’m going to show what you can do with all of this.

Data Corruption

Let’s take a more practical example. Let’s say a file was written in the wrong
encoding. This could be a database backup file that you really care about.
You could use iconv but let’s play in pry because it’s more fun and interactive.

Interestingly, .force_encoding sticks. So let’s try again, knowing the path
that the data took. We can reverse it:

First the data was utf-8.

Then it was forced to be latin1 but it’s in a utf-8 file.

Then it was read as a latin1 file.

Since the read happened in Ruby-land, we can force_encoding the file reading
mistake. Now it’s a utf-8 string that was forced to latin1 in mistake 2. So
we just have to re-encode those bytes back to latin1. Finally, it was utf-8 in
mistake 1. So we can just force_encoding the last step because it wasn’t
written externally or re-encoded, the bytes were forced.

You can do it as one big line and play with this. Just make sure to check your
encoding of your play variables. The variable import is now utf-8 so weird
things will happen if you think it’s latin1. Re-read the file with readlines
to reset your playtime.
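Here’s that reversal as one self-contained round trip, using the classic double-encoding mojibake (a toy string of my own, not the backup file from the story):

```ruby
original = "résumé"

# Mistake: the utf-8 bytes get labeled latin1, then re-encoded to utf-8.
# This produces the familiar "rÃ©sumÃ©" corruption.
mangled = original.dup.force_encoding('ISO-8859-1').encode('UTF-8')

# Repair: walk the path backwards. Undo the wrong re-encode with .encode,
# then relabel the bytes as the utf-8 they were all along.
repaired = mangled.encode('ISO-8859-1').force_encoding('UTF-8')

repaired == original  # true
```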

UTF-8 Doesn’t Just Solve Everything

Base64 encodes to ASCII. So you’ll have very similar problems like above.

require 'base64'

encoded = Base64.encode64 'bacon is great'
=> "YmFjb24gaXMgZ3JlYXQ=\n"
decoded = Base64.decode64(encoded)
=> "bacon is great"
# Yay for ascii?

# Wait a minute ...
encoded = Base64.encode64 'ºåß¬∂˚∆ƒ'
=> "wrrDpcOfwqziiILLmuKIhsaS\n"
decoded = Base64.decode64(encoded)
=> "\xC2\xBA\xC3\xA5\xC3\x9F\xC2\xAC\xE2\x88\x82\xCB\x9A\xE2\x88\x86\xC6\x92"
decoded.force_encoding('utf-8')
=> "ºåß¬∂˚∆ƒ"
# The bytes didn't change, so force_encoding is correct here

Conclusion

Encoding is hard. It comes up a lot. I forget what I have learned.
I hope this is a beacon to myself and others about some lessons and tricks.
Playing with this stuff now might save you stress later when something real
pops up. I’ve seen backups be useless and then saved with iconv tricks and
Ruby’s encode method is the same thing.

Started a new gig about three weeks ago. Sad to leave the old team and
friends. It was awesome and I grew in a lot of ways. But this new
place is probably what I was looking for.

It’s way too early to call or judge or even sum up because it takes me
about three months to settle into any new job and place. You might
think that’s ridiculous but I’ve unsciencely tracked this and it holds
up. Slow burn man. It’s three months.

The new place is Goldstar. We sell discount tickets,
fill events and have amazing customer service in and around this domain.
From the tech side, the app is Rails and mobile with a set of amazing
devs and ops peeps. We have expanded the tech team very recently and
I’m one of the new recruits.

Learning a codebase is rough. Building a codebase and learning along
the way is much more natural and comes with an advantage that needs to
be cared for and not abused. “Can you not code?” This isn’t
happening at the new place. I’m more amplifying what Katrina Owen said
on Ruby Rogues about a book
that explains downhill synthesis, uphill analysis. It’s way easier to
understand a system when you’ve built it. Not even because the code is
fresh in your mind. But because you hold the structure and general
layout and design, connected by memories and breadcrumbs. When starting
from the outside, it’s code spelunking. Even if there are tests. Most
of the time, I’ll break the test and see what happens. And then fix it.
This simulates the synthesis part! Take it apart and put it back
together for the put it back together part. This didn’t really click
until Katrina enlightened me. I thanked her on Twitter. She was happy.
Happy time.

So let’s talk about something pretty serious. This perceived skill gap.
It wasn’t as bad as I thought it would be. I’m hanging with smart
people but have a massive case of impostor syndrome. But right before
this job, I wondered if the small-shop world had accelerated way past
the big-scary enterprise world. And it has. But it’s not a huge deal!
Do you know why? Because C, Unix, TCP/IP, Sockets, CAP Theorem, I/O
speeds, SOLID, ACID and all the other
non-science-laws-this-is-the-best-we-can-do-guys stuff of computer
science is forever. It’s the bedrock. It’s what’s really happening.
And once you know or even have a previous story/tale about these things
then learning today’s Hipsterware™ is no big deal.

What’s Riak? I don’t know! It’s consistent and available? Oh! It
must be really slow. Yep! Great! Right there I can knock it out of a
few use cases where Redis or Memcached would be put in there. I could
blather on about this. It’s really not healthy. It’s pretty arrogant
actually. Most of the work is not in the initial introduction and
overview. It’s in the deep and long lived implementation where your
cherished newlywed tech betrays you in your most dire moment of
edge-case mortality. There are so many things that I think are really
great because I haven’t seen them blow up in my face in prod. There are
lots of things that I used to think are great, which now I say “yeah
…” unsurely because I’ve seen them blow up or not be a good fit.

I still have many miles to walk. Here are some things that I’m predicting
WITH MY MIND POWERS that I’ll learn and/or gain from this new gig.

vim - My vimfiles and dotfiles have been challenged. Not even an
editor war. An editor civil war. Are leader keys evil? Is nerdtree
evil? Yes!? What?! I submit. I yield! I see the speed at which
you are navigating files. I have thought about your strategy before
and not seen it in action. Fine. I will delete my .vimrc and use
yours. I’ve done it before with
janus. I can start over again.
Each time, I learn something new. The goal now is to stay as close to
vanilla vim as possible.

Ruby - looking forward to pairing with lots of folks. I’ve been
hearing a lot of great discussion. Lots of end-game topics.
“What is intention revealing? What is this actually doing? What is the
difference between these two classes? Let’s measure how fast this runs
if we try it this way.”

AWS - A bigger setup than I’m used to. VPC I’ve done. But not so
many objects. Learning lots of integrations within AWS. Pulling and
syncing to buckets and stuff. I’m sure I’ll be flexing the fog gem
at some point.

Instrumentation - It’s a big deal. There are many cloud services in
action. Some are overlapping. It’s neat. It’s real. Retros where
we look at Code Climate scores. Custom dashboards. I donated a
Raspberry Pi for the cause. “Hook it up to the TV! Give me real
insight. Get it done. Yay!” Pretty sweet.

Automation and CM - Chef is being used in a really nice way. It’s
changing and evolving. No sacred cows. Custom tooling. Chef server
is a bit slow, so move everything out. Put state somewhere else. We
need to beef up the custom bits of this. We’re also working on other
tools around containers. There’s no single tool really. It’s very
practical. No sacred tools. I’m very impressed with the ops folks.
It’s kind of beautiful.

The Business - It’s so easy to drown in tech. I’m looking forward to
seeing all the pieces come together and watch something real happen.
Ernie Miller said it best:

Humane Development, to me, means the acknowledgement that we are
humans working with humans to develop software for the benefit of
humans.

To me, this is where you see the user story get run not by your test
suite but by a real customer or person. It’s the best part for lots
of reasons.

Everyone is really good. That’s the job. The digs part is that our rental
is coming to a close and we’ve bought a house. The next time I post
might be from a different location. But not so far from where I’m at
now. We love Portland. I miss friends/family but we’re staying.

I hope this town becomes a tech sanctuary for Bay Area and Seattle burnouts.