Chris Leary

Great excerpt from Jason Hong's article in this month's Communications of the
ACM:

The most impressive story I have ever heard about owning your research is
from Ron Azuma's retrospective "So Long, and Thanks for the Ph.D." Azuma
tells the story of how one graduate student needed a piece of equipment for
his research, but the shipment was delayed due to a strike. The graduate
student flew out to where the hardware was, rented a truck, and drove it
back, just to get his work done.

Stories like that pluck at my heartstrings. The best part of Back to Work,
Episode 1 was this bit, around 19 minutes in, when Merlin Mann said:

I was drinking, which I don't usually do, but I was with a guy who likes to
drink, who is a friend of mine, and actually happens to be a client. And, we
were talking about what we're both really interested in and fascinated by,
which is culture. What is it that makes some environments such a petri dish
for great stuff, and what is it that makes people wanna run away
from the petri dish stealing office supplies and peeing in someone's desk?
What is it, what makes that difference, and can you change it?

In time, I found myself moving more towards this position — as we
had more drinks — that it kind of doesn't really matter what people do,
given that ultimately you're the one who's gotta be the animus. You're the
one who's actually going to have to go ship, right?

And, my sense was — great guy — he kept moving further toward, "Yeah,
but...". "This person does this", and "that person does that", and "I need
this to do that". And I found myself saying, "Well, okay, but what?" What
are you gonna do as a result of that? Do you just give up? Do you spend all
of your time trying to fix these things that these other people are doing
wrong?

And, to get to the nut of the nut: apparently — I'm told by the security
guards who removed me from the room — that it ended with me basically yelling
over and over, "What couldn't you ship?!" "What couldn't you ship?!" "What
couldn't you ship?!"

... If we really, really are honest with ourselves, there's really not that
much stuff we can't ship because of other people...

... When are you ever gonna get enough change in other people to satisfy
you? When are you ever gonna get enough of exactly how you need it to be to
make one thing?

Well, you know, that is always gonna be there. You're
always gonna find some reason to not run today. You're always gonna find
some reason to eat crap from a machine today. You're always gonna find a
reason for everything.

To quote that wonderful Renoir film, Rules of the
Game, something along the lines of, "The trouble in life is that every man
has his reasons." Everybody's got their reasons. And the thing that
separates the people who make cool stuff from the people who don't make
cool stuff is not whether they live in San Francisco. And it's not whether
they have a cool system. It's whether they made it. That's it, end of
story. Did you make it or didn't you make it?

The way I see it, you should never stop asking yourself:

What's really going to be different about tomorrow that you couldn't go make
happen today? Why isn't past inaction indicative of what's going to happen
today, or tomorrow?

What reason do you have to believe that appropriate steps to deliver on your
vision are in flight, and what would it take for you to go drive them harder?

What losses might you have to cut in order to get some thing done, rather
than a theoretically more perfect no thing? For some outcomes, it
really does take a village. I wouldn't expect anybody to single-handedly ship
the Great Pyramid.

Of course, sunk costs are a powerful siren, so you have to be very careful to
evaluate whether compromises still allow you to hit the marks you care about as
true goals. But, at the end of the day, all those trade-offs roll up into one
subtly simple question:

The distinction between essential complexity and accidental complexity is a
useful one — it allows you to identify the parts of your design where you're
stumbling over yourself instead of working against something truly reflected
in the problem domain.

The simplest-solution-that-could-possibly-work (SSTCPW) concept is inherently
appealing in that, by design, you're trying to minimize these pieces that you
may come to stumble over. Typically, when you take this approach, you
acknowledge that an unanticipated change in requirements will entail major
rework, and accept that fact in light of the perceived benefits.

As a more quantifiable example: if a SSTCPW has comparatively fewer code
paths than an alternative solution, you can see how some of the above merits
could fall out of it.

This also demonstrates some of the appeal of fail-fast and crash-only
approaches to software implementation, in that cutting out unanticipated
program inputs and states, via an acceptance of "failure" as a concept, tends
to home in on a SSTCPW.
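As a minimal sketch of that fail-fast flavor (the two-mode scenario and the
names are made up): reject the inputs you never designed for right at the
boundary and crash close to the cause, rather than threading an "unknown"
case through the rest of the program.

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical example: the program only ever supports two modes. */
enum mode { MODE_READ = 1, MODE_WRITE = 2 };

enum mode parse_mode(int raw)
{
    switch (raw) {
    case MODE_READ:  return MODE_READ;
    case MODE_WRITE: return MODE_WRITE;
    default:
        /* Fail fast: an input we never designed for is a bug in the caller,
           not a state the rest of the code should have to accommodate. */
        fprintf(stderr, "parse_mode: unsupported mode %d\n", raw);
        abort();
    }
}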

Contrast

In my head, this approach is contrasted most starkly against an approach called
big-design-up-front (BDUF). The essence of BDUF is that, in the design process,
one attempts to consider the whole set of possible requirements (typically
both currently-known and projected) and build into the initial design and
implementation the flexibility and structure to accommodate large swaths of
them in the future, if not in the current version.

In essence, this approach acknowledges that the target is likely moving, tries
to anticipate the target's movement, and takes steps to remain one step ahead
of the game by building in flexibility, genericity, and a more 1:1-looking
mapping between the problem domain and the code constructs.

Benefits cited usually relate to ongoing maintenance in some sense and
typically include:

Reuse via genericity.

Flexibility for feature addition.

A more robust model of the problem domain imbued in the program.

Head to head

In a lot of software engineering doctrine that I've read, been taught, and
toyed with throughout the years, the prevalence of unknown and ever-changing
business requirements for application software has lent a lot of credence to
BDUF, especially in that space.

There have also been enabling trends for this mentality; for example,
indirection through abstractions costs monumentally less on today's JVM than
it did on the Java interpreter of yore. In that same sense, C++ has attempted
to satisfy an interesting niche in the middle ground with its design concept
of "zero cost abstractions", which are intended to be reducible, at compile
time, to more easily understood and more predictable underlying code forms.
On the hardware side, the steady provisioning of single-thread performance and
memory capacity throughout the years has also played an enabling role.

By contrast, the system-software implementation doctrine and conventional
wisdom skews heavily towards SSTCPW, in that any "additional" design reflected
in the implementation tends to come under higher levels of duress from a
{performance, code-size, debuggability, correctness} perspective. Ideas like
"depending on concretions" — which I specifically use because it's denounced
by the D in SOLID — are wholly accepted in SSTCPW, given that doing so (a)
makes the resulting artifact simpler to understand in some sense and (b)
doesn't sacrifice the ability to meet necessary requirements.
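To put a toy C sketch to that (mine, and deliberately simplistic): depending
on the concretion just means calling the one thing you actually need today,
rather than routing every caller through an interface you might want someday.

#include <stdio.h>

/* SSTCPW flavor: the logger depends directly on the single concrete sink we
   actually require today (stderr). */
void log_line(const char *msg)
{
    fprintf(stderr, "log: %s\n", msg);
}

/* The BDUF-ish alternative would thread an abstraction through every caller
   in case we someday want a different sink:

       typedef void (*log_sink_fn)(const char *msg);
       void log_line_via(log_sink_fn sink, const char *msg);
*/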

So what's the underlying trick in acting on a SSTCPW philosophy? You have to do
enough design work (and detailed engineering legwork) to distinguish between
what is necessary and what is wanted, and have some good-taste arbitration
process to distinguish between the two when there's disagreement about the
classification. As part of that process, you have to make the most difficult
decisions: what you definitely will not do and what the design will not
accommodate without major rework.

In reply

What follows are a few quick-and-general pointers on "I want to start doing
lower level stuff, but need a motivating direction for a starter project."
They're somewhat un-tested because I haven't mentored any apps-to-systems
transitions, but, as somebody who plays on both sides of that fence, I think
they all sound pretty fun.

A word of warning: systems programming may feel crude at first compared
to the managed languages and application-level design you're used to. However,
even among experts, the prevalence of footguns motivates simple designs and APIs, which can be a beautiful thing.
As a heuristic, when starting out, just code it the simple, ungeneralized way.
If you're doing something interesting, hard problems are likely to present
themselves anyhow!

Microcontrollers rock

Check out sites like hackaday.com to see the incredible feats that
people accomplish through microcontrollers and hobby time.
When starting out, it's great to get the tactile feedback of lighting up a
bright blue LED or successfully sending that first UDP packet to your desktop
at four in the morning.

Microcontroller-based development is also nice because you can build up your understanding of C code, if you're feeling rusty, from basic usage — say, keeping everything you need to store as a global variable or array — to fancier techniques as you improve and gain experience with what works well.
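For example, a first project's C can happily look something like this (the
ADC scenario and names are invented): everything is global and statically
sized, and there's no heap in sight.

#include <stdint.h>

#define NUM_SAMPLES 64

/* All state is global and fixed-size: no malloc, no ownership questions. */
static uint16_t samples[NUM_SAMPLES];   /* ring buffer of (hypothetical) ADC readings */
static uint8_t sample_index;

void record_sample(uint16_t adc_value)
{
    samples[sample_index] = adc_value;
    sample_index = (uint8_t)((sample_index + 1u) % NUM_SAMPLES);
}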

Although I haven't played with them specifically, I understand that Arduino
boards are all the rage these days — there are great tutorials and support
communities out on the web that love to help newbies get started with
microcontrollers. AVR Freaks was around even when I was programming on my
STK500. I would recommend reading some forums to figure out which board looks right for you and your intended projects.

At school, people really took to Bruce Land's microcontroller class,
because you can't help but feel the fiero as you work towards more and more
ambitious project goals.
Since that class is still being taught, look to
the exercises and projects (link above) as good examples of what's possible with bright
students and four credits worth of time. [*]

Start fixing bugs on low-level open source projects

Many open source projects love to see willing new contributors. Especially check out projects a) that are known for having good/friendly mentoring and
b) that you think are cool (which will help you stay motivated).

I know that one amazing person I worked with at Mozilla got into the project
by taking the time to figure out how to properly patch some open bugs.
If you take that route, either compare your patch to what the project
member has already posted, or request that somebody give you feedback on your
patch.
This is another good way to pick up mentor-like connections.

Check out open courseware for conceptual background

I personally love the rapid evolution of open courseware we're seeing. If you're feeling confident, pick a random low-level thing you've heard-of-but-never-quite-understood, type it into a search engine, and do a deep dive on a lecture or series. If you want a more structured approach, a simple search for systems programming open courseware turns up quite educational-looking results.

General specifics: OSes and reversing

OSes

If you're really into OSes, I think you should just dive in and try writing a little kernel on top of your hardware of choice in qemu (a hardware emulator). Quick searches turn up some seemingly excellent tutorials on writing simple OS kernels on qemu, and writing simple OSes for microcontrollers is often a student project topic in courses like the one I mention above. [†]

With some confidence, patience, maybe a programming guide, and recall of some low-level background from school, I think this should be doable. Some research will be required on effective methods of debugging, though — that's always the trick with bare metal coding.

Or, for something less audacious sounding: build your own Linux kernel with some modifications to figure out what's going on.
There are plenty of guides on how to do this for your Linux distribution of choice, and you can learn a great deal just by fiddling around with code paths and using printk.
Try doing something on the system (in userspace) that's simple to isolate in the kernel source using grep — like mmapping /dev/mem or accessing an entry in /proc — to figure out how it works, and leave no stone unturned.
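For instance, a tiny userspace probe like the following (a sketch; /proc/uptime is just a conveniently small target):

#include <stdio.h>

int main(void)
{
    char buf[256];
    FILE *f = fopen("/proc/uptime", "r");
    if (!f) {
        perror("fopen /proc/uptime");
        return 1;
    }
    if (fgets(buf, sizeof buf, f))
        printf("/proc/uptime says: %s", buf);
    fclose(f);
    return 0;
}

Grepping for "uptime" under fs/proc in the kernel tree then narrows you down to the code that produces that line, and you can walk outward from there.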

I recommend taking copious notes, because I find that's the best way to trace out any complex system. Taking notes makes it easy to refer back to previous realizations and backtrack at will.

Read everything that interests you on Linux Kernel Newbies, and subscribe to kernel changelog summaries. Attempt to understand things that interest you in the source tree's /Documentation. Write a really simple Linux Kernel Module. Then, refer to freely available texts for help in making it do progressively more interesting things. Another favorite read of mine was Understanding the Linux Kernel, if you have a hobby budget or a local library that carries it.
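For reference, the classic hello-world module really is only a handful of lines (a sketch; the usual obj-m Makefile glue is omitted):

/* hello.c: about the simplest possible Linux kernel module. */
#include <linux/init.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");

static int __init hello_init(void)
{
    printk(KERN_INFO "hello: loaded\n");
    return 0;
}

static void __exit hello_exit(void)
{
    printk(KERN_INFO "hello: unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);

Load it with insmod, watch dmesg, unload it with rmmod, and you've exercised the whole module lifecycle.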

Reversing

This I know less about — pretty much everybody I know that has done significant reversing is an IDA wizard, and I, at this point, am not. They are also typically Win32 experts, which I am not. Understanding obfuscated assembly is probably a lot easier with powerful and scriptable tools of that sort, which ideally also have a good understanding of the OS. [‡]

However, one of the things that struck me when I was doing background research for attack mitigation patches was how great the security community was at sharing information through papers, blog entries, and proof of concept code. Also, I found that there are a good number of videos online where security researchers share their insights and methods in the exploit analysis process. Video searches may turn up useful conference proceedings, or it may be more effective to work from the other direction: find conferences that deal with your topic of interest, and see which of those offer video recordings.

During my research on security-related things, a blog entry by Chris Rohlf caused Practical Malware Analysis to end up on my wishlist as an introductory text. Seems to have good reviews all around. Something else to check out on a trip to the library or online forums, perhaps.

One other neat thing we occasionally used for debugging at Mozilla was a VMWare-based time-traveling virtual machine instance. It sounded like they were deprecating it a few years back, so I'm not sure of its status, but if it's still around it would literally allow you to play programs backwards!

Bryan also asked me this at NodeConf last year, where I was chatting with him about the then-in-development IonMonkey:

I remembered my talk with Bryan when I went to recruit there last year and asked the same interview question that he references — except with the pointer uninitialized so candidates would have to enumerate the possibilities — to see what evidence I could collect. My thoughts on the issue haven't really changed since that chat, so I'll just repeat them here.

My overarching thought: bring the passion

Many of the people I'm really proud my teams have hired out of undergrad are simply "in love" with systems programming, the way a skilled artisan "cares" about their craft. They work on personal projects and steer their trajectory towards it somewhat independent of the curriculum.

Passion seems to be pretty key, along with follow-through and the ability to work well with others, in the people I've thumbs-up'd over the years. Of course I always want people who do well in their more systems-oriented curriculum and live in a solid part of the current-ability curve, but I always have an eye out for the passionately interested ones.

So, I tend to wonder: if an org has a "can systems program" distribution among the candidates, can you predict the existence of the outliers at the career fair from the position of the fat part of that curve?

Anecdotally, two other systems hackers on the JavaScript engine and I came from the same undergrad program, modulo a few years, although we took radically different paths to get to the team. They are among the best and most passionate systems programmers I've ever known, which also pushes me to think passionate interest may be a high-order bit.

Regardless, it's obviously in systems companies' best interest to try to get the most bang per buck on recruiting trips, so you can see how Bryan's point of order is relevant.

My biased take-away from my time there

I graduated less than a decade ago, so I have my own point of reference. From my time there several years ago, I got the feeling that the mentality was:

C/C++ are horrible teaching languages, so they shouldn't really be taught in general curricula in circumstances where they can be avoided.

Java and applications-level programming is where most of the well-paying industry jobs are. (Not sure how true this is or was, but it seemed to be the conventional wisdom at the time.)

It's a Windows world. And, if it's not a Windows world, you've probably got a VM under you.

This didn't come from any kind of authority; it's just putting into words the "this is how things are done around here" understanding I had at the time. All of them seemed reasonable in context, though I didn't think I wanted to head down the path alluded to by those rules of thumb. Of course these were, in the end, just rules of thumb: we still had things like a Linux farm used by some courses.

I feel that the "horrible for teaching" problem extends to other important real-world systems considerations as well: I learned MIPS and Alpha [*], presumably due to their clean RISC heritage, but golly do I ever wish I was taught more about specifics of x86 systems. And POSIX systems. [†]

Of course that kind of thing — picking a "real-world" ISA or compute platform — can be a tricky play for a curriculum: what do you do about the to-be SUN folks? Perhaps you've taught them all this x86-specific nonsense when they only care about SPARC. How many of the "there-be-dragons" lessons from x86 would cross-apply?

There's a balance between trade and fundamentals, and I feel I was often reminded that I was there to cultivate excellent fundamentals which could later be applied appropriately to the trends of industry and academia.

But seriously, it's just writing C...

For my graduating class, CS undergrad didn't really require writing C. The closest you were forced to get was translating C constructs (like loops and function calls) to MIPS and filling in blanks in existing programs. You note the bijection-looking relationship between C and assembly and can pretty much move on.

I tried to steer my course selection to hit as much interesting systems-level programming as possible. To summarize a path to learning a workable amount of systems programming in my school of yore, in hopes it will translate to something helpful that exists today:

You may have read K&R, but as a newbie it makes sense to beef up on fundamentals, so CS 116: Introduction to C Programming doesn't hurt (and you meet other passionate systems programming people in the process).

CS 415: Operating Systems Practicum made you write C. Sadly, we were given a library for context switching userspace threads on top of the Win32 API in MSVC that we didn't really have to dig into. We had to write things like concurrency primitives, a scheduler, and a rudimentary filesystem that operated in terms of a soft (i.e. fake) disk model. I think there may have been some networking in there as well. The course was being revamped at the time, so I hope it's more bare-metal now with something practical like qemu.

ECE 476: Designing with Microcontrollers was an amazing class for integrating whatever you were most passionate about from the CS and ECE curricula. At the time we were using 8-bit Atmels on a proprietary compiler with no dynamic allocation support; you had to write both assembly and C code and talk to your system board via I/O ports. Plus, I got to be a little sneaky and use avr-gcc.

ECE 473: Optimizing Compilers targeted Alpha at the time, but was a great big systems project that taught a lot about machine specifics and code generation (interfacing to syscalls, executable and linkable formats).

ECE 575: High-Performance Microprocessor Architecture made you write real and well-performing C applications for things like cache modeling with static binary translation. This was a very formative course for me.

I did a bunch of independent projects to mess around and better understand areas where I was lacking knowledge.

I did work with systems researchers at the university. Some were unwilling to take any undergrads as a policy, but some groups are more amenable.

I'm not a good alum, in that I fail to keep up with the goings-on, but if I had a recommendation based on personal experience, it'd be to do stuff like that. Unfortunately, I've also been at companies where the most basic interview question is "how does a vtable actually work" or probes the nuances of C++ exceptions, so for some jobs you may want to take an advanced C++ class as well.

Understanding a NULL pointer deref isn't writing C

Eh, it kind of is. On my recruiting trip, if people didn't get my uninitialized pointer dereference question, I would ask them questions about MMUs if they had taken the computer organization class. Some knew how an MMU worked (of course, some more roughly than others), but didn't realize that OSes had a policy of keeping the null page mapping invalid.

So if you understand an MMU, why don't you know what's going to happen on the NULL pointer deref? Because you've never actually written a C program and screwed it up. Or you haven't written enough assembly with pointer manipulation. If you've actually written a Java program and screwed it up you might say NullPointerException, but then you remember there are no exceptions in C, so you have to quickly come up with an answer that fits and say zero.
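For concreteness, the question is of roughly this shape (a paraphrase, not the exact wording):

/* What are the possible behaviors here, and why? */
int main(void)
{
    int *p;        /* uninitialized: its value is indeterminate */
    return *p;     /* undefined behavior: may fault, may return garbage, may
                      be "optimized" into something else entirely */
}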

I think another example might help to illustrate the disconnect: the difference between protected mode and user mode is well understood among people who complete an operating systems course, but the conventions associated with them (something like "tell me about init"), or what a "traditional" physical memory space actually looks like, seem to be out of scope without outside interest.

This kind of interview scenario is usually time-to-fluency sensitive — wrapping your head around modern C and sane manual memory management isn't trivial, so it does require some time and experience. Plus, when you're working regularly with footguns, team members want a basic level of trust in coding capability. It's not that you think the person can't do the job; it's just not the right timing if you need to find somebody who can hit the ground running. Bryan also mentions this in his email.

Thankfully for those of us concerned with the placement of the fat part of the distribution, it sounds like Professor Sirer is saying it's been moving even more in the right direction in the time since I've departed. And, for the big reveal, I did find good systems candidates on my trip, and at the same time avoided freezing to death despite going soft in California all these years.

Brain teaser

I'll round this entry off with a little brain teaser for you systems-minded folks: I contend that the following might not segfault.

[Latest from the "I can't believe I'm writing a blog entry about this"
department, but the context and surrounding discussion is interesting. --Ed]

If you're like me, or one of the other thousands of concerned parents who has borne C code into this cruel, topsy-turvy, and oftentimes undefined world, you read the C standard aloud to your programs each night. It's comforting to know that K&R are out there, somewhere, watching over them, as visions of Duff's Devices dance in their wee little heads.

The shocking truth

In all probability, you're one of today's lucky bunch who find out that the
signedness of the char datatype in C is left up to the implementation. The
implication being that when you write char, the compiler is implicitly (but
consistently) giving it either the signed or unsigned modifier. From the
spec: [*]

The three types char, signed char, and unsigned char are collectively called
the character types. The implementation shall define char to have the same range,
representation, and behavior as either signed char or unsigned char.

...

Irrespective of the choice made, char is a separate type from the
other two and is not compatible with either.

—ISO 9899:1999, section "6.2.5 Types"

Why is char distinct from the explicitly signed and unsigned variants to
begin with? A great discussion of the historical portability questions is
given here:

Fast forward [to 1993] and you'll find no single "load character from
memory and sign extend" in the ARM instruction set. That's why, for
performance reasons, every compiler I'm aware of makes the default char
type signed on x86, but unsigned on ARM. (A workaround for the GNU GCC
compiler is the -fsigned-char parameter, which forces all chars to
become signed.)

It's worth noting, though, that in modern times there are both LDRB (Load
Register Byte) and LDRSB (Load Register Signed Byte) instructions available
in the ISA that do sign extension after the load operation in a single
instruction. [†]

So what does this mean in practice? Conventional wisdom is that you use
unsigned values when you're bit bashing (although you have to be extra careful
bit-bashing types smaller than int due to promotion rules) and signed values
when you're doing math, [‡] but now we have this third type, the
implicit-signedness char. What's the conventional wisdom on that?

Signedness-un-decorated char is for ASCII text

If you find yourself writing:

char some_char = NUMERIC_VALUE;

You should probably reconsider. In that case, when you're clearly doing
something numeric, spring for a signed char so the effect of arithmetic
expressions across platforms is more clear. But the more typical usage is still
good:
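That is, keep un-decorated char for actual character data, along the lines of:

const char *greeting = "hello, world";   /* text: plain char is the right type */
char line_buf[128];                      /* buffer destined to hold ASCII text */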

Examples to consider

Some of the following mistakes will trigger warnings, but you should realize there's
something to be aware of in the warning spew (or a compiler option to consider
changing) when you're cross-compiling for ARM.

Example of badness: testing the high bit

Let's say you wanted to see if the high bit were set on a char. If you assume signed chars, this easy-to-write comparison seems legit:
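Something along these lines, wrapped in a helper for illustration:

void check_high_bit(char c)
{
    if (c < 0) {
        /* "The high bit is set" -- but only if char happens to be signed.
           With unsigned char (the ARM default), this branch is dead code. */
    }
}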

Example of badness: comparison to negative numeric literals
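The shape of the problem, assuming the usual getchar() idiom, is something like:

#include <stdio.h>

int main(void)
{
    char c;                          /* bug: should be int, per getchar()'s contract */
    while ((c = getchar()) != EOF)   /* with unsigned char, c can never equal EOF... */
        putchar(c);                  /* ...so this loop never terminates */
    return 0;
}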

This comparison would never return true with an 8-bit unsigned char
datatype and a 32-bit int datatype. Here's the breakdown:

When getchar() returns ((signed int) -1) to represent EOF, you'll
truncate that value into 0xFFu (because chars are an unsigned 8-bit datatype).
Then, when you compare against EOF, you'll promote that unsigned value to a
signed integer without sign extension (preserving the bit pattern of the
original, unsigned char value), and get a comparison between 0xFF (255 in
decimal) and 0xFFFFFFFF (-1 in decimal). For all the values in the unsigned
char range, I hope it's clear that this test will never pass. [§]

To make the example a little more obvious, we can replace the call to
getchar() and the EOF constant with a numeric -1 literal, and the same thing
will happen.

#include <assert.h>

char c = -1;
assert(c == -1); // Fails where char is unsigned (e.g., the ARM default). Yikes.

That last snippet can be tested by compiling it with GCC under -fsigned-char
and then -funsigned-char, if you'd like to see the difference in action.

Footnotes

The spec goes on to say that you can figure out the underlying signedness
by checking whether CHAR_MIN from <limits.h> is 0 or
SCHAR_MIN. In C++ you could do the <limits>-based
std::numeric_limits<char>::is_signed dance.
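A quick way to check whichever toolchain you have handy:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* CHAR_MIN is 0 when char is unsigned, SCHAR_MIN when it's signed. */
    printf("char is %s here\n", CHAR_MIN == 0 ? "unsigned" : "signed");
    return 0;
}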

Disclaimer

I've caught some flak over publishing my "selfish"
(read: empirical testing that yields results which are only relevant to me)
multi-language-engine-and-standard-library "shootout"
(read: I wrote the same basic functionality across multiple languages,
somewhat like on the shootout.alioth.debian.org site,
the Computer Language Benchmarks Game).
I value the concept and process of learning in the open,
but it may require more time and consideration of clarity
than I had given in this entry.
Taking it down would apparently be a breach of etiquette,
so please read the following TL;DR as a primer.

TL;DR: I encourage you to personally try writing small utilities
against a variety of language engines when you have the opportunity.
Consider how much tweaking of the original code you have to do
in order to obtain a well-performing implementation.
Weigh the development effort and your natural proficiency
against the performance, clarity, and safety
of the resulting program.
Gather evidence and be eager to test your cost assumptions.
Commit to learning about sources of overhead and
unforeseen characteristics of your libraries.
You may be surprised which engines give the best bang per time spent.

It has also been suggested to me that
all native languages are within ~3x of one another
on generated code performance,
and the rest of the difference is generally attributable
to the library or algorithm,
so that's an interesting rule of thumb to keep in mind.

Introduction

We tend to throw around "orders of magnitude" when it comes to "programming language speeds",
even though we know that the concept of a programming language having a speed for arbitrary programs makes little sense.
But, when I'm coding up something small, I find myself pondering a very concrete question:
which available language engine (language implementation and libraries) could I reasonably write this little program against
that would give the best speed over development effort?

I'm not looking to optimize all the buttery nooks and crannies of this program,
nor do I want to drill into potential deficiencies in the I/O subsystem:
I just want to make a painless little utility that doesn't require me to go on a lunch break.

XKCD knows what I'm talking about:

I was writing a very simple, single-threaded program to generate about a billion uniformly random int32s in a text file,
and I decided I would do a selfish little shootout:
write the same program in a set of "viable" languages
(remember, this is all about me :-),
unscientifically use time(1) on the programs a few times,
consider how painful it was to write,
and see what the runtimes come out to be.

For 100 million integers on my CentOS Bloomfield box,
these were the runtimes
for my initial, naive implementations
and their lightly tweaked counterparts:

Impl    Naive Runtime    Naive Ratio    Tweaked Runtime    Tweaked Ratio    Engine
.cpp    ~0m 11s          (baseline)     ~0m 15s            (baseline)       GCC 4.4.6 -O3
.java   ~0m 18s          ~1.5x          ~0m 19s            ~1.25x           JDK 1.7.0.04
.go     ~1m 5s           ~6x            ~0m 23s            ~1.5x            go1.0.1
.rs     ~1m 7s           ~6x            ~0m 23s            ~1.5x            rustc -O3 0.2 (trunk)
.ml     ~0m 37s          ~3.3x          ~0m 35s            ~2.5x            ocamlopt 3.11.2
.py     ~1m 6s           ~6x            ~0m 51s            ~3.5x            PyPy 1.9.1 (nightly)
.lua    ~1m 36s          ~9x            ~0m 27s (FFI)      ~1.8x            LuaJIT 2.0.0-beta10 (trunk)
.rb     ~1m 50s          ~10x           -                  -                ruby 2.0.0 (trunk)

Like all developers, I have varied levels of expertise across languages and their standard libraries;
but, as I said, this is a selfish shootout,
so my competence in a given language is considered part of the baseline.
You'll see in the comments that
many readers identified performance bugs in these code samples.

There are also caveats for the random numbers I was generating in OCaml (due to tag bit stealing).

For a billion integers the naive C++0x version took 1m 42s
and the naive Java version took 2m 18s (1.35x slower).
I didn't want to spend the time to slow down the others by an order of magnitude.

As a result — with perpetual intent to improve my abilities in all engines I work with,
willful ignorance of the reasoning,
acknowledgement that I need to perform more experiments like this to draw a more reasonable conclusion over time,
and malice aforethought — I'll hereby declare myself guilty of leaning a bit more towards writing things like this in C++
when I want better runtimes in the giga range for little IO-and-compute programs.

Show me the code!

I threw the code up on github,
but the versions that I wrote naively
(before optimization suggestions)
are duplicated here for convenience.