As someone just getting serious about Go, this was quite enlightening. My day job is largely C++ with a focus on small memory footprint. I'd been wondering about the Go GC's performance characteristics but hadn't yet been driven to do the research myself.

I've recently written a couple of file processors at work that have to rip through a large (to us) amount of data, and I paid attention to things like this while working on them. Reading in the file and keeping things as byte slices significantly simplifies working with the data. The other area where I was careful was making sure I wasn't allocating anything in my processing loops: everything gets set up once and is either reused or assigned during processing. This lets the program process something like 650k records in just under 2 seconds.

The Go toolchain also has useful features for making sure you're not creating garbage the GC has to pick up, and the benchmark tests are very convenient too. Nice article.