Engineering tales from Semantics3: 2016 edition

Back in October, I wrote about our team's pull crew for hacking on open-source projects. I had meant to follow up with a review of the various contributions we made. However, life happened, and that post is still in the works.

Meanwhile, as 2016 draws to a close, I thought I would instead use this chance to highlight some of our more popular posts this year, in case you missed them.

Amarnath and Srinivas gave us our first hit of the year with their joint post on building and scaling a microservice-based architecture in Perl. I would recommend the post not just to Perl programmers, but to any developer interested in microservices and the tools that help build them. It also generated some interesting discussion on HN and r/programming, which is worth checking out.

Abishek’s post on Bloom filters was one of my favorites this year. Bloom filters are one of those mystical data structures that everyone talks about but few fully understand. Starting from the basics, the post offers a gentle introduction and gradually ramps up to more interesting insights. The performance graphs in the post help drive home the trade-offs involved in choosing a Bloom filter.
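The post itself has the full treatment, but the core idea fits in a few lines. Here is a minimal sketch (my own illustration, not the implementation from the post), which simulates k independent hash functions by salting SHA-256 with the function index:

```python
import hashlib

class BloomFilter:
    """A minimal Bloom filter: k salted hashes over an m-bit array."""

    def __init__(self, m=1024, k=3):
        self.m = m                # number of bits
        self.k = k                # number of hash functions
        self.bits = [False] * m

    def _positions(self, item):
        # Derive k bit positions by salting the hash with the function index.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item):
        # False means definitely absent; True means only *possibly* present.
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("semantics3")
print(bf.might_contain("semantics3"))   # True
print(bf.might_contain("absent-key"))   # almost certainly False
```

The trade-off the graphs illustrate lives in m and k: more bits and more hash functions lower the false-positive rate, at the cost of memory and hashing time.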

Following my adventures in database management, I blogged about my experiences with Postgres. The post, though quite technical, generated a lot of unexpected interest from readers over at HN and r/programming. It was meant to document some of Postgres's internal quirks, so if you are interested in that sort of thing, you should definitely check it out.

As our team began adopting modern machine learning techniques in earnest, we gathered various insights along the way. Govind distilled them into a checklist for budding neural-network enthusiasts. A quick glance at the list should help you identify quick-win areas for your own deep learning projects.

Over the past year, public contributions from big companies have skyrocketed, especially in machine learning. However, open tools turned out to be only a partial solution. I wanted to summarize the other (arguably more difficult) aspects of doing machine learning in a quasi-rant, and the result was that post.

Since we revived the blog earlier this year, the posts have come out quite nicely, with great contributions from members across our team.

Even with this varied coverage, the technical adventures of our team continue to outpace our speed of blogging about them. As a result, there is now a sizable backlog of posts that I hope to publish over the next year.

Until then, that’s a wrap for 2016. See you on the other side.