Apr 19 2016
Bots are all the rage now. While building a conversational AI bot is a huge undertaking, building your own helpful Slackbot isn’t. These bots are great for performing simple tasks that don’t warrant a dedicated interface. This is exactly what I needed at my job. Tasks like creating a customer, listing customers, and refreshing customer data were repetitive and were great candidates for automating into a bot. I began by writing my own parsing code, which worked fine until things got more complicated.

Jan 13 2016
Without a doubt, great glass can make a world of difference for your photography. More often than not, that great glass comes at a great expense. If you aren’t made of money, it can be hard to grow a stable of lenses to cover your needs. My lenses aren’t high end, but after a few years of experimenting and leap-frogging quality, I’ve settled down with a solid collection that is very reasonably priced.

Sep 14 2015
Working extensively with Clojure over the last year, I’ve been exploring the many concurrency techniques favored by the language. Certain strategies, like atoms and immutability, have become second nature. The Clojure books explain atoms, refs, and agents and how they work. While I understand refs and atoms, agents were never well covered: there are examples of using agents to protect resources like files, but that’s it.

The Textbook on Agents

It’s natural to assume agents are like actors.

Jun 25 2015
In part 1, we got a feel for topics, producers, and consumers in Apache Kafka. In this part, we will learn about partitions, keyed messages, and the two types of topics. Kafka is built around a simple log architecture; this simplicity makes Kafka robust and fast.

Partitions

A topic can be divided into partitions, which may be distributed. Partitions enable the following:

- Distributing data across brokers (think sharding)
- Simplifying parallelization
- Ensuring sequencing of related messages

We will touch on each of these.
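To get a feel for how keyed messages interact with partitions, here is a minimal sketch in Python of key-based partition assignment. The hash function (CRC32) and the partition count are illustrative assumptions, not Kafka’s actual murmur2-based default partitioner; the point is only that a message’s key, not its value, decides its partition, so related messages stay ordered.

```python
import zlib

def partition_for(key: bytes, num_partitions: int) -> int:
    # Same key -> same partition, every time. This is what preserves
    # ordering for related messages (e.g. all events for one user).
    # CRC32 stands in for Kafka's real partitioner here.
    return zlib.crc32(key) % num_partitions

# Simulate a 4-partition topic receiving keyed messages.
partitions = {i: [] for i in range(4)}
for key, value in [(b"user-1", "login"),
                   (b"user-2", "click"),
                   (b"user-1", "logout")]:
    partitions[partition_for(key, 4)].append((key, value))
```

Every message keyed `user-1` lands in the same partition, so a consumer of that partition sees `login` before `logout`; messages for different keys may end up on different brokers, which is the sharding and parallelism win.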

Jun 22 2015
Lately I’ve been moving our data backend to Apache Kafka to store our many data sources. I think it’s a great way to deal with the problem of data collection: Kafka gives us great flexibility in how we consume the data and how we query it. One of our data scientists asked if we could randomly sample a stream. This is a common activity in machine learning and statistics, and it’s trivial on a known dataset.
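On a stream of unknown length, the standard tool is reservoir sampling (Algorithm R), which keeps a uniform sample of k items in O(k) memory. A minimal Python sketch (the function name is mine, not from the post):

```python
import random

def reservoir_sample(stream, k):
    """Uniformly sample up to k items from an iterable of unknown
    length, holding only k items in memory (Algorithm R)."""
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            # Fill the reservoir with the first k items.
            sample.append(item)
        else:
            # Item i survives with probability k/(i+1): pick a random
            # slot in [0, i]; if it falls inside the reservoir, replace.
            j = random.randint(0, i)
            if j < k:
                sample[j] = item
    return sample
```

For example, `reservoir_sample(range(1000), 10)` returns 10 items, each with equal probability of being chosen, without ever knowing the stream held 1000 items in advance.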