Posts Categorized: Machine Learning

We visited the second annual RE•WORK Deep Learning conference in Boston earlier this month. In this debrief, I’m going to share my totally biased take on what was noteworthy from the conference. My observations will be in the context of trends in deep learning tech. As a bonus, since we were one of the few…

Takeaways and favorite papers from ICLR
Last week, three members of indico’s Advanced Development team attended the International Conference on Learning Representations (ICLR). ICLR focuses mainly on representation learning: building better features from raw data to solve complex problems. This covers ideas such as deep learning, kernel learning, compositional models, as…

A survey of six months’ rapid evolution (+ tips/hacks and code to fix the ugly stuff)
We’ve been using TensorFlow in daily research and engineering since it was released almost six months ago, and we’ve learned a lot of things along the way. Time for an update! Because there are many subjective articles on TensorFlow and…

In Part 1 of this mini-series, we explored various methods of data input for machine learning models using TensorFlow. In this article we’ll discuss a hybrid of those methods that allows for faster training, as well as some extensions to the demo in Part 1.
A Hybrid Approach
While great in theory, the…

TensorFlow is a great new deep learning framework provided by the team at Google Brain. It supports the symbolic construction of functions (similar to Theano) to perform some computation, generally a neural-network-based model. Unlike Theano, TensorFlow supports a number of ways to feed data into your machine learning model. The process of getting…
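The simplest of those feeding patterns, pushing batches from Python one training step at a time, boils down to a minibatch generator. Here is a hedged, framework-agnostic sketch in plain NumPy (not code from the post; `minibatches`, `X`, and `y` are our own illustrative names):

```python
import numpy as np

def minibatches(features, labels, batch_size, shuffle=True):
    """Yield (features, labels) minibatches -- the kind of batch a
    feed-style input pipeline pushes into a model one step at a time."""
    n = len(features)
    order = np.random.permutation(n) if shuffle else np.arange(n)
    for start in range(0, n, batch_size):
        idx = order[start:start + batch_size]
        yield features[idx], labels[idx]

# Toy dataset: 10 examples, 3 features each.
X = np.arange(30, dtype=np.float32).reshape(10, 3)
y = np.arange(10)
batches = list(minibatches(X, y, batch_size=4, shuffle=False))
```

Note that the final batch is smaller when the dataset size is not a multiple of the batch size; real pipelines either accept this or pad/drop the remainder.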

Welcome back to our two-part series on sequence-to-sequence models. In the previous post we saw how language models and sequence-to-sequence models can be used to handle data that varies over time. In this post, we will see how an attention mechanism can be added to the sequence-to-sequence model…
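The idea the post builds toward can be sketched with simplified dot-product attention in plain NumPy (this is a generic illustration, not the post's exact mechanism; `attend` and the toy shapes are our own):

```python
import numpy as np

def softmax(scores):
    e = np.exp(scores - scores.max())
    return e / e.sum()

def attend(encoder_states, decoder_state):
    """Score every encoder state against the current decoder state,
    normalize the scores, and return their weighted sum (the context
    vector) together with the attention weights."""
    scores = encoder_states @ decoder_state   # one score per input position
    weights = softmax(scores)                 # non-negative, sums to 1
    context = weights @ encoder_states        # blend of encoder states
    return context, weights

rng = np.random.default_rng(0)
enc = rng.standard_normal((5, 4))   # 5 input positions, hidden size 4
dec = rng.standard_normal(4)        # current decoder hidden state
context, weights = attend(enc, dec)
```

The decoder can then condition each output step on `context`, letting it focus on different input positions over time instead of compressing the whole input into one fixed vector.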

This blog post is the first in a two-part series covering sequence modeling using neural networks. Sequence-to-sequence problems include tasks such as machine translation, where an input sequence in one language is converted into a sequence in another language. In this post we will learn the foundations behind sequence-to-sequence models…

Earlier this month, Alec Radford, indico’s Head of Research, gave a talk at the Boston ML Forum. He presented an overview of recent work in generative modeling, including research that he, Luke Metz, and Soumith Chintala (FAIR) released in November 2015: Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. Video and slides below.

So many other frameworks already exist; why MXNet? MXNet is a modern interpretation and rewrite of a number of ideas circulating in the deep learning infrastructure community. It’s designed from the ground up to work well with multiple GPUs and multiple machines. When doing multi-device work in other frameworks, the end user frequently has to…
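The multi-device point can be made concrete with a small sketch of the data-parallel pattern such frameworks automate: shard the batch across devices, compute a gradient per shard, average, and apply one synchronized update. This is plain NumPy standing in for real devices; `grad_mse` and `data_parallel_step` are our own illustrative helpers:

```python
import numpy as np

def grad_mse(w, X, y):
    """Gradient of mean squared error for a linear model on one data shard."""
    return 2.0 * X.T @ (X @ w - y) / len(X)

def data_parallel_step(w, X, y, n_devices, lr=0.1):
    """Shard the batch across 'devices', compute a gradient per shard,
    average the gradients, and apply one synchronized update -- the
    bookkeeping a multi-GPU framework handles for you."""
    X_shards = np.array_split(X, n_devices)
    y_shards = np.array_split(y, n_devices)
    grads = [grad_mse(w, xs, ys) for xs, ys in zip(X_shards, y_shards)]
    return w - lr * np.mean(grads, axis=0)

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 3))
y = rng.standard_normal(8)
w0 = np.zeros(3)
w_two_devices = data_parallel_step(w0, X, y, n_devices=2)
w_one_device = data_parallel_step(w0, X, y, n_devices=1)
```

With equally sized shards, averaging per-shard gradients reproduces the single-device update exactly, so the result should not depend on the device count; the framework's job is to make that true efficiently across real hardware.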

Welcome back to our three-part series on computer vision. In the previous post, we discussed convolutional neural networks (CNNs). This post will assume that you have a basic understanding of CNNs; we encourage you to reread the first post if you want a refresher on convolutional networks.
Introduction to Transfer Learning
When we start…
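The core idea of transfer learning, reusing a pretrained network's features and training only a new head, can be sketched in plain NumPy (the random `W_frozen` stands in for a real pretrained CNN; `features` and `train_head` are our own illustrative names, not code from the post):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen, pretrained network: a fixed ReLU feature map.
W_frozen = rng.standard_normal((8, 16))

def features(x):
    return np.maximum(0.0, x @ W_frozen)  # W_frozen is never updated

def train_head(X, y, steps=300, lr=0.01):
    """Fit only the new classifier head (logistic regression) on top of
    the frozen features -- the pretrained weights stay untouched."""
    w = np.zeros(16)
    F = features(X)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(F @ w)))   # sigmoid predictions
        w -= lr * F.T @ (p - y) / len(X)     # logistic-loss gradient
    return w

X = rng.standard_normal((40, 8))
y = (X[:, 0] > 0).astype(float)  # toy binary labels
w_head = train_head(X, y)
```

Because only the small head is trained, this works with far less labeled data than training the whole network from scratch, which is the practical appeal of transfer learning.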