In this case study, we evaluate four different strategies for solving a problem with machine learning. In terms of both technical performance and practical factors such as cost and the amount of training data required, customized models built from semi-supervised “deep” features using transfer learning outperform models built from scratch, and rival state-of-the-art methods. Featured on KDnuggets.

What is transfer learning?

Transfer learning is the concept of training a model to solve one problem, and then using the knowledge it learned to help solve other problems. In practice, we pre-train a deep neural network using large datasets (many millions of examples), so that it learns generally useful feature representations. We can then copy those internal feature representations into new models, effectively transferring in knowledge from the deep model.
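The mechanics can be sketched in a few lines of plain Python. This is a hypothetical toy example, not the system described in this case study: a hand-picked weight matrix stands in for the feature representations a deep network would learn from millions of examples, and those weights stay frozen while only a small task-specific head is trained on the transferred features.

```python
import math

# Frozen "pretrained" layer: a fixed weight matrix standing in for
# feature representations learned on a large dataset. In real
# transfer learning these would be copied from a deep network.
PRETRAINED_W = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, -0.5]]

def extract_features(x):
    """Apply the frozen pretrained layer (never updated) with a ReLU."""
    return [max(0.0, sum(w * xi for w, xi in zip(row, x)))
            for row in PRETRAINED_W]

# Tiny labeled dataset for the *new* task (logical OR, for illustration).
data = [([0.0, 0.0], 0), ([0.0, 1.0], 1), ([1.0, 0.0], 1), ([1.0, 1.0], 1)]

# New task-specific head: a logistic regression over the transferred
# features. Only these parameters are learned.
head_w = [0.0] * len(PRETRAINED_W)
head_b = 0.0

def predict_prob(x):
    z = sum(w * f for w, f in zip(head_w, extract_features(x))) + head_b
    return 1.0 / (1.0 + math.exp(-z))

# Train the head with stochastic gradient descent; the pretrained
# weights stay fixed throughout.
for _ in range(2000):
    for x, y in data:
        feats = extract_features(x)
        err = predict_prob(x) - y
        head_w = [w - 0.5 * err * f for w, f in zip(head_w, feats)]
        head_b -= 0.5 * err

predictions = [round(predict_prob(x)) for x, _ in data]  # → [0, 1, 1, 1]
```

Only the four-weight head is fit to the new task's data, which is why transferred models typically need far fewer labeled examples than models trained from scratch.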

What is semi-supervised feature transfer?

Do you remember learning to speak your first language? I don’t, but as a father, it is fascinating to watch my children do it! How does learning a second language differ from learning the first? When children learn their first language, they’re simultaneously learning how to reason about things in the world and how to express those ideas in language. Learning an additional language is easier because we already have lots of knowledge about things in the world—we need only learn how to express it in the new language.