Representation and transfer learning for medical image analysis

Machine learning methods learn from examples to make predictions about new data. This is attractive in medical image analysis: manually annotating images is time-consuming, so an automated method can be far more efficient. Because these methods learn from annotations provided by experts, we want to use those annotations as efficiently as possible. Ideally we would reuse the same data across multiple projects, but this is tricky: most machine learning methods assume that the training examples resemble the unseen examples they must predict. A model trained on data from one scanner, for example, may perform worse on data from a different scanner, because the two scanners produce slightly different images.

In the transfer learning project we study methods to make data from different sources look more similar. In particular, we ask whether transfer learning problems can be solved with representation learning, a family of machine learning methods that aim to represent the training data in a new, more abstract way that retains the important information while discarding most of the noise. These methods learn descriptions that are well suited to the training data, and we suspect this could be useful for transfer learning as well. Some learned features may be specific to one source of training samples (one domain), while others may be shared between domains. We want to investigate whether we can encourage representation learners to learn these shared features, and whether those features can support a common model that transfers knowledge from one source to another.
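The scanner problem and the idea of a shared representation can be illustrated with a toy sketch. This is not the project's method, just a hypothetical example with synthetic data: a constant intensity offset stands in for a scanner difference, a nearest-centroid classifier trained on "scanner A" fails on "scanner B", and a crude per-domain standardisation (a stand-in for a learned shared representation) removes the scanner-specific offset while keeping the class structure.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # samples per class

def make_domain(shift):
    # Two tissue classes as 1-D intensity clusters; `shift` mimics a
    # scanner-specific intensity offset (hypothetical toy data).
    x0 = rng.normal(-1.0 + shift, 0.5, n)
    x1 = rng.normal(+1.0 + shift, 0.5, n)
    x = np.concatenate([x0, x1])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return x, y

x_src, y_src = make_domain(shift=0.0)  # "scanner A": training data
x_tgt, y_tgt = make_domain(shift=3.0)  # "scanner B": unseen data

def nearest_centroid_acc(x_train, y_train, x_test, y_test):
    # Classify each test point by the nearer class centroid of the training set.
    c0 = x_train[y_train == 0].mean()
    c1 = x_train[y_train == 1].mean()
    pred = (np.abs(x_test - c1) < np.abs(x_test - c0)).astype(float)
    return (pred == y_test).mean()

# Raw features: the offset pushes both target classes toward the
# source's class-1 centroid, so accuracy collapses to chance level.
acc_raw = nearest_centroid_acc(x_src, y_src, x_tgt, y_tgt)

# Per-domain standardisation: a simple shared "representation" that
# discards the scanner-specific offset but keeps the class structure.
z_src = (x_src - x_src.mean()) / x_src.std()
z_tgt = (x_tgt - x_tgt.mean()) / x_tgt.std()
acc_aligned = nearest_centroid_acc(z_src, y_src, z_tgt, y_tgt)

print(f"raw: {acc_raw:.2f}  aligned: {acc_aligned:.2f}")
```

Real representation learners replace the hand-picked standardisation with a learned mapping, but the goal is the same: a description in which the domain-specific differences are gone and the shared, task-relevant features remain.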