Computer Science > Neural and Evolutionary Computing

Title: Representational Distance Learning for Deep Neural Networks

Abstract: Deep neural networks (DNNs) provide useful models of visual representational
transformations. We present a method that enables a DNN (student) to learn from
the internal representational spaces of a reference model (teacher), which
could be another DNN or, in the future, a biological brain. Representational
spaces of the student and the teacher are characterized by representational
distance matrices (RDMs). We propose representational distance learning (RDL),
a stochastic gradient descent method that drives the RDMs of the student to
approximate the RDMs of the teacher. We demonstrate that RDL is competitive
with other transfer learning techniques on two publicly available benchmark
computer vision datasets (MNIST and CIFAR-100), while allowing for
architectural differences between student and teacher. By pulling the student's
RDMs towards those of the teacher, RDL significantly improved visual
classification performance compared with baseline networks that did not use
transfer learning. In the future, RDL may enable combined supervised training
of deep neural networks using task constraints (e.g., images and category
labels) and constraints from brain-activity measurements, so as to build models
that replicate the internal representational spaces of biological brains.
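
To make the RDM and RDL mechanics concrete, below is a minimal PyTorch sketch under stated assumptions: the correlation-distance measure, the squared-error RDM mismatch, and the weighting parameter `lam` are illustrative choices, not necessarily the paper's exact formulation.

```python
import torch

def rdm(acts, eps=1e-8):
    # Representational distance matrix for a mini-batch:
    # `acts` has one row of unit activations per stimulus.
    # Distance = 1 - Pearson correlation between rows
    # (an assumed choice of distance measure).
    a = acts - acts.mean(dim=1, keepdim=True)
    a = a / (a.norm(dim=1, keepdim=True) + eps)
    return 1.0 - a @ a.t()

def rdl_loss(student_acts, teacher_acts):
    # Mismatch between the student's and teacher's RDMs;
    # minimizing this pulls the student's representational
    # geometry toward the teacher's (assumed squared-error form).
    return ((rdm(student_acts) - rdm(teacher_acts)) ** 2).mean()

# Hypothetical combined objective: a task loss (e.g., cross-entropy
# on category labels) plus the RDM mismatch, weighted by `lam`:
# loss = task_loss + lam * rdl_loss(student_h, teacher_h)
```

Note that both RDMs are n-by-n for a batch of n stimuli regardless of layer width, which is what lets RDL tolerate architectural differences between student and teacher.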