Tinne Tuytelaars

CNNs have broken all records on virtually every computer vision benchmark we know. However, the current paradigm is to train a new model (either from scratch or by finetuning a pretrained one) for each new task and each new dataset that comes along. This stands in strong contrast to human learning, where multiple tasks are solved with a single brain, and knowledge from previous tasks is reused when learning a new one. In this talk, I'll present our efforts to scale up to multiple sequential tasks, in contrast to the traditional single-task supervised learning scheme. In a lifelong learning setting, an agent (learner) must learn multiple tasks sequentially during its lifetime. To keep this scenario scalable, data from previous tasks should not be stored, and the knowledge acquired from those tasks should be retained rather than forgotten, so that the agent can still perform all the tasks it has learned.
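The protocol sketched in the last sentences can be made concrete with a toy example. The following is a minimal, purely illustrative Python sketch (all names and the one-weight model are hypothetical, not the talk's method): tasks arrive one at a time, earlier tasks' data is discarded, and naive sequential training then exhibits the forgetting the abstract warns against.

```python
# Minimal sketch of the sequential (lifelong) learning protocol:
# tasks arrive one at a time, and the learner may not revisit data
# from earlier tasks. The toy model and data are illustrative only.

def make_task(slope):
    """Toy 1-D regression task: y = slope * x (hypothetical data)."""
    xs = [i / 10 for i in range(1, 11)]
    return [(x, slope * x) for x in xs]

def train_on_task(w, data, lr=0.1, epochs=100):
    """Plain SGD on a single weight; previous-task data is not stored."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x
            w -= lr * grad
    return w

def task_error(w, data):
    """Mean squared error of the current weight on one task."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

w = 0.0
task_a = make_task(slope=2.0)
task_b = make_task(slope=-1.0)

w = train_on_task(w, task_a)
err_a_before = task_error(w, task_a)   # low: task A was just learned

w = train_on_task(w, task_b)           # task A's data is no longer available
err_a_after = task_error(w, task_a)    # high: catastrophic forgetting

print(f"task A error before/after task B: {err_a_before:.3f} / {err_a_after:.3f}")
```

Under this naive scheme, the error on task A jumps once task B is learned; lifelong learning methods aim to prevent exactly this degradation without storing task A's data.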