Searching for a given target object in a scene requires not only detecting the target object if it is visible, but also identifying promising locations for search if it is not. The quantity that measures how interesting an image region is for a given task is called top-down saliency.

Saliency predictions are mostly used to reduce the computational burden of image processing: instead of applying expensive processing steps like object classification exhaustively over the whole image, we can use saliency maps to choose a small set of salient regions for further processing and ignore the other, uninteresting image parts.

We investigate whether convolutional neural networks can be trained on human search fixations to predict top-down saliency for search. To this end, we collected a dataset of fixations from 15 participants who searched for objects from three different categories. As the data also contains many task-unrelated fixations, we developed several methods to facilitate learning task-specific behavior from it. We also evaluated the benefit of training on fixation data versus using the segmentation masks of target objects and showed that saliency predictions from fixation-trained models give better results for pruning region proposals.
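As a concrete illustration, pruning with a saliency map can be as simple as ranking each candidate region by the saliency it covers and discarding the rest. The following minimal sketch assumes mean per-box saliency as the score and a fixed keep fraction; both are illustrative choices, not the exact procedure from the paper.

```python
import numpy as np

def prune_proposals(saliency_map, proposals, keep_fraction=0.1):
    """Keep the most salient fraction of bounding-box proposals.

    saliency_map: 2D array of per-pixel saliency values.
    proposals:    list of (x0, y0, x1, y1) boxes in pixel coordinates.
    The scoring rule (mean saliency inside the box) and the keep
    fraction are illustrative assumptions, not the paper's setup.
    """
    scores = np.array([
        saliency_map[y0:y1, x0:x1].mean()
        for (x0, y0, x1, y1) in proposals
    ])
    order = np.argsort(scores)[::-1]              # most salient first
    n_keep = max(1, int(keep_fraction * len(proposals)))
    return [proposals[i] for i in order[:n_keep]]
```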

2016

Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, October 2016 (conference)

Abstract

One of the central tasks for a household robot is searching for specific objects. This requires not only localizing the target object but also identifying promising search locations in the scene if the target is not immediately visible. As computation time and hardware resources are usually limited in robotics, it is desirable to avoid expensive visual processing steps that are exhaustively applied over the entire image. The human visual system can quickly select those image locations that have to be processed in detail for a given task. This allows us to cope with huge amounts of information and to deploy the limited capacities of our visual system efficiently. In this paper, we therefore propose to use human fixation data to train a top-down saliency model that predicts relevant image locations when searching for specific objects. We show that the learned model can successfully prune bounding box proposals without rejecting the ground truth object locations. In this respect, the proposed model outperforms a model that is trained only on the ground truth segmentations of the target object instead of fixation data.
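One way to quantify "pruning without rejecting the ground truth" is to measure how many ground-truth boxes remain covered by at least one kept proposal. The sketch below uses the common intersection-over-union criterion with a 0.5 threshold; the helper names and the threshold are assumptions for illustration, not the paper's exact evaluation protocol.

```python
def iou(a, b):
    """Intersection over union of two (x0, y0, x1, y1) boxes."""
    x0, y0 = max(a[0], b[0]), max(a[1], b[1])
    x1, y1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x1 - x0) * max(0, y1 - y0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def recall_after_pruning(kept_proposals, gt_boxes, iou_threshold=0.5):
    """Fraction of ground-truth boxes still matched by a kept proposal."""
    hits = sum(
        any(iou(p, gt) >= iou_threshold for p in kept_proposals)
        for gt in gt_boxes
    )
    return hits / float(len(gt_boxes))
```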

2015

Detecting and identifying the different objects in an image quickly and reliably is an important skill for interacting with one's environment. The main problem is that, in theory, all parts of an image have to be searched for objects on many different scales to make sure that no object instance is missed. It takes considerable time and effort, however, to actually classify the content of a given image region, and both the time and the computational capacity that an agent can spend on classification are limited. Humans use a process called visual attention to quickly decide which locations of an image need to be processed in detail and which can be ignored. This allows us to deal with the huge amount of visual information and to employ the capacities of our visual system efficiently.

For computer vision, researchers have to deal with exactly the same problems, so learning from human behaviour provides a promising way to improve existing algorithms. In the presented master's thesis, a model is trained on eye-tracking data recorded from 15 participants who were asked to search images for objects from three different categories. It uses a deep convolutional neural network to extract features from the input image that are then combined to form a saliency map. This map indicates which image regions are interesting when searching for the given target object and can thus be used to reduce the parts of the image that have to be processed in detail. The method is based on a recent publication by Kümmerer et al., but in contrast to the original method, which computes general, task-independent saliency, the presented model is supposed to respond differently when searching for different target categories.
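To make the architecture concrete: in the spirit of Kümmerer et al.'s readout approach, one can freeze a pretrained convolutional network, combine its feature maps with a learned 1x1 convolution, and normalize the result into a probability distribution over image locations, trained to maximize the likelihood of recorded fixations. The sketch below assumes a frozen VGG-19 backbone and a single linear readout; the exact layer selection, blurring, and center-bias handling of the thesis model are omitted.

```python
import torch
import torch.nn as nn
from torchvision import models

class SaliencyReadout(nn.Module):
    """Frozen VGG-19 features combined by a 1x1 convolution into a
    log probability distribution over image locations (hypothetical
    sketch, not the thesis implementation)."""

    def __init__(self):
        super().__init__()
        vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
        self.features = vgg.features[:30]   # up to a late conv block
        for p in self.features.parameters():
            p.requires_grad = False         # feature extractor stays fixed
        self.readout = nn.Conv2d(512, 1, kernel_size=1)  # linear combination

    def forward(self, image):
        f = self.features(image)            # (B, 512, h, w) feature maps
        s = self.readout(f)                 # (B, 1, h, w) raw saliency
        b, _, h, w = s.shape
        # normalize so each map is a log-density over pixel locations
        return torch.log_softmax(s.view(b, -1), dim=1).view(b, 1, h, w)

# Training signal: negative log-likelihood of the fixated locations,
# with (fix_y, fix_x) giving each fixation's position in the map:
# loss = -log_density[batch_idx, 0, fix_y, fix_x].mean()
```

Under this setup, training separate readout weights per target category would yield the category-specific saliency maps the thesis aims for, while the shared feature extractor stays unchanged.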

2015

Our goal is to understand the principles of Perception, Action and Learning in autonomous systems that successfully interact with complex environments, and to use this understanding to design future systems.