Depth Layers from Occlusions

by Arno Schödl and Irfan Essa

We present a method to extract relative depth information from an
uncalibrated monocular video sequence. Our method detects occlusions caused by
an object moving in a static scene to infer relative depth relationships between
scene parts. Our approach does not rely on any strong assumptions about the
object or the scene to aid in this segmentation into layers. In general, the
problem of building relative depth relationships from occlusion events is
underconstrained, even in the absence of observation noise. We therefore use a
minimum description length criterion to reliably estimate layer opacities and
their depth relationships in the absence of hard constraints. Our method
extends previously published approaches, which are restricted to a particular
type of moving object or require strong image edges to allow an
a priori segmentation of the scene. We also discuss ideas on how to extend our
algorithm to make use of a richer set of observations.
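The abstract mentions a minimum description length (MDL) criterion but gives no formulation. As a generic illustration of the principle only, not the paper's actual algorithm, a two-part MDL code trades model complexity against residual coding cost and picks the hypothesis with the shortest total description. The 1-D piecewise-constant setting, the `description_length` helper, and the 16-bit per-parameter cost below are all illustrative assumptions:

```python
import math

def description_length(signal, boundaries, bits_per_param=16):
    """Two-part MDL cost: bits to encode the model (segment boundaries
    and per-segment means) plus bits to encode the residuals under a
    Gaussian code. Absolute values can be negative for a continuous
    model; only differences between hypotheses matter."""
    edges = [0] + list(boundaries) + [len(signal)]
    # One parameter per boundary plus one mean per segment.
    model_bits = bits_per_param * (len(boundaries) + len(edges) - 1)
    data_bits = 0.0
    for a, b in zip(edges, edges[1:]):
        seg = signal[a:b]
        mean = sum(seg) / len(seg)
        var = max(sum((x - mean) ** 2 for x in seg) / len(seg), 1e-6)
        data_bits += 0.5 * len(seg) * math.log2(2 * math.pi * math.e * var)
    return model_bits + data_bits

# Toy signal: two flat regions with a small alternating perturbation.
sig = [0.1 * (-1) ** i for i in range(20)] + \
      [5.0 + 0.1 * (-1) ** i for i in range(20)]

# Competing hypotheses: no boundary, the true boundary, and an overfit model.
candidates = [[], [20], [5, 10, 15, 20, 25, 30, 35]]
best = min(candidates, key=lambda b: description_length(sig, b))
# MDL selects the single true boundary: neither underfit nor overfit.
```

The same trade-off lets the paper's method commit to a layer ordering without hard constraints: a hypothesis that explains the occlusion observations with fewer, simpler layers is preferred over one that fits them with many ad-hoc layers.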