We present a novel, efficient multi-view depth map enhancement
method, applied as a post-processing step to initially estimated depth
maps. The method is based on edge-, motion-, and scene-depth-range-adaptive
median filtering and improves the
quality of virtual view synthesis. To enforce the spatial, temporal
and inter-view coherence in the multi-view depth maps, the median
filtering is applied to 4-dimensional windows that consist of the spatially
neighboring depth map values taken at different viewpoints
and time instants. A fast iterative block segmentation approach is
adopted to adaptively shrink these windows in the presence of edges
and motion, preserving sharpness, enabling realistic rendering, and
improving compression efficiency. We show that our
enhancement method reduces the coding bit-rate required to represent
the depth maps and improves the quality of views synthesized at
arbitrary virtual viewpoints.
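The adaptive 4D median filtering described above can be sketched as follows. This is an illustrative approximation only, not the authors' implementation: the array layout (view, frame, row, column), the window sizes, and the simple depth-spread edge test standing in for the paper's iterative block segmentation are all assumptions.

```python
import numpy as np

def adaptive_median_4d(depth, v, t, y, x, half=1, edge_thresh=10.0):
    """Median-filter one depth sample over a 4D window spanning
    neighboring views, frames, and spatial positions.

    depth: array of shape (views, frames, height, width).
    Near a detected depth discontinuity the window is shrunk to the
    spatial neighborhood of the current view and frame; in smooth
    regions it extends across adjacent views and frames to enforce
    inter-view and temporal coherence.
    """
    V, T, H, W = depth.shape
    y0, y1 = max(0, y - half), min(H, y + half + 1)
    x0, x1 = max(0, x - half), min(W, x + half + 1)

    spatial = depth[v, t, y0:y1, x0:x1]
    # Simplified edge test: a large depth spread in the spatial
    # neighborhood signals a discontinuity (hypothetical threshold).
    if spatial.max() - spatial.min() > edge_thresh:
        # Near an edge: keep only the spatial window so the
        # discontinuity is not smeared across views or frames.
        window = spatial
    else:
        # Smooth region: grow the window across neighboring
        # views and frames as well.
        v0, v1 = max(0, v - 1), min(V, v + 2)
        t0, t1 = max(0, t - 1), min(T, t + 2)
        window = depth[v0:v1, t0:t1, y0:y1, x0:x1]
    return float(np.median(window))
```

In smooth regions the median pools samples from neighboring viewpoints and time instants, suppressing estimation noise consistently across views; at edges the shrunken window leaves depth discontinuities sharp.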

Description:

Closed access.

Sponsor:

This work was developed in part within VISNET II, a European Network
of Excellence (http://www.visnetnoe.org), funded under the European
Commission IST FP6 programme.