A Review of Statistical Approaches to Level Set Segmentation: Integrating Color, Texture, Motion and Shape

Abstract

Since their introduction as a means of front propagation and their first application to edge-based segmentation in the early 1990s, level set methods have become increasingly popular as a general framework for image segmentation. In this paper, we present a survey of a specific class of region-based level set segmentation methods and clarify how they can all be derived from a common statistical framework.

Region-based segmentation schemes aim at partitioning the image domain by progressively fitting statistical models to the intensity, color, texture or motion in each of a set of regions. In contrast to edge-based schemes such as the classical Snakes, region-based methods tend to be less sensitive to noise. For typical images, the respective cost functionals tend to have fewer local minima, which makes them particularly well-suited for local optimization methods such as the level set method.

We detail a general statistical formulation for level set segmentation. Subsequently, we clarify how the integration of various low-level criteria leads to a set of cost functionals. We point out relations between the different segmentation schemes. In experimental results, we demonstrate how the level set function is driven to partition the image plane into domains of coherent color, texture, dynamic texture or motion. Moreover, the Bayesian formulation makes it possible to introduce prior shape knowledge into the level set method. We briefly review a number of advances in this domain.
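To make the region-based idea concrete, the following is a minimal sketch of one gradient-descent step for a two-phase piecewise-constant functional in the Chan–Vese style, where the statistical model fitted to each region is simply its mean intensity. All function and parameter names (`chan_vese_step`, `mu`, `dt`, `eps`) are illustrative assumptions, not the notation of any particular paper surveyed here.

```python
import numpy as np

def chan_vese_step(phi, img, mu=0.2, dt=0.5, eps=1.0):
    """One descent step of a two-phase piecewise-constant region-based
    level set functional (Chan-Vese style). Parameter values are
    illustrative, not tuned."""
    # Smoothed Heaviside of the level set function: soft inside/outside mask
    H = 0.5 * (1 + (2 / np.pi) * np.arctan(phi / eps))
    # Fitted statistical model: one mean intensity per region
    c_in = (img * H).sum() / (H.sum() + 1e-8)
    c_out = (img * (1 - H)).sum() / ((1 - H).sum() + 1e-8)
    # Smoothed Dirac delta concentrates the update near the contour
    delta = (eps / np.pi) / (eps**2 + phi**2)
    # Curvature term (length regularization): div(grad(phi)/|grad(phi)|)
    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx**2 + gy**2) + 1e-8
    kyy, _ = np.gradient(gy / norm)
    _, kxx = np.gradient(gx / norm)
    curvature = kxx + kyy
    # Region competition (fit to inside vs. outside model) plus length penalty
    force = -(img - c_in)**2 + (img - c_out)**2 + mu * curvature
    return phi + dt * delta * force
```

Iterating this update drives the zero level set of `phi` toward the boundary between the two regions of coherent intensity; replacing the two region means by richer statistical models (color, texture, or motion densities) yields the family of cost functionals discussed in this survey.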
