Boundary Detection Using Color-Opponency

Kaifu Yang, Shaobing Gao, Chaoyi Li, Yongjie Li *

University of Electronic Science and Technology of China
Shanghai Institutes for Biological Sciences, CAS

Abstract

Brightness and color are two basic visual features integrated by the human visual system (HVS) to gain a better understanding of natural color scenes. Aiming to combine these two cues to maximize the reliability of boundary detection in natural scenes, this work proposes a new framework based on the color-opponent mechanisms of a certain type of color-sensitive double-opponent (DO) cell in the primary visual cortex (V1) of the HVS. These DO cells have oriented receptive fields (RFs) with both chromatically and spatially opponent structure. The proposed framework is a feedforward hierarchical model whose stages directly correspond to the color-opponent mechanisms along the visual pathway from the retina to V1. In addition, we employ a spatial sparseness constraint (SSC) on neural responses to further suppress the unwanted edges of texture elements. Experimental results show that the DO cells we model can flexibly capture both the chromatic and achromatic boundaries of salient objects in complex scenes when the cone inputs to the DO cells are unbalanced. Meanwhile, the SSC operator further improves performance by suppressing redundant texture edges. With competitive contour detection accuracy, the proposed model has the additional advantage of a quite simple implementation with low computational cost.
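The oriented DO response with unbalanced cone weights can be sketched as follows. This is a minimal toy version, not the authors' actual implementation: it assumes a single R/G opponent channel, a hypothetical cone-weight ratio `k` (where `k = 1` gives balanced, purely chromatic inputs and `k < 1` leaves a luminance residue so the cell also responds to achromatic edges), and a Gaussian-derivative approximation of the side-by-side antagonistic RF.

```python
import numpy as np
from scipy import ndimage

def oriented_do_response(img_rgb, theta=0.0, sigma=2.0, k=0.7):
    """Toy oriented double-opponent (DO) response on an R/G channel.

    k is a hypothetical cone-weight ratio: k = 1 gives balanced
    (purely chromatic) R and G inputs; k < 1 models unbalanced cone
    inputs, so the cell also responds to achromatic edges.
    """
    r = img_rgb[..., 0].astype(float)
    g = img_rgb[..., 1].astype(float)
    so = r - k * g                        # single-opponent (R+/G-) signal
    blurred = ndimage.gaussian_filter(so, sigma)
    gy, gx = np.gradient(blurred)         # spatial opponency via a derivative
    return np.abs(np.cos(theta) * gx + np.sin(theta) * gy)

# A vertical red/green boundary should elicit a strong response.
img = np.zeros((32, 32, 3))
img[:, :16, 0] = 1.0                      # left half: red
img[:, 16:, 1] = 1.0                      # right half: green
resp = oriented_do_response(img, theta=0.0)
```

Because `k < 1` leaves the two opponent signals unbalanced, the same filter applied to a gray-level step also produces a nonzero response, which is the property the model exploits to capture achromatic boundaries.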

Figure: The receptive fields of single-opponent cells of Type I (a) and Type II (b) at the retinal ganglion cell (RGC) and LGN levels; double-opponent cells in V1 with concentric RFs (c); and oriented double-opponent cells in V1 with side-by-side spatially antagonistic regions and unbalanced cone weights (d). (e) An illustration showing that the center-only RF of a Type II cell in the LGN can be constructed by differencing two center-surround ganglion cells.
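The construction in panel (e) can be illustrated with a simple linear sketch. Assuming two concentric center-surround ganglion cells that share the same spatial Gaussians but differ in surround strength (the weights 0.9 and 0.3 below are hypothetical, chosen only for illustration), a suitably weighted difference cancels the surround term exactly and leaves a center-only, Type II-like profile:

```python
import numpy as np

def gaussian(x, sigma):
    return np.exp(-x**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

x = np.linspace(-10, 10, 401)
sigma_c, sigma_s = 1.0, 3.0  # center and surround scales (illustrative)

# Two center-surround ganglion RFs with shared spatial Gaussians but
# different (hypothetical) surround strengths.
rgc1 = gaussian(x, sigma_c) - 0.9 * gaussian(x, sigma_s)
rgc2 = gaussian(x, sigma_c) - 0.3 * gaussian(x, sigma_s)

# Weighting rgc1 by the ratio of surround strengths cancels the
# surround exactly, leaving a pure center Gaussian (Type II-like RF).
type2 = rgc2 - (0.3 / 0.9) * rgc1      # = (2/3) * gaussian(x, sigma_c)
```

Algebraically, `rgc2 - (0.3/0.9) * rgc1 = (1 - 1/3) * G(x; sigma_c)`, so the surround contribution vanishes and only a center lobe remains.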

Results

Figure: Comparison of SCO results with various cone-input weights. The last column presents the F-measure of each boundary map listed in the third to eighth columns. The optimal results (marked by bold black rectangles) correspond to the maximum F-measure.

Figure: Example results compared with Pb (Martin et al., PAMI, 2004). The thresholds used here correspond to the maximal F-measure for each image.