It is widely known that edges in natural scenes arise from both luminance and texture differences between objects. However, little effort has been devoted to studying the statistical properties of such edges. Computing these statistics could provide important insights into how the visual system processes natural scenes.

Ten high-resolution natural scenes were selected from the McGill Color Image Database. Three human subjects traced occlusion boundaries on grayscale versions of the images. Patches of 80 × 40 pixels, centered on the marked edges, were extracted for analysis. The 5000 extracted edge patches were then aligned in polarity (brighter side on top).
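The polarity-alignment step can be sketched as follows. This is a minimal illustration, not the study's actual code: it assumes the patch is a 2-D array split horizontally into an upper and a lower half, and flips it vertically when the lower half is brighter.

```python
import numpy as np

def align_polarity(patch):
    """Return the patch flipped vertically, if needed, so that the
    brighter half is on top. Sketch only; the study's exact alignment
    procedure is not specified in the abstract."""
    h = patch.shape[0] // 2
    if patch[:h].mean() < patch[h:].mean():
        return patch[::-1]  # flip rows so the brighter half is on top
    return patch
```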

We analyzed the edges in both linear and log luminance domains. First-order statistics revealed that the mean edge is a blurred step in luminance, with greater variance and less skewness in the brighter half than in the darker half. The distribution of Michelson contrast between the brighter and darker halves is approximately uniform, with a bias towards low contrast. We also classified the edge patches into four categories: (1) Luminance-defined edges, which have high contrast and small standard deviation. (2) Texture-defined edges, which have low contrast and high standard deviation. (3) Luminance-textured edges, which have high contrast and large standard deviation. (4) Object-defined edges, which exhibit neither a difference in luminance nor a difference in texture between the two halves; these edges likely contain boundaries that subjects marked via interpolation/extrapolation based on object recognition. Approximately 40% of the edges were luminance-defined, 10% were texture-defined, 32% were luminance-textured, and 18% were object-defined. We discuss the implications of these findings for neural and computational coding. In particular, edge detectors and various wavelets have been tuned to detect luminance-defined edges; such templates would fail on roughly 30% of occlusion boundaries (the texture- and object-defined edges).
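The four-way classification above can be sketched as a simple rule on two patch statistics: the Michelson contrast between the halves, C = (L_bright − L_dark)/(L_bright + L_dark), and the within-half standard deviation as a proxy for texture. The threshold values below are hypothetical; the abstract does not state the criteria actually used.

```python
import numpy as np

def classify_edge_patch(patch, contrast_thresh=0.2, std_thresh=0.15):
    """Classify a polarity-aligned edge patch (brighter half on top)
    into one of the four categories. Thresholds are hypothetical."""
    h = patch.shape[0] // 2
    top, bottom = patch[:h], patch[h:]
    lum_top, lum_bot = top.mean(), bottom.mean()
    # Michelson contrast between the brighter (top) and darker halves
    contrast = (lum_top - lum_bot) / (lum_top + lum_bot)
    # Texture proxy: the larger of the two halves' standard deviations
    texture = max(top.std(), bottom.std())
    high_contrast = contrast >= contrast_thresh
    high_texture = texture >= std_thresh
    if high_contrast and not high_texture:
        return "luminance-defined"
    if not high_contrast and high_texture:
        return "texture-defined"
    if high_contrast and high_texture:
        return "luminance-textured"
    return "object-defined"
```

On synthetic patches, a uniform bright/dark step falls in the luminance-defined class, while a patch whose halves share a mean but differ in variance falls in the texture-defined class.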