3 Challenge
Retrieving images from large and varied collections using image content as a key. The image collections are diverse and often poorly indexed; unfortunately, image retrieval systems have not kept pace with the collections they are searching.
Approach: transformation from the raw pixel data to a small set of image regions that are coherent in color and texture.

4 Limitations of Image Retrieval Systems
They find images containing particular objects based only on low-level features, with little regard for the spatial organization of those features.
Systems based on user querying are often unintuitive.

5 Introduction
Clustering pixels in a joint color-texture-position feature space.
The segmentation algorithm is fully automatic and has been run on a collection of 10,000 natural images.
The user is allowed to view the internal representation of the submitted image and the query results.

6 What is Blobworld?
A new framework for image retrieval based on segmentation into regions and querying using properties of these regions. The regions generally correspond to objects or parts of objects.
Blobworld does not exist completely in the "thing" domain; it recognizes the nature of images as combinations of objects, and querying in Blobworld is more meaningful than it is with simple "stuff" representations.

7 Image Segmentation
Segmentation algorithms make mistakes, degrading the performance of any system that uses the segmentation results. As a result, designers of image retrieval systems have generally chosen to use global image properties, which do not depend on accurate segmentation.
However, segmenting an image allows us to access the image at the level of objects.

8 Related Work
Color histograms: encode the spatial correlation of color-bin pairs.
Multiresolution wavelet decompositions: perform queries based on iconic matching.
EM algorithm: estimate the parameters of a mixture-of-Gaussians model of the joint distribution of pixel color and texture features.

9 EM Algorithm
In order to segment each image automatically, we model the joint distribution of color, texture, and position features with a mixture of Gaussians. We use the Expectation-Maximization (EM) algorithm to estimate the parameters of this model; the resulting pixel-cluster memberships provide a segmentation of the image.
After the image is segmented into regions, a description of each region's color and texture characteristics is produced.
In a querying task, the user can access the regions directly, in order to see the segmentation of the query image and specify which aspects of the image are important to the query. When query results are returned, the user also sees the Blobworld representation of each retrieved image; this information greatly assists in refining the query.
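The segmentation step above can be sketched at a high level with an off-the-shelf Gaussian mixture. This is a minimal illustration, not the authors' implementation: it assumes scikit-learn is available, the toy 6-D features stand in for real per-pixel color/texture/position vectors, and `segment_pixels` is a name invented here.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def segment_pixels(features, k=4, seed=0):
    """Cluster per-pixel feature vectors (N, D) with a mixture of Gaussians.

    Returns one cluster label per pixel; reshaping the labels back to
    (H, W) gives the segmentation.
    """
    gmm = GaussianMixture(n_components=k, covariance_type="full",
                          random_state=seed)
    return gmm.fit_predict(features)

# Toy stand-in for pixel features: two well-separated 6-D clusters.
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0.0, 0.1, (100, 6)),
                   rng.normal(5.0, 0.1, (100, 6))])
labels = segment_pixels(feats, k=2)
```

With well-separated features, EM cleanly assigns each group of points to its own Gaussian.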

11 Feature Extraction
Select an appropriate scale for each pixel and extract color, texture, and position features for that pixel at the selected scale.
Group pixels into regions by modeling the distribution of pixel features with a mixture of Gaussians using Expectation-Maximization.
Describe the color distribution and texture of each region for use in a query.

12 Extracting Color Features
Each image pixel has a three-dimensional color descriptor in the L*a*b* color space. This color space is approximately perceptually uniform; thus, distances in this space are meaningful.
We smooth the color features in order to avoid over-segmenting regions such as tiger stripes based on local color variation; otherwise, each stripe would become its own region.
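The smoothing step can be sketched as a per-channel Gaussian blur of the L*a*b* values. This is a simplified illustration assuming SciPy is available; it uses a single global sigma, whereas Blobworld selects the scale per pixel, and `smooth_color` and the toy striped patch are invented here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_color(lab_image, sigma):
    """Gaussian-smooth each L*a*b* channel of an (H, W, 3) image independently."""
    out = np.empty(lab_image.shape, dtype=float)
    for c in range(3):
        out[..., c] = gaussian_filter(lab_image[..., c].astype(float), sigma)
    return out

# A striped "tiger" patch: smoothing at a scale wider than the stripe
# period pulls the alternating colors toward their local average, so
# the stripes no longer look like separate color regions.
stripes = np.zeros((8, 8, 3))
stripes[:, ::2, 0] = 100.0   # alternating bright/dark L* columns
smoothed = smooth_color(stripes, sigma=2.0)
```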

13 Extracting Texture Features
Color is a point property; texture is a local neighborhood property.
The first requirement could be met to an arbitrary degree of satisfaction by using multi-orientation filter banks such as steerable filters; we chose a simpler method that is sufficient for our purposes.
The second requirement, the problem of scale selection, has not received the same level of attention.

14 Scale Selection
Uses a local image property known as polarity.
The polarity is a measure of the extent to which the gradient vectors in a certain neighborhood all point in the same direction. The polarity at a given pixel is computed with respect to the dominant orientation in the neighborhood of that pixel.
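Polarity can be sketched as follows. This is a simplified single-window version of the measure p = |E+ − E−| / (E+ + E−), not the authors' exact per-pixel, Gaussian-weighted computation; `polarity` and the toy patches are invented for illustration.

```python
import numpy as np

def polarity(patch):
    """How uniformly a grayscale patch's gradients point to one side
    of the dominant orientation: 1 for an edge, lower for texture."""
    gy, gx = np.gradient(patch.astype(float))
    # Dominant orientation: principal eigenvector of the second moment matrix.
    m = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    _, v = np.linalg.eigh(m)
    n = v[:, -1]                      # unit vector along the dominant direction
    proj = gx * n[0] + gy * n[1]      # signed projection of each gradient onto n
    e_pos = np.sum(np.clip(proj, 0, None))
    e_neg = np.sum(np.clip(-proj, 0, None))
    total = e_pos + e_neg
    return abs(e_pos - e_neg) / total if total > 0 else 0.0

# A step edge: all gradients point the same way, so polarity is near 1.
edge = np.hstack([np.zeros((8, 4)), np.ones((8, 4))])
# Stripes: gradients alternate sign along the same axis, so polarity is low.
stripe = np.tile([0.0, 0.0, 1.0, 1.0], (8, 2))
```

This matches the behavior described on the "Factors affecting Polarity" slide: edges keep p near 1, while 1D flow such as stripes drives p down.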

15 Fig. 3. Five sample patches from a zebra image. Both (a) σ = 1.5 and (b) σ = 2.5 have stripes (1D flow) of different scales and orientations, (c) is a region of 2D texture with σ = 1.5, (d) contains an edge with σ = 0, and (e) is a uniform region with σ = 0.

18 Factors Affecting Polarity
Edge: the presence of an edge is signaled by p holding values close to 1 for all σ.
Texture: in regions with 2D texture or 1D flow, p decays with σ: as the window size increases, pixels with gradients in multiple directions are included in the window, so the dominance of any one orientation decreases.
Uniform: when a neighborhood has constant intensity, p takes on arbitrary values, since the gradient vectors have negligible magnitudes and arbitrary angles.

21 Combining Color, Texture, and Position Features
The final color/texture descriptor for a given pixel consists of six values: three for color and three for texture.
The three color components are the L*a*b* coordinates found after spatial averaging using a Gaussian at the selected scale.
The three texture components are a·c, p·c, and c, computed at the selected scale; the anisotropy a and polarity p are each modulated by the contrast c, since they are meaningless in regions of low contrast.
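Assembling the six-value descriptor is straightforward; a minimal sketch (the function name is invented here):

```python
import numpy as np

def pixel_descriptor(L, a_star, b_star, anisotropy, pol, contrast):
    """Six-value Blobworld pixel descriptor: smoothed L*a*b* color plus
    texture, with anisotropy and polarity modulated by contrast so they
    vanish where contrast (and hence their meaning) is low."""
    return np.array([L, a_star, b_star,
                     anisotropy * contrast,
                     pol * contrast,
                     contrast])

# In a zero-contrast (uniform) region, the modulated texture terms drop out.
d = pixel_descriptor(50.0, 0.0, 0.0, 0.8, 0.9, 0.0)
```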

22 EM Algorithm
The EM algorithm is used for finding maximum-likelihood parameter estimates when there is missing or incomplete data. Here, the missing data is the Gaussian cluster to which each point in feature space belongs.
We estimate values to fill in for the incomplete data (the "E step"), compute the maximum-likelihood parameter estimates using this data (the "M step"), and repeat until a suitable stopping criterion is reached.
In the case where EM is applied to learning the parameters of a mixture of Gaussians, both steps can be combined into a single update step.
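The combined E/M update can be sketched in NumPy. This is a simplified version restricted to isotropic (spherical) Gaussians, not the paper's full-covariance model; `em_step` and the toy 2-D data are invented for illustration.

```python
import numpy as np

def em_step(x, w, mu, var):
    """One combined E/M update for a mixture of isotropic Gaussians.

    x: (N, D) features; w: (K,) mixture weights; mu: (K, D) means;
    var: (K,) isotropic variances. Returns updated (w, mu, var).
    """
    N, D = x.shape
    # E step: responsibility of each cluster for each point.
    d2 = ((x[:, None, :] - mu[None, :, :]) ** 2).sum(-1)          # (N, K)
    log_p = -0.5 * d2 / var - 0.5 * D * np.log(2 * np.pi * var) + np.log(w)
    r = np.exp(log_p - log_p.max(1, keepdims=True))
    r /= r.sum(1, keepdims=True)
    # M step: maximum-likelihood parameters under those responsibilities.
    nk = r.sum(0)
    w_new = nk / N
    mu_new = (r.T @ x) / nk[:, None]
    d2_new = ((x[:, None, :] - mu_new[None, :, :]) ** 2).sum(-1)
    var_new = (r * d2_new).sum(0) / (D * nk)
    return w_new, mu_new, var_new

# Two well-separated clusters; a rough initialization converges quickly.
rng = np.random.default_rng(0)
x = np.vstack([rng.normal(-3, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
w, mu, var = np.array([0.5, 0.5]), np.array([[-1.0, -1.0], [1.0, 1.0]]), np.array([1.0, 1.0])
for _ in range(20):
    w, mu, var = em_step(x, w, mu, var)
```

After a few iterations the means settle on the two cluster centers, and the hard pixel-cluster memberships follow from the responsibilities.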

26 Postprocessing
Perform spatial grouping of those pixels belonging to the same color/texture cluster.
First produce a K-level image which encodes the pixel-cluster memberships.
Find the color histogram of each region (minus its boundary) using the original pixel colors (before smoothing).
For each pixel (in color bin i) on the boundary between two or more regions, reassign it to the region whose histogram value for bin i is largest.
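The spatial-grouping step can be sketched with connected-component labeling. This is an illustration only, assuming SciPy is available; `spatial_groups` is a name invented here, and the histogram-based boundary reassignment is omitted.

```python
import numpy as np
from scipy import ndimage

def spatial_groups(cluster_map):
    """Split each pixel-cluster label of a (H, W) K-level image into
    spatially connected regions, each with its own region id."""
    regions = np.zeros_like(cluster_map)
    next_id = 1
    for k in np.unique(cluster_map):
        comp, n = ndimage.label(cluster_map == k)   # 4-connected components
        for c in range(1, n + 1):
            regions[comp == c] = next_id
            next_id += 1
    return regions

# Two disconnected patches of the same cluster become two distinct regions.
cmap = np.zeros((6, 6), int)
cmap[0:2, 0:2] = 1
cmap[4:6, 4:6] = 1
regs = spatial_groups(cmap)
```

Here the background forms one region and the two same-cluster patches form two more, so same cluster label does not mean same region.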

27 Segmentation Results
Large background areas may be arbitrarily split into two regions due to the use of position in the feature vector.
The region boundaries sometimes do not follow object boundaries exactly, even when the object boundary is visually quite apparent. This occurs because the color feature is averaged across object boundaries.
The object of interest may be missed, split, or merged with other regions when it is not visually distinct.
In rare cases, a visually distinct object is simply missed. This error occurs mainly when no initial mean falls near the object's feature vectors.

35 Content-Based Image Retrieval
Group pixels into regions which are coherent in low-level properties and which generally correspond to objects or parts of objects.
Describe these regions in ways that are meaningful to the user.
Access these region descriptions, either automatically or with user intervention, to retrieve desired images.
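Matching region descriptions against a query can be sketched with a quadratic-form distance between region color histograms, where an off-diagonal similarity matrix keeps perceptually close color bins from being treated as totally different. This is an illustrative sketch, not the system's exact scoring; `region_distance` and the tiny 3-bin histograms are invented here.

```python
import numpy as np

def region_distance(h1, h2, A):
    """Quadratic-form distance d = (h1 - h2)^T A (h1 - h2) between two
    region color histograms; A[i, j] encodes similarity of bins i and j."""
    d = h1 - h2
    return float(d @ A @ d)

# With the identity matrix this reduces to a squared Euclidean distance.
A = np.eye(3)
same = region_distance(np.array([1.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]), A)
diff = region_distance(np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]), A)
```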

36 Conclusion
Our belief is that segmentation, while imperfect, is an essential first step, as the combinatorics of searching for all possible instances of a class is intractable.
A combined architecture for segmentation and recognition is needed, analogous to inference using Hidden Markov Models in speech recognition.
We cannot claim that our framework provides an ultimate solution to this central problem in computer vision.