In this paper, a new way to exploit semantic attributes for person re-identification, drawing on image processing, computer vision and pattern recognition, is proposed. First, convolutional neural networks are adopted to detect attributes. Second, the dependencies among attributes are obtained by mining association rules, and these rules are used to refine the attribute classification results. Third, a metric learning technique is used to transfer the attribute learning task to person re-identification. Finally, the approach is integrated into an appearance-based method for video-based person re-identification. Experimental results on two benchmark datasets indicate that attributes can provide improvements in both accuracy and generalization capability.
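A minimal sketch of the rule-based refinement step, assuming hypothetical attribute names, a single mined rule, and a blending `weight` parameter that the abstract does not specify:

```python
def refine_attributes(scores, rules, weight=0.5):
    """Blend each attribute's detector score with evidence implied by a
    mined association rule (antecedent -> consequent, confidence).

    scores: dict attribute -> probability from the CNN detector
    rules:  list of (antecedent, consequent, confidence) tuples
    """
    refined = dict(scores)
    for antecedent, consequent, confidence in rules:
        # Strong antecedent evidence nudges the consequent's score
        # toward the rule's confidence value.
        implied = scores[antecedent] * confidence
        refined[consequent] = (1 - weight) * refined[consequent] + weight * implied
    return refined

# Hypothetical rule: long hair frequently co-occurs with "female".
scores = {"long_hair": 0.9, "female": 0.4}
rules = [("long_hair", "female", 0.8)]
print(refine_attributes(scores, rules))  # "female" rises toward 0.56
```

This is only one plausible way to fold rule confidences back into classifier scores; the paper's exact refinement rule may differ.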

A new automated technique is presented for boundary detection by fusing fluorescence and brightfield images, and a new efficient method of obtaining the final cell boundary through Seam Carving is proposed. This approach takes advantage of the nature of the fusion process and of the shape of the pollen tube to efficiently search for the optimal cell boundary. In video segmentation, the first two frames are used to initialize the segmentation process by creating a search space based on a parametric model of the cell shape. The search space is then updated based on the locations of past segmentations and a prediction of the next segmentation.
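The Seam Carving search can be illustrated with the standard dynamic program over a cost map; this is a generic sketch, not the paper's cost function or constrained search space:

```python
def min_cost_seam(cost):
    """Minimum-cost vertical path (one column index per row) through a
    cost map, found by Seam Carving's dynamic program."""
    rows, cols = len(cost), len(cost[0])
    acc = [row[:] for row in cost]
    # Accumulate: each cell adds the cheapest of its three upper neighbors.
    for r in range(1, rows):
        for c in range(cols):
            lo, hi = max(0, c - 1), min(cols - 1, c + 1)
            acc[r][c] += min(acc[r - 1][lo:hi + 1])
    # Backtrack from the cheapest cell in the bottom row.
    seam = [min(range(cols), key=lambda c: acc[-1][c])]
    for r in range(rows - 2, -1, -1):
        c = seam[-1]
        lo, hi = max(0, c - 1), min(cols - 1, c + 1)
        seam.append(min(range(lo, hi + 1), key=lambda cc: acc[r][cc]))
    return seam[::-1]

print(min_cost_seam([[1, 9, 9], [9, 1, 9], [9, 9, 1]]))  # [0, 1, 2]
```

In the paper's setting the cost map would come from the fused images and the path would trace the cell boundary rather than an image-resizing seam.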

Presented is a novel approach to automatically generate intermediate image descriptors by exploiting concept co-occurrence patterns in the pre-labeled training set, making it possible to depict complex scene images semantically. This work is motivated by the fact that multiple concepts that frequently co-occur across images form patterns which can provide contextual cues for individual concept inference. The co-occurrence patterns were discovered as hierarchical communities by graph modularity maximization in a network whose nodes and edges represent concepts and co-occurrence relationships, respectively. A random walk process, working on the inferred concept probabilities with the discovered co-occurrence patterns, was applied to acquire the refined concept signature representation. Experiments in automatic image annotation and semantic image retrieval on several challenging datasets demonstrated the effectiveness of the proposed concept co-occurrence patterns, as well as the concept signature representation, in comparison with state-of-the-art approaches.
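The random walk refinement can be sketched as a restart-style walk over the co-occurrence graph; the restart weight `alpha`, the iteration count, and the toy three-concept graph are illustrative assumptions, not the paper's parameters:

```python
def refine_concept_scores(p0, adj, alpha=0.85, iters=50):
    """Random walk with restart over a concept co-occurrence graph.
    p0 are the initially inferred concept probabilities; adj[i][j] is
    the co-occurrence weight between concepts i and j."""
    n = len(p0)
    # Row-normalize the adjacency into transition probabilities.
    trans = []
    for row in adj:
        s = sum(row)
        trans.append([w / s if s else 0.0 for w in row])
    p = p0[:]
    for _ in range(iters):
        p = [(1 - alpha) * p0[i]
             + alpha * sum(p[j] * trans[j][i] for j in range(n))
             for i in range(n)]
    return p

# Hypothetical concepts: "sky" and "sea" co-occur; "car" is isolated.
p0 = [0.9, 0.1, 0.0]
adj = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]
print(refine_concept_scores(p0, adj))  # "sea" is boosted by "sky"
```

The walk spreads probability mass from confidently detected concepts to their frequent co-occurrers, which is the contextual-cue intuition the abstract describes.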

Biologists studied pollen tube growth to understand how internal cell dynamics affected observable structural characteristics such as cell diameter, length, and growth rate. Fluorescence microscopy was used to study the dynamics of internal proteins and ions, but this often produced images with missing parts of the pollen tube. Brightfield microscopy provided a low-cost way of obtaining structural information about the pollen tube, but the images were crowded with false edges. We proposed a dynamic segmentation fusion scheme that used both brightfield and fluorescence images of growing pollen tubes to obtain a unified segmentation. Knowledge of the image formation process was used to create an initial estimate of the location of the cell boundary. Fusing this estimate with an edge indicator function amplified desired edges and attenuated undesired ones. The cell boundary was obtained using level set evolution on the fused edge indicator function.
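The fusion idea, keeping edges near a prior boundary estimate and attenuating distant (likely false) ones, can be sketched as a distance-weighted edge map; the Gaussian weighting and `sigma` are illustrative assumptions, not the paper's exact fusion rule:

```python
import math

def fuse_edge_indicator(edge_strength, dist_to_prior, sigma=2.0):
    """Weight an edge-strength map by a Gaussian of each pixel's
    distance to the prior boundary estimate: edges near the prior are
    kept, distant edges are suppressed."""
    return [[e * math.exp(-(d * d) / (2.0 * sigma * sigma))
             for e, d in zip(erow, drow)]
            for erow, drow in zip(edge_strength, dist_to_prior)]

# Two equally strong edges: one on the prior boundary, one far away.
print(fuse_edge_indicator([[1.0, 1.0]], [[0.0, 10.0]]))
```

Level set evolution would then be driven by this fused map rather than by the raw brightfield edges.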

In this paper, we improve upon existing work with two major contributions. First, we show that a more representative reference-set contributes to better classification accuracy. To this end, we carefully adapt the K-means clustering algorithm in the feature space to select a representative reference-set. Second, in the image classification process, we propose to represent each image by measuring its betweenness centrality in a social network composed of the representative reference-set in each class, leading to a more coherent distance measure that considers the overall connectivity between the probe image and the reference-set. Extensive experimental results demonstrate that our proposed scheme achieves better performance than existing methods.
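The reference-set selection step can be sketched with a plain K-means pass that keeps the actual sample nearest each final centroid; initialization, iteration count, and the toy two-cluster data are assumptions for illustration:

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((u - v) ** 2 for u, v in zip(a, b))

def centroid(points):
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def select_reference_set(features, k, iters=20, seed=0):
    """K-means in feature space; the sample nearest each final
    centroid is kept as a reference exemplar."""
    rng = random.Random(seed)
    centers = rng.sample(features, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in features:
            clusters[min(range(k), key=lambda c: dist2(x, centers[c]))].append(x)
        centers = [centroid(cl) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    # Return real samples, not synthetic centroids.
    return [min(features, key=lambda x: dist2(x, c)) for c in centers]

feats = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
         [10.0, 10.0], [10.1, 10.0], [10.0, 10.1]]
print(select_reference_set(feats, 2))  # one exemplar per cluster
```

Returning nearest samples rather than centroids keeps the reference-set composed of genuine images, which the betweenness-centrality graph construction requires.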

A reference-based algorithm for scene image categorization is presented in this letter. In addition to using a reference-set for image representation, we also associate the reference-set with the training data through sparse codes during the dictionary learning process. The reference-set is combined with the reconstruction error to form a unified objective function. The optimal solution is efficiently obtained using the K-SVD algorithm. After dictionaries are constructed, Locality-constrained Linear Coding (LLC) features of images are extracted. Then, we represent each image feature vector using the similarities between the image and the reference-set, leading to a significant reduction of the dimensionality in the feature space. Experimental results demonstrate that our method achieves outstanding performance.
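The similarity-based re-encoding can be sketched as follows; the Gaussian kernel and `gamma` are illustrative choices, not the letter's exact similarity measure:

```python
import math

def reference_representation(feature, reference_set, gamma=0.5):
    """Re-encode an image feature as its similarities to the reference
    exemplars: dimensionality drops to |reference_set|."""
    return [math.exp(-gamma * sum((f - r) ** 2 for f, r in zip(feature, ref)))
            for ref in reference_set]

refs = [[0.0, 0.0], [3.0, 4.0]]       # hypothetical reference features
print(reference_representation([0.0, 0.0], refs))
```

Whatever the original feature dimensionality (LLC codes are typically thousands of dimensions), the new representation has one entry per reference image.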

Visual codebook based quantization of robust appearance descriptors extracted from local image patches is an effective means of capturing image statistics for texture analysis and natural scene classification. We optimize the codebook based on the newly proposed statistics of word activation forces (WAFs). Currently, codebooks are typically created from a set of training images using a clustering algorithm. However, these codebooks are often functionally limited due to redundancy. We show that WAFs can remove this redundancy efficiently. In experiments, the proposed method achieved state-of-the-art performance on the Caltech-101, fifteen natural scene categories, and VOC2007 databases. The optimization method also offers insights into the success of several recently proposed image classification approaches, including vector quantization (VQ) coding in Spatial Pyramid Matching (SPM), sparse coding SPM (ScSPM), and Locality-constrained Linear Coding (LLC).

We developed a new symmetry-integrated region-based method to obtain improved image segmentation. The method constructs a symmetry token that can be flexibly embedded into segmentation cues. The method has been investigated experimentally on challenging natural images and images containing man-made objects. It is shown that the proposed method outperforms current segmentation methods that do and do not exploit symmetry. A thorough experimental analysis indicates that symmetry plays an important role as a segmentation cue, in conjunction with other attributes such as color and texture.

Symmetry is an important cue for machine perception
that involves high-level knowledge of image components.
Unlike most of the previous research that only computes
symmetry in an image, this research integrates symmetry with
image segmentation to improve the segmentation
performance. The symmetry integration is used to optimize
both the segmentation and the symmetry of regions
simultaneously. Interest points are initially extracted from an image and are further refined to detect the symmetry axis. A symmetry affinity matrix is used explicitly
as a constraint in a region growing algorithm in order to
refine the symmetry of segmented regions. Experimental
results and comparisons from a wide domain of images
indicate a promising improvement by symmetry integrated
image segmentation compared to other image segmentation
methods that do not exploit symmetry.
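A minimal sketch of a per-pixel symmetry affinity given a detected vertical axis; here low values mark symmetric pixels, intensities are assumed in [0, 255], and the paper's affinity additionally incorporates feature terms not modeled here:

```python
def symmetry_affinity(image, axis_col):
    """Per-pixel symmetry cost: low where a pixel's intensity matches
    its mirror across the vertical symmetry axis at column axis_col.
    Pixels whose mirror falls outside the image keep the maximum cost."""
    rows, cols = len(image), len(image[0])
    aff = [[1.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            mc = 2 * axis_col - c          # mirrored column index
            if 0 <= mc < cols:
                aff[r][c] = abs(image[r][c] - image[r][mc]) / 255.0
    return aff

# A perfectly symmetric row about column 1 yields zero cost everywhere.
print(symmetry_affinity([[10, 20, 10]], 1))
```

A region growing algorithm can then penalize merging pixels with high affinity cost, which is the constraint role the abstract describes.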

This paper presents a fully automated symmetry-integrated brain injury detection method for magnetic resonance imaging (MRI) sequences. Current injury detection methods often require a large amount of training data or a prior model that is applicable only to a limited domain of brain slices, and suffer from low computational efficiency and robustness. Our proposed approach can detect injuries from a wide variety of brain images since it makes use of symmetry as a dominant feature, and does not rely on any prior models or training phases. The approach consists of the following steps: (a) symmetry integrated segmentation of brain slices based on the symmetry affinity matrix, (b) computation of kurtosis and skewness of the symmetry affinity matrix to find potential asymmetric regions, (c) clustering of the pixels in the symmetry affinity matrix using a 3D relaxation algorithm, (d) fusion of the results of (b) and (c) to obtain refined asymmetric regions, (e) unsupervised classification, using a Gaussian mixture model, of potential asymmetric regions as the set of regions corresponding to brain injuries. Experiments are carried out to demonstrate the efficacy of the approach.
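Step (b) can be sketched with plain sample moments; the skewness threshold here is an illustrative assumption, not the paper's value:

```python
def moments(values):
    """Sample skewness and excess kurtosis of a list of values."""
    n = len(values)
    mu = sum(values) / n
    sd = (sum((v - mu) ** 2 for v in values) / n) ** 0.5
    if sd == 0:
        return 0.0, 0.0
    skew = sum(((v - mu) / sd) ** 3 for v in values) / n
    kurt = sum(((v - mu) / sd) ** 4 for v in values) / n - 3.0
    return skew, kurt

def flag_asymmetric(affinity_rows, skew_thresh=1.0):
    """Indices of rows of the symmetry affinity matrix whose value
    distribution is strongly skewed (a cue for potential asymmetry)."""
    return [i for i, row in enumerate(affinity_rows)
            if abs(moments(row)[0]) > skew_thresh]

# A skewed row (one large outlier) is flagged; a balanced row is not.
print(flag_asymmetric([[1.0, 2.0, 3.0, 4.0, 5.0],
                       [0.0, 0.0, 0.0, 0.0, 10.0]]))  # [1]
```

The intuition: a healthy, symmetric slice yields affinity values with a balanced distribution, while a localized injury skews it.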

We present an approach to automatic image segmentation, in which user-selected sets of examples and counter-examples supply information about the specific segmentation problem. In our approach, image
segmentation is guided by a genetic algorithm which learns the appropriate subset and spatial combination
of a collection of discriminating functions, associated with image features. The genetic algorithm encodes
discriminating functions into a functional template representation, which can be applied to the input image
to produce a candidate segmentation.

We used genetic programming (GP) with smart crossover and smart mutation to discover integrated feature agents, evolved from combinations of primitive image processing operations, that extract regions-of-interest (ROIs) in remotely sensed images. Smart
crossover and smart mutation identify and keep the effective components of integrated operators called
"agents" and significantly improve the efficiency of GP. Our experimental results show that compared to
normal GP, our GP algorithm with smart crossover and smart mutation can find good agents more quickly
during training to effectively extract the regions-of-interest and the learned agents can be applied to extract
ROIs in other similar images.

In this paper, the problem of separating moving cast shadows from moving objects in an outdoor environment is addressed. Unlike previous work, our method does not use any geometric information. Our approach is based on a spatiotemporal albedo normalization test and a dichromatic reflection model. The physics-based model is used in both the estimation and verification phases. We provide results for several different video sequences representing a variety of materials and shadows, and achieve excellent results in distinguishing moving objects from their shadows. The results indicate that our approach is robust to a variety of background and foreground materials and to varying illumination conditions.
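The core of an albedo-normalization test can be sketched as a ratio check: in a cast shadow the frame-to-background intensity ratio drops below 1 but stays roughly constant over a patch, whereas a foreground object changes the ratio erratically. All thresholds below are illustrative assumptions, not the paper's calibrated values:

```python
def is_shadow(frame_patch, bg_patch, lo=0.4, hi=0.95, var_thresh=0.01):
    """Sketch of a spatiotemporal albedo test on one patch: the
    frame/background ratio must be uniformly darkened (mean in
    [lo, hi]) and nearly constant (variance below var_thresh)."""
    ratios = [f / b for f, b in zip(frame_patch, bg_patch) if b > 0]
    mu = sum(ratios) / len(ratios)
    var = sum((r - mu) ** 2 for r in ratios) / len(ratios)
    return lo <= mu <= hi and var < var_thresh

background = [100.0, 100.0, 100.0, 100.0]
print(is_shadow([60.0, 61.0, 59.0, 60.0], background))   # uniform darkening
print(is_shadow([20.0, 150.0, 90.0, 10.0], background))  # object-like patch
```

The dichromatic reflection model in the paper refines this further by modeling how illumination color, not just intensity, changes under shadow.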

We present a general approach to image segmentation and object recognition that can adapt the
image segmentation algorithm parameters to the changing environmental conditions.
The edge-border coincidence measure is first used
as reinforcement for segmentation evaluation to reduce computational expenses associated with
model matching during the early stage of adaptation. This measure alone, however, cannot reliably
predict the outcome of object recognition. Therefore, it is used in conjunction with model matching.
Results are presented for both indoor and
outdoor color images where the performance improvement over time is shown for both image segmentation
and object recognition.
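The edge-border coincidence measure can be sketched as a simple set overlap between detected edge pixels and segmentation region borders; representing pixels as coordinate tuples is an implementation assumption:

```python
def edge_border_coincidence(edge_pixels, border_pixels):
    """Fraction of detected edge pixels that lie on segmentation
    region borders: a cheap proxy for segmentation quality that
    avoids a full model-matching pass."""
    edges = set(edge_pixels)
    if not edges:
        return 0.0
    return len(edges & set(border_pixels)) / len(edges)

edges = [(0, 0), (0, 1), (1, 0), (1, 1)]
borders = [(0, 0), (1, 1), (5, 5)]
print(edge_border_coincidence(edges, borders))  # 0.5
```

Because it only counts overlaps, the measure is fast enough for early-stage reinforcement, but as the abstract notes it must later be combined with model matching.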

In this paper, we present an approach, to image segmentation in which user selected sets of examples and counter-examples supply information about the specific segmentation problem. Image segmentation is guided by a genetic algorithm, which learns the appropriate subset and spatial combination of a collection of discriminating functions, associated with image features. The genetic algorithm encodes discriminating functions into a functional template representation, which can be applied to the input image to produce a candidate segmentation. The quality of each segmentation is evaluated within the genetic algorithm, by a comparison of two physics-based techniques for region growing and edge detection. Experimental results on real SAR imagery demonstrate that evolved segmentations are consistently better than segmentations derived from the Bayesian best single feature.

The system presented here achieves robust performance by using reinforcement learning to induce a mapping from input images to corresponding segmentation parameters. This is accomplished by using the confidence level of model matching as a reinforcement signal for a team of learning automata to search for segmentation parameters during training. The use of the recognition algorithm as part of the evaluation function for image segmentation gives rise to significant improvement of the system performance by automatic generation of recognition strategies. The system is verified through experiments on sequences of indoor and outdoor color images with varying external conditions.
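A minimal sketch of one learning automaton from such a team, using a linear reward-inaction update; the action set, learning rate, and reward function below are illustrative assumptions:

```python
import random

def train_automaton(evaluate, choices, rounds=200, lr=0.1, seed=0):
    """Linear reward-inaction automaton: sample an action from the
    current probability vector, receive a reinforcement in [0, 1]
    (e.g. a model-matching confidence), and move probability mass
    toward actions that earn reward."""
    rng = random.Random(seed)
    probs = [1.0 / len(choices)] * len(choices)
    for _ in range(rounds):
        i = rng.choices(range(len(choices)), weights=probs)[0]
        reward = evaluate(choices[i])
        # Reward-inaction: zero reward leaves the probabilities unchanged.
        probs = [p + lr * reward * ((1.0 if j == i else 0.0) - p)
                 for j, p in enumerate(probs)]
    return choices[max(range(len(choices)), key=lambda j: probs[j])]

# Hypothetical: only the parameter value 0.9 yields recognition success.
best = train_automaton(lambda x: 1.0 if x == 0.9 else 0.0, [0.1, 0.5, 0.9])
print(best)  # 0.9
```

In the full system, one automaton per segmentation parameter would be trained, with the recognition confidence serving as the shared reinforcement signal.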

Generally, object recognition systems are open loop with no feedback between levels and assuring their
robustness is a key challenge in computer vision and pattern recognition research. A robust closed-loop
system based on “delayed” reinforcement learning is introduced in this paper. The parameters of a
multilevel system employed for model-based object recognition are learned.
The approach systematically controls feedback in
a multilevel vision system and shows promise in approaching a long-standing problem in the field of
computer vision and pattern recognition.

A general approach to image segmentation and object recognition that learned a mapping from images with varying properties to segmentation algorithm parameters is presented. The mapping was built using a reinforcement learning algorithm that was based on a team of generalized stochastic learning automata and operated separately in a global or local manner on an image. The edge-border coincidence was first used as an immediate reinforcement to reduce computational expenses associated with model matching during the early stage of the learning process. Since this measure could not reliably predict the outcome of object recognition, it was used in conjunction with model matching that provided optimal segmentation evaluation in a closed-loop object recognition system. Results are presented for both indoor and outdoor color images where the performance improvement over time is shown for both image segmentation and object recognition.

Automated terrain analysis is required for many practical applications, such as outdoor navigation, image exploitation, remote sensing, reconnaissance, and surveillance. A hierarchical approach to analyzing multispectral (MS) imagery for autonomous land vehicle navigation is presented. The approach integrates several strategies to label various terrain classes in images acquired using twelve spectral bands in the visible and near-infrared spectrum. At the low (pixel) level, it combines texture gradient results from specifically selected channels by varying the size of gradient operators and performs multi-thresholding and relaxation-based edge linking operations to obtain robust closed region boundaries. At the high (symbolic) level, it makes use of spectral, locational, and relational constraints among regions to achieve accurate terrain image interpretation.

An adaptive approach is presented for the important image processing problem of image segmentation that relies on learning from experience to adapt and improve segmentation performance. The adaptive image segmentation system incorporated a feedback loop consisting of a machine learning subsystem, an image segmentation algorithm, and an evaluation component which determined segmentation quality. The machine learning component was based on genetic adaptation and used (separately) a pure genetic algorithm (GA) and a hybrid of GA and hill climbing (HC). When the learning subsystem was based on pure genetics, the corresponding evaluation component was based on a vector of evaluation criteria. For the hybrid case, the system employed a scalar evaluation measure which was a weighted combination of the different criteria. The multi-objective optimization demonstrated the ability of the adaptive image segmentation system to provide high-quality segmentation results in a minimal number of generations.

One of the fundamental weaknesses of computer vision systems used in practical applications was their inability to adapt the segmentation process as real-world changes occurred in the image. Presented is the first closed-loop image segmentation system which incorporated a genetic algorithm to adapt the segmentation process to changes in image characteristics caused by variable environmental conditions such as time of day, time of year, clouds, etc. The segmentation problem was formulated as an optimization problem and the genetic algorithm efficiently searched the hyperspace of segmentation parameter combinations to determine the parameter set which maximized the segmentation quality criteria. The goals of the adaptive image segmentation system were to provide continuous adaptation to normal environmental variations, to exhibit learning capabilities, and to provide robust performance when interacting with a dynamic environment. Also presented are experimental results which demonstrated learning and the ability to adapt the segmentation performance in outdoor color imagery.
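The genetic search over segmentation parameters can be sketched with a tiny real-valued GA; population size, operators, and the one-parameter fitness below are illustrative assumptions, not the system's actual configuration:

```python
import random

def ga_search(fitness, bounds, pop=20, gens=30, mut=0.2, seed=0):
    """Tiny genetic algorithm over real-valued parameters: elitist
    truncation selection, uniform crossover, Gaussian mutation."""
    rng = random.Random(seed)
    dim = len(bounds)

    def clip(v, i):
        lo, hi = bounds[i]
        return min(hi, max(lo, v))

    popn = [[rng.uniform(*bounds[i]) for i in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        # Keep the fitter half, breed the rest from it.
        elite = sorted(popn, key=fitness, reverse=True)[: pop // 2]
        children = []
        while len(children) < pop - len(elite):
            a, b = rng.sample(elite, 2)
            child = [a[i] if rng.random() < 0.5 else b[i] for i in range(dim)]
            if rng.random() < mut:
                i = rng.randrange(dim)
                child[i] = clip(child[i] + rng.gauss(0.0, 0.1), i)
            children.append(child)
        popn = elite + children
    return max(popn, key=fitness)

# Hypothetical quality surface peaking at parameter value 0.6.
print(ga_search(lambda p: -(p[0] - 0.6) ** 2, [(0.0, 1.0)]))
```

In the adaptive system, `fitness` would be the segmentation quality criteria evaluated on the current image, so the search re-adapts whenever conditions change.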

An approach for image segmentation that relied on learning from experience to adapt and improve the segmentation performance is presented. The adaptive image segmentation system incorporated a feedback loop that consisted of a machine learning subsystem, an image segmentation algorithm, and an evaluation component which determined segmentation quality. The machine learning component was based on genetic adaptation and used (separately) a pure genetic algorithm (GA) and a hybrid of GA and hill climbing (HC). When the learning subsystem was based on pure genetics, the corresponding evaluation component was based on a vector of evaluation criteria. For the hybrid case, the system employed a scalar evaluation measure which was a weighted combination of the different criteria. Experimental results for pure genetic and hybrid search methods are presented using a representative database of outdoor TV imagery.

One of the fundamental weaknesses of computer vision systems used in outdoor applications was their inability to adapt the segmentation process as real-world changes occurred in the image. We present a closed-loop image segmentation system that incorporated a genetic algorithm to adapt the segmentation process to changes in image characteristics caused by variable environmental conditions. The genetic algorithm efficiently searched the hyperspace of segmentation parameter combinations to determine the parameter set which maximized the segmentation quality criteria. A summary of the experimental results that demonstrated the ability to perform adaptive image segmentation and to learn from experience using a collection of outdoor color imagery is presented.

Image segmentation is a crucial part of machine vision applications. Presented is a system that performs real-time image segmentation using a VLSI chip based on a gradient relaxation algorithm and designed using the Path Programmable Logic design methodology developed at the University of Utah. The system design considerations, system specifications, and an input/output format for the chip are discussed. The chip design, which uses a pipelined methodology to achieve real-time performance with a compact VLSI layout, is given. The implementation of the segmentation system is presented, and the segmentation chip and the overall system are evaluated with regard to real-time performance and segmentation results.

The use of gray scale intensities together with edge information in a forward-looking infrared (FLIR) image is presented to obtain a precise and accurate segmentation of a target. A model of FLIR images based on gray scale and edge information was incorporated in a gradient relaxation technique which explicitly maximized a criterion function based on the inconsistency and ambiguity of classification of pixels with respect to their neighbors. Four variations of the basic technique were considered which provided automatic selection of thresholds to segment FLIR images. A comparison of these methods is discussed and several examples of segmentation of ship images are given.

A simple and computationally efficient approach to image segmentation via recursive region splitting and merging is presented. Unlike other techniques, the splitting criterion was based on a generalization of a two-class gradient relaxation method, and merging used a test of mean gray level equivalence for adjacent regions. The technique is illustrated with results for both synthetic and natural scenes.
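The split-and-merge control flow can be sketched in one dimension: split recursively while a segment's intensity variance is high, then merge adjacent segments with near-equal mean gray level. The variance-based split test and both thresholds are illustrative stand-ins for the paper's relaxation-based criterion:

```python
def split_merge(signal, split_thresh=20.0, merge_thresh=5.0):
    """1-D sketch of recursive region splitting and merging.
    Returns a list of (start, end) half-open segment intervals."""
    def var(seg):
        mu = sum(seg) / len(seg)
        return sum((v - mu) ** 2 for v in seg) / len(seg)

    def split(lo, hi):
        # Split a segment in half while it is inhomogeneous.
        if hi - lo <= 1 or var(signal[lo:hi]) < split_thresh:
            return [(lo, hi)]
        mid = (lo + hi) // 2
        return split(lo, mid) + split(mid, hi)

    regions = split(0, len(signal))
    # Merge adjacent regions whose mean gray levels are nearly equal.
    merged = [regions[0]]
    for lo, hi in regions[1:]:
        plo, phi = merged[-1]
        m1 = sum(signal[plo:phi]) / (phi - plo)
        m2 = sum(signal[lo:hi]) / (hi - lo)
        if abs(m1 - m2) < merge_thresh:
            merged[-1] = (plo, hi)
        else:
            merged.append((lo, hi))
    return merged

print(split_merge([10, 10, 10, 10, 100, 100, 100, 100]))  # [(0, 4), (4, 8)]
```

On 2-D images the same idea applies with quadrant splits and region-adjacency merges; the abstract's contribution is the relaxation-based split criterion rather than this generic scaffold.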

An approach to image segmentation via recursive region splitting is presented. The kernel of the proposed segmentation was based on the two-class relaxation technique. An evaluation of this relaxation algorithm was made with respect to the signal-to-noise ratio, region size, and contrast of the objects present in the image. This established the validity of the two-class segmentation technique for segmenting the objects of interest in a multi-class image, when applied on a local basis and in a recursive manner. The segmentation is analyzed, and its performance on a natural scene is presented.

A gradient relaxation method based on maximizing a criterion function was studied and compared
to the nonlinear probabilistic relaxation method for the purpose of segmentation of images having
unimodal distributions. Although both methods provided comparable segmentation results, the gradient
method had the additional advantage of providing control over the relaxation process by choosing
three parameters which could be tuned to obtain the desired segmentation results at a faster rate.

A gradient relaxation method based on maximizing a criterion function is presented for the purpose of segmentation of images having almost unimodal distributions. The method provided control over the relaxation process by choosing three parameters which could be tuned to obtain the desired segmentation results.