
Special Issue Information

Dear Colleagues,

Color is one of the most important and fascinating attributes of the natural environment. Research on color is becoming more and more prevalent in image processing and computer vision, even though many models are still designed for grayscale pictures and their extension to color images is not a trivial task. In fact, the intrinsically multidisciplinary character of color makes it difficult to model, at both the perceptual and the computational or mathematical level.

The intent of this Special Issue is to provide a framework where scientists in several different disciplines related to color can find a place to illustrate their ideas and results.

This Special Issue is primarily focused on the following topics; however, we encourage all submissions related to color in imaging:

Computational color vision models

Perceptually-inspired color image and video processing

Variational and patch-based techniques applied to color images

Color data compression and encoding

Color image/video indexing and retrieval

Color enhancement

Color constancy and saliency

Color texture

Color image and video watermarking

Color image/video quality assessment

Multispectral imaging

Geometry of color spaces

Interactions between color science and other disciplines such as art, medicine, psychology, and so on

Color imaging and technology for material appearance

Color and contrast measures

Statistics of natural images in color

High dynamic range imaging in color

Prof. Dr. Edoardo Provenzi, Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Segmentation of regions of interest is an important pre-processing step in many colour image analysis procedures. Similarly, segmentation of plant objects in digital images is an important preprocessing step for effective phenotyping by image analysis. In this paper, we present results of a statistical analysis to establish the respective abilities of different colour space representations to detect plant pixels and separate them from background pixels. Our hypothesis is that the colour space representation for which the separation of the distributions representing object and background pixels is maximized is the best for the detection of plant pixels. The two pixel classes are modelled by Gaussian Mixture Models (GMMs). In our statistical modelling we make no prior assumptions on the number of Gaussians employed. Instead, a constant-bandwidth mean-shift filter is used to cluster the data, with the number of clusters, and hence the number of Gaussians, being automatically determined. We have analysed the following representative colour spaces: RGB, rgb, HSV, YCbCr and CIE Lab. We have analysed the colour space features from a two-class variance ratio perspective and compared the results of our model with this metric. The dataset for our empirical study consisted of 378 digital images (and their manual segmentations) of a variety of plant species: Arabidopsis, tobacco, wheat, and rye grass, imaged under different lighting conditions, in either indoor or outdoor environments, and with either controlled or uncontrolled backgrounds. We found that the best segmentation of plants is achieved using the HSV colour space. This is supported by measures of the Earth Mover's Distance (EMD) between the GMM distributions of plant and background pixels.
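The comparison at the heart of the abstract above can be illustrated with a toy computation: the colour channel whose plant and background distributions are farthest apart under the Earth Mover's Distance is the best candidate for segmentation. The samples below are synthetic stand-ins, not the paper's data, and the EMD is computed directly on the raw samples rather than on fitted GMMs for brevity.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Hypothetical 1-D channel samples in [0, 1]: plant hues cluster tightly
# around green, while background hues are broadly spread.
plant_hue = rng.normal(0.33, 0.03, 5000)
background_hue = rng.normal(0.10, 0.12, 5000)

# Earth Mover's Distance between the two empirical distributions:
# a larger EMD means the channel separates the two pixel classes better.
emd = wasserstein_distance(plant_hue, background_hue)
```

In the paper the two classes are modelled by GMMs whose component counts come from mean-shift clustering; repeating this score for each candidate colour space and picking the maximum mirrors the selection criterion described above.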

In this work, through a phenomenological analysis, we studied the perception of the chromatic illusion and illusoriness. The necessary condition for an illusion to occur is the discovery of a mismatch/disagreement between the geometrical/physical domain and the phenomenal one. The illusoriness is instead a phenomenal attribute related to a sense of strangeness, deception, singularity, mendacity, and oddity. The main purpose of this work is to study the phenomenology of chromatic illusion vs. illusoriness, which is useful for shedding new light on the no-man’s land between “sensory” and “cognitive” processes that have not been fully explored. Some basic psychological and biological implications for living organisms are deduced.

Colorization of gray-scale images relies on prior color information. Exemplar-based methods use a color image as the source of such information. Then the colors of the source image are transferred to the gray-scale target image. In the literature, this transfer is mainly guided by texture descriptors. Face images usually contain little texture, so the common approaches frequently fail. In this paper, we propose a new method that takes the geometric structure of the images, rather than their texture, into account, making it more reliable for faces. Our approach is based on image morphing and relies on the YUV color space. First, a correspondence mapping between the luminance Y channel of the color source image and the gray-scale target image is computed. This mapping is based on the time-discrete metamorphosis model suggested by Berkels, Effland and Rumpf. We provide a new finite difference approach for the numerical computation of the mapping. Then, the chrominance U,V channels of the source image are transferred via this correspondence map to the target image. A possible postprocessing step by a variational model is developed to further improve the results. To preserve contrast, special attention is paid to making the postprocessing unbiased. Our numerical experiments show that our morphing-based approach clearly outperforms state-of-the-art methods.

Color inconsistency often exists between the images to be stitched and will reduce the visual quality of the stitching results. Color transfer plays an important role in image stitching. This kind of technique can produce corrected images that are color-consistent. This paper presents a color transfer approach via histogram specification and global mapping. The proposed algorithm can make images share the same color style and obtain color consistency. There are four main steps in this algorithm. Firstly, overlapping regions between a reference image and a test image are obtained. Secondly, an exact histogram specification is conducted for the overlapping region in the test image using the histogram of the overlapping region in the reference image. Thirdly, a global mapping function is obtained by minimizing color differences with an iterative method. Lastly, the global mapping function is applied to the whole test image to produce a color-corrected image. Both synthetic and real datasets were tested. The experiments demonstrate that the proposed algorithm outperforms the compared methods both quantitatively and qualitatively.
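The exact histogram specification in the second step can be sketched as a rank-based remapping: sort the test pixels and assign them the sorted reference values. The helper below assumes equally sized flat pixel arrays and is only an illustration of the idea, not the paper's implementation.

```python
import numpy as np

def histogram_specification(test, ref):
    """Remap `test` so its value distribution exactly matches `ref`
    (rank-based exact histogram specification; assumes equal sizes)."""
    order = np.argsort(test, kind="stable")  # ranks of the test pixels
    out = np.empty_like(test)
    out[order] = np.sort(ref)                # sorted ref values, by rank
    return out

src = np.array([5.0, 1.0, 3.0, 9.0])
ref = np.array([10.0, 20.0, 30.0, 40.0])
matched = histogram_specification(src, ref)  # → [30., 10., 20., 40.]
```

The smallest test pixel receives the smallest reference value, and so on, so the output histogram equals the reference histogram while the pixel ordering of the test image is preserved.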

Previously, we presented two color mapping methods for the application of daytime colors to fused nighttime (e.g., intensified and longwave infrared or thermal (LWIR)) imagery. These mappings not only impart a natural daylight color appearance to multiband nighttime images but also enhance their contrast and the visibility of otherwise obscured details. As a result, it has been shown that these colorizing methods lead to an increased ease of interpretation, better discrimination and identification of materials, faster reaction times and ultimately improved situational awareness. A crucial step in the proposed coloring process is the choice of a suitable color mapping scheme. When both daytime color images and multiband sensor images of the same scene are available, the color mapping can be derived from matching image samples (i.e., by relating color values to sensor output signal intensities in a sample-based approach). When no exact matching reference images are available, the color transformation can be derived from the first-order statistical properties of the reference image and the multiband sensor image. In the current study, we investigated new color fusion schemes that combine the advantages of both methods (i.e., the efficiency and color constancy of the sample-based method with the ability of the statistical method to use the image of a different but somewhat similar scene as a reference image), using the correspondence between multiband sensor values and daytime colors (sample-based method) in a smooth transformation (statistical method). We designed and evaluated three new fusion schemes that focus on (i) a closer match with the daytime luminances; (ii) an improved saliency of hot targets; and (iii) an improved discriminability of materials. We performed both qualitative and quantitative analyses to assess the weak and strong points of all methods.
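The "statistical method" referenced above, deriving a color transformation from first-order statistics of a reference image, can be sketched as a Reinhard-style per-channel mean/standard-deviation match. This is a simplified stand-in for the paper's fusion schemes, operating directly in RGB.

```python
import numpy as np

def stat_color_transfer(src, ref):
    """Shift/scale each channel of `src` to match the mean and standard
    deviation of `ref` (first-order statistical colour transfer)."""
    src = src.astype(float)
    ref = ref.astype(float)
    out = np.empty_like(src)
    for c in range(3):
        s_mu, s_sd = src[..., c].mean(), src[..., c].std() + 1e-6
        r_mu, r_sd = ref[..., c].mean(), ref[..., c].std()
        out[..., c] = (src[..., c] - s_mu) / s_sd * r_sd + r_mu
    return out

rng = np.random.default_rng(1)
src = rng.uniform(0, 255, (8, 8, 3))   # synthetic multiband "sensor" image
ref = rng.uniform(50, 200, (8, 8, 3))  # synthetic daytime reference
corrected = stat_color_transfer(src, ref)
```

Because only the two first moments of the reference are used, the reference may depict a different but similar scene, which is exactly the flexibility the abstract attributes to the statistical method.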

Mobile change detection systems allow for acquiring image sequences on a route of interest at different time points and display changes on a monitor. For the display of color images, a processing approach is required to enhance details, to reduce lightness/color inconsistencies along each image sequence as well as between corresponding image sequences due to the different illumination conditions, and to determine colors with natural appearance. We have developed a real-time local/global color processing approach for local contrast enhancement and lightness/color consistency, which processes images of the different sequences independently. Our approach combines the center/surround Retinex model and the Gray World hypothesis using a nonlinear color processing function. We propose an extended gain/offset scheme for Retinex to reduce the halo effect on shadow boundaries, and we employ stacked integral images (SII) for efficient Gaussian convolution. By applying the gain/offset function before the color processing function, we avoid color inversion issues, compared to the original scheme. Our combined Retinex/Gray World approach has been successfully applied to pairs of image sequences acquired on outdoor routes for change detection, and an experimental comparison with previous Retinex-based approaches has been carried out.
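A minimal single-scale center/surround Retinex combined with a Gray World mean equalization can be sketched as follows. The extended gain/offset scheme, the stacked-integral-image convolution and the paper's nonlinear color processing function are omitted, and the additive log-domain Gray World step is an illustrative simplification of the multiplicative original.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def retinex_gray_world(img, sigma=30.0, gain=1.0, offset=0.0):
    """Single-scale center/surround Retinex per channel, followed by a
    Gray World equalization of the channel means (illustrative sketch)."""
    img = img.astype(float) + 1.0                        # avoid log(0)
    out = np.empty_like(img)
    for c in range(img.shape[2]):
        surround = gaussian_filter(img[..., c], sigma)   # Gaussian surround
        out[..., c] = gain * (np.log(img[..., c]) - np.log(surround)) + offset
    # Gray World hypothesis: make all channel means equal (additive, since
    # we are in the log domain here)
    means = out.reshape(-1, img.shape[2]).mean(axis=0)
    return out - means + means.mean()

img = np.linspace(0, 255, 16 * 16 * 3).reshape(16, 16, 3)
enhanced = retinex_gray_world(img)
```

The log-ratio against the Gaussian surround provides the local contrast enhancement, while the Gray World step removes the per-channel cast so that corresponding sequences acquired under different illumination become more consistent.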

In this paper, we apply the quaternion framework for color images to a fragile watermarking algorithm with the objective of multimedia integrity protection (Quaternion Karhunen-Loève Transform Fragile Watermarking (QKLT-FW)). The use of quaternions to represent pixels allows us to consider the color information in a holistic and integrated fashion. We stress that, by taking advantage of the host image's quaternion representation, we extract complex features that are able to improve the embedding and verification of fragile watermarks. The algorithm, based on the Quaternion Karhunen-Loève Transform (QKLT), embeds a binary watermark into some QKLT coefficients representing a host image in a secret frequency space: the QKLT basis images are computed from a secret color image used as a symmetric key. A computational intelligence technique (i.e., a genetic algorithm) is employed to modify the host image pixels in such a way that the watermark is contained in the protected image. The sensitivity to image modifications is then tested, showing very good performance.
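The pixel representation that quaternion frameworks build on can be illustrated in a few lines: an RGB triplet is encoded as a pure quaternion so that the three channels are transformed jointly rather than channel by channel. This is background for the representation only, not the QKLT-FW algorithm itself.

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

# An RGB pixel as a pure quaternion (0, R, G, B)
pixel = np.array([0.0, 0.8, 0.4, 0.2])

# A holistic operation on the color vector: rotation by a unit
# quaternion q, computed as q * p * conj(q)
axis = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)
theta = np.pi / 6
q = np.concatenate([[np.cos(theta / 2)], np.sin(theta / 2) * axis])
q_conj = q * np.array([1, -1, -1, -1])
rotated = qmul(qmul(q, pixel), q_conj)
```

The result is again a pure quaternion with the same norm: the color vector is rotated as a single geometric object, which is the sense in which quaternion methods treat color "in a holistic and integrated fashion".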

Texture classification has a long history in computer vision. In the last decade, the strong affirmation of deep learning techniques in general, and of convolutional neural networks (CNNs) in particular, has allowed for a drastic improvement in the accuracy of texture recognition systems. However, their performance may be dampened by the fact that texture images are often characterized by color distributions that are unusual with respect to those seen by the networks during their training. In this paper, we show how suitable color balancing models allow for a significant improvement in texture-recognition accuracy for many CNN architectures. The feasibility of our approach is demonstrated by the experimental results obtained on the RawFooT dataset, which includes texture images acquired under several different lighting conditions.

A tuning method was proposed for automatic lighting (auto-lighting) algorithms derived from the steepest descent and conjugate gradient methods. The auto-lighting algorithms maximize the image quality of industrial machine vision by adjusting multiple-color light-emitting diodes (LEDs), usually called color mixers. Searching for the driving condition that achieves maximum sharpness influences image quality. In most inspection systems, a single-color light source is used, and an equal step search (ESS) is employed to determine the maximum image quality. However, in the case of multiple-color LEDs, the number of iterations becomes large, which is time-consuming. Hence, the steepest descent (STD) and conjugate gradient (CJG) methods were applied to reduce the search time for achieving maximum image quality. The relationship between lighting and image quality is multi-dimensional, non-linear, and difficult to describe using mathematical equations; hence, the Taguchi method is practically the only way to determine the parameters of the auto-lighting algorithms. The algorithm parameters were determined using orthogonal arrays, and the candidate parameters were selected by increasing the sharpness and decreasing the number of iterations of the algorithm, on which the search time depends. The contribution of the parameters was investigated using ANOVA. After retests using the selected parameters, the image quality was almost the same as that of the best-case parameters, with a smaller number of iterations.
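The trade-off between an equal-step search and a gradient search can be illustrated with a finite-difference steepest-ascent loop over a hypothetical sharpness surface. The surface, step size and iteration count below are illustrative choices, not the paper's settings.

```python
import numpy as np

def sharpness(leds):
    """Hypothetical unimodal sharpness surface over three LED drive levels."""
    target = np.array([0.6, 0.3, 0.8])
    return 1.0 - np.sum((leds - target) ** 2)

def steepest_ascent(f, x, step=0.1, iters=200, eps=1e-3):
    """Climb f by finite-difference gradient steps, in place of an
    exhaustive equal-step search over the LED levels."""
    for _ in range(iters):
        grad = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                         for e in np.eye(len(x))])
        x = x + step * grad
    return x

best = steepest_ascent(sharpness, np.zeros(3))
```

An equal-step search over three LED channels costs a number of sharpness evaluations that grows with the cube of the grid resolution, while the gradient loop needs only a handful of evaluations per iteration; the `step`, `iters` and `eps` values play the role of the algorithm parameters that the paper tunes with the Taguchi method.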

This study describes a method for using a camera to automatically recognize the speed limits on speed-limit signs. The method consists of the following three processes: first, (1) a method of detecting the speed-limit signs with a machine learning method utilizing the local binary pattern (LBP) feature quantities as information helpful for identification; then, (2) an image processing method using the Hue, Saturation and Value (HSV) color space for extracting the speed limit numbers on the identified speed-limit signs; and finally, (3) a method for recognition of the extracted numbers using a neural network. The method of traffic sign recognition previously proposed by the author consisted of extracting geometric shapes from the sign and recognizing them based on their aspect ratios. This method cannot be used for the numbers on speed-limit signs because the numbers all have the same aspect ratios. In a study that proposed recognition of speed limit numbers using an eigenspace method, only color information was used to detect speed-limit signs in images of scenery. Because that method used only color information for detection, precise color information settings and processing to exclude everything other than the signs are necessary in an environment where many colors similar to the speed-limit signs exist, and further study of the method for sign detection is needed. This study focuses on the following three points. (1) Make it possible to detect only the speed-limit sign in an image of scenery using a single process focusing on the local patterns of speed-limit signs. (2) Make it possible to separate and extract the two-digit numbers on a speed-limit sign in cases when the two digits are incorrectly extracted as a single area due to the light environment. (3) Make it possible to identify the numbers using a neural network by focusing on three feature quantities. This study also used the proposed method with still images in order to validate it.
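The HSV-based extraction in step (2) amounts to thresholding hue, saturation and value; the sketch below masks the red band typical of a speed-limit sign's ring. The band limits and thresholds are illustrative values, not those of the study, and the per-pixel loop favors clarity over speed.

```python
import colorsys
import numpy as np

def red_mask(img):
    """Boolean mask of pixels whose HSV values fall in an (illustrative)
    red band; `img` holds RGB values in [0, 255]."""
    h, w, _ = img.shape
    mask = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            hue, sat, val = colorsys.rgb_to_hsv(*(img[i, j] / 255.0))
            # red hues wrap around 0; require enough saturation and value
            mask[i, j] = (hue < 0.05 or hue > 0.95) and sat > 0.5 and val > 0.3
    return mask

# A red pixel is kept, a green one rejected
demo = np.array([[[255.0, 0.0, 0.0], [0.0, 255.0, 0.0]]])
mask = red_mask(demo)
```

Working in HSV separates chromatic identity (hue) from illumination (value), which is why such thresholds tolerate lighting changes better than raw RGB bounds; the study then restricts digit extraction to the masked sign region.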

A large number of color image enhancement methods are based on methods for grayscale image enhancement, in which the main interest is contrast enhancement. However, since colors have three attributes (hue, saturation and intensity) rather than the single attribute of grayscale values, the naive application of methods for grayscale images to color images often produces unsatisfactory results. Conventional hue-preserving color image enhancement methods utilize histogram equalization (HE) for enhancing the contrast. However, they cannot always enhance the saturation simultaneously. In this paper, we propose a histogram specification (HS) method for enhancing the saturation in hue-preserving color image enhancement. The proposed method computes the target histogram for HS on the basis of the geometry of the RGB (red, green and blue) color space, whose shape is a cube with a unit side length. Therefore, the proposed method includes no parameters to be set by users. Experimental results show that the proposed method achieves higher color saturation than recent parameter-free methods for hue-preserving color image enhancement. As a result, the proposed method can be used as an alternative to HE in hue-preserving color image enhancement.
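The hue-preserving idea behind this family of methods can be sketched by enhancing only the intensity channel and rescaling R, G and B by the common per-pixel ratio, which leaves hue (and, up to clipping, HSV saturation) unchanged. The sketch below uses plain histogram equalization for the intensity step; it illustrates the general hue-preserving scheme, not the paper's HS method.

```python
import numpy as np

def hue_preserving_equalize(img):
    """Equalize the intensity channel, then scale R, G, B by the same
    per-pixel factor so that hue is preserved (illustrative sketch)."""
    img = img.astype(float)
    intensity = img.mean(axis=2)
    # histogram equalization of the intensity channel
    hist, bins = np.histogram(intensity, bins=256, range=(0, 255))
    cdf = hist.cumsum() / hist.sum()
    eq = np.interp(intensity, bins[:-1], cdf * 255.0)
    # common scale per pixel: multiplying all three channels by the same
    # factor does not change hue
    ratio = eq / np.maximum(intensity, 1e-6)
    return np.clip(img * ratio[..., None], 0, 255)

img = np.zeros((2, 2, 3))
img[0] = [10.0, 20.0, 30.0]       # dark, coloured row
img[1] = [110.0, 110.0, 110.0]    # bright, gray row
enhanced = hue_preserving_equalize(img)
```

Because each pixel's three channels are multiplied by one factor, the channel ratios, and hence the hue, survive the contrast change; the paper's contribution is to replace the equalization step with a parameter-free histogram specification that also raises saturation.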

The Academy of Motion Picture Arts and Sciences has been pivotal in the inception, design and later adoption of a vendor-agnostic and open framework for color management, the Academy Color Encoding System (ACES), targeting theatrical, TV and animation features, but also still photography and image preservation at large. For this reason, the Academy gathered an interdisciplinary group of scientists, technologists, and creatives to contribute to it, so that it is scientifically sound and technically advantageous in solving practical and interoperability problems in the current film production, postproduction and visual-effects (VFX) ecosystem, all while preserving and future-proofing the cinematographers' and artists' creative intent as its main objective. In this paper, a review of ACES' technical specifications is provided, along with the current status of the project and a recent use case, namely that of the first Italian production embracing an end-to-end ACES pipeline. In addition, new ACES components are introduced and a discussion is started about possible uses for the long-term preservation of color imaging in video-content heritage.