Multi-spectral images produce large datasets that require substantial storage, so compressing these data is an important problem. A common approach is to use principal components analysis (PCA) to reduce the data requirements as part of a lossy compression strategy. In this paper, we employ the fast MCD (Minimum Covariance Determinant) algorithm, a highly robust estimator of multivariate mean and covariance, to detect outlier spectra in a multi-spectral image. We then show that removing the outliers from the main dataset significantly improves the performance of PCA in spectral compression. However, since the outlier spectra are part of the image, they cannot simply be ignored. Our strategy is to cluster the outliers into a small number of groups and then compress each group separately using its own cluster-specific PCA-derived bases. Overall, we show that significantly better compression can be achieved with this approach.
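The outlier-aware compression pipeline described above can be sketched as follows. This is a minimal illustration on synthetic data, using scikit-learn's `MinCovDet` as a stand-in for fast MCD together with `KMeans` and `PCA`; the component counts, outlier cutoff, and cluster count are illustrative, not the paper's choices.

```python
import numpy as np
from sklearn.covariance import MinCovDet
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
spectra = rng.normal(size=(1000, 31))          # stand-in multi-spectral data
spectra[:20] += 5.0                            # inject some outlier spectra

# 1. Fast MCD gives a robust mean/covariance; Mahalanobis distances flag outliers.
mcd = MinCovDet(random_state=0).fit(spectra)
d2 = mcd.mahalanobis(spectra)
outlier_mask = d2 > np.quantile(d2, 0.98)      # illustrative cutoff

inliers, outliers = spectra[~outlier_mask], spectra[outlier_mask]

# 2. Compress the cleaned main set with a single PCA basis.
main_pca = PCA(n_components=5).fit(inliers)
main_codes = main_pca.transform(inliers)

# 3. Cluster the outliers and fit a separate PCA basis per cluster.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(outliers)
cluster_pcas = {k: PCA(n_components=min(2, int(np.sum(labels == k)))).fit(outliers[labels == k])
                for k in range(3)}
```

Decompression would reconstruct each inlier from `main_codes` via `main_pca.inverse_transform` and each outlier from its own cluster's basis.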

Metamer mismatching (the phenomenon that two objects matching in color under one illuminant may not match under a different illuminant) potentially has important consequences for color perception. Logvinenko et al. [PLoS ONE 10, e0135029 (2015)] show that in theory the extent of metamer mismatching can be very significant. This paper examines metamer mismatching in practice by computing the volumes of the empirical metamer mismatch bodies and comparing them to the volumes of the theoretical mismatch bodies. A set of more than 25 million unique reflectance spectra is assembled using datasets from several sources. For a given color signal (e.g., CIE XYZ) recorded under a given first illuminant, its empirical metamer mismatch body for a change to a second illuminant is computed as follows: the reflectances having the same color signal when lit by the first illuminant (i.e., reflect metameric light) are computationally relit by the second illuminant, and the convex hull of the resulting color signals then defines the empirical metamer mismatch body. The volume of these bodies is shown to vary systematically with Munsell value and chroma. The empirical mismatch bodies are compared to the theoretical mismatch bodies computed using the algorithm of Logvinenko et al. [IEEE Trans. Image Process. 23, 34 (2014)]. There are three key findings: (1) the empirical bodies are found to be substantially smaller than the theoretical ones; (2) the sizes of both the empirical and theoretical bodies show a systematic variation with Munsell value and chroma; and (3) applied to the problem of color-signal prediction, the centroid of the empirical metamer mismatch body is shown to be a better predictor of what a given color signal might become under a specified illuminant than state-of-the-art methods.
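The construction of an empirical mismatch body can be sketched as follows, on entirely synthetic spectra and sensors (the paper uses over 25 million measured reflectances and standard colorimetric data); the metamer tolerance here is illustrative.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(1)
B = 31
R = rng.uniform(size=(5000, B))               # stand-in reflectance spectra
E1, E2 = rng.uniform(0.5, 1.5, size=(2, B))   # two illuminant power spectra
S = rng.uniform(size=(B, 3))                  # stand-in sensor sensitivities

def color_signals(refl, illum):
    """Color signal of each reflectance under the given illuminant."""
    return (refl * illum) @ S

# Gather reflectances whose signals under E1 are (approximately) metameric
# to a target signal; the tolerance is purely illustrative.
sig1 = color_signals(R, E1)
target = sig1.mean(axis=0)
metamers = R[np.linalg.norm(sig1 - target, axis=1) < 1.0]

# Relight the metamers under E2; the convex hull of the resulting signals
# approximates the empirical metamer mismatch body, and its volume its size.
sig2 = color_signals(metamers, E2)
hull = ConvexHull(sig2)
```

A nonzero `hull.volume` shows directly that metameric signals under the first illuminant disperse into a body of distinct signals under the second.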

The performance of the color prediction methods CIECAM02, KSM2, Waypoint, Best Linear, MMV center, and relit color signal is compared in terms of how well they explain Logvinenko & Tokunaga’s asymmetric color matching results (“Colour Constancy as Measured by Least Dissimilar Matching,” Seeing and Perceiving, vol. 24, no. 5, pp. 407-452, 2011). In their experiment, 4 observers were asked to determine (with 3 repeats), for a given Munsell paper under a test illuminant, which of 22 other Munsell papers was least dissimilar under a match illuminant. Their use of “least-dissimilar” as opposed to “matching” is an important aspect of their experiment. Their results raise several questions. Question 1: Are observers choosing the original Munsell paper under the match illuminant? If they are, then the average (over 12 matches) color signal (i.e., cone LMS or CIE XYZ) made under a given illuminant condition should correspond to the test paper’s color signal under the match illuminant. Computation shows that the mean color signal of the matched papers is close to the color signal of the physically identical paper under the match illuminant. Question 2: Which color prediction method most closely predicts the observers’ average least-dissimilar match? Question 3: Given the variability between observers, how do individual observers compare to the computational methods in predicting the average observer matches? A leave-one-observer-out comparison shows that individual observers, somewhat surprisingly, predict the average matches of the remaining observers better than any of the above color prediction methods.

Automatic white balancing works quite well on average, but seriously fails some of the time. These failures lead to completely unacceptable images. Can the number, or severity, of these failures be reduced, perhaps at the expense of slightly poorer white balancing on average, with the overall goal being to increase the overall acceptability of a collection of images? Since the main source of error in automatic white balancing arises from misidentifying the overall scene illuminant, a new illumination-estimation algorithm is presented that minimizes the high-percentile error of its estimates. The algorithm combines illumination estimates from standard existing algorithms and chromaticity gamut characteristics of the image as features in a feature space. Illuminant chromaticities are quantized into chromaticity bins. Given a test image of a real scene, its feature vector is computed, and for each chromaticity bin, the probability of the illuminant chromaticity falling into that bin given the feature vector is estimated. The probability estimation is based on Loftsgaarden-Quesenberry multivariate density function estimation over the feature vectors derived from a set of synthetic training images. Once the probability distribution estimate for a given chromaticity channel is known, the smallest interval that is likely to contain the right answer with a desired probability (i.e., the smallest chromaticity interval whose sum of probabilities is greater than or equal to the desired probability) is chosen. The point in the middle of that interval is then reported as the chromaticity of the illuminant. Testing on a dataset of real images shows that the error at the 90th and 98th percentiles can be reduced by roughly half, with minimal impact on the mean error.
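The smallest-interval selection step can be illustrated in isolation. The bin layout, the toy distribution, and the desired probability below are assumptions for the sketch, not the paper's settings.

```python
import numpy as np

def smallest_interval_estimate(bin_centers, probs, desired=0.9):
    """Return the centre of the shortest bin interval with mass >= desired."""
    n = len(probs)
    cum = np.concatenate([[0.0], np.cumsum(probs)])
    best = None
    for i in range(n):
        for j in range(i, n):
            if cum[j + 1] - cum[i] >= desired:       # interval [i, j] has enough mass
                width = bin_centers[j] - bin_centers[i]
                if best is None or width < best[0]:
                    best = (width, 0.5 * (bin_centers[i] + bin_centers[j]))
                break                                 # larger j only widens the interval
    return best[1]

# Toy distribution over 21 chromaticity bins, peaked near 0.3.
bin_centers = np.linspace(0.0, 1.0, 21)
probs = np.exp(-0.5 * ((bin_centers - 0.3) / 0.05) ** 2)
probs /= probs.sum()
estimate = smallest_interval_estimate(bin_centers, probs, desired=0.9)
```

For this unimodal toy distribution the reported midpoint lands near the peak, as expected.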

A new two-stage illumination estimation method based on the concept of rank is presented. The method first estimates the illuminant locally in subwindows using a ranking of the digital counts in each color channel, and then combines the local subwindow estimates, again based on a ranking of the local estimates. The proposed method unifies the MaxRGB and Grayworld methods. Despite its simplicity, the performance of the method is found to be competitive with other state-of-the-art methods for estimating the chromaticity of the overall scene illumination.
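A minimal sketch of the two-stage rank-based idea follows, with illustrative percentile and window-size choices (the actual method's ranks and combination rule may differ); taking the top rank at both stages reduces to MaxRGB-like behaviour.

```python
import numpy as np

def rank_based_estimate(img, win=8, local_pct=95, global_pct=50):
    """img: (H, W, 3) array. Returns an illuminant chromaticity estimate."""
    H, W, _ = img.shape
    local_estimates = []
    for y in range(0, H - win + 1, win):
        for x in range(0, W - win + 1, win):
            patch = img[y:y + win, x:x + win].reshape(-1, 3)
            # Stage 1: a per-channel rank statistic within the subwindow.
            local_estimates.append(np.percentile(patch, local_pct, axis=0))
    # Stage 2: rank-based pooling of the local subwindow estimates.
    est = np.percentile(np.array(local_estimates), global_pct, axis=0)
    return est / est.sum()                      # report as chromaticity

rng = np.random.default_rng(2)
scene = rng.uniform(size=(64, 64, 3)) * np.array([1.0, 0.8, 0.6])  # reddish cast
est = rank_based_estimate(scene)
```

On this synthetic scene the estimated chromaticity reflects the simulated reddish illuminant ordering R > G > B.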

An important component of camera calibration is to derive a mapping from a camera’s output RGB to a device-independent color space such as CIE XYZ or sRGB [6]. Commonly, the calibration process is performed by photographing a color chart in a scene under controlled lighting and finding a linear transformation M that maps the chart’s colors from linear camera RGB to XYZ. When the XYZ values corresponding to the color chart’s patches are measured under a reference illumination, it is often assumed that the illumination across the chart is uniform when it is photographed. This simplifying assumption, however, is often violated even in such relatively controlled environments as a light booth, and it can lead to inaccuracies in the calibration. The problem of color calibration under non-uniform lighting was investigated by Funt and Bastani [2, 3]. Their method, however, uses a numerical optimizer, which can be complex to implement on some devices and has a relatively high computational cost. Here, we present an irradiance-independent camera color calibration scheme based on least-squares regression on the unit sphere that can be implemented easily, computed quickly, and performs comparably to the previously suggested technique.
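The irradiance independence comes from working with directions rather than magnitudes: scaling a patch's RGB by its local irradiance leaves its unit vector unchanged. A minimal sketch of least-squares regression between unit vectors on synthetic data (not necessarily the paper's exact formulation):

```python
import numpy as np

def unit_sphere_calibration(rgb, xyz):
    """rgb, xyz: (N, 3) corresponding measurements. Returns a 3x3 matrix M
    such that a camera row vector r maps to XYZ direction approximately r @ M."""
    r = rgb / np.linalg.norm(rgb, axis=1, keepdims=True)   # project onto unit sphere
    x = xyz / np.linalg.norm(xyz, axis=1, keepdims=True)
    M, *_ = np.linalg.lstsq(r, x, rcond=None)              # ordinary least squares
    return M

rng = np.random.default_rng(3)
true_M = np.array([[0.9, 0.1, 0.0], [0.2, 0.7, 0.1], [0.0, 0.1, 0.8]])
rgb = rng.uniform(0.1, 1.0, size=(24, 3))                  # 24 chart patches
shading = rng.uniform(0.3, 2.0, size=(24, 1))              # non-uniform irradiance
xyz = rgb @ true_M

M_uniform = unit_sphere_calibration(rgb, xyz)
M_shaded = unit_sphere_calibration(rgb * shading, xyz)     # per-patch shading applied
```

Because each row is normalized before regression, the matrix recovered from the shaded chart coincides with the one recovered from the uniformly lit chart.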

The set of all possible cone excitation triplets from reflecting surfaces under a given illuminant forms a volume in cone excitation space known as the object-colour solid (OCS). An important task in Colour Science is to specify the precise geometry of the OCS as defined by its boundary. Schrödinger claimed that the optimal reflectances that map to the boundary of the OCS take on values of 0 or 1 only, with no more than two wavelength transitions. Although this popularly accepted assertion is, by and large, correct and holds under some restricted conditions (e.g., it holds for the CIE colour matching functions), as far as the number of transitions is concerned, it has been shown not to hold in general. As a result, the Schrödinger optimal reflectances provide only an approximation to the true OCS. For the case of dichromatic vision, we compare the true and approximate OCS by computing the set of true optimal reflectances, and find that they differ significantly.
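The Schrödinger-style optimal reflectances are easy to enumerate: step functions taking only the values 0 and 1 with at most two wavelength transitions (band-pass shapes and their band-stop complements). The sketch below samples them and maps each to a tristimulus point using random stand-in sensor curves; the convex hull of such points approximates the OCS boundary.

```python
import numpy as np

B = 31                                   # number of wavelength samples
rng = np.random.default_rng(4)
S = rng.uniform(size=(B, 3))             # stand-in sensitivities x illuminant

points = []
for i in range(B + 1):
    for j in range(i, B + 1):
        r = np.zeros(B)
        r[i:j] = 1.0                     # band-pass: 0..0 1..1 0..0
        points.append(r @ S)             # tristimulus of the band-pass
        points.append((1.0 - r) @ S)     # and of its band-stop complement
points = np.array(points)
```

Since the sensitivities are non-negative, the all-zero and all-one reflectances give the black and white extremes of the solid, and every other sampled point lies between them channel-wise.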

The perceptual correlate to hue and the stability of its representation in the coordinates of Logvinenko's illumination-invariant object-colour atlas (Logvinenko, 2009) are investigated. Logvinenko's object-colour atlas represents the colours of objects in terms of special rectangular reflectance functions defined by three parameters, α (chromatic purity), δ (spectral bandwidth) and λ (central wavelength), describing the rectangular reflectance to which an object colour is metameric. These parameters were shown to be approximate perceptual correlates of chroma, whiteness/blackness, and hue, respectively. When the illumination changes, the mapping of object colours to the rectangular atlas coordinates is subject to a phenomenon referred to as colour stimulus shift. The perceptual correlates shift as well. The problem of colour stimulus shift is exacerbated by the fact that the atlas is based on rectangular functions. This paper explores the benefits of using the Gaussian parameterization of the object-colour atlas (Logvinenko, 2012) in terms of its robustness to colour stimulus shift and in terms of how well it maps to the perceptual correlate of hue.

Metamer mismatching refers to the fact that two objects reflecting light causing identical colour signals (i.e., cone response or XYZ) under one illumination may reflect light causing non-identical colour signals under a second illumination. As a consequence of metamer mismatching, two objects appearing the same under one illuminant can be expected to appear different under the second illuminant. To investigate the potential extent of metamer mismatching, we calculated the metamer mismatching effect for 20 Munsell papers and 8 pairs of illuminants (Logvinenko & Tokunaga, 2011) using the recent method (Logvinenko, Funt, & Godau, 2012) of computing the exact metamer mismatch volume boundary. The results show that metamer mismatching is very significant for some lights. In fact, metamer mismatching was found to be so significant that it can lead to the prediction of some paradoxical phenomena, such as the possibility of 20 objects having the same colour under a neutral ("white") light dispersing into a whole hue circle of colours under a red light, and vice versa.

A hue descriptor based on Logvinenko’s illuminant-invariant object colour atlas [1] is tested in terms of how well it maps hues to the hue names found in Moroney’s Colour Thesaurus [2, 3] and how well it maps the hues of Munsell papers to their corresponding Munsell hue designators. Called the KSM hue descriptor, it correlates hue with the central wavelength of a Gaussian-shaped reflectance function. An important feature of this representation is that the set of hue descriptors inherits the illuminant-invariant property of Logvinenko’s object colour atlas. Despite the illuminant invariance of the atlas and the hue descriptors, metamer mismatching means that colour stimulus shift [4] can occur, which will inevitably lead to some hue shifts. However, tests show that KSM hue is robust in the sense that it is much more stable under a change of illuminant than CIELAB hue.
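One way to illustrate a central-wavelength hue descriptor (a simplified stand-in, not the KSM method itself) is to fit a Gaussian-shaped reflectance to a given colour signal and report the fitted central wavelength. The sensor curves, grid search, and Gaussian model below are all assumptions made for the sketch.

```python
import numpy as np

wl = np.linspace(400.0, 700.0, 31)               # wavelength samples (nm)
rng = np.random.default_rng(5)
S = rng.uniform(size=(31, 3))                    # stand-in sensitivities x light

def gaussian_reflectance(mu, sigma):
    """Gaussian-shaped reflectance with central wavelength mu."""
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

def central_wavelength(target_signal):
    """Grid-search the Gaussian whose colour signal best matches the target;
    the best-fitting central wavelength serves as the hue descriptor."""
    best = None
    for mu in np.linspace(400.0, 700.0, 301):
        for sigma in (20.0, 40.0, 80.0):
            g = gaussian_reflectance(mu, sigma) @ S
            scale = float(g @ target_signal) / float(g @ g)  # optimal scale, closed form
            err = float(np.sum((scale * g - target_signal) ** 2))
            if best is None or err < best[0]:
                best = (err, mu)
    return best[1]

target = (0.8 * gaussian_reflectance(620.0, 40.0)) @ S  # a long-wavelength stimulus
mu_hat = central_wavelength(target)
```

For a signal generated by a Gaussian reflectance, the fit recovers its central wavelength; the interesting question in the abstract is how stable such a descriptor stays when the illuminant (folded into `S` here) changes.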