High-end PC monitors and TVs continue to increase their native display resolution, to 4K by 2K and beyond.
Consequently, uncompressed pixel amplitude processing becomes costly not only when transmitting over cable or
wireless communication channels, but also when processing with array processor architectures. We recently presented a
block-based memory compression architecture for text, graphics, and video which we named parametric functional
compression (PFC) enabling multi-dimensional error minimization with context sensitive control of visually noticeable
artifacts. The underlying architecture was limited to small block sizes of 4x4 pixels. Although well suited to random
access, its overall compression ratio ranges between 1.5 and 2.0. To increase compression ratio as well as image quality,
we propose a new hybrid approach. Within an extended block size we apply two complementary methods using a set of
vectors with orientation and curvature attributes across a 3x3 kernel of pixel positions. The first method searches for
linear interpolation candidate pixels that result in very low interpolation errors using vectorized linear interpolation
(VLI). The second method calculates the local probability of orientation and curvature (POC) to predict and minimize
PFC coding errors. Detailed performance estimation in comparison with the prior algorithm highlights the effectiveness
of our new approach, identifies its current limitations with regard to high-quality color rendering at lower numbers of
bits per pixel, and illustrates remaining visual artifacts.

Digital cameras capture images through different Color Filter Arrays (CFAs) and then reconstruct the full color image. Each
CFA pixel captures only one primary color component; the other primary components are estimated using information
from neighboring pixels. The demosaicking algorithm estimates the two unknown color components at each
pixel location. Most demosaicking algorithms use the RGB Bayer CFA pattern with red,
green, and blue filters. The least-squares luma-chroma demultiplexing method is a state-of-the-art demosaicking
method for the Bayer CFA. In this paper we develop a new demosaicking algorithm using the Kodak-RGBW CFA. This
particular CFA reduces noise and improves the quality of the reconstructed images by adding white pixels. We have
applied non-adaptive and adaptive demosaicking methods using the Kodak-RGBW CFA to the standard Kodak image
dataset and compared the results with previous work.
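As a concrete baseline, a non-adaptive bilinear interpolation of the green channel for an RGGB Bayer mosaic can be sketched as follows (a minimal illustration with a generic layout convention; this is neither the least-squares luma-chroma method nor the Kodak-RGBW pipeline discussed above):

```python
def bilinear_green(raw):
    """Estimate the green channel of an RGGB Bayer mosaic by bilinear interpolation.

    `raw` is a 2D list of sensor values. At sites that already carry a green
    sample the value is kept; at red/blue sites the available green neighbours
    are averaged. This is the simplest non-adaptive baseline.
    """
    h, w = len(raw), len(raw[0])

    def is_green(r, c):
        # In an RGGB layout, green occupies sites where row + col is odd.
        return (r + c) % 2 == 1

    green = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            if is_green(r, c):
                green[r][c] = raw[r][c]
            else:
                neighbours = [raw[nr][nc]
                              for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                              if 0 <= nr < h and 0 <= nc < w]
                green[r][c] = sum(neighbours) / len(neighbours)
    return green
```

Adaptive methods improve on this by weighting neighbours according to local gradients instead of averaging them uniformly.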

Local backlight dimming is a popular technology in high quality Liquid Crystal Displays (LCDs). In those displays, the
backlight is composed of contributions from several individually adjustable backlight segments, set at different backlight
luminance levels in different parts of the screen, according to the luma of the target image displayed on the LCD. Typically,
transmittance of the liquid crystal cells (pixels) located in the regions with dimmed backlight is increased in order to
preserve their relative brightness with respect to the pixels located in the regions with bright backlight. There are
different methods for brightness preservation for local backlight dimming displays, producing images with different
visual characteristics. In this study, we have implemented, analyzed and evaluated several different approaches for
brightness preservation, and conducted a subjective study based on rank ordering to compare the relevant methods on a
real-life LCD with a local backlight dimming capability. In general, our results show that locally adapted brightness
preservation methods produce a more preferred visual outcome than global methods, but a dependency on the content is also
observed. Based on the results, guidelines for selecting the perceptually preferred brightness preservation method for
local backlight dimming displays are outlined.
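The underlying compensation step can be sketched as follows (a single-pixel, single-segment simplification under an assumed display gamma of 2.2; this is not any of the specific brightness preservation methods evaluated in the study):

```python
def compensate(pixel, backlight, gamma=2.2):
    """Raise LC transmittance to preserve brightness under a dimmed backlight.

    `pixel` is the original drive level in [0, 1] assuming full backlight, and
    `backlight` is the local dimming factor in (0, 1]. The target luminance is
    pixel**gamma; the compensated drive level reproduces it under the dimmed
    backlight, clipping when the required transmittance exceeds 1. The gamma
    value and the single-segment model are simplifying assumptions.
    """
    target = pixel ** gamma          # luminance wanted at full backlight
    required = target / backlight    # transmittance needed after dimming
    return min(1.0, required ** (1.0 / gamma))
```

The clipping case is exactly where the evaluated methods differ: they trade clipped highlights against preserved brightness in different, content-dependent ways.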

Camera modules have become more popular in consumer electronics and office products. As a consequence,
people have many opportunities to use a camera-based device to record a hardcopy document in their daily lives.
However, undesired shading is easily introduced into the captured document image through the camera. Sometimes,
this non-uniformity may degrade the readability of the contents. In order to mitigate this artifact, some solutions
have been developed, but most of them are only suitable for particular types of documents.
In this paper, we introduce a content-independent and shape-independent method that will lessen the shading
effects in captured document images. We want to reconstruct the image such that the result will look like a
document image captured under a uniform lighting source. Our method utilizes the 3D depth map of the
document surface and a look-up table strategy. We first discuss the model and the assumptions used in
the approach; then we describe the process of creating and utilizing the look-up table.
We implement this algorithm with our prototype 3D scanner, which also uses a camera module to capture a
2D image of the object. Some experimental results will be presented to show the effectiveness of our method.
Both flat and curved surface document examples will be included.

The image quality of reprinted documents that were scanned at a high resolution may not satisfy human viewers, who anticipate at least the same image quality as the original document. Moiré artifacts without proper descreening, text blurred by a poor scanner modulation transfer function (MTF), and color distortion resulting from misclassification between color and gray may make the reprint quality worse. To remedy these shortcomings of reprinting, the documents should be classified into various attributes such as image or text, edge or non-edge, continuous-tone or halftone, color or gray, and so on. The reprint quality can then be improved by applying proper enhancement based on these attributes. In this paper, we introduce a robust and effective approach to classify scanned documents into per-pixel attributes. The proposed document segmentation algorithm utilizes simple features such as the variance-to-mean ratio (VMR), gradient, etc., in various combinations of sizes and positions of a processing kernel. We also exploit the direction of the gradients at multiple positions of the same kernel to detect text as small as 4 points. Experimental results show that our proposed algorithm performs well over various types of scanned documents, including documents printed at low lines-per-inch (LPI) resolutions.
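The variance-to-mean ratio feature named above can be sketched as follows (window extraction, kernel geometry, and the classification thresholds are omitted; the kernel contents are assumed to be a flat list of pixel values):

```python
def vmr(window):
    """Variance-to-mean ratio of a flat list of pixel values from one kernel.

    A small VMR suggests a smooth, continuous-tone region, while halftone or
    text regions produce larger values. Turning this feature into pixel
    attributes requires thresholds not reproduced here.
    """
    n = len(window)
    mean = sum(window) / n
    if mean == 0:
        return 0.0
    var = sum((v - mean) ** 2 for v in window) / n
    return var / mean
```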

Text line detection is a critical step for applications in document image processing. In this paper, we propose a novel text
line detection method. First, the connected components are extracted from the image as symbols. Then, we estimate the
direction of the text line in multiple local regions. To our knowledge, this is the first time that this estimation has been
formulated in a cost-optimization framework. We also propose an efficient way to solve this optimization problem. Afterwards,
we consider symbols as nodes in a graph, and connect symbols based on the local text line direction estimation results.
Last, we detect the text lines by separating the graph into subgraphs according to the nodes’ connectivities. Preliminary
experimental results demonstrate that our proposed method is very robust to non-uniform skew within text lines, variability
of font sizes, and complex layout structures. Our new method works well for documents captured with flat-bed and
sheet-fed scanners, mobile phone cameras, and other general imaging devices.

This paper proposes a Gaussian-mixture-based image enhancement method which uses particle swarm optimization
(PSO) to gain an edge over other contemporary methods. The proposed method uses a Gaussian mixture model to
model the lightness histogram of the input image in CIE L*a*b* space. The intersection points of the Gaussian
components in the model are used to partition the lightness histogram. The enhanced lightness image is generated by
transforming the lightness values in each interval to the appropriate output interval according to a transformation function
that depends on the PSO-optimized parameters: the weight and standard deviation of each Gaussian component and the
cumulative distribution of the input histogram interval. In addition, chroma compensation is applied to the resulting
image to reduce the washed-out appearance. Experimental results show that the proposed method produces a better
enhanced image than traditional methods. Moreover, the enhanced image is free from several side effects such as a
washed-out appearance, information loss, and gradation artifacts.
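The histogram-partitioning step, finding the intersection points of adjacent Gaussian components, can be sketched generically (this is the standard log-quadratic derivation, not code from the paper; the PSO optimization is not shown):

```python
import math

def gaussian_intersections(w1, mu1, s1, w2, mu2, s2):
    """Intersection points of two weighted 1-D Gaussian components.

    Setting w1*N(x; mu1, s1) = w2*N(x; mu2, s2) and taking logarithms gives
    a quadratic in x; its real roots are the candidate partition points of
    the lightness histogram.
    """
    a = 1.0 / (2 * s2 * s2) - 1.0 / (2 * s1 * s1)
    b = mu1 / (s1 * s1) - mu2 / (s2 * s2)
    c = (mu2 * mu2) / (2 * s2 * s2) - (mu1 * mu1) / (2 * s1 * s1) \
        + math.log(w1 / s1) - math.log(w2 / s2)
    if abs(a) < 1e-12:                  # equal variances: the equation is linear
        return [] if abs(b) < 1e-12 else [-c / b]
    disc = b * b - 4 * a * c
    if disc < 0:
        return []
    root = math.sqrt(disc)
    return sorted([(-b - root) / (2 * a), (-b + root) / (2 * a)])
```

With equal weights and variances the single intersection is the midpoint of the two means, which matches the intuitive partition of two symmetric modes.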

This paper presents an automatic image correction technique operating under real-time
constraints (25 ms) to address red dot artifacts in low-resolution display panels. The algorithm is
designed as a two-stage process, starting with the identification of pixels which could cause such
artifacts, followed by a color correction scheme to compensate for any perceived visual errors.
Artifact inducing pixels are identified by thresholding the vector color gradient of the image.
The color levels of the adjacent subpixels around the artifacts are estimated based on partitive
spatial color mixing. Red dot artifacts occur as singlets, couplets, or triplets, and consequently three
different correction schemes are explored. The algorithm also ensures that artifacts occurring
at junctions, corners, and cross sections are corrected without affecting the underlying shape or
contextual sharpness. The performance of our algorithm was benchmarked by human observers using a series of 30
corrected and uncorrected images. This algorithm is designed as a general-
purpose color correction technique for display panels with an offset in their subpixel patterns
and can be easily implemented as an isolated real-time post-processing technique for the output
display buffer without any higher-order processing or image content information. All the above-
mentioned benefits are realized through software and do not require any upgrade or replacement
of existing hardware.

This study proposes a post-processing method for video enhancement that adopts a color-protection
technique. Color protection attenuates perceptible artifacts due to over-enhancement in visually sensitive
image regions such as low-chroma colors, including skin and gray objects. In addition, reducing the loss in color texture
caused by the out-of-color-gamut signals is also taken into account. Consequently, color reproducibility of video
sequences could be remarkably enhanced while the undesirable visual exaggerations are minimized.

In this paper, we propose a new fast and effective approach for automatic visibility enhancement of images with poor
global and local contrast. Initially, we developed the technique for scanned images with dark and light background
regions and low visibility of foreground objects in both types of regions. The proposed algorithm carries out locally
adaptive tone mapping by means of a variable S-shaped curve, for which we use a cubic Hermite spline. The starting and
ending points of the spline depend on global brightness contrast, whereas the tangents depend on the local distribution
of background and foreground pixels. The tangents of adjacent areas are smoothed in order to avoid visible artifacts.
We describe several optimization tricks that enable a high-speed implementation. We compare the
proposed method with several well-known image enhancement techniques by estimating the Michelson contrast
(also known as the visibility metric) for a number of test patterns. The proposed algorithm outperforms the tested alternatives.
Finally, we extend the method to photo enhancement and correction of images with haze.
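The Michelson contrast used for the comparison is the standard visibility metric, sketched here for a flat list of luminance values (the test patterns themselves are not reproduced):

```python
def michelson_contrast(pixels):
    """Michelson (visibility) contrast: (Lmax - Lmin) / (Lmax + Lmin).

    `pixels` is any iterable of non-negative luminance values. A value of 0
    means a flat pattern; 1 means full contrast.
    """
    lo, hi = min(pixels), max(pixels)
    if hi + lo == 0:
        return 0.0
    return (hi - lo) / (hi + lo)
```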

Image classification is a prerequisite for copy quality enhancement in an all-in-one (AIO) device that comprises a
printer and a scanner and can be used to scan, copy, and print. Different processing pipelines are provided
in an AIO printer. Each of the processing pipelines is designed specifically for one type of input image to achieve
the optimal output image quality. A typical approach to this problem is to apply a Support Vector Machine (SVM) to
classify the input image and feed it to its corresponding processing pipeline. Online SVM training can help
users improve the performance of classification as input images accumulate. At the same time, we want to
make a quick decision on the input image to speed up the classification, which means that sometimes the AIO device
does not need to scan the entire image to make a final decision. These two constraints, online SVM and quick
decision, raise questions regarding: 1) what features are suitable for classification; and 2) how we should control the
decision boundary in online SVM training. This paper discusses the compatibility of online SVM and quick-decision
capability.

This paper examines the transferability of the Munsell system to modern inkjet colorants and printing technology,
following an approach similar to Munsell's original methods. While extensive research and development has gone into
establishing methods for measuring and modelling the modern colour gamut, this study seeks to reintegrate the
psychophysical and artistic principles used in Munsell's early colour studies with digital print. Contemporary inkjet
ink sets contain a greater number of primary colorants and are significantly higher in chroma compared to
the limited colorants available at the time of Munsell's original work. Following Munsell's design and implementation,
our experiments replicate the use of Clerk-Maxwell’s spinning disks in order to examine the effects of colour mixing
with these expanded colour capacities, and to determine hue distribution and placement. This work revisits Munsell's
project in light of known issues, and formulates questions about how we can reintegrate Munsell's approach for colour
description and mixing into modern colour science, understanding, and potential application.

Controlling printers so that the mixture of inks results in a specific color under a defined visual environment requires a
spectral reflectance model that estimates reflectance spectra from nominal dot coverage. This paper investigates
the dependence of the Yule-Nielsen modified spectral Neugebauer (YNSN) model accuracy on ink amount. It
is shown that the performance of the YNSN model strongly depends on the maximum ink amount applied. In a cellular
implementation, this limitation mainly occurs for high-coverage prints, which impacts the optimal cell design.
Effective coverages derived from both the Murray-Davies (MD) and YNSN models show large ink spreading. As ink-jet
printing is a non-impact printing process, the ink volume deposited per unit area (pixel) is constant, leading to the
hypothesis that isolated ink dots have a lower thickness than the full-tone ink film. Measured spectral reflectance curves
show a similar trend, which supports the hypothesis. The reduced accuracy of the YNSN model can thus be explained by
the fact that patches with lower effective coverage have a mean ink thickness very different from that of the full-tone
patch. The effect will be stronger for small dot coverage and large dot gain, and could partially explain why the
Yule-Nielsen n-factor differs between inks. The performance of the YNSN model could be improved by integrating ink
thickness variation.
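The YNSN prediction itself can be sketched as follows (the single-band spectra and coverages used in the example are illustrative assumptions; a real calibration uses measured Neugebauer primary spectra and a fitted n):

```python
def ynsn_reflectance(coverages, primaries, n):
    """Yule-Nielsen modified spectral Neugebauer prediction.

    `coverages` maps each Neugebauer primary to its fractional area coverage
    (summing to 1), `primaries` maps the same keys to reflectance spectra
    (lists over wavelength bands), and `n` is the Yule-Nielsen factor:
        R(l) = (sum_i a_i * R_i(l)**(1/n))**n
    """
    keys = list(coverages)
    bands = len(primaries[keys[0]])
    out = []
    for b in range(bands):
        acc = sum(coverages[k] * primaries[k][b] ** (1.0 / n) for k in keys)
        out.append(acc ** n)
    return out
```

With n = 1 the formula reduces to the linear (Murray-Davies-like) mixture; n > 1 lowers the predicted reflectance of partial coverages, mimicking optical dot gain.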

In order to predict the spectral reflectance of color halftone images, we propose a precise spectral reflectance prediction
model that accounts for the scattering of light within the paper and the penetration of ink into the substrate. The model
is based on the assumptions that the colorant is non-scattering and that the paper is a strongly scattering substrate.
Combining the multiple internal reflections of light between the paper substrate and the print-air interface with the
oblique light paths of the Williams-Clapper model, the proposed model takes into account ink
spreading, a phenomenon that occurs when printing an ink halftone in superposition with one or several solid inks. The
ink-spreading model includes nominal-to-effective dot area coverage functions, obtained by least-squares curve fitting,
for each of the different ink overprint conditions, together with a network structure of multiple reflections. The
modeled and the measured colors agree very well, confirming the validity of the proposed model. The new model provides a
theoretical foundation for color prediction analysis of halftone images and for the development of print quality detection
systems.

Spectral prediction models are widely used for characterizing classical, almost transparent ink halftones printed on a diffuse substrate. Metallic-ink prints, however, reflect a significant portion of light in the specular direction. Due to their opaque nature, multi-color metallic halftones require juxtaposed halftoning methods, where halftone dots of different colors are laid out side by side. In this work, we study the application of the Yule-Nielsen spectral Neugebauer (YNSN) model to metallic halftones in order to predict their reflectances. The model is calibrated separately at each considered illumination and observation angle, yielding a different Yule-Nielsen n-value for each measuring geometry. For traditional prints on paper, the n-value expresses the amount of optical dot gain. In the case of metallic prints, the optical dot gain is much smaller than in paper prints. With the fitted n-values, we try to better understand the interaction of light and metallic halftones.

This paper develops a method for multichannel halftoning based on the Direct Binary Search (DBS) algorithm.
We integrate the specifics and benefits of multichannel printing into the halftoning method in order to further improve
the texture quality of DBS and to create a halftoning method suited to multichannel printing. Multichannel
printing was originally developed for an extended color gamut; at the same time, the additional channels can help to
improve the individual and combined textures of color halftones. It does so in a manner similar to the introduction of
light colors (diluted inks) in printing. Namely, if one regards red, green, and blue inks as light versions of the M+Y,
C+Y, and C+M combinations, the visibility of unwanted halftone textures can be reduced. The analogy can be extended
to any number of ink combinations, or Neugebauer Primaries (NPs), as alternative building blocks. The extended
variability of printing spatially distributed NPs could provide many practical solutions and improvements in color
accuracy and image quality, and could enable spectral printing. This could be done by selecting NPs per dot location
based on the constraints of the desired reproduction. Replacement with a brighter NP at a location could induce a color
difference, creating a tradeoff between image quality and color accuracy. With multichannel-enabled DBS halftoning,
we are able to reduce the visibility of the textures and to provide better rendering of transitions, especially in mid and
dark tones.

Color dithering methods for LEGO-like 3D printing are proposed in this study. The first method works for opaque
color brick building. It is a modification of classic error diffusion. Many color primaries can be chosen; however,
RGBYKW is recommended, as its image quality is good and the number of color primaries is limited. For translucent
color bricks, multi-layer color building can enhance the image quality significantly. A LUT-based method is proposed
to speed up the dithering process and make the color distribution even smoother. Simulation results show that the
proposed multi-layer dithering method can significantly improve the image quality of LEGO-like 3D printing.
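The opaque-brick method modifies classic error diffusion; a one-row simplification with a fixed brick palette can be sketched as follows (the 2-D diffusion kernel, brick geometry, and the paper's actual palette handling are not reproduced, and the palette below merely stands in for RGBYKW):

```python
def error_diffusion_1d(row, palette):
    """One-row error diffusion onto a fixed set of brick colors.

    Each pixel (an (r, g, b) tuple in [0, 1]) is replaced by the nearest
    palette color and the quantization error is pushed to the next pixel,
    a 1-D simplification of the 2-D error-diffusion kernels used in print.
    """
    def nearest(p):
        return min(palette, key=lambda q: sum((a - b) ** 2 for a, b in zip(p, q)))

    out, err = [], (0.0, 0.0, 0.0)
    for px in row:
        wanted = tuple(v + e for v, e in zip(px, err))   # add carried error
        chosen = nearest(wanted)
        err = tuple(w - c for w, c in zip(wanted, chosen))
        out.append(chosen)
    return out

# Red, green, blue, yellow, black, white brick colors.
RGBYKW = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 0), (0, 0, 0), (1, 1, 1)]
```

Because the error is carried forward, the average color of the output bricks matches the average input color even though each individual brick is restricted to the palette.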

With the emergence of high-end digital printing technologies, it is of interest to analyze the nature and causes of image graininess in order to understand the factors that prevent high-end digital presses from achieving the same print quality as commercial offset presses. In this paper, we report on a study to understand the relationship between image graininess and halftone technology. With high-end digital printing technology, irregular screens can be considered since they can achieve a better approximation to the screen sets used for commercial offset presses. This is due to the fact that the elements of the periodicity matrix of an irregular screen are rational numbers, rather than integers, which would be the case for a regular screen. To understand how image graininess relates to the halftoning technology, we recently performed a Fourier-based analysis of regular and irregular periodic, clustered-dot halftone textures. From the analytical results, we showed that irregular halftone textures generate new frequency components near the spectrum origin; and that these frequency components are low enough to be visible to the human viewer, and to be perceived as a lack of smoothness. In this paper, given a set of target irrational screen periodicity matrices, we describe a process, based on this Fourier analysis, for finding the best realizable screen set. We demonstrate the efficacy of our method with a number of experimental results.

ICC has announced a preliminary specification for iccMAX, a next-generation colour management system that expands
the existing ICC profile format and architecture to overcome the limitation of the fixed colorimetric Profile Connection
Space and support a much wider range of functionality. New features introduced in iccMAX include spectral processing,
material identification and visualization, BRDF, new data types, an improved gamut boundary descriptor and support for
arbitrary and programmable transforms. The iccMAX preliminary specification is accompanied by a reference
implementation, and will undergo a period of public review before being finalized.

A need for a baseline algorithm for mapping from the Perceptual Reference Medium Gamut to destination media in
ICC output profiles has been identified. Before such a baseline algorithm can be recommended, it requires careful
evaluation by the user community. A framework for encoding the gamut boundary and computing intersections with
the PRMG and output gamuts respectively is described. This framework provides a basis for comparing different
gamut mapping algorithms, and a candidate algorithm is also described.

Monochrome images are often converted to false-colour images, in which arbitrary colours are assigned to regions
of the image to aid recognition of features within the image. Criteria for selection of colour palettes vary according
to the application, but may include distinctiveness, extensibility, consistency, preference, meaningfulness and
universality. A method for defining a palette from colours on the surface of a reference gamut is described, which
ensures that all colours in the palette have the maximum chroma available for the given hue angle in the reference
gamut. The palette can be re-targeted to a reproduction medium as needed using colour management, and this
method ensures consistency between cross-media colour reproductions using the palette.
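A simplified analogue of the palette construction, using the sRGB gamut surface in place of a measured reference gamut, might look like this (the even hue spacing and the HSV stand-in are assumptions of this sketch; a real implementation would intersect each hue plane with the reference gamut boundary):

```python
import colorsys

def surface_palette(n):
    """A palette of n maximally chromatic colors at equally spaced hues.

    Taking each color at full saturation and value in HSV places it on the
    sRGB gamut surface, giving the maximum chroma available at that hue in
    this simplified stand-in for a reference gamut.
    """
    return [colorsys.hsv_to_rgb(i / n, 1.0, 1.0) for i in range(n)]
```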

Recently, many 3D content production tools using multi-view systems have been introduced, e.g., for depth estimation, 3D reconstruction, and so forth. However, there is a color mismatch problem in multi-view systems, and it can cause big differences in the final result. In this paper we propose a color correction method using 3D multi-view geometry. The proposed method finds correspondences between the source and target viewpoints and calculates a transformation matrix by using a polynomial regression technique. An experiment is performed in the CIELAB color space, which is designed to approximate the human visual system; the proposed method properly corrects the color compared to conventional methods. Moreover, we applied the proposed color correction method to 3D object reconstruction and acquired a 3D model that is consistent in terms of color.

Using aerial color images or remote-sensing color images to obtain earth surface information is an important way of
gathering geographic information. In order to improve the visibility of aerial color images captured in poor weather, a
variable-scale Retinex algorithm based on fractional differentials and a depth map is proposed. After a new fractional
differential operation, a dark channel prior treatment of the image yields an estimated depth map. Then,
according to the depth map, Retinex scales are calculated in each part of the image. Finally, a single-scale Retinex
transform is performed to obtain the enhanced image. Experimental results show that the proposed algorithm can
effectively improve the visibility of an aerial color image without the halo phenomena and color variation that occur
with comparable conventional algorithms. Compared with He's algorithm and others, the new algorithm is faster
and achieves a better enhancement effect for images with greatly differing scene depths.

The purpose of this study is to investigate the differences in the psychophysical judgment of mobile display color appearance between Europeans and Asians. A total of 50 subjects participated in this experiment, comprising 20 Europeans (9 French, 6 Swedish, 3 Norwegians, and 2 Germans) and 30 Asians (30 Koreans). A total of 18 display stimuli with different correlated color temperatures were presented, varying from 2,470 to 18,330 K. Each stimulus was viewed under 11 illuminants ranging from 2,530 to 19,760 K, while their illuminance was kept consistent at around 500 lux. The subjects were asked to assess the optimal level of the display stimuli under the different illuminants. In general, confirming previous studies on color reproduction, we found a positive correlation between the correlated color temperatures of the illuminants and of the optimal displays. However, Europeans preferred a lower color temperature than Asians along the entire range of the illuminants. Two regression equations were derived to predict the optimal display color temperature (y) under varying illuminants (x) as follows: y = α + β*log(x), where α = -8770.37 and β = 4279.29 for Europeans (R2 = 0.95, p < .05), and α = -16076.35 and β = 6388.41 for Asians (R2 = 0.85, p < .05). The findings provide a theoretical basis from which manufacturers can take a culture-sensitive approach to enhancing their products’ appeal in global markets.
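The reported regression equations can be applied directly; the sketch below assumes a base-10 logarithm, an assumption chosen because it keeps the predictions within the 2,470-18,330 K stimulus range used in the study:

```python
import math

# Reported regression coefficients (alpha, beta) per participant group.
COEFFS = {"european": (-8770.37, 4279.29), "asian": (-16076.35, 6388.41)}

def optimal_display_cct(group, illuminant_cct):
    """Predicted optimal display color temperature under a given illuminant.

    Implements y = alpha + beta * log(x); the base-10 logarithm is an
    assumption of this sketch, as the paper does not state the base here.
    """
    alpha, beta = COEFFS[group]
    return alpha + beta * math.log10(illuminant_cct)
```

At any fixed illuminant, the model predicts a lower optimal color temperature for the European group than for the Asian group, consistent with the reported preference difference.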

With the increase in size and resolution of mobile displays and advances in embedded processors for image enhancement,
the perceived quality of images on mobile displays has drastically improved. This paper presents a quantitative method
to evaluate the perceived image quality of color images on mobile displays. Three image quality attributes, colorfulness,
contrast, and brightness, are chosen to represent perceived image quality. Image quality assessment models are
constructed based on the results of human visual experiments. In this paper, three-phase human visual experiments are
designed to achieve credible outcomes while reducing the time and resources needed for the visual experiments. The
values of the parameters of the image quality assessment models are estimated from the experimental results, and the
performances of different image quality assessment models are compared.

Changing illumination causes the measurements of object colors to be biased toward the chromaticity of the illuminants. Various color constancy algorithms already exist to remove the chromaticity of the illuminants in an image and thereby improve image quality. Recently, NMFsc (nonnegative matrix factorization with sparseness constraints) was introduced to extract the illuminant and reflectance components of an image using nonnegative matrix decomposition and sparseness constraints. However, if an image has a chromaticity distribution dominated by a particular chromaticity, the sparseness constraint values include that dominant chromaticity, thereby inducing color distortion. Therefore, the proposed method modifies the matrix decomposition in NMFsc by using the standard deviation and the K-means algorithm in chromaticity space. Next, nonnegative matrix decomposition and sparseness constraints are applied to the image. Subsequently, the illumination is estimated by combining the low sparseness constraint values, which exclude the dominant chromaticity. The performance of the proposed method is evaluated using the angular error on the Ciurea dataset of 11,346 images. Experimental results illustrate that the proposed method reduces the angular error compared to previous methods.
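The angular error used for the evaluation is the standard metric for illuminant estimation, sketched here for RGB illuminant triples (the specific inputs are illustrative):

```python
import math

def angular_error(est, truth):
    """Angular error in degrees between estimated and ground-truth illuminants.

    Both arguments are RGB triples; the error is the angle between them,
    independent of their magnitudes.
    """
    dot = sum(a * b for a, b in zip(est, truth))
    na = math.sqrt(sum(a * a for a in est))
    nb = math.sqrt(sum(b * b for b in truth))
    cosang = max(-1.0, min(1.0, dot / (na * nb)))  # clamp for float safety
    return math.degrees(math.acos(cosang))
```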

We usually recognize color by two kinds of processes. In the first, color is recognized continuously, and small differences in color are perceived. In the second, color is recognized discretely: similar colors within a certain range are recognized as belonging to the same color category, and small differences in color are ignored. Recognition using color categories is important for communication using color. It is known that a color vision defect causes confusion of colors along the color confusion loci. However, the color categories of color vision defects have not been thoroughly researched. If the color categories of color vision defects are clarified, they will become an important key for color universal design. In this research, we classified color stimuli into four categories to determine the shape and the borders of the color categories for varied color vision. The experimental results were as follows. The borders for protanopia are the following three lines on the CIE 1931 (x, y) chromaticity diagram: y = -0.3068x + 0.4795, y = -0.1906x + 0.4021, and y = -0.2624x + 0.3896. The borders for deuteranopia are the following three lines: y = -0.7931x + 0.7036, y = -0.718x + 0.5966, and y = -0.6667x + 0.5061.
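Given the reported border lines, a chromaticity can be placed into one of the four categories by counting how many borders it lies above (the assumption that the borders are ordered and non-crossing over the relevant range is noted in the docstring):

```python
# Reported category borders y = m*x + b on the CIE 1931 (x, y) diagram.
PROTAN_BORDERS = [(-0.3068, 0.4795), (-0.1906, 0.4021), (-0.2624, 0.3896)]
DEUTAN_BORDERS = [(-0.7931, 0.7036), (-0.7180, 0.5966), (-0.6667, 0.5061)]

def category_index(x, y, borders):
    """Count how many category borders the chromaticity (x, y) lies above.

    The resulting index 0..3 identifies one of the four reported categories,
    assuming the borders are ordered and do not cross over the relevant
    chromaticity range (an assumption of this sketch).
    """
    return sum(1 for m, b in borders if y > m * x + b)
```

For example, the D65 white point (0.3127, 0.3290) lies above only the lowest protanopia border.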

The aim of this study is to investigate whether the Helmholtz-Kohlrausch effect exists in images with various
luminance and chroma levels. First, five images were selected. Each image was then adjusted to have 4 different
average CIECAM02 C levels and 5 different average CIECAM02 J levels; in total, 20 test images were generated from
each image for the psychophysical experiment. The psychophysical experiment was conducted in a dark room using an
LCD display. To evaluate the overall perceived brightness of the images, a magnitude estimation method was used.
Fifteen participants evaluated the brightness of each image by comparing it with the reference image. As a result,
participants tended to rate the brightness higher as the average CIECAM02 J and CIECAM02 C of the image increased,
demonstrating the Helmholtz-Kohlrausch effect in images.

The effect of the level of transmission and of surround luminance on the image quality of a transparent display is
studied. Images on an OLED transparent display were simulated on an LCD monitor by adding a background scene to the
original images. The psychophysical experiment was carried out on normal and simulated transparent displays under four
levels of surround luminance (dark, dim, average, and bright) and four levels of transmission (17, 52, 70, and 87%).
Four test images were selected for the experiment. Fifteen subjects participated and were asked to rate their degree
of preference for each image under each condition on a 7-point Likert scale. The results show that the image quality
of an OLED transparent display deteriorates as the surround luminance and the transmittance increase, but that
lowering the monitor gamma can help to increase image quality. This suggests that the image quality requirements for a
transparent display differ from those for conventional opaque displays.
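The simulation of a transparent display on an opaque monitor can be sketched as additive blending of the image with the transmitted background in linear light. The additive model, the gamma value, and the function name below are assumptions for illustration; the paper's exact compositing procedure is not reproduced here.

```python
import numpy as np

def simulate_transparent(image, background, transmission, gamma=2.2):
    """Simulate a transparent display: sum the panel's emitted image and
    the background transmitted through it, in linear light, then re-encode.
    Inputs are 0-255 arrays; `transmission` is the panel transmittance in
    [0, 1] (e.g. 0.17 ... 0.87 as in the experiment)."""
    img_lin = (np.asarray(image, dtype=float) / 255.0) ** gamma
    bg_lin = (np.asarray(background, dtype=float) / 255.0) ** gamma
    out_lin = np.clip(img_lin + transmission * bg_lin, 0.0, 1.0)
    return np.round(out_lin ** (1.0 / gamma) * 255.0).astype(np.uint8)
```

Lowering `gamma` in the re-encoding step darkens the blended result, which is one way to model the gamma adjustment the study found helpful.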

This paper proposes a method for estimating illuminant colors in scenes lit by two different light sources, i.e. fluorescent
light and daylight. Conventional methods assume that a single light source exists in a scene or in a small region and
that it illuminates the scene uniformly; they therefore cannot estimate illuminant colors when two different light sources
illuminate the scene or region. Our method formulates the relationships among the colors of two regions that
have the same surface reflectance but lie in different locations and receive different illumination rates (which vary from
location to location). To identify the unknown surface reflectance common to the color regions, the method uses
the property that the colors observed in these regions lie on a plane through the origin in a three-dimensional color space.
By determining the normal of this plane, which is unique to the surface reflectance, the coefficients of the basis functions of
the surface reflectance are derived. In this way we can estimate the illumination rates, that is, the colors of the scene illuminants.
The results of numerical simulations using the reflectance dataset from the ISO-TR 16066 database and two illuminants (a
typical fluorescent lamp and sunlight) show that the estimated illumination rates are close to the ground truth.
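The plane-normal step can be sketched as a least-squares fit: since the observed colors are assumed to lie on a plane through the origin, the right singular vector with the smallest singular value estimates the plane normal. The function name is illustrative.

```python
import numpy as np

def plane_normal(colors):
    """Estimate the normal of a plane through the origin from color samples.

    colors : (n, 3) array of color values assumed to lie (approximately)
             on a plane through the origin of the color space.
    Returns the unit normal as the right singular vector belonging to the
    smallest singular value (least-squares fit)."""
    _, _, vt = np.linalg.svd(np.asarray(colors, dtype=float))
    return vt[-1]
```

For two illuminant colors a and b, every mixture alpha*a + beta*b lies on the plane spanned by a and b, so the recovered normal is orthogonal to both.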

A display’s color subpixel geometry provides an intriguing opportunity for improving readability of text. True type fonts
can be positioned at the precision of subpixel resolution. With such a constraint in mind, how does one need to design
font characteristics? On the other hand, display manufacturers work hard to address the color display's dilemma: smaller
pixel pitch and larger display diagonals strongly increase the total number of pixels. Consequently, the cost of column and
row drivers as well as power consumption increase. Perceptual color subpixel rendering using color component
subsampling may save about 1/3 of color subpixels (and reduce power dissipation). This talk will try to elaborate on the
following questions, based on simulation of several different layouts of subpixel matrices: Up to what level are display
device constraints compatible with software specific ideas of rendering text? How much of color contrast will remain?
How to best consider preferred viewing distance for readability of text? How much does visual acuity vary at 20/20
vision? Can simplified models of human visual color perception be easily applied to text rendering on displays? How
linear is human visual contrast perception around band limit of a display’s spatial resolution? How colorful does the
rendered text appear on the screen? How much does viewing angle influence the performance of subpixel layouts and
color subpixel rendering?

A common task in universal design is to create a 'simulation' of the appearance of a colour image as it appears to a
CVD observer. Although such simulations are useful in illustrating the particular problems that a CVD observer has
in discriminating between colours in an image, it may not be reasonable to assume that such a simulation accurately
conveys the experience of the CVD observer to an observer with normal vision.
Two problems with this assumption are discussed here. First, it risks confusing appearance with sensation. A colour
appearance model can more or less accurately predict the change in appearance of a colour when it is viewed under
different conditions, but does not define the actual sensation. Such a sensation cannot be directly communicated but
merely located on a scale with other related sensations. In practice we avoid this epistemological problem by asking
observers to judge colour matches, relations and differences, none of which requires examination of the sensation
itself. Since we do not truly know what sensation a normal observer experiences, it seems unscientific to suppose
that we can do so for CVD observers.
Secondly, and following from the above, the relation between stimulus and corresponding sensation is established as
part of neural development during infancy, and while we can determine the stimulus we cannot readily determine
what sensation the stimulus is mapped to, or what the available range of sensations is for a given observer. It is
suggested that a similar range of sensations could be available to CVD observers as to normal observers.

Color deficient individuals have trouble seeing color contrasts that could be very apparent to individuals with
normal color vision. For example, for some color deficient individuals, red and green apples do not have the
striking contrast they have for those with normal color vision, or the abundance of red cherries in a tree is not
immediately clear due to a lack of perceived contrast. We present a smartphone app that enables color deficient
users to visualize such problematic color contrasts in order to help them with daily tasks. The user interacts
with the app through the touchscreen. As the user traces a path around the touchscreen, the colors in the image
change continuously via a transform that enhances contrasts that are weak or imperceptible for the user under
native viewing conditions. Specifically, we propose a transform that shears the data along lines parallel to the
dimension corresponding to the affected cone sensitivity of the user. The amount and direction of shear are
controlled by the user’s finger movement over the touchscreen allowing them to visualize these contrasts. Using
the GPU, this simple transformation, consisting of a linear shear and translation, is performed efficiently on each
pixel and in real-time with the changing position of the user’s finger. The user can use the app to aid daily tasks
such as distinguishing between red and green apples or picking out ripe bananas.
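The shear described above can be sketched as a per-pixel linear operation in cone (LMS) space. The parameterization below (shear amount and direction taken from the finger position, RGB-to-LMS conversion assumed to happen elsewhere) is an assumption for illustration, not the app's exact transform.

```python
import numpy as np

def daltonize_shear(lms, amount, direction, cone=0, offset=0.0):
    """Shear cone responses along the axis of the affected cone.

    lms       : (..., 3) array of LMS values (conversion from RGB is
                assumed to be done elsewhere).
    amount    : shear magnitude, e.g. derived from finger displacement.
    direction : weights of the two unaffected channels driving the shear.
    cone      : index of the affected cone (0 = L for a protan user).
    offset    : optional translation along the same axis.
    """
    out = np.array(lms, dtype=float, copy=True)
    u, v = [i for i in range(3) if i != cone]
    out[..., cone] += amount * (direction[0] * out[..., u]
                                + direction[1] * out[..., v]) + offset
    return out
```

Because the operation is an identical affine map per pixel, it parallelizes trivially, matching the abstract's point that the GPU applies it in real time.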

Color deficient people might be confronted with minor difficulties when navigating through daily life, for example when reading websites or media, navigating with maps, or retrieving information from public transport schedules. Color deficiency simulation and daltonization methods have been proposed to better understand the problems of color deficient individuals and to improve color displays for their use. However, it remains unclear whether these "color prosthetic" methods really work and how well they improve the performance of color deficient individuals. We introduce two methods to evaluate color deficiency simulation and daltonization methods based on behavioral experiments that are widely used in the field of psychology. First, we propose a Sample-to-Match Simulation Evaluation Method (SaMSEM); second, we propose a Visual Search Daltonization Evaluation Method (ViSDEM). Both methods can be used to validate simulation and daltonization methods related to color deficiency and to allow their generalization. We showed that both the response times (RT) and the accuracy of SaMSEM can be used as indicators of the success of color deficiency simulation methods and that performance in the ViSDEM can be used as an indicator of the efficacy of color deficiency daltonization methods. In future work, we will include comparison and analysis of different color deficiency simulation and daltonization methods with the help of SaMSEM and ViSDEM.

This study describes a color enhancement method that uses a color palette especially designed for protan and deutan
defects, commonly known as red-green color blindness. The proposed color reduction method is based on a simple color
mapping. Complicated computation and image processing are not required by using the proposed method, and the
method can replace protan and deutan confusion (p/d-confusion) colors with protan and deutan safe (p/d-safe) colors.
The color palettes for protan and deutan defects proposed in previous studies contain only a few p/d-safe colors; the
colors in these palettes are therefore insufficient for replacing colors in photographs. Recently, Ito et al. proposed a
p/d-safe color palette composed of 20 particular colors and demonstrated that it could be applied to color reduction in
photographs as a means of replacing p/d-confusion colors. This study presents the results of the proposed color
reduction applied to photographs that include typical p/d-confusion colors. After the reduction process, color-defective
observers can distinguish the previously confused colors.
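The simple color mapping can be sketched as nearest-neighbor replacement against the palette. The Euclidean metric and the function name below are assumptions; the paper's 20-color p/d-safe palette itself is not reproduced here.

```python
import numpy as np

def map_to_palette(pixels, palette):
    """Replace each pixel with the nearest palette color.

    pixels  : (n, 3) array of colors to reduce.
    palette : (k, 3) array of p/d-safe colors.
    Distance is Euclidean in whatever color space the inputs share."""
    px = np.asarray(pixels, dtype=float)
    pal = np.asarray(palette, dtype=float)
    d = ((px[:, None, :] - pal[None, :, :]) ** 2).sum(axis=-1)  # (n, k)
    return pal[d.argmin(axis=1)]
```

Because the mapping is a per-pixel table lookup, it needs no complicated computation or image processing, in line with the abstract's claim.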

A map is an information design object for which canonical colors for the most common elements are well established. For
a CVD observer, it may be difficult to discriminate between such elements - for example, it may be hard to distinguish a
red road from a green landscape on the basis of color alone. We address this problem through an adaptive color schema in
which the conspicuity of elements in a map to the individual user is maximized. This paper outlines a method to perform
adaptive color rendering of map information for users with color vision deficiencies. The palette selection method is based
on a pseudo-color palette generation technique which constrains colors to those which lie on the boundary of a reference
object color gamut. A user performs a color vision discrimination task, and based on the results of the test, a palette of
colors is selected using the pseudo-color palette generation method. This ensures that the perceived difference between
palette elements is high while retaining the canonical color of well-known elements as far as possible. We show examples
of color palettes computed for a selection of normal and CVD observers, together with maps rendered using these palettes.
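Palette selection that maximizes the perceived difference between elements can be sketched as greedy farthest-point selection over candidate colors (e.g. sampled from the reference gamut boundary). CIELAB Euclidean distance stands in for the observer-specific perceived difference, and the greedy scheme is illustrative, not the paper's exact method.

```python
import numpy as np

def select_palette(candidates, k, anchor=0):
    """Greedy max-min palette selection.

    candidates : (n, 3) array of candidate colors (e.g. CIELAB values on
                 the gamut boundary).
    k          : number of palette entries to pick.
    anchor     : index of a fixed starting color (e.g. a canonical map
                 color to retain).
    Repeatedly adds the candidate whose minimum squared distance to the
    already-chosen colors is largest; returns the chosen indices."""
    cand = np.asarray(candidates, dtype=float)
    chosen = [anchor]
    while len(chosen) < k:
        d = np.min(((cand[:, None] - cand[chosen][None]) ** 2).sum(-1), axis=1)
        d[chosen] = -1.0  # never re-pick an already-chosen color
        chosen.append(int(d.argmax()))
    return chosen
```

For a CVD-adapted palette, the distance function would be replaced by one derived from the user's discrimination-task results.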

Color-deficient observers are often confronted with problems in daily life due to the fact that some colors appear
less differentiable than for normal sighted people. So-called daltonization methods have been proposed to increase
color contrast for color-deficient people. We propose two methods for improved daltonization, which we call Spatial
Intensity Channel Replacement Daltonization (SIChaRDa). The intensity channel is replaced with a grayscale version
of the image computed using spatial color-to-gray methods that are capable of translating color contrasts into
lightness contrasts or color edges into lightness edges, and/or of integrating information from the red–green channel
into the intensity channel. We tested two implementations on different types of images and could show that the
results depend on the one hand on the algorithm used for computing the grayscale image, and on the other hand on
the content of the image. The spatial methods work best on real-life images where confusing colors are directly
adjacent to each other or in close proximity. In contrast, on composed artificial images with borders of white space
between the colors – as in the Ishihara plates, for example – they lead only to unsatisfactory results.
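Replacing the intensity channel with a precomputed color-to-gray result can be sketched as below. The simple I = (R+G+B)/3 intensity and the residual-preserving replacement are assumptions for illustration; the paper's implementations rely on dedicated spatial color-to-gray methods to compute the grayscale input.

```python
import numpy as np

def replace_intensity(rgb, gray):
    """Swap the intensity channel of an RGB image for a supplied grayscale.

    rgb  : (h, w, 3) array, 0-255.
    gray : (h, w) array, 0-255, e.g. the output of a spatial
           color-to-gray method (computed elsewhere).
    Keeps the per-channel residuals around the mean so that chromatic
    variation survives, while intensity is taken from `gray`."""
    rgb = np.asarray(rgb, dtype=float)
    intensity = rgb.mean(axis=-1, keepdims=True)        # I = (R+G+B)/3
    return np.clip(rgb - intensity + np.asarray(gray, dtype=float)[..., None],
                   0.0, 255.0)
```

The quality of the result therefore hinges entirely on how well the chosen color-to-gray method encodes red–green contrast in `gray`, which matches the finding that results depend on the grayscale algorithm.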

The goal of this study is to evaluate the difference in the preferred hues of familiar objects between color-deficient
and normal observers. Thirteen test color images were chosen, covering fruit colors, natural scenes, and human faces,
and containing red, yellow, green, blue, purple, and skin colors. Two color-deficient observers (deuteranomalous) and
two normal observers participated in this experiment. They adjusted the YCC hue of the objects in the images to
obtain the most preferred and the most natural image. The selected images were analyzed using the CIELAB values of
each pixel. The analysis showed that for naturalness both groups selected similar hues for most of the images, while
for preference the color-deficient observers preferred more reddish or more greenish images. Since deuteranomalous
observers have relatively weak perception in the red and green regions, they may prefer more reddish or greenish
colors. The color difference between the natural hue and the preferred hue was larger for the deuteranomalous
observers than for the normal observers.

For the luminaire manufacturer, the measurement of the luminous intensity distribution (LID) emitted by a lighting fixture
is based on photometry. Light is thus measured as an achromatic intensity value, and there is no possibility of
discriminating white from colored light in the measurement. At the Laboratorio Luce of Politecnico di Milano a new
instrument for measuring the spectral radiant intensity distribution of lighting systems has been built: the
gonio-spectroradiometer. This new measuring tool is based on a traditional mirror gonio-photometer with a CCD
spectroradiometer controlled by a PC.
Besides the traditional representation of the photometric distribution, we have introduced a new representation in
which, in addition to the information about the distribution of luminous intensity in space, details about the
chromaticity characteristics of the light sources are included.
Some of the results of this research have been applied in developing and testing a new line of lighting systems, "My
White Light" (part of the research project "Light, Environment and Humans" funded by the Italian Lombardy region
Metadistretti Design Research Program and involving Politecnico di Milano, Artemide, Danese, and other SMEs of the
Lighting Design district), providing scientific and applicative notions to support the assumption that colored light
sources can be used in interior luminaires that, besides having low power consumption and long life, may positively
affect people's mood.