Color management systems are being introduced worldwide to improve the color quality of digital image capture and device-independent electronic color image reproduction. To supply device-independent color data at interfaces in imaging systems, device-dependent color correction is required. The paper discusses concepts envisaged for color correction in image-capturing devices with respect to fundamental requirements on color analysis. Common image-capturing technology is based on the use of three color channels. The main points of the discussion are the shortcomings of this technology in analyzing metameric colors correctly and the question of whether this will be an essential point for future imaging technology. Further parts of the paper cover the alternative multispectral technology. Multispectral cameras delivering the complete color-stimulus spectrum for each pixel of an image are available in the laboratory. This technology offers a solution to the problem of metameric color analysis and offers flexibility to match different illuminants as well; yet the amount of additional effort is large. The paper summarizes studies and ideas on multispectral color technology and on how this technology might be introduced in future imaging and color management systems.

A lighting system is proposed for acquiring color images under a variety of illuminations. This system is constructed with halogen lamps, color filters, white diffusion filters, dimmers, and a personal computer as a controller. Colored light with continuous spectral power distribution is generated based on the additive color mixture of RGB primary lights. First, we describe a method for generating light of a desired color stimulus value. The basic procedure is performed in two steps: (1) XYZ-RGB color coordinate conversion and (2) correction of nonlinearities. A practical procedure is presented for generating colored light with (x,y) chromaticity coordinates of any value within a specified color gamut.
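As an illustration of the two-step procedure, the sketch below solves the XYZ-to-RGB conversion with a linear-algebra step and then inverts an assumed power-law dimmer nonlinearity; the primary matrix and the gamma exponent are hypothetical stand-ins for measured device data, not values from the paper:

```python
import numpy as np

# Hypothetical XYZ tristimulus values of the R, G, B primary lights at full
# power (one primary per column); real values would be measured.
M = np.array([[0.49, 0.31, 0.20],
              [0.17, 0.81, 0.01],
              [0.00, 0.01, 0.99]])

def drive_levels(xyz, gamma=2.2):
    """Step 1: XYZ -> RGB coordinate conversion; step 2: nonlinearity correction."""
    rgb_linear = np.linalg.solve(M, np.asarray(xyz, dtype=float))
    rgb_linear = np.clip(rgb_linear, 0.0, 1.0)   # stay inside the achievable gamut
    return rgb_linear ** (1.0 / gamma)           # invert the assumed dimmer nonlinearity

# Round trip: the linear mixture of the corrected levels reproduces the target XYZ
target = M @ np.array([0.2, 0.5, 0.8])
levels = drive_levels(target)
print(np.allclose(M @ levels**2.2, target))
```

Any target chromaticity inside the triangle spanned by the three primaries can be generated this way; points outside it are clipped.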

Image reproduction from one medium to another suffers from differences in size and shape of color gamut, as well as from strong metamerism when spectral compositions of light from corresponding pixels are very different. The present purpose is to suggest primaries, and their resulting color gamuts, for any form of display or hardcopy providing color images to be viewed by normal human vision. Both optimum primaries and optimum color gamuts, appropriate to images produced by either additive or subtractive color mixture, are here defined by the prime colors of the normal human observer. In the case of additive color mixture, the optimum primaries are spectral lights near 450 nm, 533 nm, and 611 nm; these wavelengths mark the peaks of the three spectral sensitivities of the normal human visual system. With subtractive color mixture, the suggested `primaries' are lights composed of incident light reflected from broader components of spectral reflectance, still peaked (or averaged) near the same three wavelengths. These additive primaries provide (1) maximum visual efficiency (brightest image per unit power output) and (2) a color gamut appreciably larger than those of the CIE Standard Observers. In addition, for subtractive color mixture, the optimum `primaries' lead (3) to `preferred' coloration of imaged objects, (4) to optimum constancy of hue and chroma of each pixel, against variation of the spectral power distribution of the illumination, and (5) to strong reduction of metamerism.

Today, the characterization of scanners combines properties of both the scanner and the medium. A different method is proposed here. If the characterization of the scanner is carried out spectrally, it is not specific to a medium and the medium properties can be acquired separately. The drawback of this approach is the lack of a simple solution to determine spectral properties of a scanner. Therefore, a method to estimate the spectral properties of a scanner by scanning a single test chart is proposed.

To achieve high image quality throughout a digital imaging system, the first requirement is to ensure the quality of the device that captures real-world physical images to digital images, for example a desktop scanner. Several factors influence this quality: optical resolution, bit depth, spectral sensitivities, and acquisition noise, to mention a few. In this study we focus on the colorimetric capabilities of the scanner, that is, the scanner's ability to deliver quantitative device-independent digital information about the colors of the original document. We propose methods to convert from the scanner's device-dependent RGB color space to the standard device-independent color space sRGB. The methods have been evaluated using several different desktop scanners. Our results are very good: mean CIELAB ΔE*ab color errors as low as 1.4. We further discuss advantages and disadvantages of a digital color imaging system using the sRGB space for image exchange, compared to using other color architectures.

The purpose of this study was to assess the performance of newly realized color sensors manufactured in thin-film technology and to compare the results with simulations. In a previous study, a novel technology of three- and six-channel color sensors was presented. The performance of the sensors was tested in simulations and compared to other sensors for different characterization methods (polynomial regression and smoothing inverse). Now, these methods are supplemented by a new linear programming method. Moreover, practical experiments with real color capture have been conducted.

Several techniques are discussed in the literature, such as first- or higher-order polynomial modeling and the application of neural networks. We used another technique, based on a Principal Component Analysis (PCA), in order to predict spectra. To calibrate the camera one may work with reflectance spectra or with density spectra. The PCA is affected by this choice and leads to basis spectra that are more sensitive to either reflectance or density. We discuss the advantages and disadvantages of working with reflectances or densities, and we present the results we have obtained by calculating colorimetric and densitometric values.

In this article we describe the experimental setup of a multispectral image acquisition system consisting of a professional monochrome CCD camera and a tunable filter whose spectral transmittance can be controlled electronically. We have performed a spectral characterization of the acquisition system, taking into account the acquisition noise. To convert the camera output signals to device-independent data, two main approaches are proposed and evaluated. One consists of applying regression methods to convert from the K camera outputs to a device-independent color space such as CIEXYZ or CIELAB. The other is based on a spectral model of the acquisition system. By inverting the model using a principal eigenvector approach, we estimate the spectral reflectance of each pixel of the imaged surface.
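The spectral-model inversion can be sketched along these lines; the sensitivities, basis spectra, and dimensions below are synthetic stand-ins (in practice the basis would come from principal eigenvectors of measured reflectances):

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 31, 6                  # wavelength samples, camera channels

# Assumed smooth basis spectra; stand-ins for measured principal eigenvectors
wl = np.linspace(0.0, 1.0, N)
B = np.stack([np.ones(N), wl, wl**2], axis=1)   # (N, 3)

S = rng.random((K, N))        # stand-in spectral sensitivities of the K channels
r_true = B @ np.array([0.4, 0.3, -0.2])         # a reflectance in the basis span
c = S @ r_true                # noiseless camera outputs for one pixel

# Invert the model: solve (S B) w = c in the least-squares sense, then r = B w
w, *_ = np.linalg.lstsq(S @ B, c, rcond=None)
r_est = B @ w
print(np.allclose(r_est, r_true))
```

With noiseless data and a reflectance lying in the basis span, the recovery is exact; with real acquisition noise, the low-dimensional basis acts as the regularizer.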

This paper compares different methods for the specification of matrices that transform camera- and illumination-specific RGB values into device-independent CIEXYZ tristimulus values. Various methods proposed in the literature, as well as a completely new method, are discussed. The performance is measured by calculating ΔE*ab values between the actual XYZ values of a set of color samples and the values achieved by applying a transformation matrix. Tests are performed for color samples both inside and outside of the training set.
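The baseline regression approach common to such comparisons can be sketched as a least-squares matrix fit; note this is a generic illustration rather than the paper's new method, and it reports raw XYZ error where the paper evaluates ΔE*ab:

```python
import numpy as np

rng = np.random.default_rng(1)

# Training set: device RGB responses and measured XYZ values of color samples.
# Here the "measurements" are synthesized with a known matrix plus noise.
M_true = np.array([[0.41, 0.36, 0.18],
                   [0.21, 0.72, 0.07],
                   [0.02, 0.12, 0.95]])
rgb = rng.random((50, 3))
xyz = rgb @ M_true.T + 0.001 * rng.standard_normal((50, 3))

# Least-squares fit of a 3x3 matrix mapping RGB to XYZ
M_fit, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)

# Evaluate on samples outside the training set
rgb_test = rng.random((20, 3))
err = np.abs(rgb_test @ M_fit - rgb_test @ M_true.T)
print(err.max())
```

Evaluating outside the training set, as the paper does, is what distinguishes a matrix that generalizes from one that merely memorizes the chart.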

In this paper, we describe a method for color reproduction based on spectral reflectance. The accuracy of color calibration depends on the number and distribution of the color patches as well as on the color calibration method. The effect of dot overlap between neighboring dots makes the printed color patches significantly darker. Therefore we propose a new design of the color patches in which the distribution of the printed spectral reflectances is uniform in the spectral domain. We measure the spectral reflectances of the color patches printed with an inkjet printer. The basis functions are extracted from the measured patch data using principal component analysis by Karhunen-Loeve expansion. A neural network is used to transform the coefficients of the principal component analysis to CMY colorants. This patch design increases the accuracy of the neural network's conversion. The accuracy of color reproduction is evaluated according to the number and distribution of the color patches, in terms of root mean square error and color difference.

This paper has two objectives. The first is to explain our color reproduction procedure, especially our new gamut check and mapping method, together with comparative experimental results. The second is to discuss the availability and effectiveness of the multilayer perceptron (MLP) in the color reproduction field. We show the problems and merits of the MLP and its learning through experiments.

An accurate CMM, rendering intent, source and media-characterized destination profiles, and their proper concatenation are key elements of a ColorSync workflow. Given a sequential architecture, all elements must function properly if the system is to succeed. Our experiment confirms that the quality and accuracy of the source and destination profiles ultimately determine the performance of a ColorSync color-managed workflow.

In this paper, we propose a color image processing method that combines modern signal processing techniques with knowledge about the properties of the human color vision system. Color signals are processed differently according to their visual importance. The emphasis of the technique is on preserving the total visual quality of the image while simultaneously taking computational efficiency into account. A specific color image enhancement technique, termed Hybrid Vector Median Filtering, is presented. Computer simulations have been performed to demonstrate that the new approach is technically sound and that its results are comparable to or better than those of traditional methods.

We investigate the problem of estimating scene illumination from an image. By approximating the spectral surface reflectance and the illumination spectral power distribution in finite linear spaces, the projection of the illumination onto the linear space is computed by minimizing the error in pixel values. The solution space is constrained by using the physical properties of both reflectance and illumination. Various techniques, such as multiple hypotheses (hence multiple linear spaces) for the illumination, are used to improve the results. We have found that by approximating the surface reflectance in a 3- or 4-dimensional linear space and the illumination in a 5- or 6-dimensional linear space, the new algorithm significantly outperforms the gray-world algorithm for RGB images.
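A stripped-down version of the linear-space projection might look as follows; for illustration the surface reflectances are taken as known and noiseless, whereas the paper constrains both reflectance and illumination:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 31                                    # wavelength samples

S = rng.random((3, N))                    # RGB spectral sensitivities (stand-ins)
E = np.stack([np.ones(N),                 # low-dimensional illuminant basis
              np.linspace(-1, 1, N),
              np.linspace(-1, 1, N)**2], axis=1)

# Scene: a few surfaces with known reflectances under an unknown illuminant
refl = rng.random((8, N))
w_true = np.array([1.0, 0.3, -0.2])
illum = E @ w_true

# Camera pixel value of surface i is S @ (refl_i * illum), which is linear in w,
# so stacking all pixels gives an ordinary least-squares problem for w.
A = np.vstack([S @ (np.diag(r) @ E) for r in refl])
b = np.concatenate([S @ (r * illum) for r in refl])

w_est, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(E @ w_est, illum))
```

The physical constraints mentioned in the abstract (non-negative reflectance and illumination) would turn this unconstrained solve into a constrained optimization.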

Digital Still Camera images often have undesired color casts due to unusual illuminant sources. This paper describes a novel color correction technique that uses adaptive segmentation techniques to identify the presence of such casts, estimate their chromatic strength, and alter the image's near-neutral color regions to compensate for the cast. The segmentation method identifies most major objects in the scene and their average color.

One of the image quality problems with printing images from the Web is that the color information associated with those images is normally incomplete, or even incorrect. In order to generate a good printout of an arbitrary image from the Web, certain assumptions have to be made. The major variable in image quality is the intended gamma the image was designed for. Consider, for example, a website hosting an image collection. The site might allow you to browse and print images. However, browsing is normally done on a monitor with a gamma of roughly 2, and printing is done on a xerographic or ink-jet printer with a gamma of 1. The displayed and printed images will therefore differ quite drastically unless the gamma is corrected. Unfortunately, it is not known which gamma setting was used by the website, and therefore a fixed conversion (e.g., always assuming the data is intended for monitor gamma) is often faulty. This talk describes a way around this problem by trying to automatically identify the correct gamma based on the image data.
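Once a gamma has been identified, re-encoding the data for the target device is straightforward; a minimal sketch, assuming simple power-law transfer functions:

```python
import numpy as np

def regamma(img, gamma_src, gamma_dst):
    """Re-encode normalized image data from one intended gamma to another."""
    linear = np.clip(img, 0.0, 1.0) ** gamma_src   # decode to linear light
    return linear ** (1.0 / gamma_dst)             # re-encode for the target device

# Data authored for a gamma-2 monitor, re-encoded for a gamma-1 printer:
monitor_data = np.array([0.25, 0.5, 0.75])
print(regamma(monitor_data, 2.0, 1.0))   # -> 0.0625, 0.25, 0.5625
```

The hard part, which the talk addresses, is estimating `gamma_src` from the image data alone; the re-encoding itself is this one-liner.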

In this paper, a method to calculate the illuminant chromaticity of an image is proposed by combining the perceived-illumination and highlight approaches. The hybrid approach is more stable and accurate than either approach alone. The application of this algorithm is two-fold: for simple and quick implementation, perceived illumination is enough, and for more accurate cases, the hybrid approach can be used. Conversion of the image illuminant chromaticity is also proposed; this can be applied to special effects for images.

The colorimetric characterization of printers using more than three colorants is discussed. In such printers, there is no unique combination of colorant amounts for the reproduction of a particular color. We categorize these printers as either black printers or hi-fi printers. Black printers use black (K) in addition to cyan (C), magenta (M), and yellow (Y). Hi-fi printers use saturated colorants such as red (R), green (G), and blue (B) in addition to CMYK colorants. We propose two methods of determining combinations of colorant amounts: the variable reduction method and the division method. The variable reduction method uses connecting functions to reduce the number of variables controlling colorant amounts. Although this method offers simplicity, it does not always utilize the entire color gamut. The division method employs sub-gamuts composed of appropriate sets of three or four colorants; these sub-gamuts are combined to form the entire color gamut. While the division method allows access to the entire color gamut, its boundaries tend to cause pseudo contours due to abrupt changes of colorant amount. To facilitate the use of the division method, we have developed a software tool and verified the algorithm involved using a hypothetical hi-fi printer in computer simulation.

This paper presents a method of efficiently converting from a set of noisy color values to a set of device colorants. Using a deterministic process, 24-bit scanned color values are reduced to dithered 12-bit RGB table indices. After the reduction, a small but complete lookup table with 4096 entries converts the RGB values directly to the output color space. This stochastic interpolation process, while minimizing banding and abrupt color transitions, eliminates the need for trilinear interpolation of the data and significantly reduces the size of the lookup table.
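A rough sketch of the reduction and lookup steps, assuming an ordered-dither matrix as the deterministic perturbation (the paper's exact dither process and table contents are not specified here):

```python
import numpy as np

rng = np.random.default_rng(3)

# A small Bayer matrix supplies the deterministic dither added before truncation
bayer = np.array([[0, 2], [3, 1]]) / 4.0

def to_index(rgb8, x, y):
    """Reduce 24-bit RGB to a dithered 12-bit table index (4 bits per channel)."""
    d = bayer[y % 2, x % 2]
    q = np.minimum(((rgb8 / 255.0) * 15 + d).astype(int), 15)  # 0..15 per channel
    return (q[0] << 8) | (q[1] << 4) | q[2]

# A 4096-entry lookup table then maps each index straight to output colorants,
# replacing trilinear interpolation over a sparse grid.
lut = rng.random((4096, 3))
idx = to_index(np.array([200, 100, 50]), x=0, y=0)
print(0 <= idx < 4096, lut[idx].shape)
```

Because the dither varies with pixel position, neighboring pixels of the same input color can land in adjacent table cells, which is what suppresses banding at the reduced precision.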

A CMYK image is often viewed as a large amount of device-dependent data ready to be printed. In several circumstances, CMYK data needs to be compressed, but the conversion to and from device-independent spaces is imprecise at best. In this paper, with the goal of compressing CMYK images, color space transformations were studied. To be of practical importance, we developed a new transformation to a YYCC color space, which is device-independent and image-independent, i.e., a simple linear transformation between device-dependent color spaces. The transformation from CMYK to YYCC was studied extensively in image compression. For that, a distortion measure that accounts for both device dependence and spatial visual sensitivity has been developed. It is shown that the transformation to YYCC consistently outperforms the transformation to other device-dependent 4D color spaces such as YCbCrK, while being competitive with the image-dependent KLT-based approach. Other interesting conclusions were also drawn from the experiments, among them the fact that color transformations are not always advantageous over independent compression of CMYK color planes and the fact that chrominance subsampling is rarely advantageous.

This paper describes a process for separating a map, originally printed using an unknown ink specification, into its component colors before being reprinted using a known ink specification. The methodology is based on two earlier papers, by Kanamori and Kotera (1991) and Harrington (1992), in which the use of logical operators in color separation was explored. A detailed analysis of the scanned map identified primary, secondary, and transition colors. Filter images containing pixels taken from across the scanned image were developed to describe the variation of color found within each of these color groups. The maximum and minimum values of hue, lightness, and chroma were then used to derive logical operators and true/false statements which, when applied to L*a*b* pixel arrays, separate the scanned map into its primary color components. This technique was refined to include secondary and transition colors. By combining true/false statements it was possible to separate more specific areas within the scanned map. The method was used to reproduce the map using the known ink specification, with ΔE values between the known and unknown ink specifications ranging from 2.1 (yellow) to 11.9 (black). It was also used to change geographic features represented by each color component through the addition and deletion of color detail.

A new color inverse halftoning method that converts a scanned color image, halftoned by clustered-dot ordered dither, into a more natural continuous-tone image is proposed by analyzing the Fourier spectrum of the color image. A color channel separated from the color image exhibits three kinds of peaks in its Fourier spectrum: a channel peak, an interference peak, and a moire peak. The channel peak is formed by the repeated pattern of the channel's halftone cells, whereas the interference peak is made by the other channels' halftone cells, due to imperfect separation of the color channel. The moire peak is the secondary peak formed by the interaction of the channel peak with the interference peak. A new smoothing mask for each color channel was designed to effectively remove not only the channel peak but also the interference peak, thereby producing a smoother continuous-tone image. A block-based moire extraction algorithm has also been developed to remove the moire peak adaptively; it can smooth the moire region in the color channel without blurring the other regions. Experiments show that the proposed method outperforms the published results of other color inverse halftoning methods.

The experiments described in this paper use the Munsell 100 Hue test to measure color film's ability to order chips the same way humans do. The procedure is to photograph the chips in daylight and to scan the dye densities in the processed prints. If the film confuses colors, as colorblind and color-anomalous humans do, then the dye density sequence will not be monotonic. Local reversals in dye density imply spectral responses different from those of humans. A triplet of monotonic dye curves would mimic the color response of people with much better than average color discrimination.

A general framework and first experimental results are presented for the `OPTimal IMage Appearance' (OPTIMA) project, which aims to develop a computational model for achieving optimal color appearance of natural images on adaptive CRT television displays. To achieve this goal we considered the perceptual constraints determining quality of displayed images and how they could be quantified. The practical value of the notion of optimal image appearance was translated from the high level of the perceptual constraints into a method for setting the display's parameters at the physical level. In general, the whole framework of quality determination includes: (1) evaluation of perceived quality; (2) evaluation of the individual perceptual attributes; and (3) correlation between the physical measurements, psychometric parameters and the subjective responses. We performed a series of psychophysical experiments, with observers viewing a series of color images on a high-end consumer television display, to investigate the relationships between Overall Image Quality and four quality-related attributes: Brightness Rendering, Chromatic Rendering, Visibility of Details and Overall Naturalness. The results of the experiments presented in this paper suggest that these attributes are highly inter-correlated.

We have developed a 3D stereo color imaging system based on a color appearance model. The system consists of a graphical user interface, a stereo matching engine, a data compression engine, a 3D data manipulation engine, and an output device for 3D color images. In this paper, we focus in particular on the stereo matching engine, which is based on the color appearance model.

Color difference acceptability judgments in the graphic arts were investigated using 26 observers and 7 color centers with approximately 20 samples in each color center. In comparison with ΔE*ab, the CIE94 color difference ΔE*94 was found to reduce the relatively high acceptability thresholds found in high-chroma yellows. It only partially corrected the low significance that ΔE*ab gives to greys.

The lack of blue hue constancy has been identified as a significant shortcoming of the CIELAB color space. The CIECAM97s, CIELUV, IPT and MLab color spaces have been proposed as alternatives to CIELAB. Color gradients with fixed anchor values were used as stimuli in a visual experiment. This allowed an efficient comparison of the hue constancy of different color spaces. The results show that IPT, MLab and CIECAM97s are better than CIELUV and CIELAB in the blue regions of color space. There are minimal differences between the color spaces for other hue angles. However, there are still differences in lightness and chroma for the different color spaces that should be further investigated. Finally, the blue constancy of CIELAB is considerably improved by converting to Hunt-Pointer-Estevez cone fundamentals before applying the standard CIELAB equations.

In the process of gamut mapping from a monitor display to printer hardcopy in CIE L*a*b* color space, blue tends to map to purple. This paper presents a new approach to solve the perceived blue hue shift problem. In this approach, the entire color gamut is divided into four regions: a non-blue region, a blue region, and two in-between regions. The segmentation of the four regions is based on the hue angle in CIE L*a*b* color space. Different color spaces are applied to different regions for gamut mapping. In the non-blue region, CIE L*a*b* color space is applied for gamut mapping. In the blue region, CIE L*u*v* color space is applied to eliminate the perceived blue shift. In the two in-between regions, both color spaces are used for gamut mapping, and a weighting function is applied for a smooth transition.

In laser thermal transfer printing using a dye-sublimation-type medium, a high-definition, continuous-tone image can be obtained easily because the laser beam is focused to a small spot and the heat energy can be controlled by pulse-width modulation of the laser light. The donor ink sheet is composed of a laser-absorbing layer and a sublimation dye layer. The tone reproduction depended on the mixture ratio of dye to binder and on the thickness of the ink layer. Four color ink sheets, cyan, magenta, yellow, and black, were prepared for printing color images with high resolution and good continuous-tone reproduction using sublimation dye transfer printing by laser heating.

A new gray component replacement approach for the four-color printing process is developed to directly convert CIE XYZ values into CMYK values. We start by building a colorimetric density lookup table (LUT) for the black channel from 0 to 255 (for 8 bits per channel). A color in CIE XYZ color space is converted into colorimetric density; this density is then compared with the colorimetric densities in the black density LUT to find the maximum black. The actual black is determined based on the maximum black that has been found. The remainder of the total colorimetric density, after subtracting the colorimetric density of the actual black, is converted into a CIE XYZ value, and finally the CIE XYZ value is converted into CMY by a predictive printer color mixing model. A close-up correction algorithm is implemented to reduce color errors coming from both the CIE XYZ to CMYK inversion and the assumption that colorimetric density is additive.

This paper describes a method for constructing a lookup table relating a 3D CMY coordinate system to CMYK colorant amounts in a way that maximizes the utilization of the printer gamut volume. The method is based on an assumption, satisfied by most printers, that adding a black colorant to any combination of CMY colorants does not result in a color with more chroma. Therefore the CMYK gamut can be obtained from the CMY gamut by expanding it towards lower lightness values. Use of black colorant on the gray axis is enforced by modifying the initial distribution of CMY points through an approximate black generation transform. Lightness values of a resulting set of points in CIELAB space are scaled to fill the four-color gamut volume. The output CMYK values corresponding to the modified CIELAB colors are found by inverting a printer model. This last step determines a specific black use rate which can depend on the region of the color space.

As the spectral sensitivities of most color devices typically differ from those of human vision or of corresponding output devices, signals from the different channels (such as red, green, and blue) of a color recording device need to be properly mixed to generate color information suitable for viewing. The mixing (or transformation) that minimizes some error measure between the target and the transformed colors of a large set of color patches is normally used for this purpose. When color error is the only criterion in determining such a transformation, the measurement noise of the color device may be amplified in the target color space without much control. We present in this paper a new color correction method that takes into account both the color error and the noise variance in reproduced images. This method is useful in applications where the measurement noise of recording devices is not necessarily low. The proposed method is then extended to include other color reproduction constraints. Analytical solutions and experimental results of the proposed method are both reported in the paper.
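One common way to trade color error against noise amplification is ridge-style regularization, since the output noise variance grows with the squared matrix entries; the sketch below illustrates that idea under assumed noise statistics, and is not the paper's analytical solution:

```python
import numpy as np

rng = np.random.default_rng(4)

# Training patches: device signals C (n x 3) and target colors (n x 3)
C = rng.random((40, 3))
M_true = rng.random((3, 3))
target = C @ M_true

sigma2 = 0.01   # assumed per-channel noise variance of the device signals
lam = 1.0       # weight trading color error against noise amplification
n = C.shape[0]

# Output noise variance scales with the matrix entries (~ sigma2 * ||M||_F^2),
# so penalizing ||M||_F^2 alongside the color error yields ridge regression:
M = np.linalg.solve(C.T @ C + lam * sigma2 * n * np.eye(3), C.T @ target)

# Compared with the plain least-squares fit, the regularized matrix is shrunken,
# i.e. it amplifies device noise less at the cost of a small color-error increase.
plain, *_ = np.linalg.lstsq(C, target, rcond=None)
print(np.linalg.norm(M) < np.linalg.norm(plain))
```

Tuning `lam` moves along the trade-off curve between colorimetric accuracy and noise in the reproduced image.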

High-quality imaging technologies have been developed for color inkjet printers with a piezoelectric print head. In these technologies, we have used the MLChips head to eject multi-sized micro droplets in the most suitable placements and have added a lower-density ink in the cyan and magenta color components for the reproduction of highlight areas. In addition to this hardware technology, we have developed halftone algorithms to obtain smoother tone reproduction. These technologies greatly decrease noise in the output and achieve very high image quality.

This paper describes a novel method, based on constrained optimization, for finding colorant amounts for which a printer will produce a requested color appearance. An error function defines the gamut mapping method and the black replacement method. The constraints limit the feasible solution region to the device gamut and prevent exceeding the maximum total area coverage. Colorant values corresponding to in-gamut colors are found with a precision limited only by the accuracy of the device model. Out-of-gamut colors are mapped to colors within the boundary of the device gamut. This general approach, used in conjunction with different types of color difference equations, can perform a wide range of out-of-gamut mappings, such as chroma clipping, and can find colors on the gamut boundary having specified properties. We present an application of this method to the creation of PostScript color rendering dictionaries and ICC profiles.

An approach to optimizing dot area coverage is presented. It improves the color prediction accuracy of printer color modeling using the Neugebauer narrow-band color mixing model. The Neugebauer colorimetric quality factor (CQF) is applied to integrate dot areas calculated from the Neugebauer narrow-band color mixing model. However, simply using the CQF to optimize dot area often results in very noisy dot areas. The noise comes from spectral bands where the measured spectral reflectance of the color patch is higher than the measured spectral reflectance of the paper white. To eliminate this kind of noise, the CQF weighting filter approach is modified. The simplest approach is to use Neugebauer CQF weighting but to discard dot areas in the bands where the spectral reflectances of color patches are very close to the spectral reflectance of the paper white. Another approach is to use the spectral reflectance difference between the paper white and 100% ink coverage as the weighting of that ink channel. The third approach (the most effective one) is to combine (multiply) the weightings generated from the first and second approaches.

A theoretical approach describing the effect of ink penetration on the reflectance of the printed image is presented in this paper. Three different models of the density of penetrating ink, with constant, linear, and exponential distributions, are studied. In addition to the constant model, whose differential equations of light propagation can be solved analytically, series solutions corresponding to the linear and exponential models have been worked out. Generally good convergence of the series expansions has been found in simulation. It has been found that, for a given amount of printed ink commanded by the printer, the printed image with ink penetration has a higher reflectance than one without ink penetration. Consequently, the range of color reproduction (color gamut) of the printed image is reduced by ink penetration.

Digital halftoning remains an active area of research with a plethora of new and enhanced methods. While several fine overviews exist, the purpose of this paper is to review retrospectively the basic classes of techniques. Halftoning algorithms are presented by the nature of the appearance of the resulting patterns, including white noise, recursive tessellation, the classical screen, and blue noise. The metric of radially averaged power spectra is reviewed, and special attention is paid to frequency-domain characteristics. The paper concludes with a look at the components that comprise a complete image rendering system, in particular when the number of output levels is not restricted to be a power of 2. A very efficient means of multilevel dithering is presented based on scaling ordered-dither arrays. The case of real-time video rendering is considered, where the YUV-to-RGB conversion is incorporated into the dithering system. Example illustrations are included for each of the techniques described.

Monochrome dither halftoning is a gray-scale image rendering procedure in which the gray levels of an image are compared against a periodic threshold array, placing a black dot in every location whose gray level is smaller than the corresponding threshold value. Current generalizations to color printing result in Cartesian thresholding procedures, namely RGB values compared component-wise to trivariate thresholds. In this report we develop a color dithering framework based on a non-Cartesian coordinate system. To this end we generalize multidimensional dithering in a non-separable manner and define it only as a space-varying point operator. Three dithering methods are proposed within this framework. The main advantage of simplex dither over traditional Cartesian dither is that it allows the rendition of solid color patches while using no more than a preselected quadruple of colors, thereby enabling a reduction in halftone noise.

This paper presents a system for reducing moire artifacts in halftoned images. Moire artifacts arise when the source image contains periodic texture with a frequency close to that of the halftone screen. In addition, moire-like artifacts occur with aperiodic texture in the frequency range of the halftone screen, as well as fine lines. The system is effective in suppressing each of these classes of artifact. The system first generates a trial halftone using a standard halftone screen. It then analyzes the moire present in this trial halftone, generating a correction signal, which is then added to the source image to form a compensated image. This compensated image is then halftoned using a second halftone screen to produce the final resulting halftone. Several refinements are discussed which reduce potential artifacts caused by the system.

This paper describes a method that uses morphological filters and operators to find the regions around the connection points. The thresholds in those regions are rearranged so that dot connections become evenly distributed over the tile for all levels and are spread out over several levels, which eliminates the abrupt dot gain jump.

This paper proposes a uniform color sample selection and color halftoning method, based on color correction using a neural network with a set of uniform color samples and selective vector error diffusion, for enhancing color reproduction on a printer. In order to generate uniform color samples in CIELAB color space, sets of uniformly populated color samples in a CIELAB printer gamut and monitor gamut are calculated by the LBG (Linde, Buzo, Gray) quantization algorithm. Then, the corresponding device-dependent CMY and RGB values are estimated by a trained neural network, which was initially trained with a set of uniform samples in the device-dependent spaces.
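The LBG step amounts to generalized Lloyd (k-means-style) quantization: codevectors are refined by assigning samples to the nearest codevector and moving each codevector to its cluster centroid. A rough sketch (2-D points for brevity, whereas the paper works in 3-D CIELAB; all parameter values here are assumed):

```python
import random

random.seed(0)

def lbg(samples, n_codes, iters=20):
    """LBG-style quantization: iterative nearest-neighbor / centroid refinement."""
    codes = random.sample(samples, n_codes)          # initial codebook
    for _ in range(iters):
        clusters = [[] for _ in codes]
        for s in samples:                            # nearest-neighbor assignment
            k = min(range(len(codes)),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(s, codes[c])))
            clusters[k].append(s)
        codes = [tuple(sum(x) / len(cl) for x in zip(*cl)) if cl else codes[k]
                 for k, cl in enumerate(clusters)]   # centroid update
    return codes

samples = [(random.random(), random.random()) for _ in range(400)]
codes = lbg(samples, 8)
print(len(codes))
```

The resulting codevectors populate the sample space roughly uniformly, which is what makes them useful both as the "uniform color samples" and as training data for the neural network.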

Space-filling curves are regularly proposed as methods of traversing an image plane. Such methods, though more complicated than point movement by rows, have the advantage of moving the point within an area before moving to another area. Point movement is inherently within tiles and within areas of similar frequencies. Isis Imaging's Standard threshold-modulation screen, first commercially released in March 1994, is a device-independent screening method available in software residing on the host computer. It is widely used for print and photography applications. The algorithm is described, its properties examined and its output evaluated.
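The locality property claimed for space-filling curves can be demonstrated with a standard Hilbert-curve index-to-coordinate mapping (a generic illustration; this is the well-known bit-manipulation algorithm, not Isis Imaging's proprietary screen): consecutive indices always map to neighboring pixels, so the traversal finishes one area before moving to the next.

```python
def hilbert_d2xy(order, d):
    """Map index d along a Hilbert curve to (x, y) on a 2**order grid."""
    x = y = 0
    t = d
    s = 1
    while s < 2 ** order:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                       # rotate the quadrant if needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# Visit every pixel of a 4x4 tile in Hilbert order.
path = [hilbert_d2xy(2, d) for d in range(16)]
print(path)
```

Unlike row-by-row traversal, every step in `path` moves to a 4-connected neighbor, which is exactly the "point movement within tiles and within areas" advantage the abstract describes.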

The vector error diffusion method was applied to spectral color reproduction of hardcopy. Reflection spectra of the object and of the primary colors used in the printer were estimated by a low-dimensional linear model. The accuracy of spectral color reproduction was evaluated and analyzed by computer simulation. It was shown that spectral color reproduction can be improved by the proposed method.
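Vector error diffusion itself can be sketched as follows (an illustrative RGB version, not the paper's spectral one; the primary set and weights are the usual Floyd-Steinberg assumptions): each pixel vector is quantized to the nearest available primary, and the vector-valued error is diffused to unprocessed neighbors. The paper applies the same idea to reflectance spectra represented in a low-dimensional linear model.

```python
# Assumed printer colors: the 8 corners of the RGB cube.
PRIMARIES = [(0,0,0), (1,1,1), (1,0,0), (0,1,0), (0,0,1),
             (1,1,0), (1,0,1), (0,1,1)]

def vector_error_diffusion(img):
    """img: 2-D list of RGB tuples in [0,1]^3; returns a 2-D list of primaries."""
    h, w = len(img), len(img[0])
    buf = [[list(p) for p in row] for row in img]
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            v = buf[y][x]
            q = min(PRIMARIES,                       # nearest primary (vector quantizer)
                    key=lambda p: sum((a - b) ** 2 for a, b in zip(v, p)))
            out[y][x] = q
            err = [a - b for a, b in zip(v, q)]      # vector-valued error
            for dx, dy, wgt in ((1, 0, 7/16), (-1, 1, 3/16),
                                (0, 1, 5/16), (1, 1, 1/16)):
                if 0 <= x + dx < w and 0 <= y + dy < h:
                    buf[y + dy][x + dx] = [c + e * wgt
                                           for c, e in zip(buf[y + dy][x + dx], err)]
    return out

halftone = vector_error_diffusion([[(0.6, 0.3, 0.1)] * 8 for _ in range(8)])
```

Because the error is carried as a vector, the local average of the chosen primaries tracks the input color in all channels simultaneously, which is what allows the spectral generalization.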

The aim of this paper is to present a mathematical discussion of some aspects of digital printing, and other related problems such as analog/digital data conversion. In particular, we present results on the boundedness of the errors generated by error diffusion (and related) algorithms, and discuss the relationship between some of the mathematical questions arising in digital printing and some classical mathematical problems such as symbolic dynamics, the chairman assignment problem, and coding theory.

Multilevel halftoning (multitoning) is an extension of bitonal halftoning, in which the appearance of intermediate tones is created by the spatial modulation of more than two printing states, i.e., black, white, and one or more shades of gray. Conventional multitone schemes using error diffusion halftoning algorithms normally generate contouring artifacts in regions where the code values of the original image are close to those of the multilevel printing states (or output gray levels). A distinct texture transition also typically occurs near these gray levels. In a previous study, an over-modulation algorithm was proposed to improve the texture transition in those regions for the case of multilevel dithering. In this study, a new algorithm is proposed that combines error diffusion, blue-noise dithering, and over-modulation so that high-quality multilevel printing with smooth texture transitions can be achieved.

For displays with limited color capability, color halftoning is often used to induce the illusion of more color levels than the device can truly render. In such cases, the pattern created by the halftone technique can be visible due to the limited geometrical resolution of the display device. Unlike in printing, the halftone pixels of display devices remain visible and the patterns created on them can be easily detected. When animated images are played in a mode that uses a limited number of colors and halftoning techniques, the patterns are very visible because they are perceived as a reference frame fixed to the playing window while the image content is moving. In this paper we propose a halftone technique that varies the halftone pattern in time such that its visibility is reduced. This leads to a higher quality of the displayed image, especially noticeable for dynamic images. The technique enables the use of display devices (or display modes) with limited color capability for higher quality images.
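One simple way to vary the pattern over time, in the spirit of the paper (the authors' actual technique may differ; the frame-dependent offsets here are assumed), is to shift the ordered-dither matrix each frame so the pattern no longer sits fixed relative to the window:

```python
BAYER4 = [[ 0,  8,  2, 10],
          [12,  4, 14,  6],
          [ 3, 11,  1,  9],
          [15,  7, 13,  5]]

def dither_frame(gray, w, h, frame):
    """Ordered dither of a flat gray, with a per-frame offset of the matrix."""
    ox, oy = frame % 4, (3 * frame) % 4              # assumed per-frame offsets
    return [[1 if gray > (BAYER4[(y + oy) % 4][(x + ox) % 4] + 0.5) / 16 else 0
             for x in range(w)] for y in range(h)]

f0 = dither_frame(0.3, 4, 4, 0)
f1 = dither_frame(0.3, 4, 4, 1)
print(f0 != f1)                                  # the dot pattern moves between frames
print(sum(map(sum, f0)) == sum(map(sum, f1)))    # while the dot density is unchanged
```

Shifting the matrix keeps the rendered tone exact (same number of dots per tile) while the eye averages the moving pattern over frames, reducing its visibility.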

Introduction of 2 X 2 centering can significantly reduce the number of dot patterns required in color calibration. As a result, local calibration can be performed using direct measurement and fast table lookup. In this paper, 2 X 2 calibration is applied to error diffusion. A novel concept for error diffusion is introduced: it matches the color increment instead of the color. This concept makes the integration of error diffusion and 2 X 2 calibration natural and free of causality issues.

The perceived quality of a halftoned image strongly depends on the spatial distribution of the binary dots. Various error diffusion algorithms have been proposed for realizing a homogeneous dot distribution in the highlight and shadow regions. However, they are computationally expensive and/or require large memory space. This paper presents a new threshold-modulated error diffusion algorithm for homogeneous dot distribution. The proposed method is applied exactly as the Floyd-Steinberg algorithm except for the thresholding process. The threshold value is modulated based on the difference between the distance to the nearest minor pixel, the `minor pixel distance', and the principal distance. To do so, the minor pixel distance must be calculated for every pixel, which is quite time consuming and requires large memory resources. To alleviate this problem, a `minor pixel offset array' that transforms the 2D history of minor pixels into 1D codes is proposed. The proposed algorithm drastically reduces the computational load and memory space needed for calculation of the minor pixel distance.

An error diffusion system can be unstable due to inadequate design. It can generate ever-increasing quantization error that masks the input and produces unacceptable output images. For black-and-white images, the instability is mainly caused by two factors: an error diffusion weight set that over-compensates, and an input signal that exceeds the output gamut. We have discovered that quantization can be another source of instability for color error diffusion when vector quantization is applied. The requirements for a quantization scheme that leads to a stable system are analyzed.

In the present paper, we focus our attention on different artistic halftoning techniques developed by the author at EPFL. First, we explore Artistic Screening, a library-based approach presented at SIGGRAPH 95. Contour-based generation of halftone screens effectively provides a new layer of information. We show how this layer of information can be used to convey artistic and cultural elements related to the content of the reproduced images. Artistic Screening is basically a black-and-white technique. Multicolor and Artistic Dithering, presented at SIGGRAPH 99, extends it to multiple colors. This technique makes it possible to print with non-standard colors such as opaque or semi-opaque inks, using traditional or artistic screens of arbitrary complexity.

In this paper, we develop a model-based color halftoning method using the direct binary search (DBS) algorithm. Our method strives to minimize the perceived error between the continuous-tone original color image and the color halftone image. We exploit the differences in how human viewers respond to luminance and chrominance information, and use the total squared error in a luminance/chrominance based space as our metric. Starting with an initial halftone, we minimize this error metric using the DBS algorithm. Our method also incorporates a measurement-based color printer dot interaction model to prevent artifacts due to dot overlap and to improve color texture quality. We calibrate our halftoning algorithm to ensure accurate colorant distributions in the resulting halftones. We present color halftones that demonstrate the efficacy of our method.
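The DBS search loop itself can be sketched for a grayscale image (a toy version: the paper's method is color, uses a luminance/chrominance HVS model and a dot interaction model; the small low-pass kernel below stands in for the eye's filter): pixels are toggled one at a time, and a toggle is kept only if it lowers the perceived, i.e. low-pass filtered, squared error.

```python
import random

random.seed(1)
H, W = 8, 8
KERNEL = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]       # crude low-pass "eye" filter

def perceived_error(orig, half):
    """Squared error between filtered halftone and filtered original."""
    err = 0.0
    for y in range(H):
        for x in range(W):
            acc = 0.0
            for ky in (-1, 0, 1):
                for kx in (-1, 0, 1):
                    yy, xx = y + ky, x + kx
                    if 0 <= yy < H and 0 <= xx < W:
                        acc += KERNEL[ky + 1][kx + 1] * (half[yy][xx] - orig[yy][xx])
            err += (acc / 16) ** 2
    return err

orig = [[0.4] * W for _ in range(H)]
half = [[random.randint(0, 1) for _ in range(W)] for _ in range(H)]  # initial halftone
e0 = perceived_error(orig, half)
improved = True
while improved:                                   # iterate until no toggle helps
    improved = False
    for y in range(H):
        for x in range(W):
            base = perceived_error(orig, half)
            half[y][x] = 1 - half[y][x]           # trial toggle
            if perceived_error(orig, half) < base:
                improved = True                   # keep the toggle
            else:
                half[y][x] = 1 - half[y][x]       # undo
print(round(e0, 3), "->", round(perceived_error(orig, half), 3))
```

Since the error strictly decreases with every accepted toggle and the set of binary images is finite, the search always terminates at a local minimum of the perceived-error metric.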

A method has been developed to induce symmetric edge enhancement in error diffusion with threshold modulation. The existing threshold modulation algorithm induces edge enhancement by subtracting a term proportional to the input image from the threshold. The edge enhancement that results is asymmetric due to the asymmetric shape of the error diffusion filter. The new method induces a symmetric edge enhancement by subtracting from the threshold a term that is proportional to a filtered version of the input image. The filtering is accomplished with a recursive filter, applied during the error diffusion algorithm, which cancels out the error diffusion filter and imposes a symmetric sharpening filter. The result is a symmetric edge enhancement that is more pleasing to the eye.
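The existing scheme that the paper improves on can be sketched in one dimension (an illustration only; the modulation gain and the full-error-to-next-pixel diffusion are simplifying assumptions, and the paper's contribution of filtering the modulation term for symmetry is not reproduced here): subtracting a term proportional to the input from the threshold makes dots cluster at edges, sharpening them.

```python
def edge_enhanced_ed(signal, k=0.5):
    """1-D error diffusion with input-dependent threshold modulation."""
    out, err = [], 0.0
    for x in signal:
        v = x + err
        t = 0.5 - k * (x - 0.5)        # threshold lowered where the input is high
        o = 1 if v > t else 0
        out.append(o)
        err = v - o                    # diffuse the full error to the next pixel
    return out

edge = [0.2] * 10 + [0.8] * 10         # a step edge
print(edge_enhanced_ed(edge))
```

Near the step, the modulated threshold fires a dot earlier on the bright side and suppresses one on the dark side, which is the edge-enhancement effect; in 2-D, the one-sided error filter makes this enhancement asymmetric, motivating the paper's symmetric variant.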

This paper is restricted to the study of the effect of vignetting in dual-beam scanning with polygonal scanners. The scan field of dual-beam scanning with a polygon scanner is not symmetric about any point or axis in the scanning system when the incident ray has an incident angle (theta) z. Therefore, in general, a symmetrical scan field distribution cannot be expected. In this paper, some fundamental aspects of the structure of the scan field are considered, such as the effective scanning height of the polygon, the effect of vignetting, and the scan duty cycle in the y' direction. The vignetting effect in the z' direction is also analyzed, and the vignetting-free condition is given. From the discussion, we conclude that to reduce the processing area of the polygon facet, the condition (theta) y < arcsin(RF/(chi) 0) is needed.