Both the active column sensor (ACS) pixel sensing technology and the PVS-Bus multiplexer technology have been applied to a color imaging array to produce an extraordinarily high-resolution color imager of more than 8 million pixels, with image quality and speed suitable for a broad range of applications including digital cinema, broadcast video and security/surveillance. The imager has been realized in a standard 0.5 μm CMOS technology using double-poly, triple-metal (DP3M) construction and features a pixel size of 7.5 μm by 7.5 μm. Mask-level stitching enables the construction of a high-quality, low-dark-current imager with an array size of 16.2 mm by 28.8 mm. The image array aspect ratio is 16:9 with a diagonal of 33 mm, making it suitable for HDTV applications using optics designed for 35 mm still photography. A high modulation transfer function (MTF) is maintained by using microlenses along with an RGB Bayer-pattern color filter array. The frame rate of 30 frames/s in progressive mode is achieved using the PVS-Bus technology with eight output ports, corresponding to an overall pixel rate of 248 Mpixels per second. High dynamic range and low fixed-pattern noise are achieved by combining photodiode pixels with the ACS pixel sensing technology and a modified correlated double sampling (CDS) technique. Exposure time can be programmed by the user from a full frame of integration down to a single line of integration in steps of 14.8 μs. The output gain is programmable from 0 dB to +12 dB in 256 steps; the output offset is also programmable over a range of 765 mV in 256 steps. This QuadHDTV imager has been delivered to customers and demonstrated in a prototype camera that provides full-resolution video with all image processing on board. The prototype camera operates at 2160p24, 2160p30 and 2160i60.

We present a CMOS image sensor for determining the speed of fast-moving luminous objects. The circuit furnishes a 16-gray-level image that contains both spatial and temporal information about the fast-moving object under observation. The spatial information is given by the coordinates of the illuminated pixels, while the temporal information is coded in the gray levels of those pixels. By applying simple image processing algorithms to the image, the trajectory, direction of motion and speed of the moving object can be determined. The circuit is designed and fabricated in a standard 0.6 μm CMOS process from Austria MicroSystems (AMS). The core of the circuit is an array of 64 × 64 pixels based on an original Digital Pixel Sensor (DPS) architecture. Each pixel is composed of a photodiode as the light-sensing element, a comparator, a pulse generator and a 4-bit static memory for storing the gray value of the pixel. The working principle of the circuit, its design and some quantitative experimental results are presented in the paper.
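The decoding step summarized above — recovering speed from the gray-level time code — can be sketched as follows. The specifics (a spot moving 3 pixels per time slot, a least-squares fit of position versus time) are illustrative assumptions, not details from the paper:

```python
import numpy as np

# synthetic 64x64 time-coded image: a spot moves 3 pixels per time slot,
# and each lit pixel stores its 4-bit time slot as the gray level
img = np.zeros((64, 64), dtype=np.uint8)
for t in range(1, 17):
    img[32, 3 * t] = t

# decode: the centroid of the pixels at each gray level gives the object
# position at that time slot; the slope of position vs. time is the speed
ts, xs = [], []
for t in range(1, 17):
    rows, cols = np.nonzero(img == t)
    if cols.size:
        ts.append(t)
        xs.append(cols.mean())
speed = np.polyfit(ts, xs, 1)[0]   # pixels per time slot
print(speed)  # approximately 3.0
```

The same fit applied to the row coordinates would give the vertical velocity component, and hence the direction of motion.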

We present a 1.3-megapixel CMOS active pixel sensor dedicated to industrial vision. It features both rolling and synchronous shutter modes. Full-frame readout time is 33 ms, and readout speed can be boosted by windowed region-of-interest (ROI) readout. High-dynamic-range scenes can be captured using the double- and multiple-slope functionality. All operation modes and settings can be programmed over a serial or a parallel interface.

A frequency-demodulation CMOS image sensor that captures images formed only by modulated light is proposed and demonstrated. The pixel circuit has two FDs (floating diffusions) for accumulating signal charges and one photogate for detecting the modulated light and the background light. By operating the image sensor synchronously with the frequency and phase of the modulated light, signal charges generated by both the modulated light and the background light are accumulated at one FD, while signal charges generated only by the background light are accumulated at the other FD. By subtracting the outputs of the two FDs with off-chip subtraction circuits, images produced only by the modulated light can be obtained.
Based on the proposed circuit, an image sensor with 64 × 64 pixels was fabricated in a 0.6 μm CMOS technology. We captured images with this sensor and demonstrated that it can capture images formed only by the modulated light. When an object is partially illuminated by modulated illumination under constant background illumination, the image sensor successfully captures the portion illuminated by the modulated light while removing the static background light. We also demonstrate marker detection: when a marker attached to an object is imaged under several background illuminations, the image sensor can extract the marker without being affected by the background illumination intensity. Motion capture is also successfully demonstrated using this sensor.
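The off-chip subtraction described above can be sketched numerically; the array size, marker position and intensity values below are illustrative placeholders:

```python
import numpy as np

rng = np.random.default_rng(3)
background = rng.uniform(50, 150, (64, 64))   # static background light
marker = np.zeros((64, 64))
marker[20:30, 20:30] = 80.0                   # modulated light source

fd1 = marker + background   # FD accumulating during the "on" half-cycles
fd2 = background            # FD accumulating during the "off" half-cycles

demod = fd1 - fd2           # off-chip subtraction leaves the modulated part
print(demod[25, 25], demod[0, 0])
```

Because the background contributes equally to both FDs, it cancels in the difference regardless of its spatial pattern or intensity.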

We have developed a CMOS vision chip, an image sensor with pixel-level signal processing, to replace photoreceptor cells in the retina. In this paper, we describe the pixel-level signal processing, which controls the stimulus waveform and the amount of electrical charge injected.
Our CMOS vision chip is an array of pixels, each consisting of a photodetector, a pulse shaper and a current stimulus circuit. The photodetector circuit generates a pulse-frequency-modulated (PFM) pulse train whose frequency is proportional to the intensity of the incoming light. The PFM photodetector is also modified to restrict the maximum frequency of the PFM pulse signal for safe neural stimulation.
The PFM pulse signal must be converted into a waveform suitable for efficient neural stimulation. We employ a pulse shaper to generate one stimulus pulse from each PFM pulse. The pulse parameters (i.e., pulse duration, polarity, etc.) of the output pulse signal are controlled by an external signal.
For electrical neural stimulation, the stimulus intensity is determined by the amount of charge injected. The injected charge must be sufficient to evoke a phosphene but low enough to avoid damaging the retinal tissue through excess charge injection. In our prototype CMOS vision chip, the stimulus current amplitude is used to control the amount of charge. A 6-bit binary-weighted digital-to-analog converter (DAC) with 2 μA resolution is used to control the stimulus current amplitude.

This paper describes an optical sensor interface designed for a programmable mixed-signal vision chip. This chip has been designed and manufactured in a standard 0.35 μm n-well CMOS technology with one poly layer and five metal layers. It contains a digital shell for control and data interchange, and a central array of 128 × 128 identical cells, each cell corresponding to a pixel. The die size is 11.885 × 12.230 mm² and the cell size is 75.7 μm × 73.3 μm. Each cell contains 198 transistors dedicated to functions such as processing, storage, and sensing. The system is oriented to real-time, single-chip image acquisition and processing. Since each pixel performs the basic functions of sensing, processing and storage, data transfers are fully parallel (image-wide). The programmability of the processing functions enables the realization of complex image processing functions based on the sequential application of simpler operations. This paper provides a general overview of the system architecture and functionality, with special emphasis on the optical interface.

One important feature of CMOS image sensors with respect to high sensitivity is that the random readout noise can be lower than that of a CCD if the narrow noise bandwidth of CMOS active pixel sensors is used effectively. This is especially important for megapixel video-rate image sensors. To meet this requirement, a high-gain amplifier at each column of the CMOS imager is effective, because the noise due to the wideband amplifier at the output of the image sensor can be relatively reduced. However, it has not been clarified how much the column amplifier can contribute to this noise reduction.
In this paper, we present a noise calculation model of a switched-capacitor column amplifier. The total noise consists of a component due to the noise charge sampled and held at the charge summation node of the amplifier and transferred to the output, and a component that directly perturbs the S/H stage at the output of the column amplifier. The analytically calculated noise agrees well with simulation results obtained using a circuit simulator.

In this paper, we demonstrate that an electroluminescence phenomenon associated with hot-carrier generation in the in-pixel source-follower transistor can occur in CMOS APS pixels. These effects have been observed in several process generations ranging from 0.7 μm to 0.25 μm with various power supply voltages. This paper focuses mainly on the behavior of the 0.5 μm and 0.25 μm generations. It is shown that when a pixel is selected, its source-follower transistor can generate excess minority carriers, and that a small fraction of these charges flows towards the photosensitive area and is collected there. This implies a significant drop of the photodiode voltage when the amount of collected carriers becomes larger than the junction leakage current.

This paper presents a method of low-light imaging using an extremely small charge-detection capacitor in a CMOS image sensor together with a high-precision, low-noise analog-to-digital converter. One condition for photon counting is that the charge-to-voltage conversion gain is much higher than the root-mean-square (rms) random noise of the readout circuits. The other condition is that the quantization step of the A/D converter is chosen to equal the conversion gain, or the amplified conversion gain if the pixel output is further amplified. Simulation results show that if the rms random noise is reduced to one-sixth of the conversion gain, tenfold digital integration without noise increase is possible. This means that even if a very small charge-detection capacitor and a relatively small power supply voltage are used, a sufficient dynamic range can be achieved by digital integration without a noise increase.
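The photon-counting condition can be checked with a small simulation. The conversion gain value, electron counts and frame sizes below are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
gain = 100e-6            # illustrative conversion gain: 100 uV per electron
read_noise = gain / 6    # rms readout noise at one-sixth of the conversion gain
n_frames, n_pixels = 10, 10000
electrons = rng.integers(0, 5, size=(n_frames, n_pixels))   # true photo-electrons
volts = electrons * gain + rng.normal(0, read_noise, size=(n_frames, n_pixels))
counts = np.round(volts / gain)      # A/D step chosen equal to the conversion gain
digital_sum = counts.sum(axis=0)     # 10-frame digital integration
err = np.abs(digital_sum - electrons.sum(axis=0))
print(err.mean())   # counting errors stay rare when rms noise <= gain/6
```

With the noise at one-sixth of a quantization step, a rounding error requires a noise excursion beyond three sigma, so per-frame miscounts are rare and the digital sum accumulates almost no extra noise.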

In this article, we propose a method to extend the dynamic range of a CMOS image sensor towards both high and low illumination. A one-frame period is divided into one long accumulation period for low illumination and several short accumulation periods for high illumination. The sensor data accumulated over the short periods are read out several times, and all of the data are added together in an integrated processing unit. The SNR dip at the boundary between the low- and high-illumination regions is reduced by this integration of the short-accumulation signals. Because the proposed method is independent of the pixel structure, low-noise characteristics can be achieved, allowing the use of, for example, a 4-transistor APS with a pinned photodiode. If the ratio of the long accumulation time to the short accumulation time is 128 and the number of additions is 8, the dynamic range is extended by 41.6 dB and the SNR dip is -11 dB.
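Since the abstract only outlines the reconstruction, the following is a hypothetical combination scheme under assumed parameters (exposure ratio 128, 8 short readouts, an illustrative full-well capacity), not the authors' exact algorithm:

```python
import numpy as np

ratio, n_short = 128, 8      # long/short exposure ratio, number of short readouts
full_well = 10000.0          # illustrative saturation level in electrons

def combine(flux):
    """flux: photo-electrons collected in one short accumulation period."""
    long_sig = np.minimum(flux * ratio, full_well)       # long accumulation
    short_sum = n_short * np.minimum(flux, full_well)    # 8 short readouts, added
    # below saturation use the long signal; above it, rescale the short sum
    return np.where(long_sig < full_well, long_sig, short_sum * ratio / n_short)

print(combine(np.array([1.0, 50.0, 500.0])))
```

Summing the 8 short readouts before rescaling is what reduces the SNR dip at the switch-over point: averaging N readouts lowers the read-noise contribution of the short-exposure branch by a factor of sqrt(N).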

We propose and demonstrate a new image sensor specialized for optical wireless LAN (local area network) systems. The use of an image sensor brings excellent features to optical wireless LAN systems: because the image sensor can capture the scene around the communication node or the hub at once, like an ordinary image sensor, it is easy to implement detection and tracking of the nodes or the hub without any mechanical component to search for them. In addition, image sensors inherently capture multiple optical signals in parallel with a huge number of micro photodiodes, which gives them the potential for concurrent data acquisition from multiple nodes at the hub. In this paper, we describe a pixel structure that implements two functional modes: a communication (COM) mode to receive temporally modulated optical signals and an image sensor (IS) mode to capture an image, in which the photodiode operates in integration mode. The pixel is a fusion of an active pixel sensor and a current amplifier without temporal integration. We fabricated an 8 × 8-pixel image sensor in a standard 0.8 μm BiCMOS technology, and successfully demonstrated position detection of a light source and acquisition of optical serial data with an incident wavelength of 830 nm and a modulation frequency of 1 MHz. We also show the design of a 50 × 50-pixel CMOS image sensor with 3-stage main amplifiers.

This paper describes the adaptation and calibration of domestic CCD cameras for a novel air-based measurement system that assesses the intensity, alignment and colour of aerodrome ground lighting (AGL) in service. The measurement system comprises calibrated domestic cameras, lenses and filters capable of examining the desired lighting area. The system has been corrected for distortion, bias, partial pixel coverage and colouration effects. The problems of in-built camera signal processing and dynamic range have also been studied and allowed for. The developed image processing techniques allow luminaire location and extraction between successive images. For each extracted luminaire, pixel information is automatically correlated and related to an illuminance value using a priori information. The corresponding intensity and alignment are derived using position and orientation information estimated using a vision model and differential GPS. These techniques have been applied to sequences of images collected at various aerodromes. The results support the belief that the highly developed enabling technologies of GPS and digital imaging can be combined to tackle further photometry problems.

Polarimetric analysis of solar rays reflected from the Earth's surface is expected to play an important role in future Earth environment observation. Research on an imaging spectropolarimeter using a liquid crystal tunable filter (LCTF), which is able to measure the polarimetric properties at selected wavelengths of solar rays reflected from land or water surfaces, has been conducted over the past five years at NAL for such analysis. Efforts are now under way to put this sensor to practical use, for airborne and ultimately space-based Earth environment remote sensing.
This paper first presents the principle and construction of an LCTF spectropolarimeter that senses radiation in the 400-720 nm wavelength band. Next, an outline of an onboard observation system incorporating the LCTF spectropolarimeter and its performance characteristics obtained in laboratory tests are presented. Third, the apparatus and procedures for the field experiment using this observation system are described, and the area for the field experiments is shown. Spectral characteristics of solar rays reflected from the observed spots are then shown as relative radiance in the analysis of the experimental data, and spectral images at various wavelengths and polarization angles are also shown as further analyzed results. The experimental results make clear that solar rays reflected from targets with differing characteristics have different spectropolarimetric properties. Moreover, the result of a preliminary flight experiment conducted to confirm the operational functions of the observation system in a flight environment is shown. Finally, it is concluded that the way has been paved for determining surface conditions from the properties of the images acquired by the LCTF spectropolarimeter.

In Electronic Imaging 2002 we proposed the spectral matching imager, which detects an object having a particular spectral property in a scene in real time using the correlation image sensor. This paper proposes another type of spectral matching imager that employs AM-coded multispectral illumination, instead of the variable-wavelength monochrome illumination used previously, in order to increase the power efficiency of the illumination. The AM-coded multispectral illumination consists of light from LEDs with different illuminant spectra, each amplitude-modulated with an orthogonal carrier signal. The reference signal to the correlation image sensor is created as a weighted sum of the AM signals, with weights given by the values of a reference spectrum sampled at the peak wavelengths of the LED spectra. Owing to the orthogonality of the AM carrier signals, the correlation image sensor outputs the pixelwise spectral correlation between the imaged object and a reference material in every frame. The theory and an implementation of the AM-coded spectral matching imager are described, and preliminary experimental results on a set of oxide-doped colored glass pieces are given.
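The orthogonality argument can be verified numerically. The carrier frequencies, frame length and spectra below are illustrative placeholders for the LED bands:

```python
import numpy as np

n = 1000
t = np.arange(n)
# three orthogonal AM carriers (integer cycle counts over one frame)
carriers = np.array([np.cos(2 * np.pi * f * t / n) for f in (10, 20, 40)])

obj_spectrum = np.array([0.8, 0.3, 0.5])   # object reflectance at the LED peaks
ref_spectrum = np.array([0.8, 0.3, 0.5])   # reference spectrum (same material)

pixel_signal = obj_spectrum @ carriers     # light arriving at one pixel
reference = ref_spectrum @ carriers        # weighted-sum reference signal

# temporal correlation over one frame: cross terms vanish by orthogonality,
# leaving the inner product of the two spectra (scaled by n/2)
corr = pixel_signal @ reference / (n / 2)
print(corr)   # approximately obj_spectrum . ref_spectrum = 0.98
```

A pixel imaging a material whose spectrum matches the reference thus yields the maximum correlation output, which is what makes per-frame spectral matching possible.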

Alcatel Space Industries has acquired wide experience in the development of optical payloads for space applications. The increasing demands in terms of ground sampling distance, swath width and number of spectral bands lead to high, critical detector frequency ranges. The data rate at the detector video output may increase to values that are not compatible with the required radiometric performance. The present paper describes the figures of merit that express the performance of the CCD video signal and their relation to detector and optical payload radiometric performance. Improving CCD video signal performance is achieved through the optimization of CCD operating conditions: sequencing and phasing between critical clocks, clock rise and fall slopes, clock high and low levels, and detector biasing. Even though it is closely related to the application, a methodology for optimizing the video signal is proposed. This methodology is applied to a video signal operating at up to 10 Mpixels/second on several detector outputs, and the video signal performance improvements are described for each step of the optimization process, starting from the operating conditions provided by the detector manufacturers. The final improvements are presented and conclusions on the acceptability of the CCD performance are given.

An experimental ultrahigh-definition color video camera system with 7680 (H) × 4320 (V) pixels has been developed using four 8-million-pixel CCDs. The 8-million-pixel CCD, with a progressive scanning rate of 60 frames per second, has 4046 (H) × 2048 (V) effective imaging pixels, each 8.4 μm square. We applied the four-imager pickup method to increase the camera's resolution. This involves attaching four CCDs to a special color-separation prism. Two CCDs are used for the green image, and the other two are used for red and blue. The spatial sampling pattern of these CCDs with respect to the optical image is equivalent to that of a 32-million-pixel sensor with a Bayer-pattern color filter. The prototype camera attains a limiting resolution of more than 2700 TV lines both horizontally and vertically, which is higher than that of a single 8-million-pixel CCD. The sensitivity of the camera is 2000 lux at F2.8 with approximately 50 dB dark-noise level in the HDTV format. Its other specifications are a dynamic range of 200%, a power consumption of about 600 W and a weight, with lens, of 76 kg.

When interline CCD image sensors grow beyond 4 million pixels, CCD dark current begins to degrade the signal. Some scientific and photographic applications use very slow readout rates (less than 1 MHz) to reduce the noise level. At a 1 MHz readout rate, a 4-megapixel imager takes at least 4 s to read out. This extended period allows a significant amount of dark current to build up and frustrate efforts to reduce noise. Often this situation leads to the additional expense of low-temperature operation. An accumulation-mode readout method for interline CCD image sensors is being developed at Eastman Kodak Company. Previously, accumulation mode could only be applied to the full-frame architecture, because the p-type substrate acted as a source of holes. Interline CCD image sensors with n-type substrates have no ready source of holes to accumulate at the surface of the CCD under all phases. This problem has been overcome, allowing room-temperature operation without significant dark current generation.

The spatial resolution of an optical device is generally characterized by either the point spread function (PSF) or the modulation transfer function (MTF). To obtain the PSF directly, one measures the response of the optical system to a point light source. We present data showing the response of a back-illuminated CCD to light emitted from a sub-micron-diameter glass fiber tip. The potential well in back-illuminated CCDs does not reach all the way to the back surface; hence, light absorbed in the field-free region generates electrons that can diffuse into other pixels. We analyzed the diffusion of electrons into neighboring pixels for wavelengths of light ranging from blue to near infrared. To determine how the charge spreading into other pixels depends on the location of the light spot, the fiber tip could be moved with a piezoelectric translation stage. The experimental data are compared to Monte Carlo simulations and an analytical model of electron diffusion in the field-free region.
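The PSF-to-MTF relationship underlying such measurements can be sketched numerically; here a toy Gaussian PSF stands in for the measured diffusion-broadened response:

```python
import numpy as np

# toy Gaussian PSF standing in for the diffusion-broadened measured response
x = np.arange(-16, 16)
xx, yy = np.meshgrid(x, x)
sigma = 2.0                        # illustrative diffusion width in pixels
psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
psf /= psf.sum()

# the MTF is the magnitude of the Fourier transform of the PSF, normalized at DC
mtf = np.abs(np.fft.fftshift(np.fft.fft2(psf)))
mtf /= mtf.max()
print(mtf[16, 16])   # 1.0 at zero spatial frequency
```

A broader PSF (stronger charge diffusion, e.g. at longer wavelengths absorbed deeper in the field-free region) produces a faster MTF roll-off at high spatial frequencies.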

A new high-speed CCD sensor, capable of capturing 103 consecutive images at a speed of 1 million frames per second, was developed by the authors. To reach this high frame rate, 103 CCD storage cells are placed next to each image pixel. Sensors utilizing this on-chip memory concept can be called In-situ Storage Image Sensors (ISIS). The ISIS is built in standard CCD technology. To check whether this technology could be used for an ISIS, a test sensor called ISIS V1 was designed first. The ISIS V1 is a simple modification of an existing standard CCD sensor and is capable of taking 17 consecutive images. The new sensor, called ISIS V2, is a dedicated design in the same technology. It is equipped with storage CCD cells of the type also used in the standard CCD sensor, large light-sensitive pixels, an overwriting mechanism to drain old image information, and a CCD switch that allows part of the storage cells to also serve as vertical readout registers. The new parts of the architecture nevertheless had to be simulated with a 3-D device simulator. Simulation results and characteristic parameters of the ISIS CCD, as well as applications of the camera, are given.

We present the development of a high-image-quality color VGA CMOS image sensor with 4 μm pixel pitch, focusing especially on the reduction of image lag. To eliminate image lag, improving the charge transfer efficiency from the photodiode (PD) to the floating diffusion (FD) is the key point. We implemented two novel techniques for this purpose. The first is an optimized pixel layout that provides both a high fill factor and a large channel width for the transfer gate transistor (TG). We achieved both a high fill factor of 42% without microlenses and a large TG channel width of 1.79 μm, which is 2.4 times the minimum channel width allowed by the design rules. The second is a new TG device structure created by a novel boron implantation process, which provides a wide charge-transfer path from PD to FD. High-quality images with low image lag, less than 0.75%, were obtained.
Moreover, we also achieved a CMOS image sensor with high-temperature operability. To stabilize an adequate black level, we developed a new black-level control scheme that adjusts the offset voltages of the amplifiers with a feedback signal from the analog-to-digital converter (ADC). An adequate black level was realized up to 100 degrees centigrade for a 5.6 μm pitch monochrome CIF sensor.

Two different thermal imagers are tested to determine their system noise properties, such as the noise variance, the distribution of the system noise, the effect of the scanning element on the image, and possible uneven distributions of temperature caused by the optics or other phenomena. The results can be used for comparing the properties of different thermal imagers and in designing optimal image processing algorithms. The system noise is estimated with three different methods under certain assumptions: the use of the two-dimensional autocorrelation function and a fitted polynomial, the use of suitable high frequencies of the two-dimensional spectrum, and the use of a stable image series. The first two methods are closely related and can give the noise variance only. The shape of the system noise histogram can be approximated roughly from the image series under suitable conditions. The variability between even and odd lines in the image, and other possibly stable phenomena, are also analysed. These methods are first tested with simulated data sets and the methods are compared. Real image series from two different cameras are also used, and conclusions regarding their performance are drawn.
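The third method — estimating the noise variance from a stable image series — can be sketched as follows, with synthetic data standing in for real imager frames:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 3.0                              # true system-noise standard deviation
scene = rng.uniform(20, 200, (64, 64))   # fixed (stable) scene
frames = scene + rng.normal(0, sigma, (100, 64, 64))   # stable image series

# with a stable series, the per-pixel variance over time estimates the
# system-noise variance directly, independent of the scene content
noise_var = frames.var(axis=0, ddof=1).mean()
print(noise_var)   # close to sigma**2 = 9
```

Because the scene term is constant over time, it drops out of the temporal variance, which is what makes this method insensitive to the spatial structure of the image.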

Many applications, such as industrial inspection and overhead reconnaissance, benefit from line-scanning architectures in which time delay integration (TDI) significantly improves sensitivity. CCDs are particularly well suited to the TDI architecture since charge is transferred virtually noiselessly down the column. Sarnoff's TDI CCDs have demonstrated extremely high speeds: a 7200 × 64, 8 μm pixel device with 120 output ports demonstrated a vertical line transfer rate greater than 800 kHz.
The most recent addition to Sarnoff's TDI technology is the implementation of extended dynamic range (XDR) in high-speed, back-illuminated TDI CCDs. The optical, intrascene dynamic range can be adjusted in the design of the imager, with measured dynamic ranges exceeding 2,000,000:1 and no degradation in low-light performance. The device provides a piecewise-linear response to light, where multiple slopes and break points can be set during the CCD design. A description of the device architecture and measured results from fabricated XDR TDI CCDs are presented.

A system for controlling and testing high-resolution non-destructive astronomical imagers was constructed using open-source components, both hardware and software. The open-source electronics design, originated by Carnegie Observatories (OCIW) for CCD cameras, was modified, assembled, and augmented with new circuitry that facilitates monitoring of voltages and currents. The electronics were run from Python user-interface software based on a design from the University of Rochester. This new software utilized the Numarray and pyFITS modules developed at the Space Telescope Science Institute (STScI). Interfacing to the "dv" FITS image analysis package from the NASA IRTF was also implemented. Python (the STScI language of choice) was used as the primary language for systems integration, scripts for data acquisition, and scripts for data analysis. The DSP clocking software was a mixture of C and Motorola 56303 assembly. An interrupt-driven kernel-mode PCI device driver for Red Hat Linux was written in C, and used the PC processor and memory for image processing and acquisition. Two 1K × 1K Raytheon SB226-based hybridized silicon p-i-n arrays were operated and tested with the new system at temperatures as low as 10 K. Signal path gain, node capacitance, well depth, dark current, and MTF measurements were made and are presented here.

We present a new technique for accurate estimation of quantum efficiency, conversion gain, and noise in imagers. The traditional mean-variance method provides an erroneous estimation of these parameters for a non-linear device. Quantum efficiency, estimated by the mean-variance method, changes with the illumination level, a result that is inconsistent with theory. The estimation error can be easily larger than 50%, at mid-level illumination, and results from incorrect modeling of the transfer function. This is corrected by using non-linear estimation methods that force the slope of the photon transfer function to be proportional to the conversion gain. This results in accurate modeling of the signal dependence of the conversion gain, and in turn, accurate estimation of quantum efficiency and noise. By applying both methods to the measured data gathered from the same mega-pixel imager operated under different biasing conditions, it is shown that the non-linear estimation method provides a reliable and accurate estimation of quantum efficiency and noise, while the mean-variance method over-estimates quantum efficiency and under-estimates noise.
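The traditional mean-variance (photon transfer) estimate that this work improves upon can be sketched for a perfectly linear device, where it does recover the conversion gain; the gain, noise and signal levels below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
gain = 0.25        # true conversion gain in DN per electron (illustrative)
read_sigma = 2.0   # readout noise: 2 DN rms, i.e. 4 DN^2 variance
means, variances = [], []
for electrons in (200, 500, 1000, 2000, 5000):
    # shot noise is Poisson in electrons; this toy device is perfectly linear
    sig = gain * rng.poisson(electrons, 200000) + rng.normal(0, read_sigma, 200000)
    means.append(sig.mean())
    variances.append(sig.var(ddof=1))

# photon transfer: variance = gain * mean + read_sigma**2 for a linear sensor,
# so the slope of variance versus mean recovers the conversion gain
slope, intercept = np.polyfit(means, variances, 1)
print(slope)   # close to 0.25
```

The paper's point is that when the transfer function is non-linear, this slope is no longer constant over the signal range, so a non-linear fit constrained to the physical model is needed instead.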

A processing architecture for digital cameras has been built on the JPEG2000 compression system. The concerns are to minimize processing power and data traffic both inside the camera system (data bandwidth at the interface) and outside it (compression efficiency). The key idea is to decompose the Bayer matrix data from the image sensor into four half-resolution planes instead of interpolating to three full-resolution planes. With JPEG2000, a new compression standard capable of handling multi-component images, the four-plane representation can be encoded into a single bit-stream. The representation reduces data traffic between the image reconstruction stage and the compression stage by 1/3 to 1/2 compared to Bayer-interpolated data. Not only reduced processing power prior to and during compression but also competitive or superior compression efficiency is achieved. On reconstruction to full resolution, Bayer interpolation and/or edge enhancement is required as post-processing after a standard decoder, while half- or smaller-resolution images are reconstructed without post-processing. For mobile terminals with an integrated camera (image reconstruction in camera hardware and compression in the terminal processor), this scheme helps to accommodate increased resolution within the limited data bandwidth from the camera to the terminal processor and the limited processing capability.
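The decomposition into four half-resolution planes is a simple sub-sampling of the mosaic; an RGGB layout is assumed here for illustration:

```python
import numpy as np

bayer = np.arange(16 * 16).reshape(16, 16).astype(np.uint16)   # raw mosaic

# assuming an RGGB layout, each colour site forms a half-resolution plane
r  = bayer[0::2, 0::2]
g1 = bayer[0::2, 1::2]
g2 = bayer[1::2, 0::2]
b  = bayer[1::2, 1::2]
planes = np.stack([r, g1, g2, b])   # 4 components for a JPEG2000 encoder
print(planes.shape)   # (4, 8, 8)
```

Each plane is one quarter of the mosaic, so the four planes together carry exactly the raw data (no interpolation), which is why the representation is smaller than the three interpolated full-resolution planes.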

Digital imager sensor responses must be transformed to calibrated (human) color representations for display or print reproduction. Errors in these color rendering transformations can arise from a variety of sources, including (a) noise in the acquisition process (including photon noise and sensor noise) and (b) sensor spectral responsivities inconsistent with those of the human cones. These errors can be summarized by the mean deviation and variance of the reproduced values. It is desirable to select a color transformation that produces both low mean deviation and low noise variance. We show that under some conditions there is an inherent trade-off between these two measures: when selecting a color rendering transformation, either the mean deviation or the variance (caused by imager noise) can be minimized, but not both. We describe this trade-off mathematically, and we describe a methodology for choosing an appropriate transformation for different applications. We illustrate the methodology by applying it to the problem of color filter selection (CMYG vs. RGGB) for digital cameras. We find that under moderate illumination conditions photon noise alone introduces an uncertainty in the estimated CIELAB coordinates on the order of 1-2 ΔE units for RGGB sensors, and in certain cases even higher uncertainty levels for CMYG sensors. If we choose color transformations that equate this variance, the color rendering accuracy of the CMYG and RGGB filters is similar.

In existing single-chip solid-state color image sensors, three primary-color filter arrays are deposited on the surface of the photodetectors according to the Bayer pattern. To utilize the information conveyed by the incoming light more efficiently, we can employ new color filters whose passbands overlap with each other to a larger extent. As the filter array pattern we adopt the Bayer pattern; but unlike the three primary-color filters, we use only two color filters, red and blue, and we replace the green pixels of the primary-color filter array with white-black pixels, that is, pixels on which no light-absorbing chemicals are deposited. In this scheme, the key point is to construct a demosaicking method suited to these color filter arrays. We employ a hybrid demosaicking method that can restrain the occurrence of false color caused by demosaicking and preserve the original hue variations as thoroughly as possible while enhancing the spatial resolution of the restored image. The hybrid demosaicking method first applies a Landweber-type iterative algorithm, equipped with the frequency-band limitation corresponding to the sub-sampling pattern of the white-black pixel array, to the interpolation of the white-black pixels, and then interpolates the red and blue pixels with an existing chrominance-preserving method such as the gradient-based method. Experiments using test color images demonstrate that our hybrid demosaicking method reproduces a sharpened high-resolution color image without producing noticeable false-color artifacts. Our color image acquisition scheme gives a good compromise between false-color occurrence and high-fidelity color reproduction.

Image resolution and sharpness are essential criteria when a human observer estimates image quality. Cheap, small, low-resolution CMOS camera sensors typically do not provide sufficiently sharp images, at least compared to high-end digital cameras. A sharpening function can be used to increase the subjective sharpness seen by the observer. In this paper, a few methods for sharpening images captured by CMOS imaging sensors through a color filter array (CFA) are compared. Sharpening also easily increases the visibility of noise, pixel crosstalk, and interpolation artifacts; the arrangements necessary to avoid amplifying these unwanted phenomena are discussed. By applying the sharpening only to the green component, the processing power requirements can be clearly reduced. By adjusting the red and blue component sharpness according to the green component sharpening, the creation of false colors is greatly reduced. A direction-search sharpening method can be used to reduce the amplification of artifacts caused by the CFA interpolation (CFAI). The comparison of the presented methods is based mainly on subjective image quality; processing power and memory requirements are also considered.
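A minimal sketch of the green-driven idea: the unsharp high-pass term is computed from the green channel only, and the same correction is added to red and blue so their sharpness tracks green and gray tones stay gray. The kernel and gain are illustrative assumptions, not values from the paper.

```python
import numpy as np

def sharpen_rgb(img, gain=0.5):
    """Sharpen an HxWx3 float image using a green-channel high-pass only."""
    g = img[..., 1]
    # 3x3 Laplacian high-pass of the green channel (borders effectively
    # zero-padded, so some overshoot occurs at image edges).
    hp = 4 * g.copy()
    hp[1:, :] -= g[:-1, :]
    hp[:-1, :] -= g[1:, :]
    hp[:, 1:] -= g[:, :-1]
    hp[:, :-1] -= g[:, 1:]
    out = img + gain * hp[..., None]   # identical correction on R, G and B
    return np.clip(out, 0.0, 1.0)

img = np.zeros((8, 8, 3))
img[:, 4:] = 0.8                       # vertical edge in a gray image
out = sharpen_rgb(img)

# The correction is identical across channels, so gray stays gray:
print(np.allclose(out[..., 0], out[..., 1]))
```

Because only one channel's high-pass is computed, the per-pixel cost is roughly a third of sharpening all three channels independently, which is the processing-power saving the abstract refers to.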

We are developing a set of dyed red, green, and blue color filter coatings for the fabrication of high resolution CCD and CMOS image sensor arrays. The resists contain photosensitive polymer binders and various curing agents, soluble organic dyes, and solvents. The new dyed photoresists are sensitive to i-line radiation, primarily at 365 nm, and are negative-working, requiring less than 500 mJ of exposure energy for patterning. The coatings are developed in standard Tetramethylammonium Hydroxide (TMAH) developers. Many dyes were examined in order to achieve the desired spectral properties as well as meet the solvent solubility and thermal stability requirements. Computer modeling was utilized to determine the correct proportions of dye(s) in each resist, after which the modeling results were verified by actual formulation and testing. Thermal stability of the dyes was determined using isothermal Thermogravimetric Analysis (TGA) at 200°C for 30 minutes. The dyes were evaluated in both traditional (free radical) and novel polymer systems to see if adequate sensitivity, resolution, and feature quality could be obtained. The studies showed that traditional free radical-based photochemistries are marginal at best for high resolution (1-2 micron) applications. To overcome this limitation, a new polymer system having photodimerizable functional units and acid functional groups was developed to impart photosensitivity and developer solubility, respectively. This system, which does not use free radical-initiated photopolymerization as a mechanism for patterning, shows low exposure dose requirements and is capable of resolving features less than 2 micron in size.

Color demosaicking of CCD data has been thoroughly studied for still digital cameras, but there has seemingly been an absence of research on demosaicking techniques tailored to CCD video cameras. The temporal dimension of a sequence of color mosaic images can reveal new information about the color components missing due to the subsampling of the mosaic, information that is otherwise unavailable in the spatial domain of individual frames. In the temporal approach to color demosaicking, a pixel of the current frame is matched to another in a reference frame via motion analysis, such that the CCD camera samples the same position in different colors in the two frames. As a result, a color sample that is missing in the spatial domain may be recovered from the temporal domain. Even better, the intra-frame and inter-frame demosaicking techniques can be combined via data fusion to achieve more robust color restoration.
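The inter-frame recovery step reduces to a phase check on the CFA: if motion analysis says the current pixel imaged the same scene point as a shifted site in a reference frame, and that site has a different Bayer phase, the missing color can be copied rather than spatially interpolated. The function names and the simple RGGB phase function below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def bayer_color(y, x):
    """Which color an RGGB CFA samples at site (y, x)."""
    if y % 2 == 0:
        return 'R' if x % 2 == 0 else 'G'
    return 'G' if x % 2 == 0 else 'B'

def temporal_fetch(ref_raw, y, x, dy, dx, want):
    """Return the reference-frame sample if the motion-shifted site
    happens to sample the wanted color; otherwise None (caller falls
    back to spatial interpolation)."""
    ry, rx = y + dy, x + dx
    if 0 <= ry < ref_raw.shape[0] and 0 <= rx < ref_raw.shape[1] \
            and bayer_color(ry, rx) == want:
        return ref_raw[ry, rx]
    return None

ref = np.arange(16.0).reshape(4, 4)    # toy raw reference frame
# Pixel (0, 0) samples R; a one-pixel horizontal motion lands on a G site
# in the reference frame, so the missing G value is recoverable.
print(temporal_fetch(ref, 0, 0, 0, 1, 'G'))
```

An even-pixel horizontal shift would land on another R site and return None, which is exactly why the temporal approach depends on the motion trajectory crossing CFA phases.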

We have developed a new-concept high-speed, high-resolution color scanner (Blinkscan) using digital camera technology. With our most advanced sub-pixel image processing technology, approximately 12 million pixels of image data can be captured. This high-resolution imaging capability enables various uses such as OCR, color document reading, and document camera applications. The scan time is only about 3 seconds for a letter-size sheet. Blinkscan scans documents placed "face up" on its scan stage without any special illumination. With Blinkscan, a high-resolution color document can be input into a PC at high speed, so a paperless system can be built easily. The device is small, and its small footprint makes it possible to set it on an individual desk. Blinkscan offers the usability of a digital camera and the accuracy of a flatbed scanner with high-speed processing.
To date, several hundred Blinkscan units have shipped, mainly for receptionist operations in banking and security.
We present the high-speed, high-resolution architecture of Blinkscan. A comparison of operation time with conventional image capture devices makes the advantages of Blinkscan clear. Image quality is also evaluated under a variety of conditions, such as geometric distortion and non-uniform brightness.

Reconstruction techniques first build a "draft" high-resolution (HR) image from low-resolution (LR) images and then update the HR estimate by back-projection error reduction. This paper presents different HR draft image construction techniques and shows which methods provide the best solution in terms of final perceived/measured quality. The following algorithms have been analysed: a proprietary Resolution Enhancement method (RE-ST); a Locally Adaptive Zooming Algorithm (LAZA); a Smart Interpolation by Anisotropic Diffusion (SIAD); a Directional Adaptive Edge-Interpolation (DAEI); a classical bicubic interpolation; and a nearest-neighbour algorithm. The resulting HR images are obtained by merging the zoomed LR pictures using one of two strategies: average or median. To improve the corresponding HR images, two adaptive error reduction techniques are applied in the last step: auto-iterative and uncertainty-reduction.
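The two-stage pipeline can be sketched as follows (an illustrative reduction, not the paper's specific algorithms): several zoomed LR frames are merged into an HR draft by pixelwise average or median, and the draft is then refined by back-projecting the error between the observed LR image and the LR image simulated from the current HR estimate.

```python
import numpy as np

def merge_drafts(zoomed, strategy='median'):
    """Merge a list of equally sized zoomed LR frames into one HR draft."""
    stack = np.stack(zoomed)
    return np.median(stack, 0) if strategy == 'median' else stack.mean(0)

def back_project(hr, lr, scale=2, n_iter=20, step=1.0):
    """Iteratively reduce the error between lr and the downsampled hr."""
    for _ in range(n_iter):
        sim_lr = hr.reshape(lr.shape[0], scale,
                            lr.shape[1], scale).mean((1, 3))  # simulate LR
        err = lr - sim_lr                                     # observation error
        hr = hr + step * np.kron(err, np.ones((scale, scale)))  # back-project
    return hr

lr = np.array([[0.0, 1.0], [1.0, 0.0]])
drafts = [np.kron(lr, np.ones((2, 2))) for _ in range(3)]  # toy zoomed frames
hr = back_project(merge_drafts(drafts), lr)

# After refinement the HR estimate is consistent with the LR observation.
sim = hr.reshape(2, 2, 2, 2).mean((1, 3))
print(np.allclose(sim, lr))
```

In the real setting each LR frame is zoomed by one of the interpolators listed above (LAZA, SIAD, DAEI, etc.), and the choice of merge strategy (average vs. median) changes the draft's robustness to outlier frames.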

A method for synthesizing enhanced depth of field digital still camera
pictures using multiple differently focused images is presented. This
technique exploits only spatial image gradients in the initial
decision process. The image gradient as a focus measure has been
shown to be experimentally valid and theoretically sound under weak
assumptions with respect to unimodality and monotonicity. Subsequent majority filtering corroborates decisions with those of neighboring pixels, while the use of soft decisions enables smooth transitions across region
boundaries. Furthermore, these last two steps add algorithmic
robustness for coping with both sensor noise and optics-related
effects, such as misregistration or optical flow, and minor intensity
fluctuations. The dependence of these optical effects on several
optical parameters is analyzed and potential remedies that can allay
their impact with regard to the technique's limitations are
discussed. Several examples of image synthesis using the algorithm are
presented. Finally, leveraging the increasing functionality and
emerging processing capabilities of digital still cameras, the method
is shown to entail modest hardware requirements and is implementable
using a parallel or general purpose processor.
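The selection step described above can be sketched compactly: for each pixel, pick the source image with the largest local gradient magnitude (the focus measure), then smooth the decision map with a 3x3 majority filter so isolated decisions are corroborated by their neighbors. Window size and the hard-decision simplification are illustrative assumptions; the paper additionally uses soft decisions for smooth region transitions.

```python
import numpy as np

def focus_measure(img):
    """Spatial gradient magnitude as a per-pixel focus measure."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def fuse(images):
    """Fuse differently focused images of the same scene."""
    fm = np.stack([focus_measure(im) for im in images])
    decision = np.argmax(fm, axis=0)          # initial per-pixel choice
    out = decision.copy()
    h, w = decision.shape
    for y in range(1, h - 1):                 # 3x3 majority filter
        for x in range(1, w - 1):
            window = decision[y - 1:y + 2, x - 1:x + 2].ravel()
            out[y, x] = np.bincount(window).argmax()
    stack = np.stack(images)
    return np.take_along_axis(stack, out[None], 0)[0], out

sharp = np.zeros((6, 6))
sharp[:, 3:] = 1.0                 # strong edge: this version is in focus
blur = np.full((6, 6), 0.5)        # flat: this version is out of focus
fused, dmap = fuse([sharp, blur])
print(np.allclose(fused, sharp))
```

In this toy case the gradient measure selects the sharp source wherever it carries edge energy, and the majority filter would repair any isolated misclassifications caused by noise.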

When rendering photographs, it is important to preserve the gray tones despite variations in the ambient illumination. When the illuminant is known, white balancing that preserves gray tones can be performed in many different color spaces; the choice of color space influences the renderings of other colors. In this behavioral study, we ask whether users have a preference for the color space in which white balancing is performed. Subjects compared images processed with a white balancing transformation that preserved gray tones, with the transformation applied in one of four different color spaces: XYZ, Bradford, a camera sensor RGB, and the sharpened RGB color space. We used six scene types (four portraits, fruit, and toys) acquired under three calibrated illumination environments (fluorescent, tungsten, and flash). For all subjects, transformations applied in XYZ and sharpened RGB were preferred to those applied in Bradford and the device color space.
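The operation being compared is a diagonal (von Kries-style) scaling applied in a chosen color space: camera values are mapped into the space by a matrix M, scaled so the illuminant maps to white, and mapped back. The sketch below uses the identity matrix (i.e., balancing in device RGB itself) so the gray-preservation property is easy to verify; substituting a different M, such as a Bradford-type matrix, is what changes how non-neutral colors render.

```python
import numpy as np

def white_balance(rgb, illuminant, M):
    """Apply gray-preserving diagonal scaling in the space defined by M.

    rgb: N x 3 array of pixel rows; illuminant: camera response to the
    scene illuminant. Equivalent to (inv(M) @ diag(d) @ M) per pixel.
    """
    Minv = np.linalg.inv(M)
    d = 1.0 / (M @ illuminant)       # per-channel gains in M-space
    return rgb @ (M.T * d) @ Minv.T

M = np.eye(3)                            # balance in device RGB itself
illum = np.array([1.2, 1.0, 0.8])        # a warm illuminant (made up)
pixels = np.array([illum, 0.5 * illum])  # two gray surfaces under illum

out = white_balance(pixels, illum, M)
print(out)
```

Any neutral surface is a scalar multiple of the illuminant vector, so by linearity it maps to the same scalar multiple of white regardless of which M is used; the experimental question in the study is purely about how the remaining, non-neutral colors are rendered.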

This paper introduces a method for the automatic discrimination of digital images based on their semantic content. The proposed system detects whether or not a digital image contains text. This is realized by a multi-step procedure based on a properly derived set of low-level features. Our experiments show that the proposed algorithm is competitive in efficiency with classical techniques while having lower complexity.

Although the number of pixels in image sensors is increasing exponentially, production techniques have only been able to linearly reduce the probability that a pixel will be defective. The result is a rapidly increasing probability that a sensor will contain one or more defective pixels. Sensors with defects are often discarded after fabrication because they may not produce aesthetically pleasing images. To reduce the cost of image sensor production, defect correction algorithms are needed that allow the utilization of sensors with bad pixels. We present a relatively simple defect correction algorithm, requiring only a small 7 by 7 kernel of raw color filter array data, that effectively corrects a wide variety of defect types. Our adaptive-edge algorithm is high quality, uses few image lines, is adaptable to a variety of defect types, and is independent of other on-board DSP algorithms. Results show that the algorithm produces substantially better results in high-frequency image regions compared to conventional one-dimensional correction methods.
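A toy sketch of an edge-adaptive correction for a defective green CFA site: within the 7x7 raw window, same-color neighbors lie two pixels away in four directions, and the pair across the direction of lowest gradient is averaged so interpolation runs along edges rather than across them. This is an illustrative reduction of the adaptive-edge idea, not the paper's exact algorithm.

```python
import numpy as np

def correct_green(win):
    """win: 7x7 raw CFA window centered on the defective green pixel."""
    c = 3  # center index of the 7x7 window
    pairs = {
        'h':  (win[c, c - 2], win[c, c + 2]),       # horizontal neighbors
        'v':  (win[c - 2, c], win[c + 2, c]),       # vertical neighbors
        'd1': (win[c - 2, c - 2], win[c + 2, c + 2]),  # diagonal
        'd2': (win[c - 2, c + 2], win[c + 2, c - 2]),  # anti-diagonal
    }
    # Choose the direction with the smallest same-color gradient.
    best = min(pairs, key=lambda k: abs(pairs[k][0] - pairs[k][1]))
    a, b = pairs[best]
    return (a + b) / 2.0

win = np.zeros((7, 7))
win[:, :3] = 0.2           # vertical edge through the window
win[:, 3:] = 0.9
win[3, 3] = 0.0            # dead pixel on the bright side
print(correct_green(win))  # interpolates along the edge, not across it
```

A one-dimensional corrector restricted to a single row would average 0.2 and 0.9 across the edge here, which is exactly the high-frequency failure mode the abstract says the adaptive method avoids.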

Future earth observation optical systems will be more and more demanding in terms of ground sampling distance, swath width, number of spectral bands, and duty cycle. Existing architectures of focal planes and video processing electronics are hardly compatible with these new requirements: electronic functions are split across several units, and video processing is limited to frequencies around 5 MHz in order to fulfill the radiometric requirements expected of high-performance image quality systems. This frequency limitation requires a large number of video chains operated in parallel to process the huge number of pixels at the focal plane output, and leads to unacceptable mass and power consumption budgets. Furthermore, splitting the detection electronics functions into several units (at least one for the focal plane and proximity electronics, and one for the video processing functions) does not optimize production costs: specific development efforts must be performed on critical analog electronics for each piece of equipment, and operations of assembly, integration and test are duplicated at the equipment and subsystem levels. Taking these constraints into account, Alcatel Space has developed with CNES a new concept of Highly Integrated Detection Electronic Subsystem (SEDHI). This paper presents the design of this new concept and summarizes the main performance results measured at component and subsystem levels. The electrical, mechanical and thermal aspects of the SEDHI concept are described, including the basic technologies: an ASIC for phase shifting of detector clocks, an ASIC for video processing, an ASIC for phase trimming, hybrids, multi-chip modules, etc. Adaptability to a wide range of missions and optical instruments is also discussed. Design, performance and budgets of the subsystem are given for the Pleiades mission (successor of SPOT), for which the SEDHI concept has been selected.

The new Kodak KAI-11000CM image sensor-a 35-mm format, 11-Megapixel interline CCD-has been characterized to evaluate its performance in photography applications. Traditional sensor performance parameters, including quantum efficiency, charge capacity, dark current, and read noise are summarized. The impact of these performance parameters on image quality is discussed. A photographic evaluation of the sensor, including measurements of signal-to-noise and color fidelity, is described.

Digital still cameras are becoming popular not only in the professional market but also in the consumer market. In recent years, the number of pixels employed in consumer digital still cameras has increased dramatically, with the result that over half of the digital still cameras currently in the market have more than 2 megapixels. As the number of pixels increases, the ability to depict fine detail improves steadily. Although resolution is one of the important factors in image quality, current consumer digital cameras seem to have adequate pixel counts if the resultant image is to be displayed on a monitor or printed at small size. In this paper, we first review some important aspects of improving the image quality of consumer digital still cameras. We then propose our new conceptual CCD, which enables capture of a wider dynamic range. The new technology to realize this wider dynamic range is introduced, and a new algorithm to utilize it is also described.

This paper addresses the problem of wide dynamic range scenes and presents a new method of compressing the dynamic range of such scenes, based on the Multiscale Retinex algorithm. The paper presents the performance of the Multiscale Retinex algorithm on wide dynamic range pictures and proposes two modifications that enhance the results obtained with the original algorithm. The first modification recombines the resulting image with the original picture using a chosen weight. The second modification adjusts the histogram of the resulting picture. The modifications improve the results of the original Multiscale Retinex algorithm in a way that retains the global contrast of brightness and the natural impression of the resulting image. The paper explores the performance of this modified algorithm on different wide dynamic range scenes and points out its advantages over other dynamic range compression algorithms.
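A minimal single-scale sketch of the retinex step plus the first proposed modification (weighted recombination with the original image). A separable box filter stands in for the Gaussian surround, and the blend weight is an assumed value, purely for illustration; the full method uses several surround scales and adds the histogram adjustment as a second modification.

```python
import numpy as np

def surround(img, radius):
    """Crude separable box blur approximating the Gaussian surround."""
    k = np.ones(2 * radius + 1) / (2 * radius + 1)
    blur_1d = lambda v: np.convolve(np.pad(v, radius, mode='edge'), k, 'valid')
    tmp = np.apply_along_axis(blur_1d, 1, img)   # blur rows
    return np.apply_along_axis(blur_1d, 0, tmp)  # then columns

def retinex_blend(img, radius=2, alpha=0.6):
    """Single-scale retinex, then weighted recombination with the input."""
    eps = 1e-6
    r = np.log(img + eps) - np.log(surround(img, radius) + eps)  # retinex term
    r = (r - r.min()) / (r.max() - r.min() + eps)                # rescale to [0, 1]
    return alpha * r + (1 - alpha) * img   # weighted recombination (modification 1)

img = np.linspace(0.01, 1.0, 64).reshape(8, 8)   # toy wide-range ramp
out = retinex_blend(img)
print(out.shape)
```

The recombination term is what restores the global brightness ordering that the pure log-ratio output tends to flatten, which matches the stated goal of retaining global contrast and a natural impression.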

This paper presents the pioneering use of our unique Sub-micron Scanning System (SSS) for point spread function (PSF) and crosstalk (CTK) measurements of focal plane CMOS Active Pixel Sensor (APS) arrays. The system enables the combination of near-field optical and atomic force microscopy measurements with standard electronic analysis. The SSS enables full PSF extraction for imagers via sub-micron spot light stimulation, which is unique to our system. Other systems provide Modulation Transfer Function (MTF) measurements and cannot acquire the true PSF, limiting the evaluation of the sensor and its performance grading. A full PSF is required for better knowledge of the sensor and its specific faults, and for research, to enable better optimization of pixel design and imager performance.
In this work, based on thorough scanning of different “L”-shaped active-area pixel designs (responsivity variation measurements on a subpixel scale), the full PSF was obtained and the crosstalk distributions of the different APS arrays were calculated. The obtained PSF reveals pronounced asymmetry of the diffusion within the array, caused mostly by the particular pixel architecture and the arrangement of pixels within the array. We show that a reliable estimate of the CTK in the imager is possible; using the PSF for CTK measurements enables not only determination of its magnitude (which can be done by regular optical measurements) but also discovery of its main causes, enabling design optimization for each potential pixel application.
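One way the PSF-to-crosstalk calculation can be formulated (an assumed formulation, for illustration only): with the PSF sampled on a fine sub-pixel grid, the crosstalk to a given neighbor is the fraction of the PSF volume falling inside that neighbor's pixel area, and an asymmetric PSF immediately yields asymmetric crosstalk.

```python
import numpy as np

def crosstalk(psf, pix, neighbor):
    """Fraction of PSF volume landing in a neighboring pixel.

    psf: 2-D response map on a sub-pixel grid; pix: samples per pixel
    pitch; neighbor: (dy, dx) offset in whole pixels from the center.
    """
    cy, cx = psf.shape[0] // 2, psf.shape[1] // 2
    y0 = cy - pix // 2 + neighbor[0] * pix
    x0 = cx - pix // 2 + neighbor[1] * pix
    return psf[y0:y0 + pix, x0:x0 + pix].sum() / psf.sum()

# Synthetic asymmetric PSF: charge diffuses preferentially to the right.
y, x = np.mgrid[-15:16, -15:16]
psf = np.exp(-(y**2 + (x - 2)**2) / 30.0)

right = crosstalk(psf, 10, (0, 1))
left = crosstalk(psf, 10, (0, -1))
print(right > left)
```

This is why the full PSF is more informative than an MTF measurement: the direction-resolved crosstalk values point at the physical cause of the asymmetry, not just its magnitude.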
