The color filter array is the classic Bayer. Canon explains the resolution choice:

Illustrating the separate CFA array and the CMOS imager while also
showing the CFA separated into its component color filters to better expose
the structure of their respective sparsely sampled lattices

"The image sensor readout strategy radically departs from the customary “De-Bayer” deployment of quincunx sampling of the green photosites to maximize the green video resolution (and hence the matriced Luma resolution). The design strategy of this new sensor is not to seek any form of “4K” resolution — but rather to specifically confine the reconstruction of each of the R, G, and B video components to a full digital sampling structure of 1920 (H) x 1080 (V) — according to the SMPTE 274M HDTV Production Standard."

Showing the concept of structuring the final Green video component within the
pre-processing LSI from the two dual video readouts from the CMOS image sensor

"The dual Green process offers the following significant technical advantages:

1) [Doubles the amplitude of the summed Green output]

2) Increases the noise of the final Green output only by a factor of square root of two

3) Combination of 1) and 2) increases the effective dynamic range of the green signal — and as a consequence, that of the matriced Luma signal

4) Increases the effective output green video bit depth

5) The half-pixel offset between the two separate green sampling lattices — both horizontally and vertically — virtually eliminates the first order sideband spectra associated with the sensor sampling process. This eliminates green aliasing.

6) Creates an effective FIR* filter within the readout process that aids the optimization of the horizontal MTF and the progressive vertical MTF and associated aliasing."

The summation of the two greens is said to increase the DR from 70dB to 73.5dB in green (in fact, from 70.5dB to 73.5dB), or to 72dB (12 stops) in luma.
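The 3dB figure follows directly from summing two equal, independently noisy samples: the signal doubles while uncorrelated noise grows only by √2. A quick check (the 70.5dB starting point is the figure quoted above):

```python
import math

# Summing two identical green photosites: the signal doubles, while the
# uncorrelated noise of the pair adds in quadrature (grows only by sqrt(2)).
signal_gain = 2.0
noise_gain = math.sqrt(2.0)

snr_gain_db = 20 * math.log10(signal_gain / noise_gain)  # ~3.01 dB

dr_single_db = 70.5                  # single-green DR quoted above
dr_dual_db = dr_single_db + snr_gain_db

print(f"SNR/DR gain: {snr_gain_db:.2f} dB")    # ~3.01 dB
print(f"dual-green DR: {dr_dual_db:.1f} dB")   # ~73.5 dB
```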

Although the camera frame rate is 24 fps (24p), the readout time is 1/60s to reduce rolling shutter effects:

In 60i mode each half-frame is read in 1/120s, i.e. at the same per-row speed as in 24p mode.
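Assuming a 1080-row frame (per SMPTE 274M) and 540-row interlaced fields, the per-row time works out identical in the two modes:

```python
# Per-row readout time: a full 1080-row frame read in 1/60s (24p mode)
# versus a 540-row field read in 1/120s (60i mode).
row_time_24p = (1 / 60) / 1080
row_time_60i = (1 / 120) / 540

print(f"{row_time_24p * 1e6:.2f} us per row")  # ~15.43 us in both modes
assert abs(row_time_24p - row_time_60i) < 1e-15
```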

The low read noise is achieved by limiting the readout amplifier bandwidth, as shown below:

Showing the two separate green 1920 (H) x 1080 (V) photosite lattices
and the horizontal and vertical timing offsets between each of the
two “diagonal” pixels that are summed during the readout process

The resulting horizontal and vertical MTFs of the whole system are improved:

The summary says: "A new CMOS image sensor has been described. It represents a definitive decision by Canon to enter the global field of digital cinematic motion imaging. It is anticipated that there will be many progressive advances in the years ahead. Accordingly, a priority was assigned to taking a first step into this important field of imaging by placing an initial focus on originating a very high quality RGB video component set specifically intended for high-performance High definition video production."

"A 3D-ToF FSI image sensor using novel concentric photogate [CG] pixels with single-tap operation is described. Through the use of CG structure, we are able to achieve high DC at larger pixel pitches. The new CG pixel structure substantially improves DC [demodulation contrast] to 53% at 20MHz at 28 μm pixel pitch. Recent initial results from a backside-illuminated (BSI) implementation of the same sensor show further improved performance and will be reported elsewhere."

Friday, December 30, 2011

As written in comments, the recently acquired Kodak Image Sensor Solutions has been quietly renamed to Truesense Imaging, Inc. Kodak first used the Truesense name for its W-RGB color filter products almost 3 years ago. I wonder if the new company name is meant to emphasize the importance of the W-RGB products.

Meanwhile, Tessera renamed its imaging and optics division to Digital Optics Corporation. The new entity is responsible for wafer-scale optics (former Shellcase), EDoF (former Eyesquad and Dblur), MEMS AF motors (former Siimpel), micro-optics (the original bearer of the Digital Optics Corporation name, acquired by Tessera in 2006) and image enhancement software (former Fotonation). It appears that the division was renamed and spun off into a wholly owned subsidiary in June 2011.

Another part of Tessera, dealing with chip packaging, has been separated and renamed too. Its new name is Invensas. In Nov. 2011 Invensas acquired the patent assets of California-based TSV foundry ALLVIA. It does not seem to target image sensor applications though.

BBC, US Army: The A160 Hummingbird helicopter-style drones with 1.8 Gigapixel color cameras are being developed by the US Army promising "an unprecedented capability to track and monitor activity on the ground".

A statement added that three of the sensor-equipped drones were due to go into 1-year trial service in Afghanistan in either May or June 2012 as a part of a Quick Reaction Capability, an acquisition approach aimed at delivering cutting-edge and emerging technologies to theater. The army developers and engineers are now finishing up some wiring work on the A160 aircraft and performing ground tests with the ARGUS sensor suite.

Boeing built the first drones, but other firms can bid to manufacture others. The 1.8 Gigapixel ARGUS-IS camera is developed and manufactured by BAE Systems.

The army said that was enough to track people and vehicles from altitudes above 20,000 feet (6.1km) across almost 65 square miles (168 sq km). In addition, operators on the ground can select up to 65 steerable "windows" following separate targets to be "stared at".

DARPA is also working with the UK-based division of BAE Systems to develop a more advanced version of the Argus-IS sensor that will offer night vision. It said the infrared imaging sensors would be sensitive enough to follow "dismounted personnel at night". In addition, the upgrade promises to be able to follow up to 130 "windows" at the same time. The system's first test flight has been scheduled to take place by June 2012.

Thursday, December 29, 2011

Digitimes quotes its sources saying that the next-generation iPad 3 will be released in two versions. The high-end version will feature an 8MP camera with a Sony sensor. As for the mid-range model, Samsung is said to be among the suppliers of its 5MP sensor.

The new iPad 3 tablets are to be announced at iWorld on Jan. 26, 2012, according to the newspaper. The original version of iPad was announced on Jan. 27, 2010, while the iPad 2 was first shown on March 2, 2011.

"[A] 3-D depth camera system includes an illuminator and an imaging sensor. The illuminator creates at least one collimated light beam, and a diffractive optical element receives the light beam, and creates diffracted light beams which illuminate a field of view including a human target. The image sensor provides a detected image of the human target using light from the field of view but also includes a phase element which adjusts the image so that the point spread function of each diffractive beam which illuminated the target will be imaged as a double helix. [A] ...processor ...determines depth information of the human target based on the rotation of the double helix of each diffractive order of the detected image, and in response to the depth information, distinguishes motion of the human target in the field of view."

Actually, it's much easier to understand this idea in pictures. Below is the illuminator with a diffractive mask 908:

There is another mask 1002 on the sensor side:

Below is the proposed double-helix PSF as a function of distance. One can see that the angle of the line through the two points changes as a function of depth:

The orientation angle of the PSF points depends on the wavelength (not shown here, see the application) and the distance (shown below):

From this angle the object distance can be calculated - this is the idea. Microsoft gives an image example and how it changes with the distance in what looks like a Wide-VGA sensor plane:
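A minimal sketch of the depth decoding step: find the orientation of the line through the two PSF lobes and map it to depth. The linear calibration constants below are purely hypothetical; per the application, the real mapping also depends on wavelength and diffraction order.

```python
import math

def helix_angle(p1, p2):
    """Orientation angle (radians) of the line through the two PSF lobes."""
    return math.atan2(p2[1] - p1[1], p2[0] - p1[0])

def angle_to_depth(angle, angle_ref, depth_ref, rad_per_meter):
    """Hypothetical linear calibration from lobe rotation to depth."""
    return depth_ref + (angle - angle_ref) / rad_per_meter

# Two detected lobe centroids (pixel coordinates) of one diffraction order:
a = helix_angle((10.0, 10.0), (14.0, 14.0))            # 45 degrees
d = angle_to_depth(a, angle_ref=0.0, depth_ref=1.0,
                   rad_per_meter=math.pi / 2)          # made-up calibration
print(f"lobe angle {math.degrees(a):.0f} deg -> depth {d:.2f} m")
```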

Update: As written in comments, University of Colorado, Denver has been granted a patent US7705970 on a very similar idea. A figure in the patent looks very similar:

"This thesis presents a current-mode CMOS image sensor operating in linear-logarithmic response. The objective of this design is to improve the dynamic range of the image sensor, and to provide a method for mode detection of the image sensor response. One of the motivations of using current-mode has been the shrinking feature size of CMOS devices. This leads to the reduction of supply voltage which causes the degradation of circuit performance in terms of dynamic range. Such a problem can be alleviated by operating in current-mode. The column readout circuits are designed in current-mode in order to be compatible with the image sensor. The readout circuit is composed of a first-generation current conveyor, an improved current memory employed as a delta reset sampling unit, a differential amplifier as an integrator and a dynamic comparator."

"This paper addresses the difficulty of generating High Dynamic Range (HDR) images using current Low Dynamic Range (LDR) camera technology. Typically, several LDR images must be acquired using various camera f-stops and then the images must be blended using one of several exposure bracketing techniques to generate HDR images. Based on Fourier analysis of typical Color Filter Array (CFA) sampled images, we demonstrate that the existing CFA sampled images provide information that is currently underutilized. This thesis presents an approach to generating HDR images that uses only one input image while exploiting that underutilized CFA data. We propose that information stored in unsaturated color channels is used to enhance or estimate details lost in saturated regions."

One must note that the DR extension is not that big and relies on the assumption that not all color channels saturate simultaneously.
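A toy per-pixel sketch of the idea: when one color channel clips, re-estimate it from an unsaturated channel via an assumed local color ratio. The ratios here are hypothetical stand-ins for the locally estimated values a real implementation would use; this is not the thesis' actual Fourier-based algorithm.

```python
SAT = 255  # clipping level of the 8-bit LDR capture

def recover(r, g, b, ratio_gr=1.5, ratio_gb=1.2):
    """Re-estimate a clipped channel from an unsaturated one using assumed
    local color ratios (ratio_gr = G/R, ratio_gb = G/B). Both ratios are
    hypothetical placeholders for locally estimated values."""
    if g >= SAT and r < SAT:
        g = r * ratio_gr          # extend G past the clip point
    elif g >= SAT and b < SAT:
        g = b * ratio_gb
    if r >= SAT and g < SAT:
        r = g / ratio_gr
    if b >= SAT and g < SAT:
        b = g / ratio_gb
    return r, g, b

# Green clipped, but red still informative:
print(recover(180, 255, 120))     # -> (180, 270.0, 120): G pushed above 255
```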

"This thesis covers four main contributions: A physical sensor model is presented which enables the analysis and optimization of the process of raw image acquisition. This model supports the proposal of a new ToF sensor design which employs a logarithmic photo response.
Due to asymmetries of the two read-out paths current systems need to acquire the raw images in multiple instances. This allows the correction of systematic errors. The present thesis proposes a method for dynamic calibration and compensation of these asymmetries. It facilitates the computation of two depth maps from a single set of raw images and thus increases the frame rate by a factor of two.
Since not all required raw images are captured simultaneously motion artifacts can occur. The present thesis proposes a robust method for detection and correction of such artifacts.
All proposed algorithms have a computational complexity which allows real-time execution even on systems with limited resources (e.g. embedded systems). The algorithms are demonstrated by use of a commercial ToF camera."
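For reference, the standard 4-phase continuous-wave ToF depth computation that such systems build on (the thesis' single-raw-set, two-depth-map variant is not reproduced here); the synthetic raw samples and 20 MHz modulation are illustrative:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth(a0, a90, a180, a270, f_mod=20e6):
    """Standard 4-phase CW-ToF depth from four raw correlation samples
    taken at 0/90/180/270 degree reference phases."""
    phase = math.atan2(a90 - a270, a0 - a180) % (2 * math.pi)
    return C * phase / (4 * math.pi * f_mod)

# Synthetic raw samples for a target at 1.5 m:
d_true = 1.5
phi = 4 * math.pi * 20e6 * d_true / C
raw = [100 + 50 * math.cos(phi - k * math.pi / 2) for k in range(4)]
print(f"recovered depth: {tof_depth(*raw):.3f} m")
```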

Quite significant RTS and 1/f noise reduction in image sensors has been reported:
"In the case of this research it was shown that once the noise source and mechanism was understood, necessary steps could be taken to reduce the source of the noise. Two examples shown here are the impact of substrate bias and modification of the doping levels. Substrate biasing is a relatively straightforward approach to reducing the noise and has been shown here to have this repeatable effect. With additional understanding of the percolation currents, modification of the channel dopant profile can serve as an additional means for device noise improvement. Once understood, these relatively easy steps, as in the case of reducing the implant dose in the channel, verified the theory and model developed during this research and resulted in a superior performing CMOS image sensor product."

Thursday, December 22, 2011

e2v applies for a patent extending its EMCCD technology to the realm of CMOS sensors: "Electron multiplication image sensor and corresponding method" by Frédéric Mayer (France). Fig. 1 of the US20110303822 application shows a prior art 4T pixel having a pinned photodiode PHD:

e2v proposes to split the PHD into two with the "accelerating gate" GA in between, as in Fig. 2. By applying multiple voltage pulses to GA the electrons can be moved in and out of it, as shown in Fig. 3.

"The electron multiplication takes place during the charge integration and in the photodiode itself in the sense that the electrons (photogenerated or resulting already from the impacts of carriers with atoms) are accelerated in turn from the photodiode towards the accelerating gate and from the accelerating gate towards the photodiode. During these movements, impacts with atoms of the semiconductor layer of the photodiode region or of the region located beneath the accelerating gate make other electrons in the valence band pass into the conduction band. These electrons lose energy during these impacts but they are again accelerated by the electric field that is present.

The number of alternations in potential applied to the accelerating gate defines the overall multiplication coefficient obtained at the end of an integration period T, i.e. between two successive pulses for transferring charge from the photodiode to the charge storage region."

Fig. 4 shows one of the possible pixel layouts with GA located in the middle of PHD.
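If each photodiode-to-gate transit multiplies the charge by (1 + p), where p is the per-transit impact-ionization probability, the overall multiplication grows exponentially with the number of gate alternations, consistent with the patent's statement that the number of alternations sets the multiplication coefficient. A sketch with a purely illustrative p:

```python
def em_gain(n_alternations, p_impact=0.01):
    """Overall multiplication after n accelerating-gate alternations, assuming
    each photodiode<->gate transit multiplies the charge by (1 + p_impact).
    p_impact is an illustrative per-transit ionization probability."""
    return (1 + p_impact) ** n_alternations

for n in (100, 500, 1000):
    print(n, round(em_gain(n), 1))
```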

Update: As said in comments, in 2009 Sanyo published a different idea of electron multiplying CMOS pixel. The idea is shown on the figure below:

Update #2: As EF said in comments, Sanyo presented its electron multiplying sensor at ISSCC 2009 (paper, presentation). The pixel structure and the gain non-uniformity are taken from the presentation slides:

Physorg.com, Optics InfoBase: "Many optical systems today, such as those in sensing, nanolithography, and many others, are built on a general belief: An optically opaque metal film would block light transmission even if the film has small holes, as long as the holes are covered with opaque metals which geometrically block the light path through the holes. For example, light transmission from one side of a glass to the other side is assumed to be blocked, when an opaque metal film is coated on one surface of the glass, even if the surface unavoidably has tiny dusts. This is because the coated metal covers the dust completely, hence blocking the light geometric path through the dust. Here, we report our experimental and theoretical study that demonstrates otherwise: Not only the light can transmit, but also the transmission is greatly enhanced, which is much better than an open hole. Furthermore, we found the transmission can be tuned by the metal blocker’s geometry and by the gap between the blockers and the metal film."

These electron microscope images show an experiment in which Princeton Professor of Engineering Stephen Chou showed that blocking a hole in a thin metal film could cause more light to pass through the hole than leaving the hole unblocked. The top image shows an array of 60nm holes spaced 200nm apart with gold caps, each of which is 40 percent bigger than the hole on which it sits. The bottom image shows a cross-section view of one hole with the cap sitting on top of SiO2 pillar. The gold film in the experiment was 40nm thick. The hole covered with the cap surprisingly allows 70% more light to be transmitted through the film than a hole without the cap, Chou's research team found.

"We did not expect more light to get through," Chou said. "We expected the metal to block the light completely."

Chou said the metal disk acts as a sort of "antenna" that picks up and radiates electromagnetic waves. In this case, the metal disks pick up light from one side of the hole and radiate it to the opposite side. The waves travel along the surface of the metal and leap from the hole to the cap, or vice versa depending on which way the light is traveling. Chou's research group is continuing to investigate the effect and how it could be applied to enhance the performance of ultrasensitive detectors.

Comparison of transmittance measurements showing 70% transmission enhancement of the blocked hole array over the open hole array.

(a) Experimental transmittance spectra measured on a periodic gold hole array blocked by Au nanodisks and the same gold hole array after removal of the nanodisks. The hole array has a hole diameter of 70 nm and a gold thickness of 40 nm, the gold nanodisks have a diameter of 85 nm, and the SiO2 pillar height is 52 nm.

(b) Plot of transmission enhancement ratio calculated by dividing the optical transmission of blocked and open gold hole arrays. A maximum enhancement of 1.7x is observed at 680 nm.

Eedoo is now negotiating its 3rd round of financing. The company's CEO said that next year the company's total investment in the iSec project will reach 100M yuan ($15M).

Update: PC World: Eedoo has pushed back its launch date again to some time later in 2012, said Eedoo spokesman Victor Wang on Monday. A source close to the situation, however, said on condition of anonymity that the launch may be delayed further, as the product was not found to be robust enough.

The 2.5D vs 3D process and device simulations are compared based on the Synopsys Sentaurus simulator. Also, the Lumerical FDTD simulator was used for the optical part. The 1.4um FSI pixel simulations show a 3D Qsat of 4200e-, while it is 5800e- in both measurements and 2.5D simulations.

The discrepancy exists also between the simulated and measured QE:

The paper's conclusion is that "further simulation calibration adjustments are required to match experimental Qsat and QE".

Wednesday, December 14, 2011

The device has been developed by the MIT Media Lab’s Camera Culture group in collaboration with Bawendi Lab in the Department of Chemistry at MIT. A laser pulse that lasts less than one trillionth of a second is used as a flash, and the light returning from the scene is collected by a camera at a rate equivalent to roughly half a trillion frames per second. However, due to very short exposure times (roughly two trillionths of a second) and a narrow field of view of the camera, the video is captured over several minutes by repeated and periodic sampling. The new technique is able to compose a single 2D movie of roughly 480 frames, each with an effective exposure time of 1.71 picoseconds.
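Quick arithmetic on the quoted numbers: the whole movie spans under a nanosecond of scene time, during which light travels only about 25 cm:

```python
frames = 480
t_exp_ps = 1.71              # effective exposure per frame, picoseconds
window_ps = frames * t_exp_ps
c_mm_per_ps = 0.2998         # light travels ~0.3 mm per picosecond

print(f"movie time window: {window_ps:.0f} ps")                # ~821 ps
print(f"light path length: {window_ps * c_mm_per_ps:.0f} mm")  # ~246 mm
```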

Tuesday, December 13, 2011

As BD pointed out in comments, the Caeleste publications page has been updated to include the latest CNES Workshop 2011 presentations. The most interesting one is "A 0.5 noise electrons CMOS pixel" by Bart Dierickx, Nayera Ahmed, and Benoit Dupont. The presentation explains the 1/f and RTS noise reduction principle by cycling the pMOSFET between accumulation and inversion:

150 inversion-accumulation cycles are averaged to reduce pixel noise down to 0.5e level:

The result was measured on a technology demonstrator based on a 100um standalone test structure with ~7μm MOSFET area; the pixel is used in CTIA mode with >1000μV/e- conversion gain:
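The averaging step itself is simple statistics: once cycling has decorrelated the noise between samples, averaging M samples reduces the RMS by √M. A small Monte Carlo check (the ideal √150 ≈ 12x reduction is larger than the measured 4x improvement, as expected when part of the noise remains correlated; the 2e- per-sample figure is illustrative):

```python
import math
import random

random.seed(2)

def rms(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

M, trials = 150, 4000
sigma = 2.0    # per-sample read noise, e-

# Average of M uncorrelated samples: the RMS drops by sqrt(M).
avgs = [sum(random.gauss(0, sigma) for _ in range(M)) / M
        for _ in range(trials)]
print(f"single sample: {sigma} e-, averaged: {rms(avgs):.3f} e-")
print(f"ideal sqrt(M) prediction: {sigma / math.sqrt(M):.3f} e-")
```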

Teledyne DALSA announces that NASA-designed, DALSA-manufactured CCDs are embedded in the Engineering Cameras of the Mars Curiosity Rover, launched on Saturday, November 26, 2011. The Engineering Cameras, known as the Navcam and Hazcam cameras, are located on the Mars Science Laboratory (MSL) Rover and are used for navigation on the surface of Mars. The Rover will use 4 Navcam cameras and 8 Hazcam cameras.

Navcams (Navigation Cameras) are B&W stereo cameras using visible light to gather panoramic, 3D imagery for ground navigation planning by scientists and engineers. Hazcams (Hazard Avoidance Cameras) are B&W cameras using visible light to capture 3D imagery to safeguard against the rover getting lost or inadvertently crashing into unexpected obstacles; they work in tandem with software that allows the rover to make its own safety choices and to "think on its own."

Teledyne DALSA also announced it will partner with Surrey Satellite Technology Limited (SSTL) to develop a new multispectral sensor for an advanced earth observation application. The multimillion dollar development project is expected to begin delivering high resolution images during 2014 for applications such as urban planning and environment and disaster monitoring. Custom multispectral sensors to be designed and manufactured by 2013:

Monolithic multispectral imagers--3, 4, 5 or more different imaging areas on one chip

e2v has signed a multi-million dollar contract for a 2 year program to supply the complete 1.2 Giga-pixel camera system for the Javalambre Physics-of-the-Accelerating-Universe Astrophysical Survey (J-PAS) project funded by a consortium of Spanish and Brazilian astronomy institutes. J-PAS will be dedicated to creating a map of the observable Universe in 56 continuous wavebands from 350nm to 1000nm. The e2v cryogenic camera system has a 1.2 gigapixel mosaic array capable of being read out in 10 seconds.

The camera will be designed and built by e2v, will use 14 newly developed CCD290-99 sensors and includes a guarantee of the camera’s performance levels and a commercial warranty. The 85MP CCDs will be back-thinned and given a multi-layer, anti-reflection coating. They have a 9k x 9k pixel format, with multiple outputs for rapid readout times, and are mounted in a precision package to allow them to be assembled into a mosaic, providing an image area that is nearly 0.5m in diameter. The focal plane assembly will also include the telescope guide and wavefront sensors. The whole focal plane will then be contained in a custom cryogenic camera, with vacuum and cooling components and integrated electronics which will provide state-of-the-art low noise for maximum sensitivity.

e2v has also signed a multi-million Euro contract with Thales Alenia Space for the design, development and manufacture of a space qualified CMOS imaging sensor for use in the Flexible Combined Imager (FCI) instrument of the Meteosat Third Generation (MTG), an ESA and the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) program. The first MTG-I satellite is expected to be launched in 2017, with the first MTG-S following in early 2019.

The success of the next generation of extremely large ground-based optical telescopes (E-ELT, GMT, and TMT) will depend upon improving the image quality (correcting the distortion caused by atmospheric turbulence) by deploying sophisticated Adaptive Optics (AO) systems

One of the critical components of the AO systems for the E-ELT has been identified as the wavefront sensor detector

The combination of large array size (1760x1760 pixels, needed to account for the elongation of laser guide stars (LGS)), fast frame rate of 700 (up to 1000) frames per second, required high QE (90%), and low readout noise of 3e- makes the development of such a device extremely challenging

A CMOS Imager is under development with a highly parallel read out architecture consisting of over 60,000 on-chip ADCs and 88 parallel high speed LVDS ports to achieve the low read out noise at the high pixel rates of ~3 Gpixel/s (~30 GBit/s). The Imager will be thinned and backside illuminated to reach the 90% QE

This talk reports on the development of the full size Imager and results of Technology Demonstrators
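The quoted rates are easy to verify; the 10-bit sample depth below is an assumption chosen to match the ~30 GBit/s figure, not a stated spec:

```python
rows = cols = 1760
fps = 1000                 # upper end of the quoted frame rate
bits_per_pixel = 10        # assumed ADC depth (matches the ~30 GBit/s quote)
lvds_ports = 88

pix_rate = rows * cols * fps          # pixels per second
bit_rate = pix_rate * bits_per_pixel  # bits per second

print(f"{pix_rate / 1e9:.1f} Gpixel/s")   # ~3.1 Gpixel/s
print(f"{bit_rate / 1e9:.0f} Gbit/s")     # ~31 Gbit/s
print(f"{bit_rate / lvds_ports / 1e6:.0f} Mbit/s per LVDS port")
```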

Albert Theuwissen posted that Hiroaki Fujita passed away. Hiro used to work as a pixel and process engineer for Sony, Kodak, Aptina, and, briefly, for Panasonic. He was in the group of Kodak engineers that received the 2009 Walter Kosonocky Award for an RGB-W sensor with a 1.4um pixel with a p-type photodiode.

Saturday, December 10, 2011

Reuters: Total Immersion, a pioneer in the emerging field of augmented reality (AR), is working with Intel to bring AR features, like gesture recognition, into Intel's chipsets, Total Immersion's marketing chief Antoine Brachet said.

"What we are doing together with Intel is working on their chipset ... so inside the chipset you can have some AR features, like gesture recognition that can be transferred from software to hardware," Brachet said.

Intel Capital invested $5.5M in Total Immersion in March 2011. Total Immersion, founded in 1999 (other sources say 1998), has lately attracted the attention of other heavyweights, Google and Qualcomm, according to Reuters.

"The industry's Most Respected Private Semiconductor Company award is designed to identify the private company garnering the most respect from the industry in terms of its products, vision and future opportunity. GSA's Awards Committee reviews all private semiconductor companies, and the selected nominees and winner are based on the committee's analysis of each company's performance and likelihood of long-term success."

Update: Ambarella: "It is an honor to receive the GSA’s 2011 award for Most Respected Private Semiconductor Company, our fourth GSA award and our second back-to-back award in this category," said Fermi Wang, CEO of Ambarella.

RIT Center for Detectors published a nice EMCCD lecture by Craig Mackay, Institute of Astronomy, University of Cambridge, UK. The lecture covers main principles and applications of EMCCDs, primarily produced by e2v. Some slides from the lecture:

Introduction to EMCCDs: General Characteristics

EMCCDs are standard CCDs plus an electron multiplication stage.

EMCCDs may be read out at high pixel rates (up to 30 MHz for E2V EMCCDs, probably up to 60 MHz for TI EMCCDs).

The gain mechanism increases the variance in the output signal so that the noise goes as √(2N) rather than √(N), i.e. the signal-to-noise ratio goes as √(N/2).
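The variance doubling (excess noise factor F² → 2 at high gain) can be reproduced with a per-stage mean/variance recursion for the multiplication register; the per-stage ionization probability p below is an illustrative value:

```python
def excess_noise_factor(p, stages):
    """Mean gain G and excess noise factor F^2 of a multiplication register
    in which each stage adds Binomial(n, p) secondary electrons."""
    m, v = 1.0, 0.0     # mean and variance for a single input electron
    for _ in range(stages):
        v = (1 + p) ** 2 * v + p * (1 - p) * m   # law of total variance
        m *= 1 + p
    return m, 1 + v / m ** 2

gain, f2 = excess_noise_factor(p=0.01, stages=600)
print(f"gain ~{gain:.0f}, F^2 ~{f2:.2f}")   # F^2 approaches 2 at high gain
```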

The RTS noise and 1/f noise are reduced by cycling the MOSFET between inversion and accumulation to produce uncorrelated noise which, when sampled, becomes "white".

A CTIA configuration was used and a very high conversion gain of nearly 1000uV/e- was reported.

When the cycling of the MOSFET was not used, a 2e- readout noise was obtained, while with cycling a 0.5e- readout noise in the dark was measured. However, it was mentioned that the measurements showed variance, possibly due to the CVF. Research institutes and PhD students were invited to do an independent confirmation!

Wednesday, December 07, 2011

Albert Theuwissen published a report from the 1st day of the CMOS Detector Workshop being held in Toulouse, France these days. The report was written by M. Sarkar and covers most of the papers presented at the workshop. The workshop is geared toward space and scientific applications. The full list of the workshop papers can be found here.

e2v: On November 26, e2v sensors were launched into space onboard an Atlas V rocket as part of NASA’s Mars Science Laboratory mission, which plans to land a rover named “Curiosity” on the surface of Mars as part of NASA’s Mars Exploration Programme.

The Mars Science Laboratory is a long-term robotic exploration mission to assess if Mars is, or ever has been, an environment that can support life. It will be the biggest, most capable robot to ever land on another planet. e2v imaging sensors equip both the rover’s Chemistry and Mineralogy instrument (CheMin), which was developed by NASA’s Jet Propulsion Laboratory (JPL), and the Chemistry & Camera instrument (ChemCam), which was developed by the Los Alamos National Lab under an agreement with NASA’s JPL. CheMin will identify and measure the minerals on the planet using sophisticated x-ray detection techniques. The ChemCam instrument consists of a laser, which will be used to vaporise rock samples, and a camera which will then use Laser Induced Breakdown (LIB) spectroscopy to analyse the material produced.

CheMin uses the e2v CCD224, a specialised imaging sensor array optimised for the detection of x-rays in a space environment. This high performance imaging sensor is based upon technology originally implemented in the European Space Agency’s XMM-Newton X-Ray observatory, where it has been operating successfully in the EPIC Instrument for the last 10 years. CheMin will expand the use of e2v’s x-ray imaging sensor technology to the Martian surface.

ChemCam uses the e2v CCD42-10 which is part of a standard range of imaging sensors used for various commercial and high performance applications including ground and space borne astronomy, and spectroscopy. The variant used in ChemCam was back-thinned to maximise sensitivity and coated with a custom graded anti-reflection coating to match the spectroscopic requirements of the mission.

High performance 300 mm backside illumination technology for continuous pixel shrinkage, by D. Yaung (TSMC).
"QE values for a 0.9 um pixel were shown : 50 % in blue, 47 % in green and 45 % in red. The pixels were realized in a 65 nm process with a remaining thickness of the silicon equal to 2 um … 4 um. In the case of the 0.9 um pixel, the optical cross-talk is about 4 times as large as in the 1.1 um version."

A 1.4 um front-side illuminated image sensor with novel light-guiding structure consisting of stacked lightpipes, by H. Watanabe (Panasonic).
"QE in green 74 % in comparison with 69 % for the BSI and 43 % for the FSI without stacked lightpipe."

1. The Sony paper was an embarrassment for Sony. The presenter who claimed significant noise reduction and increased saturation charge could not answer the question as to what the actual value of noise and saturation signal was, saying he was not knowledgeable about the details. This is shameful in a conference like IEDM.

2. TSMC said that they had a lot of particulate problems in wafer bonding leading to many manufacturing issues, including wafer distortion during alignment and "breaking bubbles" etc. They said they were able to now reduce particulates to a lower level. This speaker was good about answering questions so a plus for TSMC. He also called the process "TSMC BSI" and not by his co-authors' company Omnivision. Interesting.

3. A very nicely presented paper, although the 74% QE number is somewhat hard to accept. If true, it is remarkable. Reminds me that Aptina has also said lightpipes with FSI make BSI less necessary. Watanabe says at 1.1 um, FSI might be comparable to BSI using the lightpipe technology. He is not sure about 0.9 um. Personally, this is on my own short list for WKA nominations.

4. Interesting investigation of blinking pixels. The location of the traps was not determined, but there was a lot of interesting statistical data presented.

5. Very interesting results. Too bad we cannot really work on this in the open in the US.

6. Enthusiastically presented student paper.

7. Well, I found this paper interesting, in as much as it is my device but with a life of its own at Samsung. When AT says "pretty good" performance, I think he means "pretty much SOA" performance. Note also that normal two-tap operation throws away 50% of the light (because you also need the quadrature signals) so losing 75% in single-tap is not as bad as it sounds. It is only a ~30% loss in SNR compared to other techniques, but the reduction in FPN makes up for the SNR loss when it comes to determining depth accuracy.

PR Newswire: Based on unaudited results of operations in accordance with Korean GAAP on a non-consolidated basis, Pixelplus revenues for Q2 and Q3 2011 were US$10.5M and US$10.4M respectively, compared to US$5.8M and US$5.9M in Q2 and Q3 a year ago. Net incomes in Q2 and Q3 2011 were US$2.8M and US$2.4M, compared to net incomes of US$0.6M and US$1.4M a year ago. Gross margins for Q2 and Q3 2011 were 40.1% and 39.4%, compared to 36.8% in Q1 2011.

"We continue to design and introduce cutting-edge products and technologies and release to the market other innovative technologies," said S.K. Lee, CEO and Founder of Pixelplus. "For this purpose, we continue to develop our core strategic business for automobile, security and surveillance applications, and positively collaborate with medical endoscope manufacturers in South Korea as well as key distributors and manufacturers in China, Hong Kong, Taiwan, and Japan. In parallel, we continue to vigorously pursue cost-control measures and are encouraged that we continue to effectively manage our operating expenses on a reliable and consistent basis."

PR Newswire: Pixelplus announces that it will terminate its American Depositary Receipts (ADR) Program. The company's ADRs will continue to trade over the counter (OTC) in the US until the date of termination of the Deposit Agreement on February 29, 2012.

"We are inclined to terminate the Deposit Agreement and ADR Program as we do not envision the ADR Program as positively enabling, effectuating, or contributing to our short-term and long-term business goals and strategies now and into the foreseeable future," said S.K. Lee. "In addition, the Company's financial costs and expenses incurred in connection with sustaining the ADR Program poses an undue and unnecessary economic burden which we would like to eliminate in moving forward. For these reasons, we have no choice but to terminate the Deposit Agreement and ADR Program in a timely and effective manner."

Pixelplus expects to join the "Free Board," South Korea's equivalent of the OTC market in the US, in due course.

Applied Materials announces the Applied Producer Optiva CVD system aimed at the manufacture of BSI sensors. "Emerging BSI image sensor designs present a new opportunity for Applied Materials to provide customers with the technology they need to be successful in this rapidly growing market," said Bill McClintock, VP and GM of Applied’s Dielectric Systems and Modules business unit. "The Optiva low temperature process runs on our lightning-fast Producer platform, which is great news for chipmakers looking to satisfy the demand for an estimated 300 million BSI image sensors expected to be needed by 2014."

The Producer Optiva system is capable of depositing low-temperature, conformal films that boost the low-light performance of the sensor while improving its durability. The system enhances the performance of the microlens by covering it with a tough, thin, transparent film layer that reduces reflections and scratches, and protects it from the environment. Importantly, the Optiva tool is the first CVD system to enable >95% conformal deposition at temperatures below 200°C. As typical bonding adhesives have thermal budgets of approximately 200°C, all subsequent processing on these temporarily bonded wafers must be done below that limit.

According to iSuppli, three-quarters of all smartphones will be fitted with BSI sensors in 2014, up from just 14% in 2010.

Monday, December 05, 2011

MESA Imaging has opened a 3D ToF Forum. It is mainly related to MESA products but also covers general issues of ToF in its nice FAQ and Applications sections. There are quite a few postings there and it's open for general discussion.

Saturday, December 03, 2011

New Imaging Technology announces NSC1105, a 1.3MP WDR sensor with a 10.6µm pixel and DR of more than 140 dB in a single frame time. Thanks to its Native Wide Dynamic Range technology, the NSC1105 does not require any setting or exposure time control and always provides a useful image whatever the illumination conditions are.
The NSC1105 also benefits from an extended spectral response in the IR range and good low light sensitivity:
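To put the 140 dB claim in perspective, the dB figure converts directly to a linear intra-scene contrast ratio and to photographic stops (a quick sanity check of my own, assuming the usual 20*log10 sensor DR convention):

```python
import math

dr_db = 140.0                 # claimed single-frame dynamic range
ratio = 10 ** (dr_db / 20.0)  # 20*log10(max/min) convention for sensor DR
stops = math.log2(ratio)      # photographic stops, ~6 dB per stop

print(f"{dr_db:.0f} dB -> {ratio:.0e}:1 scene contrast, ~{stops:.1f} stops")
```

That is a 10,000,000:1 scene contrast, or roughly 23 stops, captured in a single frame without exposure control.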

Friday, December 02, 2011

Business Wire: Samsung announces that it began mass production of 40-inch "Optical Sensor in Pixel" LCD panels in November this year.

The Optical Sensor in Pixel LCD panel detects reflected images of an object on the panel using infrared sensors that are built into the panel. With an optical sensor in each pixel, the new panel can sense touch much more accurately than existing touch panels. The panel can detect more than 50 touch points simultaneously and can display images with Full HD resolution and wide-angle viewing.

All of the input functions of a keyboard, mouse or scanner can be carried out on the panel itself. The panel can be installed in a variety of applications including table top and wall-mounted types. Its tempered glass is strong enough to withstand external loads over 80 kilograms.

As the panel can perform touch and multi-touch sensing and image display simultaneously, it represents a new paradigm for massively interactive communications, compared to the one-way communication of today’s kiosk touch panels.

The Optical Sensor in Pixel LCD panel has been installed in "Samsung SUR40 for Microsoft Surface", a table-type PC product, co-developed by Samsung and Microsoft. SUR40 has been available for pre-order since last month.

"Our Optical Sensor in Pixel panel has overcome the limitations of touch functionality that have hampered the effectiveness of most interactive displays," said Younghwan Park, SVP sales and marketing team, Samsung Electronics LCD Business. "With the world’s first mass production of an Optical Sensor in Pixel LCD, Samsung Electronics has set its sights on taking the lead in the global interactive display market," he added.

"In digital still cameras, some are actually backing off the resolution to deliver better results. It’s said some sanity is coming to the megapixel race. But is there sanity coming to the mobile megapixel race? Will that race end soon?"

Robbert Emery, OmniVision:

"That’s a very interesting question. I think about it a different way: What can I do with more data? With more pixels? We look at digital zoom; it could be better. We look at image stabilization; it could be better. If we look at what happened in the economic downturn, of course, there’s been some slowing in the adoption of higher resolution. But coming out of the economic downturn, the number of lower resolution camera phones shipped is declining, and that of higher resolution phones is increasing. We’re looking at the adoption of 5-, 8-, 10-megapixel and above.

With applications and other ways of using more data, the race is definitely still on for higher resolution."

Paul Gallagher, Samsung:

"Right now it’s difficult to say. I agree we saw a slowdown during the economic downturn, but coming out of it, we’re seeing very aggressive adoption of 8 megapixel, and aggressive interest in 12MP. I think what we need to start looking at is we are starting to hit some cost barriers. Historically, the image sensor shrunk the pixel as a means to reduce its cost or to increase the pixel count. But now with the adoption of BSI, you saw a reset of market price. And when you move into sub-micron pixels, you’re probably going to see another reset. The consequences, from the economic point of view, will cause a slowdown. When you start looking at 16-megapixel third-inch optical-format products, the cost may break the model enough that the OEMs start rethinking when enough is enough."

Lars Nord, Sony Ericsson:

"I think it’s about what specifications people use to select the phone. Right now we don’t have many, so resolution is the one they use. If you compare phones: this has five, this has eight… they’ll take the eight because you get more. More is better? Maybe not every time, but that’s how people think. Until we get some other measures of quality, we will see this megapixel race go on, unfortunately."

Other topics covered at the panel discussion included sensor and optics improvements, recent innovations, and what's next in mobile imaging.

Sharp announces a 12.1MP, 1/3.2-inch CMOS camera module with optical image stabilization and autofocus that features the industry’s thinnest profile - 5.47 mm in height. The new RJ63YC100 is intended for use in mobile devices such as smartphones. Sample shipments will begin from December 2, 2011, volume production from January 10, 2012.

Yole Developpement report on WLCSP devices forecasts: "All in all, ‘fan-in’ WLCSP shows the first early signs of a maturing market with price pressure process standardization, but it still grows faster than the average semiconductor packaging market due to the fast growth rates of smartphones and tablet PCs in which WLCSP considerably helps save space and reduce costs."

Wednesday, November 30, 2011

Toshiba announced a reorganization of its semiconductor production facilities in Japan that affects its discrete, analog and imaging IC businesses. In the analog and imaging IC businesses, Toshiba will continue to promote a shift to production on larger wafers to improve manufacturing efficiency and cost competitiveness. Toshiba has been implementing a series of measures to restructure its discrete and analog and imaging IC businesses, including accelerating the transfer of assembly and test operations to overseas facilities, outsourcing, shifting to larger diameter wafer production lines and halving its product line-up. Regular employees at the affected facilities will, in principle, be redeployed within Toshiba Group.

In addition, Toshiba is temporarily cutting production at some of its semiconductor facilities from late November 2011 to early January 2012, including Oita Operations, which produces analog semiconductors and image sensors. The Oita facility will have a 6-day shutdown during the Year-end and New Year's holidays (Dec. 30-Jan. 4), and production will be reduced beyond that.

"For the second quarter of fiscal 2012, we’re reporting revenues of $217.9 million, down 21.1% sequentially and down 9% on a year-over-year basis."

"Our fiscal 2012 second quarter gross margin was 30.6% compared with the 31.7% that we reported in our prior quarter."

"Our GAAP operating income in the second quarter totaled approximately $19.5 million as compared to $40.6 million the prior quarter."

Shaw Hong, CEO:

"...we encountered an unanticipated cutback in orders from major customers for sensors that were designed into conventional consumer devices. This event derailed our ability to deliver the financial performance that we had forecasted in August."

"...we acknowledge that the company’s near-term financial performance is disappointing."

"I’d like to express my disappointment in the results for our second fiscal quarter. However, prior to Q2, OmniVision posted record revenues in four of its last five quarters. Our focus has always been on execution and technology leadership. That focus remains the heart and soul of the company."

Ray Cisneros, VP Worldwide Marketing:

"Our execution fell short of expectations for our second quarter. This was brought about by a sudden cutback of orders from our largest end-user customers of sensors. We also expect further degradation in demand for our third fiscal quarter."

Some quotes from Q&A session:

Harsh Kumar - Morgan Keegan:

"..the order cutbacks, is that primarily related to one customer or are you seeing cutbacks in orders from multiple customers? And is the cutback in orders related to one product or is that multiple products at one or many customers?"

Ray Cisneros:

"The order cutbacks are associated with several customers and they are associated also with several market segments. It's in several products."

Paul Coster - JPMorgan:

"The product cycle leadership that you had a year or so ago seems to have evaporated. Are we correct in assuming that you’re losing market share here?"

Ray Cisneros:

"It’s difficult for us to draw conclusions on market share. If you look at the 2011 calendar year, we are looking at a fairly robust number of units OmniVision delivered to the marketplace once 2011 is tallied up."

Betsy Van Hees - Wedbush Securities:

"...I was wondering if we could go back to BSI-2? So the 8830 is shipping in limited volumes. Is it shipping in production to anyone, or is it qualified, so that if there were a teardown of a product, for example, we would be able to see it?"

Ray Cisneros:

"Yes. It is shipping in production. It is shipping in the customer’s product in production. If you find that product you could tear it down and find our product in there, but obviously due to confidentiality agreements we have we can’t divulge particular names and products, but it is in the marketplace."

Betsy Van Hees - Wedbush Securities:

"And then you talked about ASP pressure... That is something we haven’t heard in a while. So could you talk a little bit more about that ASP pressure, where you are seeing it and if it is broad-based or specific to certain market segments?"

Ray Cisneros:

"...the product mix going from a slant of higher resolution products to lower resolution products, due to the cutback in orders we experienced in Q3... that’s the big driving factor for the step-down in price. And then, what I mentioned in my prepared commentary is normal quarter-over-quarter price erosion. It was as simple as that. That's common and fair."

Raji Gill - Needham & Company:

"A question on the gross margins. If you look at the guidance, would it be fair to say that the implied gross margin guidance would be in the 28% range?"

Anson Chan:

"...In terms of unit shipments for the BSI-2 products, because again these products are still in a very early stage of introduction, the yield is below optimal. So the more we ship, the less the margin will be, and that will present itself as a headwind."

Raji Gill - Needham & Company:

"...if I do the math, and it seems like the camera phone business was down about 24% sequentially... And then PCs were down like 53% sequentially... maybe ... you could comment on the competitive landscape coming from other competitors?"

Ray Cisneros:

"...we’ve been in business for 15 years in this space, and we've seen a lot of competitors come and go and this is no different. That said, however, we always respect our competition. Everybody is moving very fast with technology... and we like our chances."

"The real blow to the stock price came from the Q3 outlook. The company expects fiscal third quarter 2012 revenues will be in the range of $160 million to $180 million [well below consensus of $201.4M]. This is an indication that the company may have lost more key contracts than merely the originally feared Apple iPhone 4s contract."

"While they took a hit this most recent quarter, Omnivision is still cash rich with $464 million of net cash currently on their balance sheet. They are also unhindered by long term debt with less than $50 million reported on the balance sheet."

Tuesday, November 29, 2011

Business Wire: Aptina announces the AS0260 SOC sensor. The 2MP native 1080p SOC meets strict form factor requirements (z-height less than 3.5mm) for ultra-thin, full HD video applications within the video-centric consumer electronics market. The new SOC has a 1/6-inch optical format and a new 1.4-micron pixel featuring A-Pix technology to improve low-light performance. The new SOC provides HD video at 1080p/30fps or 720p/60fps with image processing capabilities.

The AS0260 provides SOC-specific features including integrated multi-camera synchronization for stereo or 3D cameras, perspective correction for off-axis camera placement, adaptive polynomial lens shading correction, UVC interface support for USB/ISP bridge devices, as well as automatic image correction and enhancement. Additionally, the AS0260 provides OEMs with advantages over many other Full HD (or 1080p) solutions currently on the market with MJPEG formatted data output to enable video streaming with reduced bandwidth; a presence detection feature combined with ambient light sensing for system power savings, and face detection and tracking capability for identity and security applications.

AS0260 is currently being sampled by several tier-1 OEMs. Mass production is scheduled for CYQ1 2012. It is available in Die and CSP packages.

PR Newswire: Among other video solutions announced today, proDAD launches Mercalli Easy, Windows-based video stabilization software that also features rolling-shutter distortion compensation. The software is said to automatically correct/improve rolling-shutter-caused skew and wobble - a feature appearing in more and more video processing software products.

"At AltaSens we have significant circuit verification challenges to deliver CMOS image sensors with the highest image quality along with lower noise and lower power in a cost-effective solution," said Manjunath Bhat, VP of Engineering at AltaSens. "With the AFS Platform we can run post-layout verification of our high-performance wide-dynamic-range HD CMOS image sensors with nanometer SPICE accuracy. When running AFS single core we get performance on par with our existing parallel simulator running on 8 cores."

The Analog FastSPICE is said to deliver SPICE accuracy 5x-10x faster than any other simulator on a single core and an additional 2x-4x performance with multithreading. For circuit characterization, the AFS Platform includes the device noise analysis and delivers near-linear performance scaling with the number of cores. For large circuits, it is said to deliver more than 10M-element capacity, the industry’s fastest near-SPICE-accurate simulation, and mixed analog-digital co-simulation with leading Verilog simulators. Available licenses include AFS circuit simulation, AFS Transient Noise Analysis, AFS RF Analysis, AFS Co-Simulation, and AFS Nano SPICE.

PR Newswire: OmniVision announced that its OV6930 CMOS sensor performed exceptionally well in a recent 448-patient clinical study on Avantis Medical Systems' Third Eye Retroscope. The Third Eye Retroscope is an FDA-cleared, disposable, catheter-based camera for use with a standard colonoscope that provides a continuous backward-looking view while the colonoscope provides the usual forward view. In 2010, Avantis received a Medicare and Medicaid reimbursement code for the Third Eye Retroscope.

The OV6930 is a CMOS sensor designed specifically for use in medical devices. With a packaged footprint of only 1.8 x 1.8 mm, the OV6930 is an ideal solution for endoscopic applications that require a small profile, including bronchoscopy, colonoscopy, gastroscopy, OB-GYN and urology. It has low power consumption and is based on OmniPixel3-HS technology with low-light performance of 3300 mV/lux-sec. Its 1/10-inch array is capable of operating up to 30 fps in 400 x 400 HVGA or 60 fps in 400 x 200 resolution, providing RAW serial output. The low-voltage OV6930 allows cabling up to 14 feet, and is now shipping in volume to multiple medical device customers.