Saturday, March 31, 2012

Analog Devices introduces the ADSP-BF608 and ADSP-BF609 Blackfin DSPs featuring a high-performance video analytics accelerator, the Pipelined Vision Processor (PVP). The PVP comprises a set of configurable processing blocks designed to accelerate up to five concurrent image algorithms, enabling a very high level of analytics performance. These processors are well suited to applications such as automotive advanced driver assistance systems (ADAS), industrial machine vision, and security/surveillance systems.

"With over a decade of research on optimizing and shrinking global shutter pixels we are proud to unveil our latest advances in high performance global shutter technology," said David Zimpfer, GM of Aptina’s Automotive Industrial Business Unit. "By shrinking the global shutter pixel to 3.75-microns we are able to provide high-speed motion capture capability in stunning HD resolution in the standard 1/3-inch optical format."

The 1.2MP sensors can operate at 45 fps at full 1280x960 pixel resolution or at 60fps at 720p HD resolution (with a reduced FOV). The power consumption is 270mW in 720p60 mode. The dynamic range is 83.5dB - quite high for a global shutter sensor. The responsivity at 550nm is 8.5 V/lux-sec.
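
The 83.5dB figure is simply the ratio of the largest to the smallest resolvable signal expressed in decibels. A minimal sketch of the conversion (the full-well and read-noise electron counts below are hypothetical, chosen only to land near the quoted figure; Aptina has not published them here):

```python
import math

def dynamic_range_db(full_well_e: float, read_noise_e: float) -> float:
    """Dynamic range in dB: ratio of the largest signal a pixel can hold
    (full-well capacity) to the smallest it can resolve (read noise)."""
    return 20.0 * math.log10(full_well_e / read_noise_e)

# Hypothetical electron counts chosen only to land near the quoted figure:
print(round(dynamic_range_db(15000, 1.0), 1))  # ~83.5 dB
```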

The only difference between the MT9M031 and MT9M021 sensors appears to be the package type. Both sensors are currently sampling, with full production start expected in Q2 2012.

Business Wire: Another Aptina sensor announced today is the 1/3-inch 1.2MP AR0130CS with a conventional rolling shutter. The sensor features extended IR performance in the 850-900nm range. Its QE at 830nm is 26.8%. The responsivity at 550nm is 5.5 V/lux-sec. SNRmax is 44dB.

Other than the pixel parameters and the rolling shutter, the AR0130CS appears to be identical to the newly announced global shutter counterparts: the same 83.5dB DR, the same 270mW power in 720p60 mode, and the same 45 fps speed at full 1280x960 resolution.

"The AR0130CS provides the surveillance market with a path to upgrade legacy CCTV cameras to high resolution 600-1000 TV line CCTV, or move directly to an HD IP camera solution," says David Zimpfer.

Business Wire: GEO Semiconductor announces the availability of its new security camera reference design jointly created by GEO and Aptina. Code-named Janus, the reference design features GEO’s AnyView technology.

Elimination of multiple cameras by selecting and displaying multiple (1-8) views of any size from the fisheye input while performing real-time full-HD de-warping with independent dynamic pan, tilt, and zoom in each of the windows.
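
GEO's AnyView de-warping is proprietary, but the core operation in any fisheye de-warper is an inverse mapping: for each pixel of the rectilinear output window, compute where to sample the fisheye input. A minimal sketch assuming an idealized equidistant fisheye model (r = f * theta; a real lens needs calibrated distortion coefficients, and the function name and parameters here are illustrative):

```python
import math

def rectilinear_to_fisheye(u, v, f_rect, f_fish, cx, cy):
    """For an output pixel (u, v) of a virtual rectilinear (de-warped)
    view, return the coordinate to sample in the fisheye source image.
    Assumes the idealized equidistant model r = f_fish * theta and a
    shared principal point (cx, cy) for both images."""
    x, y = u - cx, v - cy
    r_rect = math.hypot(x, y)
    if r_rect == 0.0:
        return cx, cy                      # optical axis maps to itself
    theta = math.atan2(r_rect, f_rect)     # ray angle from the optical axis
    r_fish = f_fish * theta                # equidistant projection radius
    scale = r_fish / r_rect
    return cx + x * scale, cy + y * scale
```

Each of the 1-8 display windows would compose a mapping like this with its own pan/tilt/zoom rotation and then interpolate the fisheye image at the returned coordinates.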

PR Newswire: Himax has placed a repeat order for EV Group's (EVG) IQ Aligner UV nanoimprint lithography (UV-NIL) system. The IQ Aligner will be used to support Himax's capacity increase in the production of wafer-level cameras for mobile phones, notebooks and other consumer electronic devices, as well as to support the increasingly stringent manufacturing requirements for wafer-level cameras demanded by Himax's customer base. The IQ Aligner will be shipped and installed at Himax's manufacturing facility in Tainan, Taiwan.

"This adds to our already advanced manufacturing capabilities for CMOS image sensors, and provides us with a key competitive edge by enabling us to offer a complete manufacturing solution to the mobile handset market", said HC Chen, fab director at Himax.

Hynix got an investment from South Korean SK Group, changed its name to SK Hynix, and declared its "plans to further strengthen its mobile business such as ... CMOS Image Sensor following the new IT trend in view of the application shift from PC-based to the mobile-centered".

Monday, March 26, 2012

As Eric Fossum mentioned in the comments, his presentation from the Image Sensors 2012 Conference is available on-line here. The presentation discusses possible paradigm shifts in image sensors, including the QIS idea:

MarketWire: Lattice announces that it has released a bridge design to interface the Sony IMX036/IMX136 image sensors to parallel-input ISPs.
The Lattice MachXO2-1200 FPGA interfaces directly to the subLVDS I/Os of the Sony IMX136, and no external discrete components are required. The image sensor bridge application can support full HD 1080p resolution at 60fps with a 12-bit ISP interface. The design code in the MachXO2 device can also be modified easily to accommodate support for the full 1080p120 capability of the Sony IMX136 for customers who need this functionality.

Sony Sub-LVDS-to-Parallel Sensor Interface Bridging

MarketWire: Some time ago Lattice announced Panasonic MN34041 1080p support in its HDR-60 Video Camera Development Kit. The sensor is fully supported with a 60 fps color ISP pipeline implemented on a LatticeECP3 FPGA within the Lattice HDR-60 Kit.

"It's quite clear it's a development announcement more than a retailable proposition, the technology is not new, it's only what our cameras have done for about a year now."

Sony cameras use "pixel digital zoom", which groups pixels together for increased sensitivity.

"In that respect, it's not especially stand out, but within the mobile sector, yes it is, so I can understand why it's drawn an awful lot of attention," Genge said. "But, it is still only a technological announcement, it's not a plausible retail solution yet."

Nokia camera group leader Damian Dinning responds in comments:

"1. I am delighted to say (as per the information we previously disclosed during the 808's announcement) the Nokia 808 PureView IS a product that will be available during Q2 of this year.

2. The algorithms we needed to develop to provide the incredible detail the 808 PureView captures and creates in just 5mpix easy to share images were developed by Nokia and are the basis of Nokia proprietary technology.

3. We know of no other camera that uses a high resolution [41mpix] sensor in the unique ways we do to provide the following benefits:

ii) despite the high levels of detail, file sizes are far smaller (because the pixels are purer) and therefore faster and easier to upload straight from the device. Which of course our devices have had the capability to do for many years.

iii) LOSSLESS zoom in full HD video and stills. There is NO upscaling used in ANY way in the 808 PureView. Unlike many digital cameras which rely on upscaling for digital zoom. Whilst some digital zoom implementations simply crop the sensor to provide a feeling of zoom. In our case when we are cropping (unless at full zoom) we have an abundance of pixels. We put those pixels to extremely effective use by oversampling the data from those pixels.

iv) One of the most important benefits of Nokia's proprietary pixel oversampling is that it retains the information you want (the detail), whilst filtering out most of the information you don't (noise). This is most noticeable in low light. Pixel oversampling is NOT the same as pixel binning. Others may be using binning but Nokia is not in the case of pixel oversampling. We are also NOT interpolating to create pixels that represent completely false information. As said we only oversample information originally captured by our super high resolution sensor and optics. The level of oversampling is as high as 16:1 in the case of full HD video. No other device I know of has such capability.

v) Using this method of zoom not only provides high image quality in a compact device but it also provides a silent zoom as well as allows the maximum aperture to be used even at full zoom."
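
Nokia's oversampling algorithms are proprietary, but the statistical benefit the quote describes can be illustrated with plain block averaging (a deliberately naive stand-in, not the PureView pipeline): averaging 16 pixels into one cuts uncorrelated noise by sqrt(16) = 4 while scene detail, band-limited to the output resolution, survives.

```python
import numpy as np

def oversample(raw: np.ndarray, factor: int = 4) -> np.ndarray:
    """Average factor x factor pixel blocks into one output pixel
    (16:1 for factor=4). Averaging N uncorrelated-noise pixels cuts
    the noise by sqrt(N) while keeping detail up to the output grid."""
    h = raw.shape[0] - raw.shape[0] % factor   # crop to a multiple of factor
    w = raw.shape[1] - raw.shape[1] % factor
    blocks = raw[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

# Pure-noise demo: the standard deviation drops by ~sqrt(16) = 4
rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, (400, 400))
print(noise.std(), oversample(noise).std())  # ~1.0 vs ~0.25
```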

Saturday, March 24, 2012

DxOMark, possibly the biggest sensor database in the world, proclaimed the Nikon D800 sensor the best one it has ever analysed. The 4.7um-pixel, 36.8MP full-frame sensor has shown 14.4 stops of DR and color reproduction comparable with medium-format sensors, and earned a score of 95 - the highest ever in the database.

The image sensor is the most expensive of the four parts, accounting for roughly 30-50% of the cost of the entire CMOS camera module.

In the lens domain, all manufacturers are facing squeezed profits. This is a labor-intensive industry, and experienced workers are in ever-shorter supply, which is especially obvious at the production bases in mainland China. On the one hand, labor costs are increasing; on the other, upstream raw material prices rose in 2011. Meanwhile, lenses below 8MP dropped in price because of tough competition.

In 2011, the revenues of medium- and small-scale lens manufacturers fell to varying extents, with their profits plummeting. That was not the case for Taiwan-based Largan Precision and Genius Electronics Optical, both of which saw soaring revenues. The two firms handle all the lens business for Apple's camera modules: Largan Precision dominates the high-end market, while Genius Electronics Optical occupies the mid- and low-end market. Genius Electronics Optical, whose revenue grew by 136% in 2011, is the leading iPad camera module lens provider and the only supplier of iPhone VGA camera module lenses. Nonetheless, the gross margins of the two declined.

In the camera module field, benefiting from the "Apple effect", the business of LG Innotek, the major camera module provider for Apple, grew by more than 100% in 2011, compared to less than 10% growth for the industry as a whole. The three camera module suppliers approved by Apple are LG Innotek, Sharp, and Primax.

Sharp, also a major supplier to Nokia, is Apple's second supplier, while Taiwan-based Primax Electronics Ltd focuses on low-end products. Furthermore, Vistapoint, under Flextronics, once served as Apple's supplier, but rising wages in mainland China forced it to sell its plant in Zhuhai and scale back the business in March 2012. Vistapoint has been excluded from the supplier list published by Apple in 2012.

Cambridge Mechatronics Ltd (CML) announced that it has made working prototypes of its latest Optical Image Stabilisation (OIS) and Continuous Autofocus (CAF) lens actuator design.

CML's first OIS-related announcement unveiled its Smart metal OIS camera module Tilt, or SOT, architecture. That approach, which the company (with its partners) has developed to the point where it is scheduled for mass production later this year, retains the standard 8.5mm square camera footprint and is optimised for camera performance and time-to-market.

The most recent prototypes are based on an architecture called Smart metal OIS lens barrel Shift, or SOS, which also preserves the 8.5mm square footprint but is optimised for low camera z-height and cost. CML sees SOS entering mass production in late 2013.

The actuator structure is simplified and the camera integration process is straightforward. This will result in a lower cost OIS camera.

All the above means that CML is targeting this OIS camera for mainstream smartphones.

Both architectures facilitate devices that provide high quality rapid CAF allowing for point and click image capture at 13 MPixels and above. Smart metal technology also consumes significantly less power than the VCM most often found moving lenses in today's smartphone AF cameras.

CML believes that its two architectures will co-exist. SOT will always provide the best OIS performance across the whole image, as much as 4 optical stops of handshake suppression even in the corners. However it will add 0.3mm of z-height to the camera. Alternatively, SOS will add nothing to the overall z-height of the camera. This means that with the latest wide field of view lenses a camera height of 4.0mm can be achieved, almost 2.0mm lower than current smartphone cameras. At this dimension the camera will no longer be dictating the thickness of the handset. As SOS is mechanically simpler than SOT, the manufacturing cost of the camera will be lower.

CML built the SOS prototypes using parts injection moulded by one of its manufacturing licensees, Actuator Solutions GmbH (ASG). CML is currently optimizing the micro-electronic control of the SOS actuators and building fully functional SOS camera systems. ASG and Seiko Instruments Inc (SII) (another publicly announced manufacturing licensee of CML) are working with multiple major camera module makers, including Foxconn, to deliver SOT and rapid CAF cameras into mass production before the end of 2012.

Sony announces "IPELA ENGINE", capable of the industry’s first 130dB WDR in full HD quality at 30fps speed. The "IPELA Engine" is the general term of Sony's integrated signal processing system for high picture quality which combines the company's unique signal processing and video analytics technologies. The “IPELA Engine” is composed mainly of the four components below:

1. View-DR:

This is Sony's name for locally tuning contrast and adaptively correcting tone in light and dark areas by combining images taken at varying shutter speeds within a single frame.

2. High Frame Rate:

Full HD (1920 x 1080) video is possible at 60fps.

3. DEPA Advanced:

The functions for detecting moving objects, humans, and objects blocking the view, among others, have been enhanced through an alarm detection function using image processing.

4. XDNR (Excellent Dynamic Noise Reduction):

The detection and removal of noise within a single frame is combined with the reduction of noise from differential signals in the consecutive frames.
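
Sony's XDNR implementation is not public, but the combination it describes - single-frame (spatial) filtering plus inter-frame (temporal) filtering - can be sketched generically. All parameters below are illustrative, and the motion gate is a crude stand-in for whatever Sony actually uses to avoid ghosting:

```python
import numpy as np

def denoise(frame, prev_denoised, alpha=0.5, motion_thresh=0.1):
    """Spatial plus temporal noise reduction with a simple motion gate:
    - spatial: 3x3 box average within the current frame;
    - temporal: recursive blend with the previous output, applied only
      where consecutive frames agree (static areas)."""
    f = np.asarray(frame, dtype=float)
    p = np.pad(f, 1, mode='edge')
    spatial = sum(p[i:i + f.shape[0], j:j + f.shape[1]]
                  for i in range(3) for j in range(3)) / 9.0
    if prev_denoised is None:
        return spatial
    diff = np.abs(spatial - prev_denoised)
    return np.where(diff < motion_thresh,
                    alpha * prev_denoised + (1 - alpha) * spatial,  # static
                    spatial)                                        # moving
```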

The "IPELA ENGINE" will be incorporated into new security camera products from Fall 2012.
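
Of the four components, View-DR's combination of frames taken at different shutter speeds is essentially multi-exposure HDR fusion. A generic two-exposure sketch (not Sony's algorithm; linear sensor response assumed and all names and thresholds illustrative):

```python
import numpy as np

def fuse_exposures(long_exp, short_exp, ratio, sat=0.95):
    """Merge a long and a short exposure of the same scene into one
    linear high-dynamic-range frame: keep the long exposure where it
    is unsaturated (better SNR in shadows) and substitute the short
    exposure, scaled by the exposure-time ratio, where it clips."""
    long_exp = np.asarray(long_exp, dtype=float)
    short_exp = np.asarray(short_exp, dtype=float)
    clipped = long_exp >= sat
    return np.where(clipped, short_exp * ratio, long_exp)

# Second pixel clips in the long exposure, so the scaled short one is used:
print(fuse_exposures([0.5, 1.0], [0.05, 0.3], ratio=8))  # [0.5 2.4]
```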

The MIT Camera Culture group published a paper proposing a way to see around corners. The MIT 3D range camera is said to be able to look around a corner using diffusely reflected light, achieving sub-millimetre depth precision and centimetre lateral precision. A Youtube video shows its principle:

Wednesday, March 21, 2012

Imaging Resource reports that Fujifilm intends to re-design its latest sensors due to excessive blooming. The sensors are used in the Fujifilm X10 and Fujifilm X-S1 cameras. Fujifilm USA said that the redesigned sensor should arrive in late May. The picture below shows the 33-pixel-sized blooming orbs that appear in some conditions:

Albert Theuwissen reports from Intertech-Pira Image Sensors 2012 conference being held these days in London, UK.

Nobukazu Teranishi of Panasonic talked about "Dark Current and White Blemishes" in sensors. After discussing different gettering techniques (internal, external, and proximity gettering), Teranishi-san presented a new dark current generation model for pinned photodiodes. His conclusion was that the best dark current is achieved by using channel-stop diffusions in place of LOCOS and STI isolation.

Officially introduced in Q4 2011, the ELiiXA+ camera range and the Ruby image sensor family are demonstrating the successful relationship between e2v and TowerJazz. The new product families join several of e2v's products already running in volume production at TowerJazz's Fab 2, including sensors for industrial, medical, scientific and space applications. With a strong relationship of over six years, e2v has been progressively increasing production at TowerJazz to match demand for these advanced sensor solutions.

According to Yole Development's Image Sensor market research, the machine vision market is expected to be $88M by 2015 with a CAGR of 23%.

Globes' market source told: "The increase in production by this customer will boost Tower's revenue by $10 million a year within two years."

PRWeb: e-con Systems, an embedded design services company specializing in the development of advanced camera solutions, announces what it says is the world's first stereo vision camera reference design based on TI's OMAP/DM37x family of processors and Aptina's 1/3-inch global shutter monochrome WVGA image sensor, the MT9V024. The Capella reference design is aimed at machine vision, robotics, 3D object recognition, and other applications.

Image signal processing algorithms which account for effectiveness in smart functionalities, such as smart pattern and motion recognition, etc.

Methodology of analysis and characterization at the pixel and system level for advanced smart imaging devices.

Research questions

Why do the proposed new smart functionalities bear technological impact and possibly open new consumer electronics markets with regard to image sensors?

How can the proposition be realized with new architectures?

How can the proposition be realized in practice? For instance, is the proposition achievable with current CMOS technologies? Would the power consumption and operational speed be acceptable?

Why does the proposed methodology of analysis and characterization bear academic and technological importance and effectiveness?

Subject 6: Si Photonic Biosensor for Healthcare

Scope

Smart biosensor technologies for healthcare, especially in areas such as cancer and virus detection, glucose monitoring, and DNA sequencing

Saturday, March 17, 2012

Imaging Resource published an interview with Samsung Digital Imaging Business execs. A few interesting quotes:

Byungdeok Nam, SVP, R&D Team, Digital Imaging Business says: "It usually takes about a year and a half to two years to develop sensors, and we have what are called test vehicles, where on a wafer we can try different samples of sensors with different technologies. Of all these different sensors, we see which is most strong, appropriate or optimal for us, and then we concentrate our development of that technology with that sensor. So in the beginning, we would have many different samples of sensors, and we would then do the evaluation, and decide on one sensor, and then do the development on that sensor."

Byungdeok Nam responds on the smartphones vs. digital cameras question: "Well, basically, the OS for cameras and the OS for smartphones are different. Right now, phones have more processing power and they have more memory. So semiconductor companies are providing products that are needed by the smartphone companies, but I think that the same goes for cameras. I guess that in a year or two, cameras can have the same processing power or memory as smartphones."

"Super Hi-Vision (SHV) is a future broadcast system that will give viewers a great sensation of reality. SHV consists of an extremely high-resolution (16 times that of HDTV) imagery system and a 22.2 channel super surround multi-channel sound system.
We are now proposing to extend its frame frequency from 60 Hz to 120 Hz to improve the motion picture quality, and to have a wide-gamut colorimetry for better color reproduction. We call this new SHV system "full-spec SHV"."

EETimes: Apical announced a licensing agreement with TI in which TI will use Apical’s iridix ISP IP cores in future products.

iridix acts as a central component in high dynamic range imaging and also helps address several imaging design challenges for converged mobile imaging devices.

The iridix image processing IP cores will be integrated into TI products targeting digital imaging and display applications. Apical also licensed its ISP IP to Samsung in 2009, Hynix in 2010, HiSense in 2011, and HiSilicon in 2012.

Thursday, March 15, 2012

Microsoft Research held a TechFest event on March 6, 2012 where it presented new 3D-mapping webcams, shown in this Youtube video (a higher resolution version directly from the Microsoft site is here):

Wednesday, March 14, 2012

IMS Research believes Apple will need to embrace embedded vision-based technologies in its next product releases in order for the company to maintain its competitive edge.

Competitors such as Samsung and Microsoft have steadily begun integrating these technologies in recent releases and several more have products slated for debut in the next year, as competitive differentiators to employ against Apple. These technologies will also become commonplace in the years to come.

Apple’s competitors are also more aggressively deploying camera-based gesture recognition applications. Microsoft has already shown its commitment to gesture control with the Xbox 360 and upcoming Windows 8 platforms, along with gesture-friendly common interfaces across devices. Windows 8-based laptops and tablets incorporating gesture control with either standard or enhanced front-facing cameras are debuting this year. Android-based smartphones and tablets incorporating gesture control will debut in volume in late 2012. In the home video arena, where Apple has significant aspirations, Samsung is only the first of several major consumer electronics companies to debut camera-based gesture recognition this year in its Smart TVs. Vision-based applications are thus expected to be a competitive differentiator going forward.

Business Wire: Mixel and Graphin announce what they call the world’s first end-to-end video transmission over a MIPI M-PHY link. In 2010, the two companies established a strategic partnership to address the emerging M-PHY and to produce a “Golden M-PHY” IC to be used in Graphin’s evaluation system. As a result of that collaboration, Mixel achieved first-silicon success with its M-PHY test chip supporting all use cases, and was the first and only IP provider to demonstrate that capability in the MIPI face-to-face meeting in Copenhagen in June 2011. The companies will now be demonstrating end-to-end video transmission using the Mixel chip in the MIPI Alliance face-to-face meeting in Seoul, Korea on March 13th.

Mixel’s M-PHY IP supports both TYPE I and TYPE II operation, A and B data rates, and all current and future MIPI M-PHY use-cases, such as DigRF v4, UniProSM 1.4, CSI-3, LLI, and JEDEC’s UFS. The MXL-MIPI-M-PHY-HSG2 supports High-Speed (HS) Gear1 (G1) and Gear2 (G2), as well as Low-Speed Gear 0 (LS-G0) through LS-G7. The IP supports version 1.0 of the M-PHY specification and has been silicon-proven for over a year now.

Tuesday, March 13, 2012

PR Newswire: TowerJazz announces its TS11IS hybrid CIS process, a combination of its 0.11um and 0.16um platforms. The TS11IS combines TowerJazz's 0.16um CMOS for the periphery circuits with its 0.11um pushed design rules for the pixel array. The process is targeted at high-end photography, machine vision, 3D imaging, and security sensor applications.

The new platform, based on Tower's 0.16um CMOS shrink process, will allow easy re-use of existing customers' 0.18um circuit IP, which will save them from investing resources to redesign existing blocks and increase the probability of first-time success. The TS11IS offers improved pixel performance, smaller pixel pitch, higher resolution, improved sensitivity, and improved angular response. It allows up to a 50% reduction of pixel size, mainly for high-end global shutter pixels.

The platform includes a new local interconnect layer to allow denser metallization routing in pixels while maintaining good QE. It also includes tighter design rules for all metal layers and implant layers as well as provides a "Bathtub" option for lower stack height, improving the sensors' angular response.

"By allowing significantly smaller pixels, higher resolution and enhanced pixel performance, our new platform ideally serves our customers' needs for the professional CIS markets, allowing them to create new business opportunities, expand the span of applications accessible for their designs, and enlarge their market share," said Jonathan Gendler, Director of CIS Marketing. "We have received enthusiastic feedback from all of our customers on the opportunity to keep working with our established process environment and reuse their design block IP, while being able to shrink the pixel array and die size. This new platform not only improves the cost model of their products, but at the same time enhances device performance."

The new hybrid CIS process platform will be offered for prototyping for select customers in Q3 2012, and for production towards the end of 2012. The new process and other advances will be showcased at the Image Sensors (IS) conference in London on March 20-22, 2012.

According to Yole Development, the forecast for high-end CMOS image sensors is expected to be ~$2B in 2015 with a CAGR of 13%.
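
As a sanity check on such forecasts, the CAGR arithmetic is straightforward. The base-year value below is back-computed from the quoted ~$2B / 13% figures and is an assumption, since Yole's base year and base value are not given here:

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate over `years` periods."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

def project(start_value, rate, years):
    """Value after compounding `rate` for `years` periods."""
    return start_value * (1.0 + rate) ** years

# Assumed 2011 base of ~$1.23B compounding at 13% reaches ~$2B by 2015:
print(round(project(1.23e9, 0.13, 4) / 1e9, 2))  # ~2.0
```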

Albert Theuwissen discusses color shading measurements in the latest post of his "How to measure..." series: "Even if the shading component is small, it can result in (minor) changes in spectral response across the sensor. These types of errors can have a severe effect on colour shading in a colour sensor and can make colour reconstruction pretty complicated. So it is absolutely worthwhile to check out the shading under light conditions."

Monday, March 12, 2012

There is an HDR course planned for SPIE Defense, Security+Sensing Conference in Baltimore, MD on April 23-27, 2012. "High Dynamic Range Imaging: Sensors and Architectures" 4-hour course by Arnaud Darmont, Aphesa "describes various sensor and pixel architectures to achieve high dynamic range imaging as well as software approaches to make high dynamic range images out of lower dynamic range sensors or image sets".

Sunday, March 11, 2012

Globes: PrimeSense is firing 50 of its 190 employees. The company is holding hearings for employees today, ahead of sending them pink slips later this week. PrimeSense is cutting its workforce in all departments: marketing, R&D, and operations.

Friday, March 09, 2012

Laser Focus World: The Fraunhofer Institute for Microelectronic Circuits and Systems (IMS; Duisburg, Germany) lateral-drift-field photodiode (LDPD) achieves complete charge transfer from the pixel into the readout node in just 30ns - quite an achievement for a 40 sq. um pixel. The researchers used the LDPD to create a 128 x 96 pixel ToF sensor, and a human arm was easily imaged in 3D using the sensor within a standard camera setup in conjunction with a 905nm pulsed source (with a pulse duration of 30ns) operated at 10kHz. The responsivity of the LDPD was 230 μV/W/m2 and the dynamic range was about 60dB. The sensor is made in a 0.35um process. The pixel fill factor is 38%.

"The photodiode is divided in two main parts: a pinned surface one and a part which resembles a buried CCD cell, as it can be observed in Fig. 1. The pixels and the entire sensor have been fabricated in the 2P4M automotive certified 0.35 μm CMOS technology at the Fraunhofer IMS with the addition of an extra surface-pinned n-well yielding a non-uniform lateral doping profile, as shown in Fig. 1 (upper picture). The doping concentration gradient of the extra n-well was chosen in such a way that it induces an intrinsic lateral drift field parallel to the Si-surface in the direction of the pixel readout node (x-axis in Fig. 1) as well as from the periphery of the n-well in the direction of the n-well centre (y-axis in Fig. 1).

The potential distribution within this intrinsic lateral drift-field photodiode (LDPD) n-well resembles a hopper leading the photogenerated charge directly to the assigned readout nodes. It remains fully depleted during operation, sandwiched between the substrate and a grounded p+ pinning layer on top of it (see Fig. 1). In this manner, the almost noiseless reset and readout operations of the photodetector are enabled.

A buried collection-gate (CG) is fabricated at the one end of the n-well, which remains biased at a certain voltage VCG. It induces an additional electrostatic potential maximum in the system and enables the proper and symmetrical distribution of the signal charge among the readout nodes. Each of the four transfer-gates (TX) plays two main roles:

1) it serves to create a potential barrier in the well to prevent the collected charge from being transferred into any of the three "floating" diffusions (FD) aimed at pixel readout or the so-called "draining" diffusion (DD) permanently biased at a reset potential;
2) it facilitates the transport of the photocharge into the desired FD or the DD."

Wednesday, March 07, 2012

The official Nokia Conversations blog published a post by Damian Dinning on the journey to the 41MP camera phone. Some quotes:

"the innovation and news is NOT the number of pixels but rather HOW those pixels are used."

"For some of our team, it’s taken over five years to bring this to the market..."

"After developing several optical zoom modules, we were still seeing significant performance trade-offs caused by optical zoom: performance in low light; image sharpness at both ends of the zoom range; audible noise problems; slow zooming speed and lost focus when zooming during video. We became convinced this could never be the great experience we once hoped. You’d need to accept a bigger, more expensive device with poor f no., a small and noisy image sensor and lower optical resolution just to be able to zoom."

"We had often debated that, for the vast majority, 5-megapixels completely fulfils their real world needs, but the market for many years has been pixels, pixels, pixels. It’s hard to block that out. Our friends at Carl Zeiss believed the same."

Tuesday, March 06, 2012

Reuters: Baird expects Omnivision to supply a 5MP sensor for the rear camera and a 1MP sensor for the front camera of the iPad3. Baird also said Omnivision may supply sensors for an upcoming iPad mini.

The brokerage expects Sony to remain the rear-camera supplier of the iPhone5, but believes OmniVision could be a potential second supplier.

The iPhone 4S camera has also been reverse engineered, and Sony's 8MP BSI IMX145 sensor, made in a 90nm process, was identified. For both the iPhone back and front cameras, Tong Hsing (the former ImPac) was identified as the ceramic packaging supplier and LG Innotek as the module vendor.

The only difference from the AR0330 sensor announced a year ago appears to be the 6.28mm x 6.65mm CSP package and a slower frame rate of 30fps in 1080p mode, whereas last year's sensor had a ceramic package and was capable of 60fps at 1080p resolution.

The AR0330CS is currently sampling with mass production expected in Q2 CY2012.

Omnivision's patent application US20120038014 proposes a stress film to passivate the backside surface:

"For a BSI CMOS image sensor, dark currents may be a particular problem. A typical BSI CMOS image sensor has dark current levels that are over 100 times greater than that of a front side illuminated sensor.

A BSI image sensor's backside surface stress may affect its dark current level. The present application discloses utilizing structures and methods to adjust the stress on a CMOS image sensor's backside silicon surface, thereby reducing the dark current effect by facilitating the movement of photo generated charge carriers away from the backside surface.

Stress on a backside silicon surface may be adjusted by forming a stress loaded layer on the surface. A stress loaded layer may include materials such as metal, organic compounds, inorganic compounds, or otherwise. For example, the stress loaded layer may include a silicon oxide (SiO2) film, a silicon nitride (SiNx) film, a silicon oxynitride (SiOxNy) film, or a combination thereof."

Apple patent application US20120044328 proposes to split the image sensor into three - one luminance sensor and two chrominance ones:

"Typically, the luminance portion of a color image may have a greater influence on the overall image resolution than the chrominance portion. This effect can be at least partially attributed to the structure of the human eye, which includes a higher density of rods for sensing luminance than cones for sensing color.

While an image sensing device that emphasizes luminance over chrominance generally does not perceptibly compromise the resolution of the produced image, color information can be lost if the luminance and chrominance sensors are connected to separate optical lens trains, and a “blind” region of the luminance sensor is offset from the “blind” region of the chrominance sensor. One example of such a blind region can occur due to a foreground object occluding a background object. Further, the same foreground object may create the blind region for both the chrominance and luminance sensors, or the chrominance blind region created by one object may not completely overlap the luminance blind region created by a second object. In such situations, color information may be lost for the “blind” regions of the chrominance sensor, thereby compromising the resolution of the composite color image."

So Apple proposes to use two chrominance sensors alongside the luminance sensor, with a processing flow that merges the two chrominance images before recombining them with the luminance data.
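The benefit of the second chrominance sensor can be sketched as a simple fallback merge: wherever one chroma sensor's view is occluded, the sample from the other is used, and the result is stacked with the full luminance plane. This is only an illustrative reading of the patent's idea, not Apple's actual pipeline; the array layout, function names, and the validity mask are my own assumptions.

```python
import numpy as np

def merge_chroma(chroma_a, chroma_b, valid_a):
    """Fill chroma_a's occluded ("blind") pixels from chroma_b.

    chroma_a, chroma_b: HxWx2 arrays of (Cb, Cr) samples from the two
    chrominance sensors, registered to the same view.
    valid_a: HxW boolean mask, False where sensor A is occluded.
    """
    return np.where(valid_a[..., None], chroma_a, chroma_b)

def compose_ycbcr(luma, chroma):
    """Stack the luminance plane with the merged chroma planes."""
    return np.concatenate([luma[..., None], chroma], axis=-1)

# Toy example: sensor A is blind in the left half of the frame.
h, w = 4, 6
luma = np.full((h, w), 128.0)
chroma_a = np.full((h, w, 2), 10.0)
chroma_b = np.full((h, w, 2), 20.0)
valid_a = np.zeros((h, w), dtype=bool)
valid_a[:, w // 2:] = True  # right half is visible to sensor A

ycc = compose_ycbcr(luma, merge_chroma(chroma_a, chroma_b, valid_a))
print(ycc.shape)  # (4, 6, 3)
```

With a single chrominance sensor, the occluded left half would simply lose its color information; the fallback merge is what lets the composite image stay fully colored.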

DOC will pay approximately $23M in cash for "certain assets" of Flextronics's camera module business located in Zhuhai, China. The transaction, which is expected to close in Q3 2012, if not sooner, includes existing customer contracts and a lease to an approximately 135,000-square-foot facility. The transaction also includes an intellectual property assignment and license agreement, and a transition services agreement. DOC intends to offer employment to a portion of the existing work force of the Flextronics camera module business in Zhuhai, China. DOC anticipates that the business will have a capacity to manufacture approximately 50M camera module units per year.

Flextronics will retain a portion of Vista Point Technologies assets, but repurpose them and focus engineering talent toward "strengthening its ability to deliver manufacturing services".

"The Zhuhai Camera Module Business will allow us to drive rapid market introduction of DOC's next-generation technology in a manner that complements our existing collaborations with camera module makers. We believe our approach is the best way to address the requirements of Tier One OEM manufacturers, which require that camera modules be delivered through dual sourcing from high-volume manufacturing facilities," said Robert A. Young, Tessera CEO.

"This transaction is a critical step in our strategy of transforming DOC from an optical and image enhancement software and components business into a Tier One qualified, vertically integrated supplier of next-generation camera modules to the $9-billion market for mobile cameras," Young continued. "In parallel, we continue to have active discussions with multiple Tier One OEM manufacturers of mobile phones regarding our MEMS autofocus product, and remain on track to obtain a design win in the first half of 2012 and to begin high-volume manufacturing in the fourth quarter of 2012," said Young.

"These assets will enable DigitalOptics Corporation to significantly increase sales of the imaging technologies we've acquired and developed over the past five years. Our strategy is to combine our breakthrough autofocus solutions with our other proprietary technologies so that DOC will become a leading supplier of integrated camera modules in the mobile phone market," said Bob Roohparvar, president of DigitalOptics Corporation.

DOC has been developing its capacity to oversee the high-volume manufacturing operations required by mobile phone makers. DOC's steps in the past year have included hiring more than a dozen executives and managers who have experience in engineering scale-up as well as in manufacturing at similar facilities.

Sensors Magazine: Morocco-based Nemotek Technologie debuts what it says is the world's first two-element wafer-level camera, the Exiguus H12-A2. The Exiguus H12-A2 features high resolution and less than 0.5% overall distortion, all in a 1/10-inch form factor.

"Today we unveil the first camera that successfully incorporates a two element wafer-lens and is technically more complex while providing better resolution than any current wafer-level offering on the market to date," said Hatim Limati, VP of sales and marketing for Nemotek Technologie. "The Exiguus H12-A2 produces extraordinarily clear, sharp pictures which make it the perfect choice for a wide range of applications. With this new achievement, we are able to further showcase our position as the industry's leader in innovation and design."

In addition, Nemotek marks its debut in the High End VGA market with another new camera based on a 720P High End sensor.

Samples of Nemotek's Exiguus H12-A2 and its High End VGA camera are currently available.

Thursday, March 01, 2012

DSC unit shipments increased by a compound average growth rate (CAGR) of slightly more than 37% in the 2000-2005 period, but slowed to 9% per year between 2005 and 2010, and are projected to rise by merely 2.1% annually from 2010 through 2015.
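As a quick sanity check on what those compound rates imply for total volume, a CAGR of r sustained over n years multiplies annual shipments by (1 + r)^n. The short sketch below (function name and rounding are my own) evaluates the three periods quoted above:

```python
def cumulative_growth(cagr: float, years: int) -> float:
    """Total growth multiplier implied by a compound annual growth rate."""
    return (1 + cagr) ** years

# CAGRs for DSC unit shipments as quoted in the article
periods = [("2000-2005", 0.37, 5),
           ("2005-2010", 0.09, 5),
           ("2010-2015 (projected)", 0.021, 5)]

for label, cagr, years in periods:
    print(f"{label}: x{cumulative_growth(cagr, years):.2f}")
```

In other words, annual shipments roughly quintupled over 2000-2005 (about x4.8), grew about 54% in total over 2005-2010, but are projected to compound to only about 11% total growth over 2010-2015.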

A: Well, of course my own pet project - the Quanta Image Sensor (QIS) - could become a major disruption. I don't expect lightning to strike twice, but as I like to say, you can't win the lottery if you don't buy a ticket. Computational imaging is getting interesting, but it might be a few years before Moore's Law catches up to the aspirations of computational imaging and enables its full potential. I think computational imaging combined with the QIS could become a major paradigm shift, but it is still early in that game. I think use of non-silicon materials could be disruptive if any of them work out. But silicon is an amazing material, and the manufacturing and noise issues with non-silicon materials are non-trivial. Meanwhile, the rate of continuous improvement is so large that emerging technologies have to mature rapidly to have enough compelling advantage to grab a toehold in the marketplace once they get there. To that end, even a few years of continuous improvement can look disruptive to the user community.