Thursday, March 31, 2016

Sony announces that it has developed the world's first ghost catching device, the Proton Pack, equipped with a state-of-the-art miniaturized superconducting synchrotron that accelerates protons injected from a hydrogen plasma cell to capture ectoplasmic matter. The Proton Pack also integrates a super slow motion video camera, allowing users to record the captured matter at 960fps and accurately track the movement of its target.

The company's official YouTube video explains the new product's functionality:

Wednesday, March 30, 2016

DNews posts a nice educational video about some people seeing the world in 4 base colors rather than the usual 3. Researchers claim that "Some 12% of women are carriers of the mild, X-linked forms of color vision deficiencies called “anomalous trichromacy.” Owing to random X chromosome inactivation, their retinae must contain four classes of cone rather than the normal three."

“Even the most modern image sensors are limited in the dynamic range which they can capture,” said Alfred Zee, President & CEO of Pinnacle Imaging Systems. “We believe that cameras should be able to provide the same contrast range that we naturally see with our own eyes, so we based our technology on the human vision model. It’s this unique approach that allows our Ultra HDR technology to deliver such color-accurate, high contrast video quality.”

Automatic Exposure Controls – Real time calculation and adjustment of the sensor’s exposure settings based on an automatic or manually selected region of interest to allow accurate exposure throughout a scene

With initial FPGA implementation completed, Pinnacle Imaging IP blocks can now be ported to ASIC, DSP+SoCs or ISPs. “We are currently seeing growing demand for HDR capabilities embedded into video cameras and production equipment,” said Ron Tussy, Director of Business Development for Pinnacle Imaging Systems. “Our proprietary embedded HDR tone mapping is a critical underlying technology necessary to improve data capture for technologies used in range finding and recognition in automotive, security and surveillance or any other field demanding video to be captured across very bright and very dark areas.”
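Pinnacle's tone mapping pipeline is proprietary, but the basic task it solves, compressing a wide-dynamic-range capture into a displayable range, can be sketched with a classic global operator. Below is a Reinhard-style curve; the `key` value and the synthetic scene data are illustrative assumptions, not Pinnacle's numbers:

```python
import numpy as np

def reinhard_tonemap(hdr, key=0.18, eps=1e-6):
    """Global Reinhard-style tone mapping: compress HDR luminance into [0, 1)."""
    # Log-average ("key") luminance of the scene
    log_avg = np.exp(np.mean(np.log(hdr + eps)))
    scaled = key * hdr / log_avg          # map the log-average to mid-gray
    return scaled / (1.0 + scaled)        # compress highlights smoothly

# 4-decade synthetic scene: deep shadow to bright highlight
scene = np.array([0.01, 0.1, 1.0, 10.0, 100.0])
ldr = reinhard_tonemap(scene)
```

The curve is monotonic, so scene contrast ordering is preserved while the 4-decade input range lands inside the display range.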

In terms of applications, warning-only ADAS will gradually exit the market, while ADAS with actuators will become the mainstream. For example, it is hard for drivers to take prompt countermeasures, as the warning time of FCW is no more than 3 seconds. Moreover, AEB (Autonomous Emergency Braking) may become the most important ADAS application, and is expected to become a mandatory safety function worldwide during 2021-2025.

The future direction of development is the stereo camera rather than the mono camera, especially for AEB. As AEB concerns human life, there must be as much performance redundancy as possible to ensure driver safety to the utmost extent. The stereo camera has an overwhelming advantage over the mono camera in pedestrian recognition. However, the majority of companies (OEMs and Tier 1 suppliers) still adopt mono cameras, as AEB is largely an optional rather than a standard component, and stereo cameras cost much more, resulting in higher prices and lower popularity.

Pedestrian recognition will be a must for next-generation AEB, meaning that stereo cameras have to be employed. Mercedes-Benz, Subaru, Jaguar, and Suzuki have adopted stereo cameras from the very beginning, while VW, Toyota, Honda, and Nissan all employ stereo cameras in their experimental models. Among Tier 1 suppliers, Hitachi Automotive Systems has used stereo cameras from the very start, while Continental, Bosch, Denso, and Fujitsu Ten see the stereo camera as a development priority. These are iconic companies, and their moves represent the direction of the automobile industry as a whole.

Global automotive camera module shipments approximated 50.3 million pieces in 2015 and are expected to reach 62.1 million pieces in 2016, 141 million pieces in 2020, and 246 million pieces in 2025. There are three cameras on each light vehicle on average, for LKA, AEB, and parking, respectively. Unlike mobile phone camera modules, automotive camera modules are highly demanding in reliability and operating temperature range. Major vendors are Panasonic, Sony, Valeo, Fujitsu Ten, MCNEX, Magna, Gentex, Continental, and Hitachi. Panasonic ranks first globally in market share, far ahead of the second place.

Global automotive vision system market size was worth about USD3.1 billion in 2015 and is expected to hit USD6.1 billion in 2020. Magna, TRW (ZF), Hitachi Automotive Systems, and Continental form the first camp, with Magna being the world's largest, while Autoliv, Valeo, Denso, Fujitsu Ten, and Bosch form the second camp. As the demand from carmakers varies greatly, market concentration remains low, and this will continue for a considerable time.

Sony's low-cost STARVIS sensor, the IMX323, for security and surveillance applications is reportedly offered for less than $3 in volume in China. The 1080p30 1/2.9-inch sensor is based on 2.8um pixels. A complete datasheet of the new sensor can be downloaded here.

"North America leads the backside illumination (BSI) complementary metal oxide semiconductor (CMOS) image sensor market, due to several design and technological improvements in this technology... The Asia-Pacific BSI CMOS image sensor market is expected to grow at the highest rate and to become the largest market in the coming years. One of the key reasons for the high growth of Asia-Pacific in this market is its swift transition from analog to digital systems...

BSI CMOS image sensors eliminate the bulk substrate to decrease the diffusion component of dark current and electrical crosstalk. BSI CMOS image sensors have higher quantum efficiency, which in turn improves the signal-to-noise ratio (SNR) of the output image. However, BSI CMOS image sensors are mechanically weaker due to wafer thinning, which increases the chance of breakage for a large BSI CMOS image sensor."
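The QE-to-SNR link in the report is easy to quantify: in the photon-shot-noise limit, SNR grows as the square root of the collected electrons, so higher BSI quantum efficiency directly buys SNR for the same exposure. The QE numbers below are illustrative assumptions, not figures from the report:

```python
import math

def shot_limited_snr(photons, qe):
    # Signal = QE * N electrons; shot noise = sqrt(QE * N); so SNR = sqrt(QE * N)
    return math.sqrt(qe * photons)

# Same 1000-photon exposure, FSI-like vs BSI-like QE (illustrative values)
snr_fsi = shot_limited_snr(1000, 0.45)   # ~21.2
snr_bsi = shot_limited_snr(1000, 0.80)   # ~28.3
```

The ratio is sqrt(0.80/0.45) ≈ 1.33, roughly 2.5dB of SNR for the same light, which is the practical argument behind the BSI transition.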

Friday, March 25, 2016

EETimes publishes an article "What Do InVisage & InvenSense Have in Common?" by Peter Clarke, comparing the InVisage and InvenSense business models. While InvenSense is successful in the MEMS inertial sensor market, licensing its process to others does not bring the company much income. So, the article questions whether this approach can work for InVisage.

Thursday, March 24, 2016

Brookman Technology delivers engineering samples of its 33MP, 120fps BT3300N sensor for 8K Super Hi-Vision broadcast, co-developed with NHK and Prof. Kawahito's group at Shizuoka University. The BT3300N is said to be the only image sensor (as of January 2016) that meets the full Super Hi-Vision specifications (8K, 120fps); its optical format is Super 35mm. The new sensor is based on 3.2um pixels and features a 14-bit two-stage ADC.

8K Super Hi-Vision is set to begin test broadcasting in 2016 and roll out full broadcasting by 2018.
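A back-of-the-envelope check shows why these specs are demanding. Taking the nominal 33MP figure (the exact active-array size is not given here), the raw output data rate, before any link or blanking overhead, is:

```python
# Rough raw data rate for the BT3300N's full Super Hi-Vision specification
pixels = 33_000_000          # nominal ~33 MP
fps = 120                    # Super Hi-Vision frame rate
bits_per_sample = 14         # 14-bit ADC output

raw_gbps = pixels * fps * bits_per_sample / 1e9
print(f"{raw_gbps:.1f} Gbit/s")   # prints 55.4
```

That is over 55 Gbit/s coming off a single sensor, which puts the I/O and power challenges mentioned below in context.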

Brookman BT3300N

MarketWired: Silvaco announces that Brookman has adopted Silvaco's power integrity tool suite InVar Power-IR-Thermal for the design and development of its CMOS sensor products. InVar Power analyzes dynamic power consumption; InVar IR analyzes voltage drops in power sources and signal networks; and InVar Thermal performs thermal analysis at the full chip level. Concurrent observations of power, IR and thermal make it possible to perform real-time analysis, considering the effect of heat generation.

Brookman Technology's President, Satoshi Aoyama, stated, "The 8K standard demands challenging sensor specs: 33Mpixel resolution, 120 frames/second speed, and 12-bit image gradation. Such a large, high-speed sensor faces voltage drop issues so critical that the analysis by InVar IR is extremely helpful. In addition, regarding image degradation due to intra-chip heat generation, we expect InVar Thermal to contribute effectively to shortening the development TAT as well as improving design quality."

“Accelerating the Sensing World through Imaging Evolution”, Tetsuo Nomoto, VP and SGM, Sony:

The evolution of CMOS Image Sensors (CIS) and the future prospect of a “sensing” world utilizing advanced imaging technologies promise to improve our quality of life by sensing anything, anywhere, anytime. Charge Coupled Device image sensors replaced video camera tubes, allowing the introduction of compact video cameras as consumer products. CIS now dominates the market for digital still cameras created by its predecessor and, with the advent of column-parallel ADCs and back-illuminated technologies, outperforms it: CISs achieve better signal-to-noise ratio, lower power consumption, and higher frame rate. Stacked CISs continue to enhance functionality and user experience in mobile devices, a market that currently comprises over one billion new image sensors per year. CIS imaging technologies promise to accelerate the progress of the sensing world by continuously improving image quality, extending detectable wavelengths, and further improving depth and temporal resolution.

Tuesday, March 22, 2016

Jörg Kunze from Basler AG kindly allowed me to publish a few slides from his presentation at the London Image Sensors 2016 conference about his novel debayering algorithm called PGI.

His implementation is a hardware-efficient, single-step 5x5-pixel algorithm that performs zipper-free, high-quality Bayer-pattern interpolation up to the theoretical frequency limit, together with color anti-aliasing, sharpness enhancement, and noise reduction. The pictures look very convincing. Basler currently has a single-lane FPGA implementation with a throughput of 140MPix/s using 880 Cyclone V logic cells, and a quad-lane implementation with a throughput of 400MPix/s using 2600 logic cells. Jörg says Basler is interested in licensing, cross-licensing, or technology exchange.
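PGI itself is proprietary, but the baseline it improves upon is easy to state: plain bilinear interpolation of each color plane, which is exactly what produces the zipper artifacts PGI avoids. A minimal sketch (RGGB mosaic phase assumed):

```python
import numpy as np

def demosaic_bilinear(raw):
    """Baseline bilinear demosaic of an RGGB Bayer mosaic (even-sized array)."""
    h, w = raw.shape
    y, x = np.mgrid[0:h, 0:w]
    r_mask = ((y % 2 == 0) & (x % 2 == 0)).astype(float)
    b_mask = ((y % 2 == 1) & (x % 2 == 1)).astype(float)
    g_mask = 1.0 - r_mask - b_mask

    def conv3x3(img):
        # Zero-padded 3x3 box sum via shifted slices
        p = np.pad(img, 1)
        return sum(p[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3))

    def interp(mask):
        # Average of the known same-color samples in each 3x3 neighborhood
        return conv3x3(raw * mask) / conv3x3(mask)

    return np.stack([interp(r_mask), interp(g_mask), interp(b_mask)], axis=-1)

# Sanity check: a flat gray mosaic must demosaic back to flat gray
rgb = demosaic_bilinear(np.full((8, 8), 0.5))
```

A single-step 5x5 algorithm like PGI replaces these fixed averages with edge-aware weights computed from the same window, which is what suppresses zippering near the Nyquist limit.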

Monday, March 21, 2016

WSJ: Honda is releasing automated highway driving features on its entry-level Civic LX sedan. With a price tag of $20,440, these features become accessible to significantly more buyers, including younger ones.

As auto makers offer the components needed to power these functions in option packages as low as $1,800, they are being snapped up at a far higher rate than electrified vehicles.

Real-Time Large Aperture Depth-of-Field Effect – With the built-in 3D sensor, the ISP can capture a depth map in real time and, simulating apertures larger than f/0.8, produce large-aperture depth-of-field effects in real time, even for previews.

Reality Depth of Field (DOF) – Users can map objects and backgrounds to multiple layers with a DOF feature. The smartphone can smartly position the object and background and apply creative effects to each layer in real time to produce photos or videos with DOF effects.

Bayer and Mono Cameras – This multiple-sensor design can capture three times the light of a traditional single Bayer sensor, thus reducing image noise and increasing quality.

Dual Camera Zoom – The ISP supports a dual camera system with a wide-angle lens and a telephoto lens. Imagiq combines the wide-angle and zoomed captures into one image.
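The depth-driven DOF effect above boils down to segmenting the image by depth and blurring out-of-focus layers. A toy sketch of that idea (illustrative only; the actual Imagiq pipeline is not public, and a real implementation would use a depth-dependent lens-blur kernel rather than a box filter):

```python
import numpy as np

def synthetic_bokeh(img, depth, focus_depth, tol=0.1):
    """Toy depth-map-driven DOF: keep pixels near the focal plane, blur the rest."""
    h, w = img.shape
    p = np.pad(img, 1, mode='edge')
    # 3x3 box blur stands in for a proper lens-blur kernel
    blurred = sum(p[dy:dy + h, dx:dx + w]
                  for dy in range(3) for dx in range(3)) / 9.0
    in_focus = np.abs(depth - focus_depth) <= tol
    return np.where(in_focus, img, blurred)

img = np.zeros((6, 6))
img[::2] = 1.0                      # striped test pattern
depth = np.ones((6, 6))
depth[:, 3:] = 2.0                  # right half is background
out = synthetic_bokeh(img, depth, focus_depth=1.0)
```

In-focus pixels pass through untouched while the background stripes get smeared, which is the whole effect in miniature.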

Sunday, March 20, 2016

Sony's 3.45um global shutter pixel, said to be the smallest in the industry, is the basis of two new sensors: the 1.1-inch 12.37MP IMX253LLR/LQR and the 1-inch 8.95MP IMX255LLR/LQR, available in both monochrome (LLR) and color (LQR) versions.

"We have really appreciated everyone’s patience in waiting for these cameras and apologize for the delay. It took much longer than we had thought to get the cameras to where we needed them to be, but I am incredibly proud to bring 15 stop, Super 35 true digital film cameras to everyone...

However the big reason for the delay is the problems we were having with the global shutter feature in both of these cameras. The problems are different between the models and on the Micro Cinema Camera we have been seeing random bad pixels when in global shutter mode. On the URSA Mini 4.6K we have been seeing problems with sensor calibration when using global shutter.

Our engineers have been killing themselves working on this for months, but the performance is just not where it needs to be for us to feel comfortable shipping with global shutter, and so we have had to remove the global shutter feature from both these cameras to allow them to ship.

Obviously this is very upsetting for us, as we really wanted to produce high dynamic range cameras that also had a global shutter for an all in one design. The reality is that this is just one feature on cameras that are ready right now to shoot with and get incredible results. So we have made the decision to ship now."

SNAP Sensor was founded in 2011 as a spin-off from an 8-year research program at CSEM, the Swiss research and technology organization. "SNAP Sensor’s cutting-edge optical technology and vision software and algorithm expertise allow us to continue unlocking new possibilities for our customers in a wide range of IoT applications such as building automation, building security, city management, transportation, and more,” said Michael Murray, GM of Industrial Sensing, Analog Devices. “This acquisition further enhances our sensing portfolio and ensures that we’re helping customers realize the best possible outcomes from IoT solutions.”

The SNAP Sensor team will remain in Switzerland to establish a new Analog Devices R&D center and continue its close collaboration with CSEM. The acquisition will enhance Analog Devices’ Blackfin Low Power Imaging Platform (BLiP). “Our team is very enthusiastic about joining Analog Devices,” said Pascal Dorster, CEO of SNAP Sensor. “This provides us access to the engineering, supply-chain, and commercialization resources needed to accelerate our growth and continue advancing our technology vision.”

MarketWired: Movidius and DJI announce that Movidius Myriad 2 vision processor is used in DJI’s flagship Phantom 4 aircraft, giving it the ability to sense and avoid obstacles in real time and hover in a fixed position without the need for a GPS signal.

The agreement is said to mark an industry first in making advanced visual guidance systems a standard feature for consumer drones.

“Movidius’ vision processor platform, Myriad 2, met the rigorous requirements we set for our flagship product, and we look forward to continued collaboration with Movidius as we push the boundaries in the drone market,” said Paul Pan, Senior Product Manager at DJI.

“DJI has set the direction for the future of the drone market and we are excited to incorporate Movidius’ low power artificial vision intelligence technology into DJI drones moving forward,” said Sean Mitchell, COO of Movidius. “Moving the technology from a demonstration to a highly reliable production worthy stage was a tremendous effort for both DJI and Movidius. The DJI Phantom 4 launch represents a milestone for the future of visually aware devices. We believe we are entering the golden age of embedded computer vision and our technology has placed Movidius at the forefront of this trend.”

Tuesday, March 15, 2016

Caeleste presents "The future of high-end imaging" workshop, to be held on Wednesday, April 6, 2016, from 13:30 to 17:40 at the Square Meeting Centre, Kunstberg/Mont des Arts, Brussels. The seminar will run in conjunction with the SPIE Photonics Europe conference.

Caeleste has brought together experts in the field to present their views on trends in future high-end image sensors:

Monday, March 14, 2016

"We have not said too much about the material structure of QuantumFilm. There are concerns over cadmium in quantum dots so the first thing to say is there is no cadmium. What we have said is that it is a metal-chalcogenide material, similar to a II-VI material surrounded by ligands in a matrix.

The dots have a diameter of between about 3nm and 5nm, and it is these dimensions that affect the electron band structure and govern the sensitivity to light.

We are limited by the state of silicon I/O and analog-to-digital converters. As that improves, we can go to hundreds, even thousands of frames per second. There is no difference between us and comparable CMOS image sensors.

We see QuantumFilm as a platform used by us as the first and second customer. To have third and fourth customers is better for us. We are prepared to work with partners to enable them, with QuantumFilm. We wouldn't license the technology out but there are a number of other ways to enable partners.

...we are on a mature silicon platform – 110nm. It's a very different capital investment process there. We have our own fab in Taiwan but it is only focused on a couple of processes – a spin-on process to add the film and the definition of the pixels."

CTimes quotes the Photonics Industry & Technology Development Association (PIDA) saying that "in 2014 the total global CMOS sensor shipments amounted to 4.65 billion units. Chinese manufacturer Galaxycore was the top company, accounting for 21% of the shipments, whereas Japan's Sony and the United States' OmniVision made the second-highest number of shipments at 19% each. Meanwhile, low-pixel-count VGA products showed the most rapid growth during the past few years.

Chinese cell phone manufacturers are increasingly adopting CMOS sensors manufactured by Sony, which has prompted Sony's proposed early expansion of investment."

Phonearena quotes Finnish-language Taskumuro site making an unscientific comparison of Samsung Galaxy S7 equipped with Sony IMX260 and Samsung S5K2L1 dual-pixel AF sensors. One can judge the differences on real-life subjects with HDR mode off (many more pictures on the original site):

Friday, March 11, 2016

PRWeb: MEMS Drive and OPPO come up with a joint press release on their MEMS-based SmartSensor, the first image sensor-based image stabilizer for smartphones, also said to be the industry’s first sub-pixel-level optical image stabilizer.

While VCM smartphone cameras are limited to shake compensation on just two axes of movement, the new MEMS-based approach compensates for motion on three axes. This additional degree of mobility is said to vastly outperform traditional OIS technologies for smartphones, because it is faster – compensating for vibrations in 15 ms compared to 50 ms for lens-based technologies – and more accurate, and it allows for significantly lower power consumption.

MEMS Drive was founded to develop and advance the field of MEMS OIS technology for smartphone cameras. “The collaboration with OPPO has been very successful. The fact that OPPO is taking such an active role in co-developing this technology with MEMS Drive is accelerating our roadmap, and will ultimately come to benefit smartphone users sooner,” said Colin Kwan, CEO and founder of MEMS Drive.

“OPPO recognized that the MEMS Drive OIS actuator could vastly improve the end users’ camera experience. We therefore decided to invest in MEMS Drive and to co-develop the SmartSensor image stabilizer, and bring yet another significant advance in smartphone technology to market,” said King Liu, VP of Product Development at OPPO.

New Imaging Technologies introduces the NSC1401, an analog WDR QVGA InGaAs sensor series. The sensor uses a new-generation ROIC with 320x256 pixels at 25um pitch, coupled to an InGaAs retina, and operates in WDR mode with global shutter. The spectral response ranges from 900nm to 1700nm. Its AFE provides an ultra-fast response time down to 200ns for applications such as active imaging. The sensor operates both in linear integration mode and in log response mode at speeds up to 300fps at full resolution.

One of NIT's customers posts a WDR video shot with the company's older NSC1003 GS sensor:

Wednesday, March 09, 2016

BusinessWire: Samsung announces its 12MP, 1.4um dual pixel sensor for smartphones, already in mass production. The dual pixel is said to enable rapid AF even in low light situations.

“With 12 million pixels working as a phase detection auto-focus (PDAF) agent, the new image sensor brings professional auto-focusing performance to a mobile device,” said Ben K. Hur, VP Marketing, System LSI Business at Samsung. “Consumers will be able to capture their daily events and precious moments instantly on a smartphone as the moments unfold, regardless of lighting conditions.”

The new image sensor employs two photodiodes located on the left and right halves of each pixel, whereas a conventional PDAF-equipped sensor dedicates less than 5% of its pixels (each with a single photodiode converting light into measurable photocurrent) to phase detection. As each and every pixel of the dual-pixel image sensor can detect phase differences of the perceived light, significantly faster auto-focus becomes possible, especially for moving objects and in poor lighting conditions.
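The phase-detection principle behind this can be sketched simply: when the scene is out of focus, the left- and right-photodiode views are shifted copies of each other, and the shift tells the lens how far to move. A hypothetical sketch using sum-of-absolute-differences matching (the principle only, not Samsung's implementation):

```python
import numpy as np

def pdaf_disparity(left, right, max_shift=8):
    """Estimate the phase shift (in pixels) between left- and right-photodiode
    signals along one row by minimizing the sum of absolute differences."""
    n = len(left)
    best, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        lo, hi = max(0, s), min(n, n + s)      # overlap region for this shift
        err = np.mean(np.abs(left[lo:hi] - right[lo - s:hi - s]))
        if err < best_err:
            best, best_err = s, err
    return best

# Simulate a defocused scene: the two half-pixel views are shifted copies
x = np.arange(64)
scene = np.sin(x / 5.0) + 0.3 * np.sin(x / 1.7)
left, right = scene[:-3], scene[3:]            # 3-pixel phase offset
shift = pdaf_disparity(left, right)            # recovers the 3-pixel shift
```

Because every pixel contributes a left/right sample pair here, the matching statistics are far better than with sparse masked PDAF pixels, which is the source of the claimed low-light AF advantage.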

The image sensor has also adopted Samsung’s ISOCELL technology, which isolates the photodiodes in each pixel with a physical wall to further reduce color cross talk, maximizing the image sensor’s performance.

The new image sensor is built with chip-stacking technology: a 65nm sensor die on top of a 28nm logic chip.

Tuesday, March 08, 2016

"Our lab staff have completed the initial cross-sectioning work for our IMX260 project and we have a substantial update to share: the Sony IMX260 is, in fact, a stacked chip CMOS image sensor! As mentioned, we had expected to find through silicon vias (TSVs) consistent with Sony’s Exmor RS technology platform. Our early teardown results revealed what appeared to be a conventional Sony non-stacked back-illuminated (Exmor R) chip. After going deeper inside, we see that Sony is leading the digital imaging sector into an era of hybrid bonding. It’s not currently known if Sony considers this an extension of its Exmor RS platform, or if the IMX260 marks the first of a new (as of now unannounced) family of back-illuminated image sensors. For now we consider the IMX260 to be a 3rd generation Exmor RS chip.

Our cross-section reveals a 5 metal (Cu) CMOS image sensor (CIS) die and a 7 metal (6 Cu + 1 Al) image signal processor (ISP) die. The Cu-Cu vias are 3.0 µm wide and have a 14 µm pitch in the peripheral regions. In the active pixel array they are also 3.0 µm wide, but have a pitch of 6.0 µm. Note that in the images we’ve included we do see connections from the Cu-Cu via pads to both CIS and ISP landing pads."
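For a sense of scale, the reported pad pitches translate into interconnect density as follows (assuming a square grid, which is our assumption; the actual layout is not described):

```python
# Cu-Cu hybrid bonding pad density implied by the reported pitches
um2_per_mm2 = 1e6
density_array = um2_per_mm2 / 6.0**2     # pixel-array region: ~27,800 pads/mm^2
density_periph = um2_per_mm2 / 14.0**2   # peripheral region:  ~5,100 pads/mm^2
```

Tens of thousands of connections per square millimeter inside the active array is the kind of density that makes per-region parallel readout between the CIS and ISP dies practical.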

"Major industry players—such as ON Semiconductor, CMOSIS, e2v, and Sony — have grown even larger as they’ve acquired smaller challengers, yet they continue to compete to strengthen their hold on existing markets and their competitive position with new customers as the demand for devices that rely on image sensors expands.

For end customers, industry consolidation means the promise of innovation leading to new, higher-quality sensors that deliver greater features and functionality, and are available at a lower cost.

The image sensor industry holds a vast repository of intellectual property and consolidation among former competitors will result in the integration of this intellectual property and the sharing of best practices, which in turn, should facilitate improved image sensor quality. In fact, CMOS image sensor quality has already improved in recent years.

The cost of image sensors and the price of the cameras or other products in which they’re incorporated will continue to decrease as the remaining competitors jockey for expanded market share, and consumers will be the beneficiaries."

Monday, March 07, 2016

Boston University Associate Professor Vivek Goyal's lecture "First-Photon Imaging and Other Imaging with Few Photons" is published on Vimeo:

Abstract:
"LIDAR systems use single-photon detectors to enable long-range reflectivity and depth imaging. By exploiting an inhomogeneous Poisson process observation model and the typical structure of natural scenes, first-photon imaging demonstrates the possibility of accurate LIDAR with only 1 detected photon per pixel, where half of the detections are due to (uninformative) ambient light. I will explain the simple ideas behind first-photon imaging. Then I will present related subsequent works that enable the use of detector arrays and improve robustness to ambient light."
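The statistical core of the first-photon idea can be illustrated with a toy model: if each laser pulse yields a detection with small probability p (proportional to pixel reflectivity), the pulse count until the first photon is geometrically distributed, and p is recoverable from those counts alone. The sketch below ignores the ambient-light censoring and spatial regularization handled in the actual work:

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_detection_prob(p_true, pixels=5000):
    """Pooled ML estimate of the per-pulse detection probability from
    first-photon pulse counts (toy model: no ambient light, no dead time)."""
    # Pulses-to-first-detection is geometric with mean 1/p
    n_pulses = rng.geometric(p_true, size=pixels)
    return pixels / n_pulses.sum()

est = estimate_detection_prob(0.05)   # recovers ~0.05 from first photons only
```

One detected photon per pixel thus carries usable reflectivity information; the first-photon imaging papers add a scene prior to make the per-pixel estimates accurate despite the ambient-light detections.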