"Here we introduce Focus-Induced Photoresponse (FIP), a novel method to measure distances. In a FIP-based system, distance is determined by using the analog photoresponse of a single pixel sensor. This means that the advantages of high-density pixelation and high-speed response are not necessary or even relevant for the FIP technique. High resolution can be achieved without the limitations of pixel size, and detectors selected for a FIP system can be orders of magnitude slower than those required by ToF based ones. A system based on FIP does not require advanced sensor manufacturing processes to function, making adoption of unusual sensors more economically feasible.

In the FIP technique, a light source is imaged onto the photodetector by a lens. The size of its image depends on the position of the detector with respect to the focused image plane. FIP exploits the nonlinearly irradiance-dependent photoresponse of semiconductor devices. This means that the signal of a photodetector not only depends on the incident radiant power, but also on its density on the sensor area, the irradiance. This phenomenon will cause the output of the detector to change when the same amount of light is focused or defocused on it. This is what we call the FIP effect."
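The quoted mechanism can be illustrated numerically. Below is a minimal toy model (the blur growth rate, spot size, and response exponent gamma are all my assumptions, not values from the source): the same radiant power spread over a larger defocused spot gives a different total signal whenever the photoresponse is nonlinear in irradiance, which is exactly the FIP effect described above.

```python
import math

def fip_signal(defocus_mm, power_w=1e-3, w0_mm=0.05, k_mm=0.02, gamma=1.3):
    """Toy FIP model: a fixed radiant power lands on a spot whose radius
    grows with defocus; a power-law (gamma != 1) photoresponse makes the
    total signal depend on irradiance, i.e. on defocus."""
    radius = w0_mm + k_mm * abs(defocus_mm)   # assumed linear blur growth
    area = math.pi * radius ** 2              # spot area on the sensor, mm^2
    irradiance = power_w / area               # W/mm^2
    # Nonlinear response integrated over the spot: S ~ E**gamma * A
    return (irradiance ** gamma) * area

# Same power, different focus -> different signal (the FIP effect).
print(fip_signal(0.0) > fip_signal(1.0))  # True for gamma > 1

# With a perfectly linear detector (gamma = 1) the effect vanishes:
print(abs(fip_signal(0.0, gamma=1.0) - fip_signal(2.0, gamma=1.0)) < 1e-9)  # True
```

The last line shows why the technique hinges on the nonlinearity: a linear pixel integrates the same power regardless of how it is spread.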

Wednesday, August 16, 2017

Coolpad, one of the large China-based smartphone makers using a 13MP monochrome + 13MP RGB dual camera in its devices, says this about the benefits of that configuration:

"The two lenses may look the same, but they have very different functions. One shoots in RGB to produce a color image, while the other takes care of the monochrome images. The monochrome lens brings out the detail, and engage more light than the RGB lens when in low-light condition, which takes care of the colors. The Dual camera 2.0 technology in Cool Dual actually enhanced the overall clarity of the image by 20%, help reduce image noise by 8% and improved brightness by 20%. “With these, we believe the real dual 13MP cameras brings us smart framing and the 6P lens gives customers the best quality of pictures”, said Jeff Liu, Coolpad Group CEO."

Digitimes Research comes up with its analysis of LiDAR adoption in the car industry, forecasting the first LiDAR-equipped production cars appearing this year:

"...Audi will take the initiative to launch car models equipped with LiDAR sensors in 2017 and Mercedes Benz, Cadillac, Ford and Volvo are expected to follow suit in 2018-2019...

...only one million LiDAR sensors worth US$200 million will be shipped globally in 2019. However, global shipment value for LiDAR sensors will fast grow to US$500 million in 2024 along with decreasing cost and increasing adoption.

For Level 2 self-driving... LiDAR sensors are required to detect objects as far as 100-150 meters ahead. For Level 3, the required range is 200-300 meters

...Velodyne LiDAR and Quanergy Systems, and Germany-based Ibeo Automotive Systems are the main vendors globally of LiDAR sensors, with the former two focusing on solid-state models to reduce LiDAR sensor sizes."

Qualcomm announces an expansion of the Qualcomm Spectra Module Program to incorporate biometric authentication and high-resolution depth sensing for a broad range of mobile devices and head-mounted displays (HMDs). The module program is built on the 2nd-generation Spectra embedded ISP family.

Now, the camera module program is being expanded to include new camera modules capable of utilizing active sensing for biometric authentication, and structured light for a variety of computer vision applications that require real-time, dense depth map generation and segmentation.

The low-power, high-performance motion tracking capabilities of the Qualcomm Spectra ISP, in addition to optimized simultaneous localization and mapping (SLAM) algorithms, are designed to support new extended reality (XR) use cases for VR and AR applications that require SLAM.

Denali-MC provides a 16-bit data path capable of producing 100 dB, or about 16 EV, of dynamic range. Denali-MC HDR IP completely eliminates halo artifacts and color shifts, and mitigates the ghost artifacts and transition noise often seen when merging multiple exposures. This allows Denali-MC to capture up to four exposure frames from 1080p video at 120 fps, while merging and tone mapping at 30 fps in real time. For applications requiring faster output frame rates, Denali-MC also supports a two-frame merge mode exporting at 60 fps. Furthermore, Denali-MC can support up to 29 different CMOS sensors, including 9 Aptina/ON Semi, 6 Omnivision and 11 Sony sensors, as well as 12 different pixel-level gain and frame-set HDR methods, and is said to be easily ported to the most widely-used logic platforms.
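As a quick check of the quoted figures: a 16-bit linear path spans 20·log10(2^16) ≈ 96.3 dB, and 100 dB corresponds to about 16.6 photographic stops (EV), since one EV is a factor of two, i.e. about 6.02 dB.

```python
import math

def bits_to_db(bits):
    """Dynamic range in dB of a linear code with 2**bits levels."""
    return 20 * math.log10(2 ** bits)

def db_to_ev(db):
    """Convert dB of dynamic range to photographic stops (1 EV = ~6.02 dB)."""
    return db / (20 * math.log10(2))

print(round(bits_to_db(16), 1))  # 96.3
print(round(db_to_ev(100), 1))   # 16.6
```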

“At Fairchild Imaging, we have been very impressed with the Denali-MC ISP's state-of-the-art locally adaptive tone mapping (LATM) functionality,” said Vern Klein, Director of Sales and Marketing at Fairchild Imaging. “I’ve seen first-hand how they can make a great sensor perform even better in its native WDR mode. Camera manufacturers will benefit from this technology, which provides high quality HDR functionality without requiring companion chips or additional hardware cost to support the algorithms.”

“Pinnacle’s new Denali-MC HDR ISP is a significant achievement addressing HDR video requirements in surveillance, monocular camera automotive markets and machine learning with its customization, artifact compensation, color accuracy and quantifiable high dynamic range of 100 dB,” said Paul Gallagher, image sensor industry veteran and futurist. “Camera system developers in these markets would benefit from utilizing these attributes of the Denali-MC ISP as a standalone ISP or integrate Pinnacle’s HDR IP blocks within their existing ISP.”

"In 2016, the night vision devices segment accounted for close to 55% of the total revenues due to high adoption of low light level imaging sensors by the defense sector. The increasing focus on reducing road accidents drives the demand for night vision systems. Night vision systems are offered as built-in systems by Audi, BMW, and Toyota. The market is expected to grow at a CAGR of close to 16% during the forecast period.

The global low light level imaging sensors market by cameras contributed 25% of the total revenues in 2016. These sensors are used in applications such as home security cameras, small business monitoring, and infrastructure security.

The global low light level imaging sensors market by optic lights accounted for around 13% of the total revenue in 2016. These are widely used in lighting, decorations, and mechanical inspections of obscure things. Optic lights save space and provide superior lighting, and are therefore used in vehicles. Low light level imaging sensors are crucial components of optic lights, as these sensors intensify the range of light."

BusinessWire: The upcoming LG V30 smartphone is said to have a camera module with an f/1.6 lens, said to be the brightest in smartphones. I wonder what the effective aperture of the pixels in the sensor is, and whether it can make use of such a bright lens.
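The "brightest" claim follows directly from the f-number: image-plane illuminance scales as 1/N², so an f/1.6 lens gathers roughly 1.27x the light of an f/1.8 lens and 1.56x that of an f/2.0 lens (the comparison f-numbers are my examples, not from the announcement):

```python
def light_gain(n_ref, n_new):
    """Relative image-plane illuminance going from f/n_ref to f/n_new."""
    return (n_ref / n_new) ** 2

print(round(light_gain(1.8, 1.6), 2))  # 1.27
print(round(light_gain(2.0, 1.6), 2))  # 1.56
```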

Wednesday, August 09, 2017

Q: Why did you decide to buck the trend in semiconductors to have your own foundry?

A: Simply put, there was (and is) no silicon CMOS technology available that offered high quantum efficiency in the NIR in combination with high-performance CCD. Our backside illumination technology OHC15L offers what is needed for powerful LiDAR, ToF and, in general, ultrafast gated imagers.

Q: Your session on time-of-flight sensors is about next generation technology – what’s different about it from existing ToF?

"During the second quarter, we exited the mobile image sensor market as the margin profile for that business was not comparable with our target financial model.

Furthermore, we monetize the value of highly differentiated mobile imaging technology, through an intellectual property licensing agreement with a third-party. We have excluded the gain of approximately $24 million related to this transaction from our second quarter non-GAAP results.

Second quarter free cash flow and operating cash flow included approximately $24 million from a licensing arrangement related to the mobile image sensor business.

For the second quarter, we again posted strong growth in our CMOS image sensor business for viewing and ADAS applications. We continue to gain market share in automotive image sensors and our design win pipeline for our CMOS image sensors for automotive applications continues to grow at a rapid pace.

We continue to see strong growth in machine vision applications with our PYTHON line of CMOS image sensors. As I indicated earlier, we are engaging at the very early stage with key players in artificial intelligence for machine vision and robotics applications."

"The goal of this work is to develop a novel CMOS camera sensor that provides frameless capture, and has significantly higher dynamic range, finer color sensitivity, and lower noise as compared to the current state-of-the-art sensors. The strength of the approach lies not in developing new types of photodetectors or amplifiers, but in the manner in which information is extracted from the pixel sensor, transported to the processing logic, and processed to yield intensity values. At the heart of the sensor is an asynchronous network to transport events from the pixel sensors to the off-grid processing circuitry. The asynchronous nature of pixel communication is the key to achieving frameless image capture."

The UNC Chapel Hill research group is looking for industrial partners who might be interested in the IP and in partnering with the group for further development.

Saturday, August 05, 2017

NovusLight publishes an article "Vision Inspired by Biology" based on a talk with Luca Verre, the CEO and co-founder of Chronocam. A few quotes:

"Based on the new technology concept, the company recently released a QVGA resolution (320 by 240 pixels) sensor with a pixel size of 30-microns on a side and quoted power efficiency of less than 10mW.

Because the sensor is able to detect the dynamics of a scene at the temporal resolution of few microseconds (approximately 10 usec) depending on the lighting conditions, the device can achieve the equivalent of 100,000 frames/sec.

In Chronocam’s vision sensor, the incident light intensity is not encoded in amounts of charge, voltage, or current but in the timing of pulses or pulse edges. This scheme allows each pixel to autonomously choose its own integration time. By shifting performance constraints from the voltage domain into the time domain, the dynamic range is no longer limited by the power supply rails.

Therefore, the maximum integration time is limited by the dark current (typically seconds) and the shortest integration time by the maximum achievable photocurrent and the sense node capacitance of the device (typically microseconds). Hence, a dynamic range of 120dB can be achieved with the Chronocam technology.

Because the imager also reduces the redundancy in the transmitted video data, it performs the equivalent of a 100x video compression on the image data on chip.

The company announced it had raised $15 million in funding from Intel Capital, along with iBionext, Robert Bosch Venture Capital GmbH, 360 Capital, CEAi and Renault Group."
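The 120 dB figure quoted above follows directly from the ratio of the longest (dark-current-limited, ~1 s) to the shortest (photocurrent-limited, ~1 µs) integration time: in a time-encoding pixel, dynamic range is set by that ratio rather than by the supply rails.

```python
import math

def time_domain_dr_db(t_max_s, t_min_s):
    """Dynamic range of a time-encoding pixel, set by the ratio of the
    longest to the shortest achievable integration time."""
    return 20 * math.log10(t_max_s / t_min_s)

print(round(time_domain_dr_db(1.0, 1e-6)))  # 120
```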

International Image Sensors Workshop 2017 held in Hiroshima, Japan on May 30-June 2, publishes all 110 presented papers on-line: 61 regular papers, 45 posters, and 4 invited papers. Some of the papers also have presentation slides published. It's a very good read over the weekend!

Thursday, August 03, 2017

Yole Developpement releases "Uncooled Infrared Imagers market & technology trends" report: "Today the uncooled IR camera market is showing an 8% CAGR between 2016 and 2022 reaching almost US$ 4.4 billion at the end of the period. Only few players control this industry. In 2016, two leading companies, FLIR and ULIS, both with different market strategies and solutions, owned more than 75% of the total market (in volume).

Besides FLIR and ULIS, many other players are also benefiting from IR imaging market growth:

• SEEK Thermal has introduced its new, higher-performance RevealPRO and CompactPRO products as the company moves from consumer products to more high-end products.

• Players such as BAE Systems or Leonardo DRS are benefiting from the defense market growth cycle that could still last for a few more years.

• Newcomers are introducing their products; for example, Teledyne Dalsa released its first VOx microbolometers in 2017.

• Many companies in China are developing their own microbolometers. They do not produce large volumes today but the domestic market has great potential.

• On the other hand, companies like Bosch, long involved in the MEMS and infrared businesses, have changed their strategies."

"2016 was a good year for the microbolometer market. There were almost 900,000 uncooled IR camera shipments, worth $2.7B in revenues thanks to a dynamic commercial market and continued growth for military applications. Many commercial applications drove this growth, including thermography, surveillance, PVS and firefighting. In 2022, we estimate there will be 1.7M units shipped.

Thermography is still the leading commercial market by far, in both value and volume. We estimate that there will be 500,000 thermography units shipped annually by 2022. As camera prices continue to fall, with several new products below $1000, sales are growing.

Surveillance is another interesting market. Until recently, thermal cameras have primarily been used in high-end surveillance for critical and government infrastructure. New municipal and commercial applications with lower price points are now arising, including traffic, parking, power stations and photovoltaic planning. We estimate this market will grow at almost 17% over 2017-2022 to reach 300,000 units by 2022.

Night vision in cars, including autonomous vehicles, could boost the microbolometer market. China is already a large market for automotive night vision, absorbing 25% of the total number of systems produced. In coming years, China will continue to account for a high share of this market."
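As a sanity check, the two Yole figures above are mutually consistent: $2.7B of 2016 revenue compounded at the stated 8% CAGR for six years gives about $4.3B, i.e. "almost US$4.4 billion" by 2022.

```python
def project(value, cagr, years):
    """Compound a starting value at a given CAGR over a number of years."""
    return value * (1 + cagr) ** years

# Yole: $2.7B in 2016, 8% CAGR, 2016 -> 2022
print(round(project(2.7, 0.08, 6), 2))  # 4.28 (billions USD)
```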

“The Company believes 3D sensing is among the most significant new features for the next generation smartphone. The Company’s SLiM product line, based on structured light technology, is a state-of-the-art total solution for 3D sensing. Himax’s goal is to provide total solutions with performance, size, power consumption and costs all suitable for smartphones and tablets. Himax offers fully integrated structured light modules, with the vast majority of the key technologies inside the module also developed and supplied by the Company. These critical in-house technologies include advanced optics utilizing the Company’s world leading WLO technology, laser driver IC, high precision active alignment for the projector assembly, high performance near-infrared CMOS image sensor and, last but not least, an ASIC chip for 3D depth map generation. The fact that all of these critical building blocks are developed in-house puts the Company in a unique position. Himax is able to react quickly and tailor its solutions to customers’ specific needs.

It also represents a very high barrier of entry for any potential competition and a much higher ASP for the Company. While the Company prefers to offer a total solution, it can also provide the aforementioned individual technologies separately to select customers so as to best accommodate their specific needs.

Thanks to the Company’s absolute technology leadership, its progress made with the fully integrated structured light 3D sensing total solution module is very exciting. Himax is seeing strong demand for 3D sensing solutions from numerous tier 1 customers. The Company is in close collaboration with select leading smartphone makers and partners right now, aiming to bring its total solution to mass production as early as early 2018 to meet the customers' aggressive launch timetables. Moreover, given that the Company is offering highly integrated solutions with ASPs much higher than those of individual components, by the time the Company starts shipping its total solutions, they will be a major contributor to both Himax’s revenues and profit, and consequently create a more favorable product mix for the Company.

Himax continues to make great progress with its two machine vision sensor product lines, namely, near infrared (“NIR”) sensor and Always-on-Sensor (“AoS”). The Company’s NIR sensor is a critical part in the structured light 3D sensing total solution. The Company’s NIR sensors’ overall performance is far ahead of those of its peers in 3D sensing applications. Himax currently offers low noise HD, or 1 megapixel, and 5.5 megapixel NIR sensors and is planning to add more to further enrich its product portfolio. Himax’s NIR sensors deliver superior quantum efficiency in the NIR range, especially over 940nm band which is critical for outdoor applications.

The Company’s AoS solutions provide super low power computer vision, which enables new applications across a wide variety of industries. The ultra-low power, always-on vision sensor is a powerful solution capable of detecting, tracking and recognizing its environment in an extremely efficient manner using just a few milliwatts of power. The Company is pleased to report that it already has one major global brand leveraging its AoS in their new high end TV models, which have already hit the market.

For the traditional human vision segments, Himax sees strong demand in notebooks and increased shipments for multimedia applications such as car recorders, surveillance, drones, home appliances, and consumer electronics, among others"

A: Due to the emergence of safety-critical applications, basic performance requirements have changed. Sensors need to work reliably for a long time. Operating temperature range, functional safety, security, power, and heat are all important parameters that a safety-critical system design needs to address.

The challenge for an image sensor for autonomous driving is: can the sensor still see well under all real-world conditions (working reliably, all the time)?

Q: Why can’t we just use dashcam footage on YouTube for finding problems?

A: Dashcam footage can be one data point, but an autonomous driving system requires much more real-world data and real-world simulation to get to the best, most robust, and safest autonomous car.

NHK Open House held in Tokyo in May 2017 revealed the latest TV innovations:

3D-integrated image sensor with per-pixel interconnect (together with University of Tokyo): "We have managed to shrink the pixels from the previous size of approximately 80 × 80 µm² to approximately 50 × 50 µm²."

Fast 8K 240fps image sensor: "We developed a 33 megapixel image sensor capable of high-speed operation and constructed a prototype 8K high-speed camera supporting shooting at 240 fps, which is four times the frame rate in 8K test satellite broadcasting. This camera enables the shooting of fast-paced action such as that in sports in 8K ultrahigh-definition video."

And many other innovations including an organic image sensor "with a charge multiplication photoelectric conversion film in order to achieve highly sensitive 8K cameras" and "organic image sensors with three organic films that provide sensitivity for each of the primary colors."

EETimes article on fabs mentions Apple imaging group activity: "Or, take the example of the imaging-tech landscape growing rapidly in the Grenoble/Lyon area. Pierre Cambou, activity leader for imaging and sensors at Yole Développement, explained that imaging technology innovation often demands advancements in new manufacturing techniques. In return, it creates a tech-driven environment.

Reportedly, more than a dozen Apple engineers are moving to Grenoble to open an R&D center. This is happening precisely because the region has the expertise in image sensors and production — led by ST. “You need to have factories” to make an ecosystem, said Cambou."

Sunil Kumar Singh, a lead analyst at Technavio, reports: "The growing popularity of augmented reality and virtual reality devices, 3D scanners, and gesture recognition technologies and the high investment in driverless cars by automobile manufacturers such as Ford, Nissan, and Tesla are expected to drive the global ToF market.

The demand for camera-enabled phones has been on the rise in South America and will drive the market for ToF sensors in this region. The replacement of CCD sensors with ToF sensors in many applications will also have a major impact on the ToF sensors market. The US, followed by Canada and Brazil, is the leading revenue generating country in the region owing to the early adoption of the technology"

Global time of flight sensor market is expected to grow at a CAGR of 3% from 2017-2021. The consumer electronics segment accounted for close to 52% of the ToF sensor market share in 2016.

Tuesday, August 01, 2017

Digitimes Research believes that Sony took a 45% share of the global CIS market in 2016, while Samsung grabbed a 15% share of the market. The global CIS shipment value will grow to $11.2b in 2017 from $10.4b in 2016. The market is forecast to grow to nearly $13.8b in 2020.

"Sales increased 41.4% year-on-year (a 38% increase on a constant currency basis) to 204.3 billion yen. This increase was primarily due to a significant increase in unit sales of image sensors for mobile products, as well as the absence of the impact of a decrease in image sensor production due to the 2016 Kumamoto Earthquakes in the same quarter of the previous fiscal year, partially offset by a significant decrease in sales of camera modules, a business which was downsized."

The forecast for 2017 fiscal year ending in March 2018 has been updated too:

"Sales are expected to be lower than the April forecast primarily due to lower-than-expected image sensor unit sales for mobile products, partially offset by the impact of foreign exchange rates. Operating income is expected to be higher than the April forecast mainly due to lower-than-expected production costs as well as the positive impact of foreign exchange rates, partially offset by the impact of the above-mentioned decrease in sales."

Monday, July 31, 2017

Extended Depth of Focus (EDoF) techniques were a popular topic 10-15 years ago, while mainstream camera phone resolution had not yet exceeded 2MP. However, EDoF companies were unable to scale their resolution beyond that point.

"A midwave infrared (MWIR) system is simulated showing that this design will produce high quality images even for large amounts of defocus. It is furthermore shown that this technique can be used to design a flat, single optical element, systems where the phase mask performs both the function of focusing and phase modulation."

"We discuss optical imaging capabilities and limitations, and present first prototypes and results. Modern 3D laser lithography and deep X-ray lithography support the manufacturing of extremely fine collimator structures that pave the way for flexible and scalable thin-film cameras that are far thinner than 1 mm (including optical imaging and color sensor layers)."

MIT Technology Review: Cheaper LiDARs may not deliver the quality of data required for driving at highway speeds:

"At 70 miles per hour, spotting an object at, say, 60 meters out provides two seconds to react. But when traveling at that speed, it can take 100 meters to slow to a stop. A useful range of somewhere closer to 200 meters is a better target to shoot for to make autonomous cars truly safe.

That’s where cost comes in. Even an $8,000 sensor would be a huge problem for any automaker looking to build a self-driving car that a normal person could afford."
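The numbers in the quote are easy to verify with a back-of-the-envelope calculation (the 0.5 g braking deceleration is my assumption for a firm highway stop, not a figure from the article):

```python
MPH_TO_MS = 0.44704  # miles per hour to meters per second
G = 9.81             # m/s^2

def time_to_reach(distance_m, speed_mph):
    """Seconds until a vehicle at speed_mph covers distance_m."""
    return distance_m / (speed_mph * MPH_TO_MS)

def braking_distance(speed_mph, decel_g=0.5):
    """Distance to stop from speed_mph at a constant deceleration (in g)."""
    v = speed_mph * MPH_TO_MS
    return v ** 2 / (2 * decel_g * G)

print(round(time_to_reach(60, 70), 1))  # 1.9 s to reach an object 60 m ahead
print(round(braking_distance(70), 1))   # 99.8 m to stop at 0.5 g
```

Both results match the article's "two seconds to react" and "100 meters to slow to a stop."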

Graeme Smith, chief executive of the Oxford University autonomous driving spinoff Oxbotica, told MIT Technology Review that he thinks a trade-off between data quality and affordability in the lidar sector might affect the rate at which high-speed autonomous vehicles take to the roads. Smith thinks that automakers might just have to wait it out for a cheap sensor that offers the resolution required for high-speed driving. “It will be like camera sensors,” he says. “When we first had camera phones, they were kind of basic cameras. And then we got to a certain point where nobody really cared anymore because there was a finite limit to the human eye.”

Sunday, July 30, 2017

I've prepared a genealogy of image sensor companies, with the kind help of EF and DG. As one can understand, nobody's knowledge is complete, so please feel free to add more info and correct mistakes in the comments. The link is also available in the left-hand side links, next to the image sensor companies list.

Friday, July 28, 2017

Essential startup tells what it takes to tune an image processing pipeline for a smartphone dual camera (RGB + monochrome):

"Objective tuning is meant to ensure that each camera module sent to production is operating at an acceptable baseline level. It began with picking the correct golden and limit samples from the factory.

The golden samples are the modules whose characteristics most closely align to the average of our camera and the experience that most of our users will have. Once golden samples were collected, we used them to capture a series of images under various laboratory-controlled test conditions. The images from the golden samples were then used to train the ISP to recognize the unique characteristics of those modules. In other words, we taught the ISP to see the world in a certain way. We also tested other limit and random samples, which have different characteristics that are saved in the factory calibration data, to ensure that they are behaving like the golden samples in those scenes too. The objective tuning process lasted three months. By the end, all of our cameras were responding to the predefined lab scenes in an accurate and predictable fashion.

But even when a camera can repeat actions in a lab, it still needs to be taken into the field— because in real life a camera must be able to take the right picture in millions of different scenarios. Subjective tuning is what makes this possible. It is a painstaking, iterative process—but also one I find incredibly rewarding.

Our subjective tuning process began in January 2017, and during that time, we have gone through 15 major tuning iterations, along with countless smaller tuning patches and bug fixes. We have captured and reviewed more than 20,000 pictures and videos, and are adding more of them to our database every day."

"Mainstream techniques usually take a matching window around a given pixel in the left (or right) image and given epipolar constraints find the most appropriate matching patch in the other image. This requires a great deal of computation to estimate depth for every pixel.

In this paper, we solve this fundamental problem of stereo matching under active illumination using a new learning-based algorithmic framework called UltraStereo. Our core contribution is an unsupervised machine learning algorithm which makes the expensive matching cost computation amenable to O(1) complexity. We show how we can learn a compact and efficient representation that can generalize to different sensors and which does not suffer from interferences when multiple active illuminators are present in the scene. Finally, we show how to cast the proposed algorithm in a PatchMatch Stereo-like framework for propagating matches efficiently across pixels."
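The paper's learned mapping is not reproduced here, but the flavor of an O(1) matching cost — replacing per-pixel window correlation with a compact binary code compared by Hamming distance — can be sketched with fixed pixel-pair comparisons (chosen randomly below; in UltraStereo they are learned). This is an illustration of the general idea, not the actual UltraStereo algorithm:

```python
import random

def binary_code(patch, pairs):
    """Map a flat image patch to a compact binary code by comparing
    pre-selected pixel pairs (a stand-in for a learned sparse mapping)."""
    code = 0
    for i, (a, b) in enumerate(pairs):
        if patch[a] > patch[b]:
            code |= 1 << i
    return code

def hamming(a, b):
    """O(1)-per-candidate matching cost between two binary codes."""
    return bin(a ^ b).count("1")

random.seed(0)
PAIRS = [(random.randrange(25), random.randrange(25)) for _ in range(32)]

left = [random.randrange(256) for _ in range(25)]        # 5x5 patch, flattened
right_same = list(left)                                  # matching patch
right_diff = [random.randrange(256) for _ in range(25)]  # non-matching patch

c = binary_code(left, PAIRS)
print(hamming(c, binary_code(right_same, PAIRS)))  # 0: identical patches match
print(hamming(c, binary_code(right_diff, PAIRS)))  # typically large
```

Once patches are reduced to 32-bit codes, comparing a pixel against all disparity candidates costs a handful of XOR/popcount operations instead of a full window correlation.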

"This thesis introduces hardware implementations and algorithms that use inspiration from deep learning and the advantages of event-based sensors to add intelligence to platforms to achieve a new generation of lower-power, faster-response, and more accurate systems."

"We expect ASMPT’s AA machine sales to grow only 10% YoY in 2018 and stay flat YoY in 2019, after 56% YoY growth in 2017 (Figure 8). Most camera module makers should upgrade their AA machines in 2017. Notably, we believe Apple will not implement 3D sensing for 4.7” and 5.5” iPhones in 2018. This means Apple supply chain will not procure new AA machines for 3D sensing from ASMPT in 2018 (i.e., ASMPT is benefiting from Apple’s adoption of 3D sensing for 5.8” OLED iPhone in 2017).

We estimate camera module makers could upgrade their AA machines every three years due to rapid specs migration in dual cameras for smartphones. This is shorter than the normal duration of five to six years for a CIS (CMOS image sensor) machine. However, ASMPT’s AA business could still see a sales growth deceleration in 2018/19, even assuming a shorter duration of AA machines."

Thursday, July 27, 2017

Two weeks after Light L16 computational camera shipments started, there is still not a single user review anywhere on the web. However, LightRumors notices that Light Co. has released a few full-resolution images on its web site. The images are processed using Light’s proprietary software, Lumen, which is powered by Light's proprietary Polar Fusion engine. The engine computationally fuses the many images captured by the L16 to create one high-quality image.

Wednesday, July 26, 2017

ST Micro reports Q2 2017 results. Regarding the imaging business, the company says "As anticipated, Imaging revenues in the second quarter decreased slightly on a sequential basis to $68 million, while we prepare for the ramp of new programs.

On a year-over-year basis, Imaging revenues increased 60% in the second quarter, and for the first half 2017 rose 83% to $140 million driven by ST’s innovative Time-of-Flight technology.

In the second quarter we continued to gain design-wins while delivering high volumes of our “FlightSense” Time-of-Flight proximity and ranging sensors to multiple smartphone OEMs. We now have reached cumulative shipments of over 300 million Time-of-Flight sensors and are in more than 80 smartphone models from 15 different OEMs.

In our Imaging business, we anticipate strong sequential growth, as the key new program ramps in Q3, followed by further revenue acceleration in the fourth quarter of this year."

EETimes speculates that the "key new program ramps in Q3" might mean ToF sensor in Apple iPhone 8.

SeekingAlpha publishes the earnings call transcript with a clarifying question in Q&A session:

Janardan Menon - Liberum Capital Ltd.

And just a brief follow-up on the Time-of-Flight, which is in your other division. After a big jump in the second half of last year, that revenue has sort of flattened out. But you are continuously reporting higher number of models and OEM on that particular product. And now I understand that from the second half, that revenue will increase sharply because of the 3D of the special program.

But just on the Time-of-Flight itself, can you give some reason why that revenue is not really rising as a number of model. Is that price pressure coming there? Or what are the dynamics which is happening there?
Carlo Bozotti - STMicro CEO:

I think on the Time-of-Flight, we have enormous number of customers in our end. Of course, we are also working on new technologies for the Time-of-Flight. So, there would be a new wave, but we are pretty happy that the growth is impressive in Imaging and we are investing a lot for the new initiative. This is visible of course in terms of expenses in the P&L, but we have now sort (47:46) the $300 million business of Time-of-Flight that we want to keep going and we have the opportunity. I think it's pretty good and it's a pretty good business. I would say it's very good business, but in parallel, we are investing on new things and this will make – will allow us to make another important step.

IMVEurope, Photonics: Princeton Infrared Technologies announces its first InGaAs SWIR camera to fall outside ITAR restrictions. The 1280SciCam features a 1,280 x 1,024-pixel image sensor on a 12µm pitch, offering long exposure times, low read noise, 14-bit digital output, and full frame rates up to 95Hz. The camera is designed for advanced scientific and astronomy applications, and is now classified by the Export Administration Regulations as EAR 6A003.b.4.a for export.

The US government’s export control regime has been going through a process of reform, which began in 2009 as part of the Obama Administration's Export Control Reform (ECR) initiative. The technology from Princeton Infrared no longer falls under ITAR control, which covers equipment specially designed or modified for military use, but now falls under the EAR. This, in theory, makes it easier to export the technology outside the USA.

Bob Struthers, sales director at Princeton Infrared Technologies, says: ‘Our 1280SciCam has already generated sales and applications with leading research entities overseas. An EAR export classification will propel our ability to serve these customers promptly and efficiently. This will be very valuable to their upcoming projects and equally beneficial to the growth of our young company.’

IMVEurope: A year ago, Xenics SWIR cameras were granted Commodity Jurisdiction (CJ) approval. This CJ means that all SWIR cameras supplied by Xenics are now ITAR-free in the US.

Presseagentur: Framos and Pyxalis extend their custom sensor design cooperation. The companies have been cooperating for several years and have now entered into a formal agreement. The partnership provides Framos partners with fully customized, high-performance sensors, covering sensor specification support, sensor architecture, design, prototyping, validation, industrialization, and manufacturing.

“We’re delighted to work with FRAMOS Technologies in Europe and North America. As a 7-year-old company supplying custom image sensors, we’ve built successful partnerships with customers in many applications from niche markets (aerospace, scientific, defense) to medium volume (industrial, medical) and consumer markets (biometrics, automotive). Thanks to this cooperation with FRAMOS, it is now time to reach a larger market and to provide our capabilities and technologies to a greater number of customers.” says Philippe Rommeveaux, PYXALIS’s President and CEO.

Monday, July 24, 2017

TechInsights keeps publishing parts from Ray Fontaine's presentation at IISW 2017. The third part reviews modern pixel-to-pixel crosstalk reduction measures: Front-DTI and Back-DTI:

The Sony dielectric-filled B-DTI structure from the 1.4 µm pixel, which features a 2.9 µm thick substrate, extends to a depth of 1.9 µm from the back surface, increasing to 2.4 µm at B-DTI intersections:

ETH Zurich and the University of Zurich also announce the Misha Award for achievements in Neuromorphic Imaging. The 2017 Award goes to the "Event-based Vision for Autonomous High-Speed Robotics" work by Guillermo Gallego, Elias Mueggler, Henry Rebecq, Timo Horstschaefer, and Davide Scaramuzza from the University of Zurich, Switzerland.

The high-resolution, ultra-thin, 500 dpi flexible image sensor (sensitive from visible to near infrared) has unique advantages in performance and compactness. Its ability to conform to three-dimensional shapes sets it apart from conventional image sensors. The device provides dual detection: fingerprinting as well as vein matching. Due to its large-area sensing and high-resolution image quality, the device is suited to biometric applications from fingerprint scanners and smartcards to mobile phones, where accuracy and robustness as well as cost-competitiveness are key.

Designed on a large-area (3” x 3.2”; 7.62 x 8.13cm) plastic substrate, the flexible image sensor is ultra-thin (300 microns) and therefore remarkably lightweight, compact and highly resistant to shock. Central to the 500 dpi flexible image sensor is an Organic Photodiode (OPD), a printed structure developed by Isorg that converts light into current, responsible for capturing the fingerprint. Isorg also developed the readout electronics, the forensic-quality processing software and the optics to enable seamless integration in products. FlexEnable, the leader in developing and industrializing flexible organic electronics, developed the Organic TFT backplane technology, an alternative to amorphous silicon. This partnership between the two companies began in Q4 2013.

Techbriefs magazine publishes an article "CMOS, The Future of Image Sensor Technology" by Gareth Power, Marketing Manager, Teledyne e2v. The main trends in industrial and scientific sensors are said to be higher speeds and lower prices. There is also a diagram of image sensor company spin-offs and mergers:

Some parts are not exactly correct here; for example, Avago was not spun off from Micron. Also, Far Eastern companies are missing: there is no Toshiba-Sony, no Siliconfile-Hynix, nor others. But as a first attempt at such a diagram, it looks really nice.

"Almost all current C-ToF systems use sinusoid or square coding functions, resulting in a limited depth resolution. In this paper, we present a mathematical framework for exploring and characterizing the space of C-ToF coding functions in a geometrically intuitive space. Using this framework, we design families of novel coding functions that are based on Hamiltonian cycles on hypercube graphs. Given a fixed total source power and acquisition time, the new Hamiltonian coding scheme can achieve up to an order of magnitude higher resolution as compared to the current state-of-the-art methods, especially in low SNR settings."
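For context, the "sinusoid coding" baseline that the paper improves upon is the classic 4-bucket phase-shift scheme used in most C-ToF cameras. The sketch below illustrates that conventional baseline only (not the paper's Hamiltonian codes); the function and variable names are my own, and the sample values are simulated rather than measured:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def sinusoid_tof_depth(c0, c1, c2, c3, f_mod):
    """Recover depth from four correlation samples taken at phase
    offsets of 0, 90, 180 and 270 degrees (the 4-bucket scheme)."""
    phase = math.atan2(c3 - c1, c0 - c2) % (2 * math.pi)
    return C * phase / (4 * math.pi * f_mod)

# Simulate a pixel viewing a target 1.5 m away at 20 MHz modulation.
f_mod, depth_true = 20e6, 1.5
phi = 4 * math.pi * f_mod * depth_true / C  # round-trip phase shift
ambient, amplitude = 100.0, 50.0
samples = [ambient + amplitude * math.cos(phi + k * math.pi / 2) for k in range(4)]
print(sinusoid_tof_depth(*samples, f_mod))  # ≈ 1.5
```

The depth precision of this scheme is set by how steeply the cosine correlation changes with phase; the paper's Hamiltonian codes replace the cosine with coding functions whose trajectories are longer in the same space, which is where the claimed order-of-magnitude resolution gain comes from.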