French-language site Challenges reports that ST has signed a huge contract with Apple for image sensors, which will boost the workload at its Crolles plants, where fifty new machines are being installed. The influx of this and other new orders brings the Crolles 200mm fab to a utilization rate of 100%, while the Crolles 300mm plant's utilization exceeds 80%. When questioned, the ST group declined to comment.

Tuesday, November 29, 2016

"Reimagined from the ground up, MediaTek’s ADAS system will feature cutting-edge, decentralized Vision Processing Unit (VPU) solutions to optimally handle large amounts of real-time visual streaming data. MediaTek employs Machine Learning to increase the accuracy and speed of detection, recognition and tracking, making it more comparable to human decision-making performance."

Reuters, NYT: Intel will provide a SoC for the autonomous vehicle systems that Delphi and Mobileye are developing together, Glen De Vos, Delphi's VP of Engineering, said. Delphi is testing autonomous driving technology in vehicles in Singapore. By the end of this year, Delphi hopes to choose a city in the United States in which to launch a test fleet of self-driving cars during 2017, De Vos said. The company is also looking for a test site in a European city.

Delphi and Mobileye will demo their self-driving vehicle system at CES in Las Vegas in January. That system will use current electromechanical laser imaging (LIDAR) technology, which is too expensive for use in consumer vehicles, he said. Delphi is also working with Quanergy Systems, a maker of solid-state LIDAR systems.

In SLVS-EC, the clock signal is embedded in the data and recovered by dedicated circuitry on the receive side. Since the signals are then less sensitive to skew, data can be transmitted at much higher rates and over much longer distances. Each SLVS-EC channel supports speeds of up to 2.304Gbit/sec. The result is that a sensor supporting the new standard can transfer data over eight links at a rate of 1.84GBytes/sec (80% of the full bandwidth due to 8b10b encoding).
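The quoted throughput follows directly from the lane rate, lane count, and 8b10b overhead; a quick sanity check:

```python
# Back-of-the-envelope check of the SLVS-EC throughput figures quoted above.
# Lane rate and lane count are from the text; 8b10b encoding carries 8 payload
# bits in every 10 transmitted bits, hence the 80% efficiency.
LANE_RATE_GBPS = 2.304        # raw line rate per lane, Gbit/s
LANES = 8
ENCODING_EFFICIENCY = 8 / 10  # 8b10b

raw_gbps = LANE_RATE_GBPS * LANES              # 18.432 Gbit/s on the wire
payload_gbps = raw_gbps * ENCODING_EFFICIENCY  # 14.7456 Gbit/s of pixel data
payload_gbytes = payload_gbps / 8              # bits -> bytes

print(f"{payload_gbytes:.2f} GByte/s")  # → 1.84 GByte/s, matching the article
```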

I'm not sure what the difference is between MIPI M-PHY and SLVS-EC, but Sony promotes its high speed:

Sunday, November 27, 2016

ARMdevices interviews Jem Davies, ARM VP of Technology, Imaging and Vision Group. Jem talks about imaging and vision strategy at ARM and says that Huawei will be one of the first licensees of its imaging and vision technology in smartphones:

Jem Davies' ARM TechCon 2016 presentation video posted on Nov. 22 has been updated, so that its ending is no longer cut off.

Saturday, November 26, 2016

AutoSens 2016 kindly permitted me to post a couple of slides from the SoftKinetic presentation "3D depth-sensing for automotive: bringing awareness to the next generation of (autonomous) vehicles" by Daniel Van Nieuwenhove. A good part of the presentation compares ToF with active and passive stereo solutions:

Friday, November 25, 2016

According to BusinessKorea sources, Samsung is contemplating splitting its semiconductor business into fabless and foundry divisions:

"Samsung Electronics’ System LSI business division is largely divided into four segments; system on chip (SoC) team which develops mobile APs, LSI development team, which designs display driver chips and camera sensors, foundry business team and support team. According to many officials in the industry, Samsung Electronics is now considering forming the fabless division by uniting the SoC and LSI development teams and separating from the foundry business."

Thursday, November 24, 2016

NikonRumors quotes Egami talking about a Nikon patent application with a 2-layered pixel array forming a cross-type PDAF: "Nikon patent application is to use the two imaging elements having different phase difference detection direction in order to achieve a cross-type AF."

"In general, the method that Hitachi employed for its lens-less camera uses a "moire stripe" that can be obtained by stacking two concentric-patterned films with a certain interval and transmitting light through them. The numerous light-emitting points constituting the image influence the pitch and orientation of the moire stripe. The location of light, etc can be restored by applying two-dimensional Fourier transformation to the moire stripe.

This time, Hitachi replaced one of the films (one that is closer to the image sensor) with image processing. In other words, one film is placed with an interval of about 1mm, but the other film does not actually exist. And, instead of using the second film, a concentric pattern is superimposed on image data."

Basler: After a successful conclusion of the evaluation phase and extremely positive customer feedback, Basler's first ToF camera is now entering series production. The VGA ToF camera is said to stand out for its combination of high resolution and powerful features at a very attractive price. This outstanding price/performance ratio puts the Basler ToF camera in a unique position on the market and distinguishes it significantly from competitors' cameras.

The Basler ToF camera operates on the pulsed time-of-flight principle. It is outfitted with eight high-power LEDs working in the NIR range, and generates 2D and 3D data in one shot with a multipart image comprised of range, intensity and confidence maps. It delivers distance values in a working range of 0 to 13.3 meters at 20fps. The measurement accuracy is +/-1cm over a range of 0.5 to 5.8 meters, and the camera consumes 15W of power.
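The pulsed time-of-flight principle reduces to one formula: distance = (speed of light × round-trip delay) / 2. A quick sketch (only the 13.3m range comes from Basler's specs; everything else is illustrative):

```python
# Minimal sketch of the pulsed time-of-flight principle:
# the camera times how long a light pulse takes to reach the target and
# return, and halves the light-travel distance.
C = 299_792_458.0  # speed of light, m/s

def distance_from_delay(round_trip_s: float) -> float:
    """Convert a measured pulse round-trip delay into a distance in meters."""
    return C * round_trip_s / 2

# The 13.3 m working range implies round-trip delays of up to ~89 ns:
max_delay = 2 * 13.3 / C
print(f"{max_delay * 1e9:.1f} ns")  # ≈ 88.7 ns
```

This illustrates why pulsed ToF needs sub-nanosecond timing resolution: the +/-1cm accuracy quoted above corresponds to about 67 picoseconds of round-trip delay.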

Tuesday, November 22, 2016

IEEE Electron Devices Society publishes a list of this year's awards. Jaroslav Hynecek receives the 2016 EDS J.J. Ebers Award "for the pioneering work and advancement of CCD and CMOS image sensor technologies." The award is to be presented at IEDM in December.

"Because of the stochastic nature of the photon arrivals, data acquired by QIS is a massive stream of random binary bits. The goal of image reconstruction is to recover the underlying image from these bits. In this paper, we present a non-iterative image reconstruction algorithm for QIS. Unlike existing reconstruction methods that formulate the problem from an optimization perspective, the new algorithm directly recovers the images through a pair of nonlinear transformations and an off-the-shelf image denoising algorithm. By skipping the usual optimization procedure, we achieve orders of magnitude improvement in speed and even better image reconstruction quality. We validate the new algorithm on synthetic datasets, as well as real videos collected by one-bit single-photon avalanche diode (SPAD) cameras."
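The transform–denoise–transform idea can be sketched in a few lines of numpy. This is not the paper's algorithm: the Anscombe-style variance-stabilizing transform and the simple box filter below stand in for the authors' nonlinear transform pair and off-the-shelf denoiser, and the jot model is an idealized one-bit Bernoulli sensor.

```python
import numpy as np

def box_blur(img: np.ndarray, r: int = 1) -> np.ndarray:
    """Simple 2D mean filter standing in for an off-the-shelf denoiser."""
    pad = np.pad(img, r, mode="edge")
    k = 2 * r + 1
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / k ** 2

def reconstruct_qis(binary_frames: np.ndarray) -> np.ndarray:
    """binary_frames: (T, H, W) array of 0/1 jot readings; returns image in [0, 1]."""
    T = binary_frames.shape[0]
    counts = binary_frames.sum(axis=0).astype(float)  # per-pixel 1-bit counts
    stabilized = 2.0 * np.sqrt(counts + 3.0 / 8.0)    # Anscombe-style transform
    denoised = box_blur(stabilized)                   # denoise in stabilized domain
    estimate = (denoised / 2.0) ** 2 - 3.0 / 8.0      # crude inverse transform
    return np.clip(estimate / T, 0.0, 1.0)
```

Summing T one-bit frames gives a binomial count per pixel, so for a scene of true intensity p the estimate converges to p as T grows; the stabilizing transform lets a generic denoiser handle the count-dependent noise.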

"Present-day photodiodes notably suffer from optical losses and generated charge carriers are often lost via recombination. Here, we demonstrate a device with an external quantum efficiency above 96% over the wavelength range 250–950 nm. Instead of a conventional p–n junction, we use negatively charged alumina to form an inversion layer that generates a collecting junction extending to a depth of 30 µm in n-type silicon with bulk resistivity larger than 10 kΩ cm. We enhance the collection efficiency further by nanostructuring the photodiode surface, which results in higher effective charge density and increased charge-carrier concentration in the inversion layer. Additionally, nanostructuring and efficient surface passivation allow for a reliable device response with incident angles up to 70°."

PRNewswire: According to CCS Insight's report on wearable tech, shipments of AR and VR headsets are forecast to grow 15 times to 96 million units by 2020, at a value of $14.5 billion. The report also indicates that the AR segment alone is expected to grow into a $1 billion business in 2017. As the technology continues to evolve, AR products are expected to become an enterprise market opportunity, unlike consumer-focused VR.

Apple is said to be jumping on the smart-glasses bandwagon and may launch a product by 2018. Bloomberg states that Apple has recently raised the idea in meetings with possible suppliers of components for augmented-reality glasses and "has ordered small quantities of near-eye displays from one supplier" for testing purposes. The device would connect to the iPhone and present images over the wearer's vision, a la Google Glass.

“We are committed to providing the best possible user experience to our customers, and for this reason we have partnered with Inuitive and Heptagon to create the most intelligent AR glasses available on the market,” said Chris Liao, CEO of HiScene. “The technologies implemented provide a seamless experience in a robust and compact format, without compromising on battery life.”

Inuitive’s NU3000 serves the AR glasses by providing 3D depth sensing and computer vision capabilities. The solution also acts as a smart sensor hub that accurately time-stamps and synchronizes multiple sensors, off-loading the application processor and shortening development time. “Inuitive’s solution allows Hiscene to provide the reliability, latency and performance its customers expect,” said Shlomo Gadot, CEO of Inuitive. “With Inuitive technology, AR products and applications can now be used outdoors without the sunlight interfering or damaging their efficacy thanks to cameras featuring depth perception.”

The new HiScene AR glasses feature an impressive array of cameras under the hood:

JCN Newswire: Hitachi develops a camera technology that captures video without a lens, using a film imprinted with a concentric-circle pattern instead, and can adjust focus after capture. Since it acquires depth information in addition to planar information, an image can be reproduced at an arbitrary point of focus even after it has been captured. Hitachi aims to use this technology in a broad range of applications such as work support, automated driving, and human-behavior analysis with mobile devices, vehicles and robots.

The Hitachi camera is based on the principle of Moiré fringes generated by the superposition of concentric circles. It combines the post-capture focus adjustment of a light-field camera with the thinness and lightness of a lensless camera, while reducing the computational load of image processing to 1/300. The two main features of the developed camera technology are described as follows.

(1) Image processing technology using Moiré fringes

A film patterned with concentric circles (whose intervals narrow toward the edge of the film) is positioned in front of an image sensor, and the shadow formed by light passing through the film is captured by the sensor. During image processing, a similar concentric-circle pattern is superimposed on the shadow, forming Moiré fringes whose spacing depends on the incidence angle of the light beam. From the Moiré fringes, the image can be reconstructed with a Fourier transform.

(2) Focus adjustment technology of captured images

The focal position can be changed by changing the size of the concentric-circle pattern superimposed on the shadow that the light beam forms on the image sensor through the film. Since this pattern is superimposed by image processing after image capture, the focal position can be adjusted freely.

To measure the performance of the developed technology, an experiment was conducted with a 1-cm2 image sensor and a film imprinted with a concentric-circle pattern positioned 1mm from the sensor. The results confirmed that video can be captured at 30fps when a standard notebook PC is used for image processing.
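The moiré principle in steps (1) and (2) can be reproduced with a toy numpy simulation. This is a sketch under simplifying assumptions, not Hitachi's implementation: a Fresnel-zone-plate-like cosine pattern stands in for the film, and a distant point source is modeled as simply shifting the shadow. The product of the shadow and a digitally superimposed copy of the pattern yields straight fringes whose frequency encodes the incidence angle, which appears as a peak in the 2D Fourier transform.

```python
import numpy as np

N = 256
beta = 0.01                           # controls how quickly the rings narrow
x = np.arange(N) - N // 2
X, Y = np.meshgrid(x, x)

def zone_pattern(cx, cy):
    """Concentric-circle (Fresnel-zone-plate-like) pattern centered at (cx, cy)."""
    return 0.5 * (1.0 + np.cos(beta * ((X - cx) ** 2 + (Y - cy) ** 2)))

shift = 30                            # shadow offset set by the light's incidence angle
shadow = zone_pattern(shift, 0)       # shadow the film casts on the sensor
moire = shadow * zone_pattern(0, 0)   # digitally superimpose the same pattern

spectrum = np.abs(np.fft.fftshift(np.fft.fft2(moire)))
spectrum[N // 2, N // 2] = 0.0        # suppress the DC term
py, px = np.unravel_index(np.argmax(spectrum), spectrum.shape)
recovered = abs(px - N // 2) * np.pi / (beta * N)
print(f"recovered shift ≈ {recovered:.1f} px")  # close to the true shift of 30
```

Scaling the digitally superimposed pattern, as in step (2), changes which source depth produces straight fringes, which is how the focal position is adjusted after capture.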

BusinessWire: Heptagon announces ELISA 3DRanger, calling it the world’s first 5m ToF 3DRanger Smart Sensor. ELISA more than doubles the previous maximum range of 2m under certain conditions. When integrated with a smartphone camera, ELISA enables applications like a virtual measuring tape, security features, people counting, augmented reality, and enhanced gaming. The extended range also improves auto-focus applications. Other new features include SmudgeSense, the company’s proprietary active smudge detection and resilience technology, and a 2-in-1 Proximity Mode.

The 5m distance range was achieved in normal office lighting conditions using high accuracy mode with the target object covering the full 29deg FOV. In Proximity Mode, distances between 10mm and 80mm are measured and the sensor provides a flag when a user-defined threshold is reached.

ELISA SmudgeSense uses patented ToF smudge pixels to detect high levels of smudge in real time and alert the user about the problem through software. Additional proprietary algorithms dynamically increase smudge resilience using the data provided by the smudge pixels. This leads to optimized system performance, not only during manufacturing test but also in real life.

Imaging technology, which is currently mainly cameras, is exploding into the automotive space, and is set to grow at % CAGR to reach US$7.3B in 2021.

Infotainment and ADAS propel automotive imaging.

Imaging will transform the car industry en-route to the self-driving paradigm shift.

A mazy technological roadmap will bring many opportunities.

“From less than one camera per car on average in 2015, there will be more than three cameras per car by 2021”, announces Pierre Cambou, Activity Leader, Imaging at Yole. “It means 371 million automotive imaging devices”.
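The per-car figures in the quote imply a growth rate that can be checked with quick arithmetic (a rough sketch: the 2015 and 2021 camera-per-car averages are taken as exactly 1 and 3, and the vehicle volume is implied, not stated by Yole):

```python
# Rough consistency check of the Yole quote: cameras per car growing from
# ~1 in 2015 to ~3 in 2021, and 371M automotive imaging devices in 2021.
cams_2015, cams_2021 = 1.0, 3.0
years = 2021 - 2015

cagr = (cams_2021 / cams_2015) ** (1 / years) - 1  # compound annual growth rate
implied_vehicles = 371e6 / cams_2021               # devices / cameras-per-car

print(f"{cagr:.1%}")                    # ≈ 20.1% per year for cameras per car
print(f"{implied_vehicles / 1e6:.0f}M") # ≈ 124M vehicles equipped
```

The implied vehicle count is in the same ballpark as annual global light-vehicle production, which makes the 371M-device figure internally consistent with "more than three cameras per car."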

Growth of automotive imaging is also being fueled by the park-assist application, and 360° surround-view camera volume is therefore skyrocketing. While a rearview camera becomes mandatory in the United States by 2018, that uptake is dwarfed by 360° surround-view cameras, which enable a “bird’s eye view” perspective. This trend most benefits companies like OmniVision at the sensor level, and Panasonic and Valeo, which have become some of the main manufacturers of automotive cameras.

Mirror-replacement cameras are currently the big unknown, and their take-off will primarily depend on consumer appeal and car-design regulations. Europe and Japan are at the forefront of this trend, which should become only slightly significant by 2021.

Solid-state LIDAR is much talked about and will start to be found in high-end cars by 2021. Cost reduction will be a key driver as car manufacturers feel the push for semi-autonomous driving more strongly.

LWIR-based night-vision cameras were initially perceived as a status symbol. However, they are increasingly appreciated for their ability to automatically detect pedestrians and wildlife, and LWIR solutions will therefore become integrated into ADAS systems in the future. For their part, 3D cameras will be limited to in-cabin infotainment and driver monitoring. This technology will be key for luxury cars and is therefore of limited use today.