Imaging Application Overviews

Biography: Jean-Luc Jaffard graduated from the École Supérieure d'Électricité in Paris in 1979.
He started his career in 1980, joining the Thomson Semiconductor Bipolar Integrated Circuits Division as an analogue and mixed-signal designer for consumer applications.
In 1987, after the creation of SGS-Thomson Microelectronics (the merger of Thomson Semiconductor and SGS Microelettronica), he became TV Design Manager in the Video Division, coordinating the development of the analog TV and VCR product family.
From 1996, Jean-Luc Jaffard paved the way for the imaging activity at STMicroelectronics, at the forefront of the emergence and growth of this business.
At the STMicroelectronics Imaging Division he was successively appointed Research, Development and Innovation Director, managing a large multidisciplinary and multicultural team, and later promoted to Deputy General Manager and Advanced Technology Director, in charge of identifying and developing breakthrough imaging technologies and transforming them into innovative and profitable products.
In 2010 he was appointed STMicroelectronics Intellectual Property Business Unit Director.
In January 2014 he created the Technology and Innovation branch of Red Belt Conseil, bringing expertise in the optimisation of complex and innovative solutions to develop competitive products.

09:00

Introduction

Jean-Luc Jaffard, Redbelt Conseil

09:15

Keynote

From pervasive sensing to operational efficiency: a path towards the Internet of Everything

Pascal Brosset
Sr VP Innovation
Schneider Electric Industries

Abstract: Pascal Brosset, Chief Technology Officer of Schneider Electric, will outline the roadmap of a products-and-systems company leveraging the combined opportunities of IoT, cloud computing and big data analytics to become an energy and operational efficiency solutions provider.

CV of presenting author: Pascal Brosset joined Schneider Electric as Chief Technology Officer and SVP Innovation in July 2010, coming from a software and high-tech background.
Prior to joining Schneider Electric, Pascal was Chief Strategy Officer of SAP AG, where he held multiple business development and strategy positions over 10 years. His international career started at Hewlett-Packard, followed by senior positions in consulting firms, leading large-scale "technology-led transformations" across multiple industry sectors.
Pascal holds a Master's degree in Engineering from the École Polytechnique Fédérale de Lausanne (Switzerland) and graduated in executive development from INSEAD and Ashridge Business School.

09:50

Imaging and Telecommunications

Rahul Swaminathan
Senior Expert
Telekom Innovation Laboratories

Abstract: Imaging and telecommunications are two vast fields of research and development in their own right. Hence, in order to tie them together and juxtapose the role of imaging within telcos, we shall first provide an overview of imaging itself. Imaging encompasses a wide range of technologies spanning decades of research and development. In this talk we shall focus primarily on computational imaging.
We begin with an overview of imaging sensors and challenge our basic notion of what a "camera" is. Thereafter, we present various applications of these imaging sensors, from image recognition to 3D reconstruction, all of which are highly relevant to the telecommunications industry.
Advances in hardware, sensors and algorithms have today enabled real-time computational vision systems to operate on mobile handheld devices. Cameras that started out as fancy add-ons on mobile phones now generate the most content on data networks.
Based on these observations, excerpts from a recent study shall be presented. The study looked into the future of such enabling technologies and the services and challenges they bring to society. With the current trend of visual consumption and the growing appetite for it, we must ask ourselves whether our systems are ready to deal with the challenges of tomorrow. More importantly, in what way can we as a telco play a bigger role in the future of imaging and telecommunications?

CV of presenting author: Dr. Rahul Swaminathan has been a Senior Expert (Scientist) at the Telekom Innovation Laboratories of Deutsche Telekom since 2005. He obtained his B.E. in computer engineering from Pune University (India) in 1996 and his PhD in computer vision from Columbia University, New York, in 2003. His PhD thesis, "Non-Perspective Imaging Systems", focused on imaging geometries that go beyond the pinhole (perspective) model, developing new image formation models as well as designing new sensors for general catadioptric imaging. Over the last few years his research has spanned a wide area, including camera networks for 3D reconstruction, human-computer interaction, augmented reality and, of late, machine learning, resulting in various publications and patents.

10:25

Coffee Break

11:00

Driving solutions - intelligent sensor systems

Berthold Hellenthal
Robust Design, Semiconductor Strategy
Audi

Abstract: To facilitate piloted driving, a car needs "supernatural" abilities beyond the human senses: IR/night vision, radar, ultrasonic, laser scanners and cameras, to name some of the sensors necessary for mapping the surroundings and the driving situation. All these "new" senses are enabled by semiconductors. Still, it is not the semiconductor the OEM is looking for, but a sensing solution. Taking into account the growing possibilities of packaging, i.e. system-in-package and system-in-module, and More-than-Moore advances, intelligent sensor systems can be developed, offering a new solution space. This miniaturization facilitates an advanced and updateable technology platform that takes advantage of the latest semiconductor developments, allowing new functions and faster time to the end customer, in a smaller space, at equal or lower cost. This new solution dimension changes the value chain as well as the competences needed to develop, validate and mass-produce intelligent sensor systems. Semiconductor companies will need to understand the application and provide ecosystems and solutions, just as Tier 1 companies will need semiconductor packaging know-how and experience. The industry will change or be changed by new players. The presentation will explain the OEM perspective and semiconductor strategy using an imaging sensor example.

CV of presenting author: Dipl.-Ing. Berthold Hellenthal joined Audi in 2008 as a member of the management team. Working in the Electronic Development Department, he comprehensively supports all Audi electronic development out of a competence center, especially in the areas of hardware reviews, analysis and semiconductors.
Mr. Hellenthal is also responsible for the comprehensive Audi semiconductor strategy, the Audi Progressive SemiConductor Program (PSCP).

11:35

Imaging in ophthalmology: From eye astronomy to artificial retina for visual restoration in blind patients

Serge Picaud
Directeur de recherche
Institut de la vision

Abstract: Imaging technologies are important for assessing the progression of retinal diseases. Imaging data were recently accepted as endpoints in clinical trials. Medical devices aiming at restoring vision in blind or visually impaired patients also introduce new imaging technologies. However, at video rates, the sequential frames of visual information cannot produce biomimetic visual information processing.
The presentation will illustrate the development of new technologies for restoring vision in patients by either medical devices or an alternative strategy based on the archaic visual systems of algae and bacteria. In both cases, visual information must be encoded in an external medical device. To achieve biomimetic visual processing, we have used asynchronous visual sensors, which sample information along the time axis rather than the light-intensity axis, thereby providing high temporal precision (µs to ms). The choice among the different forms of visual restoration depends strongly on the state of the residual retina. The presence of non-photosensitive "dormant" photoreceptors was, for instance, demonstrated in blind patients using optical coherence tomography. Other imaging technologies are emerging to assess the state of retinal diseases. For instance, adaptive optics, which relies on technologies developed by astronomers to compensate for atmospheric distortions, has been applied in ophthalmology to compensate for eye distortions. It allows very precise photoreceptor cell counting, which has already been used in a clinical trial. It can also document very precisely the movement of the front edge of damaged areas, thereby reducing the time needed to demonstrate drug efficacy.
Imaging technologies are therefore essential for the future of ophthalmology, not only for assessing diseases and validating therapies but also for the development of innovative medical devices to restore or improve vision.
Supports: FAF, FFB, FRM, ANR, EC (NEUROCARE).
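The asynchronous sampling principle described above can be sketched in a few lines. The toy model below (illustrative only; function names, signal values and the threshold are assumptions, not the Institut de la Vision's actual encoder) has a pixel emit a timestamped ON/OFF event whenever its log-intensity changes by more than a fixed threshold, instead of reporting intensity at fixed frame times:

```python
import math

def events_from_signal(samples, threshold=0.15):
    """Toy asynchronous (event-based) pixel.

    samples: list of (time_s, intensity) pairs for one pixel.
    Returns a list of (time_s, polarity) events, polarity +1 (brighter)
    or -1 (darker), emitted each time log-intensity crosses the threshold.
    """
    events = []
    ref = math.log(samples[0][1])  # reference log-intensity at last event
    for t, intensity in samples[1:]:
        delta = math.log(intensity) - ref
        # One intensity step may cross the threshold several times.
        while abs(delta) >= threshold:
            polarity = 1 if delta > 0 else -1
            events.append((t, polarity))
            ref += polarity * threshold
            delta = math.log(intensity) - ref
    return events

# A pixel that brightens, then darkens: ON events followed by OFF events.
sig = [(0.000, 1.0), (0.001, 1.5), (0.002, 2.0), (0.003, 0.9)]
print(events_from_signal(sig))
```

The timestamps, not the intensity values, carry the information, which is what gives such sensors their µs-to-ms precision.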

CV of presenting author: Serge Picaud (Directeur de recherche, INSERM) currently heads the "Retinal information processing" team at the Vision Institute in Paris. He aims at understanding normal vision and developing new therapeutic or rehabilitation strategies. He supported Prof. Sahel in the creation of the Paris Vision Institute and is a co-founder of the start-ups Fovea Pharmaceuticals, Pixium Vision and GenSight Biologics.
Serge Picaud and his team members have characterized the physiology of photoreceptors and the mechanisms involved in their degeneration. They elucidated the retinal toxicity of the anti-epileptic drug vigabatrin (Sabril), showing a taurine depletion in animals and patients. Recently, the team has moved to developing strategies for restoring vision in blind patients who have lost their photoreceptors. They have proposed an innovative high-resolution 3D retinal implant design based on mathematical modelling and in vivo rat validation. New materials such as diamond and graphene are being evaluated for their biocompatibility and electrochemical window. In addition, new visual information encoding systems were generated from dynamic vision sensors to mimic the high retinal dynamics. Finally, reactivation of residual neurones was demonstrated in collaboration with Dr Roska using optogenetic tools originating from algae or bacteria. Their ongoing studies are testing the efficacy and safety of these optogenetic proteins in non-human primates prior to clinical trials.

2013-2018 Markets & Applications for CMOS Image Sensors

Abstract: The talk will review current and future trends in markets and applications for CMOS image sensors. Compared to the market landscape a few years ago, the CMOS image sensor industry is growing fast, with new applications emerging while the industrial landscape consolidates. We forecast the CMOS image sensor market to grow at a 10% CAGR in revenue over the 2013-2018 period, from $7.8B in 2013 to $12.8B in 2018. We will review the different applications driving the integration of CMOS image sensors (consumer, automotive, medical, etc.). While mobile handsets accounted for ~66% of total shipments in 2013, many other new applications are set to drive the future growth of this industry. For example, tablets will also contribute significantly to market growth, with a 17% CAGR over 2013-2018. Tablets are poised to boost sales of CIS in the consumer market because, like mobile phones, the majority of tablets include one or two cameras. For automotive, upcoming regulations promoting greater safety through driver assistance have car manufacturers planning to equip their cars with one to six image sensors. That demanding market is expected to reach hundreds of millions of US dollars in 2018. We will discuss the other emerging applications set to drive future growth of this industry, such as wearable electronics (e.g. smart watches), machine vision, security & surveillance, and medical applications. These applications are likely to be positioned for strong growth in the mid-term. We will also present how this industry is reorganizing, with the emergence of major IDM companies pushing other companies towards a fabless/fab-light strategy. This will drive the emergence of highly specialized foundries in niche applications.
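As a purely illustrative consistency check of the headline figures quoted above (the helper function is hypothetical, not from the talk), the $7.8B-to-$12.8B trajectory over the five years 2013-2018 does imply a compound annual growth rate of roughly 10%:

```python
# Sanity check of the quoted market forecast: what CAGR is implied by
# growing from $7.8B (2013) to $12.8B (2018)?

def implied_cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by start and end values."""
    return (end / start) ** (1.0 / years) - 1.0

cagr = implied_cagr(7.8, 12.8, 5)  # revenue in $B, 2013 -> 2018
print(f"implied CAGR: {cagr:.1%}")  # ~10.4%, consistent with the quoted ~10%
```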

CV of presenting author: Frédéric Breussin is responsible for the MEMS and Sensors activity. He has supported many companies in their innovation and product development strategy, bridging microsystems technologies and their applications in the consumer, automotive, industrial, life sciences, diagnostics and medical device industries. He holds an engineering diploma from INSA Rouen and a DEA in fluid mechanics from the University of Rouen.

12:45

LUNCH

Session 2

Imaging Technology Overviews

13:35

Introduction

Jean-Luc Jaffard, Redbelt Conseil

13:40

Keynote

CMOS Image Sensors: Now and Future

Eric R. Fossum
Professor
Dartmouth

Abstract: The talk will discuss the CMOS image sensor used in smartphone cameras, DSLRs, webcams, automotive imaging, medical imaging and numerous other applications. The operation of the CMOS image sensor will be addressed, followed by special issues in fabrication. Trends in CMOS image sensors will be described, and the speaker's work on a possible third-generation image sensor technology, the Quanta Image Sensor, will wrap up the talk.

CV of presenting author: Prof. Fossum is the primary inventor of the CMOS image sensor used in billions of camera phones, DSLRs and many other applications, work done while at the NASA Jet Propulsion Laboratory at Caltech. He co-founded and led Photobit to further develop and commercialize the technology, which was eventually acquired by Micron. Subsequently he was CEO of Siimpel, which developed MEMS devices for the autofocus function in camera phones. He joined the Dartmouth faculty in 2010. He holds over 150 U.S. patents and has published over 260 papers. He was inducted into the National Inventors Hall of Fame and the National Academy of Engineering, and is a Charter Fellow and Director of the National Academy of Inventors. He co-founded the International Image Sensor Society and served as its first President.

14:15

French infrared technologies offering competitive edges to the imaging sensor business

David Billon-Lanfrey
CTO
Sofradir

Abstract: Last year, Sofradir extended its technology portfolio by consolidating all the IR technologies available in France. These different technologies (HgCdTe, InSb, GaAs QWIP, InGaAs and a-Si microbolometers) are complementary and are used depending on the needs of the application.
Infrared (IR) detector R&D is driven by the same trends already experienced in the visible market: decreasing pixel pitch and increasing array format, and decreasing detector prices thanks to High Operating Temperature (HOT) studies. This paper presents recent developments in HOT and small-pixel-pitch technologies, supported by the long-term R&D relationship with CEA-Leti.

CV of presenting author: Mr. Billon-Lanfrey was appointed Chief Technology Officer at Sofradir in 2011; he formerly headed the company's R&D optronics characterization team for five years. Before that, he served for 12 years as project manager for R&D and product development. Mr. Billon-Lanfrey is a graduate in optronics from Joseph Fourier University in Grenoble.

14:50

What's aside of the Megapixel race: Imager & Photonics Process Development for Mass Production

Abstract: This talk discusses serving the new, growing opportunities beyond megapixel mobile applications, breaking with the need to integrate more and more pixels on the same piece of silicon. We will see how the advanced R&D work to optimize the opto-electrical performance of small pixels can fuel the development and mass-volume industrialization of dedicated sensors and cameras for specific markets, such as automotive or medical, and how alternative photosite technologies such as embedded SPADs can be better suited to some dedicated applications.

CV of presenting author: Krysten is CMOS and CIS process manager for the Imaging Division at STMicroelectronics. He holds an M.Sc. from the École Nationale Supérieure de Physique de Grenoble (ENSPG), obtained in 2001, and one from Joseph Fourier University with a semiconductor physics specialty, obtained in 2002. From 2002 to 2007, he worked as a test and characterization engineer for Philips Semiconductors, focusing on the analog behavior of advanced CMOS devices. In 2007, he joined ST and its Wireless Business Division, supporting silicon and package process development for power and AMS products and then for RF and baseband applications. In 2011 he moved to the Imaging Division, where he is currently in charge of CMOS and CIS process development in collaboration with the Technology R&D group.

15:25

Coffee Break

15:55

Evolution of design and manufacturing of optical modules for mobile phones

Jean Pierre Lusinchi
CTO AOEther
Asia Optical Ether

Abstract: More than 80% of the lenses produced today are used in cameras for mobile phones. The particular constraints they impose in terms of physical dimensions and performance are thus the major driver of design and manufacturing techniques for lenses used in the visible spectrum. However, some emerging applications are extending the spectrum to the near infrared (NIR), and even to the far infrared (FIR), which requires different lens materials and different design techniques.
Almost all lenses produced today are based on Snell's law of refraction, and their performance is limited by this law, particularly the depth of field, which imposes the use of autofocus or of alternative techniques such as those implemented in light-field cameras. Other limitations stem from the very structure of the sensors, which sample the image, and from the pixel size and f-number, which constitute an unsurpassable limit to resolution. Within this limit, the demand for improved performance fosters many developments in image processing, made possible by increased computing power and fast access to large memories.
Nevertheless, before any post-processing is applied, lenses are becoming more and more complex, using aspheric surfaces to correct geometric aberrations and sophisticated materials to correct chromatic aberrations and thermal drifts. Difficult tradeoffs are often necessary between performance and manufacturing constraints.
Another way to improve performance would be a curved sensor matching the field curvature.
We review these performance limitations and discuss the different solutions, and we give a short overview of lens manufacturing techniques in glass, injection-molded plastic or thermoset materials, discussing their advantages in cost and performance.
Finally, we touch on the development of new kinds of lenses, flat lenses or pinhole "lenses", addressing specific applications, with their advantages and present limitations.
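The resolution limit set by pixel size and f-number mentioned above can be illustrated with the standard Airy disk formula, d ≈ 2.44 λN. The numbers below are assumed values for illustration, not figures from the talk:

```python
# Diffraction limit illustration: first-zero Airy disk diameter d = 2.44 * wavelength * N,
# where N is the f-number. Once pixels shrink well below this spot size,
# smaller pixels no longer translate into more resolved detail.

def airy_disk_um(wavelength_um: float, f_number: float) -> float:
    """First-zero Airy disk diameter in micrometres."""
    return 2.44 * wavelength_um * f_number

# Assumed example: green light (0.55 um) through an f/2.0 mobile-phone lens.
d = airy_disk_um(0.55, 2.0)
print(f"Airy disk ~ {d:.2f} um")  # ~2.68 um across
```

With a spot of this size, pixels in the 1.1-1.4 µm range typical of phone sensors already sample near or beyond the diffraction limit, which is why further pixel shrinking yields diminishing returns.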

CV of presenting author: Based at ARM's Cambridge HQ, Tim is a graphics and GPGPU engineer working within the Media Processing Group. He specialises in all things compute, with a particular focus on computer vision within the mobile and embedded space. His role encompasses working with developers new to the Mali GPU, helping spread the word about optimal heterogeneous software design on this exciting platform. Previously Tim worked as a producer for the BBC, leading a research and development team in the use of multimedia in television training. Tim is married with two children and lives in the Chiltern Hills just outside London.

17:05

Wavelens - Shaped for Sharpness

Arnaud Pouydebasque
Co-Founder and Product Development VP
Wavelens

Abstract: In the camera and optics miniaturization trend (mainly led by camera phones), optical performance must not be neglected; quite the opposite. Image quality is increasingly important, and complex optical functions such as autofocus, image stabilization and zoom are becoming essential. Wavelens is leveraging MEMS technologies to provide its customers with compact, slim and high-speed solutions to help them develop and integrate such complex optical functions easily. With their low actuation voltage and high power efficiency, Wavelens' optical MEMS offer a cost-effective solution to customers.

CV of presenting author: Arnaud Pouydebasque received the M.Sc. and Ph.D. degrees in materials science and electrical engineering from the Institut National des Sciences Appliquées (INSA), Toulouse, France, in 1997 and 2001, respectively. From 2002 to 2007, he worked for Philips Semiconductors (now NXP Semiconductors) in Crolles, France, where he focused on the integration of advanced CMOS technologies. In 2007, he joined the MEMS department of CEA-Leti, Grenoble, France, to develop optical microsystems such as variable-focus liquid lenses. In 2012, he co-founded Wavelens, where he is currently in charge of product development.

17:15

Specialized Design House for High-Performance CMOS Image Sensors

Philippe Rommeveaux
CEO & President
Pyxalis

Abstract: Pyxalis (Grenoble, France) is a leading company in CMOS image sensor development, serving a wide range of demanding applications in markets such as medical, machine vision, security, photography, aerospace and more. Today's challenges in high-performance CMOS image sensors will be briefly reviewed during the presentation.

CV of presenting author: Philippe Rommeveaux is Pyxalis CEO; he co-founded the company in 2010 together with colleagues formerly of e2v Grenoble.
A 1993 graduate of the École Supérieure d'Optique (Orsay, France), he then pursued a PhD in display technology in collaboration with Pixtech, a start-up company. After graduating in 1998, he joined Thomson Specific Semiconductors, where he worked on CCD image sensor development for earth observation. As the company successively became Atmel Grenoble and then e2v, he managed the business development team in charge of new activity set-up, M&A and innovation. In 2007, he was appointed general manager of the Medical, Industrial and Emerging Imaging products business unit, covering marketing, CMOS/CCD image sensor and systems development, and product engineering.

17:25

Imaging applications based on organic materials

Alain Jutant
President & CEO
NikkoIA

Abstract: Organic materials have so far been used mainly in photovoltaics and OLEDs. A new sensor technology based on organic materials and thin-film processes is available from NikkoIA to replace, at lower cost, some existing large-area image sensors and to enable new CMOS image sensors sensitive not only in the visible and near-infrared but also in the short-wave infrared (beyond the cut-off of silicon). This paves the way for new imaging and vision applications, to be reviewed during the presentation.

CV of presenting author: Before becoming President of NikkoIA SAS, Alain Jutant successively held several positions: CEO and VP of business development of a start-up company, director of business development for Asia and product line director in stock-listed companies, and director of innovation and strategy and industrial marketing manager in subsidiaries of large industrial corporations.
He has 28 years of experience in semiconductors and in visible, X-ray and infrared image sensors, acquired across many different professional markets (medical, industrial, graphic arts and digital photography) as well as the mobile phone, computing, automotive, security and consumer markets. Alain has developed a vast network in these markets and also has strong knowledge of company operational organization and management by projects, 3-5 year strategic development planning, investor relationship management, R&D collaboration programs and strategic alliance set-up, as well as sales contracts and legal negotiation.
Alain Jutant graduated from the École Centrale de Lyon in 1985 with a degree in electronics and microelectronics.

Abstract: MultiX is a technology start-up that designs, produces and sells advanced spectrometric X-ray detectors, used for the identification of materials in general, and for non-destructive testing (NDT) and the detection of explosives in luggage and packages in particular.
The company was created in October 2010 by Jacques Doremus and Patrick Radisson, both from the Thales group, with the support of the French CEA (Atomic Energy Commission). MultiX supplies X-ray system manufacturers with ME100 X-ray detectors as part of a complete data acquisition system that upgrades current X-ray systems and allows them to perform better. The new multi-energy spectrometric X-ray detectors are also applicable to non-destructive testing (NDT) applications, such as food and waste product processing, where system performance can also be significantly improved, bringing a quick return on investment. MultiX's spectrometric X-ray detector technology has proved that it brings significant performance improvements to current X-ray systems and has hence gained acceptance within the security market. OEMs have started developing X-ray security systems based on the technology. More recently, NDT, and in particular food contaminant detection, has opened up opportunities thanks to the capability to efficiently segregate raw materials.
The company benefits from important partnerships with X-ray system manufacturers and with CEA-Leti, a French national laboratory, in the field of spectroscopic imaging. MultiX is also active in various French and European research and development programs.

CV of presenting author: Patrick Radisson, co-founder & CTO, holds an engineering degree in electronics (ENST Paris), an advanced degree in microelectronics and an MBA (IAE). He has 30 years of experience in detection and imaging, microelectronics and micro-technologies through different positions in large companies and SMEs.
His background includes more than 18 years in the solid-state infrared detector field (Sofradir), managing the complete product cycle from development to production and actively driving the development and industrialization of new technologies.
He also managed engineering and production in the emerging MEMS/MOEMS field within a French start-up (PHSMEMS).
He was formerly head of Advanced Studies at Thales XRIS in the field of X-ray and THz detectors.

17:45

Networking Reception

Wednesday, October 8, 2014

Session 3

Consumer

Biography: Andreas Bräuer obtained his PhD in physics in 1978 from the University of Jena (Germany) in the field of solid-state physics. Since 1986 he has been working in the field of micro-optics. He contributed to the investigation of linear and nonlinear effects in waveguides, mainly realized in polymers such as PPV or ORMOCERs. His particular interest was devoted to waveguide arrays and the demonstration of specific dispersion phenomena. Later he became more engaged in free-space micro-optics.
He is currently head of the Department of Micro-Optical Systems at the Fraunhofer Institute for Applied Optics and Precision Engineering (IOF) in Jena, Germany. The main topics of his research and development are new principles of LED- and semiconductor-laser-based illumination systems using micro- and nano-optical technologies, as well as new imaging micro-optical systems. He is engaged in both single-channel micro-optical imaging and new principles of insect-inspired imaging with multichannel systems. All these activities are devoted to applied research. He is author or co-author of more than 120 papers in scientific journals and of more than 90 papers at international conferences.

09:00

Introduction

Andreas Bräuer, Director Micro Optical Systems, IOF

09:05

Wafer-level technologies for imaging and sensing applications in mobile devices

Markus Rossi
Chief Innovation Officer
Heptagon Advanced MicroOptics

Abstract: Mobile devices are benefiting from unprecedented innovations that deliver myriad benefits, including radically reduced height and total footprint, enhanced image quality from computational cameras that combine greater depth sensing with video-stream information, and even entirely new use cases for smartphones, tablets, wearables and other mobile consumer electronics.
Advanced packaging concepts, including those originally developed for wafer-level optics products, are helping to address these continually increasing requirements of performance, functionality and miniaturization for mobile devices' opto-electronic sensor modules, with very flexible, efficient and highly precise packaging processes.
Wafer-level processes offer ultra-high precision and can achieve very tight tolerances, which in turn can enable entirely new functions. In addition, wafer-level processes enable a higher level of integration through miniaturization, which is especially useful for devices with multi-sensing applications (such as medical devices) that may require proximity, temperature, gesture or humidity sensing capabilities in a single device with a limited footprint.
The presentation will include an overview of the basic process steps, typical tolerances and features of the Wafer-Level Integration (WLI) technology and processes, as applied to the production of a wide range of miniature opto-electronic modules for mobile devices - including light sensors, computational camera modules, illumination modules, MEMS devices, infrared beam shaping, and supporting gesture control in natural user interfaces, including time of flight systems and other motion sensing applications.
Examples of these various applications - such as small footprint camera modules, computational imaging arrays that enable both HD video and high-quality depth maps, dual LED flash systems to enhance image quality and new forms of sensing for natural user interfaces - will be presented.

CV of presenting author: Formerly head of CSEM Zurich's Replicated Micro-Optical Elements group, Markus became CTO of Heptagon after CSEM's micro-optics division was acquired by Heptagon in 2000. He is an expert in fabricating diffractive and refractive micro-optic components for industrial applications in the European and US markets. Markus holds a Ph.D. from the University of Neuchâtel, Switzerland, and a master's degree in physics from ETH Zurich.

09:25

Multi aperture camera module with 720p-resolution using microoptics

Andreas Brückner
Senior Scientist
Fraunhofer IOF

AbstractThe slim design of portable electronic devices (e.g. smartphones) creates a constant need for miniaturized camera systems. This trend pushes the shrinking of opto-electronic, electronic and optical components. While opto- and micro-electronics have made tremendous progress, the technology for the miniaturization of optics still struggles to keep up. The demands for higher image resolution and a large lens aperture (both driven by shrinking pixel size) conflict with the need for a short focal length and a simple, compact design. These conditions impose high demands on the fabrication technology, especially considering that it has to meet one-hundredths of a percent relative accuracy. Wafer-level optics (WLO) fabrication for camera lenses is a promising candidate, enabling high-volume production at low cost. However, the resolution currently available with WLO technology is limited to 1MP per lens due to material and process control issues.
We propose an alternative lens design using a multi-aperture scheme which captures different portions of the field of view (FOV) in separate optical channels. The partial images are joined digitally to reconstruct an image of the full FOV. The segmentation partly decouples the trade-off between focal length and size of the FOV. The advantage is twofold: a short total track length is achieved and the microlenses are easier to manufacture. The realization of such multi-aperture objectives is feasible with adapted micro-fabrication techniques such as diamond milling, step-and-repeat micro-imprinting and UV-molding. Alignment and assembly are partially carried out at wafer level. The optical design, technological realization and testing of such a multi-aperture system are discussed for the example of a 2mm-thin camera module with 720p resolution.

CV of presenting authorAndreas Brückner graduated in physics from the Friedrich-Schiller-University Jena in
2006. Since then, he has been working as a researcher & optical engineer in the department of Microoptical Systems at the Fraunhofer Institute for Applied Optics and Precision Engineering (IOF) in Jena. In 2011, he received his PhD degree in applied optics from the Friedrich-Schiller-University Jena. He has been leading and participating in several research projects focusing on the miniaturization of imaging optics by applying multi-aperture architectures.

Abstractimec has pioneered industry-leading technology research in digital CMOS imaging for more than a decade, with a clear focus on extremely high-speed, high-QE, low-power and low-noise image sensor solutions. High-speed ADCs, hyperspectral filtering, backside illumination, UV imaging and CCD pixels embedded into CMOS circuits are new technology platforms with unique potential that imec has recently been bringing into reality. This presentation will give a high-level overview of the recent results and the vast range of industries that can be served by these unique CMOS-based imaging technologies. From industrial inspection to aerial photogrammetry, security to spectroscopy, medical to astronomy, imec's imaging innovations give our partners unprecedented capabilities, enabling new discoveries and competitive advantage.

CV of presenting authorMaarten Willems received the M.S. Degree in Electrotechnical Engineering in 1993, the M.S. Degree in Artificial Intelligence in 1994, and an MBA, all from the KU Leuven. After a career as a solution design engineer at Alcatel Bell, director of engineering at Keyware Technologies, and VP Professional Services at GlobalSign, Maarten co-founded Hypertrust, an internet service company, in 2000. In 2005, Maarten joined imec as market intelligence group leader. Since 2008, Maarten has held his current position as business director in the smart systems segment, focusing on business development and sales of new sensor technology development and product marketing in the domains of imaging, healthcare and power electronics.

10:05

Imaging for companion humanoid robots

Rodolphe Gelin
Research Director
Aldebaran Robotics

AbstractCreated in 2005, Aldebaran Robotics designs, develops and manufactures the humanoid robot NAO. Today more than 6000 NAOs have been sold all over the world for research and education purposes. While NAO has proven to be a very efficient and appreciated development platform for these academic markets, the final objective of the company is to make the humanoid robot a real companion for domestic applications.
This objective of domestic applications brings severe constraints for the perception of the robot: it should detect objects and people in a rather unstructured environment under uncontrolled lighting conditions. Furthermore, the mass market envisaged for Aldebaran's robot adds new constraints on the cost of the sensor equipment. Last, but not least, the humanoid shape of the robot limits the size and number of the sensors and the computation power that can be embedded on the robot.
Because NAO is dedicated to human-robot interaction, the first vision development made at Aldebaran concerned people detection. Using its 1.5 Mpixel cameras and its 1.6 GHz ATOM CPU, NAO is able to detect and track a face and to recognize a person. 2D vision has also been used to recognize objects and to perform localization and mapping by looking for artificial or natural visual beacons in the environment. To go further in exploiting vision sensing, Aldebaran is experimenting with new technologies. A prototype stereo head has been developed for NAO to extract 3D information from the 2D cameras. It made accurate 3D positioning of the user's face possible, and allowed objects to be localized accurately enough for autonomous grasping of small objects. But stereo processing is quite heavy for the CPU, which is why Aldebaran is looking at smart sensors (able to preprocess the signal sent to the main CPU), at 3D sensors (the best solution for gesture recognition) and even at thermal sensors that can detect human beings in a very robust and efficient way.

CV of presenting authorRodolphe Gelin (1965) is an engineer from the Ecole Nationale des Ponts et Chaussées (1988) and holds a Master of Science in Artificial Intelligence from the University of Paris VI (1988). He started his career at CEA (French Atomic Energy Commission), where he worked for 10 years on mobile robot control for industrial applications and on rehabilitation robotics. He then led different teams working on robotics, virtual reality and cognitics. From 2006 to 2008, he was in charge of business development for the Interactive System Program. He participated in the European Coordinated Action CARE that supported the ETP EUROP on robotics, in charge of the robotics roadmap for the European Community. In 2009, he joined Aldebaran Robotics as head of collaborative projects. He is the leader of the French project ROMEO, which aims to develop a human-sized humanoid robot. Since 2012, he has been Research Director at Aldebaran Robotics. He is a member of the board of directors of the euRobotics association.

AbstractThe traditional technology for spectral filtering in CMOS image sensors combines polymer resists and external all-dielectric multilayer thin-film coatings. In this study, we investigate the suitability of metal-dielectric interference stacks as a completely integrated solution. Silver and copper are the metallic materials considered for various applications, ranging from colour imaging to ambient light sensing and time-of-flight in the near-infrared domain. The compatibility with the CMOS process is shown through technological demonstrations. The performance and limitations of the on-chip filters are detailed, including robustness to process errors in the prospect of large-scale manufacturing.

CV of presenting authorDr Laurent Frey received his optics education from the Ecole Supérieure d'Optique Graduate School (1996). He obtained a PhD in optics and photonics from Paris Sud University (2000), after a thesis in the domain of photorefractive materials. He then joined the CORNING European Research Center in Fontainebleau, where his activities focused on the development of optical components for ultra-high-speed telecommunications (2000-2003), in close connection with the US teams. Since 2003, he has been working at CEA, LETI, MINATEC as a research scientist and project leader for several photonic application domains such as holographic mass storage, superconducting detectors, silicon integrated photonics and imaging. He has also been responsible for the roadmapping activity in optical systems for imaging. His current work focuses on the development of spectral filters and nanostructures for industrial visible image sensors with improved performance. He is author or co-author of 12 publications and 19 patents, including 2 pending.

10:45

Coffee Break

Session 4

Automotive

BiographyDr Eric Mounier has a PhD in microelectronics from the INPG in Grenoble. He previously worked at the CEA LETI R&D lab in Grenoble, France, in the marketing department. In 1998 he co-founded Yole Développement, a market research company based in France. At Yole Développement, Dr Eric Mounier is in charge of market analysis for MEMS & sensors, visible and IR imagers (CIS, microbolometers), semiconductors, printed electronics and photonics (e.g. silicon photonics).
He is Chief Editor of Micronews and of the Yole Développement magazines: MEMS'Trends, Power Dev, iLEDS, 3D Packaging.
He has contributed to more than 150 marketing & technological analyses and 60 reports.
Eric is also an expert at the OMNT ("Observatoire des Micro & Nanotechnologies") for optics.

11:15

Introduction

Eric Mounier, Yole

11:20

Automotive Camera Systems - Photons to Ethernet

Tarek Lule
Camera System Engineer
STMicroelectronics

AbstractAutomotive imaging applications will multiply in the coming years, calling for image sensor systems with moderate resolution but very high dynamic range and excellent low-light performance, operating at low power consumption and very high ambient temperatures. However, most CMOS imagers are optimized for high-resolution consumer requirements. By combining process advancements from consumer applications with automotive technology and design, an optimal automotive image sensor and processor chipset system is achieved, which transforms incoming light into an H.264 image stream over Ethernet.
This presentation will summarize the various specific automotive camera demands and how they were covered by choices of architecture, design and technology. The constrained power envelope can further be leveraged in surveillance cameras for battery-powered security applications.

CV of presenting authorT. Lulé has been working for STMicroelectronics on CMOS image sensors since 2003. His current research topics include HDR camera systems for automotive and security applications. His earlier activity domains included autofocus mobile cameras, wafer-scale packaging, analogue design and above-IC technology.
Prior to joining STMicroelectronics he worked for Silicon Vision AG, which he co-founded in 1996 and which, among other things, already developed HDR automotive imagers in TFA technology.
T. Lulé obtained an M.Phil. degree in Microelectronics and Semiconductor Physics from the University of Cambridge, UK, in 1992, and a Bachelor's degree in Physics from the University of Siegen in 1990.

11:40

New Developments on CMOS Logarithmic Image Sensor

Pierre Potet
CEO
New Imaging Technologies

AbstractIn this talk, I would like to present some new developments in CMOS logarithmic image sensing devices. Logarithmic-law image sensing devices have many advantages over classic linear-law devices. For a long time, however, logarithmic sensors suffered from high FPN, image lag and other drawbacks. We believe that the logarithmic sensing method, universally used by biological vision systems on this earth, can be improved to reach the same image quality as today's CMOS 4T active-pixel sensors. The solar-cell-mode photodiode based logarithmic sensing technology developed by NIT has overcome some of these drawbacks. I will present theoretical and technical details of this technology and highlight its advantages and shortcomings, including temperature effects, noise performance, etc. Finally, I will give a glance at some new developments and extensions around this technology at NIT.
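To illustrate the compression a logarithmic law provides, the following is a simplified solar-cell-mode photodiode model (my own illustration with assumed diode parameters, not NIT's actual pixel): the open-circuit voltage grows with the logarithm of the photocurrent, so each decade of illumination adds a fixed voltage step.

```python
import math

# Simplified solar-cell-mode photodiode model (illustrative assumption):
# open-circuit voltage V = n * (kT/q) * ln(1 + I_ph / I_s)
KT_Q = 0.026    # thermal voltage at room temperature, volts
N_IDEAL = 1.0   # diode ideality factor (assumed)
I_S = 1e-15     # diode saturation current in amps (assumed)

def log_response(i_ph):
    """Open-circuit voltage (volts) for a photocurrent i_ph (amps)."""
    return N_IDEAL * KT_Q * math.log(1.0 + i_ph / I_S)

# Six decades of illumination compress into a few hundred millivolts:
for i_ph in (1e-12, 1e-11, 1e-10, 1e-9, 1e-8, 1e-7):
    print(f"{i_ph:.0e} A -> {log_response(i_ph) * 1000:.1f} mV")
```

With these assumed numbers, each decade of illumination adds only about n*(kT/q)*ln(10), roughly 60 mV, which is how a logarithmic pixel can span well over 100 dB of scene dynamic range within a few hundred millivolts of signal swing.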

AbstractMany industrial, surveillance and scientific imaging applications require that the image sensor captures all pixels synchronously during the same exposure period. However, most CMOS image sensors use a rolling shutter to control exposure time, rather than the global shutter required for such applications. In the past, low noise global shutter capture was only possible with Interline Transfer CCD devices. But today several CMOS implementations exist with low noise global shutter, thanks to the combination of correlated double sampling and a global shutter pixel architecture with at least one in-pixel storage element.
This presentation will summarize the various implementations of CMOS global shutter pixels and explain the trade-offs made in the design of such pixels. The application can drive the selection of a certain pixel architecture, depending on which key parameters matter for that application. Both charge-domain and voltage-domain global shutter pixels will be discussed. Charge-domain global shutter pixels contain a memory element in which the photocharge can be stored after image capture, through a charge transfer and charge storage element. This is realized by an in-pixel 3-phase CCD or an equivalent implementation. It offers low-noise readout, with noise levels which are theoretically no higher than those of a rolling shutter pixel. However, it is difficult to shield the storage area from parasitic light and from photocarriers that diffuse through the silicon. Some shielding techniques that improve the efficiency of the shutter will be discussed. For even better shutter efficiency, a voltage-domain global shutter pixel can be used. Such a pixel samples the signal after charge-to-voltage conversion on an in-pixel voltage sampling stage. Correlated double sampling can be realized if two in-pixel memory elements are provided. Pixel implementations and specifications of such pixels will also be discussed.
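The benefit of the second in-pixel memory element can be sketched with a toy numerical model (my own illustration with assumed noise figures, not any specific vendor's pixel): when the stored reset level and signal level both carry the same frozen reset (kTC) noise sample, their difference cancels it, leaving only the photo-signal plus a small amount of sampling noise.

```python
import random

random.seed(42)

SIGNAL = 200.0       # photo-signal level in mV (assumed)
RESET_SIGMA = 5.0    # kTC reset noise in mV rms (assumed)
READ_SIGMA = 0.3     # per-sample read noise in mV rms (assumed)

def read_pixel_with_cds():
    """One voltage-domain global-shutter readout with two memory elements.

    The same frozen reset-noise sample is present in both stored levels,
    so subtracting them (correlated double sampling) cancels it."""
    reset_noise = random.gauss(0.0, RESET_SIGMA)               # frozen at reset
    mem_reset = reset_noise + random.gauss(0.0, READ_SIGMA)    # memory 1
    mem_signal = reset_noise + SIGNAL + random.gauss(0.0, READ_SIGMA)  # memory 2
    return mem_signal - mem_reset

samples = [read_pixel_with_cds() for _ in range(20000)]
mean = sum(samples) / len(samples)
rms = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
print(f"signal {mean:.1f} mV, residual noise {rms:.2f} mV rms")
```

With these assumed numbers the residual noise comes out near sqrt(2) x 0.3 mV, far below the 5 mV rms reset noise that a single uncorrelated sample would carry.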

CV of presenting authorGuy Meynants is founder and CTO of CMOSIS, an independent supplier of CMOS image sensors for professional and industrial applications.
Prior to founding CMOSIS in 2007, Guy worked at Imec and at FillFactory (later acquired by Cypress Semiconductor) on CMOS image sensors since 1994. His current research topics include smaller global shutter pixels, backside illumination and increased frame rate of CMOS image sensors.
Guy Meynants obtained the Ph.D. degree in Electronics from the Catholic University of Leuven in 1998 and the M.Sc. in Electrical Engineering from the same university in 1994. He is the author of 50+ scientific publications and inventor of 15 patents.

AbstractWe present our all-glass approach to wafer-level lens manufacturing. Glass remains the preferred material for imaging optical lenses for a number of reasons, including availability of a wide range of glass types, superior optical transmission as well as thermal stability. The added cost compared to injection-molded plastic lenses has so far prevented the wide-spread use of glass in cost-sensitive imaging applications including lenses for mobile phone cameras.
With our proprietary wafer-level glass lens technology, we can exploit the advantages of glass while reducing the cost compared to conventional glass molding and polishing technologies.
In this presentation, we present the key process stages in our technology, including hard tool manufacturing, wafer-level lens molding and lens module stacking, and we present the performance of the resulting wafer-level lenses for imaging applications.

CV of presenting authorChristian Holme (45) holds a Bachelor degree in Math (1991), and a Ph.D. degree in Physics from the University of Copenhagen (1997). In 2001 he was part of the group founding Kaleido Technology, since 2010 part of AAC Technology. His main focus has been on developing ultra-precision machining and glass molding for manufacturing of wafer-level optics.

12:40

Custom image sensors for high performance application

Benoit Dupont
chief designer
Caeleste

AbstractCMOS image sensors have become preeminent in camera design for many applications, ranging from smartphones to security cameras and now scientific and space applications. Often in scientific and high-end imaging, such high-performance sensors are not available off-the-shelf and must be tailored to the particular application. In this presentation, we explore the performance that can be reached with custom-designed image sensors in the following domains:
- Sensitivity
- Spectral response
- Dynamic range
- Noise
Through design examples of existing products, we show the different trade-off spaces linked to the custom design of high-performance sensors. In particular, we show:
- How to reach zero-noise sensors, and for which applications?
- How to design extreme HDR sensors with up to 32M:1 linear dynamic range.
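For perspective, the quoted 32M:1 linear dynamic range can be converted into the more familiar decibel and bit-depth scales (a back-of-the-envelope conversion, not a statement about any particular Caeleste design):

```python
import math

# Converting the quoted 32M:1 linear dynamic range into more familiar units.
DR_RATIO = 32e6

dr_db = 20 * math.log10(DR_RATIO)    # optical dynamic range in decibels
dr_bits = math.log2(DR_RATIO)        # bits needed for a linear digital encoding

print(f"{dr_db:.0f} dB, about {dr_bits:.0f} bits")  # 150 dB, about 25 bits
```

In other words, encoding such a range linearly would need roughly a 25-bit word per pixel, which is why extreme-HDR designs are a genuinely custom exercise.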

CV of presenting authorBenoit Dupont received his PhD in physics from the University of Paris-Sud in 2008 and an IC design engineering degree from ISIM, Montpellier in 2002. He worked as a digital system engineer and CMOS image sensor designer at FillFactory from 2002 to 2005. He conducted his PhD research in partnership with LETI and the ULIS company, Grenoble, in the field of readout circuits for bolometer infrared image sensors, from 2005 to 2008. He is now co-founder of Caeleste, where he is chief designer.

13:00

LUNCH

Session 5

Industrial & Professional

BiographyBruno Mourey is a graduate of the Ecole Supérieure de Physique et Chimie (Paris) and holds a PhD in electronics and instrumentation (Université de Paris VI).
He held various positions related to display applications, from research to manufacturing, in the Thomson group, and was General Manager of Thomson LCDs for more than 10 years.
He joined CEA LETI in 2003 as Program Manager for multimedia applications (display, optical recording, etc.).
He is currently Vice President at CEA LETI, in charge of the Optics and Photonics Division.

AbstractThis session will introduce participants to multi-sensor and multi-spectral imaging for security and operational applications. The talk discusses the differences between video for security and projects focusing on video for operational data. The ability to extract different types of real-time data and information from different spectral regions is discussed. Video information combined with other sensors provides the ability to solve unique customer challenges and applications. Specific customer use cases and applications are presented.
The camera technology associated with these applications is presented along with current technology challenges. Technical approaches to techniques such as multi-sensor stitching and multi-spectral fusion are discussed. Technical challenges associated with these techniques and associated algorithms are also introduced.
Fundamental underlying principles and science behind this technology are explored, along with a brief summary of the development of these technologies. Looking forward, technology trends and roadmaps for multi-sensor and multi-spectral imaging are also presented.

CV of presenting authorDavid Dorn is the Applied Technologies Manager for Pelco by Schneider Electric. Mr. Dorn has been leading the development of image sensors and camera systems for over 20 years. These camera systems have spanned wavelengths from the IR to the UV for security, scientific, and medical applications. Currently, Dorn leads the engineering team developing thermal imaging cameras for Pelco by Schneider Electric. Earlier in his career, Dorn led teams building visible and infrared camera systems for the Hubble Space Telescope and interplanetary missions to Mars and Pluto. He has authored over 25 technical publications and holds patents for CMOS image sensor and camera innovations.

14:25

High speed line and area image sensor for industrial and medical applications

Bernhard Schaffer
Senior R&D Engineer
CSEM S.A.

AbstractLine scan image sensors are widely used in industrial vision, where the object moves perpendicularly to the line of pixels in the image sensor. Most high-speed digital line scan sensors on today's market contain 1 or 2 lines of 1'024 to 4'096 pixels with scan rates of up to 80'000 lps (lines per second), though some newer devices have higher resolutions. These sensors however have two issues for colour imaging: first, they require 3 successive exposures with alternating RGB light, which makes the illumination bulky, reduces the scan rate by a factor of 3 and introduces colour smearing; second, the sensitivity decreases as the resolution increases, because small pixels (for example 3.5 µm in 16-kpixel sensors) cannot collect enough photons within the very short exposure time imposed by the high scan rate.
To circumvent this, we have devised a somewhat different sensor, which contains 320 x 4 (RGBW) lines of rather large but highly sensitive 24 µm pixels with selectable small (50 ke-) or large (250 ke-) full wells. The sensor can acquire 4 (WRGB) lines in a single shot at an unprecedented rate of 4x 200'000 lps. Examples of applications for such sensors are high-speed motion process control, high-performance colour sorting systems, surface inspection of various materials/pieces, general-purpose high-speed machine vision process control, etc.
Whereas line image sensors are well adapted to applications in industrial environments, high-speed area image sensors open the way to other kinds of applications. We have developed high-speed area image sensors able to acquire up to 4000 frames per second. Fabricated in a standard CMOS image sensor (CIS) process, they use pinned photodiodes (PPD) in a 5T pixel with a 12 µm pitch and column-based ADCs. They have been optimized for sensitivity and highest speed. One example application is in an optical coherence tomography (OCT) system for real-time 3D imaging of skin morphology.
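The headline line-scan figures can be cross-checked with quick arithmetic (my own back-of-the-envelope, using only the numbers quoted in the abstract):

```python
# Throughput of the described line sensor: 320 pixels x 4 (RGBW) lines,
# each line read out at 200'000 lines per second.
pixels_per_line = 320
colour_lines = 4
line_rate = 200_000       # lines per second, per colour line

pixel_rate = pixels_per_line * colour_lines * line_rate
print(f"{pixel_rate / 1e6:.0f} Mpixel/s")          # 256 Mpixel/s

# Why large pixels help at such scan rates: photon collection scales with
# pixel area, so a 24 um pixel gathers far more light per (very short)
# exposure than the 3.5 um pixel of a 16k-pixel line sensor.
area_advantage = (24.0 / 3.5) ** 2
print(f"about {area_advantage:.0f}x the collection area")  # about 47x
```

The roughly 47x larger collection area is what lets the sensor keep its sensitivity at the microsecond-scale exposures implied by a 200'000 lps scan rate.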

CV of presenting authorBernhard Schaffer received an MS in Digital Signal Processing/Information Theory from ETHZ, Zürich, Switzerland in 1995. From 1995 to 1999 he was a test engineer at Philips Semiconductors, where he was involved in the development and implementation of highly innovative test concepts for telecom ICs, optimizing test & production for cost and quality. In 2000, he co-founded e-vision, where he headed the digital design group. In 2007 he moved to CSEM, where he is currently leading the development of high-speed line and area image sensors.

14:45

Imaging Devices in Space

Roland Meynart
Head of EO instrument pre-development
European Space Agency

AbstractSpace missions have used imaging devices since the beginning of the space exploration era. Imaging devices are implemented in instruments of space observatories, planetary exploration missions and Earth observation satellites used for research or monitoring. They are also used in monitoring cameras and star trackers onboard satellites.
The spectral range covered by "optical" detectors is very broad, ranging from X-rays to the far-infrared. The presentation will focus on space requirements and applications of detectors fully or partly based on silicon technology, including Charge-Coupled Devices, CMOS imagers, hybrid IR detectors and microbolometer arrays.
The presentation will illustrate the dichotomy in the imaging market, with trends toward small pixels of moderate performance for high-volume devices versus requirements for larger pixels of very demanding performance for space applications.

CV of presenting authorRoland Meynart works in the Earth Observation Directorate of the European Space Agency, where he is leading the pre-development of optical and microwave instruments for future space missions. His personal expertise principally lies in optical instrumentation and metrology, imaging spectroscopy, lidar systems, optical design and optical detection. He has managed several developments of new optical detectors for Earth observation from space.

15:05

Artificial Neuro-Inspired Embedded Vision

Olivier Brousse
R&D officer
GlobalSensing Technologies

AbstractGlobal Sensing Technologies (GST), leader of NeuroSmart, aims at becoming the European leader in very high performance data processing solutions based on neural network technology. Building on many years of academic research by GST's co-founder, an expert in this disruptive ICT, many developments have been carried out by GST since its creation in 2011, and relevant R&D partnerships have been built both with one of the world references in this scientific area (CEA List), to design a very innovative neural processor architecture, and with users of the technology (Sagem, Vitec...), to design specific adaptations to different application requirements.
The current trend in embedded systems is to make them surround the user, providing services based on knowledge of their environment. This self-awareness and context-awareness is provided by numerous sensors of different types. The main objective is to make systems smart. Additionally, the applications that use this information rely on various kinds of recognition processing, which are sometimes hard to formalize with conventional algorithms. Processing chains using neural-based algorithms are a promising approach to solving these kinds of issues. In the presentation, GST will show how it envisages and promotes neuro-inspired embedded vision systems.

CV of presenting authorOlivier Brousse received a PhD in microelectronics and automatic systems from Montpellier 2 University, and in information systems from Lausanne University, in 2010.
From 2009 to 2012, he was a Research Engineer in the LEAD laboratory at the University of Burgundy (Dijon), where he assisted Prof. Paindavoine in his research in the field of neuro-inspired systems.
He has been the R&D officer of GlobalSensing Technologies since September 2012. He manages the R&D team performing research and development in neuro-inspired electronics architectures. He also continues to conduct research in the field and to give lectures at the University of Burgundy.
His main research topics concern architectures for image acquisition and real-time image processing, with a strong orientation toward neuro-inspired and visual-system-inspired solutions.
Some publications on the topics of the project:
O. Brousse, M. Paindavoine, and C. Gamrat, "Toward nano-device image processing: a neuro-inspired learning approach," June 2010.
O. Brousse, M. Paindavoine, and C. Gamrat, "Neuro-inspired learning of low-level image processing tasks for implementation based on nano-devices," in DTIS 2010 conference proceeding, 2010.
M. Paindavoine, A. Ngoua, O. Brousse, and C. Clerc, "Smart image sensor with adaptive correction of brightness," in International Conference SPIE Electronic Imaging, San Francisco - USA, January 2012.

15:25

Image sensors in organic and plastic electronics for Industry 4.0 and Internet-Of-Things

Image sensors in organic and plastic electronics for Industry 4.0 and Internet-Of-Things

Laurent Jamet
Co-Founder, Director Business Development
ISORG

AbstractISORG has developed a disruptive technology of large-area image sensors and photonic sensors in printed electronics, based on the latest developments in organic materials.
These sensors on plastic offer unique benefits in terms of easy mechanical integration (thin, light, conformable), cost and performance (operating in the visible and near-infrared).
Applications for inventory control, process control, equipment monitoring, user interfaces and X-ray imaging will be presented.
ISORG is the pioneer company for optical sensors in organic electronics, with its pilot manufacturing line already operating in Grenoble.

CV of presenting authorCo-founder and Director of Business Development of ISORG
He graduated from INPG Grenoble (electronics engineering school) and from Grenoble Business School (DESS in international business).
He joined STMicroelectronics in Grenoble in 1990 as a designer. He quickly held various positions within ST, such as Marketing Manager for analog products dedicated to wireless terminals, before becoming Business Development Director for his major customer Nokia. At the same time he managed the development of new technologies for Nokia, including technologies developed internally by ST and transferred externally from research organizations and international start-ups.
In 2007, he was named Business Development Director for Smart Textiles at SOFILETA (Bourgoin-Jallieu).
He joined the CEA LITEN in 2010 as project manager for ISORG creation.

15:45

Coffee Break

Session 6

Medical

BiographyMarc obtained a Master's in Business Administration in 1983. Marc started his professional career holding various positions in marketing and sales, in France and in Asia, at Citroën, Philips TV and Thomson Semiconductors.
Marc's involvement in semiconductors started in 1985, when he joined Thomson-Semiconductors, then STMicroelectronics. Marc specialized in large-volume consumer-oriented activities, holding various management positions in the Video Division of STMicroelectronics and managing related external partnerships (joint ventures, common R&D centers and external acquisitions).
After the acquisition of VLSI Vision, one of the pioneers of CMOS image sensors, by STMicroelectronics in 1999, Marc became General Manager of the Imaging Division, at the forefront of camera phone technologies and industry development.
In 2007, Marc joined Sensitive-Object, a start-up company developing acoustic touch technologies and products, as Vice President in charge of Marketing, Operations and Business Development. The company was subsequently acquired by Tyco Electronics.
Since 2009, Marc has also been the owner and Managing Director of MVAssociates SARL, a consulting entity working on technology business models and high-technology-to-mass-volume projects.
Back at STMicroelectronics since 2011, Marc currently holds the position of Director, Camera Modules Business Line, and also manages the contracts and partnerships team for the Imaging Division.

AbstractThe medical objective of "Computer Assisted Medical Interventions" (CAMI) is to perform previously defined operative strategies more accurately and less invasively by use of guiding systems under intra-operative sensor surveillance. When we initiated this research in 1984, a methodological framework had to be developed, the technical feasibility had to be proven, the medical interest had to be established, and the potential for a real "market" was not obvious. Nowadays, the methodology we defined is widely accepted, and numerous clinical studies have established the clinical added value of systems that are applied in a wide variety of clinical situations. Since the first efforts in CAMI, a major issue has been to bring Information Technology (computers, navigation devices, robots, ...) into the Operating Room, and to use it to enhance a specific component of a complex medical or surgical intervention. The challenge now is to "invert this movement": instead of moving the computer into the Operating Room, we should embed the surgeon (or at least his or her expertise) into the tools he or she uses, exploiting the possibilities of micro-nanotechnologies combined with real-time information processing. The objective of augmenting the quality of interventional procedures can be achieved by "augmenting the surgeon" (meaning augmenting his or her capacity of decision or of action, via efficient use of multimodal information and more efficient effectors). This is why we coined the term "Augmented Medical Interventions". We will see which role micro-nanotechnologies may play in this vision, through their capacity to augment intra-operative sensors and effectors, and also through their application to the design of implanted robots capable of scavenging their energy from the patient's glucose and of compensating for the failure of physiological functions.
This work is supported by French state funds managed by the ANR within the Investissements d'Avenir, ANR-11-LABX-0004, http://cami-labex.fr/

CV of presenting authorPhilippe Cinquin, 58, is Professor of Medical Informatics at Grenoble University (France). He heads TIMC-IMAG, UMR5525, a Research Unit of CNRS and of Université Joseph Fourier, the CAMI (Computer Assisted Medical Interventions) Labex, and co-heads CIC-IT 803 (Centre of Clinical Investigation - Technological Innovation) of INSERM, Grenoble's University Hospital and Joseph Fourier University. He holds a PhD in Applied Mathematics and is a Medical Doctor. In 1984, he launched a research team on Computer-Assisted Medical Interventions (CAMI), which led to innovative surgical practice, benefiting more than 100,000 patients, thanks to the creation of several startup companies. He recently turned to intra-body energy scavenging in order to power implanted medical devices. He was the recipient of the 1999 Maurice E. Müller Award for excellence in computer-assisted orthopedic surgery, of the 2003 CNRS Silver Award, of the 2013 CNRS Innovation Award and of the 2014 Ambroise Paré Award of the French Academy of Surgery. He is a member of the French Academy of Surgery. He is co-lead inventor of the "Biofuel cell with glucose" patent, which was a finalist for the 2014 European Inventor Award.

16:45

Development of Silicon Photomultipliers at FBK for nuclear medicine applications.

Development of Silicon Photomultipliers at FBK for nuclear medicine applications.

Claudio Piemonte
Chief Scientist
Fondazione Bruno Kessler

AbstractHigh-energy radiation imaging in medical diagnostic systems, such as Positron Emission Tomography (PET) or Single Photon Emission Computed Tomography (SPECT), is usually based on scintillation detectors. The energetic particle is converted to light in the scintillation material, which is, in turn, detected by a highly sensitive photo-sensor. The information to be reconstructed is: the interaction position, the energy of the radiation and, depending on the system, also the precise arrival time. Up to ten years ago, the only sensor able to provide the needed information was the Photo-Multiplier Tube (PMT). Recently, a new kind of detector has attracted a lot of attention in this field: the Silicon Photomultiplier (SiPM). It is a solid-state device composed of several hundred Geiger-mode Avalanche Photodiodes per square millimeter. The advantages over the vacuum-based counterpart are: low operational voltage, compactness, insensitivity to magnetic fields, better detection efficiency and better interconnectivity to the electronics.
Fondazione Bruno Kessler (FBK) (Trento, Italy) has been working on SiPMs for nuclear medicine since 2006. Two main development lines are active. The first one is on "analog" SiPMs, in which the sensor is manufactured in a custom silicon technology aiming at the best performance in terms of both detection efficiency and noise. The second one is dedicated to "digital" SiPMs. In this case, the sensor is built in a standard CMOS technology to include signal digitization on-board. In this presentation, we will give a comprehensive description of the main sensor properties and their functional performance coupled to scintillator crystals typically used in PET/SPECT. Finally, we will highlight the pros and cons of the two approaches.

CV of presenting authorClaudio Piemonte was born in Udine, Italy, in 1972. He received the "Laurea" degree (M.S.) in Electronics Engineering from the University of Trieste, Italy, in 1997. From 1999 to 2002, he was with the National Institute for Nuclear Research (INFN), section of Trieste, Italy, as a fellow. In 2002 he joined the Microsystems Area of the Fondazione Bruno Kessler (formerly ITC-irst), Trento, Italy, as a Research Associate. Since 2008 he has been coordinating the SRS group at FBK. His current research interests are focused on the development of silicon radiation detectors and low-level light sensors for high-energy physics experiments and nuclear medicine applications. Claudio Piemonte has co-authored more than 130 papers published in international journals and conference proceedings. In 2006 he received a "Certificate for outstanding contributions to the field of nuclear radiation measurements" from the Radiation Instrumentation Steering Committee of the IEEE Nuclear and Plasma Sciences Society. In 2010, he co-founded the spin-off company AdvanSiD for the commercialization of advanced silicon detectors. Currently he is a member of the board and CTO of AdvanSiD.

AbstractImaging being our core business, we will cover advanced imaging applications and the related technologies involved. Outstanding design techniques and original developments will be highlighted, reflecting a constant pursuit of reliability and effective performance. Finally, this talk will address routes for open innovation and for sustaining future businesses.

CV of presenting authorMaster's degree in Optoelectronics (University of Saint-Etienne, 1987)
MSc in Management of Technology and Innovation (Grenoble Ecole de Management, 2013)
Started as a sales engineer for the French company Jobin-Yvon Instruments S.A. (1987-1989) and for Princeton Applied Research (USA) (1989-1991), developing sales activities in the research domain (spectroscopy and signal processing).
Joined Hamamatsu Photonics France in 1991 (Business Development), dealing with imaging and high-speed photonics research applications. In 2002, opened the Hamamatsu office in Grenoble (Meylan) to sustain business development of large equipment produced by the Systems Division of Hamamatsu Japan, addressing semiconductor manufacturers (foundries) and R&D process development, especially in the reliability, failure analysis and novel technology bricks domains.
Expertise in customer relationships and innovation management, with daily involvement with the main actors of the semiconductor industry in France and several European countries, also covering nanotechnology R&D fields and new organic applications domains.

AbstractThe merging of MEMS and O-MEMS based technologies with wafer-level optics, CMOS image sensor technologies and wafer-level chip-scale packaging technologies allows the realization of fully wafer-level-assembled micro camera modules with unprecedented miniaturization. The large economy of scale introduced to the traditionally "artisanal" manufacturing of endoscopic equipment allows for single-use equipment, mitigating the operational cost and risks associated with sterilization. The availability of miniature high-resolution imaging modules (with all dimensions smaller than 1 mm) at a controlled cost enables novel medical imaging applications in equipment and procedures where visualization was previously not possible, or where existing visualization strongly limited the versatility of the tools and provided only limited resolution. This talk gives an overview of key enabling technologies and application potentials.

CV of presenting authorMartin Wäny graduated in microelectronics from IMT Neuchâtel in 1997. In 1998 he worked on CMOS image sensors at IMEC. In 1999 he joined CSEM as a PhD student in the field of digital CMOS image sensors. In 2000 he won the Vision Prize for the invention of the LINLOG technology, and in 2001 the Photonics Circle of Excellence Award of SPIE. In 2001 he co-founded Photonfocus AG. In 2004 he founded AWAIBA Lda (www.awaiba.com), where he is CEO. AWAIBA is a design house and supplier of area and line-scan image sensors, specialized in high-speed and high-dynamic-range sensors and miniature wafer-level camera modules for medical endoscopy and portable miniature vision. Martin Wäny was a member of the founding board of EMVA, the European Machine Vision Association, and of the 1288 vision standard working group. He is a member of the IEEE and SPIE societies.

17:45

Fully integrated CMOS THz Imaging Solutions

Fully integrated CMOS THz Imaging Solutions

Andreia Cathelin
Senior Member of Technical Staff
STMicroelectronics

AbstractTHz systems with commercial viability will require portability, high integration levels, video-rate speeds and low power consumption, as well as room-temperature operation. Therefore, silicon technologies are attractive system-on-chip alternatives to classical expensive Terahertz systems based on III-V compounds, micro-bolometers and others. In this talk we will discuss the feasibility of THz detection well beyond the fT/fmax of standard silicon-based transistors. We will then address the key design challenges and techniques for designing, operating and characterizing efficient focal-plane arrays of direct power detectors for Terahertz video-rate multi-pixel imaging, as well as the trade-offs involving bandwidth, sensitivity and power consumption, in view of various electrical and electromagnetic constraints. Full system integration capabilities will be demonstrated based on recently reported work on a 1-kpixel 65nm CMOS video camera for active THz imaging (0.6-1.1 THz).

CV of presenting authorAndreia Cathelin (M'04, SM'11) started her electronics studies at the Polytechnic Institute of Bucharest, Romania, and graduated from the Institut Supérieur d'Electronique du Nord (ISEN), Lille, France, in 1994. In 1998, she received the Ph.D. degree from IEMN/ISEN, Lille, France, for work on a fully integrated BiCMOS low-power, low-voltage FM/RDS receiver. In June 2013, she received the "habilitation à diriger des recherches" (habilitation) degree from the Université de Lille 1.
In 1997, she was with Info Technologies, Gradignan, France, working on analog and RF communications design. Since 1998, she has been with STMicroelectronics, Crolles, France, now in Embedded Processing Solutions Segment, Design Enablement & Services, as Senior Member of the Technical Staff.
Her major fields of interest are in the area of RF/mmW/THz systems for communications and imaging. In June 2012, Andreia was designated winner of the STMicroelectronics Technology Council Innovation Prize. She is a member of the Technical Program Committees of ISSCC, the VLSI Symposium on Circuits and ESSCIRC; she currently serves as chair of the RF sub-committee at ISSCC and secretary of the 2014 VLSI Circuits Symposium. In September 2013, Andreia was elected to the Steering Committee of the ESSCIRC-ESSDERC conferences. She is also a member of the experts' team of the AERES (French Evaluation Agency for Research and Higher Education). She has authored or co-authored 100 technical papers and 4 book chapters, and has filed more than 25 patents. Andreia is a co-recipient of the ISSCC 2012 Jan Van Vessem Award for Outstanding European Paper and of the ISSCC 2013 Jack Kilby Award for Outstanding Student Paper.