CMOS sensor inventor Eric Fossum discusses digital image sensors

Image sensor engineer and primary inventor of the CMOS sensor, Eric Fossum, has given the second annual Victor M. Tyler Distinguished Lectureship in Engineering at Yale University. Fossum's talk: 'Photons to Bits and Beyond: The Science & Technology of Digital Image Sensors' covers a wide range of subjects, from the basis of the way sensors work to the potential risks to society of the ways technology can be used. He touches on noise, demosaicing and how 'the force of marketing is greater than the force of engineering.' Yale has put a video of the presentation on YouTube and it's well worth watching if you have any interest at all in the physics and engineering that make your camera work. (via Image Sensors World)

Mr. Fossum, I would like to get an answer from an expert such as yourself on the following question: if the pixel density of a given sensor size were reduced, let's say four times, from a common 16MP to 4MP (enough for people who don't print their images), and it were made with the same latest CMOS technology, would this result in significantly:

1. Lower noise
2. Higher dynamic range
3. Better tonal range
4. Higher color depth
5. Cheaper production costs
6. Or other possible improvements?

Is the pixel size the same or larger? Same optics? Are we optimizing the pixel for any particular parameter? Sorry to answer your question with questions but these are also important. I will try to check in a day or two and see what you say.

* I am not an expert in sensor technology, but I assume that if the density becomes 4 times lower and the technology is similar, then it is possible to make the pixel roughly 4 times bigger.
* Same optics.
* We are optimizing the pixel to the extent possible, so that production cost stays the same or less and those 5 (or more) parameters become as good as possible, with priority on lower noise.

Let's assume 4x larger pixel and 4x larger full well but same operating voltages etc. So, conversion gain is reduced 4x. This analysis merits more reflection, but shooting from the hip on a Friday night...

1. Noise. Read noise would be 4x worse. Usually you don't have to worry about read noise. In shot-noise-limited performance, SNR would be better at the same digital output level. With lower conversion gain, the same digital output level corresponds to a larger number of electrons and hence better SNR.

2. Dynamic Range. Probably the same. The max signal (in electrons) has increased by 4x. The read noise (dark noise) has increased by 4x. The ratio is the dynamic range, more or less.

3. Tonal Range. I am thinking this is limited by the ADC quantization error so it would be the same. But maybe something else limits tonal range.

6. I am sure there must be some, but I can't think of any at this moment.

Note that these answers depend on the assumptions you asked me to make, and assumptions I made. You should not generalize too much from this, but it seems that more pixels is better under this set of assumptions, since SNR is probably not a big issue for well-exposed, low-ISO (gain) photos. Maybe the SNR improvement of the larger pixels would be reflected in better performance at high ISO numbers.
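To put rough numbers on points 1 and 2 above, here is a back-of-envelope sketch. The baseline full-well and read-noise figures are made-up illustrative values, not data for any real sensor; it only demonstrates the scaling argument.

```python
import math

def snr_db(signal_e, read_noise_e):
    """SNR with shot noise plus read noise, both in electrons."""
    noise = math.sqrt(signal_e + read_noise_e ** 2)
    return 20 * math.log10(signal_e / noise)

def dynamic_range_db(full_well_e, read_noise_e):
    """DR is the ratio of max signal to dark/read noise."""
    return 20 * math.log10(full_well_e / read_noise_e)

# Baseline small pixel (assumed values).
fw, rn = 10_000, 2.0            # full well and read noise, in electrons

# 4x-area pixel under the assumptions above: 4x full well,
# 1/4 conversion gain, hence 4x input-referred read noise.
fw4, rn4 = 4 * fw, 4 * rn

print(dynamic_range_db(fw, rn))     # baseline DR
print(dynamic_range_db(fw4, rn4))   # identical DR: both factors scaled by 4

# The same *relative* digital output level (say half scale) means 4x the
# electrons in the big pixel, so shot-noise-limited SNR improves ~6 dB.
print(snr_db(fw // 2, rn))
print(snr_db(fw4 // 2, rn4))
```

With both full well and input-referred read noise scaled by 4x, the DR ratio is unchanged, while the shot-noise-limited SNR at the same relative output level improves by about 6 dB, matching points 1 and 2.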

Actually the smaller pixels will give slightly better results if the image quality is read-noise limited, since the noise will only grow 2x if you add together 4 pixels. Also the color resolution will be better. Due to color processing, having at least a full Bayer kernel within the Airy disk is better than each kernel element being roughly the size of the Airy disk, since color processing reduces resolution anyway.
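The 2x figure follows from uncorrelated noise adding in quadrature. A tiny sketch (the per-pixel read-noise value is assumed, for illustration only):

```python
import math

rn_small = 2.0                        # assumed read noise of one small pixel, e-

# Summing 4 independent small pixels: uncorrelated noise adds in quadrature,
# so the combined read noise grows as sqrt(4) = 2x.
rn_binned = math.sqrt(4) * rn_small   # 4.0 e-

# One 4x-area pixel with 1/4 conversion gain (the previous post's assumption)
# has 4x the input-referred read noise instead.
rn_big = 4 * rn_small                 # 8.0 e-

print(rn_binned, rn_big)  # binned small pixels read quieter in this regime
```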

One of the initial assumptions seems dubious to me. If you already have a 16M process, then a 4M imager would use the same sized pixels, with 1/4 of the overall active area, and get the large boost in wafer yield that would make it profitable to sell the 4M imager. Upping the pixel area is unlikely, particularly for a consumer imager, since the yield would be comparable to the 16M, and you'd have to sell it for about the same amount, which is a non-starter.

To a first approximation, the noise performance of a same-sized-pixel 4M would be comparable or slightly better. Assuming the same frame rate, the readout rate could potentially be reduced, which generally lowers overall read noise. Since there are fewer reads in the array itself, clocking and other related noises should be reduced.

As I understood it, if the pixel size is 4 times bigger on the same sensor size, you think that SNR will be better while other parameters stay the same. I compared 3.1 and 3.9 MP old-generation (2005) sensors with the same optics & body, and the noise level of the 3.1 MP one is really much lower, although its pixel density is only 1.2 times less. So I thought that with new technologies and 4 times lower density (e.g. 16->4), noise levels should be hugely lower, which may attract many customers, because most of the latest high-megapixel P&S (and some higher-class) cameras have visible/excessive noise even at base ISO in good light.

It looks like sensor pixels and gaps between them are pretty large compared to logic gates, so one can probably pack a lot of logic on the same chip in modern 28 nm CMOS technology. Will we soon see a camera on a chip, with sensor, processor, memory, etc. on a single die?

Pr. Fossum, thank you for this very interesting presentation. In particular I find the concept of the QIS most interesting. For me the challenge is probably more in the photodetector than in the associated electronics. You consider the data transfer a big challenge, but don't you think it is possible to resort to on-chip data reduction (image formation) and on-chip networking, rather than moving Tbits/s of data off-chip? What, in your mind, is the timeframe for the realization of the QIS?

Since most photographic devices today are those integrated into mobile phones, with tiny sensors but high frame rates, do you think for them it makes sense to use super-resolution algorithms to lower the noise and increase resolution?

Isn't super-resolution upscaling already used by some DVD players and TVs?

Theo, sure, one could interpolate more pixels, but the real resolution is fixed by the number of real samples taken. Some low-end cameras (e.g. web cams) have, in the past, done what you suggested. Sometimes it is still done.

Can somebody knowledgeable explain, why do we still have an ISO setting in cameras with CMOS sensors? As far as I understand from this presentation, CMOS sensors have separate amplifiers for each pixel. That means that the gain of every amplifier can be set for optimal signal/noise ratio for the amount of light the pixel received. In other words we won't have an ISO setting for a sensor, rather a camera can set an optimal ISO (gain) for each pixel. So we can get rid of ISO setting on one hand and get the best possible dynamic range from the sensor under each given light condition on the other. So why don't we see this implemented in modern CMOS sensors, or is it?

The existence of a signal amplifier per pixel does not imply they're programmable. Don't think of "amplifier" in the way your home stereo amp works. In the sensor the amplifier is just one transistor and two resistors.

Obviously the outcome of this setup would be every pixel exposed to its maximum potential... hence the final image would probably be very, very light gray... you need to set the amplification globally or you will get no contrast!

"you need to set the amplification globally or you will get no contrast" -- I don't think so. The wonderful thing (if that were possible) is that the output of each pixel would be weighted (scaled) by its individual amplifier gain, so the resulting raw file would have the best possible dynamic range the sensor is capable of. It would be like combining multiple exposures taken with ISO from the lowest to the highest -- no pixel overexposed, no pixel underexposed.

So in this scheme I think you need two exposures: one to figure out the right gain settings, and one to take the larger dynamic range image, and hope nothing bright or dark moves in between the two shots. If you are going to take two exposures there are many ways to achieve high dynamic range - like one short and one long exposure fused together.
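The short-plus-long fusion mentioned here can be sketched in a few lines. The 8x exposure ratio and the clip threshold are arbitrary assumptions, and real pipelines blend smoothly rather than hard-switching per pixel:

```python
# Pixel values are normalized sensor outputs in [0, 1].
RATIO = 8.0    # assumed: long exposure collects 8x the light of the short one
CLIP = 0.95    # assumed: treat long-exposure pixels above this as blown out

def fuse(short_px, long_px):
    """Prefer the long (cleaner) exposure unless it clipped."""
    if long_px < CLIP:
        return long_px / RATIO   # rescale to the short exposure's scale
    return short_px              # fall back to the short exposure in highlights

short = [0.01, 0.10, 0.50]   # noisier, but never clipped
long_ = [0.08, 0.80, 1.00]   # cleaner shadows, blown highlight

hdr = [fuse(s, l) for s, l in zip(short, long_)]
print(hdr)  # approximately [0.01, 0.1, 0.5]
```

The shadows come from the long exposure (better SNR), the blown highlight falls back to the short one, so the fused result covers more dynamic range than either frame alone.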

Dear Eric, thanks for the very interesting lecture. I liked it, especially the part about the lens's diffraction limit. Can I ask you a question: is it a good idea to be an inventor? Please advise. I dream of working with you on one team.

Thanks Boris. It is a good idea to be a good inventor. Luck and timing are helpful as well. I did not plan to be an inventor. It is just a natural part of being an engineer that one sometimes comes up with solutions to problems, and sometimes these solutions are new enough to be patentable.

Does Fossum suggest an optimum number of pixels on a 1/2.3" or APS-C CMOS sensor? If one views the image on a 1920x1080 screen, isn't the optimum really only about 2MP, except when cropping, in which case you'd need perhaps 4MP or 6MP? Can one strip away the "force of marketing" and assert anything on the matter? Just curious.

Thanks. But I suspect there has to be a point where the incremental resolution is offset by the incremental "noise." At any rate, that's what some people proclaim. The physics should set some boundaries which technology might try to dodge, but not overcome. Is there no "Fossum Uncertainty Principle"? There should also be some way to quantify or measure the upper limits of what viewers can distinguish under realistic display situations.

Cy, do you want noise or resolution? People cannot even agree on the weighting of components in an image quality metric. Let's say you did have a metric formula so you could compute IQ from all these factors. Still, the technology used to make the sensor will impact noise, maximum signal, QE, etc. Perceived image quality does seem to get generally better with smaller pixels, down to some size which is smaller than predicted just by the diffraction limit, and counter also to SNR analysis. So there is also a human perception factor that is not yet quantified. Nevertheless, at some point, even with perfect pixels and IQ well defined, there is marginal return on IQ. I have only a fuzzy idea where that might be, and I think most manufacturers try to find that point along with other technical limitations. Sorry to be so, uh, uncertain.

This issue, in part, reminds me of "vernier acuity" (also known as visual hyperacuity) in which the ability to detect visual "differences" is smaller than one would expect just from measuring visual angle of the target from the eye. In other words, the visual difference detectable by a person, at first glance (pun intended), appears to be within the visual angle SMALLER than a single retinal cell. I've not read that literature in quite a number of years, and last I recall the puzzling aspect of what looks like being able to detect visual differences within the diameter of a single cell may be explainable by inter-cell communications and light distribution differences across multiple cells (this is all a rather technical subject and I fear that the summary does the topic injustice). But in any event, it does not surprise me that perceived image quality exceeds the diffraction limit mentioned above.

Thank you very much for this Dr. Fossum. I was very excited to see you around in dpreview, and it's our chance and privilege. Several points;

-"Force of marketing..." is one of the best quotes I have ever heard, it explains a lot of things going on in technology driven industries.

-The UDTV with 33 MP, if it were available today, could make photography pretty much a thing of the past, considering that still digital photos today have so much resolution advantage over HDTV, but again there would always be a cheaper 60MP still image taker for any 33MP video camera!

-I suspect viewing a 33MP video at 60fps would be quite challenging for the brain. There is no doubt about the advantage in image quality, but some neural problems may arise - though I guess it's too early, and certainly not for me, to pass judgement on that.

Finally, I would like to ask if you could recommend a "for dummies" type resource for understanding the deeper electronics of current photography technology?

-32 Mpixel @ 60 fps looks more like real life, so when I saw it at the Aichi Expo some years ago I did not see any viewing problems.
-I am sorry, but I just don't know of any intro-level books like that. Seems like there should be some general photography-audience books.

- 33MP challenging for the brain? I do not think so. No more than seeing real life, anyway.

- Still photography is not about the megapixels and higher quality in comparison to TV, but about seeing something interesting and then composing it. You can appreciate and contemplate a good photograph for hours - good quality is necessary for that, but it is only a tool. There is a big difference between a still image and a moving image. Considering that many photos taken with a full-frame DSLR are then transformed into something Facebook can handle... I would not worry about the still image, even if TV had more megapixels...

-Most people live below their eyes' resolution capability all their lives, and some people who just start wearing glasses have difficulty adjusting to the clarity. In fact, optometrists prescribe less-than-perfect numbers on purpose, because the brain can have difficulty absorbing all the details. I'm talking about long-term effects, and unless you are a neurologist, Uaru, your guess is as good as mine.

-With a videocam with 32 MP output resolution shooting at 60 fps - oh yeah, I would be worried! Even if some frames were interpolated, 30 fps shooting with 32 MP each would allow me to pick the best frame. Of course other adjustments such as ISO speed, aperture control, shutter speed etc. would still be necessary, but hey - they can incorporate all these into video if they make a 32MP@60fps videocam. In any case, for any 32MP videocam there would still be a photo camera with at least twice the resolution, so the point is moot.

princewolf, what do you think UHD videocam shooting would look like compared to photo shooting, I mean in a practical sense? What about composing, etc.?

It is a totally different process to shoot video versus shooting stills - practically a different mindset and a different photographer (filmmaker) focus.

However, it might be true that the camera itself could help with getting the most out of your shots when "shooting" in a movie mode and picking the best frame later on. But only to a very small degree, IMHO.

Thank you, Dr. Fossum, for rolling the rock of CMOS to the top of the hill. Concern for the uses it is put to is reminiscent of Big Brother, on two levels. Identification: the act of knowing that which is contained in boundaries! So the means of recording is borrowed by both the state and individuals. The public office watches the individuals, while the individuals watch them watch the individual who chooses to break the laws so made. This production of yours has identified one thing, which is that paranoia is fear. Now, do we know what can be done about proper use and wrong use of CMOS? Like Sisyphus, it is fun to watch the rock roll down that hill again and again.

OK. Wonderful lecture. That means 4 microns is the right size to match what physics already knows. So there is your standard, based upon the part of the electromagnetic spectrum called visible light, and when you multiply this pixel measurement by the FF dimension, you have the practical limits of FF resolution before "shot" artifacts become an increasingly worsening phenomenon. OK. So the Canon 1D-X should provide the best possible rendition of light and resolution for FF, and if you need a larger printout you will need to resize up and accept the inherent losses, or you will need to go up to an MF sensor solution. This applies to all FF manufacturers and not just Canon. Excellent lecture! Thanks!!

I think drawing the line at 4 microns may be premature, but certainly there is a diminished return on investment in resolution as pixels fall below the diffraction limit. Thanks for your nice comments.

I wonder why the Bayer pattern dominates. Given that low-energy red photons have the lowest QE, and my subjective impression is that red chroma noise is most prevalent, would it make more sense to have two red sensors per cell?

One thing I'd be interested to know is whether consideration has ever been given to sensors which do not use regular grid patterns. The eye's rods and cones are certainly not laid out with the regimented regularity of a CMOS or CCD sensor, and neither is film. A pseudo-random pattern might disrupt things like moiré patterns. Of course the changes required to image processing all the way through the stack would be fairly horrible to consider, but I'm pretty sure the brain's optical perception systems do not work through regular grids.

Good Morning Dr. Fossum. I should disclose that I am a proponent of high resolution but not unbridled. I suppose the corollary is that I also support small pixels by definition although I could care less about the size of pixels. I am financially confined to FF for the near future. Also, I am not technically educated on digital imaging.

By diminishing returns do you also mean degradations as is suggested by Techblast? The two ideas do not seem necessarily tied together. As for diffraction, I think of that as an evil that in small amounts is preferable to not getting the DoF one needs. How small can the pixel get (how many pixels on a FF chip) before unrecoverable harm is done? You and others have mentioned 4 microns. Does that equate to 18 MPs on a FF surface also as suggested by Techblast?

Rick, if you read Canon's release statements concerning their new 1D-X, they claim a 6.95 micron pixel size, versus a 6.4 micron pixel size for the 5D MkII. If, for example, 4 microns does turn out to be the point where further shrinkage does not advance resolution detail when enlarged to 100% size, then you can surmise that a FF sensor will max out around 31-32 megapixels, which is good. For me, it would be somewhat fruitless to pay $3K, $4K, $11K for individual lenses only to distort IQ with undersized pixels. Canon has made it clear, at least to me, that pixel size provides a higher signal-to-noise ratio, a cleaner replication of light and a sharper image.

So it would seem that once Canon gets to ~30MP on their FF sensor, Canon, Nikon and others may have to ask the critical question of whether to embark upon medium-format sensors in order to provide higher resolutions while maintaining image quality.

It looks like Canon, Nikon and other FF equipment providers are either going to have to invest a lot of money in R&D to find new methods of photosensor device fabrication that can accurately capture light on smaller photosensor sites, and hence avoid MF sensors, or they will find themselves quickly approaching the limits of what a FF sensor can deliver and face having to develop MF camera solutions for the high-end professional market within the next 4-5 years.

Your calculations are faulty. You have performed a linear calculation when it's the area that matters. If a 6.95 micron pitch gives 18.1 MPix, 4 microns will give 3.02x as many photosites, or 54.6 MPix.
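The corrected area scaling, as a quick check (using only the figures already in the thread):

```python
def scaled_megapixels(mp, old_pitch_um, new_pitch_um):
    """Pixel count scales with the *square* of the pitch ratio,
    since each pixel occupies pitch^2 of sensor area."""
    return mp * (old_pitch_um / new_pitch_um) ** 2

# 18.1 MP at a 6.95 um pitch, rescaled to a 4 um pitch:
print(round(scaled_megapixels(18.1, 6.95, 4.0), 1))  # ~54.6 MP
```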

Oh boy, lots of great info on camera sensors (at least the parts my non-engineer self can follow, lol). To add to the social issues list, perhaps, here's a big one I've noticed: it seems camera companies have a deliberate unwillingness (still) to allow point-and-shoot cameras to perform in darker environments (like indoors), which likely contributes to this worldwide problem of people having a dirt-poor perception of their self-image.

In other words, I believe the horrible results we get from frontal-flash or underexposed, poor-looking photos are a worldwide scourge on human self-perception. People already think they're ugly in GOOD shots, so imagine what must happen when they see themselves portrayed even worse than reality.

Isn't it affordable for a company to just use a lower-megapixel version of even a 3-year-old DSLR sensor (better low light), or, for that matter, add a single screw to the onboard flash, allowing it to pivot upward and become a beautifying bounce flash?

'The force of marketing is greater than the force of engineering.' What I don't understand is why Mr. Fossum is now admitting that current sensor densities are more about marketing than actual performance. In previous discussions here in these forums I personally discussed this with him, and he seemed to have a different position, advocating for more/smaller pixels. Back then I only agreed with him about the pixel-size sweet spot. I by no means have his background, but my observations come after years of real-world product comparisons, and I've always maintained the position that current densities are above the sweet spot for performance, due to market demand - as Sony always puts it with each release of a new consumer sensor with more pixels: 'Due to market demand...'

I believe most of these discussions have been about shot noise, read noise, full well and image quality. I don't recall a lot of discussion of the diffraction limit. In any case, I still believe in smaller pixels.

Until you reach sub-diffraction-limited pixel sizes, there is a sweet spot for every technology generation. Once you are below this size, the return on resolution for smaller pixels diminishes.

Yes, and also diffraction. One problem is that perfect conditions are assumed: camera on a tripod, shooting a static subject, a lens that is perfect from corner to corner, and perfect focus. In practice all that blows away, and it turns out that we can be served better by larger pixels in most common situations, since the maximum resolution the sensor is capable of is very rarely achieved. BTW, nice self-portrait, and thanks for taking the time to reply.

Thanks to Dr. Fossum for sharing his knowledge with us. It is a privilege to hear from one of the lead engineers in the field. My favorite takeaway from this talk is the clear indication that more pixels is not always better, and that all else being equal, I would rather have larger-area light-gathering units on my sensor.

Paul, I did not say double bond, but a single electron can be detached from the bond by a photon with an energy of a bit more than 1 eV (like red, green or blue). Usually we talk about such photon absorption using an energy band model, but the chemical model is sometimes easier to use. I am not sure what you think is impossible, so maybe you are thinking of something different than what I tried to describe.

Thanks, Professor! I understand the photoelectric effect, E = hν − φ, but Si is non-metallic, and it is practically inert; how can a photon-electron reaction occur? Perhaps you are talking about the metallic oxide for which Si may serve as a substrate? Please advise!

The professor should just admit he misspoke about the "covenant bond". The energy of a visible photon can only bring an electron from the valence band to the conduction band. The conduction band is formed by the periodic potential structure of the Si crystal. The electron can move freely inside the crystal, but cannot escape from the crystal. Therefore, no covalent bond is broken.

Paul, I am talking about the optical generation of electron-hole pairs within the silicon crystal. It is a little different from the photoelectric effect.

Infosky's response is right and wrong. In the energy band model, an electron is optically excited to the conduction band, leaving behind a hole. The hole, in essence, is a broken bond (the absence of an electron). As it moves, the "broken bond" shifts from atom to atom in the lattice with remarkable mobility. Thus, the localized point of absorption and the corresponding broken bond exist only momentarily, and then the broken bond moves, with the original break being "healed" by electron motion in the valence band. Besides calling it a "covenant bond", saying that no covalent bond is broken is incorrect when considering the dual nature of electrons in a crystal - both as "waves" and as classical particles.

As an optical scientist, I learned very little from this lecture. It was intended for college students. If you are familiar with sensors, I would advise you to skip the video entirely. You won't learn much.

The speaker mentioned a few "new" ideas, but I would not really consider these ideas worth your time. If you are worried that you might miss something, just fast-forward the video to the 50th minute. You don't lose anything by skipping the first 50 minutes. The speaker talked too much about what was on his mind rather than what was really useful to the audience.

Very interesting talk; I enjoyed the outlook on future technology the most! Some questions came to my mind:

1. Regarding QIS: What will be this paradigm's impact on sensor DR? Shouldn't it basically correlate with jot size and sampling speed? What kind of DR can one expect?

2. Is there any chance of, and research into, increasing the water-bucket depth (to stick with the football-field analogy) of CMOS sensors in order to increase DR? Or will we just have to use larger area for that task? What about logarithmic readout?

3. Regarding RGBZ sensors: to my understanding, the eyes can't see depth; only the brain calculates it out of the parallax information provided by the eye-pair distance (and subject motion / perspective change). Current 3D display technology utilises parallax. This sensor provides true depth data but no parallax information. Can the latter be calculated so that the human visual system can actually make use of the available depth information?

Hi Martin,

1. The DR can be quite large - far larger than with conventional CMOS image sensor pixels. But we could do this with slight changes to CMOS image sensors right now if the camera manufacturers wanted to, so technology is not the limiting factor for DR.

2. Full well depth is always a major design goal in any new-generation pixel, but as pixels get smaller it gets more and more difficult to increase the per-area capacitance to compensate for the loss in area.

3. Parallax can be computed from range data, and some 3D TVs do this calculation inside the TV. There are some emerging standards for 3D TV signals, but I am not intimately familiar with them.
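On point 1, here is a toy simulation of the QIS idea: each single-bit "jot" saturates almost immediately, but the *sum* over many jots keeps responding far past that point, which is where the extra DR comes from. The jot count and exposure values are arbitrary illustrative numbers, not figures from the talk.

```python
import math
import random

random.seed(0)

def jot_sum(mean_photons_per_jot, n_jots):
    """Each jot reports 1 if it caught >= 1 photon (Poisson arrivals)."""
    p_hit = 1 - math.exp(-mean_photons_per_jot)  # P(at least one photon)
    return sum(1 for _ in range(n_jots) if random.random() < p_hit)

# With 4096 jots per output pixel, the summed response still distinguishes
# exposures well past the point where any single jot always reads 1.
for exposure in (0.01, 0.1, 1.0, 3.0):
    print(exposure, jot_sum(exposure, 4096))
```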

With the QIS, would lengthening the period over which a group of samples is taken increase the DR? Then the slower the shutter speed, the greater the DR - or does the speed of light negate this in comparison? Computation speed would also influence this, but not so much when compared to shutter speed.

Interesting that increasing DR is already possible for a set pixel size, if they want to.

The diffraction discussion is a bit misleading, however. It assumes perfect sampling and that the Rayleigh limit is a hard limit. Neither is true.

If you include the effects of the real diffraction MTF=0 cutoff, large pixels versus infinitesimal pixels, a Bayer mask versus monochrome, and the use of AA filters, you find that you can go at least a factor of four or so smaller than the numbers mentioned in the talk.

I've done real-world testing on this, at f/11. The Rayleigh criterion would indicate that 14.8 micron pixels would be all you'd need at f/11. But real-world testing indicates improvement in apparent sharpness right down to about 3.2 micron pixels. The improvement at the end is very, very small, but still there. It goes to zero (visibly) below that.

Thus, for the f/2-f/2.8 optics of cell phones, I wouldn't worry too much about smaller pixels being truly useless until you start to get below 1 micron pixel sizes.
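For reference, the 14.8 micron figure quoted above matches the full Airy disk diameter at f/11 for green light. A quick check (the 550 nm wavelength is my assumption, not something the poster stated):

```python
def airy_diameter_um(f_number, wavelength_um=0.55):
    """Airy disk diameter to its first minima: 2 * 1.22 * lambda * N."""
    return 2 * 1.22 * wavelength_um * f_number

print(round(airy_diameter_um(11.0), 1))   # ~14.8 um: the f/11 figure above
print(round(airy_diameter_um(2.8), 2))    # ~3.76 um at f/2.8

# The poster's testing found gains down to roughly 1/4 of this diameter:
# a few microns at f/11, and around a micron at f/2-f/2.8, consistent
# with the cell-phone remark above.
```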

I am not so disturbed that Eric has bestowed new powers of surveillance upon Big Brother as I am by the fact that "The force of marketing is greater than the force of engineering". When shopping for cameras I dread having to sift through hectares of semi-dysfunctional cameras that are really misleading marketing trinkets designed to appeal to the unwashed masses. I would prefer the best engineering available in a camera.

I don't trust camera manufacturers, and I suspect they conspire against everyday consumers and photography enthusiasts alike. I would like to know what the actual difference in manufacturing costs is between an 18 megapixel APS-C sensor and an 18 megapixel Full Frame sensor. Am I being denied an inexpensive Full Frame camera by the forces of marketing?

"I would like to know what the actual difference in manufacturing costs is between an 18 megapixel APS-C sensor and an 18 megapixel Full Frame sensor. Am I being denied an inexpensive Full Frame camera by the forces of marketing?"

No

Go to http://en.wikipedia.org/wiki/Image_sensor_format — it states: "Production costs for a full frame sensor can exceed twenty times the costs of an APS-C sensor. Only about thirty full-frame sensors can be produced on an 8 inches (20 cm) silicon wafer that would fit 112 APS-C sensors, and there is a significant reduction in yield due to the large area for contaminants per component. Additionally, the full frame sensor requires three separate exposures during the photolithography stage, which requires separate masks and quality control steps. The APS-H size was selected since it is the largest that can be imaged with a single mask, to help control production costs and manage yields."
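A crude sketch of the yield arithmetic behind that passage. The defect density and packing factor are assumed round numbers, not foundry data; real reticle layouts also pack small dies more efficiently than a naive area ratio, which is why the quoted 112 APS-C figure is higher than this estimate gives.

```python
import math

def poisson_yield(area_cm2, defects_per_cm2):
    """Classic Poisson yield model: fraction of defect-free dies."""
    return math.exp(-area_cm2 * defects_per_cm2)

WAFER_AREA_CM2 = math.pi * (10.0 ** 2)   # 8-inch (20 cm) wafer

# Nominal sensor areas: FF 36x24 mm, APS-C roughly 22.2x14.9 mm.
for name, area_cm2 in (("FF", 8.64), ("APS-C", 3.32)):
    gross = int(WAFER_AREA_CM2 * 0.85 / area_cm2)   # ~85% packing assumed
    good = gross * poisson_yield(area_cm2, 0.1)     # 0.1 defects/cm2 assumed
    print(f"{name}: ~{gross} gross dies, ~{good:.0f} good after yield")
```

Even this rough model reproduces the "about thirty" full-frame dies per wafer, and shows how the larger die area compounds the cost: fewer candidates per wafer *and* a lower fraction of them defect-free.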

And the cost of a 600D with the same sensor as the 7D is $750. However, the price of complete cameras does not tell me anything about the manufacturing costs of their constituent components. Rather than relying on Wikipedia or the suggested retail prices of cameras to speculatively interpolate the costs of sensors I was hoping that someone familiar with sensor manufacturing would spill the beans on what it actually costs to make sensors.

And what would one get with the knowledge of sensor manufacturing costs? The right to decide on a reasonable price for a camera? When making such a calculation, be sure to consider the cost of R&D, QC&T, licensing, return on investment, both in physical and intellectual assets, transport, packaging, storage etc...

Question: "And what would one get with the knowledge of sensor manufacturing costs?"

Answer: The deepest, darkest secret of the camera business!

The costs you mention (like R&D, licensing and transportation) are the same for an APS-C sensor and a Full Frame sensor. I want to know what the difference is. I want to know if a camera company is reducing the cost of a camera by $50 by giving me an APS-C sensor instead of a Full Frame sensor.

I would love to know how much it really costs in total to make these cameras, and how much of the pricing is purely market positioning and greed. I have a hard time believing it is necessary to charge $2500+ for a camera like the 5D Mark II in order to make a profit. It is more understandable with a camera like the 1D X, which has a completely new sensor and autofocus system with lots of R&D behind it. But the 5D Mark II used mostly existing technology and components. Heck, some of the components were already previous-generation technology at the time the camera was released.

I suspect that the actual difference in manufacturing cost between a FF and a crop sensor is significantly less than some would have you believe. It is probable that the only reason there aren't FF cameras for $1000 USD or less is that the camera manufacturers want to be able to charge a premium for full-frame cameras. I bet if you really knew how much it costs to produce, say, the D3X, you would find that there is a very high markup. The only explanation for Sony and other manufacturers being able to sell cameras with the same sensor for significantly less money is that the sensor is not the major cost of producing the camera. If it were, how could Canon sell Rebels with the same sensor for over $1,000 less than the 7D, or the 5D Mark II for $2,500 when the 1Ds Mark III has an SRP of $6,999.99? It is obvious the FF cameras are being kept at an artificially high price. It is also obvious that the sensor is only a small fraction of the cost of producing the camera.

When will the first HD-resolution (2MP) sensor (1/2.3" or bigger) made with modern technologies arrive, for social network users who don't want to print their pictures but instead want better quality in low-light conditions, smaller-capacity memory cards, faster processing, post-processing and upload times, and simply to view them on a high-definition screen while paying less for such a sensor (camera)? Or will that never be made because it is dangerous to the worldwide megapixel race and the megapixel myth?

I'm all for this idea. The MP race is only making things difficult for consumers like me. I want fast shutter speeds and the least possible noise in low light. Huge-resolution sensors have their place in good light.

The thing with megapixels is that how many you need really depends on what you want the camera to do. For example, if all you want to do is post pics on the web, then any halfway decent camera will do. But if you want to make large fine-art prints, then you really do need as many megapixels as you can get. There is no "megapixel myth", just a lack of understanding of what more megapixels actually give you and what the tradeoffs are.
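The print-size tradeoff is easy to put numbers on: the pixels needed for a print are just width times height times the square of the pixel density. A quick sketch, taking 300 ppi as a common fine-print target (illustrative, not a hard rule):

```python
def megapixels_for_print(width_in, height_in, ppi=300):
    """Megapixels needed for a print at a given pixel density."""
    return width_in * ppi * height_in * ppi / 1e6

# A modest print vs. a large fine-art print:
print(megapixels_for_print(8, 10))    # 8x10" at 300 ppi  -> 7.2
print(megapixels_for_print(16, 20))   # 16x20" at 300 ppi -> 28.8
```

So a 2MP sensor is plenty for an HD screen, while a 16x20" gallery print genuinely uses everything a high-resolution sensor can deliver.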

Well I definitely enjoyed the presentation. Glad you shared it with us. Look forward to the plenoptic and 3D imaging aspects for digital photography. As for the social issues that will arise due to this technology, well, anything can be used for both good and evil, so I really don't have an opinion to share here w.r.t. that. :)

While credit should be given where it's due, Eric Fossum did NOT invent CMOS image sensors.

The invention of passive pixel sensors dates back to the late 1960s (it's generally credited to Peter Noble in 1968). As for commercialization, VVL had the first commercial CMOS sensor on the market around 1993 (if I recall correctly). This was a passive pixel device.

Eric's work at JPL was on active pixel sensors. This was a major improvement that allows the high-quality CMOS sensors we enjoy today. However, there was work in this area that predates Eric. Namely by Tsutomu Nakamura, at Olympus, and there's an even earlier claim by Hitachi.

Where Eric does have a good claim is to have invented the sensor that uses intra-pixel charge transfer and achieves real correlated double sampling, which is a big deal when it comes to noise reduction.

Jon, you are almost exactly right. However, nearly 100% of today's CMOS image sensors fall into your last paragraph, and all are referred to just as CMOS image sensors. In the mid-90s I published a review paper that refers to all the historical works you mention. Also note that my friend Tsutomu Nakamura coined the phrase "active pixel". I just made it well known.

In the land of passive pixels, much credit is due to Gene Weckler. The technology in those days was MOS, not CMOS.

Eric, you've always given credit to those earlier works. I wasn't trying to imply otherwise.

Yes, without true CDS, CMOS sensors wouldn't be where they are today. This was vitally important work.

I wasn't attempting to take away from your contribution. Rather I was trying to show that there have been many important contributions made.

I believe that science and engineering badly represent themselves to young people by perpetuating the myth of the lone inventor. We are social animals, and young people are likely to be turned off the idea of going into science if they think it means working as a recluse.

Science and engineering (as you well know) are almost always highly collaborative affairs, rich in social interaction. If we are going to overturn the negative view that young people in The West have towards science and engineering as a career, this needs to be emphasized.

BTW, you always had a reference to an early charge transfer paper in your early papers. Was that by Meindl?

Carver Mead is a brilliant person and someone I like and respect a lot. You are probably referring to the 3-layer detector from Foveon. I think the primary inventor on that was Dick Merrill, formerly of National Semiconductor, and sadly now deceased. There was also some significant prior art as well (see, for example, references in: http://ericfossum.com/Publications/Papers/2011%20IISW%20Two%20Layer%20Photodetector.pdf ). Frankly I am not sure of Carver's role in the development of the Foveon X3 sensor. But, I am sure he had a little something to do with developing the technology and product as you say.

This is great. My first job out of college was to design CMOS image sensor arrays at Intel around 1996, largely based around his designs. Nice to see him talk about upcoming technologies as well.

The societal implications of technology are a concern, but as engineers, we just make things first because that's what we do, and ask questions about it later. Really, in 100 years, these digital image sensors are going to be everywhere: on walls, flexible fabric, packaging, everything, and they are going to be networked. We're not even close to their potential applications at this point.

Meanwhile, back at Intel, we added a camera-shutter noise to make people more comfortable with using digital cameras, as back then consumers didn't really know about the coming digital camera revolution (digital cameras were largely used only by a few professionals).

Most readers who have heard of CCD/CMOS will assume the title refers to image sensors, not generic CMOS. And anyone who reads the first sentence will have the matter clarified for them.

Sometimes brevity trumps accuracy. The title also says "discusses digital image sensors", but technically, that isn't accurate either. He discusses much more than just image sensors (just as the "CMOS" in the title refers to much more than just "CMOS"). But it doesn't make sense to list *all* the things he talks about in the title, so some sort of short summary is made.

Words have meaning and the truth matters - perhaps not to everybody, but to people who understand what the words mean, the truth often does matter. I suspect it matters to Gene Weckler. Perhaps some day you will invent something - or perhaps you already have. In any case, I hope that you will get proper credit for it.

Correct attribution is an important issue. If you are writing articles that are read by millions of people, then you have a responsibility to include factual data; otherwise the article is no better than the kind of "beer talk" you can hear in every bar around the country.

The RGBZ technology is interesting, as is QIS. Besides gesture control, I wonder if the Z component could be used in some way in webcams, perhaps to give a 3-D effect. Samsung's vision seems to be 3-D everywhere, not just entertainment.

Ron, just to be clear, Gene figured out how to integrate optical signal on a PN junction and to read that out with a switch, now called a passive pixel. The technology was MOS, not CMOS, for those that care about wording. Gene is a friend and still very active in the image sensor community. He was very supportive of our work at JPL.

And since the invention of the CMOS active pixel image sensor with intrapixel charge transfer many many engineers around the world have pushed this technology to its high level of performance that we see today.

Eric, I pointed out above that you cited Gene Weckler. You have been clear about Weckler's contribution in your writing. You give Weckler credit for the basic passive pixel which begat the early passive pixel CMOS sensors. You have also been clear about the history of active pixel designs before this idea was applied to CMOS. I have no problem with YOUR scholarship and credit attribution on this.

I don't think it's correct for dpreview to say that you invented the CMOS sensor, or the CMOS image sensor, and I doubt you think that's correct either.

Ron, as I said above to Jon Stern, when people say CMOS image sensor today they are almost always referring to a CMOS active pixel image sensor with intra-pixel charge transfer yada yada yada, since the earlier incarnations did not work so well. I am pretty comfortable with the shorthand title, and it is OK with me if you are not and prefer the longer title for clarity. To me they are one and the same these days.

Eric - I have elsewhere referred to you as the inventor of the "modern CMOS sensor" (with image implicit). I think this is not overly long or technical, yet still fair both to you and to those upon whose shoulders you have stood.

I wonder who created Eric Fossum (and the rest of the lineage). Perhaps they should get credit too, along with all his school teachers and first girlfriend, etc. And perhaps while we are at it we should thank the Earth and the Sun, without which none of this CMOS stuff would be possible either. And don't forget the countless animals that gave their lives to sustain Eric. It's also worth noting that in a few thousand years all of this will very likely be completely lost and forgotten in the never-ending, never-beginning interdependent crucible of arising and subsiding phenomena.

The lecture was terrific and I have memorized several points you made to impress my friends. I work in aviation journalism, taking videos of propeller aircraft, and the older CCD sensors have fewer rolling-shutter problems than CMOS, correct? I am talking about shutters, and I realize that is not your expertise.

I do know something about electronic shutters. Generally we talk about either rolling shutters or global shutters. Full-frame CCDs and frame-transfer CCDs have some shutter issues like smear. Interline CCDs have less of a smear problem. CMOS sensors have no smear, but rolling-shutter devices can show artifacts under certain conditions, and global-shutter CMOS sensors usually have larger pixels and higher read noise. Recently some lower-noise global-shutter CMOS image sensors with small pixels have been made for R&D purposes, and commercial devices will probably follow soon.
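The rolling-shutter artifacts mentioned here are easy to simulate: each row is read out a little later than the one above, so a vertical edge moving horizontally (such as a propeller blade) is recorded as a slanted line. A toy sketch, with made-up frame size and timing:

```python
def rolling_shutter_columns(rows, row_time_us, edge_px_per_us, start_col):
    """Column at which each row records a horizontally moving vertical
    edge, when rows are exposed top to bottom with a fixed per-row delay."""
    return [start_col + edge_px_per_us * row * row_time_us
            for row in range(rows)]

# 8-row toy sensor, 10 us readout per row, edge moving 0.5 px/us:
cols = rolling_shutter_columns(8, 10, 0.5, 100)
print(cols)  # each successive row sees the edge 5 px further right: skew
```

The skew grows with per-row readout time and subject speed, which is why faster-readout (or global-shutter) sensors show less of it, and why propellers come out looking bent.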

One of the questions I have is: when was CMOS first used in cameras, or was it first used in computer chips?

I had a discussion with Sony about 10 years ago about how they should use the same principle for making sensors as is used for computer chips: a chemical process.

I suggested many new computing principles to Apple and IBM back in the mid-80s that are now standard: multimedia, multi-directional input, stacked technology, even multiple screens. None were in use at the time. Apple engineers later left for Commodore to make the Amiga 1000, which I was told drew on my principles, and they were then adopted by IBM, as were some other things I gave both companies, but it's not widely known.

It was not used by Sony yet, as far as I can tell. Now we see them everywhere. When was CMOS first used in camera sensors?
