The sensitivity is impressive! 0.03 lux is about 0.0028 footcandles. For those of you who know nothing of lux or footcandles: at ISO 100, a meter reading of 100 footcandles gives you f/2.8 at 24 frames per second.
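If you want to check that conversion yourself, here's a quick sketch (the constant just follows from 1 footcandle = 1 lumen/ft², 1 lux = 1 lumen/m², and 1 ft = 0.3048 m):

```python
# Sketch: converting lux to footcandles.
LUX_PER_FOOTCANDLE = 1 / 0.3048**2   # ≈ 10.764, since 1 ft = 0.3048 m

def lux_to_footcandles(lux):
    return lux / LUX_PER_FOOTCANDLE

print(round(lux_to_footcandles(0.03), 6))   # ≈ 0.002787
```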

I don't think we'll be seeing this technology in a consumer product this year or next

So, this and the thing about the guy using video to capture stills... Canon, don't forget photographers. Also, don't forget that photography is not videography. Crank out something to tickle the video industry, then get back to stills...

Lol, you never fail to complain when something you're not interested in shows up here. Can't you just ignore it and be happy for the people who shoot video? And yes, there were 2 articles about video, but what about the 10 photo-related posts in a row before that? Oh right, you're not interested in it, so therefore Canon has forgotten about photographers. You're assuming that one takes away from the other, when in reality they are separate divisions. The motion picture industry is just as big as or bigger than photography; just because you aren't a part of it doesn't mean those people don't deserve any new gear.

I'm curious as to what piece of gear you are looking for that you feel is holding you back so much, because clearly you are looking for something specific and not seeing it. So what is it?

I see a grim trend in a direction I do not like. As the old saying goes, the squeaking wheel gets the oil. I remember a Canon experimental camera from a while ago, a weird white spaceship-looking thing, touted as the future of photography: the imager videoing the subject, then selecting the best frames. This is NOT a direction I want things going; therefore, I make my voice known. Moreover, I am certain that I am not alone in this.

Why doesn't Canon actually announce a product rather than things in R&D?

I guess Canon just wants to assure us that they're fiddling with new tech. And on a side note: if this happens for video, I strongly hope there is something similar ready within the next five years for their 1D X body update. And given that, the essential parts of it surely should trickle down into the 5Ds, 6Ds and 7Ds... Please tell me I am wrong, if my one-year-shy-of-fifty eyes are way too blueish...

I see a grim trend in a direction I do not like. As the old saying goes, the squeaking wheel gets the oil. I remember a Canon experimental camera from a while ago, a weird white spaceship-looking thing, touted as the future of photography: the imager videoing the subject, then selecting the best frames. This is NOT a direction I want things going; therefore, I make my voice known. Moreover, I am certain that I am not alone in this.

Same here, I still find it strange when I see someone using a DSLR as a video camera. Ergonomically it's hopeless, so I'm hoping for a split between stills and video hardware. As for the original post: happy to see a report that Canon is continuing to advance sensor technology, as we should expect.

I strongly support your opinion. And yes, I hope the new tech relates to still sensors as well, and I expect them to announce it. Hopefully within this year...

Can anyone explain why DSLR sensors are not square, which would provide more viewing area?

I wonder if this is a supercooled sensor. As pixel area grows, so does the amount of dark current in the pixel, which means higher read noise. The best way I know of to reduce noise from dark current is to cool the sensor. There have been rumors in the past that Canon was working on some kind of active cooling technology... maybe this is the first glimpse of the future to come? If the sensor can detect 0.03 lux at high ISO, that has to mean a proportional drop in noise overall, and even though the noise is probably higher at low ISO, it should be much lower than in Canon sensors today.

Can anyone explain why DSLR sensors are not square, which would provide more viewing area?

This has been gone over more times on this forum than I could ever count. Simple answer: a square sensor with a diagonal of 43.3mm (the same as 36x24mm) would not work due to the extra height needed for the reflex mirror and the flange distance of the EOS system (the taller mirror would hit the rear element or mount of the lens). And as others have pointed out, not all lenses have round baffles that produce the full image circle (only enough to cover the portion of the image circle that contains the 36x24mm frame).
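A quick back-of-the-envelope check of that geometry (assuming the standard 36x24mm frame):

```python
import math

diag = math.hypot(36, 24)     # 36x24mm full-frame diagonal, ≈ 43.27 mm
side = diag / math.sqrt(2)    # square sensor filling the same image circle
print(round(side, 1))         # ≈ 30.6 mm tall, vs 24 mm for 36x24
print(round(side**2), 36*24)  # ≈ 936 mm^2 vs 864 mm^2 of area
```

So the square would gain you under 10% more area while needing a mirror roughly 6-7mm taller, which is where the mechanical objection comes from.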

the sensor’s pixels and readout circuitry employ new technologies that reduce noise, which tends to increase as pixel size increases.

I'm not a native English speaker, but that sounds wrong, no?

more noise with larger photosites?

I believe the point they're making is that, though the noise per pixel generally goes down with larger pixels, the noise per unit of area generally goes up.

So, with your low megapickle large area per pixel sensor, at 100% resolution (pixel peeping) things will look cleaner, but there'll be more total noise in the image as a whole than with a high megapickle small area per pixel sensor.

I'm not sure I follow the "noise per unit of area" thing.

I guess we really have to think of this in the context of very low light levels and not as compared to our DSLR sensors. Certainly, for the same level of illumination, each of these 19 micron sensels would pick up more photons than a 4.39 micron sensel (like the 7D's), and have lower shot noise than the smaller sensel. I'm assuming (but don't know for sure) that you'd have the same amount of read noise for the two sensels, and hence a better signal-to-noise ratio for the bigger sensel. However, if you're pushing to record lower and lower levels of light intensity, then maybe what they mean is: as you try to read lower light levels (and use larger sensels), the shot noise becomes important, and at lower light levels the read noise also has a bigger impact on the total noise.

Like I said, I'd be interested to see what neuro and jrista have to say.
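For what it's worth, the sensel comparison above can be roughed out numerically. The photon flux here is a made-up illustrative number, and the SNR assumes pure shot-noise (Poisson) statistics:

```python
import math

def sensel_snr(pitch_um, photons_per_um2=5.0):
    """Shot-noise-limited SNR of one square sensel; flux is illustrative."""
    signal = photons_per_um2 * pitch_um**2   # photons collected
    return signal / math.sqrt(signal)        # Poisson SNR = sqrt(signal)

# ~19 micron prototype sensel vs a 7D-class 4.39 micron sensel:
print(round(sensel_snr(19.0), 1), round(sensel_snr(4.39), 1))
```

The per-sensel SNR advantage scales with the pitch ratio (about 4.3x here), whatever flux you assume.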

I do not have the background to speak to how read noise scales with sensor size but, for the same illumination, photon shot noise will certainly increase for a larger pixel. Specifically, this noise will increase with the square root of the area. However, the signal will increase in proportion to the area, leading to a noise-to-signal ratio that decreases for an individual larger pixel (like 1/sqrt(size)), as conventional wisdom of internet message boards expects.

However, the noise(-to-signal-ratio) of an image is not that of an individual pixel. For reasons I and others have gone into before, if you want to compare the noise between two sensors with different resolutions, you need to divide the per-pixel noise to signal ratio by the square root of the number of pixels to get a figure you can compare between the two sensors. In other words, there are cases where a lot of low SNR pixels is a lot better than a smaller number of higher SNR pixels.

Long story short, if you have two sensors with the same overall sensor size, quantum efficiency, and a full well capacity and read noise (*) that scales (i.e. increases) with the photosite size, an image converted to a given resolution made from the two sensors will have exactly the same SNR and dynamic range, even if the higher-resolution sensor has a worse SNR if you only look at one pixel. (But, under the right conditions, the high resolution sensor can obviously give a ... higher resolution final image. If storage and processing is "cheap", then under these assumed conditions you always want all the megapixels you can get.)

Now, if this new sensor does something like allow large pixels with the same (or lower) read noise per pixel than small pixels, then we have something (remember, in the analysis above, you got the same overall picture even if the read noise per pixel increased with pixel area). But I will have to wait for someone more knowledgeable than me to chime in on that.

(*) There might be a square root of the photosite size missing in here; I haven't had my coffee yet this morning, and am too lazy to go looking for it.
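The size-normalization argument above can be sketched with illustrative numbers (shot-noise-limited, same total light on both sensors; the pixel counts are arbitrary):

```python
import math

def image_snr(total_photons, n_pixels):
    """SNR after aggregating all pixels (shot-noise-limited sketch)."""
    per_pixel = total_photons / n_pixels
    pixel_snr = math.sqrt(per_pixel)          # per-pixel SNR drops with count
    return pixel_snr * math.sqrt(n_pixels)    # ...but aggregation restores it

# Same total light, wildly different pixel counts -> identical image SNR:
print(image_snr(1_000_000, 2_400), image_snr(1_000_000, 24_000_000))
```

Both come out to sqrt(total_photons) = 1000, which is the point: per-pixel SNR differs, whole-image SNR does not.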

We're already capturing 50% of the light that enters the camera. And noise and clarity under dark conditions are a result of quantum distribution of electrons. Meaning the noise you capture in a noisy photo is the result of noise from the light itself, not from the camera. You cannot capture less noise than exists in the incoming light, and you cannot capture more light than exists.

These videos seem to show a 4 stop improvement. My guess is that they are simulated by a marketing company and that this is designed to be misleading.

There are inverse factors at play. Read noise is initially caused by dark current flowing through the sensor (with secondary downstream contributors as well). With a larger pixel area we have a larger photodiode, which means more area for current flow. That increases the contribution to read noise. By how much I can't say...depends on the materials used for the sensor, doping, and a number of other factors. I don't have enough information to offer specific numbers.

On the flip side, the larger pixel area means proportionally greater signal. The 1D X has a 90,000+ electron full well capacity (FWC). Assuming a 7.2x larger pixel area and the same Q.E., full well capacity should be somewhere around 650,000 electrons. So, even at the lowest signal levels, there should be a far greater potential charge, simply because there is so much physical area for photons to strike per pixel. If the sensor has a greater Q.E. than the 1D X sensor, then the potential for true sensitivity is even greater; however, the FWC is fixed by area, so a higher sensitivity simply means the sensor saturates faster.
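A quick arithmetic check of that estimate (both inputs are the assumed figures from the paragraph above, not measured values):

```python
fwc_1dx = 90_000    # assumed ~1D X full-well capacity, in electrons
area_ratio = 7.2    # assumed photosite-area ratio from the estimate above
fwc_estimate = fwc_1dx * area_ratio
print(round(fwc_estimate))   # 648000 electrons, i.e. roughly 650k
```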

The interesting thing about dark current, the prime contributor to read noise at the time of readout, is that it doubles with every 10°C increase in temperature. Conversely, it halves with every 10°C drop in temperature. Assuming a "room temperature" sensor (~23°C), a 10° drop in temperature should improve dark current by a factor of two. Now, it is unlikely a sensor will operate at room temperature; pixel density and the amount of current used for readout will raise the temperature by a certain amount. Let's say normal usage increases the sensor temperature by 10-20°. To get any real benefit, we would need to cool by at least 30° to halve the dark current. According to the specifications of scientific-grade sensors, which use Peltier cooling on CCD sensors, at around -80°C dark current is ~200x lower than at normal operating temperatures. That is a drop of ~125°C, so the improvement in dark current is non-linear as you keep cooling (otherwise, by the doubling rule, one would expect a drop of roughly 5,800x in dark current).

(Aside: for those who wish to test this, try night sky photography on a very cold night. Anyone who does night sky or aurora photography at northern (or southern) latitudes probably knows that while your camera's battery performance drops significantly at low (sub-zero) temperatures, your night sky photos have very little, almost no, noise. That is all thanks to the fact that dark current falls rapidly with temperature.)
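The doubling rule is easy to play with; this sketch just encodes the rule of thumb from above, not any measured sensor:

```python
def dark_current_factor(delta_t_c):
    """Rule of thumb only: dark current doubles for every +10 degC."""
    return 2.0 ** (delta_t_c / 10.0)

print(dark_current_factor(-10))         # 0.5 -> halved per 10 degC of cooling
print(round(dark_current_factor(125)))  # ~5793x over a 125 degC span
```

The gap between the ~5,800x the rule predicts over 125°C and the ~200x scientific CCDs actually report is the non-linearity mentioned above.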

Dark current today is already mitigated by using CDS, or correlated double sampling, which samples the charge in each pixel when the sensor is reset and subtracts that charge when the sensor is read after an exposure, effectively eliminating the dark current offset. Analog per-pixel CDS circuitry seems to be a contributor to banding noise, however, which is what led Sony to move to an on-die, column-parallel digital CDS approach in Exmor. Regardless, it is possible Canon has developed significantly more efficient CDS circuitry, which, when combined with moderate active cooling to keep the sensor below room temperature, could produce considerable gains in read noise performance.

That said, if Canon still uses high-frequency, off-die, moderately parallel ADCs in DIGIC chips, I would suspect the sensor still has banding noise problems. I guess the off-die DIGICs could be cooled as well, and/or the frequency of the ADCs lowered (which should be more than possible with a 2.4mp sensor), both of which should lower the banding noise contribution from A/D conversion.

However, if you're pushing to record lower and lower levels of light intensity, then maybe what they mean is: as you try to read lower light levels (and use larger sensels), the shot noise becomes important, and at lower light levels the read noise also has a bigger impact on the total noise.

This is true... photon shot noise becomes a problem at higher ISOs (actually, photon shot noise is the primary cause of noise at high ISO; increasing the ISO setting itself does not add noise). However, the ratio of signal to read noise is MUCH smaller as well, which is why reducing dark current in the sensor is important. By reducing dark current, you increase efficiency, which supports a higher Q.E., which means that a greater percentage of photons incident on the photodiode itself actually frees an electron. By reducing the electron contribution to the photodiode from dark current, you increase "true sensitivity", thus making higher ISO settings more effective, with less noise. Combine that with a larger pixel area, and for any given unit of time, SNR should be much higher than with any current Canon sensor, at all signal levels.

Oops, this sounds to me like Canon is trying to tell us: "Please don't jump to the dark side, although for the time being we are not able to sell you equal equipment. We are working on some great stuff, so bear with us."

We're already capturing 50% of the light that enters the camera. And noise and clarity under dark conditions are a result of quantum distribution of electrons. Meaning the noise you capture in a noisy photo is the result of noise from the light itself, not from the camera. You cannot capture less noise than exists in the incoming light, and you cannot capture more light than exists.

These videos seem to show a 4 stop improvement. My guess is that they are simulated by a marketing company and that this is designed to be misleading.

Actually, if something mentioned by TheSuede recently is correct, we are capturing only about 16-18% of the light entering a camera. We capture between 40% and 60% of the light incident on the photodiode. That means 40-60% of the photons that pass through the lens, through the IR cut and AA filters, and through the CFA, and actually reach the photodiode, effectively free an electron. However, only 30-40% of the light that reaches the CFA makes it through, as the CFA is explicitly designed to filter out light of certain frequencies. So... 50% of 35% is 17.5%; modern cameras are currently working with VERY LITTLE light. We have a long, long way to go before we are recording as much light as we can, and in a Bayer-type sensor, that would still be at most 40% of the light that makes it through the lens. The lens itself, assuming a standard multicoating, can cost as much as 15% light loss or more (depending on the angle to a bright light source). Nanocoating improves that, reducing the loss to only a few percent. The IR cut and AA filters cost a few percent as well.

The only way we could preserve more of the light that makes it through the lens would be to either move to grayscale sensors (eliminate the CFA), or use some kind of color splitting in place of a CFA. Combined with nanocoatings on lens elements and an efficient filter stack over the sensor, total light loss could drop to 10% or less, meaning the Q.E. of the photodiode itself determines the rest. 50% of 90% means we would preserve ~45% of the light reaching and passing through the lens on the camera.
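The light-budget multiplication in the two paragraphs above, with the quoted (assumed) factors spelled out:

```python
# All factors are the assumed figures quoted above, not measurements.
cfa_transmission = 0.35    # ~30-40% of light passes the Bayer CFA
photodiode_qe    = 0.50    # ~40-60% of photons on the diode free an electron
print(round(cfa_transmission * photodiode_qe, 3))   # 0.175 -> ~17.5% today

filter_stack = 0.90        # hypothetical CFA-free stack with nanocoatings
print(round(photodiode_qe * filter_stack, 2))       # 0.45 -> ~45% ceiling
```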

As for "noise in the incoming light"... that is kind of a misnomer. Photon shot noise is caused by the random distribution of photon strikes on the sensor's pixels. With larger pixels, noise caused by that physical fact is reduced, as for any given level of light, each pixel on a large-pixel sensor picks up more light than on a small-pixel sensor. To some degree, assuming the same physical characteristics of the silicon in both a high-density and a very low-density sensor, the high-density sensor will sense almost the same total amount of light as the low-density sensor, minus small losses due to the greater amount of wiring, which reduces the total surface area that is sensitive to light (and yes, losses will occur despite the use of microlenses). On a size-normalized basis (i.e., scaling the higher-resolution image down to the same dimensions as the lower-resolution image), the higher-resolution image should perform nearly as well as the lower-resolution image, assuming the physical characteristics of the sensors are otherwise identical (same temperature, same Q.E., same CFA efficiency, etc.).