Yes, that term is often used. It doesn't really describe what it represents or what causes it, though, so I use "Blackframe" when there is essentially no exposure involved (or the shortest exposure the camera can be set to).

No problem, I'll explain. A Blackframe is supposed to be black because it received no exposure. However, when we analyse it there is noise, produced by the camera electronics. When we take precautions to eliminate as many noise sources as possible (e.g. thermal noise roughly doubles for each 6 degrees Celsius rise), we can assume that the remaining noise is unavoidable and linked to the act of reading out the sensor data, hence the term "Read noise".

A Blackframe is typically produced by setting the camera to its shortest possible exposure time (to counteract thermal noise build-up), using a body cap instead of a lens (to avoid light leaks, electronic noise from the lens, and camera gain adjustments at certain apertures), and covering the eyepiece of the viewfinder (to avoid light leaking into the mirrorbox through the back).

The signal that is still recorded is the lowest signal possible and is usually random with a Gaussian distribution. It changes with the ISO (gain) setting. It is not the same as a Darkframe, which is produced with a much longer exposure time (typically >1 sec.), as used for Darkframe subtraction. By comparing a Darkframe and a Blackframe one can quantify the (mostly thermal) contribution.
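To make the statistics concrete, here is a minimal sketch of such a Blackframe as synthetic data. The 1024 ADU offset and the per-ISO noise figures are illustrative assumptions, not measurements from any particular camera:

```python
import numpy as np

rng = np.random.default_rng(0)
OFFSET = 1024                                  # hypothetical bias level in ADU
read_noise = {100: 2.5, 400: 3.0, 1600: 6.0}   # assumed SD per ISO, in ADU

for iso, sigma in read_noise.items():
    # A Blackframe: Gaussian read noise centred on the offset
    frame = rng.normal(OFFSET, sigma, size=(2000, 3000))
    print(f"ISO {iso:4d}: mean = {frame.mean():7.2f} ADU, SD = {frame.std():.2f} ADU")
```

The mean stays pinned near the offset at every ISO; only the SD (the read noise in ADU) changes with the gain setting.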

Thank you. I was aware of the difference between a Black-frame (called an Offset by astrophotographers, I believe) and a Darkframe. I am interested in the Black-frame because I wish to use it to estimate the influence of ISO on the read noise of my camera.

Quote

The offset in most Canon cameras is part of the ADC quantization, so it is not added afterwards. That's why the noise has a Gaussian distribution centered at (usually) ADU 1024. There are also values below 1024 because of the Readnoise.

Still scratching my head here. By afterwards I meant after (or even simultaneous with) conversion of the (amplified) signal from the sensor into ADU. Thus 1024 will be added to whatever read noise is associated with a given pixel. And I assume that such read noise cannot take negative values. So I fail to see how one can get values less than 1024, nor how the mean of the read noise would be at 1024 (if you are talking about values obtained from a single image).

Quote

What you describe is a Darkframe (not Blackframe) noise reduction technique, commonly used in astrophotography where long exposure times are needed to collect enough photons to record faint signals. This is also why Canon cameras are often used in astrophotography, because the Readnoise improves predictably with averaging multiple frames and may reveal faint signals.

From my brief reading in the AstroP sites, the same method (i.e. the mean or median average of several images) is recommended for determining an Offset (or what we are calling a Blackframe). How do you recommend obtaining Blackframe data? I assume it does not involve taking the difference between two images, as one would do for S/N analysis.
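For what it's worth, the stacking approach mentioned above can be sketched with synthetic frames (the offset, noise level, and frame count below are assumptions): averaging N frames reduces the random noise by roughly the square root of N.

```python
import numpy as np

rng = np.random.default_rng(1)
OFFSET, SIGMA, N_FRAMES = 1024.0, 3.0, 16   # assumed bias, read noise, count

# Simulate N blackframes and average them into a master Offset frame
stack = rng.normal(OFFSET, SIGMA, size=(N_FRAMES, 500, 500))
master = stack.mean(axis=0)

print(f"single frame SD: {stack[0].std():.2f} ADU")
print(f"master frame SD: {master.std():.2f} ADU "
      f"(about {SIGMA / np.sqrt(N_FRAMES):.2f} expected)")
```

With 16 frames the residual noise in the master drops to about a quarter of a single frame's, which is why stacking is recommended for calibration frames.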

Quote

... When you subtract 2 noisy data sets with a mean value of e.g. 1024, there is a 50% chance that an image has a value of 1024 or less, and an equal chance of it being 1024 or higher. When we subtract an image with a higher data value from one with a lower data value we would get a negative number, which cannot be encoded in an integer calculation, and thus results in a clipped noise distribution.

Therefore we add an offset to both datasets, which only changes the mean value but not the SD around that mean, so the result of the subtraction can be statistically evaluated. My choice of 1024 is not a must; one can use any number that doesn't add to the risk of integer value clipping, although clipping could also indicate an ADC problem. That's why I use the IRIS stat command after the subtraction, to check that no values resulted in a (probably clipped) zero despite the offset. If the minimum is zero, I redo the subtraction with a higher offset (for light-exposure frames), but for Blackframes this is usually not needed (especially at lower ISO gain settings).
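The clipping problem and the offset fix can be illustrated with synthetic data (the noise figures are assumptions, and NumPy floats with an explicit clip stand in for the camera's unsigned-integer arithmetic):

```python
import numpy as np

rng = np.random.default_rng(2)
# Two synthetic blackframes: Gaussian noise of SD 3 ADU around 1024 (assumed)
a = rng.normal(1024, 3, size=100_000)
b = rng.normal(1024, 3, size=100_000)

# Naive subtraction with clipping at zero (what unsigned integer math does):
clipped = np.clip(a - b, 0, None)   # half the pixels pile up at 0

# Add an offset first: the mean shifts but the SD is preserved
diff = (a + 1024) - b

print(f"clipped SD: {clipped.std():.2f} (biased low)")
print(f"offset  SD: {diff.std():.2f} (about 3*sqrt(2) = 4.24 expected for a difference)")
```

The clipped version loses half its distribution and understates the noise; with the offset, the full Gaussian difference survives and its SD can be evaluated honestly.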

I am aware that when one subtracts two data sets, regardless of their respective means, there is an equal chance that any given pixel in image #1 will have a value greater than or less than the same pixel in image #2. Hence the possibility of negative values. However, I was under the impression that the IRIS software can deal with negative numbers when calculating a mean and SD from individual pixel differences. Furthermore, I believe that the offset is added after these statistics are calculated. I have tried this with two images, both with and without adding an offset, first subtracting image #2 from image #1 and then the reverse. The mean and median (after subtracting the offset) were identical in magnitude but opposite in sign, and the SD was unchanged.
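The experiment described above is easy to reproduce with synthetic frames (the mean levels and SD below are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
# Two synthetic frames with slightly different (assumed) mean levels
img1 = rng.normal(1030, 3, size=100_000)
img2 = rng.normal(1024, 3, size=100_000)

d12 = img1 - img2
d21 = img2 - img1

# Means are equal in magnitude, opposite in sign; SDs are identical
print(f"mean(1-2) = {d12.mean():+.2f}, mean(2-1) = {d21.mean():+.2f}")
print(f"SD(1-2) = {d12.std():.2f}, SD(2-1) = {d21.std():.2f}")
```

Reversing the subtraction order just negates every pixel difference, so the mean flips sign while the spread around it is untouched.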

OK. After rereading Dr. Martinec's paper I have managed to correct some of my misconceptions:

Black-frame (or read noise) voltages can (and probably do) exhibit both positive and negative values prior to A/D conversion. The negative values will be clipped to zero unless a voltage bias is applied prior to conversion. Further, it is these voltage fluctuations which constitute the read noise, estimated by the SD of these fluctuations (around a mean of zero plus whatever offset is added by the camera). Thus the SD is the pertinent statistic in estimating read noise, not the mean of the black-frame images, as I earlier presumed.
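Given that the SD is the pertinent statistic, one common way to estimate read noise (a standard trick, not necessarily what IRIS does internally) is to subtract two blackframes taken at the same ISO and divide the SD of the difference by sqrt(2); the subtraction also cancels any fixed-pattern component. A sketch with synthetic data (the 4 ADU "true" noise is an assumption):

```python
import numpy as np

rng = np.random.default_rng(4)
SIGMA = 4.0                                 # assumed true read noise (ADU)

# Two blackframes at the same ISO; subtraction cancels any fixed pattern
f1 = rng.normal(1024, SIGMA, size=(1000, 1000))
f2 = rng.normal(1024, SIGMA, size=(1000, 1000))

# The difference of two independent frames has SD = sigma * sqrt(2)
read_noise = (f1 - f2).std() / np.sqrt(2)
print(f"estimated read noise: {read_noise:.2f} ADU")
```

The estimate recovers the per-frame noise regardless of the offset level, which is why the method works even when the camera's bias value is unknown.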

With this newfound enlightenment, I still believe that (with Canon cameras) one should use an offset of negative 1024 with a black-frame image if one is looking for minimum values < 0 as a flag for clipped pixels. Alternatively one could use no offset and look for minimum values < 1024. What am I missing?

So next question: How does one estimate read noise based on black-frame images from a Nikon D700, which apparently does not apply a bias voltage?

Cheers/Mike

Another option: Nikon has an area of 'masked pixels' on the sensor that are shielded from light, and for which a bias voltage is applied; apparently these pixel values are written to the raw file. The package 'libraw' can output these pixel values:

Thanks emil. In my time I have written thousands of lines of C code, a skill now much diminished by many years of neglect. Even at my best I found it difficult to adapt others' code to my needs, so I think I will pass on this challenge.

I think I will give Facey's method a try. It sounds like a fun project.

The first part -- increasing the exposure results in a cleaner image -- is correct. Increasing the ISO for a fixed exposure will not add noise to the image, as may be seen in any of the examples I gave: at worst, increasing ISO does not change the noise, and in many examples it results in less noise.

Indeed! I did the test too, with a 5DMII. I used the in-camera meter to average an exposure for ISO 100, shot the image, then left everything the same and upped the ISO to 800. On the LCD and in Lightroom, as one would expect, this 2nd capture appears much brighter. Normalize using the Exposure setting to match the ISO 100 shot, examine both, and the ISO 800 shot is clearly less noisy. Fascinating!

So here is where I’m unclear about what’s going on under the hood. The exposure is identical in terms of aperture and shutter. ISO is higher. We expect doing so would make it appear brighter. What I would like explained further (for the non-scientist) is how and why? The same amount of light (photons) strikes the sensor. What is the ISO doing here to provide a better S/N ratio, reducing the noise?

Perhaps a significant source of noise is located after the analog amplifier, meaning that boosting the signal early on will not be affected by the additive noise introduced later, resulting in an improved SNR?

Andrew, look at this simple model of your camera's capture pipeline:

Think of 2 sources of noise added to the useful signal (S): Npre and Npost. Npre is added before ISO amplification, Npost after it. ISO amplification by itself doesn't alter the SNR, so it doesn't improve the output SNR with respect to Npre. With Npost, however, the story changes: the higher the ISO amplification, the higher the useful signal is relative to Npost, so we improve the final SNR with respect to Npost.

In the real world, Npre would be basically the photon noise (inherent to light capture) plus the read noise (electronic noise) produced in the early stages, prior to ISO amplification. Npost would be the read noise produced after the ISO amplification, i.e. in the AD converter.
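This two-stage model can be put into numbers. The signal and noise values below are toy assumptions in arbitrary units, not measurements from any camera:

```python
import math

# Toy assumptions: signal, pre-amp noise, post-amp noise (arbitrary units)
S, N_pre, N_post = 100.0, 12.0, 8.0

def snr(gain):
    # gain scales the signal and the pre-amp noise equally;
    # the post-amp noise stays fixed, so its relative weight shrinks
    return gain * S / math.sqrt((gain * N_pre) ** 2 + N_post ** 2)

for g in (1, 2, 4, 8, 16):
    print(f"gain x{g:2d}: SNR = {snr(g):.2f}")
# SNR rises with gain but can never exceed S / N_pre
```

The output SNR climbs with each doubling of gain, approaching but never exceeding the ceiling set by the pre-amplification noise alone.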

Hence, on those cameras where a big portion of the total noise is added after the ISO amplification (Canons), the higher the ISO, the higher the output SNR for a given amount of photons reaching the sensor. Cameras with a very low read noise (and consequently a very low Npost) hardly benefit from pushing ISO (Pentax K5, Nikon D7000). The rest of the Nikons tend to be in the middle, but it is definitely still worth pushing ISO on them, as with the Canons.

Canon 350D (both captures at the same shutter/aperture, final exposure matched in pp):

Quote

ISO amplification by itself doesn't alter the SNR, so it doesn't improve the output SNR with respect to Npre. With Npost, however, the story changes: the higher the ISO amplification, the higher the useful signal is relative to Npost, so we improve the final SNR with respect to Npost.

Excellent, thanks! It explains perfectly why it was mentioned that differing cameras may or may not show the results of the test images you provided (and I saw as well on my Canon).

Yes, see this Pentax K5 test where ISO 1600 barely improved noise vs ISO 100 at the same aperture/shutter, so it was best to stay at ISO 100, which can be useful to prevent highlight clipping:
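Using the same Npre/Npost toy model discussed earlier in the thread, one can see why such an "ISO-less" sensor barely benefits from gain. All numbers here are illustrative assumptions, not K5 measurements:

```python
import math

S, N_pre = 100.0, 12.0   # assumed signal and pre-amp noise (arbitrary units)

def snr(gain, n_post):
    # Post-amp noise n_post is the only term not scaled by the gain
    return gain * S / math.sqrt((gain * N_pre) ** 2 + n_post ** 2)

# "ISO-less" sensor (tiny post-amp noise): pushing gain barely helps
print(f"low-Npost : x1 = {snr(1, 1):.2f}, x16 = {snr(16, 1):.2f}")
# Large post-amp noise: pushing gain helps noticeably
print(f"high-Npost: x1 = {snr(1, 8):.2f}, x16 = {snr(16, 8):.2f}")
```

With a tiny Npost, a 16x gain changes the SNR by well under a percent, so one might as well shoot at base ISO and keep the highlight headroom; with a large Npost the same gain buys a clearly visible improvement.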

But do the AE modes handle this optimally? If you set such a camera to P/Tv/Av, fix the ISO at 100 and take an image, will it use e.g. 1/100 sec and underexpose badly (something that, according to your image, can easily be fixed in pp), or will it bump the exposure time to 1/10 sec, resulting in a blurred image that cannot be salvaged?

Further, if you set it to manual but keep auto ISO, will it embed a tag saying "please multiply this image by 10 in raw development", or will it multiply the image digitally in-camera, unnecessarily blowing out any highlights?

It seems to me that the new Sony sensor is capable of far more than Nikon/Pentax currently exploit?

Yes, you are essentially correct in what you said. Photon flux noise is independent of camera settings, but apparent noise depends on sensor type, filtration method, lens light loss, light scatter within the lens, and of course electronic noise from the amplifiers and thermal noise as the sensor heats up.

Some sensors, such as CMYG, collect twice the spectrum of RGB and so twice the amount of light; this reduces photon noise, of course, and it is observable in the images. Some lenses are much more efficient, so more light reaches the sensor, and a larger-aperture lens naturally transfers more light.

Small cameras heat up the sensor very quickly due to battery heat, ambient temperature and hand heat. When the camera internally reaches 104 degrees F, it no longer meets specs. I carry small cameras in an ice-cooled lunch box if I am forced to shoot in low light; this makes a huge difference in mid-tone noise levels for some cameras.

Considering basic camera design: for example, I have two cameras (I have many) that produce the same mid-tone image brightness at the same exposure time when one is set to ISO 100 and the other to ISO 800, because of differences in sensor type and in lens aperture and efficiency.

Another factor is the way the photodiodes are coupled together at lower "resolution" to increase S/N; this is proprietary information and a significant factor in mid-tone image noise. In the deep shadows (Zone 1), twice nothing is still nothing.

Increasing ISO doesn't necessarily mean a higher noise level in the image, as you said. The lower dynamic range as ISO goes up pulls up the shadow areas; that is, what was the shadow noise at Zone 2 of the scene at ISO 100 becomes the shadow noise at Zone 5 in the high-ISO image. The Zone 2 detail is lost at high ISO, and the compressed dynamic range masks what is happening.

I have noticed that the older Tessar lens (Leitz) produces images with a small dynamic range, i.e. the shadows always have detail in them even at ISO 3200 on the NEX-5 APS-C sensor, so the highlights are kept and the shadow detail is kept; this is a function of the lens design. Even the non-aspheric 35mm Summicron has this quality, but the more modern Elmarit 28mm design does not.

Quote

But do the AE modes handle this optimally? If you set such a camera to P/Tv/Av, fix the ISO at 100 and take an image, will it use e.g. 1/100 sec and underexpose badly (something that, according to your image, can easily be fixed in pp), or will it bump the exposure time to 1/10 sec, resulting in a blurred image that cannot be salvaged?

Further, if you set it to manual but keep auto ISO, will it embed a tag saying "please multiply this image by 10 in raw development", or will it multiply the image digitally in-camera, unnecessarily blowing out any highlights?

Are you really asking a camera manufacturer to think of advanced RAW shooters who want to fully exploit their cameras' potential? You must be joking; we don't even have a f****** RAW histogram.

Quote

In the real world, Npre would be basically the photon noise (inherent to light capture) plus the read noise (electronic noise) produced in the early stages, prior to ISO amplification. Npost would be the read noise produced after the ISO amplification, i.e. in the AD converter.

Quote

Cameras with a very low read noise (and this consequently means very low Npost) hardly benefit from pushing ISO (this image must have been linked in LL more than 10 times).

Regards

Guillermo, these two statements seem somewhat contradictory, unless the second sentence refers only to the read noise produced post-amplification. What am I missing?

Npost is never photon noise, just read noise. If total read noise is low (and it is in the Pentax K5 and D7000), any individual read-noise contributor must be low. In any case, this is just a simple model that tries to explain the behaviour of a real sensor; I'm pretty sure a real sensor is something more complex.