That is true, but a read noise of 4 electrons for a small pixel has a quite different effect on measured noise than for a large pixel because of camera gain, which is the number of electrons collected per data number (raw value). Since the P&S sensors use 12 bits per pixel for their output, I will use this bit depth for comparison.

A small pixel camera such as the Canon S70 may collect 8200 electrons and has a read noise of 3.2 electrons. Assuming that the raw pixel value at full well is near the maximum 12 bit pixel value of 4095, the gain is 2.0 electrons / data number. A larger pixel, such as with the Nikon D3, can collect about 66,500 electrons with a read noise of 4.9 electrons and a gain of 16 electrons /data number. The data are from Roger Clark.

When we look at measured noise in the image, we are concerned with data numbers, not electrons. A read noise of 3.2 electrons for the S70 would translate to 3.2/2.0, or 1.6 data numbers. The read noise in terms of data numbers for the D3 would be 4.9/16, or about 0.31 data numbers.
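As a sketch, the conversion above can be written out in code; the full-well and read-noise figures are the ones quoted in the text, not authoritative sensor specifications:

```python
# Gain and read-noise arithmetic from the text. Figures quoted above
# (full well, read noise, 12-bit output) are illustrative, not specs.

def gain_e_per_dn(full_well_e, max_dn=4095):
    """Camera gain: electrons collected per data number at full well."""
    return full_well_e / max_dn

def read_noise_dn(read_noise_e, gain):
    """Convert read noise from electrons to data numbers."""
    return read_noise_e / gain

# Canon S70 (small pixel), 12-bit output
s70_gain = gain_e_per_dn(8200)              # ~2.0 e-/DN
s70_noise = read_noise_dn(3.2, s70_gain)    # ~1.6 DN

# Nikon D3 (large pixel), 12-bit output
d3_gain = gain_e_per_dn(66500)              # ~16 e-/DN
d3_noise = read_noise_dn(4.9, d3_gain)      # ~0.3 DN

print(f"S70: gain {s70_gain:.1f} e-/DN, read noise {s70_noise:.2f} DN")
print(f"D3:  gain {d3_gain:.1f} e-/DN, read noise {d3_noise:.2f} DN")
```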

On rechecking my figures, I see that the read noise given by Roger Clark applies only to higher ISOs and cannot be achieved by the D3 at base ISO, because of noise in the electronics downstream of the sensor. Peter Facey has performed a detailed analysis of the D3: the read noise at ISO 200 (the base ISO) is 4.5 14-bit DN and the gain is 4.13 electrons/DN, so the read noise at ISO 200 is about 18.6 electrons (4.5 × 4.13). The 12-bit gain would be 16.5 e-/DN, and the noise in 12-bit DN would be 1.1 DN. At base ISO the D3 is not much better than a P&S in terms of read noise, but the S/N would be much higher since more signal was collected. Sorry for the confusion.

On newer CMOS sensors, such as the Sony sensor used in the D7000, read noise does not vary with ISO. For more details on the topic of sensor vs. camera performance, see Emil Martinec.

Where is the "cutting point" in exposure time at which film generally gives better results than digital?

That is a lot more complicated, as film suffers from reciprocity law failure. With a long exposure you will not get noise with film, but you will not be capturing as many photons (or rather, the photosites on the grain lose energy and un-expose, so to speak). If you have one stop of compensation on a calculated 15 minute exposure, then a 15 minute digital exposure would need to be compared to a 30 minute film exposure. The digital camera will not show a change in contrast either. At some point film will give up completely and not be able to record information, whereas a sensor can still be counting photons. I guess a film exposure can be much better in very hot environments: a digital back would need active cooling, and as far as I know only scientific cameras do that.
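A minimal sketch of that exposure comparison, assuming reciprocity compensation is expressed in stops (each stop doubles the film exposure time relative to the metered time):

```python
# Reciprocity compensation sketch: each stop of compensation doubles
# the film exposure time relative to the metered (digital) exposure.
def film_exposure_min(metered_min, compensation_stops):
    """Film exposure time after reciprocity compensation, in minutes."""
    return metered_min * 2 ** compensation_stops

# A 15 minute metered exposure with one stop of compensation
# corresponds to a 30 minute exposure on film.
print(film_exposure_min(15, 1))
```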

Since it seems that you have done some practical tests, and real world comparisons between MF and DSLRs are rare, it would be nice if you could share your findings.

- Do your findings match DxO-Mark data?
- In what areas does the Canon "trounce" the MFDB?
- Obviously you prefer the MFDB to the Canon at low ISO. What benefits do you see?
- The benefits that you see, can they be explained by sensor size (collecting more photons, higher MTF for a given feature size, better edge contrast due to the lack of OLP filtering, etc.), or do they involve "magic qualities"?
- If you see "magic qualities", can they be explained?

Best regards, Erik

Quote

And I've done my own little version, comparing my MFDB to my Canon 5DII. Surprise, surprise, the Canon trounced the MFDB. I still prefer the MFDB at low ISO and short exposures, and the handling of a MF camera.

It's true that temperature can rise if people completely distort what you state and call you "confused". As for references, if my British MechEng BEng degree doesn't suffice, have a look at DPReview on any modern DSLR; they state the same. Regards, Theodoros. www.fotometria.gr

I feel that answer is a bit misleading. Aren't the keywords "high speed" here? You can find all kinds of instruments in professional telescopes, including some based on relatively exotic designs (we aren't getting Aladdin III InSb sensors in our cameras any time soon, I think). CMOS-based architectures are of course widely used in fields where high speed is possible or desirable (photometry of occultations, solar observation, etc.). But CCD still reigns in imaging applications. I was so surprised by the above statement that I double-checked what current major observatories use as imagers.

If the purpose is going deep with long exposures, everyone seems to still be using CCDs.

Not that I disagree about the increased usefulness of CMOS-based sensors in many fields in general, but do you have examples of CMOS sensors used for image acquisition in fairly long exposures?

Pierre,

What I said was "CMOS...is now beginning to displace CCDs in research instrumentation". Not "CMOS has displaced". The instruments you linked to are all relatively old; some were completed in the 1990s, the rest commenced development before the recent surge in CMOS performance. They generate science, not income, so they won't be replaced unless absolutely necessary. It's going to take time.

Also, there's a certain amount of inertia imposed by the different readout electronics involved with CMOS. Observatories like to standardize on and re-use existing CCD controller systems as far as possible, for multiple instruments or when upgrading a sensor in an instrument (google ULTRADAS and SDSU-II for example). Switching to CMOS will require a new system, more development time, new documentation, new training, and above all more funding, which is really hard to get.

Another reason is that we still await large low-noise CMOS sensors (the same reason why there are still no CMOS MF digital backs). Large research telescopes have giant focal planes to populate with mosaics of imaging sensors, so the availability of large CCDs keeps them at the forefront.

You are right insofar as CMOS usage in research at present is mainly in the high-speed or time-series niche. Any scenario where you have to take many frames is where their low readout noise per frame makes the most difference. Their other great advantage, low dark current, is a bit moot in research, where liquid-nitrogen cryostats are normally used to take CCD dark current down to an acceptably low level. But modest (Peltier-based) cooling on CMOS reaches around the same level (google CentralDS for example), and I can imagine that moving away from the hassle and expense of cryogenics (not just in plant and materials, but also because technicians must be employed to refill dewars two or three times in every 24-hour period; the sensor must never be allowed to warm up) will be very attractive to observatories.

You ask if I "have examples of CMOS sensors used for image acquisition in fairly long exposures"? I presume you mean discounting amateur astro-imagers, who use both off-the-shelf and modified CMOS DSLRs for exposures running to hours net? Well in research there are areas like wide-field auroral monitoring which use similar setups.

Neither yours nor any other Canon turns its NR off when you instruct it to do so from the menu; it stays on to some extent by default!

I've never seen any proof of that assumption; I did see proof of the opposite. A simple method to detect noise reduction is to shoot a well-focused image of a "white noise" patch, and then display the Fourier transform. The signal decline towards higher spatial frequencies (caused by MTF, finite sensel size, and the AA filter for suppressing Bayer CFA related aliasing) is quite gradual, unlike the Fourier transform of a noise-reduced image. The Fourier transform of the read noise alone also doesn't show signs of noise reduction.

If you have any trustworthy proof for your assertion, a lot of people would be interested, I'm sure.
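A minimal sketch of that Fourier test, using synthetic numpy data rather than a real raw file; the 3×3 box blur is a stand-in for hypothetical in-camera noise reduction, not an actual camera algorithm:

```python
# Fourier test sketch: filtered white noise loses power at high
# spatial frequencies; unfiltered noise has a flat power spectrum.
import numpy as np

rng = np.random.default_rng(0)
raw = rng.normal(0.0, 1.0, (256, 256))   # stand-in for pure read noise

# Crude stand-in for in-camera NR: a 3x3 box blur built from shifts
blurred = np.zeros_like(raw)
for dy in (-1, 0, 1):
    for dx in (-1, 0, 1):
        blurred += np.roll(np.roll(raw, dy, axis=0), dx, axis=1) / 9.0

def hi_lo_power_ratio(img):
    """Mean power near the Nyquist edge divided by mean power near DC."""
    p = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    c = img.shape[0] // 2
    lo = p[c - 8:c + 8, c - 8:c + 8].mean()          # lowest frequencies
    hi = np.concatenate([p[:8, :], p[-8:, :]]).mean()  # highest frequencies
    return hi / lo

print(hi_lo_power_ratio(raw))      # near 1: flat spectrum, no filtering
print(hi_lo_power_ratio(blurred))  # well below 1: high frequencies suppressed
```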

On the age-of-instrumentation point, that's the reason I included the Gran Canaria Telescope (2009). I could have included SALT as well (http://www.salt.ac.za/telescope/instrumentation/salticam/), which is being finished right now. And yes, I was discounting amateur DSLR exposures (although I have done quite a few of them), as they aren't really indicative of what the cutting edge in real scientific instruments is. Anyway, I don't want to turn this into a long argument, because I am sure we agree on the fundamentals anyway.

Ray,
1. On your first quote of me, I suggest that perhaps there was low oxygen in the place where you read my post! What are you talking about? It clearly has nothing to do with my statement!
2. On your second quote of me there was certainly much CO2 present in your room! It's just an answer to the quoter, not a post on the OP!
3. Neither yours nor any other Canon turns its NR off when you instruct it to do so from the menu; it stays on to some extent by default!
Cheers, Theodoros www.fotometria.gr

Theodoros, clearly you are a man of many, and very strong, emotions. 1 short message, 9 emoticons!

Could you please identify the offending parts of my "first" and "second" quotes to you? I am having trouble matching up your objections with what I had said.

On your 3rd point: if what you say were true, no-one would be able to produce a photon-transfer curve obeying Poisson statistics from Canon RAW files. So how do you explain the fact that several Canon users have successfully done so?

When you say that Canon's NR cannot be turned off, are you perhaps thinking of Canon's on-chip correlated double sampling? That is not noise reduction in the sense that everyone assumes; it is better described as noise prevention. In no way does it corrupt the integrity, statistics, or spatial correlation of the signal, unlike true "NR". It's kosher. This is a good LL thread on that topic (page 2 is the important part).
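The photon-transfer argument can be sketched with simulated data: for pure Poisson shot noise, variance divided by mean (in DN) equals 1/gain at every signal level, and any spatial noise reduction would depress that ratio. The gain value here is an assumed illustration, not a measured Canon figure:

```python
# Photon-transfer sketch: Poisson shot noise gives var/mean = 1/gain
# in DN units, independent of signal level. Gain is an assumed value.
import numpy as np

rng = np.random.default_rng(1)
GAIN = 2.0                       # e-/DN, assumed for illustration

def var_over_mean_dn(mean_electrons, n=100_000):
    """Variance/mean of a simulated Poisson signal converted to DN."""
    dn = rng.poisson(mean_electrons, n) / GAIN
    return dn.var() / dn.mean()

# Expect ~1/GAIN = 0.5 at every signal level if the data are unfiltered
for mu in (1000, 4000, 16000):
    print(f"{mu:6d} e-: var/mean = {var_over_mean_dn(mu):.3f}")
```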

Ray

1. You clearly claimed that I was confusing LONG EXPOSURE with HIGH ISO performance, while you are talking to a photographer of 30 years who is quite well known and respected for his work in more than one country, as if his knowledge were the knowledge of an ignorant kid! To take a part of the whole statement to "analyze", as if the answer wasn't in front of you in the rest of the statement, is because: a) you intended to do so, or b) there was not much oxygen in the room and you missed it from "eye blaring". 2. Your second INSULT has been answered already. 3. It has been answered already to another quoter, and you PARTIALLY accepted it up there. I've no intention (although I could) to be dragged into such a conversation. There is NO DSLR that turns its NR OFF when it is instructed by the user to do so, and that's it!! If you have proof of the opposite (PROOF, NOT THEORY like "if it was so, then so it should, that article says..."), then I can intervene for you to get a really good job in the industry; I will even help you with your photography if you want! Regards, Theodoros. www.fotometria.gr

Totally random noise will do; even Photoshop could produce something usable. A program like ImageJ has plugins to create various types of noise.

Shot noise is more or less random, and one can test for filtering of the image from a raw file. Shown below are the green 1 channel of a raw file from the Nikon D3 and the same raw file rendered in ACR, which involves noise reduction. Fourier analysis was done with ImageJ. The D3 does not filter the raw image. Readers can repeat the test with their own cameras; one can use Iris to separate the raw channels.

It has been answered to a previous quoter, Bart. I'm sorry, I don't intend, want, or am in the mood to spend pages (that is what is needed) to prove it. Although I have a British BEng in MechEng, I am a photographer, and thus I find such conversations time-consuming and useless (it's also that I haven't really practiced my English for more than 20 years, and thus it's even more painful and time-consuming for me). It is so, though... all DSLRs have NR by default, and if you happen to have some friends who work in the manufacturing of MFDBs or in sensor construction (like I do), they would verify this to you! Please look into it; you'll find that I'm right! DSLRs are constructed for the "enthusiast" or "advanced" user (industry terms), who is considered by manufacturers to be a dragged-by-the-nose junkie who will never notice the difference! This is exactly the reason why photo equipment "tests" are done by "experts" who don't have even one photograph or work published, and why the "sample photos" of these tests are bridges, or castles, or beaches, or flowers... that's what the industry thinks of its "target group"! Regards, Theodoros. www.fotometria.gr

Sorry, does not impress me. If that education was worth anything, surely they must have taught you what people mean when they ask for references?

-h

Who gives a... whatever if you were impressed or not? DSLRs still don't turn their NR off! And what you are doing by erasing the rest of the sentence is at least immoral! You obviously did so to prevent people from reading it, although it was only a few more words... Cheers (well... not really), Theodoros. www.fotometria.gr

According to this study by Emil Martinec, the D3 and D300 apply noise reduction at exposures of 1/4 sec and longer, which can't be turned off by the user.

Yup, Nikon are bad boys, but only for long exposures. If you search my posting history, I've often chastised Nikon for this. And also for the way they subtract a bias level from the RAW, which makes it harder (but not impossible) to measure the read noise, and impossible to fully subtract bias pattern noise from images like flatfields, which is of course nasty when one then comes to divide by a flatfield. As someone has noted here before, it's mostly the astrophotographers who moan about that!

It's nicely explained by Jerry Lodrigus here; scroll down to points 5, 6 and 7 under the "Nikon" heading.

Canon, emphatically, do not alter their RAW files in long exposures - or short ones, for that matter. Nor do they subtract the bias level. RAW means RAW with them.

Very interesting. So, according to another link referenced on the Jerry Lodrigus page, the way to avoid the noise reduction for long exposures on Nikon cameras is to turn on Long Exposure Noise Reduction and then turn the camera off during the dark-frame acquisition. I wouldn't have attempted to do that.