Your 12-inch mirror has a light-collecting area of about 72,966 square millimeters.
My 71-mm lens has a light-collecting area of about 3,959 square millimeters.

Divide one by the other and the 12-inch mirror has 18.43 times the light-gathering power of the 71-mm lens. (I purposely omitted the area obscured by the secondary mirror.)

That means that your 8s exposure is equivalent to my 147s exposure.
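The arithmetic above can be sketched in a few lines of Python. (Area of a circular aperture is pi*(d/2)^2; the 18.43x ratio comes out the same under any constant factor, since it cancels in the division.)

```python
import math

# Aperture diameters in millimeters (12 inches = 304.8 mm)
d_mirror = 12 * 25.4   # 304.8 mm
d_lens = 71.0

# Light-collecting area of a circular aperture: pi * (d/2)^2
area_mirror = math.pi * (d_mirror / 2) ** 2   # ~72,966 mm^2
area_lens = math.pi * (d_lens / 2) ** 2       # ~3,959 mm^2

ratio = area_mirror / area_lens               # ~18.43
equiv_exposure = 8 * ratio                    # ~147 seconds

print(f"area ratio: {ratio:.2f}x, equivalent exposure: {equiv_exposure:.0f} s")
```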

What I am getting at is this: for the faintest parts of your nebula, an 8-second exposure is not enough time for a 71-mm lens to collect enough photons to overcome read noise. A 147-second exposure would be ideal, but then I would saturate the bright stars, so I would need to lower the gain to increase my well depth. Ultimately I lose either way, because my full well depth is 15 ke- and yours is 63 ke-. Such is life!

For the benefit of others, I want everyone to realize that when you see "8 seconds," the right exposure depends on a LOT of factors.

Brian - you are working from an incorrect assumption in your calculation. It's not the aperture area that you should be comparing but the f-ratio of the two scopes. If your 71-mm scope and Robrj's 12" Dob are both f/5, then the photon rate at the sensor and the exposure needed to overcome read noise will be *identical* for the two scopes. Of course, in the 71-mm scope the image will be a lot smaller - it will collect fewer photons from the target, but they will be shared between fewer pixels, and the two effects cancel out. If you account for the central obstruction in the reflector, then you'll need *longer* exposures...
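A quick sketch of that argument, using two hypothetical scopes that are both f/5: for an extended target, photon flux per unit of sensor area scales as aperture area divided by focal length squared, which reduces to a function of f-ratio alone, so the aperture drops out.

```python
import math

def flux_per_unit_area(aperture_mm, focal_mm):
    """Relative photon flux per unit sensor area for an extended target.

    Proportional to aperture_area / focal_length^2, which simplifies to
    (pi/4) / f_ratio^2 -- the aperture diameter cancels out.
    """
    area = math.pi * (aperture_mm / 2) ** 2
    return area / focal_mm ** 2

# Two hypothetical scopes at the same f-ratio
small = flux_per_unit_area(71.0, 71.0 * 5)     # 71 mm at f/5
big = flux_per_unit_area(304.8, 304.8 * 5)     # 12-inch at f/5

print(small / big)  # ratio is 1.0: identical per-pixel photon rates
```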

His scope is f/4.9 and mine is f/5.9. His pixel scale is 0.64 arcsec/pixel and mine is 1.43 arcsec/pixel. His total integration time is 5.3 minutes; mine is an estimated 30 minutes or more, based on experience. With normal seeing of 3 arcsec he is oversampling, which means that starlight is spread over more pixels than necessary, so each pixel collects less light than it could. So I think the claim that his scope is faster than mine is roughly a wash. Still, he is able to capture the DSO in 5 minutes to my 30. It seems like aperture is playing a huge role.
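The sampling comparison can be sketched like this, using the 3 arcsec seeing figure from above and the common rule of thumb that roughly 2-3 pixels across the seeing disk is critically sampled:

```python
seeing = 3.0  # arcsec FWHM, typical seeing assumed in the post

sampling = {}
for label, scale in [("12-inch Dob", 0.64), ("71 mm refractor", 1.43)]:
    sampling[label] = seeing / scale  # pixels across the seeing disk
    print(f"{label}: {sampling[label]:.1f} pixels across the seeing disk")
```

The Dob puts nearly 5 pixels across the seeing disk (oversampled), while the 71 mm refractor sits near the critically sampled range.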

The point that I was trying to make is that every scope and camera combination has dramatically different capabilities. You must look at all of these factors when trying to figure out how much total integration time you may need to obtain similar results. I see my mistake in using 'exposure' as a replacement for total integration time. If I can use the same 8-second exposure as his to overcome read noise, then I don't need to adjust my well depth. I just need to collect a LOT more sub-frames to make up for the fact that he has a much larger aperture.
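As a rough sketch of the "a LOT more sub-frames" point, scaling by aperture area alone (ignoring the f-ratio and sampling effects discussed above): the 40-sub count here is a hypothetical, chosen only to match the ~5.3-minute total from earlier.

```python
import math

ratio = 18.43          # aperture-area ratio from earlier in the thread
sub_exposure = 8       # seconds; same sub length for both scopes
his_subs = 40          # hypothetical sub count (~5.3 minutes total)

# Matching total collected photons by area scaling alone
my_subs = math.ceil(his_subs * ratio)
my_total_min = my_subs * sub_exposure / 60

print(f"{my_subs} subs, about {my_total_min:.0f} minutes total")
```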

And if I could add one more point. If you've ever read my musings here on the forum you know that I am inclined to use much longer exposures than most people. There are two reasons for that:

The first is my battle with read noise. Not only do I need to *overcome* read noise, I need to *slay* it.

The second reason is a matter of taste. I do not like fat, saturated stars. Bright stars in a Newt have those attractive diffraction spikes; bright stars in a refractor are just big blobs. Also, with a small-format camera like mine (1920x1080), fat stars are really unattractive. I might feel differently if I had a large-format camera like robrj's. To mitigate the problem of saturated stars, I lower my gain so as to increase my full well depth.

In the CCD Photometry course that I completed, one of the exercises was to determine the gain setting that produced the deepest well and the greatest linearity. For my camera that is a full well of around 12,000 electrons, which puts the sensor in LCG mode. For some reason The Brain doesn't allow me to select a gain that low, so I've learned to do things manually.

By default I approach astrophotography by setting a low gain and then adjusting the exposure so that bright stars come in just under the saturation point. I am very happy with this method. I think it has produced some stunning images, and it has opened up many more DSOs that I can successfully image.
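That workflow amounts to a simple saturation budget. A minimal sketch, using the ~12,000 e- full well mentioned above; the peak-pixel photon rate is a made-up number, since in practice you would measure it from a short test frame of the brightest star in the field:

```python
full_well = 12000      # electrons, deepest-well gain setting from the course
peak_rate = 150.0      # e-/s in the brightest star's peak pixel (hypothetical)

# Longest exposure before the brightest star saturates
max_exposure = full_well / peak_rate

print(f"saturation at about {max_exposure:.0f} s")  # 80 s
```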