How do we quantitatively measure limits for our equipment and our skies? Can I take a set of exposures from a recent run and somehow measure the average FWHM?

For lack of a better method, I had Nebulosity grade a set of images. The only way I could get the grading data was to examine the file names, since Neb. prepends the HFR of the image to the file name. Here's what I ended up with. Is this an accurate measurement of my skies and tracking?

I really hope this thread isn't dead because here's what would be extremely helpful to me:

Should I purchase the mono or color version of a given CCD camera, taking into consideration my sky and mount limitations?

Mono is supposed to give improved resolution, since each pixel contributes to the detail instead of sitting behind a Bayer color filter, but if my skies/equipment won't support that high a resolution, it would be much less work to use the OSC (narrowband imaging aside).

This is a very practical question that could be answered quantitatively based on an individual's skies and equipment, but how?

That's a fine way to get an estimate of your seeing + guiding error. If you like, you can try it on some shorter exposures (say, 10s) to take guiding out of the equation.

Also, keep in mind, this is HFR. HFD (half flux diameter) is 2x the HFR (half flux radius). The FWHM is typically going to fall between the two. So, if your HFR is 2.6", your HFD is 5.2", and your FWHM will be somewhere in between.

As for OSC vs. mono -- there's a lot more to consider than the resolution. Yes, it's better on mono, but it's not the 2:1 some people quote.

BTW, the exact translation between HFR and FWHM will depend on the star shape. A rough estimate will be FWHM = HFR / 0.6 or so. That would put you just a bit above 4" in those shots.
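For anyone who'd rather script the arithmetic, the rough rule above is trivial to encode (a sketch in Python; the 0.6 factor is the estimate quoted, not an exact constant):

```python
# Quick helper for the rough rule above (FWHM ~ HFR / 0.6). The exact
# factor depends on the star profile, so treat this as an estimate only.

def fwhm_from_hfr(hfr_arcsec):
    """Estimate FWHM (arcsec) from a measured HFR (arcsec)."""
    return hfr_arcsec / 0.6

print(round(fwhm_from_hfr(2.6), 2))  # the 2.6" HFR case above -> ~4.33"
```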

Try some shorter shots and see how things look. BTW, you can use ImageJ to get a nice read on the FWHM for any star (Neb can give you the HFR for any star in the Pixel Stats). Load your image (it can load FITS), select a box around a star, then make a copy of just that area (Image, Duplicate). Then, Plugins, FWHM, Plot FWHM. The plugin is from here: http://www.umanitoba...st/plugins.html
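If you'd rather stay in a script than ImageJ, a crude FWHM estimate on a star crop can be had from flux-weighted moments, assuming a roughly round, Gaussian-ish star. This is a sketch on synthetic data, not a substitute for a proper profile fit:

```python
import numpy as np

# Crude FWHM estimate for a star crop: compute the flux-weighted second
# moment (sigma) and convert via FWHM = 2.355 * sigma (exact only for a
# Gaussian profile). The ImageJ plugin fits a profile; same idea.

def fwhm_pixels(crop):
    crop = crop - np.median(crop)          # crude background subtraction
    crop[crop < 0] = 0
    y, x = np.indices(crop.shape)
    total = crop.sum()
    cx = (x * crop).sum() / total
    cy = (y * crop).sum() / total
    var = (((x - cx) ** 2 + (y - cy) ** 2) * crop).sum() / total
    sigma = np.sqrt(var / 2.0)             # per-axis sigma for a round star
    return 2.355 * sigma

# Synthetic star: sigma = 2 px, so the expected FWHM is ~4.7 px
yy, xx = np.indices((31, 31))
star = np.exp(-((xx - 15) ** 2 + (yy - 15) ** 2) / (2 * 2.0 ** 2))
print(round(fwhm_pixels(star), 1))
```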

Well, for it to be futile, we'd just be saying that you're not going to get any more resolution than the seeing provides. This is going beyond that to say that this oversampling is coming at a cost of the SNR. You will need to make up for the loss of photons and the ensuing loss of SNR with more data when you sample at higher rates.

I think you may be mixing adages here. The adage I know is to have your sampling at about half the seeing, not half the limiting magnitude. The limiting magnitude will be tied to your light pollution and not tied to your turbulence or blur level. Bright steady skies can support higher magnifications (higher sampling rates) than dark turbulent skies.
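As a worked example of the half-the-seeing adage (the pixel size, focal length, and seeing values here are all hypothetical):

```python
# Worked example of the "sample at about half the seeing" rule; the
# numbers are made up. 206.265 converts microns/mm to arcsec/pixel.

def image_scale(pixel_um, focal_mm):
    # arcsec per pixel
    return 206.265 * pixel_um / focal_mm

seeing_fwhm = 4.0                      # arcsec, assumed
target_scale = seeing_fwhm / 2         # adage: sample at ~2 "/pixel
actual_scale = image_scale(5.4, 600)   # e.g. 5.4 um pixels at 600 mm
print(round(actual_scale, 2), round(target_scale, 2))
```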

I want to compare two setups resulting in the same number of photons delivered to each pixel on average. I am limited by the precision of my mount to a certain length subexposure for a given focal length. I can either buy a lens 1 stop faster or I can take twice as many subexposures. The resolution should be identical. Is the SNR lower for the slower lens? If so, is it because the subexposure is twice as long or because there are twice as many subexposures?

I think you may be mixing adages here. The adage I know is to have your sampling at about half the seeing, not half the limiting magnitude. The limiting magnitude will be tied to your light pollution and not tied to your turbulence or blur level. Bright steady skies can support higher magnifications (higher sampling rates) than dark turbulent skies.

Craig

I hate when I get confused this easily.
So how does one go about determining the seeing conditions? I always thought you based it on the faintest star you could see visually. For me, that's the faintest star in the Little Dipper, which is just shy of 5th magnitude.

You can do it by measuring things like the FWHM in your images (grab a 10s image, for example). Since I know you're a Neb2 user, you can get FWHM by taking the HFR reported, multiplying by 2 to get HFD (R = radius, D = diameter), and then by about 1.15 to get to FWHM (close enough for government work). Or you can use any of the programs that actually calculate FWHM (I use an ImageJ plug-in). Metaguide will read it for you (make sure your exposure is good, so you don't saturate and don't have a black background either). This page has a pretty nice set of comparison shots at measured FWHM and rated Pickering values (there may be more sites with this comparison): http://revans_01420....eingimaging.htm
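The conversion chain above is simple enough to write down directly (taking the post's factors as stated, not as exact constants):

```python
# The HFR -> HFD -> FWHM chain described above, exactly as stated:
# HFD = 2 * HFR, then FWHM ~ 1.15 * HFD (a rough approximation).

def fwhm_from_hfr(hfr):
    hfd = 2.0 * hfr          # radius -> diameter
    return 1.15 * hfd        # rough HFD -> FWHM factor from the post

print(round(fwhm_from_hfr(2.0), 2))  # an HFR of 2" -> ~4.6" FWHM
```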

Craig, does this method of measuring the seeing work accurately with a DSLR, since it has a Bayer matrix and an anti-aliasing filter? For example, the seeing is 2", but you are imaging at 2.5"/pixel. You won't be able to measure seeing below the image resolution, will you?

Thanks Craig. I did as you suggested and came up with a seeing value of roughly 4.5, pretty close to my naked eye guesstimate. I guess I just got lucky. I prefer your method, except that it assumes one already has a camera.

What do you suggest for someone that is trying to figure out the right combination for their environment before they purchase?

I want to compare two setups resulting in the same number of photons delivered to each pixel on average. I am limited by the precision of my mount to a certain length subexposure for a given focal length. I can either buy a lens 1 stop faster or I can take twice as many subexposures. The resolution should be identical. Is the SNR lower for the slower lens? If so, is it because the subexposure is twice as long or because there are twice as many subexposures?

Gale,

Welcome to CN!

The spreadsheet posted with part 2 should actually let you run this kind of simulation. Exactly how the trade-off works will depend a bit on your particulars, but overall, you'll be better off with the faster lens. You've got less dark current noise going into your stack and less read noise (another way to think of this: for a given sub-exposure length, your faster f-ratio has higher SNR in each sub).

With 2x the photons going into a single sub the signal has gone up 2x. The SNR isn't up 2x as the noise part has gone up as well. Instead of sqrt(Target+Sky+Dark+read^2) we've got sqrt(2*Target+2*Sky+Dark+read^2). At worst, this will be 2/sqrt(2), aka sqrt(2), aka 1.414x better in terms of SNR. So, with no dark and read noise, we're at 1.414x the SNR. To make up for this with the slower lens, we need to take twice as many exposures.

Now, insert some dark current and some read noise and see what happens here... Try it! (Just fake some numbers for these and see what happens)
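A minimal stand-in for that spreadsheet experiment, with fake numbers as suggested (all values assumed, in electrons per sub):

```python
import math

# Toy version of the spreadsheet experiment suggested above. One sub
# through a 1-stop-faster lens collects 2x the target and sky photons
# in the same exposure time; dark current and read noise are unchanged.

def sub_snr(target, sky, dark, read):
    # shot noise on target + sky + dark current, plus read noise
    return target / math.sqrt(target + sky + dark + read ** 2)

target, sky, dark, read = 100.0, 50.0, 10.0, 5.0   # fake numbers

slow = sub_snr(target, sky, dark, read)
fast = sub_snr(2 * target, 2 * sky, dark, read)
print(round(fast / slow, 3))   # above sqrt(2) ~ 1.414 once dark/read > 0
```

With dark current and read noise set to zero, the ratio drops back to exactly sqrt(2), matching the worst-case figure in the post.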

Craig,
Does this method of measuring the seeing work accurately with a DSLR, since it has a Bayer matrix and an anti-aliasing filter? For example, the seeing is 2", but you are imaging at 2.5"/pixel. You won't be able to measure seeing below the image resolution, will you?

The Bayer matrix will blur things a touch with a decent debayer but only a touch. The resolution you're imaging at is dictated by your pixel size and by the focal length, not by the fact that it's one-shot color or not. Of course, if you're trying to estimate a 3" FWHM and you're sampling at 30"/pixel, you'll never do it. Increase the focal length and you will.

You can also use a few tricks to get a mono image at full-res on an OSC camera. If you've got the raw data, you can use something like Nebulosity's OSC Generic nebula filter reconstruction to balance the R, G, and B channels into a mono image without interpolation.
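To put numbers on the sampling point above (the pixel size and focal lengths here are assumptions for illustration):

```python
# Pixel scale for a given pixel size and focal length; values here are
# hypothetical. 206.265 converts microns/mm to arcsec/pixel.

def scale(pixel_um, focal_mm):
    return 206.265 * pixel_um / focal_mm

print(round(scale(5.0, 35), 1))    # 5 um pixels on a 35 mm lens: far too coarse
print(round(scale(5.0, 1000), 2))  # same pixels at 1000 mm: fine for a 3" FWHM
```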

Craig, one thing you didn't cover in your essays (unless I missed it somewhere) is that it is possible to achieve a given SNR with a fixed aperture and exposure time by playing with one more variable--pixel size. If you are shooting with a telescope at f/9 with 9 micron pixels, that should be functionally equivalent to shooting with an f/5.4 scope of the same aperture using 5.4 micron pixels. Same spatial resolution, same number of photons per pixel per unit time. If the cameras have the same megapixel count you would even get the same field of view and therefore the same "information" content. Would you agree with this?

I guess the key is to balance spatial resolution against SNR keeping in mind the field of view desired. Obviously, you can't change pixel size at will (aside from binning, of course), but you can choose a camera that has a pixel size that is appropriate for your focal ratio, even if it is a fairly slow focal ratio.
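The f/9 + 9 micron vs f/5.4 + 5.4 micron equivalence above is easy to check numerically (the 200 mm aperture is an assumed value; only the ratios matter):

```python
# Numerical check of the equivalence above: f/9 with 9 um pixels vs
# f/5.4 with 5.4 um pixels, same (assumed 200 mm) aperture. Pixel scale
# goes as pixel / (f_ratio * aperture); photons per pixel per unit
# time, for fixed aperture, go as (pixel / f_ratio)^2.

def pixel_scale(pixel_um, f_ratio, aperture_mm):
    # arcsec/pixel, with focal length = f_ratio * aperture
    return 206.265 * pixel_um / (f_ratio * aperture_mm)

def rel_photon_rate(pixel_um, f_ratio):
    # relative photons per pixel per unit time at fixed aperture
    return (pixel_um / f_ratio) ** 2

print(round(pixel_scale(9.0, 9.0, 200), 3),
      round(pixel_scale(5.4, 5.4, 200), 3))   # identical sampling
print(round(rel_photon_rate(9.0, 9.0), 3),
      round(rel_photon_rate(5.4, 5.4), 3))    # identical photon rate
```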