Need help

PLU

Online

Joined: 2011-04-11

Posted: August 27, 2013 - 3:02pm

Hi,

I am new to DSLR photometry, but I am very interested in it.

I have done the online tutorial with the beginner data file and achieved satisfactory results. To measure the instrumental magnitudes I used MaxIm DL 5.18, following the procedure suggested in the online tutorial.

I would have a thousand questions to ask you, but for now I will limit myself to the most important ones.

1:

I noticed a difference between the preprocessing procedures in the Iris and MaxIm DL tutorials. With Iris it is suggested to separate the images into the three R, G, and B channels, while in MaxIm DL there is no mention of it. Why?

2:

To do photometry, is it necessary to acquire color images, or is monochrome OK?

3:

When I acquire an image in monochrome mode and RAW format, is it possible to separate the R, G, and B channels, or is it necessary to acquire a color image?

4:

I have a DSLR Canon 600D with an 18-55mm lens.

In recent years I have acquired many images with my camera for other purposes (nova patrol).

I take wide-field pictures with my camera using these settings: RAW format, monochrome, focal length 55mm, ISO 400, f/5.6, 60-second exposures, no darks, no flats. I use an NEQ6 mount for tracking.

Now I have the idea of using these images to do photometry of variable stars in those fields. Is it possible?

And if not, can you suggest which settings would be more appropriate for doing photometry with my camera in the future?

I hope I will not take too much heat for my comments, but I am sincere and have a great deal of experience with these issues.

I will try to answer your questions, but my first comment is a suggestion. While DSLR photometry can be done, I personally do not recommend it. Yes, it is nice if you already have the DSLR camera, but that is not what DSLR cameras are designed for. I wholeheartedly recommend you invest in a monochrome camera such as an Orion StarShoot. There are more expensive cameras available, and they are better, but they are not needed. The Orion StarShoot G3 is about $400 and designed for astronomical use. Buy a filter wheel with, at a minimum, B and V filters. The U band is worthless with most CCDs, so do not get that filter. The R and I bands are sometimes of interest, but they are basically just a lot more work. Most of the interesting and important data are obtained in the B and V bands.

Now to answer your questions. Much of the following pertains to both DSLR or monochrome cameras.

1: I noticed a difference between the preprocessing procedures in the Iris and MaxIm DL tutorials. With Iris it is suggested to separate the images into the three R, G, and B channels, while in MaxIm DL there is no mention of it. Why?

Answer: The only band close to the standard photometric V band is the green (G) band. The R and B bands do not fit well. The DSLR camera records three, sometimes four, layers or planes, one for each of the RGB filters. Only the G layer is of interest for photometry. That band must then be calibrated so that its response is close to the standard V response curve.

2: To do photometry, is it necessary to acquire color images, or is monochrome OK?

Answer: While DSLR cameras seem to be able to take either color or monochrome images, they really only take color images. The intensities of the color layers are adjusted to make the final image look monochrome. This is all wrong for photometry and should not be used. Now, as I said earlier, monochrome is better, but with a monochrome camera, not a color CCD camera. Color CCDs have color filters over each pixel. This not only reduces the sensitivity, it also reduces the resolution considerably. If you use a color camera, e.g., a DSLR camera, ALWAYS use a color mode with all other camera processing, other than exposure time, set to default. Then use only the G channel/layer.

3: When I acquire an image in monochrome mode and RAW format, is it possible to separate the R, G, and B channels, or is it necessary to acquire a color image?

Answer: See above. Never take a monochrome-mode image for photometry with a color CCD camera. It is not really a monochrome image. When the camera does processing it can change levels and responses. This will affect the photometric accuracy, which is very bad.

4: I have a DSLR Canon 600D with an 18-55mm lens.

In recent years I have acquired many images with my camera for other purposes (nova patrol).

I take wide-field pictures with my camera using these settings: RAW format, monochrome, focal length 55mm, ISO 400, f/5.6, 60-second exposures, no darks, no flats. I use an NEQ6 mount for tracking.

Now I have the idea of using these images to do photometry of variable stars in those fields. Is it possible?

And if not, can you suggest which settings would be more appropriate for doing photometry with my camera in the future?

Comment:
For the best and most professional CCD photometry, images should be saved in 16-bit FITS format. That means the dynamic range of the pixels will be 65,535 levels. Most color CCD cameras claim 24-bit images, but in reality they are only 8-bit: 8 bits for each of the three color channels equals 24 bits. This means the dynamic range will be only 256 levels. The other reason for FITS images is that you can automatically embed the details of the image, e.g., time, date, equipment, exposure, and more. Most DSLR cameras cannot save images in FITS format. Almost all monochrome astro CCD cameras are 16-bit and save images in FITS format. They also come with acquisition software. AIP4WIN is usually the choice for data extraction. BTW, AIP4WIN comes with a very large book with excellent information on both CCD imaging and photometry.

Bottom line:
You can use the DSLR camera for photometry, but if you are serious you should consider getting a true monochrome Astro CCD camera. It will require the same effort, but with much superior results.

I would like to directly answer your questions and then offer a few amendments to Jeff's reply above. First to your questions:

1. Why does the MaximDL tutorial not mention splitting the image into channels?
When we (the DSLR team on Citizen Sky, for which I was the team leader and professional liaison) wrote the tutorials none of us had a copy of MaximDL. One of Bob Stencel's students at the University of Denver, Nick Long, had a copy. He and I put together a quick tutorial on how to reduce the images using MaximDL. If my memory serves me correctly, the version of MaximDL we were using did not support splitting a DSLR image into RGB channels. The tutorial should be updated, but I don't have a copy of MaximDL and nobody on the DSLR team does either... so it is quite outdated.

As an aside, if anyone wants to help update this tutorial, please contact me and we can revise it accordingly.

2. To do photometry is it necessary to acquire color images, or is monochrome OK?
If you just want raw measurements of brightness without the ability to convert into a standard filter, monochrome is OK. But, as Jeff correctly points out, DSLRs never really take monochrome images. They always take color images and then combine all of the channels into a luminance image. I suggest you always take RGB images in RAW format and analyze them as suggested in the tutorials. This mode provides scientifically useful data (i.e. at least V-band magnitudes once calibrated) which may be submitted to the AAVSO for archiving.

3. Separation of RGB channels in a RAW monochrome image.
As mentioned above, just take RGB images and pull out the green channels in post-processing.

4. Processing old images without flats and darks
One of the nice things about DSLRs is that you can more-or-less calibrate the photometry in an image based upon the image itself. The procedure for calibrating the data will take into account a zero-point offset (i.e. darks) and two linear components (airmass and instrumental response). In principle, it would be possible to characterize things captured in the flats (optical aberrations, non-linear response, etc.); however, this would be more difficult.
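The calibration described above can be sketched as a simple linear least-squares fit. A minimal Python sketch follows; all star values below are invented for illustration, and the model is simply V_cat = v_inst + zp + k*airmass + t*color:

```python
import numpy as np

# Hypothetical measurements for a handful of comparison stars:
# instrumental green magnitudes, airmasses, instrumental colors,
# and the corresponding catalog V magnitudes (all values made up).
v_inst = np.array([-7.10, -6.45, -5.90, -5.20, -4.80])
airmass = np.array([1.10, 1.12, 1.15, 1.18, 1.20])
color = np.array([0.30, 0.55, 0.80, 1.05, 1.20])   # instrumental b - g
v_cat = np.array([3.20, 3.90, 4.50, 5.25, 5.70])

# Model: V_cat = v_inst + zp + k*airmass + t*color
# Solve for (zp, k, t) by linear least squares.
A = np.column_stack([np.ones_like(airmass), airmass, color])
coeffs, *_ = np.linalg.lstsq(A, v_cat - v_inst, rcond=None)
zp, k, t = coeffs

# Calibrate a new star measured in the same frame
def calibrate(v, X, c):
    return v + zp + k * X + t * c

print(calibrate(-6.00, 1.14, 0.70))
```

The zero point absorbs the dark-level offset, while the airmass and color terms are the two linear components mentioned above.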

This is going to be the worst advice I've given in a while, so please read carefully. It goes against all of my best practices, but I'm curious about the result (i.e. please post back what you find out). Try taking a set of flats and darks now and applying them to your oldest set of images. Then inspect the background. If your camera has remained stable (probably not the case), the background should be flat and free of non-uniformities. If you find this works, try it on other images to see if it continues to work. If you do this there is one HUGE catch: you MUST thoroughly inspect EVERY image you reduce in this fashion, as it is highly likely the set of flats and darks you take now will not work. Then subject the results to the highest level of scrutiny possible. I do not recommend that this be adopted as standard procedure (I'd never do it myself), but it will be a fun experiment.
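If anyone wants to try the background-flatness inspection numerically, here is a rough numpy sketch; synthetic frames stand in for real data, and the region size and placement are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for a raw science frame, a master dark, and a master flat.
raw = rng.normal(1000.0, 10.0, size=(200, 200))
master_dark = np.full((200, 200), 100.0)
flat = np.ones((200, 200))

# Standard calibration: subtract the dark, divide by the normalized flat.
calibrated = (raw - master_dark) / (flat / flat.mean())

# Crude uniformity check: compare background medians in the four
# corners against the center. Large differences suggest the flats
# taken now do not match the old optical configuration.
def region_median(img, ys, xs, size=40):
    return np.median(img[ys:ys + size, xs:xs + size])

center = region_median(calibrated, 80, 80)
corners = [region_median(calibrated, y, x)
           for y in (0, 160) for x in (0, 160)]
worst = max(abs(c - center) for c in corners)
print(f"worst corner-to-center offset: {worst:.1f} ADU")
```

On real frames the corner regions should be chosen to avoid bright stars, e.g. by using a clipped median.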

5. Exposure and image settings
Your choice of camera settings depends on the science topic you are attempting. Take a look at the results in the DSLR team's paper on DSLR photometry. There we were using stacks of 10 twelve-second exposures on a single star field. One correction to Jeff's comment above: if the camera is saving in JPEG format, then it is 8 bits per channel (256 levels of gray per colored pixel). Most modern DSLR cameras save in 12- or 14-bit RAW format, where this level of gradation is available to each color channel in each pixel.

Now I would like to make a few comments on Jeff's post above. Jeff and I have worked together on several epsilon Aurigae projects, and I too share his admiration of dedicated CCD and/or PEP measurements. As Jeff mentioned, there are several cases where CCDs will outperform DSLRs; however, there are also many advantages to using a DSLR. The DSLR paper I linked to above is a perfect example. From 10 twelve-second exposures we were able to get 0.02 mag or better internal precision on 134 stars over a 4.5-magnitude range (3.5 < V < 8.0) using only a DSLR on a non-tracking mount. The exceptionally wide field of view of DSLRs is perfect for monitoring star fields en masse. Several folks (see the references in the article) have also demonstrated how powerful DSLRs are for specific science cases (transiting exoplanets, eclipses, etc.).

To mirror Jeff's post, the bottom line is you need to decide what type of science you want to do. DSLRs are quite powerful devices and can compete with CCDs on bright stars. If you want to do photometry on 14th magnitude stars in specific astronomical filters, get a telescope with a (good) CCD (and filter set). If you want to do monitoring projects that produce calibrated V-band photometry on hundreds of stars with only minimal effort involved, go with a DSLR.

I've asked a few of the other DSLR team members to comment on this thread. Their advice will probably be more helpful than my own.

Good to see you are still active with photometry issues. Such fun and very rewarding was epsilon Aurigae. Are you still at the Max Planck institute in Germany?

One thing I have noticed is that JPG images, when opened in spectrometry or image-inspection programs, show a maximum of 256 levels, or 8 bits. Now, wonders never cease, but I have yet to see a JPG image with more than 256 levels per layer/channel.

While I am not a champion of taking flats, they can be important for CCD photometry. However, I think it is also very important that the flats be taken close to the same time and with the same optical path as the photometric images. While optical-path variations seem to be less important for spectroscopy, they can be important for accurate CCD photometry. With brighter stars this is not as significant, as the SNR will be high, but for fainter stars the optical path, in addition to the CCD pixel variations, becomes important.

Hi Brian,
thanks for all the advice you gave me.
I would like to try to understand better how to extract the three colors using MaxIm DL.
In MaxIm DL 5.18, in the "Color" menu there is an item: "Extract Bayer Plane".
When I click on it, a window appears in which it is possible to choose one of four planes (see the attached figure).
I tried to extract the four planes using the tutorial's images. Now I have four images, one for each plane.
How do I know which colors (R, G, B) are associated with the four images?
I know that the Bayer matrix of my Canon is RGGB, at least so I've heard.
Maybe I should assume the association is as follows:
Plane 1 -> Red
Plane 2 -> Green
Plane 3 -> Green
Plane 4 -> Blue

Is it correct?
And, if so, there are two G planes. Which one should I consider?

The position of the buttons corresponds to the position of the pixels within the Bayer cell. In Canon cameras the top left (1) is the red, (2) is one green, (3) at bottom left is the second green, and (4) is the blue. In Nikon cameras the red and blue are swapped.

I am not a MaxIm DL user; I know it only a little, as I had to help someone else with it recently.

Then you should use the sum of the two greens to get the best SNR. This is what I do in my own software, but I do not remember how to do it in MaxIm DL, sorry. There should be some function for adding images, as in IRIS. If you do not find it, do the photometry on either one; they are equivalent. You could also get a better SNR by doing the photometry on each and then adding the ADU counts before reporting to the spreadsheet.
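For anyone working outside MaxIm DL, extracting the four RGGB planes from a raw mosaic and summing the two greens is only a few lines of numpy; the mosaic below is random data standing in for a real RAW frame:

```python
import numpy as np

# Hypothetical 14-bit Bayer mosaic from a Canon-style RGGB sensor
rng = np.random.default_rng(1)
raw = rng.integers(2048, 16383, size=(6, 8)).astype(np.float64)

# In an RGGB mosaic each 2x2 cell is laid out as:
#   R  G1
#   G2 B
red = raw[0::2, 0::2]
green1 = raw[0::2, 1::2]
green2 = raw[1::2, 0::2]
blue = raw[1::2, 1::2]

# Summing the two green planes doubles the signal, improving SNR
green = green1 + green2
print(green.shape)  # each plane has one quarter of the mosaic's pixels
```

For a Nikon-style mosaic the red and blue slices would be swapped, as noted above.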

The RAW image delivered by the 600D is coded to 14 bits, perfectly linear (it's a CMOS sensor, not a CCD), by the analog-to-digital converter, and delivered as a 16-bit word in the RAW image file. I don't understand why some people think a DSLR works only on 8-bit JPEGs. Today all DSLRs code RAW image data into 14 bits of a 16-bit word. The 600D should have a black-level offset of 2048 (older models had it at 1024). That means no light, the zero, is coded at 2048. This value must be subtracted before applying a flat. With IRIS this is done through the offset; how to do it in MaxIm DL is unclear to me. MaxIm DL is obviously not well oriented toward DSLRs; for this, IRIS is better.
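A small numpy sketch of the order of operations Roger describes, with the black level subtracted before the flat is applied (the data are synthetic; the 2048 offset is the value quoted above for the 600D):

```python
import numpy as np

BLACK_LEVEL = 2048  # 600D-era Canon RAW black offset (older bodies: 1024)

rng = np.random.default_rng(2)
raw = rng.integers(2048, 16000, size=(100, 100)).astype(np.float64)
flat = np.clip(rng.normal(1.0, 0.05, size=(100, 100)), 0.5, 1.5)

# Subtract the black level FIRST, then divide by the normalized flat.
# Dividing before removing the offset would corrupt the photometry,
# because the flat correction is only valid for the light signal.
light = raw - BLACK_LEVEL
calibrated = light / (flat / flat.mean())
```

Reversing these two steps would divide the constant 2048 pedestal by the flat as well, imprinting an inverse-vignetting pattern on the background.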

Never use JPEG, not because of the 8-bit format itself but because such an image is processed by the DSLR to electronic-imaging standards before the JPEG compression. These standards include a color-space conversion to sRGB and a BT.709 gamma, all of which are very non-linear transformations that make the data absolutely improper for photometry. We must use the RAW output format, which in Canon cameras is just the sensor data.

You can also use the B channel to get a B-V using the B/G ADU ratio; this works very well for most stars. The only exceptions are the M6 to M10 spectral types, which have an excess of deep blue not seen by the B channel of the DSLR. This could be corrected with a specific process, but those stars are not common, so there is little chance you would catch one in your images. The B/G ratio should be converted to magnitudes as you do for G, and then a classical transformation applied to get B-V. The k coefficient is usually about 1.9, and a small zero point may be needed. Just calibrate it on a good series of stars with B-V between -0.2 and 1.2. This B/G color is very stable, as it is extracted from the same image at the same time.
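As a sketch of the arithmetic (the k of 1.9 is only the typical value quoted above; the real coefficient and zero point must come from your own comparison-star calibration):

```python
import math

def b_minus_v(adu_b, adu_g, k=1.9, zp=0.0):
    """Estimate a standard B-V color from the DSLR's B and G ADU counts.

    The instrumental color is -2.5*log10(B/G); the transformation
    coefficient k (typically about 1.9) and zero point zp must be
    calibrated against comparison stars of known B-V.
    """
    instrumental = -2.5 * math.log10(adu_b / adu_g)
    return k * instrumental + zp

# A star whose blue channel collects fewer counts than green
# comes out with a positive (redder) transformed color.
print(b_minus_v(8000.0, 12000.0))
```

The ADU values here should of course be black-level-subtracted, flat-corrected counts, not raw pixel values.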

Then it's not possible to extract an Rc magnitude from R, as it's too far from the Cousins red, but you can extract a V-Rd (as done for the B-V). It will be specific to the DSLR, not any standard, but very interesting. For example, with the present nova in Del this shows very well the continuum evolution of the spectrum.

Your 600D is an excellent tool for photometry: it is very stable and linear in RAW mode, the noise is low, and there is no problem with dark-current pattern up to exposures of 1 minute at 20°C, so there is no real need for dark processing. The flat SHALL be used to eliminate the vignetting of the lens. At large apertures the vignetting is often as high as 40%; it is mandatory to correct it precisely for good photometry. A good flat can be used for a long time for a given lens at a given f-number; it doesn't change. With a telescope we should be more careful, as the position of the optical elements is not always recovered so precisely.

A wide-angle lens is good for a large FOV, but a telephoto lens provides a much larger photon flux and higher SNR. I often use a 200mm f/4 lens and can achieve SNR well above 500, resulting in standard deviations over a series of a few millimags. At lower accuracy, and using an equatorial mount, it is possible to reach novae and supernovae at mag 14. DSLRs are superb tools for photometry. I know there are AAVSO folks who don't believe it, but this is my own experience over more than 5 years now (I am an imaging-technology engineer). The CMOS sensors in DSLRs have made very fast and strong improvements within a few years (it's a Moore's-law item...). They are now much better performers than classical CCDs (which are on their way to disappearing), and I am sure you have a very good tool to have a lot of fun with the stars (and generate excellent data).

Brian recommended our common paper on the DSLR experiment; I would like to add my own paper on testing color-correction techniques at http://www.aavso.org/ejaavso402834 . There you will find an evaluation of two methods for color correction/transformation in two photometric systems.

Welcome to DSLR photometry. I also use a Canon 600D camera, but not the same lens as you. The kit zoom lenses sold with consumer-level DSLR cameras are just OK for normal photography but not very good for photometry. The image quality is usually quite poor when the aperture is fully open. When I was using my 18-55 kit lens years ago, it was necessary to stop it down to f/8 for reasonable star images. Zoom lenses can change focal length when pointed up at the sky, so I used electrical tape to stop that happening.

Like Roger, I now use a 200mm lens for all my DSLR photometry. It is an excellent lens and can be used fully open at f2.8 but I generally use f3.2 or f4 to reduce vignetting and improve star images at the edges of the frame.

I use MaxIm DL to control the camera for collecting calibration and science images. MaxIm DL saves these images in TIFF format (not the Canon RAW format), but the four color channels still need to be extracted and processed separately. As Roger said, in Canon cameras plane 1 is red, planes 2 and 3 are green, and plane 4 is blue. I process the two green planes separately, then average the StarADU values. MaxIm DL has a batch-process facility which makes calibration and color extraction very easy. I haven't written down the steps I use to do all this, but it would be useful for my own reference and for others. I'll contact Brian Kloppenborg to see about updating the Citizen Sky MaxIm DL tutorial.

However, I don't use MaxIM DL for measuring star brightnesses from the calibrated single color images. The Magnitude Measurement Tool (MMT) in AIP4Win is much better for me. MMT can batch process all the calibrated images reasonably quickly and output a variety of reports. Extinction correction and transformation of instrumental magnitudes to the standard system is then done in Excel.

Keep posting questions to this forum, the collective wisdom/experience of this group should be able to provide the answers. Cheers,

I sometimes get the feeling that equipment differences are overrated and data reduction/observation technique is underrated. When you look at the early light curve of Nova Del 2013 for example, you could see lots of CCD observations claiming an error of a few millimags but having a scatter of tenths of mags and sometimes quite a big offset to more consistent visual observations.

Using old wide field exposures w/o dark and flat images

I would not dismiss this altogether. For wide-field images like the one in question here, it should be relatively easy to test the feasibility experimentally: measure a few (say 10) comparison stars that are not variable, ideally of similar color but covering a wide range of magnitudes and spread across the field, and then plot your measured instrumental magnitudes against the catalog magnitudes. Ideally the values would lie on a straight line, but they won't: at the bright end there will be deviations from the straight line because of stars saturating the sensor, and at the faint end there will be deviations because of (a) blending of stars and (b) low signal-to-noise ratio. The magnitude range where there is a more or less linear relationship between instrumental magnitude and catalog magnitude (if any such range exists) is the usable magnitude range for this type of exposure, with the observed scatter as your uncertainty. Bottom line: no need to speculate, just do and test ;-)
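That linearity test can be sketched in a few lines of numpy. The star data below are invented, and both the assumed linear core and the 0.2 mag tolerance are arbitrary choices for illustration:

```python
import numpy as np

# Hypothetical comparison-star data: catalog V magnitudes vs measured
# instrumental magnitudes. The bright end flattens (saturation) and
# the faint end deviates (blending, low SNR).
catalog = np.array([3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0])
instrumental = np.array([-9.5, -9.0, -8.0, -7.0, -6.0, -5.0, -3.5, -2.3])

# Fit a straight line to the middle of the range, where we expect
# the response to be linear, then check every star against it.
core = slice(1, 6)  # assumed linear portion
slope, intercept = np.polyfit(catalog[core], instrumental[core], 1)
residuals = instrumental - (slope * catalog + intercept)

tol = 0.2  # stars deviating more than this fall outside the usable range
usable = catalog[np.abs(residuals) < tol]
print("usable range: %.1f to %.1f mag" % (usable.min(), usable.max()))
```

The scatter of the residuals inside the usable range is then a realistic estimate of the measurement uncertainty.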

Darks and flats taken long after the observation:

Darks are very hard to get right long after the observation, because dark current (as a rule of thumb) doubles every 6 degrees Celsius of sensor temperature or so; recreating the right temperature (if you recorded it for the observing nights) would be a hassle. For short exposures, however, the dark current isn't that significant. In any case, the effect of dark current should be more or less a uniform offset to all pixels, and this kind of offset is removed pretty well by aperture photometry when subtracting the sky background measured in the outer annulus from the star pixel values. Hot and cold pixels need some attention to keep them out of the measurements, though, as usual.
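A toy demonstration of why the uniform offset cancels in aperture photometry (everything below is synthetic; the star position, brightness, and the aperture/annulus radii are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic frame: flat sky plus a star, plus a uniform dark-current
# offset that we deliberately do NOT calibrate out.
frame = rng.normal(100.0, 3.0, size=(50, 50))
frame[24:27, 24:27] += 500.0     # a crude 3x3-pixel "star"
dark_offset = 25.0
frame += dark_offset

yy, xx = np.mgrid[0:50, 0:50]
r = np.hypot(yy - 25, xx - 25)

aperture = r <= 4                 # star aperture
annulus = (r >= 8) & (r <= 12)    # sky annulus

# Net flux: total counts in the aperture minus the per-pixel sky
# level estimated in the annulus. The uniform offset cancels here,
# because it raises both the aperture and the annulus equally.
sky = np.median(frame[annulus])
net_flux = frame[aperture].sum() - sky * aperture.sum()
print(round(net_flux))
```

Note that hot pixels break this cancellation, which is why they still need masking or rejection.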

Lenses

I do wide-field photometry with a fixed 50mm lens for bright objects. The lens is a 1970s-vintage SMC Pentax M 1:1.4, which gives unbelievably flat images when stopped down to f/2. Those old lenses were designed to illuminate a film area twice as wide as my Four Thirds CMOS sensor, so vignetting is not as much of a problem as with some modern "digital" lenses. I see no problem per se doing photometry at this kind of focal length.

I agree with Heinz-Bernd and Mark on a number of points, and in particular on the importance of the reduction and observation techniques! It's key.

About flats: agreed, they are even mandatory. Most lenses have large vignetting when used at their largest aperture (~50%) and at least 10% at the next two or three stops, after which it stays at 10% or so. This is what I get from several old 24x36 Nikkor lenses on an APS-C sensor. 10% is 0.1 mag...

About darks: I generally agree, but not 100%. In fact the master-dark technique (using one very good master for many observing sessions) is easy with a DSLR, just as it's done with high-end astro cameras. The internal (important!) temperature of the sensor is recorded in the Canon RAW files and can be used (Mark uses it); regrettably, the software needed to read it is not free... Then there is another (and better!) possibility, which is to directly measure the dark-current impulses (not the base dark level) in the images themselves. Knowing their level, we just have to rescale the master dark to the right amplitude and subtract it. This technique is extremely accurate, much better than any based on temperature. The measurement is based on a correlation product between the master-dark impulses and the same pixels of the image. As there are typically 100,000 of them, the result is statistically extremely accurate. This technique is available in IRIS.
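My understanding of that rescaling, sketched in numpy with synthetic data; this computes a least-squares scale factor over the hot pixels, which I believe is equivalent to the correlation product described, but it is not IRIS's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic master dark: mostly zero, with a sparse population of
# hot pixels ("dark-current impulses").
master = np.zeros(100_000)
hot = rng.choice(master.size, 1000, replace=False)
master[hot] = rng.uniform(50.0, 500.0, size=hot.size)

# A science frame whose dark current ran at 0.6x the master's level
# (e.g. a cooler night), plus a sky background and noise.
true_scale = 0.6
frame = 200.0 + true_scale * master + rng.normal(0.0, 5.0, master.size)

# Least-squares scale factor from the hot pixels only, after
# removing the overall background level.
signal = frame[hot] - np.median(frame)
scale = np.dot(signal, master[hot]) / np.dot(master[hot], master[hot])

dark_subtracted = frame - scale * master
print(round(scale, 2))
```

Because thousands of impulses contribute, the recovered scale is statistically very tight even with substantial per-pixel noise.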

The normal dark-current bias, common to all pixels (those without impulses), is not accessible from Canon cameras (others?). The camera measures it and compensates for it, even in RAW mode; we can't get it. But the resulting shot noise is in the image, in addition to the read noise.

Short focal-length lenses: some are certainly better than others. With the Nikkor 85mm f/1.8 (well, an old 24x36 lens) on APS-C I had the same experience as Mark reports. I found the basic Canon 18-55mm even worse at f/5.6! To collect as many photons as with my 200mm f/4 I would have to use the 85mm at f/1.8. The vignetting is strong (~50%) but manageable. The problem is the large aberrations at the periphery of the image; it's impossible to get a uniform enough defocus. I am obliged to stop down to f/2.8 or f/4, so I rarely use it. I would note that the 4/3 format is certainly less sensitive to this; it is squarer and smaller and probably just fits the sweet spot.

The following may be taken as being negative, but I hope it is not as that is not the intention.

First let me say right off, DSLR cameras can be used for photometry. I have used one.

So why do I push 16-bit monochrome CCD cameras over DSLR cameras? To me it is a matter of using the right tool for the job. While you can use your Cadillac Eldorado to haul concrete, there are certainly better ways; in fact, if you can afford the Cadillac, you can probably afford a truck. The same is true with the DSLR camera. DSLR cameras take great pictures and are easy to use, but astronomical photometry is not something they were designed for. Not only is the camera not designed for astronomical work, photometry in particular, but the software that comes with it is not designed for it either. A 16-bit monochrome astronomical CCD camera is an excellent choice for astronomical photometry, and the software that comes with it supports the functions needed to preprocess images. The monochrome camera is also easier to use. With the proper adapter, it can be used with a telephoto lens or telescope just like a DSLR camera. For a larger FOV, a faster telescope/lens and/or a focal reducer can help.

There has been some discussion of amateur vs. professional. Most of those who choose to use a DSLR for astronomical photometry (not you, Brian) are certainly in the amateur or novice class. I have seen just a couple of people using DSLRs for photometry who have produced excellent work, but by far the majority of DSLR observers have produced poor to very poor results. How can I say this? I have direct experience with several large international campaigns. I was involved with the 1982 Epsilon Aurigae eclipse campaign as well as the last campaign in 2008. During the first, there were no CCD or DSLR data; all data were taken with single-channel photometers, photon-counting PMT and PIN-diode photometers. Almost all the data from a multitude of observers around the world, both amateur and professional, agreed very closely. During the last campaign there were many CCD as well as DSLR entries of photometric data. With the exception of one or two observers, the DSLR data were scattered all over. CCD entries were better, but still showed a lot of scatter. The best and most consistent were the single-channel photometric data. Part of the reason the CCD data were not better was the brightness of the star. My feeling is: if you have a DSLR and want to do astronomical photometry, consider it an experiment to get familiar with the process. If you want to get serious, get a 16-bit monochrome camera and do it right.

16 Bit vs. 8 Bit.

As I have said before, I have yet to see a JPG or even a TIFF image that has pixel values greater than 255. While a DSLR may have an ADC that produces 16-bit pixel data, once it is saved as a .jpg it is 8-bit data. If someone can show me otherwise, I would love to see how that is done. I have examined many color images in many formats (other than FITS) and, again, none have pixel values greater than 255. Serious/professional photometry uses 16-bit .fits images, not .jpg.

CCD vs. CMOS

The following is taken from a section of my new book on spectroscopy, but equally applies to photometry.

A digital camera has a small chip that is a CCD (charge-coupled device) or CMOS (complementary metal-oxide semiconductor) chip. The CMOS chips work well, but the CCD is the preferred type of sensor for most astronomical work. Both CCD and CMOS work by the photoelectric effect, where photons knock electrons loose in a cell. CCD chips are more sensitive in the longer-wavelength infrared region. This region is where a lot of work is done on the hydrogen alpha and helium I lines (VRI photometric bands). CMOS chips are popular in DSLRs and other non-astronomical digital cameras. They have the higher data-transfer rate and lower noise needed for general camera use. The CCD excels for astronomical use with its better long-wavelength response and higher sensitivity in low-light conditions.

The CCD and CMOS chips contain rows and columns of pixels in the form of a matrix. Each pixel detects and reports the intensity of the photons hitting it. New cameras can have over 1000 rows and 1000 columns, for a total number of pixels greater than one million. When photons hit a pixel, electrons are knocked loose and fill a pixel well. The charge in the pixel well is then read and converted to an Analog-to-Digital Unit (ADU) count. The well is then emptied and ready for the next exposure. Color cameras have 3 or 4 different color filters over the pixel matrix. For this reason color cameras have much lower resolution than a monochrome camera with the same total number of pixels. The filters also reduce the amount of light each pixel receives, so the color camera is also less sensitive than a monochrome camera. Typically the maximum ADU count for a color pixel with a color CCD camera is 256, or 8 bits. Monochrome CCD chips use each pixel for a single point. This means the monochrome CCD chip has 3 or 4 times the resolution of a color chip of the same size. In addition, monochrome CCD cameras are typically 16-bit cameras and produce pixel ADU counts from 0 to 65,535, meaning they have a much greater dynamic range in intensity.

In addition, CMOS devices have ADCs for each pixel. As noted above, this results in much faster downloading of data. CCDs use a common ADC, where all analog data are converted to digital data through the one ADC. The reason this matters is calibration of the device. There are sensitivity variations between pixels. With the CCD, the common ADC means there should be no variation due to it. However, the multitude of ADCs in a CMOS device will have gain variations. Flat frames can be used to compensate for these variations, but CMOS just adds to the problem.

>As I have said before, I have yet to see a jpg or even a tiff image that has
>pixel values greater than 255. While a DSLR may have a ADC that produces 16
>bit pixel data, once it is saved as a .jpg it is 8 bit data. If someone can
>show me otherwise I would love to see how that is done.

Ok, but no one (hopefully) is doing DSLR photometry on JPEG images, for many reasons. DSLR photometry must be done (and is done) on vendor specific RAW file format images which store the pixel info in the native ADC resolution, so 12 bit for older models and 14 bits for most current models. Many astrophotography software packages support importing DSLR raw images in their native color depth, so after splitting the CFA color planes and lossless conversion to FITS (plus some meta data editing if needed), professional image processing software can be applied to DSLR image data in 12 or 14 bit depth.

Yes, DSLRs were not designed for photometry. But nor was the human eye and I would still not discourage visual observations, because you cannot beat the availability and ease of use aspect.