It's a pity. When you buy a photometric filter you expect it to behave like one :( Does this mean that my measurements can't be used, or is there a way to determine what the difference is and compensate for it?

If this happens more often, it could be worthwhile to add a column to the AAVSO format identifying the filter brand being used. Maybe this also explains some other issues in the CCD light curve...

Yesterday the weather was bad here, so I had time to work on my simulation (for more details about it, have a look at: http://www.aavso.org/ejaavso402834 )
This time I compare the results of a 450D DSLR and a Nikkor lens against the theoretical pass-band of a V filter (expected to include everything: filters, sensor, optics...). The mag deviation between the two is determined for various Pickles spectra and some spectra of the nova. The color correction coefficients are optimized for the Pickles spectra as usual (110 of them; not for the nova, nor for the M stars): either k for a classical transformation (zp is zero in this simulation setup), or "a" and "b" for my VSF technique. The deviations are shown in the attached table.

The nice nova spectra are from O. Thizy, made with an Alpy 600 and a Tv85 instrument (noted OTmdd at the end of the table).

My first point is that not only Ha is a problem, but Hb as well. Comparisons have also been done by erasing either Ha or Hb or both from the spectra. When Ha and Hb are both erased, the deviations are near zero mmag in every case.
At the Ha wavelength the V pass-band has 2.7% transmission, and Ha peaks 76 times stronger than the continuum at 550 nm! After the V filter, Ha is still about 200% of the 550 nm continuum. The DSLR also has a transmission of about 1.6% at Ha, resulting in a strong, even if smaller, G pollution.

Hb is somewhat different: less strong (it peaks at 20 times the 550 nm continuum), but it falls into the blue-extended pass-band of the DSLR G channel. The result is that Ha and Hb are 50/50 deviation contributors over the V pass-band. The V transmission at Hb is 9.5%, while that of G is 70% (relative to the pass-band peak).
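The mechanism can be sketched with a toy synthetic-photometry calculation. Only the 2.7% red-tail transmission and the 76x line peak come from the numbers quoted above; the passband shape and the 1 nm Gaussian line width are illustrative guesses, not the real curves:

```python
import numpy as np

# Toy model of an emission line polluting a V-like passband.
wl = np.arange(450.0, 750.0, 0.1)                  # wavelength grid, nm

continuum = np.ones_like(wl)                        # flat continuum
halpha = 76.0 * np.exp(-0.5 * ((wl - 656.3) / 1.0) ** 2)
spectrum = continuum + halpha

# Crude V-like band: full transmission 500-590 nm, 2.7% tail to 700 nm.
band = np.where((wl > 500) & (wl < 590), 1.0, 0.0)
band += np.where((wl > 590) & (wl < 700), 0.027, 0.0)

def synth_mag(flux):
    """Instrumental magnitude from summing flux through the band."""
    return -2.5 * np.log10(np.sum(flux * band) * 0.1)   # 0.1 nm steps

deviation = synth_mag(spectrum) - synth_mag(continuum)
print(f"line-induced deviation: {deviation * 1000:.0f} mmag")
```

Even this crude setup yields a deviation of several tens of mmag (negative, i.e. brighter), showing how a percent-level red tail plus a strong line produces the offsets discussed here.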

The simulation result (the attached table) is a deviation of Gcm = -170 mmag for the DSLR+VSF and Gtm = -85 mmag with a classical transformation (on Sep. 5th). The uncorrected output deviation is Gm = -101 mmag.
With coefficients set for normal stars, the VSF works worse than the classical transformation (usually it does much better). But the VSF coefficients can be adapted to a given comp "ensemble", which strongly reduces the nova mag deviation. There is no setting of the classical transformation that improves the nova G mag deviation.

Another point from the simulation is that the nova is very blue and very red at the same time! The green continuum is very low, so the nova is magenta! This results in Bm-Gm and Gm-Rm diverging strongly from their usual relationship.

I am now looking at how to proceed with my data; my priority is to continue collecting images every day it's possible. We will see later how to make the best possible reduction of them.

As others have said, I have no good numerical table of the response curves of commercial filters; I have only seen small printed graphs, which show that the Baader has a high transmission at Ha. I could digitize that curve, but is the data accurate? Sometimes such commercial curves are quite inaccurate...

Thank you so much for the time and effort you have all invested to improve your photometry! I cannot tell you at this point exactly how all of your data will contribute to a better understanding of novae (and especially gamma-ray novae like V339 Del). But I can assure you that the AAVSO light curve will be scrutinized by professionals for years to come.

For example, we detected radio emission from this nova much later than expected (ATel #5382). That probably means that either: 1) the nova is much farther away than one might expect for such an optically bright source; 2) there was a delay between the thermonuclear runaway that triggered the event and the expulsion of the ejecta from the system (as Nelson et al. 2013 [2012arXiv1211.3112N] suggested for T Pyx); or 3) for some reason the ejecta were initially too cold to produce strong radio emission. A detailed comparison between the optical and radio light curves could help reveal which explanation is most consistent with the data.

The link that you gave for the Baader filters is interesting, as it gives the filter transmission curves for their UBVRI filters. The plot is small, but it seems to indicate that the Baader filter has a longer red tail than does the Astrodon filter. The Baader filter appears to have 10% transmission at Halpha (656nm), falling to zero transmission around 720nm. Does anyone have a better plot with more resolution, or an actual table of transmission values for that filter? The Astrodon filter has near zero transmission at Halpha. If you look at Robin Leadbeater's plot earlier in this thread, the nova has a huge Halpha emission line, and that may be contributing enough light to make your observations significantly brighter than other observers.

However, I'm going to look at your images later today as a further check.

Arne

[/quote]

Currently, based on the spectra in the ARAS database, the H-alpha line is contributing ~5% of the flux in the Johnson V passband (response curve obtained from the Virtual Observatory website).

Carlos Tapia, an optical specialist at the Universidad Complutense in Madrid, Spain, who works in the LICA optical components testing laboratory, has tested a wide sample of filters, some of them photometric. Test curves can be reached at

It can be clearly seen that the Baader has a "tail" in the red part of its response. The Optec V Bessell seems to have a similar curve.

[/quote]

Wonderful! Thanks, Miguel, for pointing me to a reference that was unknown to me. The red tail probably explains much of Andre's brighter measures. It does NOT mean that filters with slightly different response cannot be used, just that pathological stars (like emission line objects) are difficult to measure correctly with wide-band filters. Higher-resolution, photometrically calibrated, spectra will give more consistent results. More later!

Unfortunately, as far as I know, Carlos Tapia of the LICA laboratory only issues the curves, without a numerical table. This is because LICA tests optical components for external customers as a commercial service, which includes a complete information dossier. The curves published on the website are made by Carlos as a spare-time activity.

I'm happy to know they are useful.

Cheers

[quote=Roger Pieri]

Thanks, Miguel, for the source! It's very interesting; is there any way to get a numerical table of it?

Anyhow, the curves are accurate enough; I will digitize them and use them for a numerical simulation of the issue.

I digitized the Baader V curve, and my simulation gives a large deviation of -186 mmag without correction. But this is without the sensor response, just a photonic calculation. The classical transform doesn't improve it. It's possible to get a very good result overall with a VSF-like process.

This curve extends further into the red, not only at Halpha, and is also wider toward the blue than the classical Johnson definition. This is in fact just the difference between a so-called "Bessell" and Johnson. Baader simply follows that "Bessell" definition; Optec has both. But I have other Bessell definitions that are not very different from the many Johnson "interpretations" that have been published. Apparently the related paper from Bessell is no longer accessible? I have to review the reasons for those large differences, probably the integration of some CCD response, but CCDs have evolved since then...

Miguel and Arne, thanks for the useful information. This makes a lot clear. In the future I want to do more of this kind of work, so I will consider getting an Astrodon B and V filter sometime. I just became an AAVSO member and love the VPhot program, btw. I just wondered if there is any idea of making a stand-alone program of it? My files are 32 MB and uploading dozens of them doesn't make me happy :) But maybe that's more for another topic.

Arne: I'm curious whether my data is otherwise OK. Thanks again for your effort in starting this topic and helping.

I highlighted MZK since his data points lie "in the middle". I am not sure what happened with WGR; his data (top series) are atypically high and show some "outbursts"... Did he, Gary, try a different filter?

My data are the large cloud underneath WGR's series. I created some (ugly) Excel graphs that are kind of interesting (at least for me); see the attachments:

NovaDel2013_JD2456544.6_Vmag+Cmag+Kmag_20130909.png is an overview of the whole run, displaying the V, comp and check stars. In the middle of the run some faint clouds came through (airplane trails?), but they did not affect the V mag (as they shouldn't).

NovaDel2013_JD2456544.6_Vmag&Kmag-Cmag_20130909.png shows zoomed in Vmag and the difference Kmag-Cmag.

This was a mammoth run for me: it was the first time that I could monitor the data acquisition from inside my home. I was targeting 6 hours, but after 5 hrs I admit I fell asleep in front of the monitor... How do you guys/gals stay up all night?

When I woke up (I had dozed off for only a few minutes) and checked the monitor, I noticed immediately that something was wrong: the star formation had changed! Something seemed to be eating our nice nova... Time warp, neutron star, degenerate matter...??? Later on I followed up and checked the saved single images (more than 600, crazy!).

Call it "newcomer's luck"; what I found you can see in the last attachment...

I noticed the "outbursts" on 2456544.5+. I went back and looked at the images again. I saw nothing obvious; however, the check star was also affected. I was hoping that you/others got the same result. No such luck. Perhaps a passing contrail or cloud.

So basically, please correct me if I'm wrong: the difference is caused by the influence of the H-a and H-b lines. Using the Baader filter, I catch part of these lines and get a different magnitude.

So that's a handicap I have to deal with. I was wondering if there is any way to correct for this somehow. On the other hand, I was thinking this might even have extra scientific value. My time series will differ from the time series taken through other filters because of the H-a influence. So basically the difference is a measure of the strength of the H-a signal? Combined with spectra, I can imagine this adds some information, or am I thinking the wrong way? Probably the scatter of the measurements is too high to measure this, but still, the principle is interesting, I think...

André, yes, it's more or less like the G channel of the DSLR, except it's more H-a than H-b. As you suggest, in the case of single-channel photometry, the only way is perhaps to report uncorrected data and put the observing conditions in the comments: the references of the filter and camera. What does Arne recommend?

If you have the R channel, you could apply a compensation like -2.5 log(Vn - k.Rn); since Rn is essentially H-a, this works very well to adjust the result to Johnson (here Vn and Rn denote the photon counts or ADUs).
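The compensation above can be sketched as a small helper. The counts and the coefficient k below are hypothetical; both k and the zero point zp would have to be calibrated against standard stars:

```python
import math

# Sketch: subtract a scaled fraction of the R-channel count
# (dominated by H-alpha) from the V-channel count before taking
# the magnitude. k = 0.25 and the counts are illustrative only.
def corrected_mag(v_counts, r_counts, k, zp=0.0):
    """-2.5 log10(Vn - k*Rn), with Vn, Rn in photon counts or ADUs."""
    return zp - 2.5 * math.log10(v_counts - k * r_counts)

print(corrected_mag(120000.0, 40000.0, k=0.25))
```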

1/ We should use filters from only ONE manufacturer, certified by the AAVSO.

2/ We should use the SAME comp/check stars (recommended stars).

3/ Automation could do a LOT, like looking into the FITS header, calculating saturation levels from the CCD camera spec and exposure time, and simply rejecting images with saturated stars - or not allowing a star to be measured if it is saturated.

4/ AAVSO sequences should NOT contain dubious stars like star 80 (000-BLC-955), which has a lot of fainter stars within a near-aperture-size ring. It shouldn't even be there, and I have never used it as a comp or check star.

I also have doubts about using BIG-aperture photometry on bright objects like Nova Del. I can understand the temptation, but what about the quality of sub-second photometry?

For those reporting multiple-filter (transformed) data, I don't have much to say, except that we should use the same filter (manufacturer, and AAVSO certified).

I suppose it's hard to certify filters because the sellers and manufacturers would raise hell, but if not, how can we ever have "perfect" data?

Otherwise there could be, for example, a V(filter X) option instead of just plain V for the reports.

/Pierre

[/quote]

I agree with a lot of what you state, but I don't quite understand the problem with the 80 star.

It is easy to avoid the nearby stars, and it is the closest comp star of reasonable magnitude and colour. This is why I chose it as my comp star.

The pic below uses a 16-pixel aperture and a 10-pixel-wide ring, with no problems from contaminating stars.

Roger is correct, in that the Rc or DSLR-R channels are heavily influenced by Halpha, and so give a clue as to how to correct the V-band measures to account for any red-tail-Halpha contribution. Here are a couple of papers to read that discuss red leaks, filter responses, etc.:

Basically, if stars behave reasonably smoothly through a filter bandpass, then it is straightforward to transform the data onto a standard system. If there are sharp features, such as emission lines, it gets far more difficult.

One method of correcting for the red wing of some filters, like the Baader V, is to observe at V, Rc, Ic. Rc is affected by Halpha; Ic is not. If you convert the magnitudes into fluxes, you can see the increase in brightness at Rc, compared to a straight line between V and Ic (sorta forming the continuum). That tells you the amount of the increase at that date, but does not tell you how to correct the result. You would have to do this for several objects with differing Halpha contribution to create the data set for, say, a linear fit. I think it could be done, but I bet it is not simple. My expectation is that we can adjust the zero point between observers as long as there is sufficient overlap.
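The V/Rc/Ic check described above could be sketched as follows. The effective wavelengths are approximate, the magnitudes are hypothetical, and per-band flux zero points are ignored (a crude simplification, but adequate for a relative excess estimate):

```python
# Approximate effective wavelengths of V, Rc, Ic in nm.
LAM_V, LAM_RC, LAM_IC = 551.0, 658.0, 806.0

def rc_excess(m_v, m_rc, m_ic):
    """Fractional flux excess at Rc above a straight line drawn
    in flux between V and Ic (a stand-in for the continuum)."""
    f_v, f_rc, f_ic = (10.0 ** (-0.4 * m) for m in (m_v, m_rc, m_ic))
    frac = (LAM_RC - LAM_V) / (LAM_IC - LAM_V)
    f_cont = f_v + frac * (f_ic - f_v)        # linear interpolation
    return f_rc / f_cont - 1.0

# Hypothetical nova-like magnitudes: Rc much brighter than the continuum.
print(f"Rc excess: {rc_excess(6.7, 5.9, 6.6) * 100:.0f}%")
```

As Arne notes, the excess alone gives the size of the Halpha contribution on that date but not the correction itself; that would need a fit over several objects.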

As Pierre suggests, we could "standardize" on filters from one vendor. However, for now, if your data appears to be brighter than the typical observer, then perhaps include a note indicating what filter vendor you are using or any other equipment details that might be useful to know.

I've uploaded ~600 BVRI datasets from BSM_Berry (on the roof of HQ) for the UT nights of 130906, 130907 and 130909. These are roughly transformed; I don't have good coefficients yet. For the target star itself, transformation makes no difference, but it does for the comparison stars. The BSM data overlay some observers but not others; they indicate the fainter observations are the closest to the standard system. I will redo these points once I get good coefficients, and also bin them to improve signal/noise and reduce the number of submitted observations. There are also an hour's worth of BSM_Hamren data (fully transformed) for 130908. All of these points are under the HQA obscode.

Gary's observations from 130909 are the recent outliers. Not only are they 0.2mag too bright, but they exhibit flaring activity which is not seen in other observers at the same time. I think those images should be looked at carefully. I see similar "flares" on his 130825 data too.

That said, I'm seeing nightly variations. It is a little suspicious that they peak around transit, but they don't peak exactly at transit and have the same shape and amplitude in all filters. Since other observers are seeing nightly variations (see for example 130906, where 5 separate observers show a fading trend), I'm inclined to believe that these variations exist - but when you see a new feature, it is always suspect and needs to be studied carefully.

Some other observations:

PXR 2456532 has a dozen points from 0.35632 to 0.35688, an interval of about a minute. These are given with an uncertainty of 0.02mag, and with a peak-peak scatter of 6.488 - 6.415 = 0.073mag. It would be far better to give a single datapoint under these circumstances.
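Collapsing such a burst into one datapoint is simple enough to automate. A sketch (the two endpoints are the quoted PXR extremes; the intermediate values are hypothetical fillers):

```python
import statistics

# Collapse a quick burst of same-filter measures into one reported
# datapoint, with the scatter of the burst as its uncertainty.
burst = [6.488, 6.472, 6.451, 6.440, 6.433, 6.415]

mean = statistics.fmean(burst)
sigma = statistics.stdev(burst)     # sample standard deviation
print(f"report one point: {mean:.3f} +/- {sigma:.3f}")
```

Note that the scatter-derived sigma here (a few hundredths of a magnitude) is also a more honest uncertainty than the quoted 0.02 mag per point.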

There are several observers using unfiltered cameras. I really recommend that you observe some other target. Novae have all kinds of structure, like emission lines, that really mess up any kind of magnitude estimation with an unfiltered system. Your magnitudes are usually too bright, and can't be correlated with anyone else's. There are plenty of stars where unfiltered photometry is useful; just not bright novae.

The light curve is trending downward at about 0.05mag per day, after its 4-day pause. We are 25 days after the peak, and the nova is still bright!

I'm Carlos Tapia, the builder of the filter database. I'm looking to release the transmission values of some filters that are important for research (photometric ones only), like this work with the nova.

One comment on the Baader V filter: it is physically composed of a stack of two filters and, like the original Johnson photometric filters, uses the same glasses. The newer filters like the Astrodon are interference filters.

I will be delighted to help with whatever AAVSO users need with filters.

First of all, many thanks to Carlos Tapia for the nice filter curves.

I combined the Baader and the Astrodon with a KAF-3200ME efficiency curve and then compared them to the latest photonic V definition from M. Bessell (2012 PASP paper). Curves and tables are in the attached PDF. I did find the 1990 Bessell paper in my own archives, and its curves for the V channel are far from those of these two commercial filters. In fact, the 1990 paper's V curve is nearly identical to the modern ones (the 2012 paper and others).

Those filters are both far from the system (filter x sensor x optics) standard defined by Bessell and others (Pickles, Apellániz, CDS...). The standard recommendations from those various sources are all nearly identical in the case of the V channel. I had never looked at the commercial filters, as I don't use them and had no good curves. But now I realize they are no better than the G channel of a DSLR (models since about 2007). The DSLR channels are even far better when used in the proper combination!

I'm slowly working through the submitted images; please excuse my speed! Miguel contributed a set of 40 images that he took on August 25, 2013 UT. There were 10 images in each of BVRI, taken with a 200mm f/8 R/C telescope and a QHY9 camera (KAF-8300). He purposely defocused the stars, which is fine - the peak pixel value in the images is under the saturation limit for the 8300.

In fact, Miguel used a 3x3 binning mode for this camera. I hesitate to use such a mode with amateur cameras, as the readout node can really only handle about 2x full-well electrons. Since the KAF-8300 has a full well depth of 25K electrons, this means that you can only expose to about 50K electrons. Since 3x3 binning means 9 native pixels contribute to the signal, each one can only be filled to about 5K electrons before the readout amplifier saturates. You will get more dynamic range by either not binning, or at most binning 2x2.
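The binning arithmetic above can be written out explicitly. The 25K e- full well is the KAF-8300 figure quoted above; the ~2x full-well readout headroom is the stated rule of thumb:

```python
# How much each native pixel may be filled before the summed signal
# saturates the readout node, for different binning modes.
full_well_e = 25_000                  # KAF-8300 pixel full well, electrons
readout_limit_e = 2 * full_well_e     # ~readout-node headroom

for binning in (1, 2, 3):
    pixels_summed = binning ** 2
    per_pixel_limit = readout_limit_e / pixels_summed
    print(f"{binning}x{binning}: fill each native pixel to only "
          f"~{per_pixel_limit:,.0f} e- before the readout node saturates")
```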

That said, the images look fine. The stars occupy about 25 binned pixels with their donuts, so the measurement aperture has to be at least that large in diameter. Miguel used the 80 comparison, and that is a little troublesome, because it has two fainter companions to the E and NE that blend with the comp star when defocused. Therefore, the comp star will appear a little too bright, and the variable will appear a little too dim. However, I don't see any real problems with his processing.

André contributed three images taken with his system, each one a stack of five 4-second exposures. The telescope is a TEC140, using a QSI-583 camera (KAF-8300; they are popular sensors!) and a Baader V filter.

The 28th and 30th had reasonable seeing, with fwhm around 3-4 pixels. The 29th had poor seeing, with fwhm more like 11 pixels. I hope you changed the aperture size for the poor seeing nights! The peak ADU is fine; these chips are usually set up with a gain that digitizes the linear portion of the response curve. The maximum ADU was about 36K on the 28th, when the variable was about V=6.7.

The main thing I noticed with these images is that V339 Del is very much off-center. Was there a reason for this? I don't have the master flat that was used, but I notice that the noise characteristics in the sky background are worse on the corners, so there is some vignetting in the system. The Taks are usually pretty well baffled, so I don't expect scattered light, but I'd feel a lot better about this setup if the target were closer to the center of the field.

The other concern is that I measure V=6.71 from the 8/28 image, yet you report V=6.5 or so if I read the LCG properly. I was measuring with respect to the 80 comp star, and from your stack. Your reported photometry may be the individual 5 images, but even so, this seems like a big difference twixt your measure and my quick-and-dirty iraf measure. On the 29th and 30th, we agree pretty well.

We've already discussed the possible offset due to the red wing of the Baader filter, and that is probably the major contributor to André's offset from other observers.

The 28th and 30th had reasonable seeing, with fwhm around 3-4 pixels. The 29th had poor seeing, with fwhm more like 11 pixels. I hope you changed the aperture size for the poor seeing nights!

I used quite a big aperture, but it's a good idea to check and maybe do some reprocessing...

The main thing I noticed with these images is that V339 Del is very much off-center. Was there a reason for this? I don't have the master flat that was used, but I notice that the noise characteristics in the sky background are worse on the corners, so there is some vignetting in the system. The Taks are usually pretty well baffled, so I don't expect scattered light, but I'd feel a lot better about this setup if the target were closer to the center of the field.

The basic reason was that I could then also use comp star 79, which was in the field of view. But you definitely have a point that centering is quite possible. I did use a TEC flattener, which gives a very flat field, and I make flats every time using a flat panel. In my experience this corrects very well. But maybe it's better for future measurements to put it closer to the center...

The other concern is that I measure V=6.71 from the 8/28 image, yet you report V=6.5 or so if I read the LCG properly. I was measuring with respect to the 80 comp star, and from your stack. Your reported photometry may be the individual 5 images, but even so, this seems like a big difference twixt your measure and my quick-and-dirty iraf measure. On the 29th and 30th, we agree pretty well.

I will have to check this. Maybe it's because of the large FWHM? The aperture may be too small. I will look into this...

I'm very glad to have your comments, as I'm quite new at this and have learned more about photometry in the last few weeks than in the last 3 years :) I will certainly continue with this part of astrophotography (and I just became an AAVSO member :) )

The other concern is that I measure V=6.71 from the 8/28 image, yet you report V=6.5 or so if I read the LCG properly. I was measuring with respect to the 80 comp star, and from your stack. Your reported photometry may be the individual 5 images, but even so, this seems like a big difference twixt your measure and my quick-and-dirty iraf measure. On the 29th and 30th, we agree pretty well.

[/quote]

I took a better look into this. I think you were reading the input from a day earlier in the LCG. On 8/28 I have data taken both at the end of the day and at the start of the day. The end (near 22:00 UT) was 6.7, while the early hours (0:00 UT) were 6.5. At least, that's what I see in my data...

I just reprocessed all the data in MaxIm DL and noticed this gave only very slight differences from the Canopus processing. So I expect it to be more or less OK like this...

Zap is a wonderful tool; thanks, Sara! Zapper is very similar for non-staff to use; you should try it out. In particular, today we will talk mostly about 2456521 and 2456522, around the peak of the outburst.

If you want to see what you should be getting with CCD systems, look at the data from PGD on 2456521. He was using an Optec SSP-3 single-channel photometer, but we have a million channels - why can't we do this well? The data are precise, to within a millimag or so; there is no question of a fade during the time series. The only thing missing that I see in the observations is the uncertainty measure. In contrast, look at the slightly earlier data on that night, and on the following night, from DTTA. This observer is doing the same filters (BVR), but has scatter 10x what the PEP observer was doing. This is what I'm trying to achieve: if all CCD observations looked like the PEP observations, researchers would be ecstatic and would clamour for AAVSO data.

Back to DTTA's data: the observer quotes an uncertainty from 0.005mag to 0.05mag per point. Yet, the observations have peak-peak amplitude of 5.12-4.64 = 0.48magnitudes. This looks like saturation to me. The observations are also untransformed; BV basically follow PGD pretty well, but R is too bright (probably even worse saturation). Another clue is that DTTA also observed at U-band (very good!). The U-band observations have far smaller scatter, as U requires much longer exposures than the other bands and therefore is less likely to saturate. So several comments: (1) the error analysis is wrong; (2) the images are likely saturated; (3) with this poor of a dataset, there is no reason to be reporting with this cadence; (4) such datasets obscure the really good ones, like from PGD, when using the Light Curve Generator. If DTTA is reading, I'd like to see a handful of the images from that night.

We've also marked a set of V-band measures on 2456523 from MFB as discrepant (they were 0.6mag too faint), which won't show up on LCG. SAH has a BVRI set from 2456520 that look about 0.2mag faint, but only one set; the same for 2456521. Later nights, SAH's measures agree pretty well with others. I bet on saturation, which was corrected on later nights, but I'd like to see the images. HPIA and LMZ also have single V-band estimates on that date that look too faint. NRNA has a small group of BVR measurements that should have been averaged and a single point submitted. NRNA's error estimate is also far too small; the I-band measures, for example, span 5.00-5.12, yet are reported with 0.0085 error.

Now, the question is what to do with data like the BVR measures from DTTA. We usually go back to the observer and ask them to inspect their images and see if there is a problem. Perhaps the observer was imaging through clouds, and so a larger uncertainty should be applied. Perhaps the images are saturated, in which case the observer should remove them from the database (you can't recover good data from saturated images, especially if the sensor is ABG). It is hard for AAVSO staff to know whether a dataset is good or bad, so we have to rely on the observers to carefully examine their own data before submission (or fix it after submission). That "fix it after submission" is important - you can go back and correct your data, and that is far better for the researcher (and less embarrassing for you) than leaving discrepant data in the database. The more our data looks like PGD, the happier I will be!

After 6 days of clouds and rain I could finally do some photometry again. It was not too long, but I still could grab 150 images, which I stacked in groups of 5. The result was a very consistent series with a standard deviation of only 0.0075 mag, so I was quite happy with this. I re-uploaded all the data with a comment added that it was taken with the Baader V filter, and I made images today (as Arne suggested) with the nova more centered in the FOV. I really hope my data can be used scientifically sometime, maybe with some correction afterward for the H-alpha leakage... and even otherwise it still makes a nice time series :)

Neil sent me a set of images from 2013-08-30. He has a unique system, using a 50mm camera lens along with a filter wheel and an ST-7 ABG camera. He was running the camera at -16C, with 60-second exposures at V. BRI are mostly in focus; V is out of focus due to the optical configuration. Neil, what focal ratio do you run this system at, and whose filters? Most 50mm lenses are pretty fast, and most interference filters don't like fast beams.

His data pretty much fall on the generic light curve for the nova. However, I have a few comments about the images. He takes multiple images and stacks them before submission - excellent. However, the variable is near one edge rather than being centered. He has enough field of view that there are plenty of comp stars with the target centered, so centering is a far better situation. The system also has vignetting, so centering reduces any residual effect. The out-of-focus V images are interesting; they have a reasonably sharp core along with a wide "base". At least, these will be easier to centroid with most software packages.

Besides centering, the other thing that bothers me are the flats. While Neil keeps the peak ADU in each science image low enough that the ABG effect does not impact the linearity of the photometry, the flats are exposed to a much higher level. I calculate the peak ADU in the V flat to be about 51K, for example. I would be very careful and determine that this is still within the linearity range of the camera before exposing to this level for your flats. It is usually better to take more flats at about 30K ADU to reach the same signal/noise, than to take fewer high-level flats.

I think this is all of the images that have been sent to me. I've requested others, but those observers have not responded. If I've missed any, or anyone wants a critique, let me know.

I have run this camera lens at f/2.0, which is as fast as it would go. I went fast to keep the exposure times lower, but this info regarding filters is certainly cause for change. All my filters come from Astrodon. What f-ratio would be a good stopping point?

The flats exposure issue is a face-palm moment for me. It makes absolute sense. This is an easy fix as is the centering issue. My previous measures were with the nova and comps more central, but on that night I repositioned to try different comp stars to see if there would be any difference and in the hopes of comparing my results with SAH's. It's interesting how such decisions can teach you something totally unexpected! Thanks to all for the discussion pointing out just how critical centering can be in wide-field set-ups.

To answer your question about observing in later post, I can say at least for me there have been more clouds lately, and my real-world schedule just hasn't cooperated with the fewer nights it has been clear.

To date, I've looked at the light curve daily and checked for obvious discrepancies. The issues that I've seen with the datasets:

- saturation. This is becoming less of an issue now that the variable has faded to V=7.5, but it makes some of the early observations questionable.

- too many submitted datapoints. If you take a quick grouping of images on one night, you should be reporting the average and standard deviation, not every single image. I know it is extra work, but the researchers will thank you.
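A minimal sketch of that reduction (plain Python; the magnitudes are hypothetical): average the quick grouping from one night and report a single point with its scatter:

```python
from statistics import mean, stdev

# hypothetical V magnitudes from one night's quick grouping of images
mags = [7.512, 7.498, 7.505, 7.521, 7.509]

avg = mean(mags)
err = stdev(mags)  # sample standard deviation as the reported uncertainty
print(f"report one point: {avg:.3f} +/- {err:.3f}")
```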

- incorrect settings. A couple of observers used the wrong filter; others placed the target away from the center of the field. Some used too small an aperture; others had problems with one comp star but not another.

- not looking at your data. Surely you can spend a few minutes after submission to see how your measures compare with others'. That can catch some obvious errors very quickly. Those who are observing defocused will have more difficulty in looking at the images, because donuts are not nice clean stellar profiles. They are a great way to see how poor your collimation is, though!

- submitting with improper uncertainties, or times with millisecond precision. Observing unfiltered. Learn the limits of your equipment.

- no transformation. While Halpha does affect the later photometry as the continuum fades, you can get much closer to the standard system if you transform. The nova is basically colorless now (but not earlier), but the comp stars are much redder, and applying transformation corrects for their color.
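For readers new to transformation: the first-order correction is just a linear term in the star's color. A sketch with made-up numbers (the coefficient Tv and the magnitudes are illustrative, not from this thread):

```python
def transform_v(v_inst, b_minus_v, Tv, zp=0.0):
    """First-order transformation to the standard system:
    V = v_inst + Tv * (B - V) + zp."""
    return v_inst + Tv * b_minus_v + zp

# a red comp star (B-V = 1.2) with a hypothetical Tv = -0.05:
print(round(transform_v(9.500, 1.2, -0.05), 2))  # 9.44
```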

- differences in filters. As shown in this thread, Halpha is the culprit with most novae. It is bright and wide, and contributes quite a bit of flux. Filters from different vendors have different passband shapes, with some having more throughput at Halpha than others. This will cause a basic offset between observers, even after transformation.

I would also like to suggest that observers consider doing an all-night time series, as this makes it MUCH easier to see how your data overlays with other observers. Finally, I notice that the number of observations is decreasing. People are getting bored, or there is more bad weather this week! The nova is still BRIGHT; even at 7th magnitude, it is the brightest nova we've had in years. Get out, keep monitoring, see how long you can follow it. I bet everyone can still get good signal/noise 6 months from now - wouldn't it be wonderful to have a high-accuracy light curve of a nova that is months long?


Weather here in Europe (Netherlands) has been very bad lately. The weeks after the nova were very good. I will try to keep doing some time series in the future, as I do like this kind of work, but I have to divide my time with doing some imaging as well.... :) Also, I think that in 6 months the nova will not be visible because it will be too close to the Sun then... :(

First, I think that Arne's offer to review images, and the many, many people who have engaged with it, is a tremendous, positive set of developments. There is a clear yearning and desire to make the most out of one's measurements, and this bodes well for the precision of future data - especially of bright objects.

I don't have access to the images, so my additional comments may be completely at odds with the actual data people are taking. But I wanted to mention some potential additional sources of systematic errors:

There is a tendency to want to see that the sky is "flat" - that is, that it has a constant "DC" level across the image. Two issues complicate and potentially defeat that intention:

- scattered light. The baffling of telescopes is far from perfect, and especially if there is moonlight nearby, the sky gradient may not be flat due to scattered light. Scattered light is "additive" whereas pixel sensitivity is "multiplicative". One should never flatten out actual, real gradients in scattered light across the field.

- the optics of a system, especially a wide-angle system, can result in a significantly different pixel scale near the edges of a CCD chip (relative to the center). If you insist on producing a constant mean value in your flat in such circumstances, you are introducing a radially dependent zeropoint error. I don't know how much the scale varies for most wide-angle camera lenses, but I can tell you that the pixel scale changes on many of the mosaic CCD imagers at large observatories are enough to significantly affect the zeropoints of edge stars relative to center stars on the image. I would be surprised if such detectable pixel-scale effects were not present in wide-field DSLR images. Note that this is an entirely distinct effect from vignetting, which is the focal plane seeing a smaller collecting aperture at the edge of a field.

The good news is that high-precision is achievable. The bad news is that, under certain conditions, one really needs to work to get the precision out!

Interesting question! That's similar to what I sometimes say about flat targets.

About "light scatter": I don't see the problem at first order. It's additive, yes, but it is eliminated by the background subtraction that is normally part of the photometry algorithm. It should be flat when it is the image of the sky. But if some of that background comes from a different optical path, like an internal reflection/diffusion, its vignetting differs from the sky background vignetting, and then you are correct, it's a little bit more complicated!

But, ok, we could just consider it an independent illumination that adds the same amount to the background measuring area AND the foreground measuring area (the star). The normal photometry process eliminates it, no problem, flattened or not.

"Pixel Scale": this is what we call "geometric distortion" in optics. Normally lenses are corrected for such distortion (hence the large front "field lens" you see in wide-angle lenses and most zooms, much larger than the true aperture). But I agree that in modern zooms the correction is far from perfect! Fixed focal length lenses are much better, and old ones are excellent (at that time there were no electronic corrections for it...). Those old lenses are often as good as 0.1% at the image edges on APS-C format, absolutely no problem for us! Anyhow, we should never use the edges of the image; the sensors are not at their best in that area (I eliminate the 100 edge pixels from any photometry in my software). And I would not use a focal length shorter than my old Nikkor F 85mm stopped at f/2.8 on a Canon APS-C format. There are a number of issues with shorter focal lengths, not only the geometric distortion...

So what is the relationship with the flat? In fact most lens flats are made with flat targets near the lens, and often right in front of it. They are not at the "object plane" but at the "pupil plane"! This is very different!

At the object plane it is the retro-projected pixel surface that determines the pixel illumination. At the pupil plane it is the whole pupil surface of the flat target that illuminates each pixel, under the angle corresponding to that pixel. The luminance uniformity of the pupil is not important; what determines the pixel illumination is the overall radiation within the angle range corresponding to the pixel: the "Lambertian" character of it, the deviation from the cosine law across the pupil surface. This flat target position simulates the case of stars in the sky very well. Except that there are few (no?) target material surfaces that are truly Lambertian! The only way is to measure it and apply a polynomial correction to the flat (easily a couple of %).

Anyhow, I strongly recommend using longer focal length lenses (the classical 85, 135, 180, 200mm when used on APS-C). There are many such old Nikkor F lenses available surplus for $100. They fit Canon bodies using a small adapter ring and are perfect for our photometry, providing a lot more photons, large dynamic range at ISO 100, easy focusing, and none of the above problems!

Interesting point about distortion of pixel scale in wide field DSLR photometry.

When plate-solving such images, the software must make a model of this distortion to do a good astrometric solution and the software that I use a lot for this, Scamp (meant to work together with SourceExtractor) has the nice feature to generate a check-plot of this distortion model. This is very useful, because with a single look on the plot you will instantly know when you made a mistake and the plate-solving failed to give a good result (e.g. you gave a wrong hint about the center position to the software), but of course it also tells the user a lot about the optics. I enclose an example distortion map for my SMC Pentax M 50mm 1:1.4 lens (from the 70s of the last century) for illustration. I'm sure other software offers comparable features.

Beautiful, and a beautiful example of a fairly easily detected pixel-scale-induced photometric offset. If I have read the image correctly, pixels near the edge have a photometric zeropoint that differs from the center by the square of the pixel-scale ratio. In this case, that would be between 3 and 4% - or more importantly, 0.03 to 0.04 mags!

Indeed, so in this example, if you want to limit the error in question to below (say) 0.01 mag, you are left with the area of the image that is, in the color coding, reddish to orange, ca. 400 pixels in radius and therefore about 9 degrees in diameter on the sky at the given pixel scale. I had picked comparison stars in a 5 degree radius around the target for ensemble photometry; I guess I should better limit it to (say) 3 degrees.
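The arithmetic behind those numbers can be sketched directly: a pixel whose scale is larger by a factor s covers s² as much sky, so an uncorrected flat shifts the zeropoint by 2.5·log10(s²). (The 2% figure below is just the value discussed in this exchange, used for illustration.)

```python
import math

def zeropoint_offset_mag(scale_ratio):
    """Magnitude offset from an edge/center pixel-scale ratio:
    the sky solid angle per pixel scales as the ratio squared."""
    return 2.5 * math.log10(scale_ratio ** 2)

# a 2% larger pixel scale at the edge -> roughly 0.04 mag
print(round(zeropoint_offset_mag(1.02), 3))  # 0.043
```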

A center-to-edge 2% distortion seems high to me for such a lens used on a small four-thirds sensor. This is far from my experience with the Nikkor lenses and the current lab reports on the various fixed lenses I know. With such distortion my astrometry would simply not work (I have not implemented the correction, only a check of it). The error my software records is most of the time at the sub-pixel level. So using only the very center of the image looks seriously annoying to me for those who like to do wide-field (many stars) photometry...

All optical aberrations only spread the photo-electrons from a point source, like a star; they don't affect the total photon count. We just have to increase the size of the foreground measuring area to get it all. This is an issue, as it decreases the SNR and increases the risk of blending.

Then for surface sources, like the background or the flat, geometric distortion (pincushion, barrel) on one side and the point-spread aberrations (spherical, astigmatism, coma...) on the other do not have the same effect, as Doug said. The first affects the background and the flat; the second does not. Not something simple to correct! That means our background measure is ok, but the overall photon count of each star is wrongly impacted by that part of the flat. Doing it right in software is feasible but not simple...

First, I want to echo what Doug said: we're working together to improve the photometry submitted to the AAVSO. Many of you do just fine; others just needed slight modifications to their techniques, or needed to devote more time to examining their results. These are not huge changes, but they reap huge rewards. I appreciate your effort!

As for scattered light: the problem here is in the flats, not in the science frames. If the moon is near your science field, you will get scattered light into your image, often in the form of a gradient or different illumination of dust. This is ok; it is an additive feature, as suggested by Roger, and normal aperture photometry gets rid of it. However, when scattered light is in the flat, it modifies the assumption of uniform illumination at the front of the telescope. This scattered light contribution then gets divided into every science frame, and modifies the photometry. A large part of my CCD School is devoted to these kinds of systematic errors and how to get rid of them.

Likewise, optical distortion modifies the basic assumption that all pixels see the same amount of sky. This also changes the photometry. If you have coma, or your chip is larger than your corrected field of view, or if your corrector doesn't do a perfect job, photometry is compromised. That is one of the reasons that I highly recommend that you put your target and comps either in the center of the field of view, or at least at the same radial distance from the center.

For the most part, the effects mentioned above are small issues, made even smaller if you are working with the center of a field of view. The bigger effects are the ones I've mentioned before: saturation, wrong filters, scintillation, poor signal/noise, etc. However, as you correct the easy ones, you progress to the harder problems - it is the nature of being a good photometrist.

Keep up the good work! Let's continue to work on these stars and others to get the ultimate precision out of your imaging.

- make sure that you take flats with the aperture mask on the telescope, as your flats will be very different with the mask than without the mask.

There are a couple of fainter stars between the 80 star and the nova that can be used as comparisons, or as checks if you want to use the 80/98 as a small ensemble. These are about 10th magnitude and will become important as the nova fades.

I cross posted this on the Pulsations in Nova Del 2013 forum, and will also post on the PT/Data reduction forum.

Helmar's comments about keeping up with the data got me thinking. I hate to average the frames, because it's a PITA to pull into Excel and then manipulate, only to be rejected by WebObs for some format thing. But there is a better way, which I just tried. Long-time AAVSO member Jim Jones has written a VBS script that stacks images in groups of x, where you just input x in a GUI. It's available on the Maxim page, as an "extra" at no charge. It works only with Maxim as far as I know. JJ, chime in here. It's fantastic.

I just took 386 images from last night and boiled them down to 38 stacks of 10. I chose 10 at my option. Then I redid the PT on the stacks. Saved as AAVSO extended format. Then I retracted my 386 observations and replaced them with the 38 from the stacks. Voilà. All done. It took about 1 minute for "StackFitsFrames" to produce the stacks. So it's painless. Thank you Jim.
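For anyone not using Maxim, the group-of-x averaging itself is trivial to sketch (pure Python on toy pixel lists; a real script would first read the FITS frames, e.g. with astropy):

```python
def stack_in_groups(frames, group_size):
    """Average frames in consecutive groups of group_size; each frame is a
    flat list of pixel values. Leftover frames (< group_size) are dropped."""
    stacks = []
    for i in range(0, len(frames) - group_size + 1, group_size):
        group = frames[i:i + group_size]
        stacks.append([sum(px) / group_size for px in zip(*group)])
    return stacks

# 386 tiny 4-pixel "images" in groups of 10 -> 38 stacks
frames = [[float(n)] * 4 for n in range(386)]
print(len(stack_in_groups(frames, 10)))  # 38
```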

I usually am able to reduce the data from a long night by doing it while the camera is warming to room temp, and I am shutting down the equipment. This includes submitting via webobs.

A very different method for making precise photometry consists of taking spectra scaled in physical units (in erg/cm2/s/A, for example). Here is the nova's spectral evolution from August 29 to Sept. 15, taken with a low-cost spectrograph (Alpy 600) on a C11 telescope in suburban conditions:

The same plot in log scale (note temporal evolution of features):

From these data it is possible to extract the magnitude in a photometric system (Bessell BVR):

The filters are of a numerical nature (!). Details of the method (standard at many professional observatories) are given here:

Contrary to appearances, the technique is not much more complex than traditional photometry. And the return is very rich: a full spectrum plus accurate photometry, even when the spectrum is complex.
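For the curious, the "numerical filter" boils down to integrating the flux-calibrated spectrum through a filter response curve. A bare-bones sketch (the passband, flux values, and zeropoint below are made up for illustration, not a real Bessell curve):

```python
import math

def synthetic_mag(wavelengths, flux, response, zeropoint_flux):
    """Synthetic magnitude: trapezoid-integrate flux x filter response,
    normalize by the response integral, compare with a zeropoint flux."""
    def trapz(y):
        return sum((y[i] + y[i + 1]) / 2 * (wavelengths[i + 1] - wavelengths[i])
                   for i in range(len(y) - 1))
    mean_flux = trapz([f * r for f, r in zip(flux, response)]) / trapz(response)
    return -2.5 * math.log10(mean_flux / zeropoint_flux)

# a flat spectrum at the zeropoint flux gives magnitude 0 by construction
wl = [5000.0, 5250.0, 5500.0, 5750.0, 6000.0]   # Angstroms
resp = [0.0, 0.7, 1.0, 0.7, 0.0]                # toy passband
flux = [3.63e-9] * 5                            # erg/cm2/s/A
print(abs(synthetic_mag(wl, flux, resp, 3.63e-9)) < 1e-9)  # True
```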

Christian has shown some good examples of converting calibrated spectra into equivalent broad-band photometric values. This works remarkably well, as long as you pay attention to the details: using a known flux standard, having a photometric sky, widening the slit to ensure that seeing, etc. are not a limit. However, those requirements are far more restrictive than for differential, broad-band photometry, where you gain a multiplexing advantage (measuring many stars across the field of view) while losing spectral resolution. It is similar to the difference between PEP and CCD observing.

More people doing spectroscopy should try flux-calibrating their spectra. It can be fun!

Martin Dubbs has formalised an alternative approach to spectrum flux calibration, combining AAVSO photometric data with spectroscopic data from contributors to the ARAS database to produce a rather nice animation showing the evolution of the spectrum (in absolute rather than the more usual relative flux)

The results compare well with Christian Buil's direct method, but perhaps with somewhat higher uncertainty due to the already-covered problem of scatter in the pooled photometric data. More details in the thread here.

On 5/6 October (JD 2456571), I took some V band photoelectric measurements of the nova, using an SSP-3 photometer on a 24 inch telescope. Observers SRIC and EEY overlapped with me, and I wanted to post this comparison of our results. I appear to get significantly brighter values.

The four PEP observations were centered on JD 2456571.68229, .70694, .73333, and .75590, using 000-BCL-955 ("80") as the comparison star. The PEPObs data reducer does not support the nova, so I crunched the numbers on my own (PEPObs reductions for other stars match those from my home cruncher). The data were transformed using a B-V of 0.017 for the nova, yielding a delta B-V of -0.299. Below, I have linearly interpolated the two SRIC magnitudes (also transformed) bracketing each of my measurements.

   JD fraction   interp. SRIC   CTOA    CTOA error   SRIC-CTOA
A: .6823         9.770          9.698   0.001        0.072
B: .7069         9.812          9.721   0.005        0.091
C: .7333         9.840          9.752   0.006        0.088
D: .7559         9.884          9.796   0.003        0.088

The B-V value for the nova was calculated by looking at the "best" B and V values SRIC took during this time. I took pairs of successive B and V values for which both measurements had errors less than 0.030 (arbitrary). The individual B and V values were averaged separately, then subtracted. Working backwards in time, there were 8 pairs, with B-V of 0.017. Working forwards in time, there were 7 pairs and B-V of 0.015. The difference of 0.002 between the two methods had no effect on my transformation results (I suppose I should have used 0.016, to be fussy).

For those not familiar with PEP: a single observation takes place over about 15 minutes, and individual deflection times within the observation are only recorded to a one minute precision. The following JD 2456571 pairs of SRIC data were used in the bracketing:

   JDs                mags           errors
A: .68012, .68771    9.767, 9.776   0.050, 0.053
B: .70573, .71192    9.810, 9.819   0.133, 0.047
C: .73200, .73453    9.846, 9.834   0.018, 0.030
D: .75489, .75743    9.887, 9.880   0.026, 0.019

Because my clock had one minute precision, and it was set from a clock with one minute precision, there is room for slop in my times, but not nearly enough to explain the differences between SRIC and CTOA (the maximum rate of change between SRIC bracket pairs was 0.0033 magnitudes/min).
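The bracketing described above is plain linear interpolation in time; a sketch using pair A (values copied from the tables in this post):

```python
def interp_mag(t, t1, m1, t2, m2):
    """Linearly interpolate a magnitude between two bracketing observations."""
    return m1 + (m2 - m1) * (t - t1) / (t2 - t1)

# SRIC pair A: JD fractions .68012/.68771, mags 9.767/9.776,
# interpolated to the CTOA mid-time .68229
m = interp_mag(0.68229, 0.68012, 9.767, 0.68771, 9.776)
print(f"{m:.3f}")  # 9.770, matching the interpolated SRIC value above
```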

My check star was 000-BLC-945 ("98"), and its computed magnitudes (A..D) were 9.862, 9.872, 9.871, and 9.844. The PEP algorithm does not transform the check star magnitude or correct for airmass difference relative to the comparison star.

Observer EEY was also active during this time, though his data are not transformed. Given his rapid sampling (cadence under 1 minute) and my time uncertainties, I did not try interpolation of his magnitudes. I selected and averaged three EEY values: the one closest in time to my own, and the ones immediately before and after.

   EEY center JD   EEY mean   CTOA    EEY-CTOA
A: .68218          9.736      9.698   0.038
B: .70690          9.756      9.721   0.035
C: .73315          9.784      9.752   0.032
D: .75583          9.834      9.796   0.038

EEY's errors were all 0.005. If I turn off transformations in my reduction software, my own magnitudes brighten by about 0.014.

In short, I saw NV Del an average of 0.085 brighter than SRIC and 0.036 brighter than EEY. I realize I have presented all these data rather scattershot; I would be happy to provide them in another format if that would help.

Got a real nice run last night (27 images in each color, 60-120 seconds) in B, V, and Ic. V and Ic turned out very nice. The B values were 0.15 mag brighter than V. I did not post to the database as this looks weird. The images look fine, taken in turn with V and Ic. Nothing weird in the images. It's a new Astrodon filter with the good red blocking, supposedly.

Joe Patterson noted that soft X-rays are being reported from this nova (ATel 5470). In the last couple of days Arto (Oksanen) announced a *possible* 3-hourish wiggle appearing in the optical light curve. That sets the stage: it's now time to make a strong effort to detect that periodic signal!

I did a run of 1080 images with 10 sec integration time, V filter, last night. Previously I used the '80' star as the primary reference; however, it is now saturated at 10 seconds. For this set I used the '121' star with an ensemble to produce the AAVSO Extended Report. The large set of images was condensed two ways: (1) the individual observations were "boxed by 5" to create a new AAVSO Extended report; (2) the images were also stacked in groups of five and photometry performed on the combined images. The light curves from the two methods almost superimpose.

There are completely new sequence stars available at VSP, and I used an ensemble that produced an export file for LesvePhotometry (Lesve export file attached). The V curve is about 0.1 mag brighter than WGR's. I ran the series in AIP4Win and LesvePhotometry with the same results.

A very nice run indeed. I see Arne has posted some new sequence stars. If we are going to get some agreement on our runs, we need to do the same things. I can get to most of the new stars. I have to use an "F" chart, as our field is only 12-15 arc minutes. Perhaps other observers have this same restriction.

I propose that we migrate/congregate/crowd-source to using the 105, 121, 122, and 125 as a common ensemble. I cannot get to these and the 98. I can get to the 80, but have to defocus to do so. So let's drop the 80 as you have done, Richard.

I will reprocess the WGR data from two nights ago and see what happens. What do you think? All are welcome to join this standard experiment. Arne, please feel free to chime in here and set another direction. The reduction can always be redone, but once the data and images are taken, comp stars can only be eliminated, not added.

BTW: I am probably going to observe in V and I, as this seems most useful. With 10 second integrations, and so many images, this will be no big deal. Looks like it's clearing for tonight, so I will do this, unless I hear otherwise.

I reprocessed the Nova Del 2013 data from JD 6581, using the Ensemble 105, 121, 122, & 125. I did not use the 80 in the Ensemble. The result was 10.713, while the result using the Ref Star 80 was 10.720. So it made it a little brighter. I could not get to the 98.

So perhaps the difference is in the filter? I am using the latest Astrodon interference filters. I am also defocusing a lot.