1 Introduction

These notes outline the basic process for reducing images from a CCD
on a telescope. The primary purpose of this tutorial is to give a
small group of software developers some experience reducing actual
data. The examples are constructed more to clarify some of the things
that need to be done than to show the most convenient tools one might
actually use in practice.

2 Online Data

2.1 The Sloan Digital Sky Survey

The SDSS Catalog Archive Server

The SDSS CAS is a front end to a database of object parameters and
spectra for objects detected in the SDSS. One of the easiest ways to
start exploring it is through the Navigator tool. Other tools let
users explore the database schema and even make SQL queries against
the database.

The SDSS Data Archive Server

The SDSS DAS serves most of the files read and written by the SDSS
data reduction pipeline in the course of reducing the SDSS data. It is
from files in the DAS that the CAS database is generated. The DAS
documentation provides a high level summary of the SDSS processing
pipelines, including the files read and written by the different programs.

The most interesting of these files are the corrected frames and the
final object catalogs, examples of which can be found off of this
page. These files, as with many in astronomy, are in FITS format.

2.2 Others

Multimission Archive at STScI: MAST is an archive of data from the
Hubble Space Telescope (HST) and a number of other instruments.

NASA/IPAC Extragalactic Database: NED is an online database of data
and catalogs, mostly collected from papers.

The SAO/NASA Astrophysics Data System: The ADS is an on-line,
searchable database of papers. It contains abstracts for just about
any paper related to astronomy, and links to papers on arXiv, the
official journal site, or its own scanned versions. Its metadata
on specific papers often includes links to the data used in that
paper, if it is available on-line. (For example, a paper that uses
HST data will have a link to the relevant data set in MAST.)

CDS Simbad: Simbad is an online database of objects, with functionality that
overlaps that of NED.

3 Data Formats

3.1 FITS

The simplest FITS file contains a single image. It consists of a
header and a data section.

The header and data section are each an integer number of 2880 byte
blocks long, and if the content of either does not naturally fall on a
block boundary, it is padded to do so. This block length has no other
significance.

The header consists entirely of ASCII text, divided into 80 byte
header cards. Header cards are not terminated (and are not permitted
to contain carriage returns); each one starts immediately after the
one that precedes it, such that the first header card occupies bytes 1
through 80, the second 81 through 160, etc. This is an irritation if
one wants to view the FITS file on a terminal that is not exactly 80
characters wide. The following was recorded on a 78 character wide
xterm:

Each 80 character header card starts with a keyword of up to 8 upper
case letters and numbers. If the keyword has an associated value, the
ninth and tenth characters are always "= ", and the remainder of the
header card contains the value, represented in ASCII, and an optional
comment. The comment is separated from the value with a "/" character.
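As a sketch of the card layout just described, the following simplified parser (hypothetical, not from these notes) splits one card into keyword, value, and comment. It handles only simple, unquoted values, since a quoted string value may legally contain a "/":

```python
# Simplified sketch of parsing one 80-character FITS header card.
# Handles only simple, unquoted values (a quoted string may contain "/").
def parse_card(card):
    keyword = card[:8].strip()        # keyword is padded to 8 characters
    if card[8:10] != "= ":
        return keyword, None, None    # e.g. COMMENT, HISTORY, END cards
    value, _, comment = card[10:].partition("/")
    return keyword, value.strip(), comment.strip()

card = "BITPIX  =                   16 / number of bits per data pixel".ljust(80)
print(parse_card(card))
```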

The first header card always contains the keyword "SIMPLE" with the
value "T" (true), indicating that the file is a standard FITS
file. All FITS headers contain BITPIX, the number of bits per pixel;
the NAXIS keyword, indicating the number of axes in the contained
array (usually 2 for CCD images); and a sequence of NAXISn keywords,
giving the size in each dimension. The last keyword in every header is
"END", which is followed by enough padding to bring the total length
of the header to an integer number of 2880 byte blocks.

The number of bytes in the data section can therefore be calculated
from the header. For example, for a 2 dimensional image, the size is
(BITPIX/8) * NAXIS1 * NAXIS2, plus padding to bring the data section
up to an integer number of 2880 byte blocks.
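That size calculation can be written out in Python; the header values below are invented for a hypothetical 2048x2048, 16-bit image:

```python
# Sketch of the data-section size calculation for a hypothetical
# 2048x2048 image with BITPIX = 16 (the values are made up).
BLOCK = 2880
bitpix, naxis1, naxis2 = 16, 2048, 2048

raw_bytes = (abs(bitpix) // 8) * naxis1 * naxis2         # bytes of pixel data
data_section = (raw_bytes + BLOCK - 1) // BLOCK * BLOCK  # pad to whole blocks
```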

The data section immediately follows the header, and is a simple
sequence of pixel values. The type of the pixel values is determined
by the value of BITPIX. Negative values of BITPIX indicate IEEE-754
floating point values; positive values indicate integers. So, a BITPIX
of 32 indicates 4 byte signed integers, while a BITPIX of -32
indicates 4 byte floating point numbers. The data in FITS is always
big endian.
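These rules can be illustrated with a small (hypothetical) helper that maps BITPIX to the matching big-endian NumPy dtype; note that a BITPIX of 8 is unsigned in FITS, while the larger integer sizes are signed:

```python
import numpy as np

# Sketch: map a FITS BITPIX value to the corresponding big-endian
# NumPy dtype (">" marks big-endian byte order).
def bitpix_to_dtype(bitpix):
    if bitpix < 0:
        kind = "f"            # negative BITPIX: IEEE-754 floating point
    elif bitpix == 8:
        kind = "u"            # 8-bit data is unsigned
    else:
        kind = "i"            # 16-, 32-, and 64-bit data are signed
    return np.dtype(">%s%d" % (kind, abs(bitpix) // 8))
```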

The initial ("primary") header-data unit (HDU) may be followed by
additional ("extension") HDUs. Extension HDUs follow most of the same
rules as the primary HDU, with a few exceptions.

While the primary HDU must be an image (or other regular array of
bytes), extensions may be of other types, such as tables.

An extension HDU begins with the "XTENSION" keyword instead of
"SIMPLE". The value of this keyword gives the type of the
extension.

The most important extension type is the binary table. A binary table
extension still includes the BITPIX, NAXIS, NAXIS1, and NAXIS2
keywords; the BITPIX keyword always has a value of 8, and NAXIS a
value of 2. The NAXIS1 and NAXIS2 keywords give the length of each row
(in bytes) and the number of rows. This allows FITS readers that
cannot interpret FITS binary tables to calculate the length of the
data section and skip to the next HDU using the same calculation used
for images.

FITS binary table headers also contain a number of keywords for each
column in the table. For example, TTYPE4, TFORM4, TDISP4, and TUNIT4
contain the column name, data type, recommended display formatting, and
physical units for column 4 of the table. TFORMn is the only one of
these that is actually required by the standard, as TFORM1 is needed
to determine the starting byte of column 2, and so on, but at least
TTYPEn is also usually present.

Note that cell values in FITS binary tables may be arrays themselves.

3.2 ASCII

ASCII tables, particularly whitespace separated value tables, are
pretty common in astronomy. Some older tools use them for
everything except images, while newer tools have moved to using FITS
binary tables for many things.

ASCII configuration files are still ubiquitous.

3.3 VOTable

VOTable is an XML file format for tables, designed so that they can be
directly mapped to and from FITS binary tables. Several newer FITS
libraries use a common interface for VOTables and FITS binary tables.

VOTables have the advantage that they can be streamed before
completion, while a FITS table must declare its length in its
header.

4 Data Analysis and Exploration Software for Astronomy

4.1 Plain old UNIX/linux

One can explore at least the header of a FITS file using standard
linux utilities. The header of the FITS file is plain ASCII text, but
has a few quirks; each line of the header is exactly 80 characters
long, but is not delimited. One way to look at the header in linux
that accommodates this is to use hexdump:

4.2 ds9

ds9 can read gzipped FITS files transparently; there is no need to
decompress the file explicitly. When initially opened, the scaling of
the image pixel values to gray levels is often not useful. You can
change the scaling by choosing the "scale" tab. In the example images,
the "histogram" and "zscale" (under "more") options are good
choices. You can also fiddle with the scaling using the right mouse
button on the scale bar at the bottom of the image.

You can center a point on the image by clicking on it with the middle
mouse button.

You can also look at the header of a FITS image in ds9 using the
"File/Display Fits Header…" menu option.

For the example image, you can see that there is a sharp, vertical
discontinuity in the sky level of the image. This is because different
amplifiers read out the right and left halves of the CCD.

4.3 ftools and fv

ftools is a collection of executables for manipulating FITS
files. These include extraction of data into ASCII tables, printing
headers, extracting or editing the contents of either the headers or
data, plotting and arithmetic, validity checking, and more.

On sdsslnx33, to make ftools easy to set up, you need something like
these lines in your .bashrc:

Then, before running any ftools commands, you need to run the
heainit script:

bash$ heainit

fv, technically part of ftools but available and often used as a
distinct entity, is an interactive GUI for exploring FITS files. Its
real strength is in dealing with tables, and it has hooks for using
ds9 for image viewing.

4.4 Python

There are several Python libraries that support reduction of
astronomical images. Three of the most important are:

NumPy: a library of tools for manipulating arrays of data, such as
images;

PyFITS: a library for reading, writing, and otherwise manipulating
FITS files; and

numdisplay: a tool for interacting with ds9 from Python.

On the EAG cluster at Fermilab, using UPS to setup stsci_python
provides python with these libraries:

The image and table data all get stored in numpy arrays, and can be
explored visually and mathematically using any of the many plotting
and math tools available for them (e.g. matplotlib).

In the example, the mags field returned a two dimensional array; each
row of the table was itself an array of numbers, in this case an array
of five magnitude values, one for each SDSS filter. The slice
(mags[:,2]) grabs just the third element of each row, which
corresponds to the r filter.
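The slicing can be sketched with a toy stand-in for the mags array (the magnitude values below are invented):

```python
import numpy as np

# Toy stand-in for the mags array: each row holds the five SDSS
# (u, g, r, i, z) magnitudes of one object; the numbers are invented.
mags = np.array([[18.1, 16.9, 16.3, 16.0, 15.8],
                 [20.2, 19.0, 18.5, 18.3, 18.2]])
r_mags = mags[:, 2]     # third element of every row: the r magnitudes
```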

4.5 R

R is still fairly obscure within the astronomical community, but it is
my personal favorite tool for plotting and statistics, and there is a
FITS reader in CRAN. R is available as part of most linux
distributions, and the FITS reader is easy to install. From within R:

> install.packages("FITSio")

downloads the FITS reader from CRAN and installs it. You need to load
the package before using it:

4.6 IRAF and PyRAF

IRAF is an environment and large collection of tools for reducing
astronomical data. The framework itself and many of its tools were
developed by NOAO. The NOAO collection has been supplemented
significantly by STScI.

IRAF is installed on the EAG cluster as a UPS package. The UPS setup
calls an IRAF environment setup script that only works in csh or its
descendants, such as tcsh:

One quirk of IRAF is that this setup script only works under csh or
one of its descendants (e.g. tcsh), and not sh or bash. Hence the
execution of tcsh above.

Early versions of IRAF were designed for interaction with the user on
a text terminal, and to show images on an IIS Model 70 display. As
monitor and display technology progressed, programs such as ximtool,
SAOimage, SAOtng, and now ds9 have replaced this external display, but
still interact with IRAF using a (now modified) version of the IIS
protocol over either fifo pipes, UNIX domain sockets, or inet
sockets. This interaction continues to be a bit brittle, and sometimes
IRAF, the viewer, or both need to be restarted.

4.7 Others

Many other tools for display and analysis of FITS images are
available. Some examples include:

Dervish and astrotools: Dervish and astrotools consist of a
collection of C and Fortran libraries with tcl wrappers for analysis
of astronomy data. These tools were originally developed for the
SDSS. Dervish uses a display interface for images similar to
IRAF's, allowing the use of ds9 for display of images. See
my notes on Dervish for more information on using Dervish
and astrotools.

Aladin: Aladin is a Java utility that lets one download image and
catalog data from a variety of archives and perform some basic
image analysis. The functionality overlaps ds9.

Tool for OPerations on Catalogues And Tables: TOPCAT is a tool for
exploring and analyzing catalogs and tables, with functionality that
overlaps fv. (See Astrogrid below.)

Astrogrid is a collection of tools for doing astronomy with web and
grid services. The collection contains TOPCAT. Download the Java
jar files from the Astrogrid download page, and start them with java:

5 The example data for reduction

The SDSS Photometric Telescope (PT) and its CCD camera are a
reasonable example of a "plain vanilla" astronomical instrument. As an
example exposure, this tutorial will use PT exposure 114174, taken on
MJD 52008, the night of April 8, 2001. The raw data for this entire
night can be found in the SDSS DAS directory for that night.

7 Image correction

7.1 Introduction

The number of counts recorded in the image on a pixel is a linear
function of the flux falling on that pixel in the CCD:

\[
\text{counts} = a \cdot \text{flux} + b
\]

Both a and b vary by pixel, and may also vary with time. The time
variation is such that it is generally wise to recalculate them on a
daily basis, although, on the PT, in practice one can get away with
using values from nearby nights.

The value of b, the bias, can be measured using a combination of zero
second exposures and overscans. The linear constant, a, can then be
measured using observations of a uniformly illuminated screen, or
perhaps a blank area of the sky.
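Inverting the relation above gives the correction itself, flux = (counts - b) / a. A toy sketch, with invented arrays standing in for the real calibration frames:

```python
import numpy as np

# Toy sketch of inverting counts = a * flux + b per pixel; the bias (b)
# and gain (a) arrays here are invented, not measured.
counts = np.array([[110.0, 220.0],
                   [330.0, 440.0]])
bias = np.full((2, 2), 10.0)    # b for each pixel
gain = np.full((2, 2), 2.0)     # a for each pixel
flux = (counts - bias) / gain
```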

The process described below is neither the optimal nor the most
convenient approach. Rather, it is designed to clarify what needs to
be done.

This page only describes the most common image correction operations:
bias subtraction and flat fielding. A few others are sometimes
needed. Charge Transfer Efficiency (CTE) correction accommodates
inefficiency in the transfer of charge between pixels during
readout. Dark correction removes the natural accumulation of charge
over time not due to hits by photons. The former is unimportant for
most uses of PT data, and the camera on the PT has a negligible dark
current, so both can be ignored here.

7.2 Bias subtraction

The Master Bias

The sample data directory contains a sequence of bias images,
exposures 114031 through 114041. These are all zero second exposures,
readouts of the CCD taken without opening the shutter. These images
may still, however, contain cosmic rays. A quick-and-dirty way to get
a bias frame is to median filter the images. I will use Python and a
few Python libraries: pyfits, a Python FITS reader; numarray, which
handles arrays such as FITS images; and numdisplay, which provides a
Python interface to ds9 for viewing data arrays.
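The median-combination step can be sketched with toy 2x2 frames (a real script would read the frames with pyfits):

```python
import numpy as np

# Sketch of the quick-and-dirty master bias: median-combine a stack of
# (toy 2x2) bias frames so that pixels hit by cosmic rays are rejected.
biases = [np.array([[100., 101.], [102., 103.]]),
          np.array([[101., 100.], [103., 102.]]),
          np.array([[150., 101.], [102., 500.]])]   # this frame has two hits
stack = np.array(biases)                # shape (3, ny, nx)
masterBias = np.median(stack, axis=0)   # per-pixel median over the stack
```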

The numdisplay display quantizes the image to 256 gray levels before
sending it to ds9, and does not always choose useful transformations
automatically. It has a number of parameters that let you control
this. In the above example, I use the median function to get the
medians of the columns in the bias array to get an idea of what the
bias level is, and force the min (z1) and max (z2) limits of the
transformation accordingly.

While the master bias frame takes into account pixel to pixel
differences in bias, the bias can change significantly with time over
the course of the night; the true bias of the data image will be
slightly different. These differences can be estimated using overscan
regions: pixels in the final image that have been read through the CCD
amplifier but that were not part of the exposed image. For the PT,
these data values are obtained from the CCD by reading more pixels
per row and more rows per readout than are actually present on the
CCD. In other instruments, actual obscured pixels are used.

There are no officially standard FITS keywords for designating which
sections of a readout are overscan, but the information is
generally present somewhere in the header. The PT data uses the header
keywords BIASSEC0, BIASSEC1, DATASEC0, and DATASEC1 to designate the
areas of the CCD used for the row bias and data sections for
amplifiers 1 and 2, respectively. There is also a column bias section,
but it is not explicitly specified in the header.

Bias variations over the night are often modest. We can check whether
use of the overscan region is necessary by looking at the statistics
of the overscan regions of the bias image, as debiased by the master
bias frame alone. If these are near zero with minimal variance, then
the master bias frame alone is doing a good job.

I start by reading the keywords from the header that tell me where in
the image the overscan regions are stored, and then extract the
corresponding regions:
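The extraction amounts to array slicing; in this sketch the slice limits are invented rather than parsed from the BIASSEC/DATASEC keywords:

```python
import numpy as np

# Sketch of cutting out overscan and data regions with array slices.
# The slice limits below are invented; real code would build them from
# the BIASSEC0/DATASEC0-style header keywords.
raw = np.zeros((2048, 2128), dtype=np.float32)  # stands in for a raw frame
overscan1 = raw[:, 0:20]      # overscan columns for one amplifier
data1 = raw[:, 40:1064]       # data section for the same amplifier
```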

Notice that there are signs that something funny is going on; when
zoomed out, you can see faint striping. Further exploration shows
that, although strange, the level of the signal involved is much lower
than the noise we expect in our final image, and so we ignore it.

Now we look at some statistics to see whether we need to change the
level based on the overscan region, and whether differences in pixel
to pixel variations are significant:

Note that the standard deviation of the bias section is less than that
expected from the read noise alone after the master bias is applied;
the master bias does a good job taking out the pixel to pixel
variations in the bias. So, we only need to apply the offset in bias
level:

The "array" function turns a list of arrays of the same size and type
into a single array, in which the first axis corresponds to the index
of the list. The median command takes the median along the first axis.

Now extract just the data section of the flat field. Again, it would
be safer to determine the limits using the DATASEC0 and DATASEC1
keywords from the FITS header:

>>> dataFlat = flat[21:2069,40:2088]

Now normalize the flat field to have a mean of 1. The "mean" function
only takes the mean along the first axis, so I use the "reshape"
function to cast the entire 2-dimensional flat field image into a one
dimensional array.
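With modern NumPy the reshape workaround is unnecessary, since mean() averages over all elements by default; a toy sketch:

```python
import numpy as np

# Toy sketch: normalize a flat field to a mean of 1.  Modern NumPy's
# mean() averages over all elements, so no reshape is needed.
flat = np.array([[1.8, 2.2],
                 [2.0, 2.0]])
normFlat = flat / flat.mean()
```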

7.4 Write the data image, with necessary changes to the header

We now have the data in our original image bias subtracted and flat
fielded. We now need to extract the part of the array which is true
sky data:

>>> ffData = ffImage[21:2069,40:2088]

and write it to a FITS file on disk. Most of the metadata in the FITS
header of the original exposure applies to the corrected image, so we
can copy the header from the original exposure FITS file and then
update or remove the invalidated FITS keywords:
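A sketch of that header copying, with invented keywords and file names; the calls follow the astropy.io.fits interface, which is a drop-in successor to the pyfits library used in these notes:

```python
try:
    import pyfits                          # the library used in these notes
except ImportError:
    from astropy.io import fits as pyfits  # drop-in modern replacement
import numpy as np

# Sketch with illustrative names: attach the original exposure's header
# to the trimmed, corrected image and drop keywords the trim invalidated.
ffData = np.zeros((64, 64), dtype=np.float32)  # stands in for the corrected image
hdr = pyfits.Header()
hdr["EXPTIME"] = 30.0                  # metadata that still applies
hdr["BIASSEC0"] = "[1:20,1:2048]"      # overscan description, now invalid
hdu = pyfits.PrimaryHDU(data=ffData, header=hdr)
del hdu.header["BIASSEC0"]             # the overscan was trimmed away
hdu.writeto("im-corrected.fits", overwrite=True)  # NAXISn come from the data
```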

8 Looking over the image, and estimating the sky statistics and PSF width

Once you see the circular mouse cursor hovering over your ds9 window,
imexam is up and running. Be careful with your window focus when
running imexam; it needs to be on the ds9 window for most of the
interesting commands to work, and yet under many window managers it
gets sent elsewhere regularly.

There are many useful commands under imexam, including:

    command  use
    a        aperture photometry around the pointed-at pixel
    m        statistics of the pixels in a rectangular region around
             the pointed-at pixel
    z        print the 10x10 grid of pixel values around the pixel
    h        plot a histogram of pixel values around the pixel
    r        plot a radial profile around the pixel

Go around the image in ds9, and choose a few isolated stars. Hover
over each, and hit "r" to plot the radial profile. If it looks
reasonable, hit "a" to print aperture photometry (and PSF fit
information) to the terminal.

After you have done this, find a few empty regions of the sky, and hit
"m" over each to get pixel statistics. In the end, your output might
look something like this:

From this, a rough guess at our sky is 417 with a standard deviation
of 10, and a PSF full width at half max (FWHM) of about 2.

9 Pixels with bad values

As an examination of the image with ds9 will make clear, there are a
number of pixels in the final corrected image with bad
values. Sometimes, these bad pixels are due to defects in the CCD or
obscuration of the CCD by other elements in the detector. Examples of
this in our sample are column 735, and the rectangular regions at the
top and bottom of the image.

Defects in the CCD can be picked out by hand, and are relatively
stable (although new ones may appear with time). Bad pixel
masks are often discovered and recorded manually.

The other major cause of bad pixel values is cosmic rays. While bad
pixels due to CCD defects persist with time, each image has a
separate set of cosmic ray pixels. When confronted with a single
exposure, the cosmic rays must be distinguished from astronomical
objects morphologically. Images taken through the telescope are
blurred slightly by the atmosphere and optics of the telescope, and
therefore do not have sharp edges. Cosmic rays, on the other hand, do
have sharp edges.

For example, see this image from our sample exposure:

Compare the two bright pixels on the left (at column 870, row 1854 of
the image) to the fuzzy blob on the right (at column 978, row 1894 of
the image). The latter is a star, and has the appearance of a point
source; any true image of the sky must be at least this blurry. The
pixels on the left are clearly much sharper; they were caused by a
cosmic ray.

There are several approaches to detecting cosmic rays
automatically. The IRAF cosmicrays command compares each pixel to the
statistics of nearby pixels. SExtractor can use a neural network approach
to distinguish cosmic rays from astronomical sources.

One of the best methods for detecting bad pixels is to use multiple
exposures of the same part of the sky. If these exposures are to be
used to detect CCD defects as well as cosmic rays, then the exposures
should be offset slightly in pointing so that a given point on the sky
is seen by different pixels in each exposure. With these multiple
exposures, pixels that give wildly different values from other
exposures of the same part of the sky can be flagged as bad
pixels. Pixels that are bad in all exposures can be flagged as CCD
defects, while those that are bad in only one exposure can be
considered cosmic rays. This is the approach used by the IRAF
crrej task.
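The core of the multi-exposure test can be sketched with three toy registered exposures; the 5-count threshold here is invented:

```python
import numpy as np

# Sketch of multi-exposure rejection: compare each (toy, registered)
# exposure to the per-pixel median of the stack; large deviations are
# flagged.  The threshold of 5 counts is invented for illustration.
exposures = np.array([[[10., 10.], [10., 500.]],   # one cosmic-ray hit
                      [[10., 10.], [10., 10.]],
                      [[10., 10.], [10., 10.]]])
median = np.median(exposures, axis=0)
crMask = np.abs(exposures - median) > 5.0   # True where a hit occurred
```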

Once detected, the locations of bad pixels must be recorded so that
further steps in processing know to ignore them. Typically, this is
done either with a simple table of pixels or area boundaries, or with
a mask in which a bit marks each pixel as good or bad.
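A bit mask can be sketched like this; the bit assignments below are invented for illustration:

```python
import numpy as np

# Sketch of a bad-pixel bit mask; the bit assignments are invented.
CCD_DEFECT, COSMIC_RAY = 1, 2            # one bit per cause
mask = np.zeros((4, 4), dtype=np.uint8)
mask[:, 2] |= CCD_DEFECT                 # a bad column
mask[1, 0] |= COSMIC_RAY                 # a single cosmic-ray hit
good = mask == 0                         # pixels safe to use
```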

10 Astrometric calibration

When examining an image, we measure positions of objects by measuring
the coordinates of the pixels on which their images fall. Positions on
the sky, however, are directions in space; the coordinates of a sky
position are the angular coordinates of a spherical coordinate
system. There are several such coordinate systems in general use by
astronomers, the most common of which is the equatorial coordinate
system. An object's coordinates in this system are its right ascension
(abbreviated R.A. or RA, and also referred to as α) and
declination (abbreviated Dec., and also referred to as δ), which
are analogous to longitude and latitude on the Earth.

Standard FITS uses a sequence of operations to transform the x, y
coordinates of a pixel to RA and Dec. These operations are outlined by
this diagram:

The red text shows the corresponding FITS keywords. A typical set of
these looks something like this:

Note that not all transforms use all keywords; the CTYPEs given in the
above example mean that the PV, LATPOLE, and LONPOLE keywords are
unnecessary.

Several projects have found the simple linear transform insufficient,
and have used alternate transformations which use higher order
terms. For example, in addition to including the official FITS WCS
transform in image headers, the SDSS supplies a FITS binary table with
higher order terms. There is a proposed extension to the FITS
standard, but it has not been officially approved.

10.2 Determining the Astrometric Transform the Hard Way

The Strategy

I determine the astrometric transformation by fitting to a table of
stars for which we have both the RA and Dec and the x, y position on
the image. The process consists of several steps:

Obtain a list of RA and Dec coordinates of stars in our field. The
observatory supplied WCS should be good enough to make this easy.

Detect star like features in our field, and measure good x, y
coordinates for their centers.

Match reference catalog stars, with their RA and Dec, to detected stars.

Fit transformation parameters using this data set.

Store the result in the FITS header.

Getting a reference catalog

To get a catalog of reference stars and some idea of how good the
astrometry info in the header is, use ds9 directly on the FITS file
(not through PyRAF):

bash$ ds9 im-114174.fits

Go to the "Analysis" menu (upper right of the ds9 window), choose
"catalogs", then "HST Guide Star Catalog". You should then get a table
of guide stars, and their approximate coordinates should be marked on
the image. The distance of these marks from the actual star locations
is a measure of how well the actual pointing of the telescope matches
where the telescope thought it was pointed.

Save the catalog using File (in the catalog tool window), Save… and
choosing a file name (e.g. HSTguidestars.rawcat). Examining this file
with emacs, I see that columns 4 and 5 have the data I want, and
that the first 32 lines are header and metadata. I use awk to extract
just the data I am looking for:

I made the HWHM estimate based on the results of looking over the
image with imexamine. The threshold of 1000 counts I arrived at
through trial and error, aiming for a level that resulted in detecting
stars bright enough to be in our reference catalog. The tvmark
commands above were very helpful for this.

You can make plots using IRAF plotting tools to examine our test
stars. Load the IRAF plotting package:

These plots show an oddity in the PT data: the roundness of the PSF
depends strongly on the position on the CCD.

Matching the found stars with the reference catalog

We need to establish a few starting match points by hand. Going back
to the ds9 image with the catalog superposed, we can see
a number of matches. Hover the mouse over the central pixel of a
matching star, and record the x,y. Then, read the ra, dec from the
circle near it. These are a few I get:

ref x      ref y      ra         dec        starfind x  starfind y
878.80963  1133.539   199.07577  +17.65164  866.344     1131.263
1320.5986  852.7561   199.16994  +17.79276  1301.906    849.755
1258.9945  1207.396   199.05097  +17.77308  1243.331    1201.481

Now get the exact centroids of these by finding the corresponding
entries in im-114174.starfind and HSTguidestars.cat. We need to enter
these into a text file, which I called anchors-114174.obj:

These keywords provide the coordinates of a reference pixel (CRPIX,
CRVAL) and the derivatives of each celestial coordinate with respect
to each CCD coordinate, in degrees per pixel.
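The linear transform these keywords describe can be sketched as world = CRVAL + CD (pixel - CRPIX). The numbers below are invented, and the sky projection is ignored, so this only holds near the reference pixel:

```python
import numpy as np

# Sketch of the linear part of a FITS WCS; the CRPIX, CRVAL, and CD
# values are invented, and the CTYPE projection is ignored, so this
# is only valid near the reference pixel.
crpix = np.array([1024.0, 1024.0])
crval = np.array([199.10, 17.70])        # RA, Dec at the reference pixel
cd = np.array([[-3.0e-4, 0.0],
               [0.0, 3.0e-4]])           # degrees per pixel

def pix2world(x, y):
    return crval + cd.dot(np.array([x, y]) - crpix)
```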

Load the image in ds9 (outside of PyRAF), and load the HST guide star
catalog again. The marks should land on the reference stars.

We can see that the points near the corners do not fit well. This is
because the PT field is badly distorted, unusually so for a
professional instrument. However, the standard WCS keywords in FITS
headers include only linear terms.

10.3 Determining the Astrometric Transform the Easy Way

There are several online tools that can accept a FITS image with an
approximate WCS coordinate transform (such as that generated during
observation) and refine it automatically. Examples include:

If we were doing this properly, we would split the file into two, one
for each amplifier, and run SExtractor twice, once on each half, using
the values appropriate for each. The differences are small, so we
will push ahead.

We also need to supply a filter for matched filter detection. The
optimal matched filter is an image of a typical object near the
detection limit. After looking at the image in ds9, I decide to use
one from the default library: the Gaussian with a FWHM of 2.0:

Note that while the list of things SExtractor can measure can seem a
bit intimidating, it is in fact fairly limited. It is much smaller,
for example, than that provided by SDSS's photo.

12 Examining the object catalog

There are many ways to look at the resulting analysis table. One of
the most convenient to use interactively is fv:

bash$ setup fv
bash$ fv cat-114174.fits

This will present you with a GUI for exploring the FITS file. To see
the contents of the catalog, choose the "All" button (under "View") on
the "LDAC_OBJECTS" row.

This then gives you a window with the values in the table. You can
also use the features under the "File" menu to save the contents in a
text file (CSV, constant field width, or delimited with some other
character). The "Tools" menu lets you plot values, make histograms,
and do other simple table operations.

You can plot the objects found by SExtractor over the image itself
using R:

13 Photometric calibration

13.1 Magnitudes and the photometric equation

Astronomers usually measure the brightness of objects using
magnitudes. The difference in magnitudes between two objects is
defined to be
\[
m_1 - m_2 = -2.5 \log \frac{f_1}{f_2}
\]

Historically, the star Vega has been defined to have a magnitude of 0,
providing an absolute relation between flux in physical units and
magnitudes.

This system was designed by Pogson so that the brightness measure as
derived from physical units roughly matched up with the system used by
the ancient Greeks.

Note that the higher the magnitude of an object, the fainter it
is. Also note that 5 magnitudes corresponds to a factor of 100 in
flux.
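Both notes follow directly from the definition; a quick worked check:

```python
import math

# The magnitude difference for a given flux ratio; from the definition,
# a factor of 100 in flux is exactly 5 magnitudes, and the fainter
# object (smaller flux) gets the larger magnitude.
def delta_mag(f1, f2):
    return -2.5 * math.log10(f1 / f2)

diff = delta_mag(100.0, 1.0)   # object 1 is 100 times brighter
```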

If the number of counts measured for an object is proportional to the
physical flux from that object (as astronomical CCDs are designed to
be), then we can relate the counts detected from an object in an image
to its magnitude using this equation:
\[
m = m_0 - 2.5 \log(f)
\]
where m0 is the photometric zero point for the image.

(This equation only holds for the "natural" photometric system of the
instrument and filter used. If one is attempting to measure the
magnitude in some standard system, one must take into account
differences in the wavelength sensitivity between the natural system
and the reference system, which introduces color terms into the above
equation.)

Photometric calibration consists of measuring m0 for the image in
question. This can be done by measuring the flux for several sources
with known magnitudes in the reference system, and solving for m0 in
each case. Typically, one measures the photometric calibration using
careful observations of non-variable sources with well known
magnitudes in the reference system, and then derives m0 for the
observation of the target using m0 as measured from the reference
image and differences in exposure time. In observations from the
ground, one must also correct for differences based on the airmass of
the observation, or the amount of atmosphere the light from the target
passes through before it reaches the telescope.
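For the simplified per-image case used below, each reference star gives m0 = m + 2.5 log(f), and the estimates can be averaged; the magnitudes and counts in this sketch are invented:

```python
import math

# Sketch: each reference star with catalog magnitude m and measured
# counts f yields a zero point estimate m0 = m + 2.5 log10(f).
# The numbers are invented for illustration.
stars = [(16.30, 2445.0),      # (catalog magnitude, measured counts)
         (17.10, 1180.0)]
zeropoints = [m + 2.5 * math.log10(f) for m, f in stars]
m0 = sum(zeropoints) / len(zeropoints)
```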

13.2 Photometric calibration of the sample image

In our example, we will use a much simpler approach, choosing a number
of sources in our data image as references.

The join matches columns from SDSS to columns from SExtractor; the
first awk generates columns with the SDSS magnitude, RA offset in
arcseconds, Dec offset in arcseconds, and the resulting zero point for
this object.

From these plots, it looks like the photometric zero point is about
24.78. This zero point is obviously only good to within maybe 0.05
magnitudes. If we did this more carefully with proper standards, we
could do much, much better.

14 Obtaining colors

14.1 Reduce the g exposure

Image correction

To obtain colors for our objects, we must repeat the above procedure
for another exposure in a different filter. The g exposure in our
sample sequence has PT exposure id 114173. A sloppy image correction
of this exposure can be done using this python script, which I call
reduce_g.py:

Note that we can load both the previously reduced r image and the new
g image into ds9 at once.

bash$ ds9 im-114173.fits im-114174.fits

Experiment with setting the scale so that both display the data well
and with a similar appearance. I find setting both to "log" scaling,
the g limits to 480 and 680, and the r limits to 400 and 600 works
well. (You can set the scaling limits using "scale parameters" under
the scale menu from the menu bar.)

Because the two exposures were taken in quick succession without
slewing or offset, they are well aligned. Blinking between the two
exposures is particularly useful. Choose "Frame" from the ds9 menu
bar, then "blink frames". Note that a number of cosmic rays in
each image become apparent, but the CCD defects remain the same across
images.

We use imexam to check out the results as before.

Astrometric calibration

We can reuse the same guide stars from the r image, which we saved in
HSTguidestars.cat. We do need to get x, y coordinates using estimates
from exposure 114173:

The join matches columns from SDSS to columns from SExtractor; the
first awk generates columns with the SDSS magnitude, RA offset in
arcseconds, Dec offset in arcseconds, and the resulting zero point for
this object.

This looks more reasonable; the blue horizontal branch is clearly
visible. It still doesn't look very impressive.

15 Refining the object catalogs

Once these initial object catalogs have been created, the catalogs of
stars used for photometric and astrometric calibration can be filtered
and limited to the best sources for the specific images in
question. One can also generate models of the sky background and the
point spread function, both of which will vary over the field of
view. With these refined calibrations and models one can repeat the
object detection and measurement for better results.

16 Additional steps

16.1 Image registration and coaddition

For a multitude of reasons, it is useful for all exposures of a
particular region of the sky to share a coordinate system. In other
words, all of the images should have the same transformation to world
coordinates.

Accomplishing this requires generating a new image with the desired
astrometric transform, and interpolating the pixel values of each
input image onto it. There are several utilities for doing this,
including drizzle, IRAF's imalign task, and the Terapix SWarp
utility.

Once different exposures are on the same coordinate system, several
operations become possible. One can create color images by assigning
images taken through different filters to the r, g, and b channels of
a color image. One can look for variable objects by scaling and
subtracting, and obtain a single, higher signal to noise image by
adding the images together.

16.2 Object detection on multiple images

One of the most common techniques for detecting low signal-to-noise
objects using multiple exposures is simply to detect objects in a
weighted addition (coadd) of the exposures. Improved techniques that
can more optimally combine exposures in different filters and with
different PSFs are being developed, but are not in widespread use.
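The weighted addition can be sketched with inverse-variance weights; the toy pixel values and variances below are invented:

```python
import numpy as np

# Sketch of inverse-variance weighted coaddition of two registered
# (toy) exposures; the pixel values and variances are invented.
images = np.array([[[10.0, 12.0]],
                   [[14.0, 12.0]]])
variances = np.array([[[1.0, 1.0]],
                      [[4.0, 1.0]]])      # per-pixel noise variance
weights = 1.0 / variances
coadd = (weights * images).sum(axis=0) / weights.sum(axis=0)
```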

16.3 Exporting the catalogs

Once the final catalogs have been generated, they need to be provided
to users in a useful way.