Beware of Silently Assuming Linear Intensity in Astronomical Images

This article points out the danger of assuming that astronomical images are encoded with linear intensity. It is aimed at the many people who perform astronomical observations with regular cameras (i.e., cameras not specifically designed for astronomical use). Not because there is anything wrong with that, but because those cameras are optimized for “normal” photography and video, not for numerical calculations on their images. The illustration below demonstrates the issue. The left photo looks normal. However, if an image is recorded with, say, a digital still camera or a surveillance video camera and displayed without any correction, it looks like the middle photo: too dark, although the black and white levels are still correct. The left photo looks normal because it is actually stored in its file encoded like the photo on the right, i.e., compensated for the darkening effect of the screen.

Demo of the effect of screen gamma

The cause of this effect is that computer screens are nonlinear devices. Originally this was because the cathode ray tubes (CRTs) used in television sets and computer monitors had a nonlinear response. That response has since been standardized for all display devices, even those based on completely different technology, such as flat panels and projectors. The response follows a power law,

\[I_\text{out}=I_\text{in}^\gamma,\]

where the Greek letter gamma (\(\gamma\)) is conventionally used for the exponent. This \(\gamma\) has been standardized to a value of 2.2.

There are two potential problems with this nonlinear behavior. The first appears when a linear device (e.g., a CCD behind a telescope) is used to record an image, which is then displayed “as is”. The displayed image will be too dark. This is not necessarily a problem, since heavy image processing is normally performed anyway before an astronomical image is displayed. The second potential problem is more serious. Most imaging devices compensate for the nonlinearity of computer screens by applying a correction to their output: they raise it to the power of \(1/\gamma\), which is about 0.45. As a result, the image will be displayed correctly, but numerical calculations performed on it will give incorrect results. The following sections show two examples of calculations going wrong. All images are assumed to have intensities in the range 0–1.
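As a minimal sketch of this compensation and its inverse (in Python with NumPy; the function names are mine, not from any particular library):

```python
import numpy as np

GAMMA = 2.2  # standardized display gamma

def gamma_encode(linear):
    """What the camera does: compensate linear intensities (0-1) for the screen."""
    return np.asarray(linear) ** (1.0 / GAMMA)

def gamma_decode(encoded):
    """Undo the compensation, recovering linear intensity."""
    return np.asarray(encoded) ** GAMMA

# Encoding brightens mid-tones (a linear 0.5 is stored as roughly 0.73);
# decoding the encoded values recovers the original linear intensities.
x = np.linspace(0.0, 1.0, 11)
assert np.allclose(gamma_decode(gamma_encode(x)), x)
```

Note that 0 and 1 are fixed points of both operations, which is why the black and white levels in the middle photo above are still correct even though everything in between is too dark.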

Example 1: Magnitude Estimation

The two figures that follow demonstrate what happens in the practical task of determining the magnitude of stars in an astronomical image. The first figure shows six stars, ranging in magnitude from 1 to 6. Each star is a single pixel. The magnitude of each star can be calculated from the brightness of its pixel, provided that brightness is encoded linearly. The estimated magnitudes are shown below the figure, and they agree exactly with the expected values.

Linear encoding

In the second figure, the brightness of the pixels has been compensated for the nonlinearity of the screen by raising their values to the power of \(1/\gamma\). The visibility of the dimmer stars has improved, but the calculated magnitudes are way off!

Gamma corrected

What went wrong here? The magnitudes are determined using the well known formula

\[m_1-m_2=-2.5\,\log_{10}{\frac{p_1}{p_2}}.\]

This formula calculates the relative magnitude of two stars from the ratio of their intensities. However, this ratio has been altered by the gamma compensation. Using a hat (^) for pixel values from the image,

\[\frac{\hat{p}_1}{\hat{p}_2}=\frac{p_1^{1/\gamma}}{p_2^{1/\gamma}}=\left(\frac{p_1}{p_2}\right)^{1/\gamma},\]

so the magnitude difference computed from the image values is \((m_1-m_2)/\gamma\): every magnitude difference is compressed by a factor of 2.2.

The solution is of course to undo the gamma correction before calculating the magnitude, by setting

\[p_i=\hat{p}^{\gamma}_i.\]
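The effect is easy to reproduce numerically. Here is a sketch in Python/NumPy; the star intensities are synthetic, generated from the magnitude formula itself, with the magnitude-1 star at full intensity:

```python
import numpy as np

GAMMA = 2.2

# Synthetic single-pixel stars of magnitude 1..6: linear intensities
# follow p = 10**(-0.4 * (m - 1)), so the magnitude-1 star has p = 1.0.
true_mags = np.arange(1, 7)
p = 10.0 ** (-0.4 * (true_mags - 1))

def estimate_mags(pixels, m_ref=1.0):
    """Magnitudes relative to the first pixel: m - m_ref = -2.5 log10(p/p_ref)."""
    return m_ref - 2.5 * np.log10(pixels / pixels[0])

m_linear = estimate_mags(p)              # correct on linear intensities
p_hat = p ** (1.0 / GAMMA)               # gamma-compensated image values
m_naive = estimate_mags(p_hat)           # differences compressed by gamma
m_fixed = estimate_mags(p_hat ** GAMMA)  # undo compensation: correct again
```

On the linear data the estimates match the true magnitudes exactly, and so do the estimates after undoing the compensation. On the compensated data every magnitude difference shrinks by a factor of 2.2, so the magnitude-6 star comes out near 3.3.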

Example 2: Image Downsampling

Image downsampling is a common operation. For astronomical images, it might for example be used to reduce the amount of noise. The simplest way to downsample an image is to simply average pixels together. For example, four pixels can be taken together to reduce the size of the image by a factor of two in both directions.

The following figure is a somewhat doctored example. It is real, though: try it in your favorite photo-editing program; most of them take the “naive” approach described below. The example serves to show that the effect of ignoring gamma compensation can be quite dramatic.

Original image for downsampling demo

Below, this image is shown downsampled by a factor of two in both directions. For the image on the left, the gamma compensation that is present has been taken into account. The image looks convincingly like a small version of the original. For the image on the right, the downsampling has been performed “naively”, i.e., by simply averaging the pixel values. The image is uniformly gray!

Correctly downsampled (left). Naively downsampled (right)

What went wrong this time? The value of a downsampled pixel is calculated using

\[p=\frac{1}{n}\,\sum_{i=1}^{n}\,p_i.\]

However, as before, the pixel values in the image have been compensated for screen gamma, and

\[\frac{1}{n}\,\sum_{i=1}^{n}\,\hat{p}_i=\frac{1}{n}\,\sum_{i=1}^{n}\,p_i^{1/\gamma}\neq\left(\frac{1}{n}\,\sum_{i=1}^{n}\,p_i\right)^{1/\gamma}.\]

Averaging the compensated values is not the same as averaging the linear intensities and then compensating the result. The solution is again to undo the compensation first, average the linear values, and re-apply the compensation for display.
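Both approaches can be sketched as follows (Python/NumPy; `downsample2x` is a hypothetical helper, and a fine black-and-white checkerboard stands in for the doctored image above):

```python
import numpy as np

GAMMA = 2.2

def downsample2x(img, linearize=True):
    """Average 2x2 pixel blocks; optionally undo gamma compensation first."""
    if linearize:
        img = img ** GAMMA          # back to linear intensity
    h, w = img.shape
    out = img[:h//2*2, :w//2*2].reshape(h//2, 2, w//2, 2).mean(axis=(1, 3))
    if linearize:
        out = out ** (1.0 / GAMMA)  # re-apply compensation for display
    return out

# A checkerboard of pure black and white pixels (0 and 1 are unaffected
# by gamma, so the stored values are also the linear intensities).
checker = (np.indices((4, 4)).sum(axis=0) % 2 == 0).astype(float)

correct = downsample2x(checker, linearize=True)
naive = downsample2x(checker, linearize=False)
```

The correct version yields pixels of about 0.73, which a gamma-2.2 screen displays as the true mid-gray intensity of 0.5; the naive version yields 0.5, which the screen darkens to a luminance of only about 0.22.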

Practical Considerations

The conclusion of this article must be that it is essential to be aware of the gamma of your images, whatever their source. It is certainly not necessary to make sure that the imaging device itself has a gamma value of 1. It is, however, necessary to make sure that the system gamma (the gamma of the complete imaging chain) is 1 before attempting numerical calculations on the intensity values.

In practice, due to the standardization of the screen gamma at 2.2, most imaging devices use a gamma-compensation value of \(1/2.2\approx 0.45\): their output is raised to the power of 0.45. To compensate for this, raise the pixel values to the power of 2.2. The two operations cancel, resulting in a system gamma of 1 and a linear representation of the intensity.