Science vs Consumer Detectors: Thank you, Mary.

I have been trying to figure out recently why people don’t understand what IR detector data looks like, given that we have examples and studies and in a few cases specifications of what we’ll get from JWST, and examples and analysis of NICMOS, Spitzer and some Keck data. My wife explained it to me, and then I looked at the numbers. She’s right, and I wanted to publicly thank her.

We have some simulated data, a cutout of which is shown here, that gives you the general idea of what the data will look like if it’s pretty good. Jay Anderson (STScI) did this simulation.

Simulated NIRCam Data with 1% bad pixels

That picture shows 1 percent bad pixels, which is pretty good. For the roughly 4-megapixel NIRCam, you’d expect to get 40,000 bad pixels, before you add any cosmic rays. That means one in every hundred pixels is a bad one. It could be one in 50. And still, that’s pretty good.
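To make those counts concrete, here is the arithmetic as a tiny Python sketch. The 4-million-pixel total and the 1% and 2% fractions are the round numbers from the text, not an official NIRCam specification.

```python
# Back-of-the-envelope bad-pixel counts for a roughly 4-megapixel
# detector. These are the round numbers quoted in the text, not a
# NIRCam specification.
total_pixels = 4_000_000

for fraction in (0.01, 0.02):  # "one in a hundred" and "one in 50"
    bad = int(total_pixels * fraction)
    print(f"{fraction:.0%} bad -> {bad:,} bad pixels")

# Prints:
# 1% bad -> 40,000 bad pixels
# 2% bad -> 80,000 bad pixels
```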

This is after we’ve done all the standard calibration: flat fielding, dark correction, geometric-distortion correction and whatever else we can figure out. Most of the cosmic rays will get removed, but maybe not all of them.

Most people look at that picture, and they are shocked. The pictures you see on the front page of the newspapers, or in glorious color magazine covers, or in big wall-sized displays don’t look like that. But to me, they look fine. Good, even: There are no ghosts, no big artifacts, no column bleeds like you see on CCDs. It’s very nice. I didn’t get it.

Mary explained it to me.

Most people’s experience is of normal optical cameras with very short, almost noiseless exposures against very bright targets, and those targets are “extended sources.” Things like people, or cars, or trees under a bright sky. These exposures are measured in hundredths of a second, and there are plenty of photons falling on the detector, and the exact efficiency of the detector isn’t all that important. Defects are very rare.

For displays, people are even more demanding. Dell’s threshold for a defective monitor is 0.0005% bad pixels, which works out to about six on a typical 1.3-megapixel display. Even that number is controversial: Some users demand replacements of monitors with a single bad pixel, and if they game the system correctly, they get it.

Even if you do have some bad pixels, say from noise in a low-light situation, you probably don’t notice them. If you get a hot pixel from a reflection off somebody’s eye, it probably gets corrected away when you do red-eye correction. If you have a few dead pixels in the corner of an image, you never look at them, because you’re looking at the whole picture, not the individual pixels in the corner.

So it’s likely that most consumers have never seen a bad pixel. If you say images have some bad pixels but are pretty good, they figure you’re talking about a handful, and that even those won’t be visible. And they don’t understand the numbers, because you’re talking about 98% or 99% good pixels, and 98% is an A in any grading scheme.

For the HST detectors we currently use, and for the JWST detectors we’ll have in a few years, the underlying assumptions are different. Exposures aren’t hundredths of a second, they are hundreds of seconds. We spend literally minutes, sometimes hours, trying to hold the camera still on a target that probably isn’t visible at all from Earth, because it’s too faint.

The scene we’re looking at isn’t bright, it’s mostly dark. (If you’re reading this at night, try this: Take your cellphone camera outside, point it at the sky and take a picture. Unless you manage to snag the moon, I expect it’s completely dark.)

Finally, we don’t look at the whole picture anyway. Even the HST observations of Jupiter don’t fill the whole camera. We zoom way in, as far as we can and still see something meaningful. We may be looking at a section of the image only a few hundred pixels across, so a single pixel is two tenths of a percent of the width of the image. That’s 400 times the defect rate Dell says justifies a replacement.
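The same zoom arithmetic, spelled out in Python. The 500-pixel cutout width is an illustrative stand-in for “a few hundred pixels across,” and the 0.0005% figure is Dell’s threshold quoted earlier.

```python
# How big one pixel looks once we zoom in, versus Dell's defect
# threshold. The 500-pixel width is an illustrative assumption for
# "a few hundred pixels across."
cutout_width = 500
pixel_fraction = 1 / cutout_width   # one pixel as a fraction of the width
dell_threshold = 0.0005 / 100       # Dell's 0.0005%, as a plain fraction

print(f"one pixel spans {pixel_fraction:.1%} of the zoomed image")
print(f"that is {pixel_fraction / dell_threshold:.0f}x Dell's threshold")

# Prints:
# one pixel spans 0.2% of the zoomed image
# that is 400x Dell's threshold
```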

The detectors are very efficient, and very susceptible to noise. They’re in space, so they live in a nasty radiation environment. They start with lots of bad (either dead or hot) pixels, and they get worse (to varying degrees) as time passes. Most of the bad pixels will be in the corners, but some will be scattered all over the place. If only one in a hundred is bad, we’ll be happy. But for JWST, that means more than 40,000 pixels will be bad.

That’s fine, though. We deal with that by taking pictures in pairs, with a very small offset on the sky between them. We stack the second picture on top of the first, and when we find a bad pixel, we replace it with the data from the other. Often, we average the “good” pixels to improve the overall image quality. Unless you get terribly unlucky (which can happen), there is a good pixel for any part of the sky on at least one of those images, and when you add the two pictures together, fewer than 100 bad pixels remain. If you use multiple pairs, say from three or four colors, and combine all those images, even if a bad pixel remains you’ll never notice it.
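That pair-combination step can be sketched in a few lines of NumPy. This is a toy illustration with synthetic images and masks, assuming the two exposures have already been registered onto the same pixel grid; it is not the actual STScI pipeline.

```python
# A minimal sketch of combining a dithered pair: take the good pixel
# where one frame is bad, and average where both are good. Images and
# bad-pixel masks here are synthetic, with ~1% bad pixels each.
import numpy as np

rng = np.random.default_rng(0)
shape = (100, 100)

img_a = rng.normal(100.0, 5.0, shape)   # two aligned exposures of
img_b = rng.normal(100.0, 5.0, shape)   # the same patch of sky
bad_a = rng.random(shape) < 0.01        # True = bad pixel in frame A
bad_b = rng.random(shape) < 0.01        # True = bad pixel in frame B

combined = np.where(
    bad_a & ~bad_b, img_b,              # A bad, B good: take B
    np.where(bad_b & ~bad_a, img_a,     # B bad, A good: take A
             0.5 * (img_a + img_b)))    # both good (or both bad): average

still_bad = bad_a & bad_b               # unlucky: bad in both frames
print(f"{still_bad.sum()} pixels remain bad out of {still_bad.size}")
```

With ~1% bad pixels per frame, the chance a given sky position is bad in both frames is about one in ten thousand, which is why so few bad pixels survive the combination.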

The insight Mary gave me was that while “1% bad pixels” means tens of thousands to me, for most people “1% bad pixels is pretty good” doesn’t actually mean anything. They hear “pretty good” and think of their camera, or their monitor. Understanding that I needed to unpack the numbers, and provide a picture like the one above, helped an awful lot. I appreciate that, and want to publicly say “thanks.”