Forgive my ignorance, but shouldn't the resolution just be a simple calculation from the field of view, the distance to Titan, and the dimension of the camera in pixels? I realize SAR isn't quite the same as a simple picture, but can it be THAT different?

The discussion of how accurate things need to be reminds me of something a senior astronomy major told me when I was a freshman at Caltech back in the 1970s:

Mathematicians insist on exact answers. Pi is NOT 3.1416.
Chemists and Engineers generally settle for a few parts per thousand error.
Physicists are happy to get the order of magnitude right.
Except Astro-Physicists, who just want to get the order of magnitude of the order of magnitude right.
And Computer Scientists only need to know if it's zero or not.

Forgive my ignorance, but shouldn't the resolution just be a simple calculation from the field of view, the distance to Titan, and the dimension of the camera in pixels? I realize SAR isn't quite the same as a simple picture, but can it be THAT different?

Short answer: yes.

The real problem, though, and one that is quite common, is that what you're describing is the pixel scale. It is not the resolution. Resolution is the smallest thing that you can resolve, which is always larger than a pixel. In fact the theoretical minimum is 2 pixels. Experience with HiRISE is that about 3 pixels is what it really takes to resolve an object.

Take Cassini ISS looking at Titan, for instance. You can do the calculation that you describe, and calculate the pixel scale. But no matter how small the pixels are, you can't achieve a true resolution better than 1 km. The atmospheric haze scatters too much to do any better.
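For anyone who wants to play with the numbers, here is a back-of-the-envelope sketch in Python. The IFOV and distance below are illustrative placeholders, not actual Cassini values; the factor of 3 is the HiRISE rule of thumb from above.

```python
# Illustrative numbers only: an IFOV of 6 microradians per pixel is the right
# ballpark for a narrow-angle planetary camera, but check the instrument specs.
ifov_rad = 6e-6          # instantaneous field of view per pixel, radians
distance_km = 100_000    # spacecraft-to-target distance

pixel_scale_km = ifov_rad * distance_km   # ground size of one pixel
resolution_km = 3 * pixel_scale_km        # ~3 pixels to resolve an object

print(f"pixel scale: {pixel_scale_km:.2f} km/px, "
      f"usable resolution: {resolution_km:.2f} km")
```

And of course, per the haze point above, even this "usable resolution" is only an upper bound on what the atmosphere will actually let you see.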

Which is why, even with Ralph's discussion, I am still convinced that with 300m properly-sampled RADAR "pixels", the RESOLUTION is ~750m.

Which is why, even with Ralph's discussion, I am still convinced that with 300m properly-sampled RADAR "pixels", the RESOLUTION is ~750m.

The definition of the SAR resolution from Doppler bandwidth etc. (giving 300m) is physically equivalent to that which defines the resolution of an optical telescope as ~1.3 lambda/D. I stand by my original number. The pixels are 175m, which as you say is irrelevant.
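To see why that analogy matters, here is a toy calculation of what a real-aperture (non-SAR) radar beam would give. The wavelength and dish size are illustrative round numbers, not the actual instrument parameters:

```python
# Toy diffraction-style resolution limit, ~1.3 * lambda / D for aperture D.
# All numbers below are illustrative, not actual Cassini RADAR parameters.

def angular_resolution_rad(wavelength_m, aperture_m, factor=1.3):
    """Diffraction-style angular resolution for an aperture of diameter D."""
    return factor * wavelength_m / aperture_m

# Real-aperture radar beam: ~2 cm wavelength on a ~4 m dish
theta = angular_resolution_rad(0.02, 4.0)
footprint_km = theta * 1000.0   # beam footprint at 1000 km range

print(f"beam: {theta:.4f} rad -> {footprint_km:.1f} km at 1000 km")
```

A footprint measured in kilometers is exactly why the synthetic-aperture (Doppler bandwidth) processing is needed to get down to hundreds of meters.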

QUOTE (Jason W Barnes @ Jul 12 2010, 06:21 PM)

The real problem, though, and one that is quite common, is that what you're describing is the pixel scale. It is not the resolution. Resolution is the smallest thing that you can resolve, which is always larger than a pixel. In fact the theoretical minimum is 2 pixels. Experience with HiRISE is that about 3 pixels is what it really takes to resolve an object.

On this Jason and I might agree. The original definition of resolution is all about the diffraction patterns of point objects overlapping - so to separate 2 objects X apart, you need a resolution of 'better than X' and really a pixel scale of better than X/3 (so you see some dark space between your two bright points). But you can still do science at much less than the pixel scale (e.g. fitting a point spread function, so you can determine the position of an object to not only much less than the resolution, but also much lower than the pixel scale - 1/10 of a pixel is not uncommon). But this sort of thing (and 'super-resolution' techniques) relies on well-characterized point spread functions, and high signal to noise data. Which brings me back to my original point that 'useful' resolution depends on what you are trying to do and on the signal/noise.

I think I see. Part of my confusion is that, in the computer biz, when we talk about the resolution of a screen, we always just mean the pixels. So, if I understand correctly, when you guys talk about resolution, you include all the factors that could degrade the image: the pixel scale, of course, but also atmospheric noise, diffraction, probably even noise in the electronics themselves. Beyond a certain point (all other things being equal) shrinking the pixel scale will not improve resolution at all. And so the dispute you two are having is not over the actual hardware being used but over the effect of these other factors?

Ralph: When you talk about doing science below the pixel scale, are you talking about making repeated observations of the same thing and computing a higher-resolution model from that? That is, you have to depend on having a static target. Or do you mean something more complex? (I may be guilty of seeing Bayesian and Markov Networks everywhere these days.) ;-)

Properly speaking, resolution should be interpreted in terms of what you want to detect. Haze, low contrast, size of structure (frequently referred to as 'scale'), etc. can all affect how finely you can discriminate what you want to detect. To give an example: I may want to image trees that are 10 m across in an image in which each pixel covers an area of 1x1 m. So the pixel resolution is 1 m, but my object resolution is 10 m. If tree sizes varied, the smallest tree I could reliably detect would probably have a crown 2 m across. It's actually more complicated than this, since the tree probably wouldn't be centered exactly on a pixel, and I'd end up with some pixels that are all tree and some that are a mixture of tree and background, making the tree identification harder. So my best reliable resolution (i.e., smallest tree) is probably 3x3 m.

However, people who build optical and camera systems want to have a way to compare the theoretical capabilities of the hardware. Sometimes lines per inch are quoted (I've seen this in camera lens reviews), which ignores the grain of the film (the film's equivalent of the size and density of pixels in an electronic system). In planetary missions, the instantaneous field of view (IFOV) is often given, which describes the angle seen by an individual pixel. To get theoretical resolution, you need this information and the distance to the object being imaged.

Computer monitor resolution is usually quoted as an area of pixels (e.g., 1600x1200 pixels), but the pitch between pixels is roughly, though not exactly, equivalent to the IFOV in camera systems. Every camera forms images in an array of pixels x by y in size (push-broom cameras have a single line of pixels, and spacecraft motion creates the y dimension). Image size can be quoted in x by y dimensions, but that says nothing about the resolution. You can put the same 1000x1000 CCD chip behind both a telescopic and a wide-angle lens and get very different resolutions that cover very different areas on the surface.

The above is not my area of specialty, so others may add or correct.

What is my area of specialty is sub-pixel interpretation for Landsat scenes. Each pixel of a Landsat scene has several 'colors' that represent key spectral ranges. (More technically, each Landsat image is really several images, each of which was imaged with one filter.) If you know the dominant materials within a scene, you can use that knowledge to determine the approximate area that each material represents within the area imaged by an individual pixel. To continue with my tree analogy, if you know that everything in the picture is tree canopy or a soil background, you can model how much of the area covered by each pixel is canopy and how much is soil. The technique is called spectral unmixing.
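A minimal sketch of linear spectral unmixing, assuming known "endmember" spectra for the two materials. The reflectance numbers are made up for illustration:

```python
import numpy as np

# Pure endmember reflectances in 4 spectral bands (values are made up).
endmembers = np.array([
    [0.05, 0.08, 0.45, 0.30],   # tree canopy
    [0.20, 0.25, 0.30, 0.35],   # bare soil
])

# Observed pixel spectrum: 70% canopy + 30% soil (real data would add noise).
pixel = 0.7 * endmembers[0] + 0.3 * endmembers[1]

# Solve least squares for the fractions f in: pixel ~= endmembers.T @ f
fractions, *_ = np.linalg.lstsq(endmembers.T, pixel, rcond=None)
print(fractions)   # ~[0.7, 0.3]
```

With noisy data you would typically also constrain the fractions to be non-negative and to sum to one, but the plain least-squares version shows the idea.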

It sounds like similar approaches can be used with radar data. The key, though, is that you need to know a lot about the surface you are imaging.

And so the dispute you two are having is not over the actual hardware being used but over the effect of these other factors?

I don't know what the dispute was about. As far as I am concerned there is no dispute; the resolution as normally defined is 350m and that's that.

QUOTE (Greg Hullender @ Jul 13 2010, 10:33 AM)

Ralph: When you talk about doing science below the pixel scale, are you talking about making repeated observations of the same thing and computing a higher-resolution model from that? That is, you have to depend on having a static target. Or do you mean something more complex? (I may be guilty of seeing Bayesian and Markov Networks everywhere these days.) ;-)

What you describe sounds a bit like how I understand 'super resolution' works. (I think the procedure can also be referred to as 'dithering': it was used on Pathfinder, also on HST). Radio astronomers (with typically low angular resolutions defined by the real aperture) use similar methods by e.g. allowing objects to pass through the beam as the Earth or spacecraft rotate. The key is having a well-defined psf, and having a precise enough pointing history to know where in the psf of the scene the pixels of the detector actually are.

But it can be as simple as taking an image (many pixels) of an object which is geometrically smaller than a pixel (e.g. a star) but whose image, as defined by the telescope optical system, is much larger. The information obtained by sampling many pixels allows you to estimate where the star was to much less than one pixel, if the point-spread function is known. That's a nicely-posed problem for a point source like a star; the cleverness (Maximum Entropy, Bayesian, whatever) comes in how you deconvolve that psf from the image to make a best estimate of a more complex scene (non-point objects like planets, many stars, plus some noise).
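Here is a minimal sketch of that star-centroid case, using a synthetic Gaussian psf (the grid size, width, and position are arbitrary). A simple flux-weighted centroid recovers the star's position to a small fraction of a pixel:

```python
import numpy as np

# Synthetic image of a point source: a Gaussian psf a few pixels wide,
# centered at a deliberately non-integer position.
yy, xx = np.mgrid[0:15, 0:15]
true_x, true_y, sigma = 7.3, 6.8, 2.0
image = np.exp(-((xx - true_x)**2 + (yy - true_y)**2) / (2 * sigma**2))

# Flux-weighted centroid: sub-pixel position estimate from many pixels.
total = image.sum()
cx = (image * xx).sum() / total
cy = (image * yy).sum() / total
print(cx, cy)   # ~7.3, ~6.8
```

Real data adds noise and an imperfectly known psf, which is where the fancier fitting and deconvolution methods come in.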

Only thing that concerns me is the UAV mechanical deployment sequence (which seems complex, to say nothing of time-critical)

The deployment isn't too bad -- one joint on each wing unfolds, and that's it. And there's plenty of time to do it -- this isn't Mars, you know! 1/7th of Earth's gravity and 4 times Earth's air density mean we've got about 12 hours between entry and when we'd hit the ground. So not nearly as hair-raising as on Mars.
