
Signal processing on digital images (warning, esoteric!)

Having played with unsharp masking, I wonder if there is another way to sharpen detail. Once an image is digitized, we know the sampling rate and the Nyquist spatial frequency. From published material from film manufacturers, we know the power spectrum of the film's response, the MTF. Assuming the film induces no phase shift, why not deconvolve the film's MTF from the image... do a 2D FFT on the image and divide out the film's MTF, then inverse transform. Assuming a good-quality scan with low noise, that should restore the image that was produced by the lens in the film plane. Has anybody tried this? Am I missing something obvious?
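In case it helps make the idea concrete, here is a minimal NumPy sketch of the divide-out-the-MTF scheme. The Gaussian-shaped MTF is just a toy stand-in for real published film curves, and the `eps` floor is an assumption to avoid dividing by near-zero terms:

```python
import numpy as np

def deconvolve_mtf(image, mtf, eps=1e-3):
    """Divide an assumed MTF out of the image in the frequency domain.

    image: 2-D grayscale array; mtf: 2-D array of the same shape giving
    the film's assumed modulation transfer at each spatial frequency
    (laid out like np.fft output, DC in the corner). eps guards against
    dividing by near-zero MTF terms near Nyquist.
    """
    spectrum = np.fft.fft2(image)
    restored = spectrum / np.maximum(mtf, eps)
    return np.real(np.fft.ifft2(restored))

# Toy demonstration: blur a random image with a Gaussian-ish MTF that
# rolls off toward Nyquist, then divide the MTF back out.
n = 64
fy = np.fft.fftfreq(n)[:, None]
fx = np.fft.fftfreq(n)[None, :]
mtf = np.exp(-(fx**2 + fy**2) / 0.05)
img = np.random.rand(n, n)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * mtf))
restored = deconvolve_mtf(blurred, mtf)
```

The recovery is exact wherever the MTF stays above `eps`; the corner frequencies, where the toy MTF has dropped below it, are only partially restored.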


The traditional way to approach this is to determine the impulse response of the system (lens plus film plus scanner taken together) and derive a convolution mask which inverts it as nearly as possible, such that a step function in the subject is reproduced as a step function in the postprocessed data.
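As a rough illustration of the convolution-mask idea, here is a sketch using the classic 3x3 sharpening kernel (identity plus a scaled Laplacian). It is not the exact inverse of any particular film's impulse response, just the generic shape such masks take; because the kernel sums to 1, flat regions pass through unchanged while edges are steepened:

```python
import numpy as np

# Identity-plus-Laplacian sharpening mask; coefficients sum to 1.
kernel = np.array([[ 0, -1,  0],
                   [-1,  5, -1],
                   [ 0, -1,  0]], dtype=float)

def convolve2d_same(img, k):
    """Naive 'same'-size 2-D convolution with edge-replicated padding."""
    kh, kw = k.shape
    pad = np.pad(img, ((kh // 2,), (kw // 2,)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(kh):
        for dx in range(kw):
            out += k[dy, dx] * pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

# A step edge: flat areas are preserved, the transition is accentuated.
step = np.tile(np.r_[np.zeros(4), np.ones(4)], (8, 1))
sharpened = convolve2d_same(step, kernel)
```

Applied to an already-sharp step this produces the familiar over/undershoot at the transition; applied to blurred data, that same overshoot is what restores edge steepness.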

I suspect that the reasons why the approach you describe generally isn't used are twofold:

1. The processing overhead for 2-D FFTs is non-trivial, particularly compared to that required for a simple convolution. You might be able to get around that by employing a blocked approach (along the lines of the 8x8 DCT operation that forms the heart of JPEG) but then you'd risk discontinuities at the block boundaries. There are a fair number of recent papers in the literature which deal with the computational complexity of the various FFT algorithms, including one from a few years back which estimated that 40% of all computational cycles on all Cray Research machines in the installed base at that time were being spent on FFTs.

2. Convolution-based approaches already do a very good job of extracting all of the image detail available, so that's not the issue here. The real problem is noise rejection: we want to sharpen detail adequately without amplifying noise, or creating moiré patterns in cases where the original isn't continuous-tone. Given that noise from scanning in particular is vastly more likely to repeat at fixed frequencies than image details are, it's not clear that going into the frequency domain and boosting FFT terms is really what you want to do! (An exception: if you know the frequency of the noise you want to suppress, then you might see some benefit from attenuating the corresponding terms of an FFT on the data; I say "might" because I've tried that approach in the context of moiré suppression and haven't had a whole lot of luck.)
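The attenuation idea in that parenthetical can be sketched as a simple notch filter: zero out a narrow band of the spectrum around a noise frequency you already know. The frequency and bandwidth parameters here are assumptions for the sake of the example:

```python
import numpy as np

def notch_filter(image, freq, width=0.01):
    """Attenuate a known periodic-noise frequency (cycles/pixel along x)
    by zeroing a narrow band of the 2-D spectrum. freq and width are
    assumed parameters; real moire suppression needs more care.
    """
    spec = np.fft.fft2(image)
    fx = np.fft.fftfreq(image.shape[1])
    mask = np.ones(image.shape[1])
    mask[np.abs(np.abs(fx) - freq) < width] = 0.0
    return np.real(np.fft.ifft2(spec * mask[None, :]))

# Scanner-style banding at exactly 0.25 cycles/pixel on a flat field:
n = 64
x = np.arange(n)
noisy = 10.0 + 0.5 * np.sin(2 * np.pi * 0.25 * x)[None, :] * np.ones((n, 1))
cleaned = notch_filter(noisy, freq=0.25)
```

This works cleanly only because the toy noise sits at a single known frequency; image detail that happens to share that band is thrown away along with it, which is exactly the moiré-suppression difficulty mentioned above.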


This will work, but you have to remember that with modern lenses and good technique it is the film's MTF that limits the total MTF of the system. Even if you are careful to avoid dividing by zero, you often end up simply amplifying the noise floor, as Patrick mentioned.
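One standard way around the noise-floor amplification is a Wiener-style weighting: instead of dividing by the MTF outright, weight each frequency by mtf / (mtf² + NSR), where NSR is an assumed noise-to-signal ratio. Terms where the MTF is small get attenuated rather than boosted. A minimal sketch, with NumPy and a made-up NSR value:

```python
import numpy as np

def wiener_deconvolve(image, mtf, nsr=0.01):
    """Wiener-style regularized deconvolution.

    Where the MTF is near 1 the filter approaches 1/(1 + nsr), close
    to plain division; where the MTF is near 0 the filter goes to 0
    instead of blowing up, so the noise floor is not amplified.
    """
    spec = np.fft.fft2(image)
    filt = mtf / (mtf**2 + nsr)
    return np.real(np.fft.ifft2(spec * filt))

# With a flat (all-ones) MTF the filter reduces to a uniform 1/(1+nsr):
img = np.random.rand(16, 16)
restored = wiener_deconvolve(img, np.ones((16, 16)))
```

Choosing the NSR is the catch: it is effectively a knob trading sharpening strength against grain amplification, which brings you back to tuning by eye.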

Another problem is that unlike the point response function of lenses, which is nice and smooth at their resolution limit, the MTF of film does some funky things as you approach spatial frequencies corresponding to the grain size. The human eye is good at averaging the grain out and ignoring it, but including the effects of grain in a deconvolution algorithm takes some care.

Another way of looking at the grain problem is to remember that the MTF of film varies with the contrast level, because subtle tonal variations can only be expressed with a larger number of grains. Thus the MTF depends on the amplitude of the spatial wave you are trying to capture - the system isn't linear, which knocks Fourier's theorem into a cocked hat.

So in practice, despite the fact that deconvolving the film's MTF has a justification grounded in physics, it doesn't have many real advantages over other arbitrarily chosen sharpening algorithms, at least for general imagery.