Topic: Deconvolution sharpening revisited (Read 225564 times)

In his comparison of the new Leica S2 with the Nikon D3x, Lloyd Chambers (Diglloyd) has shown how deconvolution sharpening (more properly, image restoration) with the Mac-only raw converter Raw Developer markedly improves the micro-contrast of the D3x image to the point that it rivals that of the Leica S2. Diglloyd's site is a pay site, but it is well worth the modest subscription fee. The Richardson-Lucy algorithm used by Raw Developer partially restores detail lost to the blur filter (optical low-pass filter) on the D3x and other dSLRs.

Bart van der Wolf and others have been touting the advantages of deconvolution image restoration for some time, but pundits on this forum usually pooh-pooh the technique, pointing out that deconvolution is fine in theory but in practice limited by the difficulty of obtaining a proper point spread function (PSF) that enables the deconvolution to undo the blurring of the image. Roger Clark has reported good results with the RL filter available in the astronomical program ImagesPlus. Focus Magic is another deconvolution program used by many for this purpose, but it has not been updated for some time and is 32-bit only.

Isn't it time to reconsider deconvolution? The unsharp mask is very mid-20th-century and originated in the chemical darkroom. In many cases decent results can be obtained by deconvolving with a less-than-perfect, empirically derived PSF. Blind deconvolution algorithms that automatically determine the PSF are being developed.

You can also add the freeware RawTherapee to the list of converters that offer RL deconvolution sharpening. The program is currently undergoing some major revisions which will make it much better (alpha builds can be found here). The deconvolution sharpening works pretty well, but there may be improvements to be had; it's on my list of things to tackle in the future.

I have also heard that Smart Sharpen in Photoshop is deconvolution based, but I've never been able to find any semi-official confirmation of that from the people who should know.

If it is possible to explain it in (reasonably) simple terms, how does deconvolution sharpening actually work?

John

Suppose we are interested in undoing the effect of the AA filter. The image coming through the lens is focussed on the plane of the sensor; the AA filter then acts on the image in a manner similar to a Gaussian blur in Photoshop (though with a different blur profile). The idea of deconvolution sharpening is that, mathematically, if one knows the blur profile, one can reverse-engineer to a reasonable approximation what the image was before it went through the AA filter. The same idea works for many other kinds of image-degrading phenomena, from misfocus to motion blur to lens aberrations. The trick is that each kind of blur has its own blur profile; ideally one would use the profile specific to the image flaw one is trying to remove. Practically that isn't possible, so typically one uses a generic blur profile and hopes it does a reasonably good job. Another issue is that the method, like any sharpening method, can amplify noise and generate haloes, so one has to modify it to suppress noise enhancement and haloes.

For the more technically inclined, a more detailed explanation is that a good way to think about imaging components is in terms of spatial frequencies; for instance, MTFs are multiplicative -- for a fixed spatial frequency, the MTF of the entire optical chain is the product of the MTFs of the individual components. So if the component doing the blurring has a blur profile B(f) as a function of spatial frequency f, and the image has spatial frequency content I(f) at the point it reaches this component, then the image after passing through that component is I'(f)=I(f)*B(f). Thus, if one knows the blur profile B(f), one can recover the unblurred image by dividing: I(f)=I'(f)/B(f). The problem is that B(f) can be small at high frequencies, since it is a low-pass filter that is removing these frequencies from the image. Dividing by a small number is inherently numerically unstable, so with the wrong blur profile, or a bit of noise in the image, all those inaccuracies get amplified by the method. In practice one therefore includes a bit of damping at high frequency (quite similar to the 'radius' setting in USM) to keep the algorithm from going too far astray.

Edit: This multiplicative (in frequencies) aspect of the blur, I'(f)=I(f)*B(f), is what is known as convolution, which is why the reverse process I(f)=I'(f)/B(f) is called deconvolution. I see all the techies have chimed in.

If it is possible to explain it in (reasonably) simple terms, how does deconvolution sharpening actually work?

Hi John,

Basically it tries to invert the process that caused the blurring of image detail, which is why these are also referred to as restoration algorithms. Blurring spreads some of the info of a single pixel over its neighbors; deconvolution removes the blur component from the neighboring pixels and adds it back to the original pixel. The blurring is mathematically modeled as a convolution, hence the inverse is called deconvolution.

One of the difficulties is how to discriminate between signal and noise. One preferably sharpens the signal, not the noise.

If it is possible to explain it in (reasonably) simple terms, how does deconvolution sharpening actually work?

John

When you apply a filter to an image in Photoshop, you are generally applying what is called a convolution kernel. These are tables (arrays, in computer-speak) of numbers that describe the transformation applied to the pixels. The blurring of an image is a convolution. Deconvolution is basically running the process in reverse.

In real life, lens blur, movement, filter interference, etc. can also be represented as convolutions. The catch is that we often have no precise idea of what the array numbers are and therefore have to guess them. There are many different methods of guessing, some statistical, some based on knowledge of the parameters of the system, and so on. Some methods are better suited to some situations. Mathematically, this can become quite complex, as we can't simply try every kernel and every possible sequence of operations: brute force doesn't work.
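For illustration, here is what such a kernel looks like as an array of numbers: a hypothetical 3x3 box-blur kernel applied to a tiny 5x5 image by brute force. (Real implementations are heavily optimized, and the kernels Photoshop actually uses are not public.)

```python
def convolve2d(image, kernel):
    """Apply a convolution kernel to a 2D image (zero padding at the edges)."""
    h, w = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    cy, cx = kh // 2, kw // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for j in range(kh):
                for i in range(kw):
                    yy, xx = y + cy - j, x + cx - i
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += kernel[j][i] * image[yy][xx]
            out[y][x] = acc
    return out

# a box blur: every pixel becomes the average of its 3x3 neighborhood
box_blur = [[1 / 9] * 3 for _ in range(3)]

image = [[0.0] * 5 for _ in range(5)]
image[2][2] = 9.0                     # one bright pixel

blurred = convolve2d(image, box_blur)
# the pixel's value is now spread evenly over the 3x3 block around it
```

Blurring, sharpening, embossing, and edge detection in an image editor all differ only in the table of numbers fed to this same loop; deconvolution is the (much harder) problem of undoing such a table.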

One interesting point to note is that if you have a perfect point source and its image (called the Point Spread Function), you can get a fairly good estimate of the convolution that was applied.

This is why deconvolution has been so successful in astronomical imaging: stars are, for most practical purposes, point sources, and their image allows us to reverse-engineer the convolution their photons have endured. But even that isn't perfect, and there's still a small element of mystery and black magic to the process. Your deconvolution algorithm can converge to the correct original image, but it can also diverge.
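A toy illustration of that idea, in pure Python with a made-up 1D blur: because a star is effectively a delta function, convolving it with the (unknown) PSF simply reproduces the PSF at the star's location, so the recorded star image can be read off directly as an estimate of the kernel.

```python
def convolve(signal, kernel):
    """Circular 1D convolution; the kernel's center is at len(kernel)//2."""
    n, m, c = len(signal), len(kernel), len(kernel) // 2
    return [sum(kernel[j] * signal[(i + c - j) % n] for j in range(m))
            for i in range(n)]

psf = [0.2, 0.6, 0.2]     # the "unknown" blur we want to estimate

star = [0.0] * 9
star[4] = 1.0             # a star: effectively a unit point source

recorded = convolve(star, psf)
# the star's image *is* the PSF, shifted to the star's position,
# so reading off the samples around the star recovers the kernel
estimated_psf = recorded[3:6]
```

In real frames the star has finite brightness and sits on noise, so the readout is only an estimate, which is where the "black magic" comes in.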

You can also add the freeware RawTherapee to the list of converters that offer RL deconvolution sharpening.

Indeed, and it can also use a TIFF as input, not only Raw files. AFAIK it is implemented as a post-processing operation anyway, so TIFFs (without prior sharpening) are just as good as Raws for that PP phase. JPEGs are less suited to RL sharpening, as it may bring out block artifacts related to the lossy compression.

Holy Cow. That must be some sort of record for the fastest flurry of highly technical and precisely worded replies to a question on the LL Forum, ever. I am humbled, and (I think) enlightened.

Many thanks. I shall now re-read them in a quieter moment. I did try the deconvolution in RawTherapee, but I didn't really see a wow factor. But then my DB does not have an AA filter, so perhaps the effects would be pretty subtle.

If it is possible to explain it in (reasonably) simple terms, how does deconvolution sharpening actually work?

John

The usual textbook approach (generalized Richardson-Lucy types) takes the original image data and makes a guess at certain parameters associated with it. Using these parameters, it generates new image data. It then re-estimates the parameters from this new image data, and uses the newer parameters to generate a newer set of image data. This iteration continues until you are satisfied; in technical terms that satisfaction is called "convergence". Under typical settings the solution converges to what is called the maximum likelihood estimate.
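As a sketch of that iteration, here is a bare-bones 1D Richardson-Lucy loop in plain Python. The 3-tap kernel and the single-pixel signal are illustrative only; real implementations work in 2D, use FFTs, and add noise regularization.

```python
def conv(signal, kernel):
    """Circular 1D convolution; the kernel's center is at len(kernel)//2."""
    n, m, c = len(signal), len(kernel), len(kernel) // 2
    return [sum(kernel[j] * signal[(i + c - j) % n] for j in range(m))
            for i in range(n)]

def richardson_lucy(observed, psf, iterations):
    """Iteratively re-estimate the unblurred image from the observed one."""
    estimate = [1.0] * len(observed)          # flat initial guess
    psf_mirror = psf[::-1]                    # adjoint of the blur operator
    for _ in range(iterations):
        blurred = conv(estimate, psf)         # what this guess looks like blurred
        ratio = [o / max(b, 1e-12) for o, b in zip(observed, blurred)]
        correction = conv(ratio, psf_mirror)  # redistribute the mismatch
        estimate = [e * c for e, c in zip(estimate, correction)]
    return estimate

# a single bright pixel, blurred by a known 3-tap PSF
psf = [0.2, 0.6, 0.2]
truth = [0.0] * 16
truth[8] = 4.0
observed = conv(truth, psf)       # peak spread to 2.4, neighbors get 0.8

estimate = richardson_lucy(observed, psf, 50)
# each pass re-concentrates the spread light: estimate[8] climbs back toward 4
```

Note the characteristic structure: guess, blur the guess, compare with the data, correct, repeat -- exactly the loop described above, with the iteration count playing the role of a sharpening strength control.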

Yes Photoshop's Smart Sharpen is based on deconvolution (but you will need to choose the "More Accurate" option and the Lens Blur kernel for best results). Same with Camera Raw 6 and Lightroom 3 if you ramp up the Detail slider.

Yes Photoshop's Smart Sharpen is based on deconvolution (but you will need to choose the "More Accurate" option and the Lens Blur kernel for best results). Same with Camera Raw 6 and Lightroom 3 if you ramp up the Detail slider.

Hi Eric,

Thanks for confirming that.

Could you disclose whether the Smart Sharpen filter's visible effectiveness has changed between, say, CS3 and CS5, or is it essentially the same as in its earlier versions?

I've compared it before, and used it on installations without better alternative plug-ins, but its restoration effectiveness for larger radii seemed less than a direct Richardson-Lucy or similar implementation, although faster. Perhaps a new test/comparison is in order.

Wouldn't it be possible to add an "OLP filter" option? I presume that Gaussian and Lens Blur are different PSFs (Point Spread Functions), and that the OLP filter essentially splits the light so it affects the pixels above, below, left of and right of the central pixel?

Best regards,
Erik

Quote from: BartvanderWolf

Hi Eric,

Thanks for confirming that.

Could you disclose whether the Smart Sharpen filter's visible effectiveness has changed between, say, CS3 and CS5, or is it essentially the same as in its earlier versions?

I've compared it before, and used it on installations without better alternative plug-ins, but its restoration effectiveness for larger radii seemed less than a direct Richardson-Lucy or similar implementation, although faster. Perhaps a new test/comparison is in order.

Hi Bart, unfortunately I don't know the answer to that, but I will check with the scientist who does. I believe they limit the number of iterations for speed, so I expect that is why it is not as effective for some parameters as the plug-ins, as you've observed.

Hi Erik, yes, the Gaussian and Lens Blur are different PSFs. The Gaussian is basically just that, and the Lens Blur effectively simulates a nearly circular aperture (assuming even light distribution within the aperture, very unlike a Gaussian). You will get better results with the latter, though in many cases they are admittedly subtle. The OLP filter can be somewhat complex to model. (I believe the Zeiss articles you've referenced recently have some nice images showing how gnarly they can be; I recall it was in the first of the two MTF articles.) Gaussians are handy because they have convenient mathematical properties, but they are not the best for modeling this, unfortunately ...

Since you are talking about "undoing" the effects of the AA filter, won't that introduce aliasing artifacts mistaken for sharpness? What about moiré?

My understanding is that moiré avoidance is not the only reason camera manufacturers put those expensive filters on. It's not like some marketing guy came to the engineers and said, "slap one of those make-my-pictures-all-blurred-to-hell filters on all our cameras, would'ya?" What the reasons are I don't know, but hot-rod mods haven't been that popular, and I've heard more than one complaint about the resulting aliasing.

I've seen so many photos which are oversharpened to the extent of being as surreal as overcooked HDR. I haven't seen samples of the results from this undoing, but the D3X samples I've seen show that it produces exceptionally sharp results out of the box.

The images that Diglloyd had show no moiré. That doesn't prove that sharpening would not restore artifacts, but I actually don't think that would be the case. If we downscale an image it will contain a lot of artifacts; a good standard practice is to blur the image very slightly before downscaling and then sharpen after scaling.

I did also run a quick test with Raw Developer and got pretty good results, better than with CS5 Smart Sharpen's Gaussian Blur. I may look a bit more into that, but I'm going to travel a lot over the next two weeks.

Best regards,
Erik

Quote from: feppe

Since you are talking about "undoing" the effects of the AA filter, won't that introduce aliasing artifacts mistaken for sharpness? What about moiré?

My understanding is that moiré avoidance is not the only reason camera manufacturers put those expensive filters on. It's not like some marketing guy came to the engineers and said, "slap one of those make-my-pictures-all-blurred-to-hell filters on all our cameras, would'ya?" What the reasons are I don't know, but hot-rod mods haven't been that popular, and I've heard more than one complaint about the resulting aliasing.

I've seen so many photos which are oversharpened to the extent of being as surreal as overcooked HDR. I haven't seen samples of the results from this undoing, but the D3X samples I've seen show that it produces exceptionally sharp results out of the box.

Some discussion of what sharpening in LR3 actually does would be interesting. In the Bruce Fraser/Jeff Schewe book it's said that the Detail slider affects "halo suppression", but that was before LR3.

The way I see it, I much prefer "parametric adjustments", so I'd like to stay in LR as long as possible; if I need to render TIFFs to be able to sharpen with deconvolution, it would break the workflow, making it into workslow.

Best regards,
Erik

Quote from: madmanchan

Yes Photoshop's Smart Sharpen is based on deconvolution (but you will need to choose the "More Accurate" option and the Lens Blur kernel for best results). Same with Camera Raw 6 and Lightroom 3 if you ramp up the Detail slider.

Since you are talking about "undoing" the effects of AA filter, won't that introduce aliasing artifacts mistaken for sharpness? What about moire?

In a word, no. Aliasing is a shifting of image content from one frequency band to another; it is an artifact of discrete sampling. Deconvolution doesn't introduce aliasing (i.e., shift frequencies around) so much as try to reverse some of the suppression of high-frequency image content that the AA filter effects in its effort to mitigate aliasing.
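A tiny numeric illustration of that frequency-shifting (pure Python, made-up numbers): sampling a 7-cycle sine wave with only 8 samples yields exactly the same values as a 1-cycle sine of opposite sign, i.e. the too-high frequency masquerades as a low one. This is the effect the AA filter exists to prevent.

```python
import math

N = 8        # samples per period of the grid -- Nyquist limit is 4 cycles
f_high = 7   # above Nyquist: 7 is congruent to -1 modulo 8

high = [math.sin(2 * math.pi * f_high * n / N) for n in range(N)]
alias = [math.sin(-2 * math.pi * 1 * n / N) for n in range(N)]

# sample for sample, the 7-cycle wave is indistinguishable from the
# aliased 1-cycle wave: once sampled, the grid cannot tell them apart
```

Deconvolution operates on samples that already exist, so it can only rescale frequency content that survived sampling; it cannot create this kind of band-to-band shift.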