Apologies for hijacking this thread a little bit, but I'm just curious whether deconvolution sharpening and the evolution of computational imaging might eventually overcome much of the problem with diffraction.

Hi Wayne,

No problem, in fact the question is quite relevant. The challenge is to address the effects of several sources of blur combined, in a simple interface, yet with lots of control over the process.

Quote

I would assume it would be much more challenging than resolving the issues from an AA filter, since it would require each individual lens design to be carefully tested, then some method to apply the information to the file, and it would perhaps require data for every possible f-stop and, for zoom lenses, every zoom setting. But it seems the theory of restoring the data as it is spread to adjacent pixels isn't much different from what happens with an AA filter.

Correct, diffraction has a different PSF shape than e.g. defocus, yet in practice we need a mix of both (in addition to addressing residual optical and OLPF-induced blur). The point spread function is just a mathematical description of the blur, which is used to reverse its effect.

Quote

I know I have many times stopped down to f/22 (or further) and smart sharpen seems to work quite well, even when printing large prints.

The difficulty with (deconvolution) restoration of a signal is twofold. First, there has to be enough signal relative to the noise to have something to restore. If detail is blurred too far, i.e. it has fused with its surroundings, then it will be impossible to lift it from the background. Second, we are always faced with noise; even light is noisy (photon shot noise). When signal levels are reduced down to the noise level, the restoration needs to find a way to discriminate between signal to amplify and noise not to amplify. That's a challenge.

Currently there are a few algorithms that can do such a task with reasonable success, but there are limits to what can be achieved. One algorithm that's popular, but not necessarily the best, is the Richardson-Lucy restoration algorithm. It was used to improve the Hubble Space Telescope images, and the adaptive variety of the RL restoration addresses the noise amplification issue with visible improvement of the S/N ratio. One of its drawbacks can be that it is processing intensive, therefore slow, and its success also depends on a decent estimate of what the PSF should look like. Other, so-called blind deconvolution algorithms attempt to find the optimal PSF shape as part of the process, but they tend to have difficulty separating noise from the enhancement.
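To make that concrete, here is a minimal sketch of the plain (undamped) Richardson-Lucy iteration in Python; the function and its names are my own illustration, and production implementations add damping or regularization on top of this loop to keep the noise amplification in check:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, iterations=50, eps=1e-12):
    """Plain (undamped) Richardson-Lucy deconvolution sketch."""
    blurred = np.maximum(blurred, 0.0)        # RL assumes non-negative data
    psf = psf / psf.sum()                     # the PSF must sum to 1
    psf_mirror = psf[::-1, ::-1]              # mirrored PSF for the correction step
    estimate = np.full_like(blurred, max(blurred.mean(), eps))
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, psf, mode='same')
        ratio = blurred / np.maximum(reblurred, eps)   # data / model
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode='same')
    return estimate
```

With the exact PSF and little noise the estimate converges toward the original; with real noise, each iteration also lifts the noise, which is why the adaptive/damped varieties mentioned above exist.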

Quote

Just curious.

Curious is good, it's the start of progress.

So, another attempt to address diffraction blur might be in order. Diffraction blur can actually help to reduce moiré, because it kills high spatial frequencies before discrete sampling takes place, but we are confronted with it mostly when we want to add DOF to a scene. Therefore it has both useful (artifact reduction and artistic control) and detrimental (diffraction blur of the focused micro-detail) effects. Wouldn't it be nice if the drawbacks could be reduced? Well, they can (up to a certain level).

I'll prepare an image, and add an f/32 (let's up the ante) diffraction blur, and post it later. We can then see what the various methods can restore, and what the limitations are.

... I'm just curious whether deconvolution sharpening and the evolution of computational imaging might eventually overcome much of the problem with diffraction.

This is done in microscopy ... an area where there is a constant battle to overcome extremely shallow DOF, or to put it another way, to reduce the painful trade-offs between OOF effects (aperture too big) and diffraction effects (aperture too small). One snippet: http://en.wikipedia.org/wiki/Microscopy#Su...tion_techniques

Sounds very interesting, Bart. Some of my Zeiss lenses go to f/45. It never used to worry me on film . . .

John

The resolution-limiting effect of the Airy disc is the same on an 8 x 10 inch view camera as on a Minox miniature format. However, for a given print size, the effects of diffraction for a given Airy disc diameter are much more apparent with the Minox due to the magnification factor. Likewise, the effects of diffraction do not depend on pixel size. For a given overall sensor size, a small-pixel camera will have the same diffraction-limited resolution as a larger-pixel camera.
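A quick back-of-envelope illustration of that magnification argument (assuming an idealized circular aperture, green light, and an 8 mm Minox frame width; these round numbers are my own, not from the post above):

```python
wavelength_um = 0.55   # green light
f_number = 22.0

# Airy first-minimum diameter on the sensor: 2.44 * lambda * N.
# It is the same for any format at a given f-number.
airy_um = 2.44 * wavelength_um * f_number          # ~29.5 micron

# Enlarging to a common 10-inch-wide print magnifies it very differently.
print_width_mm = 254.0
for name, sensor_width_mm in [("8x10 view camera", 254.0), ("Minox", 8.0)]:
    enlargement = print_width_mm / sensor_width_mm
    print(f"{name}: {airy_um * enlargement:.0f} micron of blur on the print")
```

Roughly 30 microns for the 8x10 contact print versus nearly a millimetre for the enlarged Minox frame, which is why the same Airy disc matters so much more on the small format.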

Shouldn't it be possible to some extent to reverse this, since the PSF of diffraction is well known? (I think it's described by some sort of Bessel function.)

Yes, for a circular aperture it involves the square of the first-order Bessel function, J_1.

It has the following characteristic intensity pattern (PSF):

An issue may be that it is hard to deal with the pattern of peaks and minima accurately in a numerical setting. Of course, what the sensor will typically see, unless you're really stopped down, is a box blur of this pattern, since it is sampled by pixels of finite size. Another issue is that the tail of the PSF has a much slower falloff than a Gaussian, so it might need more computational resources to mitigate accurately. All this is to say that deconvolution may have a harder time dealing with diffraction than with, say, OOF blur.
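For reference, the normalized Airy intensity for a circular aperture is I(x) = (2 J1(x)/x)^2, with x = pi*r/(lambda*N); a small sketch using SciPy's Bessel function (the helper name is mine):

```python
import numpy as np
from scipy.special import j1

def airy_intensity(x):
    """Normalized Airy pattern I(x) = (2*J1(x)/x)**2, with I(0) = 1."""
    x = np.maximum(np.abs(np.asarray(x, dtype=float)), 1e-12)  # avoid 0/0 at center
    return (2.0 * j1(x) / x) ** 2

# The first minimum sits at x ~ 3.8317 (the first zero of J1), and the
# ring envelope falls off only as ~1/x**3, much slower than a Gaussian.
```

That slow tail is exactly why a small truncated kernel can only ever capture part of the diffracted energy.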

I'll prepare an image, and add an f/32 (let's up the ante) diffraction blur, and post it later. We can then see what the various methods can restore, and what the limitations are.

Okay, here we go.

1. I've taken a crop of a shot taken with my 1Ds3 (6.4 micron sensel pitch + Bayer CFA) and the TS-E 90mm at f/7.1 (the aperture where the diffraction pattern spans approx. 1.5 pixels): 0343_Crop.jpg (1.203kb). I used 16-bit/channel TIFFs throughout the experiment, but provide links to JPEGs and PNGs to save bandwidth.

2. That crop is convolved with a single diffraction (at f/32) kernel for a 564 nm wavelength (the luminosity-weighted average of R, G and B, taken as 450, 550 and 650 nm) at a 6.4 micron sensel spacing (assuming a 100% fill factor). That kernel was limited to the maximum 9x9 kernel size of ImagesPlus, a commercial astrophotography program chosen for the experiment because a PSF kernel can be specified and the experiment can be verified. That means that only a part of the infinite diffraction pattern (some 44 micron, or 6.88 pixel widths, in diameter to the first minimum) could be encoded. So I realise that the diffraction kernel is not perfect, but it covers the majority of the energy distribution. The goal is to find out how well certain methods can restore the original image, so anything that resembles diffraction will do. The benefit of using a 9x9 convolution kernel is that the same kernel can be used for both convolution and deconvolution, so we can judge the potential of a common method under somewhat ideal conditions (a known PSF, computable in a reasonable time). It will present a sort of benchmark for the others to beat: Crop+diffraction (5.020kb!). This is the subject to restore to its original state before diffraction was added.

3. And here (945kb) is the result after only one Richardson-Lucy restoration pass (although with 1000 iterations) with a perfectly matching PSF. There are some ringing artifacts, but the noise is at almost the same level as in the original. The resolution has been improved significantly; quite usable for a simulated f/32 shot as a basis for further postprocessing and printing. Look specifically at the venetian blinds in the first-floor windows in the center. Remember, the restoration goal was to restore the original, not to improve on it (that will take another postprocessing step).

Again, this is a simplified case (with only moderate noise) with only one type of uniform blur, and its PSF is exactly known. But it does suggest that under ideal circumstances a lot can be restored. So that reduces the quest to an accurate characterization of the PSF in a given image, and software that can use it for restoration ...
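The numbers in step 2 can be sanity-checked with a quick sketch. This is my own rough stand-in, not the actual ImagesPlus kernel: it point-samples the ideal Airy pattern at the sensel pitch and ignores the fill factor:

```python
import numpy as np
from scipy.special import j1

wavelength_um, f_number, pitch_um = 0.564, 32.0, 6.4

# First-minimum diameter of the Airy pattern: 2.44 * lambda * N
d_um = 2.44 * wavelength_um * f_number
print(round(d_um, 1), round(d_um / pitch_um, 2))   # ~44.0 micron, ~6.88 pixels

def airy(r_um):
    """Ideal Airy intensity at radius r (micron); point-sampled."""
    x = np.maximum(np.pi * r_um / (wavelength_um * f_number), 1e-12)
    return (2.0 * j1(x) / x) ** 2

# Point-sample a 9x9 kernel at the sensel pitch and normalize to unit sum,
# as a (de)convolution kernel must be.
n = 9
offsets_um = (np.arange(n) - n // 2) * pitch_um
radii = np.hypot(*np.meshgrid(offsets_um, offsets_um))
kernel = airy(radii)
kernel /= kernel.sum()
```

The 9x9 support (57.6 micron square) comfortably contains the 44 micron first ring but truncates the outer rings, which is exactly the imperfection acknowledged above.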


I tried sharpening the PNG file using Focus Magic (which I've been using for a number of years now). The automatic detection of blur width gave me readings varying from 2 pixels to 7 pixels, depending on which part of the image was selected. One can get some rather ugly results sharpening a whole image at a 7-pixel setting, especially at 100%, so I tried using a 1-pixel blur width at 50%, repeating the operation 7 times.

Below is the result, using maximum-quality JPEG compression. To my eyes, the result looks very close to yours. However, at 200% it's clear that your result shows slightly finer detail. An obvious example of this is the lower window to the left of the tree. The faint horizontal stripes suggest the presence of a venetian blind. In my FM-sharpened image, there's no hint of this detail.


Bart, this is very interesting! I was not yet able to achieve the same deconvolution using RawTherapee (RL deconvolution), ACR, Smart Sharpen, Topaz Detail, or ALCE (bigano.com). I just discovered this tool, DeblurMyImage (http://www.adptools.com/en/deblurmyimage-description.html), which allows one to import a PSF. Do you have, by any chance, an image of the PSF used by ImagesPlus? This will be an interesting experiment!

I suppose that if I am able to measure the PSF for my lens + camera + raw converter, it will provide the best sharpening for my images. This is very tempting!


What is the effect of using fewer than 1000 iterations? In RawTherapee there doesn't seem to be much change after 40 or 50.

BTW, I looked at the (open) source code for RT, and it assumes a Gaussian PSF. I think it would be easy to modify it to use different PSFs, and possibly not too hard to allow one to be input by the user.


[attachment=23359:FM_1_pix...fraction.jpg]

Ray,

Your experiment debunks one of the main criticisms of deconvolution: that it is fine in theory but falls down in practice because a suitable PSF cannot be found. Bart used a near-perfect PSF (limited by the 9x9 filter in ImagesPlus) and you used a trial-and-error method to derive a PSF that produced nearly as good results.

The PSF used by FocusMagic, and how it is affected by the BlurWidth and Amount parameters, is not well documented. Does Amount determine the number of iterations or some other quantity? Restorations for defocus, diffraction, and lens aberrations such as spherical aberration require different PSFs. As implied by its name, FocusMagic may use a PSF optimized for restoration of defocus. However, as your experiment demonstrates, decent results may be obtained with a PSF that is not optimal. A decent approximation may be sufficient.

It was disappointing to learn that the PSF for RawTherapee is for Gaussian blur. Photoshop's Smart Sharpen has PSFs for Gaussian blur and lens blur (whatever that is), and the latter is recommended for photographic use. Does anyone have information on the PSFs used by Raw Developer or ACR?

Bart, this is very interesting! I was not yet able to achieve the same deconvolution using RawTherapee (RL deconvolution), ACR, Smart Sharpen, Topaz Detail, or ALCE (bigano.com). I just discovered this tool, DeblurMyImage (http://www.adptools.com/en/deblurmyimage-description.html), which allows one to import a PSF. Do you have, by any chance, an image of the PSF used by ImagesPlus? This will be an interesting experiment!

I have supplied a link to the data file. You can read the .dat file with Wordpad or a similar simple document reader. You can input those numbers (rounded to 16-bit values, or converted to 8-bit numbers by dividing by 65535, multiplying by 255, and rounding to integers). A small warning: the lower the accuracy, the lower the output quality will be. For convenience I've added a 16-bit greyscale TIFF (convert to RGB mode if needed). I have turned it into an 11x11 kernel (9x9 + black border) because the program you referenced apparently (judging from the description) requires a zero background level.
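For what it's worth, the 16-bit to 8-bit conversion described above is just this (with hypothetical values; the real ones come from the linked .dat file):

```python
import numpy as np

# Hypothetical 16-bit kernel values standing in for the .dat file contents.
vals16 = np.array([65535, 32768, 1024, 0])
vals8 = np.rint(vals16 / 65535.0 * 255.0).astype(np.uint8)
print(vals8)   # the coarser quantization is exactly where quality is lost
```

Small tail values of the PSF land on very few 8-bit levels, which is why the lower accuracy lowers the output quality.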

Quote

I suppose that if I am able to measure the PSF for my lens + camera + raw converter, it will provide the best sharpening for my images. This is very tempting!

Yes, that would be a goal, but the trick is to acquire the PSF from an arbitrary image without prior knowledge, or be able to interactively synthesize a PSF that works well on a preview.

What is the effect of using fewer than 1000 iterations? In RawTherapee there doesn't seem to be much change after 40 or 50.

Hi Emil,

The reason is that fewer iterations left more ringing artifacts, but one could opt for that compromise and try to deal with the artifacts in another way. After a few hundred iterations the ringing started to diminish, so I decided to give the PC a workout. Perhaps a larger kernel size would have allowed stopping earlier with less ringing, but a larger kernel would also increase the calculation time per iteration.

Quote

BTW, I looked at the (open) source code for RT, and it assumes a Gaussian PSF. I think it would be easy to modify it to use different PSFs, and possibly not too hard to allow one to be input by the user.

I'm sure that would increase its usability even further, although it's already quite effective for normal capture sharpening. It is possible to approximate the most important part of a diffraction pattern with a Gaussian, but it will deliver lower-quality results for deconvolution of diffraction effects alone. A mix of PSFs can potentially be approximated by (a mix of) Gaussians, but defocus has a markedly different PSF shape. It would be preferable to use prior knowledge (e.g. from a database of analyses) or to analyze the image content (or a test-pattern image taken with the same shooting parameters).

The beauty of deconvolution is that it really increases resolution, not just edge contrast (and halo).

Your experiment debunks one of the main criticisms of deconvolution: that it is fine in theory but falls down in practice because a suitable PSF cannot be found. Bart used a near-perfect PSF (limited by the 9x9 filter in ImagesPlus) and you used a trial-and-error method to derive a PSF that produced nearly as good results.

Hi Bill,

Yes, Ray did well by adapting his method of using a good (more defocus-oriented) deconvolver.

Quote

The PSF used by FocusMagic, and how it is affected by the BlurWidth and Amount parameters, is not well documented. Does Amount determine the number of iterations or some other quantity? Restorations for defocus, diffraction, and lens aberrations such as spherical aberration require different PSFs.

That's correct, but then FocusMagic doesn't claim to be a cure for everything. The documentation leaves a bit to be desired, but on the other hand the preview makes it a quick trial-and-error procedure to find the best settings. What works well in most cases is to increase the amount and start increasing the radius. There comes a point where the resolution suddenly changes for the worse; just back up one click and fine-tune the amount.

Quote

As implied by its name, FocusMagic may use a PSF optimized for restoration of defocus. However, as your experiment demonstrates, decent results may be obtained with a PSF that is not optimal. A decent approximation may be sufficient.

I agree. The improvement will be quite visible anyway, and a bit of creativity may find an even better solution. As Ray's example showed, he came very close to an optimal scenario, and with less visible artifacts.

Quote

It was disappointing to learn that the PSF for RawTherapee is for Gaussian blur.

The program has an open development structure now, so who knows what the future has in store.

I tried Photoshop CS5 with these settings: [attachment=23361:Screen_s...20.29_PM.png]

And got this result: [attachment=23362:0343_Cro...ion_ekr1.jpg]

I'd suggest, from what I have read, that Smart Sharpen with 'Lens Blur' is also oriented to remove defocusing errors.

Best regards, Erik


The reason is that fewer iterations left more ringing artifacts, but one could opt for that compromise and try to deal with the artifacts in another way. After a few hundred iterations the ringing started to diminish, so I decided to give the PC a workout. Perhaps a larger kernel size would have allowed stopping earlier with less ringing, but a larger kernel would also increase the calculation time per iteration.

What software were you using to do the deconvolution? Is it damped RL? Adaptive? I would have thought the ringing could be controlled with damping using fewer iterations, but I'm no expert.

I had a look at what is going on in the frequency domain. Following are some log(Magnitude+1) plots (Fourier spectra):

The target, original crop: [attachment=23365:0343_Crop_Mag.png]

The MTF of the PSF: [attachment=23368:Airy9x9_Mag.png]

The crop+diffraction. Note the horizontal and vertical streaking (looks like interpolation?): [attachment=23366:0343_Cro...tion_Mag.png]

The crop you restored with Lucy: [attachment=23367:0343_Cro...1000_Mag.png]

I tried convolving the PSF and the original crop in Matlab and got this for the crop+diffraction: [attachment=23370:0343_cro...conv_Mag.png] It looks like there might be more high frequencies to restore in this one.
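For anyone without Matlab, the same log(Magnitude+1) spectra can be computed with NumPy; a minimal sketch (display scaling to an 8-bit image is left out):

```python
import numpy as np

def log_magnitude_spectrum(img):
    """log(|FFT2| + 1) with the DC component shifted to the center."""
    f = np.fft.fftshift(np.fft.fft2(img))
    return np.log1p(np.abs(f))
```

A successful restoration should show the high-frequency energy lifted back up in these plots relative to the blurred version.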

Here's the Matlab convolved crop+diffraction if anyone wants to play with it:

Hi Erik, we're into extreme pixel-peeping here, are we not? It appears that CS5 might now be doing a better job than Focus Magic.

As I mentioned, one of the critical areas in Bart's image which highlights the quality of the sharpening is that window nearest the ground, just to the left of the tree. It's clear there's a venetian blind there, so it's reasonable to deduce that the horizontal lines represent real detail and are not just artifacts. My sharpening attempt with FM has not done well in that section of the image. Bart's attempt with a single Richardson-Lucy restoration does the best job, yours next, and mine a poor third.

Such differences are best viewed at 300%. Here's a comparison at 300% so we all know what we're talking about. Bart's is first on the left, yours in the middle and mine furthest to the right. I added one more iteration of 1 pixel blur width at 50%, so the title should read 8x instead of 7x.

[attachment=23380:Comparis..._at_300_.jpg]

Okay! Let's now shift our gaze to the smooth blue surface at the top of the crop. What! Is that noise I see? Surely it must be! However, in my FM sharpened image, that plain blue section at the top is as smooth as a baby's bottom.

I guess we have trade-offs in operation here.

Out of curiosity, I tried sharpening Bart's image using ACR 6.1 with the following settings. Detail 100%, 0.5 pixels, amount 120%, no masking. (Masking reduces resolution.)

It's done an excellent job. So close to Bart's, I would say the differences are irrelevant. A 300% enlarged crop on the monitor represents a print size of the entire image of about 10 metres by 25 metres (maybe a slight exaggeration, but you get my point).