I have noticed that noise gets amplified and have even considered starting a discussion about that. Also, I would point out that it's not my job but LR's job. What's very nice about LR is that it doesn't break the parametric workflow. LR/ACR have some measures to contain noise (masking and others), but they may look artificial.

I guess that both Focus Magic (which I used before) and Ps/CS5 Advanced Sharpen mostly handle defocus errors. Eric Chen described the Lens Blur PSF used in CS5's Advanced Sharpen as: "The Gaussian is basically just that, and the Lens Blur is effectively simulating a nearly circular aperture (assuming even light distribution within the aperture, very unlike Gaussian)."
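To make the distinction concrete, here is a minimal sketch (my own construction, not Adobe's actual kernels) contrasting a uniform-disc "lens blur" PSF with a Gaussian PSF:

```python
import numpy as np

def disc_psf(radius, size):
    """Uniform circular-aperture PSF: constant inside the disc, zero outside."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    k = (x**2 + y**2 <= radius**2).astype(float)
    return k / k.sum()

def gaussian_psf(sigma, size):
    """Gaussian PSF: smoothly decaying from the centre, no hard edge."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    k = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return k / k.sum()

disc = disc_psf(3, 9)         # flat-topped: even light distribution in the aperture
gauss = gaussian_psf(1.5, 9)  # peaked at the centre, trailing off smoothly
```

The disc is flat-topped (even light within the aperture) while the Gaussian falls off smoothly from the centre, which is why a deconvolver tuned for one can perform poorly on blur produced by the other.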

Regarding your test with FM, I'd suggest that using a larger radius may be better; you could perhaps try a blur width of 2 and use your iterative technique. Ideally the deconvolver would use a correct PSF. I assume that blur width is closely related to the width of the PSF.

I have tested a little bit with FM and larger "blur radius" but have not yet found what's optimal in my eyes.

I agree that FM doesn't seem to enhance noise, whereas LR seems to do so if no noise suppression is used. Noise suppression may counteract what we try to achieve. Masking does not necessarily reduce resolution; it decides which areas are to be sharpened, so you would choose the mask to keep sharpening on detail but suppress sharpening in smooth areas, like the blue paint. With intensive/excessive sharpening, the transition between masked and unmasked areas may be ugly.

Then we need to keep in mind that when we process the image for printing we may rescale and sharpen for output, and the printer will also add processing of its own. A complex world we live in.

Anyway, a week ago I didn't know LR had deconvolution although I was pretty sure that PS/CS5's Advanced Sharpen actually had some deconvolution going on. So now we start utilizing techniques that we didn't even know we had.

Also, we need to keep in mind that there are a lot of deconvolution algorithms around and all are not created equal.

Best regards,
Erik

Quote from: Ray

Hi Erik,

We're into extreme pixel-peeping here, are we not? It appears that CS5 might now be doing a better job than Focus Magic.

As I mentioned, one of the critical areas in Bart's image, which highlights the quality of the sharpening, is that window nearest the ground, just to the left of the tree. It's clear there's a venetian blind there, so it's reasonable to deduce that the horizontal lines represent real detail and are not just artifacts. My sharpening attempt with FM has not done well in that section of the image. Bart's attempt with a single Richardson-Lucy restoration does the best job, yours next and mine a poor third.

Such differences are best viewed at 300%. Here's a comparison at 300% so we all know what we're talking about. Bart's is first on the left, yours in the middle and mine furthest to the right. I added one more iteration of 1 pixel blur width at 50%, so the title should read 8x instead of 7x.

[attachment=23380:Comparis..._at_300_.jpg]

Okay! Let's now shift our gaze to the smooth blue surface at the top of the crop. What! Is that noise I see? Surely it must be! However, in my FM sharpened image, that plain blue section at the top is as smooth as a baby's bottom.

I guess we have trade-offs in operation here.

Out of curiosity, I tried sharpening Bart's image using ACR 6.1 with the following settings: detail 100%, radius 0.5 pixels, amount 120%, no masking. (Masking reduces resolution.)

It's done an excellent job. So close to Bart's, I would say the differences are irrelevant. A 300% enlarged crop on the monitor represents a print size of the entire image of about 10 metres by 25 metres (maybe a slight exaggeration, but you get my point).

Apologies for hijacking this thread a little bit, but personally I'm just curious whether de-convolution sharpening and the evolution of computational imaging might eventually overcome much of the problem with diffraction (and if this has already been discussed I also apologize; I only skimmed through the thread, seeing how most of it is above my pay grade).

I would assume it would be much more challenging than resolving the issues from an AA filter, since it would require each individual lens design to be carefully tested, then some method to apply that information to the file, and it would perhaps require data for every possible f/stop and, for zoom lenses, for specific zoom settings. But it seems the theory of restoring the data as it is spread to adjacent pixels isn't much different from what happens with an AA filter.

I know I have many times stopped down to f/22 (or further) and smart sharpen seems to work quite well, even when printing large prints.
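As a footnote on the diffraction case: unlike lens aberrations, the diffraction blur at a given aperture is well approximated analytically. For a circular aperture it is the Airy pattern, whose first dark ring sits at a radius of 1.22 λN. A quick back-of-the-envelope sketch (the standard formula; the numbers are not from this thread):

```python
# First zero of the Airy diffraction pattern: r = 1.22 * wavelength * N
wavelength_um = 0.550                  # green light, in micrometres
for N in (8, 16, 22):
    r_um = 1.22 * wavelength_um * N    # radius of the first dark ring
    print(f"f/{N}: first Airy zero at {r_um:.1f} um")
```

At f/22 that radius is about 15 µm, several pixels on a typical sensor, which is consistent with stopped-down shots responding so well to sharpening.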

Ray,

Masking does not necessarily reduce resolution; it decides which areas are to be sharpened, so you would choose the mask to keep sharpening on detail but suppress sharpening in smooth areas, like the blue paint. With intensive/excessive sharpening, the transition between masked and unmasked areas may be ugly.

Erik,

I understand that's the principle. But in practice the result may be different. In my experiment to achieve maximum clarity in the venetian blinds, any tinkering with the 'masking' slider in ACR reduced that clarity. Try it for yourself.

Out of curiosity, I tried sharpening Bart's image using ACR 6.1 with the following settings: detail 100%, radius 0.5 pixels, amount 120%, no masking. (Masking reduces resolution.)

It's done an excellent job. So close to Bart's, I would say the differences are irrelevant. A 300% enlarged crop on the monitor represents a print size of the entire image of about 10 metres by 25 metres (maybe a slight exaggeration, but you get my point).

[attachment=23381:ACR_6.1_..._Bart__s.jpg]

I would say all the examples other than Bart's show much more Gibbs' phenomenon ('ringing' artifacts along sharp edges); look at the edges of the white window frames, for example. Though it looks like Bart's ringing is longer range (perhaps the result of all those iterations).

I would say all the examples other than Bart's show much more Gibbs' phenomenon ('ringing' artifacts along sharp edges); look at the edges of the white window frames, for example. Though it looks like Bart's ringing is longer range (perhaps the result of all those iterations).

Hi Emil.

Much more Gibbs phenomenon??

Here's a 400% crop comparison between Bart's sharpened result and ACR 6.1. Could you point out any significant ringing artifacts along edges, which are apparent in the ACR sharpened image but not in Bart's?

The most significant differences I see between the two images are a few faint horizontal lines on the blue paint-work at the top of the crop, which are apparent in Bart's rendition but not in the ACR rendition.

I suppose if one were examining an image of some distant planet, then such faint lines might be of great significance (assuming they are not software-generated artifacts).

I would say all the examples other than Bart's show much more Gibbs' phenomenon ('ringing' artifacts along sharp edges); look at the edges of the white window frames, for example. Though it looks like Bart's ringing is longer range (perhaps the result of all those iterations).

Those 1000 iterations of Bart's RL deconvolution were not without benefit. The Gibbs phenomenon is well demonstrated with the slanted-edge and line-spread plots of Imatest. The illustration on the left shows no sharpening (left) and sharpening with Focus Magic, blur width 50, amount 150 (right). The line-spread plot is for the Focus Magic image.

[attachment=23388:003CompSh.png] [attachment=23392:lineSpread.png]

The dangers of pixel peeping are well demonstrated by looking at the actual images of the target. The maze pattern is in the area of Nyquist. The benefits of sharpening when looking at the overall image are, IMHO, most pronounced in the low frequencies, which appear to have much better contrast. This is because the contrast sensitivity function (CSF) of the eye peaks at the relatively low resolution of 8 cycles/degree, which corresponds to 1 cycle per mm for a print viewed at a distance of 34 cm (about 13.5") [Bob Atkins]. If you zoom in to look at high frequencies, the contrast in the low frequencies may be missed, and aliasing artifacts are quite apparent. Artifacts above Nyquist are shown both by the Imatest analysis and by the actual image.
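The cycles/degree-to-cycles/mm conversion quoted above is simple geometry; a minimal sketch, assuming the 34 cm viewing distance from the post:

```python
import math

viewing_distance_mm = 340.0   # about 13.5 inches
peak_cpd = 8.0                # CSF peak in cycles/degree, as quoted

# width on the print subtended by one degree at that viewing distance
mm_per_degree = 2.0 * viewing_distance_mm * math.tan(math.radians(0.5))
cycles_per_mm = peak_cpd / mm_per_degree   # about 1.3, i.e. on the order of 1 cycle/mm
```

The exact figure comes out slightly above 1 cycle/mm, so the "1 cycle per mm" in the post is a round-number approximation.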

Here's a 400% crop comparison between Bart's sharpened result and ACR 6.1. Could you point out any significant ringing artifacts along edges, which are apparent in the ACR sharpened image but not in Bart's?

The most significant differences I see between the two images are a few faint horizontal lines on the blue paint-work at the top of the crop, which are apparent in Bart's rendition but not in the ACR rendition.

[attachment=23391:400__crop.jpg]

They both have ringing artifacts. Bart's have more side lobes, yours have a stronger first peak and trough. It was that initial over- and under-shoot that I was referring to when I wrote "much more" -- the initial amplitude is stronger. Though that longer tail of side lobes can be more of a problem in some places -- see the white sliver next to the left side of the tree trunk near the bottom.
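The initial over- and undershoot described here is easy to reproduce on a synthetic 1-D step edge; a minimal sketch using an unsharp-mask-style high-pass boost (a generic illustration, not any particular converter's pipeline):

```python
import numpy as np

edge = np.concatenate([np.zeros(20), np.ones(20)])   # ideal step edge

# blur the edge, then boost the difference between it and a re-blurred copy
kernel = np.ones(5) / 5.0
blurred = np.convolve(edge, kernel, mode='same')
amount = 1.5
sharpened = blurred + amount * (blurred - np.convolve(blurred, kernel, mode='same'))

# the blurred ramp stays within [0, 1], but the sharpened edge now
# overshoots above 1 and undershoots below 0: that first peak and
# trough is the ringing visible along the white window frames
```

Raising `amount`, or stacking iterations, widens and deepens those lobes, which matches the longer-range tails seen in the heavily iterated result.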

They both have ringing artifacts. Bart's have more side lobes, yours have a stronger first peak and trough. It was that initial over- and under-shoot that I was referring to when I wrote "much more" -- the initial amplitude is stronger. Though that longer tail of side lobes can be more of a problem in some places -- see the white sliver next to the left side of the tree trunk near the bottom.

The white sliver is more natural in the ACR image on the right, right?

I would say all the examples other than Bart's show much more Gibbs' phenomenon ('ringing' artifacts along sharp edges); look at the edges of the white window frames, for example. Though it looks like Bart's ringing is longer range (perhaps the result of all those iterations).

The "ringing" in deconvolution may not be the Gibbs effect; rather, it may be caused by inaccurate modeling of the image noise and the PSF.

Quote from: bjanes

The Gibbs phenomenon is well demonstrated with the slanted edge and line spread plots of Imatest.

The Gibbs phenomenon is dependent upon the metric used to measure it. For example, the L1 norm has higher immunity than the L2 norm. So unless the metric is specified, the information is incomplete.

In his comparison of the new Leica S2 with the Nikon D3x, Lloyd Chambers (Diglloyd) has shown how deconvolution sharpening (more properly, image restoration) with the Mac-only raw converter Raw Developer markedly improves the micro-contrast of the D3x image to the point that it rivals that of the Leica S2. Diglloyd's site is a pay site, but it is well worth the modest subscription fee. The Richardson-Lucy algorithm used by Raw Developer partially restores detail lost to the presence of a blur filter (optical low-pass filter) on the D3x and other dSLRs.

Bart van der Wolf and others have been touting the advantages of deconvolution image restoration for some time, but pundits on this forum usually pooh-pooh the technique, pointing out that deconvolution techniques are fine in theory but in practice are limited by the difficulty of obtaining a proper point spread function (PSF) that enables the deconvolution to undo the blurring of the image. Roger Clark has reported good results with the RL filter available in the astronomical program ImagesPlus. Focus Magic is another deconvolution program used by many for this purpose, but it has not been updated for some time and is 32-bit only.

Isn't it time to reconsider deconvolution? The unsharp mask is very mid-20th-century and originated in the chemical darkroom. In many cases decent results can be obtained by deconvolving with a less-than-perfect, empirically derived PSF. Blind deconvolution algorithms that automatically determine the PSF are being developed.
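For those curious what the iteration actually looks like, the core Richardson-Lucy update is only a few lines. A bare-bones sketch, with none of the noise damping or regularisation that production implementations add:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=30):
    """Bare-bones RL deconvolution: multiplicative updates that move the
    estimate toward the maximum-likelihood image under Poisson noise."""
    estimate = np.full_like(observed, 0.5)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, psf, mode='same')
        ratio = observed / np.maximum(reblurred, 1e-12)
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode='same')
    return estimate
```

Blurring a synthetic image with a known PSF and running a few dozen iterations measurably re-steepens the edges; run the same loop with a wrong PSF, or on noisy data, and it amplifies noise and ringing instead.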

if de-convolution sharpening and the evolvement of computational imaging might eventually overcome much of the problem with diffraction.

If computational resources and a large amount of memory are available, then it is possible to have closed-form solutions under some circumstances, instead of the iterative procedures in the type of deconvolution being discussed here. This has to do with the structure of block-Toeplitz and circulant matrices, which are reduced to the usual convolution problems in iteration-based deconvolution procedures. However, the amount of memory required would be huge (terabytes for typically sized images these days) and is perhaps not currently a possibility for home users.
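A sketch of the non-iterative route being described: with periodic boundary conditions the blur matrix is block-circulant, the 2-D DFT diagonalises it, and deconvolution collapses to a per-frequency division (regularised Wiener-style here, since a plain inverse explodes where the transfer function is near zero). The FFT does the diagonalisation implicitly, which sidesteps storing the enormous matrix, though real images violate the periodic model at their borders. This is my own illustration, not the method of any product in this thread:

```python
import numpy as np

def wiener_deconvolve(observed, psf_kernel, k=1e-9):
    """Closed-form deconvolution for circular (periodic) blur: the DFT
    diagonalises the circulant blur matrix, so we divide per frequency,
    with k regularising near-zeros of the transfer function H."""
    H = np.fft.fft2(psf_kernel, s=observed.shape)
    G = np.fft.fft2(observed)
    F = np.conj(H) * G / (np.abs(H)**2 + k)
    return np.real(np.fft.ifft2(F))
```

In the noiseless, exactly-circulant case this recovers the original almost perfectly in a single pass; with real noise, k must be raised, trading ringing against noise amplification.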

Quote from: Ray

Much more Gibbs phenomenon??

Unfortunately, the Gibbs phenomenon, which produces ringing-like effects, is usually mistaken for ringing produced by convolution (or deconvolution) operations. They are not the same in general, and the typical ringing associated with image restoration has been mistakenly identified with the Gibbs phenomenon in this thread.

Unfortunately, the Gibbs phenomenon, which produces ringing-like effects, is usually mistaken for ringing produced by convolution (or deconvolution) operations. They are not the same in general, and the typical ringing associated with image restoration has been mistakenly identified with the Gibbs phenomenon in this thread.

Well, Joofa, you obviously appear to know what you are talking about. I confess I have almost zero knowledge of the Gibbs phenomenon, but I can appreciate that it may be useful to be able to identify and name any artifacts one may see in an image, especially if one is examining an X-ray of someone's medical condition, or indeed searching for evidence of alien life on a distant planet.

I wouldn't attempt to argue points of physics or mathematics with the eminent Emil Martinec. However, when Emil implies that my ACR 6.1 'detail enhancement' has significantly more ringing artifacts than Bart's Richardson Lucy rendition, I'm plain confused. I just don't see it; at least not at 400% enlargement.

Here's the comparison again.

[attachment=23419:400__crop.jpg]

As it appears to me, the edges of the white sliver at the bottom left of the tree have slightly more noticeable ringing artifacts in Bart's image. Furthermore, if one examines the plain blue area at the top of the crop, immediately above the uppermost white bar, one can see 4 or 5 faint horizontal lines in Bart's image, but only one line in the ACR image (excluding the very dark edge adjoining the white bar, which is apparent in both crops).

I presume these faint lines in the blue paint-work are ringing artifacts, but I'm not certain. Perhaps those faint blue lines actually exist in the paint-work. If I were a doctor examining an X-ray, I'd be concerned about such matters.

As a matter of interest, I tried another sharpening experiment using Focus Magic. Those who are familiar with this program will know that there are several options for different types of image source. 'Digital Camera' is the default and the one I used earlier, but at the bottom of the list is 'Forensic'. It sounds as though that option would produce a better result at restoring detail, and so it does.

[attachment=23420:FM_Foren...mparison.jpg]

Showing 300% crops (above) of the same part of the image, Focus Magic is now doing a much better job in delineating the individual slats of the blind. Bart's image has the edge regarding the clarity of those slats, but I think one could say the FM (forensic) image displays slightly lower noise in the blue area at the top. The crop on the far right is my first result using the default 'Digital Camera' source. The blue paint-work is clearly much smoother, but the detail of the slats much worse, even non-existent. Trade-offs again.

I wouldn't attempt to argue points of physics or mathematics with the eminent Emil Martinec. However, when Emil implies that my ACR 6.1 'detail enhancement' has significantly more ringing artifacts than Bart's Richardson Lucy rendition, I'm plain confused. I just don't see it; at least not at 400% enlargement.

Since I have a dog in this hunt (being involved in ACR capture sharpening and PhotoKit Sharpener), I've avoided this thread like the plague... but I will say this: I keep my ear to the ground, and short of exotic deconvolution algorithms (but with no easy-to-use plug-ins) with generally well-known PSFs, I'm not sure that any theoretical image "restoration" or detail sharpening via deconvolution is really and truthfully useful for general photography.

Yes, there may be technical solutions to image processing that un-blur motion blur to the point where you "might" be able to use facial recognition software to identify a person, or un-blur a license plate number so that when someone blows past a red light you can send them a ticket... I suspect England would LOVE to be able to un-blur speeders so they can send a bill for going over the limit.

But the fact that ACR 6.1 comes real close (and perhaps arguably with less ringing) to a 1K iteration of deconvolution processing should tell you something...the other side of the fence is NOT always a whole lot greener...

Yes, it's useful to keep pushing the limits of image processing. Take a look at ACR 6.1 Process 2010 and the new noise reduction (and lens corrections)...part of the 2010 Process is tweaking of the sharpening blend and radius precision...and of course, radically better noise reduction.

Some people seem hellbent on looking for computational correction of things that really, should be taken care of in selecting the optimal shutter speed and aperture for a given shot and then use the proper combination of capture, creative and output sharpening for the image.

I know the research is ongoing... I welcome it! I've spent a nice, eye-opening time at MIT looking at a variety of doctoral dissertations heading in a bunch of different directions... and cool stuff DOES come from MIT (think Seam Carving, AKA Content-Aware Scaling). But seriously, the thought that deconvolution image restoration is the ultimate solution to all of photography's woes is, well, SciFi, as in zooming into the image of the photo in Blade Runner or the way the CSI "Enhance" filter seems to work.

Rather than looking towards and hoping for the future, I think it would be generally more useful for people to really learn how to use the tools they already have to advance their images... but ya know, that's just me.

We now have it on Eric Chan's authority that, when the detail slider is cranked up to 100%, the sharpening in ACR 6 is deconvolution based. So what hairs are you splitting to distinguish it from deconvolution sharpening?

We now have it on Eric Chan's authority that, when the detail slider is cranked up to 100%, the sharpening in ACR 6 is deconvolution based. So what hairs are you splitting to distinguish it from deconvolution sharpening?

Well, that was a bit of a surprise to me...

But I would ask again, what did a 1K iteration deconvolution do that ACR 6.1 couldn't do (except add ringing effects)?

But I would ask again, what did a 1K iteration deconvolution do that ACR 6.1 couldn't do (except add ringing effects)?

I think you are fixating on the particular implementation (that Bart used) rather than considering the method in general. Typically most of the improvement to be had with RL deconvolution comes in the first few tens of iterations, and the method can be quite fast (as it is in RawTherapee, FocusMagic, and RawDeveloper, for instance). A good implementation of RL will converge much faster than 1K iterations. It's hard to say what is sourcing the ringing tails in Bart's example; it could be the truncation of the PSF, it could be something else. I would imagine that the dev team at Adobe has spent much more time tweaking their deconvolution algorithm than the one day that Bart spent working up his example.

But I would ask again, why do you want to throw dirt on deconvolution methods if you are lavishing praise on ACR 6.1?