Thanks. Although it is a shot of a slanted edge that is required for an objective/accurate assessment of how the lens + aperture + Raw converter performs, the original file does open up some possibilities.

Quote

Of course a good prime doesn't need any capture sharpening. (This is an old Minolta 50/2.8 from eBay - 1970s?) The A350 CCD color is great. Digressing...

Well, in a way. Apparently the earlier image was not full size (2292 x 3444 pixels versus 3056 x 4592 pixels); I assume it was a test of my eyeballing capabilities. Now that the original data is available, I also arrive at a larger estimated blur radius of 0.9, still eyeballing here.

Quote

Look for the impact of any sharpening on the specular highlights in the bottom right amber color. Also the beer neck label around the o.

Including the choice of Raw converter in the equation adds another degree of freedom, and 12-bit/channel Raw data will be more sensitive to (quantization) noise amplification than 14-bit data, so noise removal will add even more uncertainty. The amber highlights are clipped in the Raw Red and Green channels, and there are some artifacts in the Raw data around the 'o' (perhaps re-mapped (Red?) sensels).

Deconvolution (can be) a linear filter. Sharpening (can be) a linear filter. A filter alters sample values by a weighted sum of the samples in its neighborhood. This behaviour can equivalently be described in the frequency domain: some frequencies are boosted, others attenuated.
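To make the "weighted sum of a neighborhood" point concrete, here is a minimal numpy sketch (the kernel values are illustrative, not taken from any particular converter): a USM-style kernel built as identity + amount x (identity - blur), applied to a step edge. It steepens the edge, but also produces the over/undershoot that becomes visible as halos.

```python
import numpy as np

# A step edge: 10 dark samples, then 10 bright ones.
signal = np.array([0.2] * 10 + [0.8] * 10)

# A small blur kernel (weights sum to 1) and an identity "kernel".
blur = np.array([0.25, 0.5, 0.25])
delta = np.array([0.0, 1.0, 0.0])

# Unsharp-mask-style kernel: identity plus amount * (identity - blur).
amount = 1.0
usm = delta + amount * (delta - blur)   # -> [-0.25, 1.5, -0.25]

sharpened = np.convolve(signal, usm, mode='same')

# The weighted sum boosts high frequencies: the edge gets steeper,
# but the output now overshoots the original range (halos).
print(sharpened.min() < signal.min(), sharpened.max() > signal.max())  # True True
```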

When you flatten the end-to-end frequency response, the PSF will usually tend to shrink (let's avoid minimum- vs maximum-phase complexity here).

Quote

A USM boosts contrast.

Claiming that "USM boosts contrast" suggests that it is a global operator working on pixels in isolation, like curves/levels, which it is not.

Quote

If you imagine an edge like a sine wave R-L increases the frequency of the wave. USM increases the amplitude.

I have no idea what you are trying to say here. Are you saying that R-L does frequency modulation? It does not.

http://en.wikipedia.org/wiki/Wiener_deconvolution

In other words: Wiener deconvolution tries to find an inverse filter, G, that enhances a signal corrupted by H by boosting the frequencies that were attenuated. However, frequency bands with poor SNR are not boosted.
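As a rough illustration of that idea (a simplified 1-D sketch, not the implementation of any particular Raw converter), one can build the Wiener filter G = H* / (|H|^2 + 1/SNR) with FFTs and watch it undo most of a known blur, while the 1/SNR term keeps poor-SNR bands from being boosted:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 256
x = np.zeros(n); x[100:140] = 1.0   # original signal (a bar)
h = np.zeros(n); h[:5] = 1 / 5      # blur kernel H (5-tap box)

# Degrade: convolve with h (circularly, via FFT) and add a little noise.
X, H = np.fft.fft(x), np.fft.fft(h)
y = np.real(np.fft.ifft(X * H)) + rng.normal(0, 0.01, n)

# Wiener inverse filter: boosts attenuated frequencies, but backs
# off wherever |H|^2 is small relative to the assumed 1/SNR.
snr = 100.0
G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
x_hat = np.real(np.fft.ifft(np.fft.fft(y) * G))

# The restored signal should be closer to the original than the blurred one.
print(np.mean((x_hat - x) ** 2), np.mean((y - x) ** 2))
```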

The difference is in my opinion not as fundamental as claimed in this thread. Both methods try to boost signal components that are assumed attenuated, while avoiding excessive noise amplification. The difference is in complexity: USM is a low-order filter with a small kernel. Deconvolution can be as high order as you like, and all of those parameters have to be known beforehand, or estimated manually or automatically.

• Image enhancement (convolution): apply heuristic procedures to manipulate an image to take advantage of the psychophysical aspects of the human visual system, e.g., edge enhancement, brightness/contrast by convolving the image with a high-pass filter, etc.
• Image restoration (deconvolution): attempt to recover an image that has been degraded using knowledge of the degradation phenomenon; model the degradation and apply the inverse process.

Hi h,

In an attempt to keep a bit of structure in this thread, may I suggest that a discussion about the relative merits of Deconvolution versus Unsharp Masking may be more productive in another thread. It was already demonstrated there that deconvolution is more effective at restoring resolution than USM, because deconvolution tends to shrink the spatially blurred features, whereas USM tends to boost the amplitude of the edge gradient while producing halo artifacts (an almost inevitable by-product of adding an inverted blurred copy, unless edge masks are used).

This thread is a bit more about a tool that helps find the optimal parameters for our various sharpening workflows, the radius control setting in particular.

This tool is the result of some of the questions in that other thread, where it was questioned whether a Gaussian PSF is a good assumption, given the shape differences between a PSF dominated by defocus, optical artifacts, and/or diffraction. As I stated there, the mix of the different types of blur tends to resemble a Gaussian blur, and my tool allows one to confirm that a Gaussian distribution does a pretty good job of characterizing the blur we find in our actual images. Call it empirical proof of that statement.

Seems that he is able to make his models and measurements of the D40 and D7000 sensel/OLPF/lens fit quite well. So does Nikon use 0.375 pixel pitch OLPFs?

Hi h,

It's an interesting model (which BTW doesn't account for residual lens aberrations, defocus, and a non-square sensel aperture), which may fit a particular situation. I'm not convinced it can be applied universally. It also doesn't account for the result after demosaicing, which is the basis for our Capture sharpening effort. However, as my tool shows for the cameras I've tested, and others have independently found for their cameras, in actual empirical tests the simple Gaussian model still describes the actual blur of an edge profile (Edge Spread Function, or ESF) very accurately:

The very slight mismatch at the dark end of the curve is caused by lens glare, not blur, and should be fixed with tone curve adjustments, not Capture sharpening. So the blur pattern from the entire imaging and Raw conversion chain can apparently be very well modeled by a simple Gaussian.
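For readers who want to try this kind of fit themselves, here is a simplified sketch of the idea behind such a tool (not the actual implementation): model the ESF as an error-function edge and pick the Gaussian sigma that best matches a measured profile. The synthetic "measured" edge and the noise level are assumptions for the demo.

```python
import numpy as np
from math import erf, sqrt

# Synthetic edge-spread function: a step blurred by a Gaussian of
# known sigma (in pixels), sampled with a little noise.
rng = np.random.default_rng(1)
true_sigma = 0.9
x = np.linspace(-5, 5, 201)    # distance from the edge, in pixels

def esf(s):
    """Gaussian ESF: the integral of a Gaussian PSF is an erf-shaped edge."""
    return np.array([0.5 * (1 + erf(xi / (s * sqrt(2)))) for xi in x])

measured = esf(true_sigma) + rng.normal(0, 0.005, x.size)

# Eyeball-free radius estimate: pick the sigma whose Gaussian ESF
# minimizes the sum of squared differences to the measured profile.
candidates = np.arange(0.3, 2.0, 0.01)
sse = [np.sum((esf(s) - measured) ** 2) for s in candidates]
best = candidates[int(np.argmin(sse))]
print(round(best, 2))   # recovers a value close to the true 0.9
```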

And there seems to be a theoretical explanation for that resemblance to a Gaussian-shaped blur pattern: the input (a cascade of blur sources) apparently comes close to satisfying the requirements of the Central Limit Theorem. It (loosely formulated) states that the sum of a number of independent distributions will resemble a normal distribution (which can be described by a Gaussian). The DSP Guide, a free on-line book about Digital Signal Processing, also has a nice example at the bottom of the linked page. It shows how rapidly a cascade of (even far from Gaussian) distributions converges to a Gaussian shape.
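That convergence is easy to demonstrate numerically. This small sketch cascades a decidedly non-Gaussian box blur a few times and compares the combined kernel to a Gaussian of matching mean and variance:

```python
import numpy as np

# Start from a decidedly non-Gaussian blur: a uniform (box) kernel.
box = np.ones(5) / 5.0

# Cascade blur sources by repeated convolution; per the Central Limit
# Theorem, the combined kernel heads toward a Gaussian shape.
kernel = box.copy()
for _ in range(3):                      # four boxes in total
    kernel = np.convolve(kernel, box)

# Compare against a Gaussian with the same mean and variance.
idx = np.arange(kernel.size)
mu = np.sum(idx * kernel)
var = np.sum((idx - mu) ** 2 * kernel)
gauss = np.exp(-(idx - mu) ** 2 / (2 * var))
gauss /= gauss.sum()

# After only four box blurs the residual is already small.
print(np.max(np.abs(kernel - gauss)))
```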

Besides the interesting theoretical model of the PSF shape of the OLPF+sensel, and the unknown Raw conversion exploitation of that input (which at least normalizes the MTF of the R/G/B channels despite the differences in sampling density), we also have to consider that the only practical tool most people have in their workflow is the Sharpening dialog panel of the Raw converter or image editor, which essentially only offers radius and amount as controls for the PSF shape to use. IMO it is therefore useful to determine as close a match to such a PSF shape as possible, and the formerly unknown blur radius is what we can now determine for that specific purpose.

My tool turns out to be so sensitive that it can detect the difference between the left and right side of a horizontal edge if the target was not shot perpendicular enough to the optical axis. It also detects differences within the DOF zone, and shows that there is only one plane of best focus. Luckily we need not, and even cannot, specify the radius to that degree of precision in the common sharpening interfaces. It can still help with more elaborate deconvolution sharpening algorithms, which allow one to specify the PSF kernel and to sharpen noise less than the actual signal, thus boosting the S/N ratio even further.

Just FYI, I came across this SpyderCheckr blog entry on what camera profiling can do to the appearance of sharpness, which I thought appropriate for this topic. It backs up what I've always thought about perceived sharpness in a digital capture.

As quoted:

Quote

It's often surprising to see that color correction does not just improve the colors; it improves the color detail, which results in a more detailed image; something we tend to associate with focus and lens quality, when it can actually be an artifact of incorrect color.

Note the image samples that back up the premise by scrolling down through the article.

This explains why I often don't sharpen at all on certain captures AFTER tone mapping, adding Clarity, and adjusting mainly the Saturation and Luminance sliders in HSL in order to get a realistic tonality in my DSLR images edited in ACR.

Did anyone in this thread factor that into the mathematical calculations presented here?


Hi,

Tonal separation, especially with saturated colors, is a bit of a different subject, although the result will be further enhanced by proper Capture sharpening. Calibrating the color rendition is the tool for that enhancement.

Capture sharpening enhances surface structure (and fine edge and line detail) regardless of the color. All non-smooth surfaces have a somewhat bumpy texture, sometimes gritty, sometimes smooth. Those bumps (if resolvable by the optics) will cause local specular reflections and shadows depending on the angle between the light source, the surface, and the viewer. When the image is not Capture sharpened, those specular reflections, and other microdetail, will have lower local contrast and thus cause a somewhat dull-looking surface. After Capture sharpening with the proper radius setting, that greyish mist will be lifted and the subject comes to life. It will also help the saturation of the areas next to the specular highlights a bit, by locally darkening those areas while lightening the highlights.

That's the point I was making with that link: the part of camera profiling involving HSL adjustment presets has a lot to do with luminance as well as color refinement.

The reduced saturation (the part of color tied to luminance) seen in those examples affects all aspects of edge definition and clarity, which is a large part of sharpness perception.

We don't judge an image's sharpness by honing in on small sections of surface texture to make sure they're sharp. We perceive sharpness in a scene by its overall appearance.

For example, I don't want to see a thick halo on the outer edge of an orange, but I do want to see its dimpled texture. You can only go so far in the sharpening stage before the two are not in agreement in this respect. Draw back the HSL luminance slider on the orange channel and note an increase in definition of the dimpled surface texture; now the orange looks sharper.

This is all about overriding the camera's entire capture system and actually injecting our calibration according to the human visual system.

All this math and toe/shoulder curve graphs don't connect the dots in distinguishing between the two.

The 5D Mark III sample images in the profiling link could be made to look even sharper with continued HSL adjustments and curves, without even touching a Sharpening slider. I've done it to my own images.

What you do is to manipulate contrast between hues. Also, once a color is oversaturated, there will be very little contrast in that color.

Keep in mind that the information has to be there. You cannot extract information in postprocessing that was never captured.

In my view, a methodical approach makes a lot of sense.

1) You first correct for problems caused by the imaging pipeline. Try to reconstruct the original image.

2) You apply creative modifications

3) You optimize the image for presentation

Color calibration belongs to the first step. Unfortunately, color calibration is a bit tricky. Sensitometrically correct colors tend to be dull. If you use Lightroom or ACR you could just check the different profiles under "camera calibration".

Erik, I'm not making the connection between your list of processing instructions and what I just pointed out: how can math and graphs come up with improvements to sharpening, considering the variables involved in reconstructing the image according to human perception, which goes beyond the camera's capture system?

It's equivalent to expecting consistent results with every image using a ruler to measure a lump of clay that's never quite consistent image to image, because of all the variables in front of and behind the camera.

After all that's been written on this subject, I still have no idea what a sharpened edge is supposed to look like after Capture sharpening viewed at 100%, which can't be seen anyway no matter the output. Any significant gain from sharpening viewed at 100% will never be seen or appreciated on any output device, be it a display or print, because we don't view images at that size except for editing.

The subtle improvements to sharpening shown in the samples here and in other tutorials I can never see applying to my images and viewing on output to the web or print. Since I don't know where the sharpening starting point lies for further sharpening to these two output mediums, I just sharpen once to get it to look good viewing the downsampled image destined for the web. If it looks bad and it usually does, go back and change the sharpening until it looks good. For print I just do a test print of a small section of the image.

Ok. I hope I do not come across as being defensive or aggressive. Here goes ....

I agree that my model (OLPF + diffraction + sensel aperture) is of limited real-world use. In particular, I specifically tried to avoid the whole demosaicing issue by working on only a single channel at a time, mostly because my blog is a record of my learning process, and it helps to separate the issues when you are still learning. I do agree with your main observation, i.e., that a practical approach to optimal sharpening should work on the demosaiced image (or perhaps that noise removal, demosaicing and capture sharpening should be addressed simultaneously).

I would like to point out, though, that comparing the empirical ESF to the theoretical (Gaussian) ESF by overlaying the plots is a little misleading. During my experiments on generating synthetic images with known PSFs I discovered that a more reliable method is to plot the difference between the ESFs. If your theoretical ESF is a good match for the empirical ESF, then their difference should look like white noise. If you can see any structure in the difference curve, then you still have some systematic error in your theoretical ESF.

In particular, from your posted plot, I can see a systematic error in the "knee" of the curve on both the light side, and the dark side (after compensating for glare it should still be there, if I had to guess). This may seem trivial in the ESF curve, but it can potentially have a large impact on the MTF. For that reason, I prefer to compare not only the ESFs, but also the MTFs.

The main point, though, is whether such a difference will have any practical significance. Personally, I think that RL deconvolution with a Gaussian is good enough for government work (as Numerical Recipes would say). Any potential gains from more accurate modelling of the PSF are going to be overshadowed by the practical difficulties of RL deconvolution (e.g., selecting the appropriate damping parameters). I also like the fact that the Gaussian is separable, and that I can use any number of libraries to efficiently perform the forward-blur step in the RL algorithm.

So my gut feeling is that the Gaussian PSF approximation is reasonable, but I still plan on working through the process more rigorously to obtain some quantitative data on the relative magnitude of the various errors we may introduce along the way.

Quote

Erik, I'm not making the connection between your list of processing instructions and what I just pointed out: how can math and graphs come up with improvements to sharpening, considering the variables involved in reconstructing the image according to human perception, which goes beyond the camera's capture system?

Hi,

Erik's list is exactly what this thread is about, namely using the right tools in the right order, and in particular Capture sharpening. That doesn't mean that a correct camera profile or exposure setting doesn't fit in the total workflow, but they aren't really the topic of this thread. They are used to avoid pushing certain tones/colors towards or past clipping, and to improve tonal separation. Very useful, but it isn't Capture sharpening.

This thread deals with that one aspect, Capture sharpening (assuming one embraces the concept of separating Capture/Creative/Output sharpening), and it (hopefully) shows that it is difficult to nail (or even guess, e.g. when using an extender) the correct settings by eye. What it also shows, for those who are willing to understand the principles, is that the traditional view on the use of the Detail panel or (Smart/USM) Sharpening filters misses one critical aspect that leads towards optimal quality: Capture sharpening is a hardware-oriented correction, not a subject-oriented one.

When we attempt to kill 2 birds with one stone, do Capture sharpening and Creative sharpening with one control, we will have to accept a compromise as to how much artifacting we are willing to accept. However, we don't have to! We can do localized Creative sharpening with an adjustment brush, and that will be based on fundamentally better data (even including tonal separations).

Quote

It's equivalent to expecting consistent results with every image using a ruler to measure a lump of clay that's never quite consistent image to image, because of all the variables in front of and behind the camera.

I respectfully think you are mistaken. With some time invested in figuring out this stuff, I can tell you that Capture sharpening is very predictable, because it is caused by our hardware, and the main parameter that drives our best possible sharpness is the aperture setting (besides focusing, obviously). The resulting Capture blur is directly related to the aperture value, for a given camera (due to OLPF/sensel pitch) and lens combination. And quality lenses more or less produce the same amount of blur. That means that the correction values can be simply tabulated and used based on EXIF aperture value (something the raw converter could do automatically, based on prior calibration).
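As a sketch of that tabulation idea (the radii below are made-up placeholders for illustration, not calibration data for any camera), a Raw converter could simply look up a previously measured radius from the EXIF aperture value:

```python
# Illustrative only: hypothetical calibration table mapping the EXIF
# f-number to a measured Capture-sharpening radius (Gaussian sigma, px).
CAPTURE_RADIUS = {
    4.0: 0.7,
    5.6: 0.8,
    8.0: 0.9,
    11.0: 1.1,
}

def radius_for(f_number):
    """Return the tabulated radius for the nearest calibrated f-stop."""
    nearest = min(CAPTURE_RADIUS, key=lambda f: abs(f - f_number))
    return CAPTURE_RADIUS[nearest]

print(radius_for(7.1))  # falls back to the f/8 entry -> 0.9
```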

Quote

After all that's been written on this subject, I still have no idea what a sharpened edge is supposed to look like after Capture sharpening viewed at 100%, which can't be seen anyway no matter the output. Any significant gain from sharpening viewed at 100% will never be seen or appreciated on any output device, be it a display or print, because we don't view images at that size except for editing.

You are actually making my point: it is hard to judge by eye what the optimal settings are ... That's why I made this tool, to get a handle on the matter and inject some objectivity. Is the tool perfect? No, far from it, but it is only the start of what's to come.

Quote

The subtle improvements to sharpening shown in the samples here and in other tutorials I can never see applying to my images and viewing on output to the web or print.

I'm not sure if you embrace the concept of Capture sharpening as a separate step in the sharpening workflow, but Capture sharpening is intended only to correct or compensate for the losses linked to the capture process. It is not intended as a final product for viewing on the web or in print. It's more like casting a solid foundation on which to build the final construction.

Quote

Since I don't know where the sharpening starting point lies for further sharpening to these two output mediums, I just sharpen once to get it to look good viewing the downsampled image destined for the web. If it looks bad and it usually does, go back and change the sharpening until it looks good. For print I just do a test print of a small section of the image.

If you are happy with that workflow, then by all means stick to that. Many people do. I'm not forcing anybody to improve their quality, it's all voluntary. All I'm offering is insight in the fundamental process, and a tool to assist. As the saying goes; You can lead a horse to water, but you can't make it drink.

Quote

Ok. I hope I do not come across as being defensive or aggressive. Here goes ....

Hi Frans,

Not at all. I welcome your view because I know you have studied (an ongoing process) the underlying principles in depth. I like your blog posts, and think it is useful to make models that can help us understand the fundamentals, or see where the model doesn't agree with empirical evidence. In the scientific approach it is important to build and then test a hypothesis (and a rejected hypothesis is still a positive result).

Quote

I agree that my model (OLPF + diffraction + sensel aperture) is of limited real-world use. In particular, I specifically tried to avoid the whole demosaicing issue by working on only a single channel at a time, mostly because my blog is a record of my learning process, and it helps to separate the issues when you are still learning. I do agree with your main observation, i.e., that a practical approach to optimal sharpening should work on the demosaiced image (or perhaps that noise removal, demosaicing and capture sharpening should be addressed simultaneously).

Well, there is a lot going on during the demosaicing process that we cannot control (unless we create our own converter). There are some things that could be done at the Raw level, e.g. Lateral Chromatic Aberration correction, which could produce more accurate demosaicing, but that is not trivial to do. So what we can do is deal with the bare conversion as best we can.

Quote

I would like to point out, though, that comparing the empirical ESF to the theoretical (Gaussian) ESF by overlaying the plots is a little misleading. During my experiments on generating synthetic images with known PSFs I discovered that a more reliable method is to plot the difference between the ESFs. If your theoretical ESF is a good match for the empirical ESF, then their difference should look like white noise. If you can see any structure in the difference curve, then you still have some systematic error in your theoretical ESF.

I fully agree. However, there will almost certainly be some systematic residual signal in the difference curve. For example, our optics will exhibit a certain amount of glare, which affects the darkest signals most because it's a locally fixed amount of signal that's added, as shown in the chart I posted. Yet the benefit of using a rather simple model (which happens to characterize the real issue quite well) is that those deviations become clear. If I had used an adaptive, e.g. polynomial, curve fit, then the difference curve would be closer to white noise. But that would not be an accurate model for both highlights and shadows.

Quote

In particular, from your posted plot, I can see a systematic error in the "knee" of the curve on both the light side, and the dark side (after compensating for glare it should still be there, if I had to guess). This may seem trivial in the ESF curve, but it can potentially have a large impact on the MTF. For that reason, I prefer to compare not only the ESFs, but also the MTFs.

I do as well, but it's not as intuitive for many photographers who are less seasoned in reading such very helpful charts. The average photographer relates much more easily to edge sharpness in the spatial domain. Another thing is that the tool I made available only attempts to fit a single Gaussian sigma (which characterizes the majority of the blur), because we are only offered a single radius setting in our sharpening tool. Earlier research showed that a combination of Gaussians produces an even better fit, but to use that, one needs the ability to supply one's own deconvolution kernel, and not many software packages facilitate that in a simple interface.

Quote

The main point, though, is whether such a difference will have any practical significance. Personally, I think that RL deconvolution with a Gaussian is good enough for government work (as Numerical Recipes would say). Any potential gains from more accurate modelling of the PSF are going to be overshadowed by the practical difficulties of RL deconvolution (e.g., selecting the appropriate damping parameters). I also like the fact that the Gaussian is separable, and that I can use any number of libraries to efficiently perform the forward-blur step in the RL algorithm.

Indeed. While not perfect, the Richardson-Lucy deconvolution still offers a very useful improvement, also for normal (terrestrial) imaging, even with a single Gaussian PSF as input (as long as it's a good Gaussian PSF).
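For the curious, the core of the Richardson-Lucy iteration is compact enough to sketch in a few lines. This is a bare-bones 1-D version (no damping or regularization, and a symmetric PSF is assumed so the flipped-PSF correlation equals a convolution), not production code:

```python
import numpy as np

def gaussian_kernel(sigma, radius=4):
    """Normalized 1-D Gaussian PSF."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def richardson_lucy(blurred, psf, iterations=30, eps=1e-12):
    """Basic 1-D Richardson-Lucy deconvolution; psf assumed symmetric."""
    estimate = np.full_like(blurred, blurred.mean())
    for _ in range(iterations):
        reblurred = np.convolve(estimate, psf, mode='same')
        ratio = blurred / (reblurred + eps)       # data / model
        estimate = estimate * np.convolve(ratio, psf, mode='same')
    return estimate

# A positive test scene: two nearby ridges, blurred by a Gaussian PSF.
truth = np.full(64, 0.1)
truth[[28, 34]] = 1.0
psf = gaussian_kernel(1.0)
blurred = np.convolve(truth, psf, mode='same')

restored = richardson_lucy(blurred, psf)
# Deconvolution pulls the smeared ridges back toward their original
# narrow shape, so the error versus the truth shrinks.
print(np.abs(restored - truth).mean() < np.abs(blurred - truth).mean())  # True
```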

Quote

So my gut feeling is that the Gaussian PSF approximation is reasonable, but I still plan on working through the process more rigorously to obtain some quantitative data on the relative magnitude of the various errors we may introduce along the way.

Looking forward to your findings. The more the merrier, as they say. I offer one consideration for your analysis, and that is to look at the differences between a regular Gaussian PSF, and one based not on point sampling of the Gaussian but on (sensel aperture) area sampling. Especially with the apparently small sigmas we encounter with good optics (0.7 or 0.8 is common at the optimal aperture), there will be a noticeable difference in the resulting PSF kernels and the resulting restoration. Both of the following kernels (crude integer versions, with limited amplitude and support dimensions) have the same Gaussian as basis:

Sigma=0.7, fill-factor=point-sample

0   2   4   2   0
2  33  92  33   2
4  92 255  92   4
2  33  92  33   2
0   2   4   2   0

Sigma=0.7, fill-factor=100% area-sample

0   3   8   3   0
3  45 108  45   3
8 108 255 108   8
3  45 108  45   3
0   3   8   3   0

Just something to consider, since we both know that a more accurate PSF will lead to a more accurate restoration.
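The two kernels above can be reproduced from first principles. This sketch builds both the point-sampled and the (100% fill factor) area-sampled 5x5 kernels for sigma = 0.7, scaling them so the center tap is 255:

```python
import numpy as np
from math import erf, sqrt

sigma = 0.7
r = np.arange(-2, 3)   # 5x5 support

# Point sampling: evaluate the Gaussian at each sensel center.
g1 = np.exp(-r.astype(float) ** 2 / (2 * sigma ** 2))
point = np.outer(g1, g1)

# Area sampling (100% fill factor): integrate the Gaussian over each
# sensel's 1x1 aperture using the error function.
a1 = np.array([0.5 * (erf((x + 0.5) / (sigma * sqrt(2)))
                      - erf((x - 0.5) / (sigma * sqrt(2)))) for x in r])
area = np.outer(a1, a1)

# Scale both so the center tap is 255, as in the integer kernels above.
to_int = lambda k: np.rint(k * 255 / k[2, 2]).astype(int)
print(to_int(point)[2])   # center row: 4 92 255 92 4
print(to_int(area)[2])    # center row: 8 108 255 108 8
```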

Thank you very much for making this tool available, and for your efforts explaining it. If I were not using it to determine the quality of my whole pipeline including the printer, but just to calculate the radius for the deconvolution in Raw Developer, would it be necessary to print the target, or would it be sufficient to shoot it displayed on screen?

Hi Hening,

Most displays are pretty low resolution (approx. 100 PPI), compared to the highest quality of inkjet printers (600-720 PPI). That means that to challenge the optical system to the same degree, you would need to shoot it from a 6x longer distance (6 x 25 = 150x focal length). That's not very practical.

The printer is only used to produce a very high resolution target, to challenge the optics and make sure that the target is not the weakest link in the test setup. It's not the quality of the printer that is being tested, although if it produces low resolution targets, that may influence the lens/camera results at the nearer end of the recommended shooting distance range of 25 x focal length. My target does have a few tools incorporated in the design to judge the print quality, so it can alternatively also be used for that purpose.

Thank you for your very fast reply. My longest lens is 85 mm; 150x the focal length is 12.75 m, and I can almost muster that for a single shot. That is more practical for me than having to send the file to a print service (I should have added that I do not print myself). Is the plan-parallelity between the camera sensor and the display screen very critical?

Quote

Is the plan-parallelity between the camera sensor and the display screen very critical?

Hi Hening,

There is no need to exaggerate it. While the analysis tool will pick up small differences along the edges, we're talking about decimal fractions that you probably will not be able to enter in the Raw converter anyway. Defocus will have a larger impact, so try to nail that as well as practically possible.

Also remember that your Raw converter's contrast/curve settings should be the same that you normally use, and for the analysis there should be no sharpening applied to the output file that's going to be analysed.