We have great Raw conversion and sharpening tools at our disposal, but it is not always clear which settings will objectively lead to the best results. Human vision is easily fooled, e.g. by differences in contrast, so finding the optimal settings by eye may not be easy. Especially for 'Capture Sharpening' it is important to get it as accurate as possible. When we don't sharpen enough, we leave image quality on the table; when we overdo it, we face the consequences. If we, for example, produce large format output or need to crop heavily, we may discover distracting halos, because at a larger output magnification our eyes have an easier task distinguishing between the actual detail and the artifacts.

Regardless of the exact sharpening method used, one of the sharpening parameters is usually a radius setting, which controls how wide an area around each pixel is going to influence that central pixel's brightness, and thereby how much contrast will be added to the local micro-detail. Ideally we only want to restore the original image's sharpness as it was before it got blurred by lens aberrations, diffraction, the AA-filter, Raw conversion, etc. Creative sharpening is considered by many to be a separate process, best applied locally. The radius control is the most important one to get right, regardless of the sharpening method we use. The actual sharpening method may influence the amount we need to apply, but the radius is pretty much a physical given for a certain lens and sensor combination.

Now, wouldn't it be nice to have a tool to objectively determine that optimal radius setting? Well, now there is such a tool, the 'Slanted Edge evaluation' (http://bvdwolf.home.xs4all.nl/main/foto/psf/SlantedEdge.html) tool, and it makes use of the 'slanted edge' features that can be found in a number of test charts (such as the one I proposed here (http://www.openphotographyforums.com/forums/showthread.php?t=13217)).

I've made it a web-page based tool, which can therefore also operate on modern smartphones, and it allows you to objectively determine that optimal sharpening radius. Unfortunately, the basic functionality of HTML web pages doesn't allow reading and writing arbitrary user-selected image files on client-side computers, so some manual input, e.g. of pixel brightness values, is required - but it's a free tool, so who could complain. You could try to ask for your money back if you don't like it, but with enough support I might actually make a commercial version available; we'll see.

This new tool works by making a model of the blur pattern. That model is essentially based on the shape of a Gaussian bell curve, which actually has a pretty good overall correspondence with the more complex Point Spread Function (PSF). Such a PSF is a mathematical model which not only characterizes the blur pattern, but also allows one to invert the blur effects and restore the original sharp signal.
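For readers who want to see what such a Gaussian blur model looks like in practice, here is a minimal numpy sketch of a discrete, normalized Gaussian PSF (the function name and the 3-sigma kernel cutoff are my own choices, not part of Bart's tool):

```python
import numpy as np

def gaussian_psf(sigma, radius=None):
    """Discrete, normalized 2-D Gaussian PSF on a pixel grid.

    sigma  -- blur radius in pixels (what the slanted-edge analysis reports)
    radius -- kernel half-width; 3*sigma covers >99% of the blur energy
    """
    if radius is None:
        radius = int(np.ceil(3 * sigma))
    x = np.arange(-radius, radius + 1)
    psf = np.exp(-np.add.outer(x**2, x**2) / (2.0 * sigma**2))
    return psf / psf.sum()    # normalized so overall brightness is preserved

kernel = gaussian_psf(0.725)  # a blur radius like the f/4.5 example in this thread
print(kernel.shape)           # (7, 7)
```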

Actually, those who use the Imatest software already have some great capability to simplify the data collection process, because it can analyse image files directly, even Raw files. Part of the trick is in figuring out how to interpret the output results, and convert them to input for this tool.

However, this new tool continues where most analysis tools stop: it not only gives feedback in the form of a (Capture) sharpening radius to use, but it also allows you to produce a discrete deconvolution kernel based on the prior analysis. There are free tools available on the internet (e.g. ImageJ (http://rsbweb.nih.gov/ij/download.html), or ImageMagick (http://www.imagemagick.org/script/index.php)) that let you use such a kernel to apply deconvolution sharpening to images that were similarly blurred (same lens, aperture, and Raw conversion) as the test file that was used to determine the kernel.
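As a hedged illustration of what 'producing a discrete deconvolution kernel' can look like, here is one standard construction in Python: a Wiener-style damped inverse of a Gaussian PSF, computed via the FFT and returned as a small spatial kernel. This is not necessarily the construction Bart's tool uses, and the `eps` damping value is an assumption chosen to keep noise amplification bounded.

```python
import numpy as np

def gaussian_psf(sigma, n=7):
    """Normalized n x n Gaussian PSF, centered on the middle pixel."""
    r = n // 2
    x = np.arange(-r, r + 1)
    g = np.exp(-np.add.outer(x**2, x**2) / (2.0 * sigma**2))
    return g / g.sum()

def deconv_kernel(sigma, n=7, eps=0.05):
    """Wiener-style damped inverse of a Gaussian PSF, as a spatial kernel."""
    psf = gaussian_psf(sigma, n)
    H = np.fft.fft2(np.fft.ifftshift(psf))         # PSF centered at the origin
    K = np.conj(H) / (np.abs(H) ** 2 + eps)        # damped inverse filter
    k = np.real(np.fft.fftshift(np.fft.ifft2(K)))  # back to a centered kernel
    return k / k.sum()

k = deconv_kernel(0.725)   # a blur radius like the f/4.5 example in this thread
print(k.shape)             # (7, 7) -- small enough to paste into ImageJ
```

Convolving a similarly blurred image with `k` then approximately undoes the Gaussian blur in the spatial domain.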

How to use the results of the analysis? The easy way to use it is by copying the optimal radius that results from the analysis to your sharpening tool. You can then optimize the other parameters, knowing that any resulting artifacts are caused by overdoing the amount or other settings. Also, when the resolution drops after adjusting the other parameters, you'll know that you are applying too much noise correction or are using too strong masking. Just re-analyze the same test image after the additional processing and compare the results if you want an objective verdict.

A more advanced use of the analysis involves the creation of a deconvolution filter kernel from the blur radius parameter and using that kernel to deconvolve the image, or similar (same lens/aperture/camera/raw processing) images. One can also re-analyse the initial test image after an initial deconvolution, and determine whether (a) subsequent run(s) with a different filter kernel further improve the result. It will, if the original blur is a combination of different but similarly strong sources of blur.
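The reason a second pass can help is that independent Gaussian blurs compose in quadrature: sigma_total = sqrt(sigma1^2 + sigma2^2), so removing one component still leaves a residual blur to re-measure. A quick numerical check of the quadrature rule (scipy assumed; the measurement via a second moment is my own shortcut):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Blur an impulse twice with sigma = 0.6 each time ...
signal = np.zeros(51)
signal[25] = 1.0
blurred = gaussian_filter1d(gaussian_filter1d(signal, 0.6), 0.6)

# ... and measure the width of the result via its second moment.
x = np.arange(51)
measured_sigma = np.sqrt(np.sum(blurred * (x - 25) ** 2))
print(round(measured_sigma, 2))   # ~0.84, close to sqrt(0.6**2 + 0.6**2) ~ 0.85
```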

I will be adding some before/after examples of what can be achieved with the analysis results, but feel free to experiment with it and ask questions about how to use it for specific situations.

Here I have attached 2 crops, from a Canon EOS 1Ds Mark III image that was shot with the EF 100mm f/2.8L Macro lens at f/4.5 at a little over 13 metres distance. Tripod, mirror lockup, and focused with Live View and a loupe. I tried to shoot when the wind didn't move the branches too much.

The Raw conversion was done with ACR 7.1 for the unsharpened version, and that TIFF file was then deconvolved with ImageJ, using a single deconvolution filter kernel derived from a blur radius of approx. 0.725. That radius was determined earlier to characterize the f/4.5 blur pretty well (see the attached chart).

Even with the 0.7 Radius as a given, it was very difficult to find the other optimal ACR Capture sharpening parameters by eye, but using a deconvolution filter also takes that guesswork out of the equation.

Thanks for making this tool, and for providing the associated explanations. It will be interesting to see if a custom deconvolution filter kernel, derived from a specific camera, provides visibly better sharpening than the generic sharpening filters in commercial raw converters like LR, ACR, CNX, or C1.

Those of us who have workflows built around those products may not want to add additional steps, like passes through ImageJ or ImageMagick, in order to obtain a theoretically optimal sharpening of every image. But some experiments with your method might enable us to see what an optimal sharpening looks like, and to choose radius and amount settings in our favorite converters which come close. I think that there are quite a few people (including some internet celebrities) who are over-sharpening their files, and could use a reality check. OK, so I suppose sharpening is ultimately a matter of taste, but I prefer my images chewy but not crunchy.

Bart, thanks for the information! I'll start with a very simple question: this is for capture-level sharpening, yes? If a two-tiered sharpening technique is used, does that affect this calculation?

Hi Eric,

The tool can be used for many things, and it offers a lot of insight for those who invest some time. As you can see in the crop example I posted earlier, it also restores the so-called 3D look that is sometimes attributed to some camera platforms. It's all about restoring the original input data, the MTF input, as much as the system MTF allows.

Step one would be to optimize Capture sharpening, when used on an unsharpened Raw conversion of a Slanted Edge shot. Without Raw conversion sharpening, we'll get the baseline that the lens/aperture/camera system produces (assuming good focus). That would be the time to determine optimal capture sharpening, since that would create a much better starting point for the Creative and Output sharpening steps.

It also doesn't produce halos and, because it only restores (amongst other things) the highlights, it shouldn't produce clipping either. If the highlights clip after sharpening, then they were too bright to begin with, because only the original signal is restored. Nothing is exaggerated; only restoration takes place.

So to answer your question: for a two-tiered sharpening approach, this would be the basis. One word of caution: the contrast and tonality settings during the Raw conversion do influence the sharpening result. Therefore it would be optimal to incorporate this blur radius we found earlier as early in the Raw conversion process as possible, and be systematic about it. Hence its suitability for Capture sharpening. All it does is restore capture losses; subsequent sharpening will have a better foundation.

Do note that if one routinely adjusts the tonality, e.g. bumps the contrast a bit or adds an S-curve, then it would make sense to also do that on a Slanted Edge conversion, so its effect will be incorporated in the Blur Radius analysis.

In Photoshop this would become my Background layer, without avoidable blur and halo, with the maximum quality pulled out of the capture system. Using a Smart object would still allow me to return to certain Raw conversion related settings, such as WB and minor tonality adjustments, or spot removal. Subsequent Creative sharpening, or output sharpening after resampling to the output size, should only add emphasis to elements that help to better get our creative intentions across. I like using High-Pass filter layers to do such targeted resolution adjustments. Anyhow, with proper Capture sharpening we can spend more time on the creative aspects without having to worry about artifacts being 'enhanced' by further processing, because there are virtually no artifacts to start with.

But it doesn't have to stop there. In fact, not only can it be used to find the sweet-spot aperture for a lens (in case the highest resolution is required), but it also allows one, over time, to build a 'database' (or look-up table) of Radius parameters that can be reused at will. Determine the parameters once, use them on many occasions. When done systematically, one can also exchange findings, or at least get started with settings that are in the ballpark.

It can also be used for improving our large format output. One can e.g. produce an ImageMagick script that upsamples our halo-free, Capture-sharpened image by a fixed factor of 2, or 3, or whatever, and automatically deconvolves to restore part of the upsampling blur. When a perfect slanted edge (e.g. a crop of my test target) is used for that upsampling step, we can create a deconvolution kernel that restores the original lower-resolution data as well as possible. It even allows one to detect flaws in the upsampling algorithms (many add halos).
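The 'many add halos' remark is easy to verify numerically: interpolation-based upsamplers overshoot at hard edges. A small scipy sketch, using a cubic-spline zoom as a stand-in for a typical resampler:

```python
import numpy as np
from scipy.ndimage import zoom

# A perfect step edge, upsampled 2x with cubic-spline interpolation
edge = np.repeat([0.0, 1.0], 16)
up = zoom(edge, 2, order=3)

# Overshoot outside the original 0..1 range is exactly the halo
# that over-eager upsamplers add around hard edges.
print(float(up.max()), float(up.min()))
```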

Quote

Thanks for making this tool, and for providing the associated explanations. It will be interesting to see if a custom deconvolution filter kernel, derived from a specific camera, provides visibly better sharpening than the generic sharpening filters in commercial raw converters like LR, ACR, CNX, or C1.

Hi Julian,

You're welcome. I'm confident that a deconvolution filter kernel will be at least as good as a generic sharpening filter, but likely it will be better. It just depends on how much better, and if that justifies the effort. There's always a trade-off, but perfect quality requires putting in at least some effort.

Quote

Those of us who have workflows built around those products may not want to add additional steps, like passes through ImageJ or ImageMagick, in order to obtain a theoretically optimal sharpening of every image. But some experiments with your method might enable us to see what an optimal sharpening looks like, and to choose radius and amount settings in our favorite converters which come close. I think that there are quite a few people (including some internet celebrities) who are over-sharpening their files, and could use a reality check.

I agree, not everybody needs or wants to go the extra mile. However, as you say, it will allow some reality checks and it may also get some of the established industry moving in the right direction. Meanwhile, the solutions are available for those who need them.

Quote

OK, so I suppose sharpening is ultimately a matter of taste, but I prefer my images chewy but not crunchy.

Yes, (Creative) sharpening is very much a matter of taste, and so it should be. However, what we do not need are the quality losses that are inherent in the Capture process and the Output process, and the good news is that we can restore some of those losses, which improves our creative options without having to fear unwanted artifacts.

Here is another example of Capture sharpening only, with the use of the optimal Radius.

I've shot the same spruce tree cone scene as I showed earlier, but this image was taken with an f/16 aperture (instead of f/4.5). That obviously did increase the DOF which might be required for artistic reasons, but there is a small price to be paid due to diffraction. On my 1Ds3 camera with its 6.4 micron sensel pitch, visible diffraction sets in at f/7.1 and gets progressively worse at narrower apertures. I've attached both the before and after Capture sharpening crops at the end of this post. It's clear that the unsharpened f/16 shot needed more help than the earlier f/4.5 one.

However, by using optimal Capture sharpening, most of the lost detail will be restored, and we should get an almost identical result to base our Creative sharpening on. And indeed, besides some unrecoverable loss in micro contrast, the Capture sharpened images look almost identical:

(http://bvdwolf.home.xs4all.nl/temp/LuLa/8307+8312_ImageJ.jpg)

The f/4.5 image was deconvolved with a kernel for a 0.725 blur radius, and the f/16 image was deconvolved with a kernel for a 1.037 blur radius. Despite, or rather because of, the different blur radii, the resulting images look almost the same and form a good foundation for almost identical further processing. That's another benefit of optimal Capture sharpening: the new 'calibrated' or restored baseline allows a more unified approach for further processing. Both images have the more 3D look restored, with e.g. a similar amount of glossiness on the needles.

To allow a quick start exploration of the tool, I've prepared a text file with some data which can be copied and pasted into the application, here (http://bvdwolf.home.xs4all.nl/temp/LuLa/IJProfilePlot_8104.txt). You can save it with a right mouse click, or just copy it from your screen. The data was collected with ImageJ.

The two x,y coordinates on the edge are given at the start, and then there are 3 columns of data (one for each color channel). You can copy and paste the numerical data of one channel at a time in the tool's textbox (right click, and use 'select all', before pasting new data over existing data), and click 'Calculate sigma'.

In addition to the single Blur Radius number, that should also create comma- and space-delimited columns of data that can be copied and pasted into e.g. MS Excel. There the text can be separated into columns of numbers with a heading, with the Data|Text to Columns menu function, where you select delimited data and check the comma and space delimiters.

That is of course only needed if you want to further analyse or compare the data, or produce e.g. such a chart of the data:

(http://bvdwolf.home.xs4all.nl/temp/LuLa/8104_GreenProfile.png)

BTW, that chart shows how well a Gaussian approximation can fit the actual edge profile of an unsharpened Raw conversion of a Slanted Edge image.

There is also a pretty close correlation between the Red/Green/Blue channels (sigmas of 0.757/0.762/0.758), which shows how the Bayer CFA demosaicing of mostly Luminance data produces virtually identical resolution in all channels. Since Luminance is the dominant factor for the Human Visual System's contrast sensitivity, it also shows that we can use a single sharpening value for the initial Capture sharpening of all channels. Only at the Creative sharpening stage would we focus some more attention on localized sharpening of predominantly Red or Blue surfaces.
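For those curious what 'Calculate sigma' does mathematically: fitting a Gaussian edge-spread function amounts to fitting an error-function step to the measured profile. A sketch of that fit with scipy (the tool's actual implementation is not public, and the parameter names here are my own):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

# Edge-spread function of a step blurred by a Gaussian PSF
def esf(x, x0, sigma, lo, hi):
    return lo + (hi - lo) * 0.5 * (1.0 + erf((x - x0) / (sigma * np.sqrt(2))))

# Synthetic edge profile with sigma = 0.76 (like the green channel above)
x = np.arange(-6.0, 7.0)
y = esf(x, 0.0, 0.76, 0.05, 0.95)

popt, _ = curve_fit(esf, x, y, p0=(0.0, 1.0, 0.0, 1.0))
print(round(popt[1], 3))   # 0.76: the fitted sigma recovers the blur radius
```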

Suppose you want to only use ACR7 or LR4 for sharpening, because you don't want to make round-trips to external applications. How to approach the iterative process of finding the best default settings for a given camera/aperture/lens combination? Here follows a suggestion of how I would do it.

Use your common Exposure, Contrast, etc. settings in the Basic settings panel and in the Tone Curve panel. With the introduction of Process Version 2012, all Basic controls also have an influence on the tone curve, and contrast in general has an influence on the Sharpening settings. Perform a white balance on the gray areas of the target, and adjust the overall exposure/brightness of the image so the chart's gray background comes out as medium gray values. For now, keep these settings the same for the following conversions; we only want to change a single parameter at a time.

1. You first start by generating an unsharpened Raw conversion of a Slanted Edge target shot. This requires setting the Amount slider on the Detail panel to zero. A 16-bit/channel TIFF output will give the most accurate results, but 8-bit/channel data will also give correct (only slightly less accurate) results.

2. Then use your preferred procedure to collect the edge angle coordinates and transition data from the converted result. It helps if you use a procedure that allows you to plot the edge transition pixel values, because we also want to visually interpret the shape of the edge transition.

3. Copy/paste the data into the Slanted Edge analysis tool's textbox, and click the 'Calculate sigma' button. This will calculate the Blur Radius we should use in the Detail settings panel. On the file I'm currently using, with my preferred settings, that gives a result of sigma=0.6635332701693427. I'll enter the closest possible setting in the Detail panel for Radius, 0.7. This determines one of the interdependent variables, and we can start changing the other settings one by one.

4. I'll start with an initial setting of the Detail slider to 50. This sets the sharpening method to a blend of 50% USM-like sharpening and 50% Smart Sharpen-like deconvolution. Now I'll make a new Raw conversion where only the Amount slider is changed, e.g. to 20. This Raw conversion, saved as a TIFF, is again analysed (make sure to use the exact same edge transition area in each conversion), and the Slanted Edge tool now reports a Blur radius of 0.4518385151528749, and a 10-90% edge rise of 1.16 pixels. A perfectly sharp image would have a 10-90% edge rise of a little less than 1.0, and a Blur radius of a bit under 0.39. So it seems I can increase the Amount setting a bit.

5. A Raw conversion with an Amount of 30 results in a Blur radius of 0.3710860886167325 and a 10-90% rise of 0.95 pixels, which is only slightly oversharpened; a graphic plot of the ESF shows a very minor amount of highlight halo, but also an increase in the shadow noise (see attachment).

6. Therefore I decided to decrease the Detail slider to 35, and try again. This resulted in a Blur radius of 0.41275946448178497 and a 10-90% edge rise of 1.06; in other words, a slight undersharpening.

7. I boosted the Amount control to 35 and did a new analysis on the TIFF conversion. This time it resulted in a Blur radius of 0.3876845834047897 and a 10-90% edge rise of 0.99 pixels, almost perfect (see attachment).

This produces a pretty good Capture sharpening, although the shadow noise did increase a bit more than I like. Maybe it would be wise to find a setting with an even lower Detail value, but with the Amount boosted a bit further. Since this was a relatively wide aperture, and thus a low-diffraction shot, there wouldn't be too much deconvolution benefit to be gained anyway.
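The 10-90% figures quoted in the steps above follow directly from the Gaussian edge model: the rise equals about 2.5631 times sigma. A quick check that reproduces the numbers from steps 4 and 7:

```python
import numpy as np
from scipy.special import erfinv

# For a Gaussian ESF, the 10-90% edge rise is a fixed multiple of sigma:
# rise = 2 * sqrt(2) * erfinv(0.8) * sigma
rise_per_sigma = 2 * np.sqrt(2) * erfinv(0.8)
print(round(rise_per_sigma, 4))    # 2.5631

# Reproduce the numbers from the steps above:
for sigma in (0.39, 0.4518, 0.3877):
    print(sigma, round(rise_per_sigma * sigma, 2))   # 1.0, 1.16, 0.99
```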

I'll repeat the procedure a bit later for a file with much more diffraction, to see if the Detail slider helps without boosting noise as much.

It would be great to have a little more detail on how to perform steps like

Quote

Then use your preferred procedure to collect the edge angle coordinates and transition data from the converted result.

for those of us who have never done this before, please.

Thanks.

Hi,

The two points on the edge that define its angle can be picked with any image viewer that reports pixel coordinates and pixel values. It's described when you click the question mark icon of the first step on the Slanted Edge analysis tool page (http://bvdwolf.home.xs4all.nl/main/foto/psf/help1.html).

You can also click on the question mark icon of the second step (http://bvdwolf.home.xs4all.nl/main/foto/psf/help2.html). That will open a new tab or webpage where I describe how to collect the pixel values with the use of ImageJ, a free JAVA-based image processing utility. There may be other tools available that allow recording the pixel values of a row of pixels, and it would be nice if people would share such information.

I have a preference for ImageJ because it can do a lot more (which I also use) than is strictly needed for this functionality, but I understand that it represents an additional learning curve. So if anybody can recommend another utility, built-in method, Photoshop plug-in, or Lightroom module to collect the pixel values for copying and pasting, please share that tip.

Hope that answers your question, but don't hesitate to ask for further clarification if needed.

Cheers,
Bart

P.S. I've found that this method (http://books.google.nl/books?id=T8l-SVgI0NMC&pg=PA284&lpg=PA284&dq=%22line+profile%22+photoshop&source=bl&ots=iHTMfXNbfG&sig=djl2qBc8i5xsOCGIjGcFHjfVPkw&hl=nl&sa=X&ei=MBbkT57EAcml0QWKjYGTCQ&ved=0CE8Q6AEwAQ#v=onepage&q=%22line%20profile%22%20photoshop&f=false) for use in Photoshop Extended works, but it requires saving and converting the text output before it can be copied and pasted into the webpage tool. Make sure to set a lower spacing than described there because we want to sample each pixel, no pixels skipped.

This is great! As a long-time user of ImageJ (NIH Image, written in Pascal!) I am loving this insightful tool - there are plug-ins for deconvolution but trying to get them adapted to the kind of processing your method targets is tricky business and not very straightforward.

Thanks for the effort. Now I need to print your test chart and get to work analyzing....

Quote

This is great! As a long-time user of ImageJ (NIH Image, written in Pascal!) I am loving this insightful tool - there are plug-ins for deconvolution but trying to get them adapted to the kind of processing your method targets is tricky business and not very straightforward.

Hi Kirk,

The difficulty with many of those deconvolution plugins is that they require a significant level of prior knowledge from their users. In contrast, my tool and the link to a Gaussian-based PSF kernel generator can produce an almost perfect deconvolution kernel for the spatial domain. From there it is just a matter of copying and pasting that kernel output into ImageJ's Process|Filters|Convolve... menu. That will perform the deconvolution in the spatial domain, which is only less efficient for larger image sizes, but otherwise should give the same result (without the need for 'abstract' concepts like regularization parameters) as deconvolution in the frequency domain (after a Fourier conversion).
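Bart's point that a spatial-domain convolution gives the same result as the frequency-domain route is just the convolution theorem. A small 1-D numpy check:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(16)
h = rng.random(16)

# Direct circular convolution in the spatial domain ...
direct = np.array([sum(x[m] * h[(n - m) % 16] for m in range(16))
                   for n in range(16)])

# ... equals multiplication in the frequency domain (convolution theorem)
via_fft = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))
print(np.allclose(direct, via_fft))   # True
```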

The only issue spoiling the fun so far is that there seems to be a small bug in the ImageJ Convolver function code: it seems to offset the resulting image a few pixels to the left (I'll raise a bug report with the programmers; I'll first have to sign up for their forum).

Quote

Thanks for the effort. Now I need to print your test chart and get to work analyzing....

Thanks, and you're welcome. One could use the Slanted Edge features on e.g. the DPReview resolution charts, but they are usually already sharpened (with sub-optimal settings), so that would not give a useful zero baseline. Also, since contrast influences the sharpening requirements, I indeed recommend using one's own preferred Raw conversion method, but without any sharpening (or 'optimising' in-camera presets). My test chart at least helps to get accurate focus, and eliminating defocus from the equation is obviously an important prerequisite for addressing the other, unavoidable, blur sources.

I do understand that the need to do some preparations (shooting targets and collecting edge transition data) is a significant obstacle, but the rewards are sweet ... It is also rewarding in that it exposes the flaws in many current sharpening/resampling solutions, and thus offers ways to reduce/overcome those flaws. The use of a free web-based tool obviously comes with some disadvantages compared to a commercial software product, but I cannot give everything away for free ...

I printed your target on my ancient Epson R800 and shot an aperture sequence in whole stops from 2.8 to 22 for the combination of the Canon 5DmkII and the Zeiss 50mm MakroPlanar. I have gone through evaluating the 2.8, 4.0 and 22 apertures and I am really impressed with the results. Thus far, I have not exercised the full range of the tool, but I can appreciate the detailed data that one can generate to analyze the effect of the sharpening radius (sigma) and the resulting deconvolution kernel. I shot the target and then a test scene in the same light with the same setup, composed of objects of various textures and frequencies.

For the apertures analyzed so far, I determined sigma and the deconv kernel. I shot in relatively diffused light. My raw converter (Raw Photo Processor) performs no sharpening. I took the RGB image into ImageJ and converted the green channel to grayscale to perform the angle and ESF measurements. I then applied the kernel to the original target image and repeated the measurements. As expected, the 10-90% rise typically improves from 2-3 pixels to within 1 to 1.5 pixels. I can see that one would want to perform this exercise for all lenses and have that database on hand for automating deconvolution on keeper images.

Very cool. I would be happy to make a donation via Paypal, as this is a very useful tool, regardless of the workflow. Ages ago I actually had a working knowledge of the then NIH Image macro language. Time to dust off the cobwebs, perhaps...

Quote

I printed your target on my ancient Epson R800 and shot an aperture sequence in whole stops from 2.8 to 22 for the combination of the Canon 5DmkII and the Zeiss 50mm MakroPlanar. I have gone through evaluating the 2.8, 4.0 and 22 apertures and I am really impressed with the results.

Hi Kirk,

One of the benefits of doing these things oneself is that it lets you learn so much more about the image quality. It also becomes (even more) clear that different apertures require different Capture sharpening to achieve the best quality, and that the resulting differences in quality between apertures can be minimized.

Quote

I took the RGB image into ImageJ and converted the green channel to grayscale to perform the angle and ESF measurements. I then applied the kernel to the original target image and repeated the measurements. As expected, the 10-90% rise typically improves from 2-3 pixels to within 1 to 1.5 pixels.

Yes, that's commonly the case. A single iteration will usually not achieve the ultimate goal of an approx. 1 pixel ESF edge rise, but it can come close. One could repeat the deconvolution with the newly found blur Radius after one iteration for even sharper results, but that increases the risk of noise amplification and mild halos. A single deconvolution minimizes such risks to some noise amplification, which can be mitigated by performing a mild noise reduction before Capture sharpening. The analysis also allows one to improve one's up/down-sampling output quality. The potential improvement on large format output is impressive, but may also reveal the need for a better algorithm for that purpose. Even Photoshop's Bicubic Smoother, for instance, falls kind of flat on its face when the image was sharpened in addition to Capture sharpening ...

Quote

I can see that one would want to perform this exercise for all lenses and have that database on hand for automating deconvolution on keeper images.

It can also become apparent that most of one's lenses (assuming quality lenses) exhibit similar behavior. Peak performance with many lenses can be found some 2 stops down from wide open, and narrow-aperture diffraction is known to progressively deteriorate the resolution, thus leading to larger sigmas. Of course nothing beats the accuracy of testing one's own lenses, but it does require a bit of work.

Quote

Very cool. I would be happy to make a donation via Paypal, as this is a very useful tool, regardless of the workflow. Ages ago I actually had a working knowledge of the then NIH Image macro language. Time to dust off the cobwebs, perhaps...

Is it my imagination, or is there a slight color shift in kirkt's processed images? Seems like the sharpened versions are a bit more saturated. Will the decon kernel do that?

Deconvolution will only have an influence on local saturation when it lifts the veil of blur and reveals the micro-detail of the original material structure. It attempts only to restore the original colors and brightness at the pixel level. As Kirk's examples also demonstrate (look at the fibres of the stitches and the surface of the leather), the restoration of small specular reflections and shadows in dents produces a dramatically more realistic rendering of the material structure. And that is only due to Capture sharpening; we haven't even begun to augment that with Creative sharpening, should we want to stress certain features.

Quote

As for a commercial version of the tool - I vote for a Mac product.

I was afraid of that; it would mean a significantly more complex programming effort. Maybe I'll do it in JAVA instead; that should also allow it to run on even more platforms.

If you only have the resources & inclination for one platform, then the best of all possible worlds might be Windows, but standards-compliant (no Microsoft funny business): make it work under the Wine libraries, so Mac users could use it without opening themselves to all the malware risks of a full-blown Windows OS installation. And please not Java — the security risks of the recent Flashback malware have made many Mac users disable Java completely.

My previous "by eye" capture sharpening was, at best, hit or miss. I have been, admittedly, struggling with my sharpening workflow - ad hoc-ing it as best as I could visually approximate, based on some assessment of detail frequency. I've read Real World Sharpening, done all that - it just turns out that my eye is not so good sometimes. Sometimes it worked great; other times I knew my early attempts in the workflow to compensate for optical softness caused issues that only got propagated and amplified as I continued in post.

I have been testing the capture sharpening optimization on both target images with accompanying test scenes, as well as previously shot images that, serendipitously, contained relatively useable slanted edges of high contrast. I have been applying the deconvolution kernel in ImageJ as well as simply using the Gaussian sigma as the radius for raw conversion sharpening in ACR7.1 (I have not tried in DXO or Raw Developer yet). The results are significantly better and more predictable.

Moreover, once the proper capture sharpening is dialed in I find two additional benefits:

1) Less need for noise reduction. Some ACR NR balances the proper sharpening radius, and I can leave more "grain" - i.e., less luminance NR. This appears to be the result of not having to clobber the incorrectly sharpened image with NR to get a result. I may have been able to achieve this balance previously, but with this new tool it is a matter of applying a sharpening amount, as opposed to juggling radius, amount, detail, and masking, do-si-do'ing around and around until it looked right. Also, the tendency with capture sharpening is to always use a very small, sub-pixel radius. This makes sense intuitively, but is often not the optimal amount. Using Bart's approach takes the guesswork out and provides an efficient method for removing this inherent bias in my capture sharpening starting point.

2) Once the proper capture sharpening is applied, subsequent image up- or downsizing is less plagued by artifacts, and final output sharpening is virtually halo-free. This is particularly noticeable compared to my "by eye" hit-or-miss attempts. Using PK Sharpener is much more predictable, especially on significantly downsized images that were particularly susceptible to aliasing artifacts at low resolution and narrow output sharpening.

The combination of more original "grain" permitted to pass into the raw conversion and a proper radius gives a much more usable image with fewer "corrections" required to pull out the output-optimized sharp image.

In reply to Bart's previous comments about multi-pass evaluation and deconvolution - what is particularly cool is that the ESF for a doubly deconvolved image can demonstrate the potential over-sharpening that can occur, showing up as overshoot in the edge spread profile. See the attached plot as an example of the effect of multi-pass deconvolution - I guess we're all shooting for a critically damped capture sharpening!

It is pretty clear to me that I am going to learn a lot about how I have been subtly destroying my images at the most crucial point in their life - raw conversion!

Just when I think I know a little bit about something, I learn a little more and realize I have a lot to learn.

That's what I love about this stuff.

kirk

PS - I'm a Mac user, but I'm used to having to kluge together a workflow, so whatever you choose to implement, I'll adopt and adapt.

My ImageJ is only 32-bit and crashes when accessing my network share files.

The 64-bit version seems to want to install an old Java SDK, that I'd prefer to avoid.

You can also install a version of ImageJ which uses an existing Java installation, but you'll have to edit some paths in the setup file that's used when IJ starts. I just used a complete installation, which installs the JRE in a subdirectory of ImageJ. I don't think it touches an existing Java version/installation, but you'd have to check the documentation for that.

Quote

How does your approach compare with TopazLabs InFocus? That works quite well at detecting the optimal radius, but it does introduce a lot of artefacts that undermine the quality of the result.

Topaz Labs' InFocus has several modes to choose from. The Generic and the Out of Focus ones also require a Radius input. People usually set the Blur Radius too large, which will result in artifacts. Now you have a means of knowing the correct radius to use. The 'Unknown/Estimate' method of the plugin's deconvolution requires zooming in on some detail in a narrow DOF zone. If the chosen area includes too many clues from different DOF zones, it will get confused and generate artifacts.

My method just uses a single (optimal-radius) PSF to build a deconvolution kernel. If speed is less of an issue, my method can in principle also use a weighted average of several PSFs, but for the web version I wanted to avoid time-out issues with scripts that run too long. It is also possible to add a Gamma adjustment to the calculations, but with most Raw converters we don't have the luxury of deciding when to sharpen until after the Raw-converted result has already been encoded with a gamma other than 1.0. So I skipped that option as well, also to keep the user interface simple.
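For readers who want to experiment with such a kernel themselves, here is a minimal sketch of building a normalized 2-D Gaussian PSF kernel from a measured sigma. The function name, the 3-sigma cutoff, and the normalization are my own illustrative choices, not necessarily how Bart's tool constructs its kernel:

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """Build a normalized 2-D Gaussian PSF kernel, e.g. for use as a
    convolution/deconvolution kernel in a tool like ImageJ."""
    if radius is None:
        radius = int(np.ceil(3 * sigma))  # cover ~99.7% of the distribution
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()  # weights sum to 1, so overall brightness is preserved

# Example: a kernel for a measured blur radius (sigma) of 0.8 pixels.
k = gaussian_kernel(0.8)
```

For a sigma of 0.8 this produces a 7x7 kernel; larger measured radii produce proportionally larger kernels.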

One can also use the Richardson-Lucy deconvolution sharpening in RawTherapee, which also uses a Radius as input parameter.

My previous "by eye" capture sharpening was, at best, hit or miss. I have been, admittedly, struggling with my sharpening workflow - ad hoc-ing it as best as I could visually approximate based on some assessment of detail frequency. I've read the real world sharpening, done all that - it just turns out that my eye is not so good sometimes. Sometimes it worked great, other times I knew my early attempts in the workflow to compensate for optical softness caused issues that only got propagated and amplified as I continued in post.

I have been testing the capture sharpening optimization on both target images with accompanying test scenes, as well as previously shot images that, serendipitously, contained relatively useable slanted edges of high contrast. I have been applying the deconvolution kernel in ImageJ as well as simply using the Gaussian sigma as the radius for raw conversion sharpening in ACR7.1 (I have not tried in DXO or Raw Developer yet). The results are significantly better and more predictable.

Hi Kirk,

I'm glad you apparently have also come to the conclusion that a good sharpening Radius starting point will benefit the quality and predictability of our technical image quality, from the start to its final state. I had also thought of myself as being reasonably good at finding the optimal settings for capture sharpening, until I started to create a level playing field by actually 'removing' the physical blur component.

While true deconvolution should always provide superior quality, it is true that even ACR/LR and others benefit from a (more) correct choice of the Radius control. What struck me most about the "Real World Image Sharpening" book was the continuing attempt to reduce the visibility of the resulting halos, instead of preventing them to begin with ... I'm a strong believer in Prevention being better than Cure.

Quote

Moreover, once the proper capture sharpening is dialed in I find two additional benefits:

1) Less of the need for noise reduction.

What's interesting about the deconvolution side of the situation is that the weighted-average contribution of surrounding pixels, say 48 or more per new pixel value for each channel, should also have a bit of an averaging effect on the per-pixel noise. Of course the central pixel's own noise will still dominate the resulting pixel value, so noise will increase (noise is related to signal) as we boost the micro-contrast by lifting the blur veil.

Especially at higher ISO settings one could use a bit of noise reduction before or after Capture sharpening, but as you have also found, the resulting noise has a nicer quality about it when the sharpening Radius is more in tune with the physical source of the blur itself.

I'm still a bit puzzled by the ACR/LR dialog, which starts with an Amount control instead of a Radius control; the rest of the controls are in a much more logical order, top to bottom.

Quote

I may have been able to achieve this balance previously, but with this new tool it is simply a matter of applying a sharpening amount, as opposed to juggling radius, amount, detail, masking and do-si-do'ing around and around until it looked right. Also, the tendency with capture sharpening is to always use a very small, sub-pixel radius. This makes sense intuitively, but it is often not the optimal amount. Using Bart's approach takes the guesswork out and provides an efficient method for removing this inherent bias from my capture sharpening starting point.

That is indeed one of my goals: removing the subjective part (our eyes are easily fooled), and eliminating at least one important variable from the list of controls. The tool also allows one to get a better understanding of the effect of the Detail slider, by trying a few fixed settings and then dialing in the correct Amount. As long as there are not too many negative effects on noise, I'd increase the Detail slider towards the Deconvolution-biased side.

Quote

2) Once the proper capture sharpening is applied, subsequent image up or downsizing is less plagued by artifact, and final output sharpening is virtually halo-free.

That's right, although downsampling may even benefit from no capture sharpening at all. The downsampled result can use a bit of sharpening, but we won't be able to set the correct parameters until after the actual downsampling. My tool can show whether damage was already done, and that may lead to using a blur before (or a different algorithm for) downsampling.

Upsampling will not only benefit from the absence of artifacts, it also gets a quality boost from the correct post-resample deconvolution. As will become apparent, even Bicubic Smoother will add halos, but now we have a tool to see whether a small pre-blur will take that artifact away, after which a deconvolution will restore the sharpness that was available in the original file data. It won't necessarily create much more resolution than the original had, but it will look less blurred and still natural.
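As an illustration of the pre-blur-then-resample idea, here is a rough sketch. The 0.5 pre-blur sigma and the helper name are hypothetical placeholders; the actual values would have to be determined with the slanted-edge tool on the resampled result:

```python
import numpy as np
from scipy import ndimage

def upsample_with_preblur(img, factor, pre_sigma=0.5):
    """Apply a small Gaussian pre-blur to suppress resampling halos, then
    upsample with a cubic spline (roughly comparable to bicubic).
    A deconvolution pass with the measured post-resample radius would follow."""
    softened = ndimage.gaussian_filter(img, pre_sigma)
    return ndimage.zoom(softened, factor, order=3)

# Example: 2x enlargement of a small test array.
small = np.random.default_rng(0).random((20, 20))
big = upsample_with_preblur(small, 2)
```

The point of the sketch is only the ordering of the steps: soften slightly, resample, then deconvolve with the radius measured after resampling.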

Quote

The combination of more original "grain" permitted to pass into the raw conversion and proper radius gives a much more useable image with fewer "corrections" required to pull out the output-optimized sharp image.

That's it, and it also produces a more unified look between images.

Quote

In reply to Bart's previous comments about multi-pass evaluation and deconvolution - what is particularly cool is that the ESF for a doubly deconvolved image can demonstrate the potential over-sharpening that can occur, showing up as overshoot in the edge spread profile. See the attached plot as an example of the effect of multi-pass deconvolution - I guess we're all shooting for a critically damped capture sharpening!

Absolutely, and yet we have the freedom to tweak to our heart's content. One could e.g. scale the second/third deconvolution filter down a bit, or use another combination of kernels. Lots of possibilities if one can invest some time in it.

Quote

It is pretty clear to me that I am going to learn a lot about how I have been subtly destroying my images at the most crucial point in their life - raw conversion!

Just when I think I know a little bit about something, I learn a little more and realize I have a lot to learn.

That's what I love about this stuff.

The same here, the learning never stops but it helps to have some useful tools to assist in that process.

I just found this thread and while I haven’t played with your tool it looks fantastic! Before I print your target, shoot an aperture series and analyze the images, I have a few questions if you don’t mind.

- You mention the intriguing possibility of using your tool and an ImageJ deconvolution kernel to help with upsampling artifacts. What workflow do you recommend for this? Would you "capture sharpen", then increase the size, and then resharpen? Or only sharpen the final image? How would you build the deconvolution kernels? Only one for the increased size, or two separate ones (before and after resizing)?
- I just downloaded the trial version of DxO Pro and I am pretty impressed with their lens modules, and how DxO lifts the "veil" of some of my images and restores microcontrast. I wonder if they apply similar methods to build their camera/lens modules?
- Maybe you should consider setting up a database, so that people who go through the trouble of shooting an aperture series with their favorite camera/lens can deposit the data (and/or the original slanted edge images).
- I assume the distance at which you shoot the target doesn't influence the PSF of the camera/lens combination (and 25-50x the focal length would be fine and can then be used for all images)?
- Lastly, what happens with out-of-focus blur/bokeh if you apply your tool?

Sorry about these probably naïve questions, but I am a newbie when it comes to sharpening. I just got a D800E, and while I don't expect that it needs a lot of sharpening in general, the upsizing and diffraction-recovery possibilities look very attractive. Maybe I'll even convert to the D800 once I discover that its deconvolved images look as good as the D800E files ;)

I just found this thread and while I haven’t played with your tool it looks fantastic! Before I print your target, shoot an aperture series and analyze the images, I have a few questions if you don’t mind.

Hi Hiroshi,

No problem.

Quote

- You mention the intriguing possibility of using your tool and an ImageJ deconvolution kernel to help with upsampling artifacts. What workflow do you recommend for this? Would you "capture sharpen", then increase the size, and then resharpen? Or only sharpen the final image? How would you build the deconvolution kernels? Only one for the increased size, or two separate ones (before and after resizing)?

Those are indeed the two routes one could take. If we can nail the Capture sharpening exactly, then I would prefer to do that as step one, because it gives a better idea of how far we can go with subsequent Creative sharpening without introducing e.g. clipping. On the other hand, if we have e.g. a high-ISO image and do not want to noise-reduce all the life out of it, we could consider postponing the Capture sharpening and wrapping it together into one operation, if we already know we are going to enlarge the image. One other consideration is which upsampling artifacts we may encounter, and whether correcting them is better done on a sharpened or an unsharpened basis.

In general, because I'm a low-ISO shooter myself (when possible), I would probably go for separate deconvolution sharpening for Capture, and again when preparing for upsampling + output. I will start another thread about the upsampling workflow, where my tool can help analyse issues and solve some of the softness (it won't create new detail, but it will restore losses).

Quote

- I just downloaded the trial version of DxO Pro and I am pretty impressed with their lens modules, and how DxO lifts the "veil" of some of my images and restores microcontrast. I wonder if they apply similar methods to build their camera/lens modules?

They essentially do the same, but with many more factors being considered. They also differentiate across the image, and thus treat e.g. corners with their own specific deblurring. That's why it can take a while before a camera/lens combination is added to the correction modules that are automatically invoked based on EXIF information. They also calibrate for distance, because lenses do not necessarily perform equally well at all distances.

Viewed in that light, it is amazing how much a single sharpening radius can already restore. For lenses with very poor corner performance one can attempt two separate deconvolutions, one based on the center of the image and one based on the corners, and then use a radial blend to combine the results in Photoshop. A Raw converter like Capture One already allows one to compensate for sharpness fall-off.

Quote

- Maybe you should consider setting up a database, so that people who go through the trouble of shooting an aperture series with their favorite camera/lens can deposit the data (and/or the original slanted edge images).

If people want to share their findings, and make a serious effort to follow the guidelines (no sharpening, a linear tone curve, and a decent low-ISO exposure conversion where middle grey stays middle grey and contrast is normal, so black and white are not clipped), then that info can also be useful to others when it is shared.

I wouldn't mind compiling an overview when the data is sent to me (the link is at the bottom of the tool's webpage).

Quote

- I assume the distance at which you shoot the target doesn’t influence the PSF of the camera lens combination (and 25-50x the focal length would be fine and then can be used for all images)

That's correct, the target is 'scale invariant'. In fact that is a major benefit, because it prevents the need for magnification calibration. The only thing not covered is when lenses perform significantly better/worse at certain distances other than these medium-distance settings. Things can be done, though, for extreme situations like macro, scanners, or long telephoto lenses. For scanners I use a slide mount with a razor blade mounted at a slant, and for long distances one can use a larger version of the target (enlarged and deconvolution sharpened ;) ).

Quote

- Lastly, what happens with out-of-focus blur/bokeh if you apply your tool?

It stays OOF, but becomes a bit less blurred. If the target itself is not optimally focused, then removal of that level of defocus will be attempted. All my tool does is determine the major blur component and fit a model that allows removal of that particular blur. Similar but different blur levels will be sub-optimally restored, and a certain amount of blur will remain where the radius of that blur was larger. If there are fore/background zones with better focus (a smaller radius), then they will be restored with too large a radius, and sharpening halos are the likely result. It is therefore important to focus as well as possible, to find the smallest blur radius one could encounter in an image.

Quote

Sorry about these probably naïve questions, but I am a newbie when it comes to sharpening.

No need to be so modest; your questions were excellent and may help others who were wondering but didn't ask.

Quote

I just got a D800E, and while I don't expect that it needs a lot of sharpening in general, the upsizing and diffraction-recovery possibilities look very attractive. Maybe I'll even convert to the D800 once I discover that its deconvolved images look as good as the D800E files ;)

Well, focus is only perfect in a very narrow zone around the focus plane, and there will always be some level of residual lens aberrations and/or diffraction, even on cameras without an AA-filter. Then there is the demosaicing step, which has to make trade-offs between artifacts and sharpness. And then there is resampling, up or down, which adds its own blur. There will always be something to improve, and now we can know how to do it.

This tool is super useful. I have been testing it on some images I shot previously in high contrast (sunlit) conditions with a 5DII+70-200 2.8 with a 1.4x extender. This combination left all of the images soft, but the contrast in the images permits focus to be evaluated fairly well. I nailed focus most of the time.

I was able to find a set of images with a distinct, high-contrast slanted edge - i.e., I did not use the target to assess the blur of the combination, but used "field data" to assess the edge spread function. I was not surprised that the tool output a sigma of 1.99xxxxx. But I would NEVER have used a capture sharpening radius that large; it just does not seem right. I tested this batch of images with Capture One and the difference is huge. I will post a comparison here to demonstrate once I get all of the images and data together. I also used the field-data-based deconvolution kernel, and I will post that for comparison as well. It seems that deconvolution spares the highlights from clipping, whereas USM in C1 appears, to my eye, to cause some highlight clipping upon sharpening (nothing really noticeable in reality, but every bit counts).

Suffice it to say that I would never have eyeballed a 1.9-pixel capture sharpening radius before, but now that I can assess this critical variable quantitatively, it makes so much more sense and permits tweaking in no time.

Re: the f4.5/f16 pine cone comparison. It's not very persuasive. Are you sure the f16 image has only been degraded by diffraction? It looks like it is back-focused. Or maybe the foreground foliage moved during the exposure? It would be good to see a more compelling visual demonstration.

Re: the f4.5/f16 pine cone comparison. It's not very persuasive. Are you sure the f16 image has only been degraded by diffraction? It looks like it is back-focused.

I focused with 10x Live View magnification using a loupe on the camera's LCD. I'd say focus was accurate, and the wider aperture shot proves that.

Quote

Or maybe the foreground foliage has moved during exposure?

Sure, that is always possible, and as I mentioned I tried shooting between the moments of wind moving the branches. That's landscape photography for ya ...

Quote

It would be good to see a more compelling visual demonstration.

I tried to avoid a brick wall and use a subject more in line with the name of this website. But feel free to convince yourself, while I look for a more stable subject (while trying to avoid road-traffic vibrations and/or atmospheric turbulence).

I do have test chart shots I can share, but not many people get excited about that type of subject, because it is too remote from what they usually shoot (it's harder to make the mental connection to the improvements to expect in their specific shooting situations).

I obviously agree ;) I'm glad it's found to be useful to others as well, and it is also clear to me that you really understand the importance of what it teaches us and how it allows us to improve our technical image quality. It removes a lot of subjectivity, and it demonstrates how poor we humans are at finding the optimal settings by eye.

Quote

I was not surprised that the tool output a sigma of 1.99xxxxx. But I would NEVER have used a capture sharpening radius that large; it just does not seem right. I tested this batch of images with Capture One and the difference is huge. I will post a comparison here to demonstrate once I get all of the images and data together.

That would be much appreciated. You have proven what I said before: it's hard to accept, or even find, these better settings by eyeballing the previews of our sharpening tools. Subjectively, and we have been taught this in books on the subject as well, we would expect small radius settings to be best for high-spatial-frequency subject matter. Well, apparently they are not always best, and quality is left on the table if we do not look beyond our preconceptions.

Quote

I also used the field-data-based deconvolution kernel, and I will post that for comparison as well. It seems that deconvolution spares the highlights from clipping, whereas USM in C1 appears, to my eye, to cause some highlight clipping upon sharpening (nothing really noticeable in reality, but every bit counts).

Yes, those are my findings as well. Of course, the closer the deconvolution kernel comes to the actual convolution that took place, the better the restoration will be (and halos were not in the original signal, so they should not be in the reconstructed signal either). Halos can lead to clipping because they overshoot the original signal-level gradients.

Quote

Suffice it to say that I would never have eyeballed a 1.9-pixel capture sharpening radius before, but now that I can assess this critical variable quantitatively, it makes so much more sense and permits tweaking in no time.

Totally cool.

Yes, that's another benefit. There is a learning effect (or maybe even an unlearning of preconceived notions) that allows us to reach better results much faster once we've invested some time. It's a good investment IMHO, because one soon discovers that there are similarities in how different lenses behave.

My best lenses so far all produce radii of around 0.7-0.8 in the center of the image at the optimal aperture (knowing that also makes it easier to spot a 'poor' lens, e.g. a new one or a rental), and there is a deterioration towards the more defocused regions that follows a somewhat parabolic path. There is also a pattern across apertures, so we may interpolate results quite accurately without needing to test each possible setting (although that would be even more accurate). There are returns, but they are diminishing returns for the time invested, so it helps if these patterns prove to be reliable.

The addition of Extenders or Teleconverters, which effectively magnify the optical projection (and blur) of the lens itself and add a bit of their own, shows that the results can be surprising at first but are actually somewhat predictable. The use of my tool will show exactly how that works out; no more guessing, but actual facts instead.

Yes, that illustrates the 2 dimensions (defocus/diffraction) around the optimum nicely.

It would be interesting to add deconvolved versions of the images to the test, but of course we'd need to have an idea of the actual blur radius involved. The radius/radii can be established by shooting a slanted edge after the fact in a similar setup.

I could guess at the radius that does the best job, but as shown before, we can be surprised by the actual radius we need. Also, guessing based on a JPEG is not very reliable, although I do already get significantly improved results from some quick trials (diffraction or defocus losses in micro-detail cannot be restored once they disappear in the 8-bit rendering).

What happens to OOF areas? Test on bubbles: http://www.sendspace.com/file/xfp28l

Here is the result: https://rcpt.yousendit.com/1595297373/8350158a870bbadcb79d842b19582da4 The link will expire on July 15, 2012 05:57 PDT.

Of course I have no idea which settings to use for the best result without doing a slanted-edge test of your setup. The file is already quite sharp, possibly from a camera without an AA-filter, so I guessed that a 0.60 radius would be best to use. As said before, eyeballing the right settings is failure-prone, but this is all I had to go on.

It did sharpen the in-focus bubbles and rims/edges into having more punch, while not visually affecting the defocused areas too much. Of course, basing this on JPEG input is not ideal, so I saved the linked result as a PNG file to avoid adding another round of lossy compression. I've attached a JPEG version in case the link has expired by the time people read this.

Quote

What does USM do?

The original is already reasonably sharp, so the difference will not be huge, but USM does not increase resolution like deconvolution does, it just boosts edge contrast.

...USM does not increase resolution like deconvolution does, it just boosts edge contrast.

In the linear sense, all you can do to combat blur is to amplify (weak) high-frequency components to sharpen an image, until the signal component looks "good" or "close to correct" while the noise component looks "not too bad".

Now, USM and deconvolution are usually not purely linear processes, but your statement above seems strange to me.

In the linear sense, all you can do to combat blur is to amplify (weak) high-frequency components to sharpen an image, until the signal component looks "good" or "close to correct" while the noise component looks "not too bad".

Now, USM and deconvolution are usually not purely linear processes, but your statement above seems strange to me.

-h

Sounds fine to me. Deconvolution shrinks circles of confusion; USM boosts contrast. If you imagine an edge like a sine wave, R-L increases the frequency of the wave, while USM increases the amplitude.
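The amplitude-boosting (and halo-producing) behaviour of USM that the thread keeps returning to is easy to demonstrate numerically. A small sketch on a 1-D blurred step edge; the radius and amount values are arbitrary illustrations, not recommended settings:

```python
import numpy as np
from scipy import ndimage

# A step edge pre-blurred with a sigma-2 Gaussian, as a stand-in for capture blur.
edge = ndimage.gaussian_filter1d(np.repeat([0.0, 1.0], 50), 2.0)

# Unsharp mask: original plus a scaled high-pass (original minus a blurred copy).
amount, radius = 1.0, 2.0
usm = edge + amount * (edge - ndimage.gaussian_filter1d(edge, radius))

# The edge gradient gets steeper, but the profile now overshoots above 1.0
# and undershoots below 0.0 on either side of the edge: the halo.
```

The overshoot/undershoot in `usm` is exactly the kind of artifact that shows up in the edge spread profile plots discussed above.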

Here is the raw if you want to test the settings in the original conversion: http://www.sendspace.com/file/r22fje

Thanks. Although a shot of a slanted edge is required for an objective/accurate assessment of how the lens + aperture + Raw converter performs, the original file does open up some possibilities.

Quote

Of course a good prime doesn't need any capture sharpening. (This is an old Minolta 50 2.8 from ebay - 1970s? ) The A350 CCD color is great. Digressing...

Well, in a way. Apparently the earlier image was not full size (2292 x 3444 pixels versus 3056 x 4592 pixels); I assume it was a test of my eyeballing capabilities ;). Now that the original data is available, I arrive at a larger estimated blur radius of 0.9, still eyeballing here.

Quote

Look for the impact of any sharpening on the specular highlights in the bottom right amber color. Also the beer neck label around the o.

Including the choice of Raw converter in the equation adds another degree of freedom, and 12-bit/channel Raw data will be more sensitive to (quantization) noise amplification than 14-bit data, so noise removal adds even more uncertainty. The amber highlights are clipped in the Raw Red and Green channels, and there are some artifacts in the Raw data around the 'o' (perhaps re-mapped (Red?) sensels).

Deconvolution can be a linear filter. Sharpening can be a linear filter. A filter alters sample values by a weighted sum of samples in its neighborhood. This behaviour can be described as a frequency-domain filter: some frequencies are boosted.

When you flatten the end-to-end frequency response, the PSF will usually tend to shrink (let's avoid minimum- vs maximum-phase complexity here).

Quote

A USM boosts contrast.

Claiming that "USM boosts contrast" suggests that it is a global operator working on pixels in isolation, like curves/levels, which it is not.

Quote

If you imagine an edge like a sine wave R-L increases the frequency of the wave. USM increases the amplitude.

I have no idea what you are trying to say here. Are you saying that R-L does frequency modulation? It does not.

http://en.wikipedia.org/wiki/Wiener_deconvolution

(http://upload.wikimedia.org/wikipedia/en/math/6/f/5/6f5b10dacd946e9c1b218ada30d0f6c3.png)

In other words: Wiener deconvolution tries to find an inverse filter, G, that enhances a signal corrupted by H, by boosting frequencies that were attenuated. However, frequency bands with poor SNR are not boosted.

http://en.wikipedia.org/wiki/Unsharp_masking

"From a signal-processing standpoint, an unsharp mask is generally a linear or nonlinear filter that amplifies high-frequency components."

The difference is, in my opinion, not as fundamental as claimed in this thread. Both methods try to boost signal components that are assumed to be attenuated, while avoiding excessive noise amplification. The difference is in complexity: USM is a low-order filter with a small kernel. Deconvolution can be of as high an order as you like, and all of those parameters have to be known beforehand, or estimated manually or automatically.

http://www.astro.uvic.ca/~pope/PHYS515-lectures_ed_new2.pdf

Quote

• Image enhancement (convolution): apply heuristic procedures to manipulate an image to take advantage of the psychophysical aspects of the human visual system, e.g., edge enhancement, brightness/contrast by convolving the image with a high-pass filter, etc.
• Image restoration (deconvolution): attempt to recover an image that has been degraded, using knowledge of the degradation phenomenon; model the degradation and apply the inverse process.

The difference is, in my opinion, not as fundamental as claimed in this thread. Both methods try to boost signal components that are assumed to be attenuated, while avoiding excessive noise amplification. The difference is in complexity: USM is a low-order filter with a small kernel. Deconvolution can be of as high an order as you like, and all of those parameters have to be known beforehand, or estimated manually or automatically.

Hi h,

In an attempt to keep a bit of structure in this thread, may I suggest that a discussion about the relative merits of Deconvolution versus Unsharp Masking may be more productive in another thread (http://www.luminous-landscape.com/forum/index.php?topic=45038.msg378541#msg378541), where it was already demonstrated that deconvolution is more effective at restoring resolution than USM, because deconvolution tends to shrink the spatially blurred features, whereas USM tends to boost the amplitude of the edge gradient while producing halo artifacts (an almost inevitable by-product of adding an inverted blurred copy, unless edge masks are used).

This thread is more about a tool that helps find the optimal parameters for our various sharpening workflows, the Radius control setting in particular.

This tool is the result of some of the questions in that other thread, where it was questioned whether a Gaussian PSF is a good assumption, given the shape differences between a PSF dominated by defocus, optical artifacts, and/or diffraction. As I stated there, the mix of the different types of blur tends to resemble a Gaussian blur, and my tool confirms that a Gaussian distribution does a pretty good job of characterizing the blur we find in our actual images. Call it empirical proof of that statement.

Sorry if this is the wrong thread. I wanted to tell you about this blog post: http://mtfmapper.blogspot.it/2012/06/nikon-d40-and-d7000-aa-filter-mtf.html

It seems that he is able to make his models and measurements of the D40 and D7000 sensel/OLPF/lens fit quite well. So does Nikon use 0.375 pixel pitch OLPFs?

Hi h,

It's an interesting model (which, BTW, doesn't account for residual lens aberrations, defocus, or a non-square sensel aperture), which may fit a particular situation. I'm not convinced it can be applied universally. It also doesn't account for the result after demosaicing, which is the basis for our Capture sharpening effort. However, as my tool shows for the cameras I've tested, and as others have independently found for their cameras, in actual empirical tests the simple Gaussian model still describes the actual blur of an edge profile (Edge Spread Function, or ESF) very accurately:

(http://bvdwolf.home.xs4all.nl/temp/LuLa/8104_GreenProfile.png)

The very slight mismatch at the dark end of the curve is caused by lens glare, not blur, and should be fixed with tone curve adjustments, not Capture sharpening. So the blur pattern from the entire imaging and Raw conversion chain can apparently be very well modeled by a simple Gaussian.

And there seems to be a theoretical explanation for that resemblance to a Gaussian shaped blur pattern: the input (a cascade of blur sources) apparently comes close to satisfying the requirements of the Central Limit Theorem (http://en.wikipedia.org/wiki/Central_limit_theorem#Central_limit_theorems_for_dependent_processes). It (loosely formulated) states that the sum of a number of independent distributions will resemble a normal distribution (which can be described by a Gaussian). The DSP Guide (http://www.dspguide.com/ch7/2.htm), a free on-line book about Digital Signal Processing, also has a nice example at the bottom of that page link. It shows how rapidly a cascade of (even far from Gaussian) distributions converges to a Gaussian shape.
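For those who like to see this numerically, here is a quick sketch (my own Python/NumPy illustration, not part of the tool): convolving a boxcar distribution, which is nothing like a Gaussian, with itself only a few times already lands within a few thousandths of the matched Gaussian.

```python
import numpy as np

# Cascade a uniform (boxcar) distribution with itself a few times and
# compare against the Gaussian with the same mean and variance.
box = np.ones(9) / 9.0            # far from Gaussian to start with
dist = box.copy()
for _ in range(3):                # four box blurs in total
    dist = np.convolve(dist, box)

x = np.arange(len(dist))
mean = (x * dist).sum()
var = ((x - mean) ** 2 * dist).sum()
gauss = np.exp(-((x - mean) ** 2) / (2 * var))
gauss /= gauss.sum()

# Maximum pointwise deviation from the matched Gaussian:
max_err = np.abs(dist - gauss).max()
```

With just four convolutions the residual is already on the order of 0.003 against a peak of about 0.08, which is the convergence the DSP Guide example illustrates.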

Besides the interesting theoretical model of the PSF shape of the OLPF+sensel, and the unknown Raw conversion exploitation of that input (which at least normalizes the MTF of the R/G/B channels despite the differences in sampling density), we also have to consider that the only practical tool most people have in their workflow is the Sharpening dialog panel of their Raw converter or image editor, which essentially only offers radius and amount as controls for the PSF shape to use. IMO it is therefore useful to determine as close a match to such a PSF shape as possible, and the formerly unknown blur radius is what we can now determine for that specific purpose.

My tool turns out to be so sensitive that it can detect the difference between the left and right side of a horizontal edge if the target was not shot perpendicularly enough to the optical axis. It also detects differences within the DOF zone, and shows that there is only one plane of best focus. Luckily we need not, and even cannot, specify the radius to that degree of precision in the common sharpening interfaces. It can still help with more elaborate deconvolution sharpening algorithms, which allow specifying the PSF kernel and also allow applying less sharpening to noise than to the actual signal, thus boosting the S/N ratio even further.

Just FYI, I came across this SpyderCheckr blog entry on what camera profiling can do to the appearance of sharpness, which I thought appropriate for this topic. It backs up what I've always thought about perceived sharpness in a digital capture.

As quoted:

Quote

It's often surprising to see that color correction does not just improve the colors; it improves the color detail, which results in a more detailed image; something we tend to associate with focus and lens quality, when it can actually be an artifact of incorrect color.

Note the image samples that back up the premise by scrolling down through the article.

This explains why I often don't sharpen at all on certain captures AFTER tone mapping, adding clarity, and adjusting mainly the Saturation and Luminance sliders in HSL in order to get a realistic tonality in my DSLR images edited in ACR.

Did anyone in this thread factor that into the mathematical calculations presented here?

Hi,

Tonal separation, especially with saturated colors, is a bit of a different subject, although the result will be further enhanced by proper Capture sharpening. Calibrating the color rendition is the tool for that enhancement.

Capture sharpening enhances surface structure (and fine edge and line detail) regardless of the color. All non-smooth surfaces are somewhat bumpy, sometimes gritty, sometimes smooth. Those bumps (if resolvable by the optics) will cause local specular reflections and shadows depending on the angle between the light source, the surface, and the viewer. When the image is not Capture sharpened, those specular reflections, and other microdetail, will have lower local contrast and thus cause a somewhat dull looking surface. After Capture sharpening with the proper radius setting that greyish mist will be lifted and the subject comes to life. It will also help saturation of the areas next to the specular highlights a bit by locally darkening those areas while lightening the highlights.

That's the point I was making with that link: the part of camera profiling involving HSL adjustment presets has a lot to do with luminance as well as color refinement.

The reduced saturation (the part of color tied to luminance) seen in those examples affects all aspects of edge definition and clarity, which is a large part of sharpness perception.

We don't judge an image's sharpness by homing in on small sections of surface texture to make sure it's sharp. We perceive sharpness in a scene by its overall appearance.

For example I don't want to see a thick halo on the outer edge of an orange but I do want to see its dimpled texture. You can only go so far in the sharpening stage before the two are not in agreement in this respect. Draw back the HSL luminance slider on the orange channel and note an increase in definition to dimpled surface texture and now the orange looks sharper.

This is all about overriding the camera's entire capture system and actually injecting our calibration according to the human visual system.

All this math and toe/shoulder curve graphs don't connect the dots in distinguishing between the two.

The 5D Mark III profiling link sample images could be made to look even sharper with continued HSL adjustments and curves without even touching a Sharpening slider. I've done it to my own images.

What you are doing is manipulating contrast between hues. Also, once a color is oversaturated, there will be very little contrast in that color.

Keep in mind, the information is there. You cannot extract information that is not there in postprocessing.

In my view, a methodical approach makes a lot of sense.

1) You first correct for problems caused by the imaging pipeline. Try to reconstruct the original image.

2) You apply creative modifications

3) You optimize the image for presentation

Color calibration belongs to the first step. Unfortunately, color calibration is a bit tricky. Sensitometrically correct colors tend to be dull. If you use Lightroom or ACR you could just check the different profiles under "camera calibration".

Erik, I'm not making the connection between your list of processing instructions and what I just pointed out: the value of using math and graphs to improve sharpening, given the variables involved in reconstructing the image according to human perception, which goes beyond the camera's capture system.

It's equivalent to expecting consistent results with every image using a ruler to measure a lump of clay that's never quite consistent image to image because of all the variables in front of and behind the camera.

After all that's been written on this subject, I still have no idea what a sharpened edge is supposed to look like after capture sharpening viewed at 100%, which can't be seen anyway no matter the output. Any significant gain from sharpening viewed at 100% will never be seen or appreciated on any output device, be it a display or print, because we don't view images at that size except for editing.

The subtle improvements to sharpening shown in the samples here and in other tutorials I can never see applying to my images when viewing the output on the web or in print. Since I don't know where the sharpening starting point lies for further sharpening to these two output mediums, I just sharpen once to get it to look good when viewing the downsampled image destined for the web. If it looks bad, and it usually does, I go back and change the sharpening until it looks good. For print I just do a test print of a small section of the image.

Ok. I hope I do not come across as being defensive or aggressive. Here goes ....

I agree that my model (OLPF + diffraction + sensel aperture) is of limited real-world use. In particular, I specifically tried to avoid the whole demosaicing issue by working on only a single channel at a time, mostly because my blog is a record of my learning process, and it helps to separate the issues when you are still learning. I do agree with your main observation, i.e., that a practical approach to optimal sharpening should work on the demosaiced image (or perhaps that noise removal, demosaicing and capture sharpening should be addressed simultaneously).

I would like to point out, though, that comparing the empirical ESF to the theoretical (Gaussian) ESF by overlaying the plots is a little misleading. During my experiments on generating synthetic images with known PSFs I discovered that a more reliable method is to plot the difference between the ESFs. If your theoretical ESF is a good match for the empirical ESF, then their difference should look like white noise. If you can see any structure in the difference curve, then you still have some systematic error in your theoretical ESF.

In particular, from your posted plot, I can see a systematic error in the "knee" of the curve on both the light side, and the dark side (after compensating for glare it should still be there, if I had to guess). This may seem trivial in the ESF curve, but it can potentially have a large impact on the MTF. For that reason, I prefer to compare not only the ESFs, but also the MTFs.

The main point, though is whether such a difference will have any practical significance. Personally, I think that RL-deconvolution with a Gaussian is good enough for government work (as Numerical Recipes would say). Any potential gains in more accurate modelling of the PSF is going to be overshadowed by the practical difficulties with RL deconvolution (e.g., selecting the appropriate damping parameters). I also like the fact that the Gaussian is separable, and that I can use any number of libraries to efficiently perform the forward-blur step in the RL algorithm.

So my gut feeling is that the Gaussian PSF approximation is reasonable, but I still plan on working through the process more rigorously to obtain some quantitative data on the relative magnitude of the various errors we may introduce along the way.

Erik, I'm not making the connection with your list of processing instructions as having anything to do with what I just pointed out concerning the importance of using math and graphs to come up with improvements to sharpening considering the variables involved with reconstructing the image according to human perception that goes beyond the camera's capture system.

Hi,

Erik's list is exactly what this thread is about, namely using the right tools in the right order, and in particular Capture sharpening. That doesn't mean that a correct camera profile or exposure setting doesn't fit in the total workflow, but they aren't really the topic of this thread. They are used to avoid pushing certain tones/colors towards or past clipping and to improve tonal separation. Very useful, but it isn't Capture sharpening.

This thread deals with that one aspect, Capture sharpening (assuming one embraces the concept of separating Capture/Creative/Output sharpening), and it (hopefully) shows that it is difficult to nail (or even guess, e.g. when using an extender) the correct settings by eye. What it also shows, for those who are willing to understand the principles, is that the traditional view on the use of the Detail panel or (Smart/USM) Sharpening filters misses one critical aspect that leads towards optimal quality: Capture sharpening is a hardware-oriented correction, not a subject-oriented one.

When we attempt to kill two birds with one stone, doing Capture sharpening and Creative sharpening with one control, we will have to accept a compromise as to how much artifacting we are willing to accept. However, we don't have to! We can do localized Creative sharpening with an adjustment brush, and that will be based on fundamentally better data (even including tonal separation).

Quote

It's equivalent to expecting consistent results with every image using a ruler to measure a lump of clay that's never quite consistent image to image because of all the variables in front and behind the camera.

I respectfully think you are mistaken. With some time invested in figuring out this stuff, I can tell you that Capture sharpening is very predictable, because it is caused by our hardware, and the main parameter that drives our best possible sharpness is the aperture setting (besides focusing, obviously). The resulting Capture blur is directly related to the aperture value, for a given camera (due to OLPF/sensel pitch) and lens combination. And quality lenses more or less produce the same amount of blur. That means that the correction values can be simply tabulated and used based on EXIF aperture value (something the raw converter could do automatically, based on prior calibration).

Quote

After all that's been written on this subject, I still have no idea what a sharpened edge is suppose to look like after capture sharpening viewed at 100% which can't be seen anyway no matter the output. Any significant gain to sharpening viewed at 100% will never be seen or appreciated on any output device be it a display or print because we don't view images at that size except for editing.

You are actually making my point, it is hard to judge by eye what the optimal settings are ... That's why I made this tool, to get a handle on the matter, inject some objectivity. Is the tool perfect? No, far from it, but it is only the start of what's to come.

Quote

The subtle improvements to sharpening shown in the samples here and in other tutorials I can never see applying to my images and viewing on output to the web or print.

I'm not sure if you embrace the concept of Capture sharpening as a separate step in the sharpening workflow, but Capture sharpening is intended only to correct or compensate for the losses linked to the capture process. It is not intended to be viewed as a final product for viewing on output to the web or print. It's more like casting a solid foundation on which to build the final construction.

Quote

Since I don't know where the sharpening starting point lies for further sharpening to these two output mediums, I just sharpen once to get it to look good viewing the downsampled image destined for the web. If it looks bad and it usually does, go back and change the sharpening until it looks good. For print I just do a test print of a small section of the image.

If you are happy with that workflow, then by all means stick to that. Many people do. I'm not forcing anybody to improve their quality, it's all voluntary. All I'm offering is insight in the fundamental process, and a tool to assist. As the saying goes; You can lead a horse to water, but you can't make it drink.

Ok. I hope I do not come across as being defensive or aggressive. Here goes ....

Hi Frans,

Not at all. I welcome your view because I know you have studied (an ongoing process) the underlying principles in depth. I like your blog posts, and think it is useful to make models that can help us understand the fundamentals, or see where the model doesn't agree with empirical evidence. In the scientific approach it is important to build and then test a hypothesis (and a rejected hypothesis is still a positive result).

Quote

I agree that my model (OLPF + diffraction + sensel aperture) is of limited real-world use. In particular, I specifically tried to avoid the whole demosaicing issue by working on only a single channel at a time, mostly because my blog is a record of my learning process, and it helps to separate the issues when you are still learning. I do agree with your main observation, i.e., that a practical approach to optimal sharpening should work on the demosaiced image (or perhaps that noise removal, demosaicing and capture sharpening should be addressed simultaneously).

Well, there is a lot going on during the Demosaicing process that we cannot control (unless we create our own converter). There are some things that could be done at the Raw level, e.g. Lateral Chromatic Aberration correction which could produce more accurate demosaicing, but that is not trivial to do. So what we can do is deal with the bare conversion as best as we can.

Quote

I would like to point out, though, that comparing the empirical ESF to the theoretical (Gaussian) ESF by overlaying the plots is a little misleading. During my experiments on generating synthetic images with known PSFs I discovered that a more reliable method is to plot the difference between the ESFs. If your theoretical ESF is a good match for the empirical ESF, then their difference should look like white noise. If you can see any structure in the difference curve, then you still have some systematic error in your theoretical ESF.

I fully agree. However, there will almost certainly be some systematic residual signal in the difference curve. For example, our optics will exhibit a certain amount of glare, which affects the darkest signals most because it's a locally fixed amount of signal that's added, as shown in the chart I posted. Yet the benefit of using a rather simple model (which happens to characterize the real issue quite well) is that those deviations become clear. If I had used an adaptive, e.g. polynomial, curve fit, then the difference curve would be closer to white noise. But that would not be an accurate model for both highlights and shadows.

Quote

In particular, from your posted plot, I can see a systematic error in the "knee" of the curve on both the light side, and the dark side (after compensating for glare it should still be there, if I had to guess). This may seem trivial in the ESF curve, but it can potentially have a large impact on the MTF. For that reason, I prefer to compare not only the ESFs, but also the MTFs.

I do as well, but it's not as intuitive for many photographers who are less seasoned in reading such very helpful charts. The average photographer relates much more easily to edge sharpness in the spatial domain. Another thing is that the tool I made available only attempts to fit a single Gaussian sigma (which characterizes the majority of the blur), because we are only offered a single radius setting in our sharpening tools. Earlier research showed that a combination of Gaussians produces an even better fit, but to use that, one needs the ability to apply one's own deconvolution kernel, and not many software packages facilitate that in a simple interface.

Quote

The main point, though is whether such a difference will have any practical significance. Personally, I think that RL-deconvolution with a Gaussian is good enough for government work (as Numerical Recipes would say). Any potential gains in more accurate modelling of the PSF is going to be overshadowed by the practical difficulties with RL deconvolution (e.g., selecting the appropriate damping parameters). I also like the fact that the Gaussian is separable, and that I can use any number of libraries to efficiently perform the forward-blur step in the RL algorithm.

Indeed. While not perfect, Richardson-Lucy deconvolution still offers a very useful improvement, also for normal (terrestrial) imaging, even with a single Gaussian PSF as input (as long as it's a good Gaussian PSF).
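For readers curious about the mechanics, here is a bare-bones sketch of Richardson-Lucy with a Gaussian PSF (my own Python/SciPy illustration; real implementations add damping, regularization, and careful edge handling, so treat this as a toy):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def richardson_lucy(observed, sigma, iterations=30, eps=1e-12):
    # Multiplicative RL update; the Gaussian is symmetric, so the PSF
    # and its mirrored copy are the same blur operation.
    estimate = observed.astype(float).copy()
    for _ in range(iterations):
        blurred = gaussian_filter(estimate, sigma)
        ratio = observed / np.maximum(blurred, eps)
        estimate *= gaussian_filter(ratio, sigma)
    return estimate

# Blur a step edge with sigma 0.7, then try to restore it:
edge = np.repeat([0.1, 0.9], 16).astype(float)
blurred = gaussian_filter(edge, 0.7)
restored = richardson_lucy(blurred, 0.7)
```

The restored edge ends up measurably closer to the original step than the blurred input, which is the "shrinking the blur" behaviour discussed earlier in the thread, as opposed to USM's gradient boosting.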

Quote

So my gut feeling is that the Gaussian PSF approximation is reasonable, but I still plan on working through the process more rigorously to obtain some quantitative data on the relative magnitude of the various errors we may introduce along the way.

Looking forward to your findings. The more the merrier, as they say. I offer one consideration for your analysis, and that is to look at the differences between a regular Gaussian PSF, and one based not on point sampling of the Gaussian but on (sensel aperture) area sampling. Especially with the apparently small sigmas we encounter with good optics (0.7 or 0.8 is common at the optimal aperture), there will be a noticeable difference in the resulting PSF kernels and the resulting restoration. Both of the following kernels (crude integer versions, with limited amplitude and support dimensions) have the same Gaussian as basis:

Sigma=0.7, fill-factor=point-sample

0 2 4 2 0
2 33 92 33 2
4 92 255 92 4
2 33 92 33 2
0 2 4 2 0

Sigma=0.7, fill-factor=100% area-sample

0 3 8 3 0
3 45 108 45 3
8 108 255 108 8
3 45 108 45 3
0 3 8 3 0

Just something to consider, since we both know that a more accurate PSF will lead to a more accurate restoration.
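For anyone who wants to reproduce such taps themselves, here is a sketch (Python; the function name is mine) that computes 1-D point-sampled versus area-sampled Gaussian weights, the latter by integrating the Gaussian over each pixel's full aperture via the Gaussian CDF. Scaled to a peak of 255, these match the center rows of the two integer kernels above.

```python
import numpy as np
from math import erf, exp, sqrt

def gaussian_taps(sigma=0.7, radius=2):
    xs = np.arange(-radius, radius + 1)
    # Point-sampled taps: evaluate the Gaussian at the pixel centers.
    point = np.array([exp(-x * x / (2.0 * sigma ** 2)) for x in xs])
    # Area-sampled taps: integrate the Gaussian over each pixel's
    # [x-0.5, x+0.5] aperture (100% fill-factor) using the CDF.
    cdf = lambda x: 0.5 * (1.0 + erf(x / (sigma * sqrt(2.0))))
    area = np.array([cdf(x + 0.5) - cdf(x - 0.5) for x in xs])
    return point / point.max(), area / area.max()

point, area = gaussian_taps(0.7)
print([round(255 * v) for v in point])  # → [4, 92, 255, 92, 4]
print([round(255 * v) for v in area])   # → [8, 108, 255, 108, 8]
```

The heavier wings of the area-sampled version (108 vs 92, 8 vs 4) are exactly the difference being pointed out: at sigmas this small, the sensel aperture is no longer negligible relative to the Gaussian.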

Thank you very much for making this tool available, and for your efforts explaining it. If I were to use it not to determine the quality of my whole pipeline including the printer, but just to calculate the radius for the deconvolution in Raw Developer, would it be necessary to print the target, or would it be sufficient to shoot it displayed on screen?

Hi Hening,

Most displays are pretty low resolution (approx. 100 PPI), compared to the highest quality of inkjet printers (600-720 PPI). That means that to challenge the optical system to the same degree, you would need to shoot it from a 6x longer distance (6 x 25 = 150x focal length). That's not very practical.

The printer is only used to produce a very high resolution target, to challenge the optics and make sure that the target is not the weakest link in the test setup. It's not the quality of the printer that is being tested, although if it produces low resolution targets, that may influence the lens/camera results at the nearer end of the recommended shooting distance range of 25 x focal length. My target does have a few tools incorporated in the design to judge the print quality, so it can alternatively also be used for that purpose.

Thank you for your very fast reply. My longest lens is 85 mm; 150x the focal length is 12.75 m, and I can almost muster that for a single shot. That is more practical for me than having to send the file to a print service (I should have added that I do not print myself). Is the plane-parallelism between the camera sensor and the display screen very critical?

Hi Hening,

There is no need to exaggerate it. While the analysis tool will pick up small differences along the edges, we're talking about decimal fractions that you probably will not be able to enter in the Raw converter anyway. Defocus will have a larger impact, so try to nail that as well as practically possible.

Also remember that your Raw converter's contrast/curve settings should be the same that you normally use, and for the analysis there should be no sharpening applied to the output file that's going to be analysed.

Hi again Bart. Now I have shot the target at distances between 11 and 3.7 meters. Should one try to remove the CA in the resulting files before proceeding, by feeding them to ACR, which now does this automatically? Edit: Oops - I discovered that I had activated silent shooting, but not mirror lock-up, because on the Canon 5D2 the latter cannot be combined with burst mode, and that is what I use in the field for exposure triplets. Should I repeat the test shots with mirror lock-up, or is it better that they reflect real-life conditions? Kind regards - Hening.

Hi Hening,

Yes, CA removal should be done first, because that involves resampling, which will affect sharpness and thus require more Capture sharpening. So you can let ACR do its stuff, which may include distortion correction if you normally use that.

Whether mirror lock-up is required depends on how sturdy your tripod is and at which shutter speed the shots were taken. Mirror-induced vibration is usually most visible in the 1/15th to 1/60th sec region. When you concentrate only on the vertical edge, mirror vibration effects will have less noticeable impact. When you can see that the central blur spot in the recorded image of my 'star' target is not circular but elliptical (stretched vertically), then I'd reshoot.

It's a good thing that on the Northern hemisphere we have plenty of dark and cold evenings ahead ...

I am trying to read the pixel coordinates for the target and am in doubt exactly which pixels to choose. The devil is in the detail... Even after CA removal in ACR, the 2000% view shows visible CA. The screen shot shows the upper left corner of the slanted square (after 90° cw rotation) displayed in ImageJ. (The TIF has my camera profile embedded - not sure if ImageJ reads this.) So which two pixels would you choose to determine the angle?

Hi Hening,

Good question, because the angle determination in the first step of the analysis will have an impact on the accuracy of the result. As your example shows, it is not always obvious how to pick the endpoints on the edge segment that's going to be used.

First of all, I would choose a longer segment to base the measurement on (a longer base distance gives higher precision), but your crop is sufficient to demonstrate the principle. In the first attachment I show which end-points I would choose. I used ImageJ's Straight line tool (5th icon on the toolbar) to mark the pixel centers of coordinates [63,19] and [238,37] as endpoints. The reason I chose them is that they are both 'relatively' neutral grey and have a Green value that's quite similar (104 vs 102), which suggests that they are at the same position on the ramp of the edge profile. This is further confirmed by the pixels in between, on which the line is also almost centered ([119,25] and [179,31]).

Your question therefore allows me to share a little secret, which allows you to confirm whether the right pixels were chosen. When, after drawing the line, you select the menu option Analyze|Plot Profile, you should get a graphic plot of the pixel values along that line that produces an almost horizontally trending signal, like in attachment 2. A longer line will show it even more clearly; the signal fluctuates around the average with repeating similar highs and lows. That works best if you select end-points that are roughly mid-grey. When you activate the Live button in the Plot window, the plot will update as you drag the endpoints. As a bonus, when the trend looks a bit convex (higher in the middle, lower at both endpoints) or concave (drooping in the middle), that's a signal of barrel or pincushion distortion (even more obvious when the slanted edge is nearer one of the image's edges).

When you fill in the coordinate pairs, the Slanted edge evaluation tool will calculate an angle of 5.873 degrees, which is very close to the designed angle of the target, 5.71 degrees, and suggests that the target was shot pretty close to perfectly level. This is assuming that a longer edge segment produces a similar angle readout. The exact angle is not very important, as long as it is accurately determined. An angle of approx. 5.7 degrees allows super-sampling of the edge transition at 1/10th-of-a-pixel accuracy, which is good enough to get an accurate discrete edge profile of even the sharpest lenses, even from an 8-bit/channel image.
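As a quick sanity check, that angle can be computed directly from the two marked endpoints (a trivial Python sketch, not part of the tool itself):

```python
import math

# Endpoints marked on the edge in ImageJ (pixel coordinates [x, y]):
x1, y1 = 63, 19
x2, y2 = 238, 37

angle = math.degrees(math.atan2(y2 - y1, x2 - x1))
print(round(angle, 3))  # → 5.873
```

A slope of roughly 1 in 10 means that each successive pixel row crosses the edge at about a 1/10th-pixel phase offset, which is what makes the super-sampled edge profile possible.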

This angle is then used in the calculations on the pixel values of a horizontal row across the almost horizontal edge, which will be filled in under section 2 of the tool. So if you want to re-analyse the edge at a later date, or with a different Raw conversion, I suggest you save these coordinates in a text file or a spreadsheet. That will save you from having to determine them again. The edge will be constant for all images shot with the same setup on a sturdy tripod, assuming the camera didn't move or rotate between shots. The only thing that can change is the image contrast, and thus edge sharpness, due to Raw converter settings; the angle is now fixed for the shots in this session.
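To illustrate why such an angle permits 1/10th-pixel super-sampling: a horizontal row crossing a near-horizontal edge advances across the edge by only sin(angle) pixels per horizontal pixel step. A small sketch of that geometry (my own illustration of the principle, not the tool's implementation):

```python
import math

def esf_positions(n_pixels, angle_deg):
    """Perpendicular-distance coordinate of each pixel in a horizontal
    row crossing a near-horizontal slanted edge. One 1-pixel horizontal
    step advances only sin(angle) pixels across the edge, so an edge
    near 5.7 degrees is sampled at roughly 1/10-pixel resolution."""
    step = math.sin(math.radians(angle_deg))
    return [i * step for i in range(n_pixels)]

x = esf_positions(120, 5.71)
print(round(x[1] - x[0], 4))  # 0.0995 px across the edge per pixel step
```

So a 120-pixel-wide row spans only about 12 pixels measured perpendicular to the edge, but samples that span at ten times the pixel density.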

Thank you for your reply. - Now I managed... My angle is 5.553 degrees. [Of course I had planned to use a wider base, but wanted to save Lula's disk space with the screen shot :-)] I also had the bright idea to choose an image that was shot at f/8 rather than the previous f/2.8 :-), so as to bring down the CA. This had some effect, but there is still CA (this is a Contax Sonnar 2.8/85, after all... What marvel of a lens did you use for your demonstration, which shows a perfectly neutral grey step wedge?)

When I looked at the first list of values, I saw that at the light end there were 2 identical values, 65535, obviously the monitor white. After I excluded one of them, the calculated sigma went up from 0.67... to 0.70...

So this leads to question #2: How many, and which, pixels to include in the row of pixels across the edge? Because the dark end is far less well defined.

In my next attempt, I tried to limit the range at the dark end visually. The plot seems to extend more than necessary to the left. At the same time, the downward spikes go further down than the "black" level.

So I identified the lowest value in the list and deleted values representing pixels to the left of that.

Now the sigma is calculated to 0.50...

I see that your plot includes about 15 pixels on either side of dark and light. But this will lead to a very different sigma.

Hi Hening,

Most of my samples were taken with an EF 100mm f/2.8L Macro lens, and the Raw conversions were white balanced and corrected for chromatic aberrations with Capture One (version 6 at that time, so I'll have to repeat the test with version 7 and see if much changed). Your Raw converter also added a bit of a zipper effect to the sharp edge, but that doesn't really bother the analysis tool, because it will home in on the average brightness anyway (that's one of the benefits of oversampling).

Quote

When I looked at the first list of values, I saw that at the light end there were 2 identical values, 65535, obviously the monitor white. After I excluded one of them, the calculated sigma went up from 0.67... to 0.70...

So this leads to question #2: How many, and which, pixels to include in the row of pixels across the edge? Because the dark end is far less well defined.

Okay, first of all, the bright end of the scale is too bright at 65535; you should reduce the exposure a bit. When you shoot a reflected white, e.g. a ColorChecker or, as in this case, a resolution chart, the paper white should usually read something like 235-240 in an 8-bit range if exposed to the right, therefore around 60000-62000 at most in a 16-bit range. I usually include enough of the darkest and brightest pixels on either side of the edge that they run pretty horizontal except for some noise fluctuation. Depending on the sensel pitch, that could require some 120 pixels, or even 160 for significant amounts of diffraction. Try to avoid clipped pixels (saturated black or white), because that means the real signal cannot be reconstructed anymore.

That also means that the whole edge transition will be characterized, right from where the uniform dark starts to pick up some blur from the bright end, to where the bright end no longer shows any influence from the blurred dark side of the edge. A tighter selection will produce similar results as long as the selection is symmetrical, but it is best to avoid that potential influence by allowing some of the uniform, unblurred ends of the edge into the selected range.
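For reference, the 8-bit and 16-bit exposure targets quoted above are related by a straight scale factor: 65535/255 = 257, so an 8-bit level maps exactly onto 257 times that value in the full 16-bit range. A quick check (the helper name is my own):

```python
def to16(v8):
    """Scale an 8-bit level to the equivalent full-range 16-bit level;
    65535 / 255 = 257, so the mapping is exact integer scaling."""
    return v8 * 257

print(to16(235), to16(240))  # 60395 61680, i.e. roughly 60000-62000
```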

Quote

In my next attempt, I tried to limit the range at the dark end visually. The plot seems to extend more than necessary to the left. At the same time, the downward spikes go further down than the "black" level.

So I identified the lowest value in the list and deleted values representing pixels to the left of that.

Now the sigma is calculated to 0.50...

I see that your plot includes about 15 pixels on either side of dark and light. But this will lead to a very different sigma.

So which is the right procedure?

What I usually do is the following. I zoom in on the edge with the magnifying glass tool, and then switch to the rectangular selection tool at the left of the toolbar. I draw a quick horizontal selection across the edge with the mouse pointer, and then use the keyboard (the Alt key plus the arrow keys) to resize the selection to 1 pixel high and 120 wide (you can see the dimensions in the ImageJ status bar). I then select the menu choice Analyze|Plot Profile and click the 'Live' button. When you then reactivate the image window by clicking on its edge (to avoid undoing the selection), you can move the selection rectangle around and position it on the edge with the arrow keys, while the Profile Plot updates dynamically, until the center of the edge profile is about centered in the plot window.

You can then copy the values from the Profile Plot window and paste them where you need them.
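Outside ImageJ, the same 1-pixel-high, 120-pixel-wide profile extraction can be mimicked in a few lines. A toy sketch (pure Python; the ramp values are hypothetical stand-ins for real chart pixels):

```python
def row_profile(image, y, x_start, width=120):
    """Mimic ImageJ's 1-pixel-high rectangular selection plus
    Analyze|Plot Profile: take `width` pixel values from row `y`,
    starting at column `x_start`. `image` is a list of pixel rows."""
    return image[y][x_start:x_start + width]

# Toy image: every row ramps from dark (10) to light (235), standing in
# for the blurred edge transition seen in the Profile Plot
ramp = [10 + (235 - 10) * c / 199 for c in range(200)]
img = [ramp[:] for _ in range(50)]
profile = row_profile(img, y=25, x_start=40)
print(len(profile))  # 120
```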

Okay, first of all, the bright end of the scale is too bright at 65535; you should reduce the exposure a bit. When you shoot a reflected white, e.g. a ColorChecker or, as in this case, a resolution chart, the paper white should usually read something like 235-240 in an 8-bit range if exposed to the right, therefore around 60000-62000 at most in a 16-bit range.

Is there a way to achieve this other than trial and error? What I did was point my Pentax spot meter to an area on the test image that I believed was defined as middle gray and exposed after that. I would have to re-shoot all images - or could it be done in the raw converter?

Hi Hening,

I assume the exposure meter picked up some of the darker areas as well, and decided to increase the exposure level a bit too much.

There is some chance that the Raw file holds more highlight detail than it showed. If it does, then you can reduce the exposure level in the Raw converter, until the conversion is no longer clipped to white. Just give it a try, and maybe a different Raw converter can restore a bit more highlight detail and reduce the zipper artifacts as well. Lightroom is e.g. pretty good at highlight recovery.

Ooops - It is only in ImageJ that the white patch of the gray scale on your target reads 65535 - in Raw Developer and PhotoLine it's about 175 (in 8 bit). The reason seems to be that my output from RD is in my own camera profile. After a change to ProPhoto, ImageJ says between ca. 61500 and 62534 - would that be OK? --edit: well, if I need not re-shoot, then I can just adjust a little in the raw converter. Kind regards - Hening

Hi Hening,

62000-ish sounds fine to me; no clipping is involved, so the entire range of brightnesses is available for the characterization of the edge profile. It would mean, though, that the edge profile is valid for ProPhoto RGB data (= gamma 1.8 compensated). I'm not sure what the contrast/gamma is for the Camera profile setting. What you preferably want to know is the apparent blur for your normal workflow. If the Camera profile is part of your normal workflow and indicative of what you base your sharpening on, then the images would be a bit on the bright side. However, if you base your workflow and sharpening on the ProPhoto-converted image data, then you should also do the blur analysis in that colorspace.

Hi again - This is a little off topic in relation to blur radius - but why is 65000 too bright? Why would one give away dynamic range?

Hi Hening,

It could only become an issue if the data is really clipped, which would obscure the real shape of the edge profile at the top of the curve. When the real shape of the edge profile is clipped at the top, the estimation of the actual shape becomes more of a guess. The only way to limit the guesswork in such a case is to restrict the selection of pixels to only the really relevant ones. The estimation of the full curve shape will then still be based on relevant pixels, and the program will fit as good a curve shape as it can.

I'm not sure what the contrast/gamma is for the Camera profile setting. What you preferably want to know is the apparent blur for your normal workflow. If the Camera profile is part of your normal workflow and indicative of what you base your sharpening on, then the images would be a bit on the bright side.

Hi Bart, thank you for your answers.

My normal workflow is output from Raw Developer in my camera profile (ICC), which is linear, then open in PhotoLine. PL opens the image by default as an RGB image in ProPhoto. I then set the image type to Lab and attach standard Lab as the profile. Now, for the test, I have also opened it in Photoshop.

Here is what happens: In RD, the white patch of your target reads RGB 167-176-177. So this would be UNDERexposed, and so is the visual impression.

On the processed TIF opened in PhotoLine, the RGB reading is 255, and so is the visual impression: only the bottom 5 gray patches are distinguishable, the rest is white. No change after the transition to Lab.

In Photoshop, the visual impression is the same - but the RGB reading is 32000! - 100 for L* in Lab! (of the RGB image; no change to Lab made here).

Hi Hening,

I am not familiar enough with Raw Developer by Iridient Digital to know what happens there, but could it be that the image is in linear gamma? That would explain its seemingly underexposed look; after all, 176 in linear gamma space is 208 in gamma 1/1.8 space, which is only slightly on the dark side, but good enough to test with.
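The 176-to-208 conversion follows directly from the gamma pre-compensation formula; a one-line check (my own sketch, not anyone's production code):

```python
def linear_to_gamma(v8, gamma=1.8):
    """Apply gamma 1/1.8 pre-compensation to an 8-bit linear level."""
    return round(255 * (v8 / 255) ** (1 / gamma))

print(linear_to_gamma(176))  # 208, only slightly on the dark side
```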

Quote

On the processed TIF opened in PhotoLine, the RGB reading is 255, and so is the visual impression: only the bottom 5 gray patches are distinguishable, the rest is white.

No change after transition to Lab.

I'm not familiar at all with PhotoLine, but it is clearly not acceptable that such a level of exposure gets clipped.

Quote

In Photoshop, the visual impression is the same - but the RGB reading is 32000! - 100 for L* in Lab! (of the RGB image; no change to Lab made here).

ImageJ: same as PhotoLine, RGB reading 65535.

What is going on here??

Well, when PhotoLine, and Photoshop, and ImageJ all see clipped data, then it's probably something to do with the Raw converter's TIFF output.

Hi Bart, thank you for your answer. I have figured out that the clipping is introduced by ACR. The upcoming version 2.0 of Raw Developer (overdue since October, and expected in a few days) will have CA removal, both manual and automatic, so I'll wait for that before I proceed with the slanted edge tool. Thank you again for this tool, and for your help using it. Hening

I fooled around a little with your target and tool. Attached are 4 screen shots of the upper half of the target, enlarged to approx. the same size as the screen display of the original:
1 - original
2 - TIF developed in Raw Developer with R-L Deconvolution, 10 iterations, radius 0.6 = default
3 - same, r = 0.8, my visual optimum
4 - same, r = 1.07 as calculated with your online tool.
Using your method, the angle of the slanted edge was calculated to 5.553 degrees. Shots are with a Contax Sonnar 2.8/85 at f/8 on a Canon 5D2. It seems to me that your target performs best as the basis for visual judgement.

Hi Hening,

I'm not sure whether it's because you are shooting from the computer display instead of a printed test chart, or something else went wrong, but frankly, samples 2 ... 4 look horribly oversharpened (judging by the halo that was produced). Even shot no. 1 (I assume it's a screen print rather than a photo of the screen) shows an issue due to quick display downsampling below 100% zoom.

Therefore, I'd hesitate to draw any conclusions about the proper Capture sharpening radius based on this attempt. My tool will happily process anything (within reason) that's thrown at it, but that doesn't mean that the outcome isn't influenced by the input quality. On the contrary, it specifically measures that input quality.

That leads to the conclusion that either the target was compromised too much by using a screen display instead of a high-resolution print, or there was an issue with the Raw converter (which also adds those zipper artifacts along sharp edges), or some strange combination of the two.

I suggest you consider getting a printed version of the resolution target, preferably a high-resolution inkjet print (approx. 600-720 PPI), or as a minimum use the source file at 100% zoom (just displaying the slanted edge part of the chart) if you insist on using a screen display. But more importantly, I'd try another Raw processor, or different settings, because what I've seen so far doesn't get me excited (which is puzzling, because Iridient Digital's 'Raw Developer' gets a lot of praise amongst its followers).

thank you for your detailed answer. So I'll try to have the target printed and mounted ASAP. Yes, #1 is a screen shot of your png. It surprises me, too, that RD should not be so good. Last time I compared it to ACR CS5, combined with SmartSharpening in PS, RD was absolutely superior. But that was on overall rendering with emphasis on color, not on resolution in particular. This is off-topic, but what is the best raw converter in your view?

Hi Hening,

I'm still surprised if RD can't do better, but of the ones I use myself, RawTherapee, Lightroom/ACR, and Capture One v7 all produce state-of-the-art Raw conversions. Of course, they are also quite different with respect to the target audience they are designed for, and the strength of the different features they offer. RawTherapee, although some Mac users report difficulties in getting it to run, is free to use and might be worth a try to see if that changes the conversion quality much.

You could also send me a link to your file by PM, which would allow me to check whether I can find anything out of the ordinary with the converters I mentioned.

In the meantime, here is my best effort with CS5, SmartSharpen Gaussian blur 100% r=0.4, viewed at 400%. --edit: hm - it looks a good deal better than RD...

Hi Hening,

The link doesn't lead to a file (or at least Rapidshare says it is not found). Anyway, the CS5 conversion + Smart Sharpening already looks a lot better. You can also try Lens Blur instead of Gaussian. Sharpening in ACR would offer some more possibilities (Radius/Amount/Detail, together with masking, and noise reduction if needed) to tweak the settings.

sorry for the download failure. They have changed some things since I last used them. I had to activate "Direct download" and can now access the file. Please try again.

Hi Hening,

Unfortunately, the file is not available. I suppose the https:// suggests that it requires a user ID/password to access the file with this URL.

Quote

I have contacted a local print service asking to print and mount your target.

This is interesting in its own right. Depending on the technology they use to produce prints, the resolution target may reveal a thing or two about their setup. A local service that I used for quick photochemical prints found out that the RGB laser beams of their Fuji Frontier were not aligned as well as they could be ...

You will be able to judge the actual on-paper resolution by measuring the diameter of the central blurred disk of the 'star', and adjust the PPI of print jobs you send them in the future accordingly.

I have now received my target printed and mounted flat. How bright (in EVs) should it be illuminated for shooting? (I assume the exposure should be adjusted for the middle gray area).

Hi Hening,

I usually aim for the white patch of the gray steps at the edge to land around RGB 235,235,235, but a bit darker is no problem. I use as linear a tone curve as possible during this Raw conversion, because I also use a linear tone curve for the final Raw conversion of my images. I also use my regular (almost default) contrast and brightness settings. If you tend to use different settings, you can use those, but do understand that contrast influences the final sharpening as well.

thanks for your fast reply. It seems that you didn't reply to the first part of my question, which refers to illumination when shooting. I mean, I can illuminate the middle gray of the target to different EV values, and I thought there might be a recommendable range. In fact, it was not me who thought of this - the operator of my print service did.

Hi Hening,

I don't see how the absolute level of illumination could have an effect, since it's the ISO setting, the aperture, and the exposure time that determine where on the tone curve the image of the target will be placed. We're not shooting film with an S-curve-shaped response, but a digital sensor with a linear response curve, and the gamma pre-compensation that is applied afterwards compensates for the human response and the display gamma.

His thought - at least in my adaptation - was: the performance of a lens depends on the brightness of illumination of the subject, independent of post-processing. Say the target is illuminated to EV 6 or 10. I can compensate for the lower illumination by increasing the exposure time, achieving the same middle gray - but that will not help the lens to see lines. Or is it me who got it wrong?

Hi Hening,

The level of illumination makes no difference to the lens; its purpose is just a matter of refracting photons. Their quantity doesn't really matter; their wavelength does.

What his thought might have been is that ideally one would do the deconvolution sharpening in linear gamma space. When we do our Capture Sharpening, however, we usually are already in an approx. gamma 1/2.2 space, which stretches shadows and compresses highlights. Therefore it does matter where on the gamma curve we measure contrast, and thus exposure level (not illumination level) does matter.

I've taken two precautions to work around that. One is the grayscale step wedge which, when properly exposed, spans a large range from reflected black to reflected white. It makes it quite easy to see whether the exposure level was correct; otherwise we'd lose shadow or highlight detail. When properly exposed, like I assume one's regular images are, the white patch should land around [235,235,235], the medium gray around [128,128,128], and black somewhere around [10,10,10], but these are not absolute goals, because different Raw converters produce different tonalities. The point is that there is a good spread of brightness levels with visible detail from dark to light. The second precaution I took was to evaluate that entire range (whatever it happens to be) as it also forms at the sub-pixel edge transitions, and fit a curve to it. I do not measure contrast, but the shape of a contrast transition curve.

And as that curve shape shows, the transition is usually very well balanced and symmetrical along the brightness range, and almost perfectly follows a symmetrical differential Gaussian curve. The only deviation that can pop up is some lens glare, which raises the response of the shadows and lowers contrast. It is taken into account when fitting the curve, but it won't dominate the outcome.
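The idea of fitting a 'differential Gaussian' can be sketched as follows: an edge blurred by a Gaussian of width sigma follows an error-function profile, and sigma can be recovered by least squares. This is only an illustration with stdlib Python, not the tool's actual fitting code; for brevity, the dark/light levels and the edge center are held at their true values and only sigma is searched:

```python
import math
import random

def edge_model(x, lo, hi, x0, sigma):
    """Edge-spread function whose derivative is a Gaussian of width sigma."""
    return lo + (hi - lo) * 0.5 * (1 + math.erf((x - x0) / (sigma * math.sqrt(2))))

# Synthetic edge, super-sampled every 0.1 px, blurred with sigma = 0.7,
# plus a little sensor noise (all values hypothetical)
xs = [i * 0.1 for i in range(-60, 61)]
rng = random.Random(0)
ys = [edge_model(x, 10, 235, 0.0, 0.7) + rng.gauss(0, 1.0) for x in xs]

def sse(sigma):
    """Sum of squared residuals against the model for a trial sigma."""
    return sum((y - edge_model(x, 10, 235, 0.0, sigma)) ** 2
               for x, y in zip(xs, ys))

# Brute-force search over sigma in 0.01-px steps
best = min((s / 100 for s in range(30, 151)), key=sse)
print(best)  # recovered sigma, close to the true 0.7
```

Because the fit uses the whole super-sampled transition rather than a single contrast reading, a bit of noise or glare shifts individual samples without dominating the recovered sigma.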

thank you for your detailed reply. There is no end to surprises in these matters, for the ignorant. That the brightness of illumination would make a difference sounded right to me, because in MTF curves, resolution and contrast are mutually dependent. (If I got THAT right...)

Hi Hening,

Almost right, and that perhaps explains the confusion. It's actually the 'transfer of contrast' (= modulation) and resolution that are dependent. The function describes how an input modulation (100%, even if it has modest absolute contrast) at a given resolution is reduced to an output modulation, as a function of resolution. It doesn't describe, at least not directly, total scene contrast. It's the deterioration of modulation of the specific spatial frequency component that is described.