Thought I'd mention it, since I heard from several people who reluctantly had to stop using it.

Hi Keith,

Those people should visit LuLa more frequently, it seems. But I do agree, FocusMagic is special. It also makes a very nice automatic choice, enhancing the image signal more than the noise (although it always helps to feed it relatively low-noise images).

Back in the day, I used Focus Magic routinely, but because it fell behind the curve in computing terms, I switched to Topaz InFocus, which did an acceptable job. Now I use Photo Ninja as my Raw converter. PN uses deconvolution sharpening that is the best I have ever seen, and it does it at the Raw stage. It can also be used as a Photoshop filter to work on TIFFs or JPEGs. So I have no need or desire to reintroduce Focus Magic into my workflow. For motion blur, InFocus covers that, although I have rarely used it.

At what point in the workflow would it be best to use FM: before you up-rez for a print, or after you have upsized? Most of the images I have tried with it using the Detect button require a blur width of 2. Does this mean that my images aren't very sharp to begin with?

Hi Mike,

A radius setting of one or two is normal for shots that are in focus, taken at moderate apertures. Capture sharpening depends on the aperture used, lens quality, and focus accuracy. The Raw conversion process may also require some post-conversion sharpening.

You can use it at the 100% size, but you can also postpone that if you need to up-sample (which I prefer to do), e.g. for output. That way you reduce the risk of creating artifacts that will be amplified by the up-sampling algorithm. You may run into a Photoshop(?) file limitation of 2GB on sharpening of up-sampled files, in which case you can use FM on selections that divide the image into tiles.
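The tiling idea is to process overlapping tiles and keep only their centers, so the filter still sees context at tile borders. A rough sketch in Python/NumPy (FocusMagic itself is a GUI plugin, so the filter below is just a placeholder function; the tile and overlap sizes are arbitrary):

```python
import numpy as np

def process_in_tiles(image, filt, tile=1024, overlap=32):
    """Apply `filt` to an image tile by tile, expanding each tile by
    `overlap` pixels on every side so the sharpening filter has context
    at tile edges, then keeping only the central region of each result.
    `filt` is any function mapping a 2-D array to a same-shaped array."""
    h, w = image.shape
    out = np.empty_like(image)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            # Expand the tile by `overlap` pixels, clipped to the image.
            y0, y1 = max(y - overlap, 0), min(y + tile + overlap, h)
            x0, x1 = max(x - overlap, 0), min(x + tile + overlap, w)
            result = filt(image[y0:y1, x0:x1])
            # Copy back only the central (non-overlap) part.
            out[y:y + tile, x:x + tile] = result[
                y - y0:y - y0 + min(tile, h - y),
                x - x0:x - x0 + min(tile, w - x)]
    return out
```

In Photoshop the equivalent is simply running FM on rectangular selections with a little overlap; the sketch just shows why the overlap matters.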

Question for Bart or anyone familiar with both the Topaz deconvolution sharpening tool and Focus Magic.

I have been using Focus Magic for about 2 months now, after reading some of the positive posts on this site. I find it works very well in combination with some of the PhotoKit creative sharpening settings.

My question: is Focus Magic using a deconvolution algorithm? And has anyone compared it to the Topaz deconvolution plugin?

Yes, Focusmagic uses deconvolution. It does some other optimizations under the hood as well, e.g. to reduce the tendency of such algorithms to also 'enhance' noise, and it attempts to avoid blocking artifacts.

Quote

And has anyone compared it to the Topaz Deconvolution plugin?

Yes. Topaz Labs InFocus uses different (deconvolution) algorithms, which sometimes produce better results than FocusMagic can achieve, but FocusMagic usually does a better job. InFocus will create ringing artifacts fairly quickly, but that also has to do with selecting too large a radius.

Both applications, especially InFocus, do a very good job when one first up-samples the source data (e.g. 3x; Bicubic Smoother can be acceptable), deconvolves it with a proportionally larger radius, and then down-samples it again. At the final size, another mild deconvolution run will finish the job. Of course, when one needs to up-sample for large-format output, there is no benefit in down-sampling the intermediate result.

Cheers, Bart

Dear Bart,

I've read your comments on the above process a few times, and considering your expertise I'd love to try it out. It is however a little confusing to me with all the steps and possible options during these steps. I'm sorry to bother you, but could you provide us with a step by step tutorial on this process, assuming we have the correct software installed already? In my case, I have all the software you discuss and am shooting Nikon D800e, Sigma DPM series, Ricoh GR.

Why does up-sampling, deconvolve sharpening, then down-sampling again give superior results over simply deconvolve sharpening the original file at original size?

Hi, no problem.

1. Take an unsharpened Raw conversion. No sharpening artifacts can exist, because none are created.
2. Up-sample to e.g. 300%, preferably with a very good algorithm that creates very few artifacts. While not perfect, Bicubic Smoother will often do well enough (although it does produce some halo), especially if the image was not sharpened yet.
3. Use a deconvolution sharpener: PS Smart Sharpen, Topaz Labs InFocus, or FocusMagic. Expect to use a radius that is also 3x larger than you would use otherwise.
4. Down-sample to the original image dimensions; regular Bicubic will do, but better algorithms are always beneficial.
5. Add a final bit of very-small-radius deconvolution sharpening to offset the down-sampling blur.
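For readers who think in code, the shape of those five steps can be sketched in Python/NumPy. Everything here is a stand-in, not the real tools: a Wiener-style deconvolution with a Gaussian blur model substitutes for FocusMagic/InFocus, nearest-neighbour and box resampling substitute for Bicubic Smoother/Bicubic, and the radii are made up. It only illustrates the order of operations.

```python
import numpy as np

def gaussian_psf(shape, sigma):
    """Centered, unit-sum Gaussian PSF the size of the image."""
    y = np.arange(shape[0]) - shape[0] // 2
    x = np.arange(shape[1]) - shape[1] // 2
    g = np.exp(-(y[:, None] ** 2 + x[None, :] ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def wiener_deconvolve(image, sigma, k=1e-3):
    """Frequency-domain Wiener deconvolution; `k` regularizes noise."""
    H = np.fft.fft2(np.fft.ifftshift(gaussian_psf(image.shape, sigma)))
    G = np.fft.fft2(image)
    return np.real(np.fft.ifft2(G * np.conj(H) / (np.abs(H) ** 2 + k)))

def upsample3x(image):
    """Step 2 stand-in: 3x nearest-neighbour up-sample."""
    return np.kron(image, np.ones((3, 3)))

def downsample3x(image):
    """Step 4 stand-in: 3x3 box-average down-sample."""
    h, w = image.shape
    return image.reshape(h // 3, 3, w // 3, 3).mean(axis=(1, 3))

def sharpen_roundtrip(image, radius=1.0, k=1e-3):
    """Steps 2-5: up-sample, deconvolve with ~3x the radius,
    down-sample, then a final mild small-radius deconvolution."""
    big = upsample3x(image)                      # step 2
    big = wiener_deconvolve(big, 3 * radius, k)  # step 3
    small = downsample3x(big)                    # step 4
    return wiener_deconvolve(small, 0.5, k)      # step 5
```

Step 1 is simply feeding `sharpen_roundtrip` an unsharpened conversion; in practice the plugins do the deconvolution steps far better than this toy Wiener filter.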

Quote

In my case, I have all the software you discuss and am shooting Nikon D800e, Sigma DPM series, Ricoh GR.

Why does up-sampling, deconvolve sharpening, then down-sampling again give superior results over simply deconvolve sharpening the original file at original size?

First of all, we have to consider that it would not help, or even be necessary, if we had a perfect model of the blur PSF and high-precision data values. If we did have such a PSF and image data, convolution with that PSF would be exactly reversible by deconvolution with that same PSF. We also need to understand that our original image is not a perfect representation of the projected image. Our image is stored in a rectangular grid of samples that merely look like squares.
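That reversibility claim is easy to check numerically. A tiny 1-D sketch in Python/NumPy (synthetic signal and a made-up Gaussian PSF, nothing from the thread): with a perfectly known PSF and noise-free data, dividing by the PSF's transfer function in the frequency domain undoes the convolution to machine precision.

```python
import numpy as np

rng = np.random.default_rng(1)
signal = rng.random(64)  # an arbitrary noise-free 'image' row

# A known Gaussian blur PSF, normalized to unit sum.
psf = np.exp(-0.5 * ((np.arange(64) - 32) / 1.0) ** 2)
psf /= psf.sum()

H = np.fft.fft(np.fft.ifftshift(psf))                     # transfer function
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * H))    # convolution
restored = np.real(np.fft.ifft(np.fft.fft(blurred) / H))  # naive deconvolution

err = np.max(np.abs(restored - signal))  # at machine-precision level
```

With any noise at all (even 8-bit quantization) that naive division blows up at high frequencies, which is exactly why practical tools regularize, and why FocusMagic works to avoid 'enhancing' the noise.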

What we are actually looking at is a grid with brightness samples taken at those grid positions, more like 'blobs of brightness' than squares. By interpolating we create an image that looks more like those blobs, with smooth transitions between them. We usually do not add resolution, so we need not worry too much about adding aliasing when we down-sample at a later stage.

When we deconvolve this up-sampled representation we also have more intermediate brightness positions available to model the transitions between the original samples in a smooth way. It is also easier to see if we apply too much deconvolution, because we are looking at larger areas with information that should remain somewhat smooth, not blocky.

Deconvolution will affect all spatial frequencies, including lower ones (giving more punch to the entire image, a sort of clarity at the feature-size level, not just contrast), but the effect will be strongest where the (original + up-sample) blur matches the deconvolution PSF. Since we can push the deconvolution pretty far on the enlarged image without creating immediately noticeable artifacts, we can possibly get more effective results (also at lower spatial frequencies).

When we now down-sample again, the risk of introducing down-sampling artifacts is pretty low, because we have not really created resolution in excess of what we originally started with (we're still under-sampled); the original image was merely restored to a higher level of resolution. Down-sampling with e.g. Bicubic will still blur the image a bit, but it will be sharper and more accurate than what we started with. To remove that down-sample blur we can do a final small-radius sharpening.

This procedure works best on well-behaved original image data, AA-filtered and not yet sharpened. Using 16-bit/channel data helps, but having more interpolated pixels also benefits originals with only 8 bits/channel.

As always, one can combine this procedure with the use as a Luminosity Blend-if layer, just for the sharpening. That will reduce the effect of noise, and allow the use of masks, e.g. to mask out smooth sky regions. Just do the sharpening up/down sample on a copy of the image, and use the result as a layer. Before down-sampling one can disable the layer because it will only add to the risk of aliasing upon down-sampling.

Quote

First of all, we have to consider that it would not help or be necessary if we had a perfect model of the blur PSF, and high precision data values. If we did have such a PSF and image data, convolution with that PSF would be exactly reversible by deconvolution with that same PSF.

Would using your tool for determining the sharpening radius give such a (near) perfect model of the blur PSF? Or do you use both the tool and the procedure described above? As far as I remember from trying FM, one cannot enter the radius numerically, but has to find it by visual trial, which I found off-putting given the very small preview window. However, your link in post #4 has now guided me to your post #15 in that thread, encouraging me to give FM a new try on the next occasion.

Hi Hening,

Yes, using my Capture Sharpening tool would give something very close to a perfect PSF, but we still are dealing with squarish approximations of brightness. Therefore, my Capture Sharpening tool will find the best compromise in that scale of representation. It will be easier to find a good representation if we had more samples, which is what up-sampling can offer, as long as we avoid the creation of up-sampling artifacts, and for that we may need to skip/postpone initial Capture sharpening till after the up-sampling.

Quote

As far as I remember from trying FM, one cannot enter the radius numerically, but has to find it by visual trial, which I found off-putting given the very small preview window. However, your link in post #4 has now guided me to your post #15 in that thread, encouraging me to give FM a new try on the next occasion.

I tend to crank the FM Amount all the way up to 300% initially, and then slowly increase the radius until the image stops becoming sharper and suddenly starts to show bolder edges; then I back the radius down one unit, followed by selecting a more reasonable Amount, maybe 150-200%, which can be tolerated on the more blurry up-sampled image. That way it is easier to find the optimum radius in FM, even with the small preview. Typical values I get for a 300% up-sampled image are approx. radius 5 or 6 and Amount 200%, but YMMV, also depending on the Raw conversion quality. Sometimes the 'Forensic' deconvolution method works better than the 'Digital Camera' setting.

1. Take an unsharpened Raw conversion. No sharpening artifacts can exist, because none are created.
2. Up-sample to e.g. 300%, preferably with a very good algorithm that creates very few artifacts. While not perfect, Bicubic Smoother will often do well enough (although it does produce some halo), especially if the image was not sharpened yet.

Question: is there a better plugin for such up-sampling work, such as Perfect Resize or Blow Up, and if so, which settings are ideal?

3. Use a deconvolution sharpener: PS Smart Sharpen, Topaz Labs InFocus, or FocusMagic. Expect to use a radius that is also 3x larger than you would use otherwise.
4. Down-sample to the original image dimensions; regular Bicubic will do, but better algorithms are always beneficial.

Question: is there a better plugin for such down-sampling work, such as Perfect Resize or Blow Up, and if so, which settings are ideal? You mention better algorithms; which ones?

5. Add a final bit of very small radius deconvolution sharpening to offset the down-sampling blur.

Dear Bart,

1. Thank you!
2. Please see my questions in red above.
3. Where does luminance and color noise reduction (if needed) go in the recipe? I typically use color noise reduction in almost all Bayer-sensor images, but rarely use luminance noise reduction except at high ISOs.

Thanks!

Hi,

You're welcome.

Unfortunately there are no plugins (that I know of) that do very good up-sampling and down-sampling; there's always something lacking. There are some that do good up-sampling for that specific purpose, e.g. Photozoom Pro or Blow-up, but they may create/add detail that will get in the way of deconvolution. For down-sampling, most applications do not work in linear gamma space, which can cause color-blending issues, and many do not use proper down-sampling filters (e.g. a Lanczos windowed Sinc or similar). Sigh.

What we really need, both for up-sampling and down-sampling, are better algorithms than most image-editing applications have to offer. For superior quality we are forced to do it outside of Photoshop and use e.g. ImageMagick, but that may be too disruptive in most workflows. ImageMagick allows the use of the superior EWA resampling, which considers elliptical/circular regions for weighted interpolation, and which gives very smooth up-sampled results that keep a lot of original detail intact, a very good starting point for deconvolution. EWA up-sampling works best in gamma-adjusted space, so there is no need to first linearize for that.
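The linear-light down-sampling point made above can be illustrated with a short Python/NumPy sketch (not from the thread): box down-sampling a black-and-white checkerboard in linear light yields the perceptually correct mid-grey (sRGB roughly 0.735), while averaging the gamma-encoded values directly gives a too-dark 0.5.

```python
import numpy as np

def srgb_to_linear(v):
    """sRGB decoding (values in 0..1)."""
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(v):
    """sRGB encoding (values in 0..1)."""
    return np.where(v <= 0.0031308, v * 12.92, 1.055 * v ** (1 / 2.4) - 0.055)

def downsample2x_linear(img):
    """2x2 box down-sample done in linear light, then re-encoded.
    Averaging the gamma-encoded values instead darkens
    high-contrast detail (the color-blending issue above)."""
    lin = srgb_to_linear(img)
    h, w = lin.shape
    small = lin.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return linear_to_srgb(small)

# A 0/1 checkerboard: the correct down-sample is mid-grey in linear light.
checker = (np.indices((4, 4)).sum(axis=0) % 2).astype(float)
good = downsample2x_linear(checker)                     # ~0.735 everywhere
naive = checker.reshape(2, 2, 2, 2).mean(axis=(1, 3))   # 0.5, too dark
```

Real down-sampling filters are better than a box average, of course, but the gamma issue is the same for all of them.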

With that in mind we can try to do slightly less optimal but still decent up-sampling with a plugin, but we need to anticipate deconvolution as the next step. One could use an automation plugin like Photozoom Pro, with the S-Spline Max resize method but with all three fine-tuning controls zeroed out. Photozoom Pro is actually meant to do up-sampling towards a final output product, e.g. large-format output, and for that it is very good, because it actually adds edge resolution. But for an up-sample/down-sample round trip we must dumb it down considerably: the added resolution can cause aliasing artifacts when we down-sample; it adds a bit too much detail for a trouble-free subsequent down-sample. One just needs to try it on one's own images and subject matter.

It's a pity that we have to jump through so many hoops and use various plugins for such a common task as restoring sharpness to our image captures, but for smaller volumes it is doable, and the output quality can be improved a lot.