This is how a comparison of these resampling filters works out, both as a logarithmic FFT power spectrum of white noise, and as the same data plotted by spatial frequency (the average radial profile of the plots); see attachments. I've used 300% upsampling with the different methods/filters as a comparison, not because 300% is any more special than, say, 290% or 310%, but for safety: it allows one to upsample and (deconvolution) sharpen a D800 file in Photoshop while staying below 2 GB, which can trigger memory issues with some plugins, yet it is large enough to reveal issues.

To add my interpretation, based mostly on these graphs: the EWA methods indeed produce circularly symmetric results with equal sharpness at all angles, in contrast to the tensor (separable) type of resampling in the horizontal and vertical directions, which creates higher diagonal resolution on a rectangular pixel grid. The EWA sharpness is uniform at all angles, and relatively few aliasing artifacts are added to the interpolated (additional) pixels.

Default EWA Robidoux filtering results in a relatively high modulation of spatial frequencies at and slightly beyond the Nyquist frequency of the original image content. It also has a relatively low modulation in the region where aliasing can be expected, but it tends to show a bit more/sharper stair-stepping after deconvolution sharpening. The EWA Lanczos variants have a sharper modulation cut-off at the original image's Nyquist frequency, which should reduce the contribution of high-modulation aliasing just beyond Nyquist. However, as usual with Lanczos-type filters, there will be some ringing artifacts at sharp edges unless additional local regularization is performed.
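To illustrate why the ringing trade-off is inherent to Lanczos-type filters: the 1-D profile of the Lanczos kernel (which EWA applies radially) is a sinc windowed by a wider sinc, and its negative side lobes are what sharpen the frequency cut-off yet also produce over/undershoot halos at hard edges. A minimal numpy sketch of the kernel profile (not ImageMagick's exact implementation):

```python
import numpy as np

def lanczos(x, lobes=3):
    """1-D Lanczos kernel: sinc windowed by a wider sinc, nonzero for |x| < lobes.
    The negative lobes give a sharper frequency cut-off but cause ringing
    (over/undershoot) at hard edges. EWA evaluates this profile radially."""
    x = np.asarray(x, dtype=float)
    out = np.sinc(x) * np.sinc(x / lobes)
    return np.where(np.abs(x) < lobes, out, 0.0)

print(float(lanczos(0.0)))  # unity at the center
print(float(lanczos(1.5)))  # negative side lobe -> potential ringing
```

More lobes (e.g. the "Sharpest 4" variants discussed here) move the response closer to an ideal low-pass cut-off, at the price of wider-reaching ringing.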

It's not easy to declare a simple quality ranking, because it depends on image content. In general I prefer the EWA resampling because of its uniform sharpness, irrespective of edge orientation. The EWA Lanczos filtered methods all perform quite well, except for the ringing artifacts, which may be an issue for some types of image content. The default EWA Robidoux filter seems to produce a bit less ringing, but more stair-stepping (which may be less of an issue with e.g. a final output resampling to 600 or 720 PPI, but more of an issue with significant resampling to 300/360 PPI printed output).

I have not extensively tested all these filters at multiple (non-)integer resampling factors, but I assume there will not be too much of a difference, since these filters are not tuned for a single specific spatial frequency but for general resampling. As always, the method one uses should be tested for the intended use, because some artifacts may be less important at certain viewing distances, and some subject matter benefits more than others.

Bart:Although the Robidoux filter, like all well-designed Elliptical Weighted Averaging methods, is isotropic when resizing and preserving the aspect ratio, it is applied to filter values laid out on a grid, which breaks the rotational symmetry. Do I understand correctly that the Robidoux noise power spectra plots were not the results of averaging in all directions, like you did for the Bicubic Smoother ones? I do understand that the difference may be small, but if possible I'd like to compare apples to apples.

Hi Nicolas,

All the power spectrum plots are averaged radial profiles, with equal weighting at all angles. I just used the ImageJ 'Radial Profile' plug-in to generate the chart data, which was copied and pasted into Excel to combine multiple plots. As the FFT power spectra show (do remember that they are logarithmic by ImageJ default, so with boosted lower amplitudes), the EWAs have much cleaner performance beyond the original image's Nyquist frequency, and the tensor Lanczos filter also is 'cleaner' than the Photoshop Bicubic Smoother filter for aliasing artifacts (but not for ringing artifacts unless addressed additionally).
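For reference, the angle-averaged radial profile that the ImageJ 'Radial Profile' plug-in produces can be sketched in a few lines of numpy (bin count and normalization here are my own assumptions, not ImageJ's exact settings):

```python
import numpy as np

def radial_profile(power, nbins=64):
    """Average a 2-D power spectrum over annuli of equal radius,
    giving the angle-averaged power per spatial frequency."""
    h, w = power.shape
    cy, cx = h // 2, w // 2
    y, x = np.indices(power.shape)
    r = np.hypot(y - cy, x - cx)
    bins = np.linspace(0.0, r.max(), nbins + 1)
    idx = np.clip(np.digitize(r.ravel(), bins) - 1, 0, nbins - 1)
    sums = np.bincount(idx, weights=power.ravel(), minlength=nbins)
    counts = np.bincount(idx, minlength=nbins)
    return sums / np.maximum(counts, 1)

# White noise has a flat expected spectrum, so the radial profile of a
# resampled copy directly reveals the resampling filter's frequency response.
rng = np.random.default_rng(0)
noise = rng.standard_normal((256, 256))
spectrum = np.abs(np.fft.fftshift(np.fft.fft2(noise))) ** 2
profile = radial_profile(np.log10(spectrum + 1e-12))  # log scale, as in ImageJ
```

Running the same profile on the upsampled noise, and comparing filters curve-by-curve, reproduces the kind of charts discussed above.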

The Robidoux filter, as can be seen in the logarithmic FFT power spectrum in the previous post, is indeed rotationally not as uniform as the EWA Lanczos versions. Upsampling a zoneplate (rings) image produces very clean results with the 'EWA Lanczos Radius' and 'EWA Lanczos Sharpest 4' filters; 'EWA Lanczos Sharper' shows more stair-stepping than both.

So for normal image content that has not been excessively sharpened yet, these may be very good for general up-sampling of properly AA-filtered images. The risk of ringing is reduced when edge sharpness in the original image is moderate, or when adaptively reducing the amount of amplitude boost by choosing a different filter.

For example, even though it is not mentioned in http://www.imagemagick.org/Usage/filter/nicolas/ which is slowly going stale, I now recommend EWA LanczosSharpest 4 for some types of content, and I am now more fond of EWA Robidoux (more for downsampling than upsampling, though) and EWA LanczosRadius than I previously was.

The reason there is no pre-defined "LanczosSharpest" filter in ImageMagick right now is that I was focusing on 3-lobe methods when the door was open to add named methods, and "LanczosSharpest 3" is not that good. When I have a minute, I may address this state of affairs.

Why not try out some of your concepts and see what works best for you? Doesn't have to be an extensive test, but what works for others in their workflow may not work as well for you.

If we're talking images being sent to some large output (or why size up?), I totally agree. Looking at synthetic images sharpened on-screen with differing methods is interesting. But most care what a print looks like after sizing and you have to make that print. I've tried different techniques and products but the final evaluation is an 8x11 print off my 3880 representing a small part of what would be a much bigger print. At this point, the several 3rd party products tested didn't provide anything other than much slower processing compared to using Lightroom or ACR for initial size and capture sharpening (which is more critical to the results than the actual upsizing processes, at least visibly on a print). Same with step interpolating.

However, what about this comment from Schewe above: "The key is to upsample to above the final size/res you may need, apply sharpening and other tricks (like adding noise/grain) and only down sample to get to your final output PPI (360/720 Epson, 300/600 Canon & HP)." As I understand Schewe's comments, one should upsample in even amounts (2x, 4x, etc.) to whatever it takes to get just larger than the final output size/res, sharpen and apply other final touches, and then downrez to the final output size/res without any additional sharpening or other final processing. Or have I misunderstood?

However, what about this comment from Schewe above: "The key is to upsample to above the final size/res you may need, apply sharpening and other tricks (like adding noise/grain) and only down sample to get to your final output PPI (360/720 Epson, 300/600 Canon & HP)."

Hi David,

As far as I understand Jeff, he generally suggests to upsample if the output resolution is below 300 / 360 PPI, or to 600 / 720 if output resolution is above 300 / 360, and downsample if resolution is above 600 / 720 PPI.

Personally, I would disagree about resampling after sharpening, because resampling in general reduces resolution and sharpening after resampling is usually beneficial.

Quote

As I understand Schewe's comments, one should upsample in even amounts (2x, 4x, etc.) to whatever it takes to get just larger than the final output size/res, sharpen and apply other final touches, and then downrez to the final output size/res without any additional sharpening or other final processing. Or have I misunderstood?

That seems to be his advice, but again I disagree. I have not seen anything that substantiates any benefit of integer/even resampling factors. In fact, most resampling algorithms are scale agnostic and treat any resampling factor in exactly the same way. In theory, it would be possible to devise an algorithm that benefits a specific scaling factor slightly more than other factors, but it would be impracticable for general use.

There is a demonstrable benefit to treating down-sampling differently compared to up-sampling, but even that is usually ignored by the generic resampling routines. Most software delivers rather sub-standard quality when it comes down to resampling, unfortunately (because it is avoidable).

As far as I understand Jeff, he generally suggests to upsample if the output resolution is below 300 / 360 PPI, or to 600 / 720 if output resolution is above 300 / 360, and downsample if resolution is above 600 / 720 PPI.

Personally, I would disagree about resampling after sharpening, because resampling in general reduces resolution and sharpening after resampling is usually beneficial.

My response "no scaling after sharpening" was in reference to doing image resampling before sharpening... if you were then to downsample, of course you would want to add sharpening, because downsampling introduces softness.

Personally, since I use Lightroom (and usually large captures) the only resampling I do is in the LR Print module and I apply output sharpening where LR upsamples 1st and then sharpens...

As for the upsampling %'s, I've pretty much moved away from that (in part because of Bart's tests) and also since I don't do upsampling in Photoshop for the most part-although Photoshop CC's new "Preserve Details" option in Image Size is very interesting. If I were to need to do a massive upsample of a small original, I would prolly take a look at doing it in PS CC.

If we're talking images being sent to some large output (or why size up?), I totally agree. Looking at synthetic images sharpened on-screen with differing methods is interesting. But most care what a print looks like after sizing and you have to make that print. I've tried different techniques and products but the final evaluation is an 8x11 print off my 3880 representing a small part of what would be a much bigger print. At this point, the several 3rd party products tested didn't provide anything other than much slower processing compared to using Lightroom or ACR for initial size and capture sharpening (which is more critical to the results than the actual upsizing processes, at least visibly on a print). Same with step interpolating.

I have not seen anything that substantiates any benefit of integer/even resampling factors. In fact, most resampling algorithms are scale agnostic and treat any resampling factor in exactly the same way. In theory, it would be possible to devise an algorithm that benefits a specific scaling factor slightly more than other factors, but it would be impracticable for general use.

Resampled it by 200% and 203% in Preview in OS X 10.9 and cropped a 100x100 pixel area in the upper left corner. Then I zoomed in a lot (Retina displays are poor for 1:1 pixel peeping) and laid the images out side by side for a snapshot.

My point is that an accurately integer-ratio scaling and a slightly different factor produce some differences in the output for this particular kind of input. It is my understanding that any polyphase-type "linear" scaling procedure will show similar (in kind) behaviour.

I believe that the image shows that when you define the output pixel grid to be a nice multiple of the input grid, you get an output that is periodic with a short period, for "aliased"/periodic input. When it is a less nice multiple, the output period (or the "sampling" of the filter kernel) is larger. This does not make it any less mathematically correct, but it can make for visible differences that might not be desired.

I do not claim that this is a big visible problem for natural images. Usually, there will be some natural variation (e.g. variable distance/focus) that cause the pattern to be non-uniform anyways.
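The period of the coefficient phases can be worked out directly: write the scale factor as a reduced fraction p/q, and the fractional source position (which selects the kernel coefficients) repeats every p output pixels. A small sketch, assuming the common 'align corner pixel corners' geometry convention (function name is mine):

```python
from fractions import Fraction

def phase_period(scale, max_den=10000):
    """Number of output pixels before the filter-kernel sampling phase
    repeats, for a separable resampler at the given scale factor.
    Assumes the 'align corner pixel corners' convention, where output
    pixel i reads from source position (i + 0.5) / scale - 0.5."""
    p_over_q = Fraction(scale).limit_denominator(max_den)
    # Consecutive source positions differ by 1/scale = q/p; the fractional
    # part (the phase) repeats after p steps, since p * (q/p) is an integer.
    return p_over_q.numerator

print(phase_period(2.0))   # 200%: only 2 distinct coefficient sets
print(phase_period(2.03))  # 203%: 203 distinct phases before repeating
```

This matches the observation above: a "nice" ratio like 200% cycles through very few coefficient sets, so periodic input stays periodic with a short period, while 203% walks through many more phases before repeating.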

I believe that the image shows that when you define the output pixel grid to be a nice multiple of the input grid, you get an output that is periodic with a short period, for "aliased"/periodic input. When it is a less nice multiple, the output period (or the "sampling" of the filter kernel) is larger. This does not make it any less mathematically correct, but it can make for visible differences that might not be desired.

I do not claim that this is a big visible problem for natural images. Usually, there will be some natural variation (e.g. variable distance/focus) that cause the pattern to be non-uniform anyways.

I agree with this too. (With the minor caveat that for this to work, one needs to use the commonly used "align corner pixel corners" image geometry convention. With the "align corner pixel centers" convention, which, all other things being equal, I recommend over the other one for upsampling, what you need instead is for one less than the number of pixels in corresponding directions to be an integer ratio.) Such "short periodic coefficient sequences" can help preserve uniform "texture" where it exists; they can also add unwanted periodicity, and hence "macro" features, where you don't want them. Such artifacts are less of an issue with good low-pass filters (EWA Lanczos, for example) than with, say, bilinear, and they are more of an issue when the resize ratio is close to 1.
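The two geometry conventions differ in how an output pixel's coordinate maps back into the source image; a small sketch of both mappings (function names are mine):

```python
def src_align_corners(i, n_in, n_out):
    """'Align corner pixel corners': the image edges coincide, so output
    pixel center i maps to source coordinate (i + 0.5) * n_in / n_out - 0.5."""
    return (i + 0.5) * n_in / n_out - 0.5

def src_align_centers(i, n_in, n_out):
    """'Align corner pixel centers': the first and last pixel centers
    coincide, so the effective step is (n_in - 1) / (n_out - 1)."""
    return i * (n_in - 1) / (n_out - 1)

# Under 'align centers', 101 -> 201 pixels gives steps of (101-1)/(201-1) = 1/2,
# so every other output pixel lands exactly on an input pixel center. This is
# the "one less than the number of pixels" integer-ratio condition in action.
print(src_align_centers(2, 101, 201))  # -> 1.0, exactly on an input center
```

Note that under 'align corners', the 101 -> 201 resize is not an integer ratio at all (201/101), which is why the two conventions give different conditions for short-period coefficient sequences.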

Personally, since I use Lightroom (and usually large captures) the only resampling I do is in the LR Print module and I apply output sharpening where LR upsamples 1st and then sharpens...

As for the upsampling %'s, I've pretty much moved away from that (in part because of Bart's tests) and also since I don't do upsampling in Photoshop for the most part-although Photoshop CC's new "Preserve Details" option in Image Size is very interesting. If I were to need to do a massive upsample of a small original, I would prolly take a look at doing it in PS CC.

Like Jeff, I do most of my printing from Lightroom. It is very convenient and I am generally pleased with the results, even though they may not be optimal, according to Bart (who is very knowledgeable). After reading this thread, I re-examined my LR printing workflow and learned that if one does not check the resolution box to specify a printing resolution, and checks the dimensions option in the guides panel, the linear size and pixel dimensions of the printed image are shown (as illustrated below). In his LR4 book, Martin Evening suggests not checking the resolution box, in which case LR will send the full resolution to the printer and the downsizing will be done by the printer driver. I think it would be better to initially leave the resolution box unchecked and obtain a readout of the print resolution. With an Epson printer, one would then enable the print resolution and use 360 PPI if the indicated resolution is less than 360, and 720 PPI if it is greater than 360.

For best quality, it might be preferable to resize in Photoshop. For upressing I do have PhotoZoom Pro (recommended by Bart). If one is printing 8 x 10 inch with a 36 MP image, downsizing is needed. In Photoshop, one can use Bicubic Sharper (perhaps with a pre-blur) for convenience, but is there a plug-in available that could do a better job? Or I could use ImageMagick in a separate command-line step. What are others doing?

If one is printing 8 x 10 inch with a 36 MP image, downsizing is needed. In Photoshop, one can use Bicubic Sharper (perhaps with a pre-blur) for convenience, but is there a plug-in available that could do a better job? Or I could use ImageMagick in a separate command-line step. What are others doing?

That is interesting. Bart recommends regular bicubic, not bicubic sharper. He also recommends Topaz Detail for sharpening. I recently acquired the program on Bart's recommendation, but have not yet mastered its learning curve. Does anyone know what downsizing algorithm is used in LR?

That is interesting. Bart recommends regular bicubic, not bicubic sharper.

Hi Bill,

That's correct, even though the specific answer mentioned above was in relation to upsampling, deconvolution sharpening, and down-sampling again to the original size. Down-sampling image detail that exceeds the highest spatial frequencies of the smaller size will lead to aliasing artifacts, and the various resampling filters produce different artifacts as a result. Bicubic Sharper produces horrible aliasing; even regular bicubic down-sampling improves with a prior blurring of the highest spatial frequencies. In the specific up/down-sampling suggestion there will be little image detail to cause trouble (even with bicubic), because there is not much aliasing potential.
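The "pre-blur before downsampling" idea is just low-pass filtering before decimation: suppress detail above the new, lower Nyquist frequency before the resample, or it will fold back as aliasing. A deliberately crude numpy sketch (a box average plus 2x decimation stands in for the better Gaussian-pre-blur-plus-bicubic workflow; only the order of operations matters here):

```python
import numpy as np

def preblur_then_decimate(img, factor=2):
    """Illustrative downsample: block-average (a crude box low-pass)
    and decimate in one step. A Gaussian pre-blur followed by bicubic
    resampling gives better quality; this only demonstrates that the
    low-pass must come before (or with) the decimation."""
    h, w = img.shape
    h, w = h - h % factor, w - w % factor
    img = img[:h, :w]
    # Mean over factor x factor blocks = low-pass filter + decimate.
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# A pixel-frequency checkerboard (pure Nyquist detail) averages to flat gray;
# naive "pick every other pixel" decimation would instead alias it into a
# false solid black or white area.
check = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)
print(preblur_then_decimate(check))  # every value is 0.5
```

Photoshop's Bicubic Sharper skips adequate pre-filtering and adds high-frequency boost on top, which is consistent with the aliasing described above.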

Quote

He also recommends Topaz detail for sharpening. I recently acquired the program on Bart's recommendation, but have not yet mastered its learning curve.

With the 'Detail' plugin it's important to grasp the difference between the regular Detail and the Boost sliders for all detail sizes. The Boost sliders target low-contrast detail (with a correlation to the Detail strength setting), while the regular sliders target higher-contrast detail of the specific size one is modifying (enhancing or reducing). Since micro-detail is by definition lower in contrast, because it has a lower MTF, the Boost slider can compensate for that a bit as well. One has to avoid smooth surfaces such as defocused areas or sky, and Detail 3 has masking capabilities to achieve that if one is working from e.g. Lightroom, without the layer functionality of Photoshop.

As for output sharpening, this video offers a reasonably good workflow explanation (using the older version of Detail; the newer version offers more tools, such as masking, thus reducing the need for Photoshop), although I'd also pay more attention to the "Deblur" panel, because it can mitigate some of the up-sampling blur. Good up-sampling generally should try to avoid adding big halos, and therefore produces low detail contrast, which can be compensated for by deconvolution. This video (again of the older, more limited version) does a good job of explaining the basic controls. Some of those controls use an innovative approach to changing luminosity (contrast), including the luminosity component of (complementary) colors.

Detail 3 also allows one to specifically address shadows and/or highlights with settings different from the overall settings. It's important to note that the "Overall", "Shadow", and "Highlight" selections allow individual settings that will be combined! Shadows are usually dark because they receive less light, and that light is usually more diffuse/ambient than directional. If we have important detail in the shadows, but we want to keep them dark, we can use the shadow set of detail controls and regain some more detail there. That looks much more natural than adding too much fill light to accentuate subject structure in the shadows. Likewise, we may want to specifically target light sky structures with the highlight set of controls.

Quote

Does anyone know what downsizing algorithm would be used in LR?

I'm not sure, but it's quite decent. It's certainly better than regular Bi-cubic, and it produces much less aliasing than most filters/algorithms.