AuthorTopic: Most favourable up-ressing percentage (Read 38164 times)

Not the best Subject heading to describe my question. I'm looking at this topic from a layman's perspective, so my question may seem somewhat naive, however...

Disregarding the printing stage for a moment.

I have read over the various algorithms associated with up-ressing and am wondering if there is any difference in quality between up-ressing an image by whole numbers, e.g. 2x, 3x, 4x, as opposed to fractional numbers, e.g. 1.5x, 2.3x or any other fractional amount.

I realize the various algorithms are probably designed to arrive at the best product, regardless, but it seems to me that whole number enlarging would/should have some advantage as it relates to control of artifacts and the quality of reproduction.

As an example, and I don't know if there is an algorithm that already does this (nearest neighbour?), suppose one took an image and up-ressed it 2x in both width and height (so 400% in area). Take the first pixel (if width runs a - z and height runs 1 - 10), which would be a1: when up-ressed, that pixel would now occupy a1, a2, b1 and b2; the next original pixel, b1, would now occupy c1, c2, d1 and d2, and on it goes. As each original pixel is a single tone/hue, it seems to make sense that replicating this exact tone/hue in 4 pixels would result in a larger image with the exact tone/hue and, perhaps even more importantly, the sharpness of the original (so if perfectly sharpened to begin with, no sharpening would be required after up-ressing). Logic (at least in my simple mind) suggests this method would not introduce artifacts, nor would the tones/colours of the new image need to be "invented". The same could be done at 3x or 4x, etc.
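For what it's worth, the whole-number replication described above is exactly what nearest-neighbour interpolation does at an integer factor. A minimal sketch in Python/NumPy (the function name `replicate` is purely illustrative):

```python
import numpy as np

def replicate(img, factor):
    """Integer up-ressing by pure pixel replication: every source pixel
    becomes a factor x factor block of identical pixels, so no tone or hue
    is ever invented (nearest-neighbour at an integer factor)."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

src = np.array([[10, 20],
                [30, 40]], dtype=np.uint8)
big = replicate(src, 2)
print(big)  # each original value now fills its own 2x2 block
```

Exactly the original tones survive, which is why the result looks blocky rather than artifact-free: the "new" pixels are correct in tone but wrong in position.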

I realize restricting up-ressing to certain sizes would be impractical in some applications.

And I'm sure someone will provide an answer that will cause me to go "Doh, of course, why didn't I think of that?"

Quote

Not the best Subject heading to describe my question. I'm looking at this topic from a layman's perspective, so my question may seem somewhat naive, however...

Disregarding the printing stage for a moment.

I have read over the various algorithms associated with up-ressing and am wondering if there is any difference in quality between up-ressing an image by whole numbers, e.g. 2x, 3x, 4x, as opposed to fractional numbers, e.g. 1.5x, 2.3x or any other fractional amount.

Hi Marv,

Fundamentally, interpolation is the process of using known data to estimate values at unknown locations. There are several methods in common use to make those estimates. Most of them preserve the original values wherever the new sample positions happen to coincide with the original locations. The real issue becomes: how credible are the estimates at the unknown locations?
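A tiny Python/NumPy illustration of that point, using 1D linear interpolation only because it is the simplest case: new positions that coincide with the original samples keep their exact values, while everything in between is an estimate.

```python
import numpy as np

# Known samples at integer positions 0..4.
known_x = np.arange(5)
known_y = np.array([0.0, 10.0, 5.0, 8.0, 2.0])

# Resample at 2x: half the new positions land exactly on the originals.
new_x = np.arange(0, 4.5, 0.5)
new_y = np.interp(new_x, known_x, known_y)

print(new_y[::2])   # the original samples survive unchanged
print(new_y[1::2])  # the in-between values are estimates (here, midpoints)
```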

Quote

I realize the various algorithms are probably designed to arrive at the best product, regardless, but it seems to me that whole number enlarging would/should have some advantage as it relates to control of artifacts and the quality of reproduction.

Unfortunately, in most natural images, that is not the case. As noted, the values at the original locations will usually be preserved as closely as possible, but in the case of exact integer magnification factors there is still zero information available in between the original locations.

So where does that new information come from, then? Well, that's where the pixels surrounding the new location come into play. Have a look at the attached example cases of a single line fragment where one pixel needs to be inserted; the estimates can give hugely different results (e.g. from 40 to 200), depending (especially) on the surrounding pixel values that are not immediate neighbors.

The simplest interpolation just copies the nearest known pixel value: Nearest Neighbor interpolation. That will not invent new values, it only uses the ones already known for sure, but those are probably quite wrong for the new location. It does indeed cause horrible edge detail, stair-stepping and other blocking patterns.

A somewhat more accurate method is bi-linear interpolation (like case 1 in the example), which takes a distance-weighted average of the 4 closest (in 2D space) surrounding pixel values. While that produces better results and fewer (edge) artifacts, it is also very soft and low in contrast.
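A minimal sketch of that 2D weighted average in Python/NumPy (the helper name `bilinear_sample` is illustrative, not any particular library's API):

```python
import numpy as np

def bilinear_sample(img, y, x):
    """Estimate the value at fractional position (y, x) as a distance-
    weighted average of the 4 surrounding pixels."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    dy, dx = y - y0, x - x0
    p = img[y0:y0 + 2, x0:x0 + 2].astype(float)
    return ((1 - dy) * (1 - dx) * p[0, 0] + (1 - dy) * dx * p[0, 1] +
            dy * (1 - dx) * p[1, 0] + dy * dx * p[1, 1])

img = np.array([[0.0, 100.0],
                [100.0, 200.0]])
# Halfway between all four pixels: a plain average, soft and never exceeding
# the neighbouring values (hence the low contrast mentioned above).
print(bilinear_sample(img, 0.5, 0.5))
```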

A much better interpolation method takes a weighted average of the 16 nearest pixel values with a Bicubic interpolation algorithm, which makes a smoother, non-linear transition between the different values.
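For the curious, here is a sketch of the 1D cubic convolution kernel that separable bicubic methods are typically built from (assuming the common Keys kernel with a = -0.5, i.e. Catmull-Rom; actual implementations vary):

```python
def cubic_kernel(t, a=-0.5):
    """Keys cubic convolution kernel (a = -0.5 is the common Catmull-Rom
    choice). Separable bicubic applies it along x and y, so a 4 x 4 = 16
    pixel neighbourhood contributes to each new value."""
    t = abs(t)
    if t < 1:
        return (a + 2) * t ** 3 - (a + 3) * t ** 2 + 1
    if t < 2:
        return a * t ** 3 - 5 * a * t ** 2 + 8 * a * t - 4 * a
    return 0.0

# Weights of the 4 nearest source samples for a new sample halfway between
# two of them: they sum to 1, and the outer two are negative, which is what
# gives cubic methods their extra edge contrast (and their ringing).
w = [cubic_kernel(d) for d in (-1.5, -0.5, 0.5, 1.5)]
print(w, sum(w))
```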

As more and more surrounding pixel values are taken into account, and their contributions are weighted more elaborately, the original contrast and sharpness are best preserved at the new locations while edges remain smooth. It also becomes less and less relevant to use only integer magnification factors. However, the price to be paid is that the guess can also be wrong enough to produce noticeable artifacts, especially around abrupt jumps in contrast amidst otherwise smooth surroundings (ringing artifacts). Clever algorithms adapt the interpolation dynamically to the local contrast situation.

Here is a more technical write-up about some commonly used algorithms, with some examples for comparison. These algorithms are universal enough that they don't really care about integer magnification factors or those with decimal fractions.

If you are using scaling that can be classified as "linear (multirate) filtering based on Nyquistian (re)sampling", there are some consequences of using small integer factors. This category includes most "traditional" resampling.

A practical, fast image scaler may be more accurate at certain "phases" than others. You never know except by trial and error, but chances are that it will be closer to the idealized filter response for a factor of 2.0 than for a factor of 2.01. In a good (IQ-focused) design this should not matter much.

All scaling has artifacts. If your input contains highly regular (typically aliased) high-frequency material (fences, roof tiles, bird feathers, ...), these artifacts may give rise to patterns that are quite visible. If you keep your resampling factor to an integer, these patterns will tend to have a small period, which may be less annoying.
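A small Python sketch of why that is, assuming the resampler's behaviour depends on the sub-pixel phase of each output sample: for a rational factor p/q (in lowest terms), the phase pattern, and hence any phase-dependent artifact, repeats every p output pixels. The helper name `phase_period` is illustrative.

```python
from fractions import Fraction

def phase_period(scale):
    """Output pixel n samples source position n / scale. For scale = p/q in
    lowest terms, the sub-pixel phase frac(n * q / p) repeats every p output
    pixels, so phase-dependent resampling artifacts repeat with that period."""
    return Fraction(scale).limit_denominator(10000).numerator

print(phase_period(2.0))   # integer factor: period 2, a barely visible pattern
print(phase_period(1.5))   # 3/2: period 3
print(phase_period(2.01))  # 201/100: period 201, a long, visible beat
```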

For most (properly sampled) real images, this should probably be low on your list of worries.

All scaling has artifacts. If your input contains highly regular (typically aliased) high-frequency material (fences, roof tiles, bird feathers, ...), these artifacts may give rise to patterns that are quite visible. If you keep your resampling factor to an integer, these patterns will tend to have a small period, which may be less annoying.

I agree, it's been my experience that upsampling (using your tool of choice) is better done in amounts like 2X, 4X because intermediate %'s can cause artifacts as hjulenissen hints.

The key is to upsample to above the final size/res you may need, apply sharpening and other tricks (like adding noise/grain) and only down sample to get to your final output PPI (360/720 Epson, 300/600 Canon & HP).
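The arithmetic behind that workflow, as a quick Python sketch (the print width and camera resolution below are made-up numbers; 360 PPI is the Epson native input resolution mentioned above):

```python
def pixels_needed(print_inches, printer_ppi):
    """Pixels per side required to feed the printer at its native input PPI."""
    return round(print_inches * printer_ppi)

# Made-up example: a 24 inch wide print for an Epson driver at 360 PPI input.
target = pixels_needed(24, 360)   # width the file must end up at
source = 6000                     # hypothetical capture width in pixels
upsample_factor = target / source
print(target, round(upsample_factor, 2))
# Per the workflow above: upsample somewhat beyond 'target', sharpen and add
# grain at that size, then downsample to exactly 'target' for output.
```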

Here's an older article about upsampling that is still useful: The Art Of The Up-Res. I've had no problem upsampling 200%, and if the original is photographically excellent, 400%. Higher upsampling can be done with exotic tools and techniques, and success is largely dictated by the final output. A glossy paper can render enormous detail, so large upsampling can suck. Textured watercolor or canvas can hide a lot of resampling issues.

Correct (unless one wants to settle for a very low resolution version): in practice one attempts to create an interpolated result with values at the new locations that can exceed the local values, unlike a bi-linear result, which has very low contrast. However, some algorithms produce better results than others in most cases (including non-integer magnification factors) of properly processed continuous-tone images.

Quote

If your input contains highly regular (typically aliased) high-frequency material (fences, roof tiles, bird feathers, ...), these artifacts may give rise to patterns that are quite visible. If you keep your resampling factor to an integer, these patterns will tend to have a small period, which may be less annoying.

For up-sampling, that depends on the original image content, which should not be sharpened enough to be aliased (IOW proper capture sharpening, and not more than that). It also depends on the resampling algorithm, but that can be tested quite easily.

To test the quality of the resampling algorithm, one can use an image filled with uniform (white) noise, and perform a Fast Fourier Transform (FFT) on it after resampling. That will reveal potential issues with the algorithm (although it requires a bit of knowledge about interpretation of Fourier transforms, i.e. lossless spatial-domain to frequency-domain transforms).
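A self-contained Python/NumPy sketch of that test, using a simple separable linear upsampler as a stand-in for whatever resampler you want to evaluate (`upsample2x_linear` is an illustrative helper, not a library call):

```python
import numpy as np

def upsample2x_linear(img):
    """Separable 2x linear interpolation, standing in for the resampler
    under test. Swap in any other algorithm here to compare."""
    n = img.shape[0]
    x_new = np.linspace(0, n - 1, 2 * n)
    tmp = np.array([np.interp(x_new, np.arange(n), row) for row in img])
    return np.array([np.interp(x_new, np.arange(n), col) for col in tmp.T]).T

rng = np.random.default_rng(0)
noise = rng.uniform(0.0, 1.0, (128, 128))  # spectrally flat test image

up = upsample2x_linear(noise)
log_mag = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(up))))

# After 2x upsampling, the source Nyquist sits at half the new Nyquist.
# A reasonable resampler leaves much less energy outside that circle.
cy, cx = log_mag.shape[0] // 2, log_mag.shape[1] // 2
yy, xx = np.ogrid[:log_mag.shape[0], :log_mag.shape[1]]
r = np.hypot(yy - cy, xx - cx)
inside = log_mag[r < cy / 2].mean()
outside = log_mag[r > cy / 2].mean()
print(inside > outside)
```

Viewing `log_mag` as an image reproduces the kind of plots shown in this thread: the shape of the energy fall-off around the Nyquist circle characterises the filter.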

Here, for example, are the results of 3 different algorithms/filters at 200% magnification; the yellow circle indicates the Nyquist frequency (the default ImageJ result displays the logarithm of the transform, to amplify the visibility of weaker signal levels towards the corners):

The Bicubic Smoother algorithm at the left performs its interpolation in the horizontal and vertical directions (possibly in 2 separate passes, for speed reasons). That results in a higher diagonal resolution; the central noisy area extends closer to the Nyquist limit along the diagonals. It is also clear that this approach creates (mostly) horizontal and vertical (overshoot/ringing/aliasing) artifacts, and limits the horizontal and vertical resolution to some 62% of the new dimensions (so we have gained a little horizontal/vertical resolution, but also some artifacts).

The middle FFT is of a 200% up-sampled noise image from ImageMagick, using the Elliptical Weighted Average (EWA) resampling method (-distort Resize). EWA resampling results in more balanced resolution in all directions and angles. The windowed Lanczos filter used here produces the same horizontal and vertical resolution but less diagonal resolution, and also less aliasing (the signal levels outside the Nyquist circle are lower), yet it is not free of (ringing) artifacts in the horizontal and vertical dimensions either.

The example at the right is also an EWA resampling in ImageMagick, but this time the Robidoux windowing filter was used, which produces much higher horizontal and vertical resolution (equal to the diagonal resolution of an orthogonal resampling method), but also some more ringing and some aliasing in all directions, although mostly in horizontal and vertical directions.

Just for fun, I also tested Benvista's Photozoom Pro, although its main forte is adding new edge-detail resolution. Here is the result of a 200% up-sample with the S-Spline Max algorithm:

It too shows signs of orthogonal resampling, but no prominent ringing or aliasing artifacts. There is some noise aliasing, but it is spread very evenly in all directions. The actual up-sampled noise looks quite pleasant and well defined. Again, Photozoom's forte is adding edge detail and resolution, but that doesn't show in an image of pure random noise.

The key is to upsample to above the final size/res you may need, apply sharpening and other tricks (like adding noise/grain) and only down sample to get to your final output PPI (360/720 Epson, 300/600 Canon & HP).

No scaling after sharpening, right?

I believe that DVDs could have looked a lot better today if they had a reasonable, linear response. Instead, I believe that many were heavily sharpened for the 480/576-line TVs that were once the norm, making it hard to do good resampling for today's 1080p displays.

For up-sampling, that depends on the original image content, which should not be sharpened enough to be aliased (IOW proper capture sharpening, and not more than that). It also depends on the resampling algorithm, but that can be tested quite easily.

Ideally, a scaler should be able to take _any_ image content at, say, 1000x1000 pixels, and blow it up to 1042x1042 while keeping its appearance as close as possible to the original (apart from size, of course).

Applications such as PhotoAcute rely upon "improper sampling" (along with image series that are near duplicates, but with minute random shifts) to produce a single "scaled" output image that is claimed to be 2x - 4x the resolution.

I agree, it's been my experience that upsampling (using your tool of choice) is better done in amounts like 2X, 4X because intermediate %'s can cause artifacts as hjulenissen hints.

Hi Jeff,

That's a bit strange. Even when using the Bicubic Smoother up-sampling algorithm, which is not very good when compared to some others, testing does not reveal a significant difference when factors of 2x or more are used.

Here follows a stacked pyramid of up-sampled images (uniform noise, therefore complex high-contrast amplitudes in random directions), from 400% resampling at the bottom to 200% resampling at the top, in 10% increments.

When you click on the image, it should load an animated GIF version.

As you may see from the 5-pixel-wide borders of the exposed lower images, only the contrast gets a bit lower (because the original detail is spread over more pixels), and the diagonals remain relatively sharper than the horizontal and vertical edges (as was shown earlier in the Fourier transform, due to the Sqrt(2) higher relative diagonal resolution of an orthogonal resampling). That also means these up-sampled images can be sharpened to recover the contrast lost in up-sampling (which adds no new detail, besides potential artifacts).
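A minimal Python/NumPy sketch of the kind of sharpening meant here: a crude 3x3 box-blur unsharp mask, purely illustrative (real tools use Gaussian blurs, radii and thresholds). It restores edge contrast by adding overshoot, but invents no new detail.

```python
import numpy as np

def unsharp(img, amount=1.0):
    """Crude unsharp mask: blur with a 3x3 box filter, then add back
    'amount' times the detail (original minus blur). This recovers local
    contrast lost to upsampling, but cannot create genuinely new detail."""
    h, w = img.shape
    pad = np.pad(img, 1, mode='edge')
    blur = sum(pad[dy:dy + h, dx:dx + w]
               for dy in range(3) for dx in range(3)) / 9.0
    return img + amount * (img - blur)

# A vertical edge: sharpening adds overshoot on both sides of it.
edge = np.array([[0.0] * 4 + [1.0] * 4] * 8)
sharp = unsharp(edge)
print(sharp.min(), sharp.max())  # dips below 0 and overshoots above 1
```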

More importantly, there is no visible indication of suddenly better interpolation quality at the 200% and 400% image sizes.

Once the image is resampled by 200% or more and additional/intermediate pixel positions are added, the original source pixels just get over-sampled at their new locations. The only losses are due to the resampling algorithm (which adds various artifacts), not specifically due to the amount of non-integer up-sampling.

I've also attached 4 samples of an up-sampled (200%, 210%, 220%, and 230%) quadrant of a zoneplate, with original detail from very low at the bottom right corner to 0.5 cycles per pixel (1 cycle per 2 pixels) in the diagonal direction at the top left corner. The resampling was done in ImageMagick with EWA resampling (-distort Resize) and the default Robidoux filter, to allow comparison with your own preferred method.
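For anyone who wants to reproduce this kind of test, a zone-plate quadrant is easy to generate. A Python/NumPy sketch (note the low-frequency corner is at the top left here, whereas the attached samples have it at the bottom right, and the far corner deliberately goes past Nyquist so aliasing shows up):

```python
import numpy as np

def zoneplate(n):
    """One quadrant of a zone plate: the local frequency grows linearly with
    distance from the (0, 0) corner, reaching 0.5 cycles/pixel at distance n.
    Phase pi * k * r^2 gives radial frequency k * r, so k = 0.5 / n."""
    y, x = np.mgrid[0:n, 0:n].astype(float)
    r2 = x ** 2 + y ** 2
    return 0.5 + 0.5 * np.cos(np.pi * r2 * 0.5 / n)

zp = zoneplate(256)
print(zp.shape)  # resample this and look for low-frequency alias rings
```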

If I know I'm going large, I start with my highest-res camera. If available in ACR, I may up-res the image there. If that isn't enough, then I'll interpolate in Photoshop 6.x, usually using "automatic" interpolation. Sometimes, I'll use Perfect Resize. There's a number of different programs and plug-ins out there, most that you can download and try.

Try several different ways and see what works best for you and you'll do fine!

there is no visible indication of suddenly better interpolation quality at the 200% and 400% image sizes.

Here is another way of looking at it: I've plotted the noise power spectra of the up-sampled uniform-noise image. The images were upsampled with the state-of-the-art EWA algorithm, with the default ImageMagick Robidoux filter.

I've scaled the horizontal dimensions in the chart to the 200% upsampled version. All that basically happens is that the lower spatial frequencies become a bit easier to quantify (because there are more pixels to do so with), and there is more micro-detail signal beyond Nyquist (because there are more pixels in the larger images). It is not adding any real detail, just more accurately rendered artifacts (halo, ringing), at a lower amplitude than the actual detail.
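The radial averaging behind such noise-power-spectrum plots can be sketched in a few lines of Python/NumPy (the function name `radial_nps` is illustrative; bin counts and normalisation choices differ between tools):

```python
import numpy as np

def radial_nps(img, nbins=64):
    """Radially averaged noise power spectrum: bin |FFT|^2 by distance from
    the spectrum centre and average each bin over all directions."""
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(f) ** 2
    cy, cx = img.shape[0] // 2, img.shape[1] // 2
    yy, xx = np.ogrid[:img.shape[0], :img.shape[1]]
    r = np.hypot(yy - cy, xx - cx)
    bins = (r / r.max() * (nbins - 1)).astype(int)
    counts = np.bincount(bins.ravel())
    return np.bincount(bins.ravel(), weights=power.ravel()) / counts

rng = np.random.default_rng(1)
flat = radial_nps(rng.uniform(size=(128, 128)))
print(len(flat))  # unresampled uniform noise gives a roughly level curve
```

Running it on an upsampled noise image instead shows the fall-off towards the (old) Nyquist frequency that the plots in this thread illustrate.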

Again, there are no significant benefits for image quality from using only integer resampling factors. Good algorithms are flexible enough to perform quite well under all scenarios.


Cheers, Bart

I'd say that the scaling algorithms of ImageMagick seem to be state-of-the-art when it comes to "traditional" scaling. Most people do scaling within whatever application they do their image-processing work in, and those will often be of a simpler design than those of Mr. Robidoux.


-h

I have no problem with Bart using what he feels is a "state-of-the-art" algorithm to make his example.

While some of the math and examples he uses are at times lost on me, he has shown, in this post and others, that there are significantly better interpolation algorithms than the state-of-the-"past" Bicubic implemented by Adobe.

If the requirement is to "do scaling within whatever application", Perfect Resize and Photozoom Pro provide superior routines and more repeatable, consistent results than can be achieved using "secret incantations" with Bicubic.

While the ImageMagick Robidoux filter might require a "round trip" with a TIFF, I suspect it is quite a bit easier than some of the methods suggested using Bicubic.

The last method Bart has mentioned, and the one I use, is Qimage Ultimate. Which, while outside the image processor used, is by far the simplest (once learned)... but that is a subject for a different thread.

I'd say that the scaling algorithms of ImageMagick seem to be state-of-the-art when it comes to "traditional" scaling. Most people do scaling within whatever application they do their image-processing work in, and those will often be of a simpler design than those of Mr. Robidoux.

Hi,

While that may be the case (the "-distort Resize" method has actually been available for several years, but is now simpler to execute), it makes one wonder why e.g. Photoshop doesn't offer more state-of-the-art resampling...? A number of the newer features, e.g. lens distortion correction and perspective/keystone correction, are now based on a mediocre resampling foundation.

Besides all that, the insensitivity to fractional versus integer magnification demonstrated above even applies to Bicubic Smoother, so it is not only a feature of the better resampling algorithms. To demonstrate that, I've also done the Noise Power Spectrum analysis on the same image after resampling it with Bicubic Smoother. Again, there are no sudden jumps in quality at 200% or 400% compared to the intermediate fractions; the higher spatial frequencies gradually lose absolute per-pixel resolution because there are more pixels for the same source detail:

As indicated, the pattern of detail loss is similar for all magnification factors of 200% or more. There is a difference with the earlier-mentioned EWA circular sampling, which produces symmetrical resolution in all directions, because the Bicubic algorithms create a higher diagonal resolution than in the horizontal/vertical directions, and more ringing and aliasing in the horizontal and vertical directions (as shown earlier in the Fourier transforms). Therefore the spectra shown for Bicubic resampling are an average over all directions, and will fluctuate with direction.

For those wondering what the Noise spectrum of the original uniform noise image that was used for the resampling looks like, it's basically a horizontal line (all spatial frequencies, covering the full spectrum, are present without attenuation or boosts).

Here is another way of looking at it, I've plotted the noise power spectra of the up-sampled uniform noise image. The images were upsampled with the state of the art EWA algorithm, with the default ImageMagick Robidoux filter....

Bart: Although the Robidoux filter, like all well-designed Elliptical Weighted Averaging methods, is isotropic when resizing while preserving the aspect ratio, it is applied to filter values laid out on a grid, which breaks the rotational symmetry. Do I understand correctly that the Robidoux noise power spectra plots were not the result of averaging over all directions, as you did for the Bicubic Smoother ones? I understand that the difference may be small, but if possible I'd like to compare apples to apples.