Hadn't heard that before - and will certainly give it a try to see what difference (if any) it makes. (I assume you meant dpi, not ppi, as the ppi of a file "fed to the printer" makes no difference whatsoever. It is the actual dimensions of the file that might make a difference.)

Uh...no, that's not at all correct...inkjet printers report a specific resolution in DPI to the OS-level print pipeline. For Epson that's normally 360 DPI, and for HP/Canon it's 300 DPI. If you send an image whose PPI differs from the reported DPI, guess what? The print pipeline will resample your image to the reported resolution. While I can't confirm it, the thinking at this point is that the resampling is either nearest neighbor (most likely) or bilinear. So, do you really want your images resampled by the OS print pipeline that way?

No, you really, really DO want to send the correct resolution to the printer. For Epson it's either 360 or 720 PPI (720 with Finest Detail selected) and 300/600 PPI for Canon/HP (depending on the printer resolution setting).
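To make that concrete, a minimal sketch of the arithmetic for matching the driver-reported resolution (the function name is mine, not any vendor API):

```python
# Sketch: given a target print size and the resolution the driver reports
# (360/720 PPI for Epson, 300/600 for Canon/HP, per the post above),
# compute the pixel dimensions the file should have so the OS print
# pipeline has no reason to resample it.

def pixels_for_print(width_in, height_in, driver_ppi):
    """Pixel dimensions that match the driver-reported resolution exactly."""
    return round(width_in * driver_ppi), round(height_in * driver_ppi)

# A 10 x 8 inch print on an Epson reporting 360 PPI:
print(pixels_for_print(10, 8, 360))   # (3600, 2880)
```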

Given that the linear resolution difference between 12 and 24MP sensors is only about 40% (a factor of √2), it doesn't take too much post-processing to even images up, I reckon. I once did a shoot-out between my 3.4MP Sigma and a 12MP Micro Four Thirds, using QuickMTF to measure sharpness and MTF. Pixel pitch was 9.12 µm versus 4.3 µm. On a per-pixel basis, the Sigma was a clear winner. However, when the 12MP images were downsized to 3.4MP, it was hard to tell the difference.

Yes, that does appear to be the popular view. However, I'm not certain how it is applicable when the desired output is generally less than 2K wide. I would have thought that any re-sampling should match the desired output. Up for large prints, perhaps both up and down for A4, and certainly down for web shots. In other words, are you sure that up-sampling applies to all image comparisons?

Quote

Downsampling loses resolution but keeps some contrast, and it also introduces aliasing artifacts; upsizing introduces fewer artifacts.

I believe that downsampling does not always introduce aliasing artifacts. Take, for example, an image having a sinusoidal pattern at less than half of the sensor's Nyquist frequency. Downsizing 50% would not produce any artifacts at all in that admittedly theoretical case. But the same statement would be true of a cloudscape, would it not?
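The sinusoid argument is easy to check numerically. A small sketch (NumPy; the frequency is my choice, picked so the tone lands exactly on an FFT bin):

```python
import numpy as np

# A tone below half the Nyquist frequency survives 2x decimation without
# aliasing: the FFT peak of the decimated signal sits at exactly twice
# the original normalized frequency, with no spurious components.
n = 1024
f = 0.125                    # cycles/sample, well below 0.25
x = np.sin(2 * np.pi * f * np.arange(n))
y = x[::2]                   # naive 2x downsample (no pre-filter needed here)

spec = np.abs(np.fft.rfft(y))
peak = np.argmax(spec)
print(peak / len(y))         # 0.25 cycles/sample: the tone, and nothing else
```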

What artifacts are introduced by up-sampling? Apart from blur, that is ;-)

Quote

If you compare large pixels with small pixels at actual pixels, the large pixels also win. Just don't forget that quantity has a quality of its own.

If you don't print, you don't need many megapixels. Full HD is about 2 megapixels.

What I used to do was choose a print dimension, say A2 or 70x100 cm, at a given PPI (my lab suggests 200 PPI for 70x100 cm) and resize both comparison images to that size. That is the data that would go to the printer.
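A minimal sketch of that arithmetic (constant and function names are mine), using the lab's suggested 200 PPI for 70x100 cm:

```python
# Sketch of the comparison workflow: resize both candidate images to the
# lab's suggested output resolution and compare that data, since that is
# what actually goes to the printer.
CM_PER_INCH = 2.54

def pixels_for_cm(width_cm, height_cm, ppi):
    """Pixel dimensions for a print of the given size in cm at the given PPI."""
    return (round(width_cm / CM_PER_INCH * ppi),
            round(height_cm / CM_PER_INCH * ppi))

print(pixels_for_cm(70, 100, 200))   # (5512, 7874)
```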

Quote

I believe that downsampling does not always introduce aliasing artifacts.

In the general, useful cases, where images have wide bandwidth (spatial detail) and filters have less than infinite stop-band attenuation, you will tend to get some aliasing. The visibility of those artifacts is a matter of debate, of course.

Quote

What artifacts are introduced by up-sampling? Apart from blur, that is ;-)

There are many kinds of up-sampling algorithms. Linear scaling is best understood, and bicubic is an important subset of linear scaling algorithms. By varying two parameters in the bicubic formula, you get the classic trade-off presented by Mitchell in 1988.
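For reference, a sketch of that two-parameter bicubic kernel (the Mitchell-Netravali family; the parameter pairs in the comments are the commonly cited special cases):

```python
# The (B, C) bicubic family from Mitchell & Netravali (1988).
# B = C = 1/3 is Mitchell's recommended compromise;
# B = 1, C = 0 is the cubic B-spline (blurry, no ringing);
# B = 0, C = 0.5 is Catmull-Rom (sharper, some ringing).

def mitchell(x, b=1/3, c=1/3):
    """Piecewise-cubic reconstruction kernel, supported on |x| < 2."""
    x = abs(x)
    if x < 1:
        return ((12 - 9*b - 6*c) * x**3
                + (-18 + 12*b + 6*c) * x**2
                + (6 - 2*b)) / 6
    if x < 2:
        return ((-b - 6*c) * x**3 + (6*b + 30*c) * x**2
                + (-12*b - 48*c) * x + (8*b + 24*c)) / 6
    return 0.0

print(mitchell(0, b=0, c=0.5))   # 1.0 -- Catmull-Rom interpolates samples exactly
```

Sweeping B toward 1 trades ringing for blur; sweeping C up does the reverse, which is exactly the trade-off the figure in Mitchell's paper maps out.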

The optimal scaling algorithm may depend on the kind of input (circuit diagrams? pixel art? noisy cell-phone shots? OLPF-less MFDB?) and the viewing method (RGB-triplet LCD display? inkjet printer?). Some might expect the scaler to do pre/post sharpening/smoothing to compensate for input/output deficiencies, but I think that such things ought to be done outside of the scaler.

Once you go into highly non-linear, signal-adaptive up-scaling, "everything" is possible. A smart scaler might guess that a particular object in an image was a pine tree and substitute a high-resolution pine tree from a local database for it. This might work if the guess is right (and a suitable replacement is found). However, if the object actually was something else, the artifact could be perceived as bad.

I think viewing distance is to some extent irrelevant, at least for fine art. To me a high-quality fine art print should "look good" at any distance, even at nosing distance; it's a sign of a quality product. "Looking good" is a matter of taste, but to me any visible jaggies, aliasing, oversharpening, painterly look from fractal upsizing, or other digital artifacts are *much* harder to accept than, say, visible film grain or a generally soft image.

The best way to make a print look good at nosing distance is to have so much resolution in the original that you don't need to drop below 200 ppi or so, otherwise upsizing artifacts may become visible when you nose the print. I think this is one of the main advantages of having a high resolution digital back, that you can print (fairly) large without having to worry about digital scaling artifacts becoming visible up close.

I actually prefer to sacrifice some sharpness at the "correct viewing distance" (which may otherwise require over-sharpening halos) in order to make the image look good up close.

PPI = pixels per inch. It is what you feed the printer with. Each printer, based on driver settings, will expect a specific PPI. If it does not get that, it will interpolate whatever it is fed to get what it wants.

Not really.

What matters is the actual dimensions of the file in pixels - e.g. 6400 px x 4800 px. That the file may have a ppi tag attached to it only determines how it will be displayed on a monitor. As far as the printer is concerned, it should treat the file identically irrespective of any ppi setting. It is only if the image dimensions (i.e. the total number of pixels) are inadequate that it will invoke interpolation. However, as you correctly suggest, how the printer prints it will be determined by the dpi setting applied by the printer.

I am interested in "pixels per viewing distance" as a measure of what our visual system detects.

It may not be easy to pin it to a single number because it depends on (assuming 'optimal' viewing conditions) contrast, resolution and the individual's eyes (and degree of optical correction).

For a meaningful measurement (at a fixed contrast, to eliminate one variable), one could use a resolution target that allows quantification at multiple scales of magnification - for example a print of a target with many resolution levels, such as a 'Siemens' star target. When printed as instructed, at 600 or 720 PPI to a 130mm square output size, the resolution at the perimeter of the 'star' is 144 / (Pi x 100mm) = 0.458 cycles/mm (~0.916 lines or pixels/mm, or 0.916 x 25.4 = 23.27 PPI).

When we increase our viewing distance to the target, there comes a point where the outer perimeter is no longer resolved by our eyes. For me that point is at approx. 4.50 metres (4500mm) viewing distance, or 15x the normal reading distance of 300mm. Multiplying 15 x 23.27 PPI gives a resolution of 349 PPI at reading distance. This is for regular visual resolution, not Vernier acuity. Higher-contrast features would give higher resolution and thus require higher PPI, and lower-contrast detail obviously gets away with lower PPI.

That allows one to determine the minimum required PPI at any distance (assuming our eyes have the same angular resolution at various distances). The additional printer resolution capability, 600 or 720 PPI, is used to allow Vernier acuity, higher-contrast detail, and sharpening.
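A sketch of that proportionality, with the baseline numbers taken from the measurement above (function and constant names are mine):

```python
# If the star's 23.27 PPI perimeter detail just vanishes at 4.5 m, the
# minimum required PPI at any other distance follows by simple
# proportionality (same angular resolution assumed at all distances).
BASE_PPI, BASE_DISTANCE_MM = 23.27, 4500

def min_ppi(viewing_distance_mm):
    """Minimum PPI needed so regular acuity cannot resolve the pixels."""
    return BASE_PPI * BASE_DISTANCE_MM / viewing_distance_mm

print(round(min_ppi(300)))   # 349 at 300 mm reading distance
print(round(min_ppi(810)))   # 129 at 81 cm
```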

Quote

Hopefully one is close to the effective full image width of 81cm, since my observations in galleries suggest that this is a common range for the viewing of large prints. (Paintings, by the way, are typically viewed from further away - further than the "normal" distance of the image diagonal. But most paintings are very low-res by photographic standards!)

At 81cm the above would suggest 4500/810 = 5.56, which multiplied by 23.27 PPI gives 129 PPI resolution, and I'd use double that to allow for Vernier acuity / higher-contrast detail / sharpening, so 258 PPI as a minimum for that viewing distance. Closer viewing will produce a sense of lacking resolution, for my eyes anyway.

Printing that amount of minimum detail would of course require up-sampling to the native printer resolution, to avoid suboptimal interpolation by the printer driver, and to allow artifact-free sharpening at that output resolution for the specific print material.

Thanks, yes, I have used that target before. It does not go well with too much beer.

I notice it does have a little bit of moiré built in (due to quantization of the drawing algorithm output?)

Hi Ted,

There should be no moiré visible, but the image must be viewed at 100% zoom, and is sensitive to gamma variation caused by viewing angle of the LCD display. When you move your eyes to the right viewing angle (only possible for one corner or edge at a time), there should only be sinusoidal waves with increasing spatial frequency towards the corners (where it's 2 pixels/cycle, i.e. diagonal Nyquist frequency).

Before resampling it must be (converted to) RGB mode, and 16-bits per channel helps. It's intended as a stress test for downsampling. An interesting upsampling test is that of a single pixel dot (e.g. RGB[200,200,200] on an RGB[50,50,50] background, or vice versa).
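A sketch of generating that single-pixel dot target (image size and layout are my choice; the pixel values are the ones given above):

```python
import numpy as np

# Upsampling stress test: one bright pixel on a dark background.
# Upsample this and inspect the ringing/blur footprint of whatever
# interpolation kernel the resampler uses.
def dot_target(size=64, bg=50, fg=200):
    """RGB image, 16 bits per channel, with a single fg pixel at the centre."""
    img = np.full((size, size, 3), bg, dtype=np.uint16)
    img[size // 2, size // 2] = fg   # broadcasts to all three channels
    return img

img = dot_target()
print(img[32, 32], img[0, 0])   # centre pixel is 200,200,200; background 50,50,50
```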

Quote

What matters is the actual dimensions of the file in pixels - e.g. 6400 px x 4800 px. That the file may have a ppi tag attached to it only determines how it will be displayed on a monitor. As far as the printer is concerned, it should treat the file identically irrespective of any ppi setting. It is only if the image dimensions (i.e. the total number of pixels) are inadequate that it will invoke interpolation. However, as you correctly suggest, how the printer prints it will be determined by the dpi setting applied by the printer.

Actually, no, that guy isn't right. First off, it's correct that the essential resolution of a digital image is expressed as xxxx pixels by xxxx pixels. However, to actually make a print, you have to define its dimensions as xx units by xx units at a given resolution. An image that is 2000 pixels by 3000 pixels can make a print that is 200 inches by 300 inches at 10 PPI, 6.66 inches by 10 inches at 300 PPI, or 2.77 inches by 4.16 inches at 720 PPI. So it's pretty clear that unit-by-unit at a specific resolution DOES matter, right?
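A quick sketch of that pixels/PPI arithmetic, reproducing the three print sizes above:

```python
# Same 2000 x 3000 px file, very different print sizes depending on the
# PPI it is assigned: print size (inches) = pixels / PPI.
def print_size_inches(px_w, px_h, ppi):
    return px_w / ppi, px_h / ppi

for ppi in (10, 300, 720):
    w, h = print_size_inches(2000, 3000, ppi)
    print(f"{ppi} PPI -> {w:.2f} x {h:.2f} in")
```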

Then, in terms of inkjet printers, the driver reports the printer's resolution as a specific number of dots per inch (DPI): normally 360 DPI for Epson and 300 DPI for Canon/HP. If you send an image with a PPI that differs from the reported DPI, the print pipeline resamples the image to get the correct DPI into the printer. From there the printer takes the supplied data (resampled if need be) and does a stochastic error-diffusion dither to arrive at the printer's native resolution, which is also quoted in DPI but should really be called droplets per inch, because each original dot is broken up into droplets. So an Epson expects a 360 PPI incoming resolution, and the driver dithers the dots into multiple droplets based on the resolution setting in the printer driver (such as 1440 x 720 or 2880 x 1440).

Note: The Epson CAN report 720 DPI to the print pipeline and the Canon/HP can report 600 DPI depending on the driver settings.

So, in point of fact, the image PPI is important and should match the reported printer DPI if you don't want the image resampled by the print pipeline, because the resampling done by the pipeline sucks. The current assessment is that the resampling done by the Mac and Windows pipelines is Nearest Neighbor, which really sucks but is used for speed. You would be far better off either setting the image PPI to a specific resolution such as 360/720 or 300/600 and letting the print dimensions float, or resampling the image so the print size is as wanted: upsample to 360 if the original native resolution is under that, or up to 720 if the native resolution is above 360 but below 720 PPI.
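A sketch of that rule of thumb (Epson's 360/720 numbers assumed; the simple thresholding is my reading of the advice above):

```python
# Pick the target PPI to resample to yourself, so the OS print pipeline
# never has to: 360 if the native resolution at the chosen print size is
# at or below 360, otherwise 720 (Finest Detail territory).
def target_ppi(native_ppi, base=360, fine=720):
    return base if native_ppi <= base else fine

print(target_ppi(240), target_ppi(500), target_ppi(900))   # 360 720 720
```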

That sounds like a good starting point, and fits fairly well with the guideline of 12MP (or as I prefer, about 4000 pixels in the long dimension) for viewing at a distance comparable to the image diagonal.

Quote

... and I'd use double that to allow for Vernier acuity / higher-contrast detail / sharpening, so 258 PPI as a minimum for that viewing distance.

I am not sure that the stricter Vernier acuity standard is relevant to much (non-technical) photography; if anything the hard black-to-white transitions of those test patterns are sharper and more easily resolvable by our eyes than almost any edges in photographs of real-world objects. If I were interested in photographing from a drone at 15,000ft and then reading license plates with low-contrast color schemes, my standards would be higher.

There are plenty of "real-world" tests of video resolution: stick an actual LCD/plasma display in front of one or more persons and lowpass-filter/downsample the material until the viewer can just distinguish it from some reference. By choosing a sufficiently high screen resolution, small display size and large viewing distance, one might assume that the limitations of the display resolution become irrelevant, and only the variable degradation matters.

Such displays can have higher contrast than paper, but the spatial characteristic is quite different. I believe that "1 minute of arc" pretty well sums the experiments up.

The BBC did some tests assuming the 2.7 meter mean/median (?) distance from the tv in British homes, assuming that the distance to the tv would not change with increasing (economic) tv sizes. They concluded that 1280x720 pixels transmitted would be sufficient up to something like 50" diagonal, and that a little sharpening could compensate for lower resolution. Other tests tend to assume that the viewing distance is 3x or 4x the height of the display.

I saw a very interesting demo at the MIT tech museum, where they embedded two images into one. At close distance one might see e.g. a woman; at larger distances one might see a landscape. The impressive part was how few artifacts were present. I believe that they exploited the fact that our contrast sensitivity function is "peaky" at some intermediate frequency and is reduced for low and high frequencies. I assume that tone mapping and the "zoned LED backlight LCD" technology exploit similar traits.

To muddy the waters a bit, I'll point out that there are a material number of people whose acuity significantly exceeds what we call normal. Look at the introduction of this paper. We don't have the same level of statistics on these "super see-ers" that we do on people with "normal" vision or worse, because screening tests typically stop at 20/20 (I've seen reports of people who could see at 20/8), and if they go on after that, the steps get pretty big.

Here's a paper that attempts to estimate a probability density function (histogram, to photographers) for visual acuity.

My take-home lesson from these studies is that just because I can't see it doesn't mean that somebody else can't, and, if I am able, I tend to err on the side of too much data. I simulate what someone with really good eyes can see by getting close.