Here is the f/16 raw I uploaded. At the bottom of the page there is a blue link with the actual file. Sendspace is flogging their toolbar; they try to get you to click on their toolbar or ad links. MAKE SURE TO HOVER YOUR MOUSE OVER THE LINK TO CONFIRM IT IS THE NEF FILE, NOT THEIR ADS.

I did use your pattern with the edge blocks cropped off. This let me print it bigger on a 600 dpi laser printer on an 8.5"x11" sheet. I up-rezzed it in PS to generate the required number of pixels.

Hi Arthur,

Frankly, I feel a bit uneasy when people start resampling a carefully designed pattern, and then get unexpected results ...

I have never, until now, seen an f/16 image that defies physics like your example. Even RawTherapee, which is very good (I like the Amaze algorithm very much), can't make a silk purse (aliasing, i.e. detail beyond Nyquist) out of a sow's ear (no signal, only noise, due to diffraction).

Quote

In the top left corner of the screenshot you can see the full image, where the piece of paper is a small rectangle. Anyone can calculate the range from that with the focal length. I would estimate from memory 70 to 80 ft away. I am far enough away that the whole center is blurred out. I am sure that is not an issue. I can send the raws to you or someone at the site for people to convert with their own preferred software. The shots are of no artistic merit, so copyright is not an issue! I can also convert a raw image you send me to see if it is the software.

I suspect it's rather something with the target than the Raw conversion process. A laser printer is also not the best device to produce continuous tone images with subtle gradients. The slanted edges might be usable though, but they were cropped off.

Quote

This is the reason I made the post: A) diffraction should be an issue; B) the pictures show not much difference. A and B cannot both be correct unless diffraction is not the overriding issue.

Exactly. Something is wrong. It would be nice if there were no ill effect from diffraction, but unfortunately diffraction does hurt image quality.

Quote

In my past study of artifacts (the color noise thing) I went down the rathole of looking at a wide assortment of de-bayer methods. It quickly became a mess of academic papers that went beyond where I wanted to go. One useful thing did come out of that: an idea of the difficulties in de-bayering. I suspect the local-area search of the best algorithms, followed by rules of interpretation, may be making the diffraction issue secondary. Especially when lines are involved, predictions (fake detail) are pushed forward. This will tend to beat the standard ISO resolution-type charts.

That could be the case, but algorithms like Amaze are (like most others) optimizing a delicate balance between artifacts and detail. They do not invent detail where there is none to begin with. They are not in the business of single-image super-resolution, which uses resized samples of features located elsewhere in the image. All these demosaicing algorithms need some signal in a restricted local area (5x5 or 7x7 samples) to produce a luminosity estimate that differentiates features from the surrounding background.

Yes, I've been trying to get that 64bit Win 7 version for a while already, but the zip file is reported to be corrupt. I'll wait for an official build on the RawTherapee website.

Again, it's not the Raw converter that is suspect.

Cheers, Bart

P.S. I've received a reply from the author of the website mentioned; he's checked his file and found it to be okay. So I'll try downloading with another browser. Anyway, the previous RT version I already have installed also gives very high resolution results, so I don't expect that to make much difference. 'Amaze' is amazing in how clean the high resolution conversion result is, and 3 FC suppression steps also take care of most of the false color artifacting.

It does look like your camera and lens combination are a marriage made in heaven ...

The lines in the printout are converging into finer detail. Nothing is going to change that. You see the lines get finer to the point they blur together. The target is distance invariant. The pixels are not going to resolve more by re-sampling. If you don't trust the laser printout, look at the fine lines on the cracked paint. I put the target on that building in the park for exactly that reason: the random fine lines of the paint cracks.

You do understand that RT has built-in deconvolution? In the unsharpened images it is not turned on. In the one sharpened image I attached later it is using R-L deconvolution. It is also using "micro-contrast" and "contrast by detail", which I assume is wavelets. You can see the difference at the edge of the target where the page is white. In the unsharpened files the image just goes white off the pattern. In the sharpened file you see the pattern bleed into the white area. That is invented detail. Any of these methods that use "variable gradients" are making predictions. Roger Clark talked about invented detail in digital years back when he compared drum-scanned Velvia vs. digital. He showed zooms of reeds where some of the apparent digital detail did not exist in the higher-resolution drum scan. I am not talking about artifacts; I am talking about a few ghost stalks of reeds.

I believe diffraction is no longer an issue. It obviously has not gone away; it is being predicted out by gradient-type de-bayer along with all the sharpening-type routines. By definition, de-bayer has to figure out how to fill holes. The best routines are filling the diffraction blur hole.
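For anyone curious what "filling holes" means concretely: each sensel records only one color, so the de-bayer must estimate the other two from neighbors. A minimal sketch (my own illustration in numpy, nothing like the gradient-based Amaze routine) of the simplest possible case, bilinear interpolation of the green channel on an RGGB Bayer mosaic:

```python
import numpy as np

def bilinear_green(mosaic):
    """Fill the green 'holes' of an RGGB Bayer mosaic by averaging the
    four green neighbours -- the simplest possible de-bayer step.
    (Borders are only approximate due to edge padding.)"""
    h, w = mosaic.shape
    green = np.zeros((h, w))
    # Green sites in an RGGB layout: (even row, odd col) and (odd row, even col)
    gmask = np.zeros((h, w), dtype=bool)
    gmask[0::2, 1::2] = True
    gmask[1::2, 0::2] = True
    green[gmask] = mosaic[gmask]
    # At R and B sites, average the four green neighbours (up/down/left/right)
    p = np.pad(green, 1, mode="edge")
    est = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0
    green[~gmask] = est[~gmask]
    return green
```

Gradient-directed algorithms improve on this by interpolating along detected edges instead of blindly averaging across them, which is exactly where the "prediction" of detail comes from.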

The lines in the printout are converging into finer detail. Nothing is going to change that. You see the lines get finer to the point they blur together. The target is distance invariant. The pixels are not going to resolve more by re-sampling.

Hi Arthur,

While that is correct, upsampling does lose microdetail contrast. Therefore the target contrast may be a bit lower (I can't tell if, or by how much) at the highest level of detail, making it easier for the camera not to develop aliasing. Because it's not possible to compare with the original, I can't judge how much influence it has. All I know (from other PM exchanges) is that a lower-resolution target can influence the outcome of the slanted-edge score, which is of course much more sensitive than the visual star target. It was even possible to detect a difference in blur sigma between the left and right side of the target, because it was shot at a 1 degree angle off perpendicular.

Quote

If you don't trust the laser printout, look at the fine lines on the cracked paint. I put the target on that building in the park for exactly that reason: the random fine lines of the paint cracks.

Well, that's another issue that's often overlooked: diffraction kills the lowest-contrast microdetail first, before it kills the higher-contrast microdetail. One may be able to restore a certain level of detail, but some is already lost. From the looks of it, your camera + lens + raw converter combination seems to do a very good job and strike a nice balance. Good for you.

Quote

You do understand that RT has built-in deconvolution? In the unsharpened images it is not turned on. In the one sharpened image I attached later it is using R-L deconvolution.

Not only do I understand it, I pointed it out to a lot of folks who didn't know that.

Quote

It is also using "micro-contrast" and "contrast by detail", which I assume is wavelets. You can see the difference at the edge of the target where the page is white. In the unsharpened files the image just goes white off the pattern. In the sharpened file you see the pattern bleed into the white area. That is invented detail.

Yes, the amount of control is super useful, and effective. Not something for those who get intimidated easily by such features though.

Quote

Any of these methods that use "variable gradients" are making predictions. Roger Clark talked about invented detail in digital years back when he compared drum-scanned Velvia vs. digital. He showed zooms of reeds where some of the apparent digital detail did not exist in the higher-resolution drum scan. I am not talking about artifacts; I am talking about a few ghost stalks of reeds.

They are artifacts though, and Roger didn't say they weren't (it's mentioned at the bottom of this section). The demosaicing algorithms back then were not as advanced as what we have available today.

Quote

I believe diffraction is no longer an issue. It obviously has not gone away; it is being predicted out by gradient-type de-bayer along with all the sharpening-type routines. By definition, de-bayer has to figure out how to fill holes. The best routines are filling the diffraction blur hole.

I wouldn't generalize a specific (specific very sharp lens / camera sensor with mild AA-filter and a not too small 5.97 micron sensel pitch / a very effective demosaicing algorithm / a specific level of contrast) situation, as if it were universally applicable.

What the star target teaches us is that for this combination of components, f/16 apparently still produces good visual detail, approaching the Nyquist limit. Deconvolution sharpening with a relatively small radius can boost the signal-to-noise ratio to even less contrast loss near the limiting resolution, which can help e.g. with producing large output. Even the aliasing seems to be behaving quite nicely, thanks to the Amaze algorithm, so some of it may go unnoticed as false detail.

It looks like a very fortunate combination, congratulations. Diffraction is less of a consideration when you use this lens, so you can focus on other elements that make the shot.

With all respect for the many things I have learned from your posts, I do not think it is this lens or lens/camera combo. Without question it is a fine lens and a fine camera. It was DxO's top-rated lens in their normal-to-short-tele article, which is why I got it. However great it is, like you said before, it cannot overcome physics. It must be the software routines.

Here is the typical level of detail I get with the old Minolta 50 macro and this software. It's similar for the 100 or 300 f/4G. At higher f-ratios it needs more sharpening. Any good camera and sharp prime lens from any of the manufacturers will produce similar results with a good tripod and remote release. Anyone getting much worse detail under those circumstances is using the wrong software.

Edit: Let me put it this way: I used to use other software, then use IPlus to sharpen and remove noise. When I have tried that with this version of RT, the image does not improve; artifacts grow. When I try to remove noise it doesn't help: I get a more natural-looking result (fine, noiseless film look) at the expense of a much softer image. There is no point in saving the result.

With all respect for the many things I have learned from your posts, I do not think it is this lens or lens/camera combo. Without question it is a fine lens and a fine camera. It was DxO's top-rated lens in their normal-to-short-tele article, which is why I got it. However great it is, like you said before, it cannot overcome physics. It must be the software routines.

Okay, here's (attached) a case in point (which I've maybe hinted at too cursorily) to consider. Check out the wood grain structure at the patch in the approx. 8 o'clock position relative to the star target of the file you made available. It has lower contrast than the star target, it has fine, low micro-contrast detail, and it kind of fades in and out of being resolved and not resolved. I'd hate to deliver such (non-)detail to a commercial customer whose passion is in wood-grain-related materials ...

It would be interesting to compare to the 'better' apertures with the same Raw conversion settings.

You picked something that you knew would be particularly hard for a de-bayer: low contrast, fine red lines. Three pixels out of four have no red filter, so the de-bayer has to fill those in. I will upload the raws for you. You have me curious too.

Anyone using this, remember to hover your mouse over the link at the bottom of the page to make sure it is a .nef .

Thanks for making it available. The relative differences are subtle (when we disregard the precipitation), only a tiny bit in favor of f/8, as could be expected because the f/16 shot was still pretty good. The difference between these f/8 and f/16 shots would under most circumstances not be detectable in print.

Personally, to show diffraction effects, I draw the line at the aperture where the 'diameter' of the diffraction pattern exceeds 1.5x the sensel pitch. In this camera's case that would be f/6.3, where the diffraction pattern hardly affects neighboring sensels when features are aligned with the sensel grid. If the residual aberrations are well corrected, then that's where I expect the lens' sweet spot to be, with this AA-filter and sensor. It would be close to producing the highest resolution possible per pixel, have a good correction of residual lens aberrations (also in the corners), and probably as little vignetting as possible.
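That 1.5x criterion fits in a few lines of arithmetic. A sketch, assuming green light at 0.55 micron and the usual first-zero Airy diameter of 2.44·λ·N (the f/6.3 figure above corresponds to this value taken down to a standard 1/3-stop):

```python
# Aperture at which the Airy disk diameter (2.44 * wavelength * N)
# reaches 1.5x the sensel pitch -- the onset-of-visible-diffraction
# criterion described above (green light assumed).
def diffraction_onset_fnumber(pitch_um, wavelength_um=0.55):
    return 1.5 * pitch_um / (2.44 * wavelength_um)

print(round(diffraction_onset_fnumber(5.97), 1))  # 5.97 micron pitch -> 6.7
print(round(diffraction_onset_fnumber(4.88), 1))  # 4.88 micron pitch -> 5.5
```

Note that 1.5 / (2.44 x 0.55) is about 1.12, which is where the "sensel pitch in microns x 1.11" rule of thumb mentioned later in the thread comes from.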

Well, that's another issue that's often overlooked: diffraction kills the lowest-contrast microdetail first, before it kills the higher-contrast microdetail. One may be able to restore a certain level of detail, but some is already lost. From the looks of it, your camera + lens + raw converter combination seems to do a very good job and strike a nice balance. Good for you.

What the star target teaches us is that for this combination of components, f/16 apparently still produces good visual detail, approaching the Nyquist limit. Deconvolution sharpening with a relatively small radius can boost the signal-to-noise ratio to even less contrast loss near the limiting resolution, which can help e.g. with producing large output. Even the aliasing seems to be behaving quite nicely, thanks to the Amaze algorithm, so some of it may go unnoticed as false detail.

Bart,

Your star chart is an excellent tool for quantitative analysis of resolution, but I remember from a previous discussion with you that it is high contrast and measures resolution near the Rayleigh limit (usually stated to represent ~10% MTF, but somewhat higher according to an analysis that you published). In real-world photography we often deal with lower contrast than is present in the star target, and the quantitative analysis should be supplemented by subjective analysis of the image for micro-contrast and "sparkle". DigLloyd (a pay site, but well worth the modest cost) publishes extensive, abundantly illustrated subjective studies using top lenses such as the Coastal Optics 60 mm f/4 APO and the Zeiss 135 mm f/2 APO on various cameras with and without low-pass filters, including the Nikon D7100, which lacks a low-pass filter and whose pixel pitch corresponds to a 51 MP full-frame sensor (allegedly coming from Nikon later this fall).

In general, he concludes that for a D800e type sensor, f/5.6 is the smallest aperture yielding maximal image quality, even after aggressive deconvolution sharpening. He does not state the deconvolution algorithm he uses or the settings. In his preview of the new Nikon 80-400 mm f/4.5-5.6, he states that for critical work, the lens is essentially a one aperture deal and it had better be good at f/5.6.

Okay, here's (attached) a case in point (which I've maybe hinted at too cursorily) to consider. Check out the wood grain structure at the patch in the approx. 8 o'clock position relative to the star target of the file you made available. It has lower contrast than the star target, it has fine, low micro-contrast detail, and it kind of fades in and out of being resolved and not resolved. I'd hate to deliver such (non-)detail to a commercial customer whose passion is in wood-grain-related materials ...

It would be interesting to compare to the 'better' apertures with the same Raw conversion settings.

The above observation points out that subjective analysis does provide information beyond the resolution limit derived from your star chart. Deconvolution sharpening is very helpful in recovering contrast at smaller apertures (larger f/stop numbers), but as you have pointed out, it cannot recover low-contrast high-frequency data. For critical work, I suggest that an aperture of f/5.6 is likely optimal with the D800e. What do you think?

If you are opening these in RT, you see the center gray blur circle shrink as you turn on each of the following features: RL sharpening, micro-contrast, and contrast by detail. Also go back and forth between RL sharpening and USM. RL makes the pattern go black-and-white high contrast with not much propagation of the lines inward. USM seems to leave the whole thing a bit gray with much stronger prediction of the lines; the faint blur circle moves in quite a bit more.
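For those wondering what R-L actually does under the hood: it is a short iterative loop that re-blurs the current estimate with the assumed PSF and corrects the estimate by the ratio against the recorded data. A minimal 1-D numpy sketch (purely illustrative; not RT's actual implementation):

```python
import numpy as np

def richardson_lucy_1d(blurred, psf, iterations=30):
    """Minimal 1-D Richardson-Lucy deconvolution (illustration only)."""
    estimate = np.full_like(blurred, 0.5)      # flat starting guess
    psf_flipped = psf[::-1]
    for _ in range(iterations):
        # Re-blur the current estimate with the assumed PSF
        reblurred = np.convolve(estimate, psf, mode="same")
        # Ratio of recorded data to the re-blurred estimate
        ratio = blurred / np.maximum(reblurred, 1e-12)
        # Back-project the ratio and update the estimate
        estimate = estimate * np.convolve(ratio, psf_flipped, mode="same")
    return estimate
```

With a known small-radius blur this recovers edge contrast; with too many iterations it starts amplifying noise, which is presumably why RT exposes both the radius and the iteration/amount controls.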

The above observation points out that subjective analysis does provide information beyond the resolution limit derived from your star chart. Deconvolution sharpening is very helpful in recovering contrast at smaller apertures (larger f/stop numbers), but as you have pointed out, it cannot recover low-contrast high-frequency data. For critical work, I suggest that an aperture of f/5.6 is likely optimal with the D800e. What do you think?

Hi Bill,

For a D800/D800E sensor array with an approx. 4.88 micron sensel pitch, f/5.6 produces a diffraction pattern (green light) with a diameter of almost exactly 1.5x the sensel pitch. It's the magic number I've been mentioning, where the onset of visible diffraction starts (in the low-contrast microdetail). That leads to a simple rule of thumb: sensel pitch in microns x 1.11 = F-number where visible diffraction (for green wavelengths) begins.

However, some lenses perform even better in the center at f/4 because they have very few residual lens aberrations even wide open, and f/4 creates even less diffraction than f/5.6. But still, the corners of the image may benefit from stopping down a bit further, so it ultimately boils down to a compromise based on the intended use of the lens. A portrait lens can perhaps get away with softer corners, but a lens for architecture or reproduction can not.

You got me thinking with that one. Can you turn the reasoning on its head?

Thinking ahead to sensors with really fine pitches, can you come up with a similar rule of thumb that takes the pixel pitch as the input and spits out the f-stop beyond which you'll never (or almost never; you choose the contrast at which to stop worrying) see any aliasing? By "beyond", I mean numerically larger, or of smaller diameter. Assume a Bayer CFA and no anti-aliasing filter.

Thinking ahead to sensors with really fine pitches, can you come up with a similar rule of thumb that takes the pixel pitch as the input and spits out the f-stop beyond which you'll never (or almost never; you choose the contrast at which to stop worrying) see any aliasing? By "beyond", I mean numerically larger, or of smaller diameter. Assume a Bayer CFA and no anti-aliasing filter.

Do note that Jacobson's calculations do not account for the averaging effect of our area-sampling sensels, which may take an additional toll of perhaps 1/3rd of an aperture stop. He also considers the additional effect of the aperture's diffraction (which is harder to generalize). Anyway, an absolute physical maximum of 3.6x the sensel pitch is also a useful figure to know (no aliasing is possible when the modulation of higher spatial frequencies is zero).

So, for diffraction affected resolution, it seems we have a range of 1.11 to 3.6 times the sensel pitch (in microns) as our physically limited creative playground. This all is of course hugely simplified, as if we shoot with monochromatic light, but it does provide useful benchmarks and reality checks.
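Those two endpoints follow directly from comparing the diffraction cutoff frequency, 1/(λ·N), with the sensor's Nyquist frequency, 1/(2·p): once the cutoff falls at or below Nyquist, diffraction alone leaves nothing to alias. A sketch (green light at 0.55 micron assumed):

```python
# F-number at/beyond which diffraction alone makes aliasing impossible:
#   cutoff 1/(wavelength*N) <= Nyquist 1/(2*pitch)
#   =>  N >= 2 * pitch / wavelength
# which is ~3.64x the pitch in microns for 0.55 micron green light.
def aliasing_free_fnumber(pitch_um, wavelength_um=0.55):
    return 2.0 * pitch_um / wavelength_um

print(round(aliasing_free_fnumber(5.97), 1))  # 5.97 micron pitch -> 21.7
print(round(aliasing_free_fnumber(4.88), 1))  # 4.88 micron pitch -> 17.7
```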

A while back I wrote a spreadsheet to calculate diffraction for macro shots which was informative but slightly boring so I changed it to calculate MTF from wavelength, aperture (infinite focus) and pixel pitch. Much more fun.
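As an illustration of what such a calculation involves (not Arthur's spreadsheet, just my own sketch under the usual simplifications: monochromatic light, ideal circular aperture, 100% fill factor square sensel), the system MTF at infinite focus is the product of the diffraction MTF and the pixel-aperture sinc:

```python
import math

def diffraction_mtf(freq, fnumber, wavelength_mm=550e-6):
    """MTF of an ideal circular aperture; freq in cycles/mm."""
    cutoff = 1.0 / (wavelength_mm * fnumber)   # diffraction cutoff frequency
    s = freq / cutoff
    if s >= 1.0:
        return 0.0
    return (2.0 / math.pi) * (math.acos(s) - s * math.sqrt(1.0 - s * s))

def pixel_mtf(freq, pitch_mm):
    """Box (area-sampling) MTF of a square sensel: |sinc(freq * pitch)|."""
    x = math.pi * freq * pitch_mm
    return 1.0 if x == 0.0 else abs(math.sin(x) / x)

def system_mtf(freq, fnumber, pitch_mm, wavelength_mm=550e-6):
    return diffraction_mtf(freq, fnumber, wavelength_mm) * pixel_mtf(freq, pitch_mm)
```

At the ~102 cycles/mm Nyquist frequency of a 4.88 micron pitch, this gives roughly 38% system MTF at f/5.6 but only a few percent at f/16, which is consistent with the discussion above.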