Diffraction Limit Discussion Continuation

The prior thread on this interesting if somewhat heated discussion maxed out before I could post some additional information. I'll leave it to others to address the theoretical differences. I want to return to the issue of evidence. What evidence is there to support the proposition that pixel density does or does not affect the optimal aperture setting for a given sensor size? Some (including the calculator at the Cambridge in Colour website) indicate that the effect of diffraction sets in sooner for sensors with smaller (more) pixels than for sensors with larger (fewer) pixels. Others, including such luminaries as Bobn2, Anders, and Great Bustard, say "no, no, no": pixel size is not a meaningful factor. So we have a pretty clear-cut difference of opinion that should be relatively easy to resolve with objective lens tests performed on same-sized sensors with differing pixel sizes/densities using the same lens tested at various aperture settings. Right?

I noted several times in the prior thread that DXOMark provides those tests. DXOMark is unusual in that most of its lens tests are conducted on a range of cameras. The problem is that it's clunky to find the appropriate data points and plot them. I also noted that in every case I've looked at, the DXOMark data supports the proposition that pixel size is a non-factor. I've looked at over a dozen lenses tested on several dozen cameras and have yet to find contrary examples. However, it's a big database and I'd encourage others to look as well.

One objection that's been raised is that the DXOMark measurements are too coarse because they're only taken at full f/stop settings. The speculation is that the peak acutance might be occurring somewhere between the measured stops and therefore we can't rely on the DXOMark data. Having looked at enough examples, this struck me as pretty preposterous. Surely, the peaks wouldn't always average out exactly to the same major f-stop setting. Putting that aside, I thought that the data points we do have should be sufficient to infer with some degree of certainty exactly where the peaks occur. I'm certainly no math whiz (far from it!) but I can throw data into an Excel chart and see how the trendlines curve. Below are two examples chosen to illustrate the point. One is M43-based because - after all - that's what this forum is all about; and one is based on an extreme case comparing a 12MP camera (the Nikon D3) to a 36MP camera (the D800). Charts below. Fire away...

One objection that's been raised is that the DXOMark measurements are too coarse because they're only taken at full f/stop settings. The speculation is that the peak acutance might be occurring somewhere between the measured stops and therefore we can't rely on the DXOMark data. Having looked at enough examples, this struck me as pretty preposterous. Surely, the peaks wouldn't always average out exactly to the same major f-stop setting. Putting that aside, I thought that the data points we do have should be sufficient to infer with some degree of certainty exactly where the peaks occur.

There certainly is not sufficient data here to draw any such inference. You have 4 data points, and the curves you depict are at least cubics, requiring the estimation of 4 parameters. This problem has zero degrees of freedom and is not amenable to statistical analysis. You can draw no meaningful inferences.

Even if you were to restrict yourself to a quadratic approximation (with 1 degree of freedom), you would end up with completely meaningless (and completely insignificant) statistical results. Here is a regression using a quadratic:

As you can see, the only significant coefficient is the constant term (t = 12.257), while the f-stop terms are completely statistically insignificant with t's, respectively, at 0.559 and -0.760. With their insignificance, so also goes any inference about the maximum.
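For anyone who wants to see the degrees-of-freedom problem concretely, here is a minimal sketch. The four acutance values below are hypothetical placeholders, not DXOMark's numbers; the point is only that a 3-parameter quadratic fitted to 4 points leaves a single residual degree of freedom, so the coefficient t-statistics rest on almost nothing:

```python
import numpy as np

# Four (f-stop, acutance) data points -- hypothetical values, not DXOMark's
x = np.array([2.8, 4.0, 5.6, 8.0])
y = np.array([70.0, 74.0, 73.0, 68.0])

# Quadratic model: constant + linear + squared term = 3 parameters,
# leaving 4 - 3 = 1 residual degree of freedom
X = np.column_stack([np.ones_like(x), x, x**2])
beta, res, rank, _ = np.linalg.lstsq(X, y, rcond=None)

df = len(y) - X.shape[1]                 # 1 degree of freedom
sigma2 = res[0] / df                     # residual variance estimate
cov = sigma2 * np.linalg.inv(X.T @ X)    # coefficient covariance matrix
t_stats = beta / np.sqrt(np.diag(cov))   # t-statistic per coefficient
```

With one residual degree of freedom, the residual variance estimate is itself nearly worthless, which is why the resulting t-statistics carry so little weight.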

I'm certainly no math whiz

Understood, and agreed.

(far from it!) but I can throw data into an Excel chart and see how the trendlines curve. Below are two examples chosen to illustrate the point. One is M43-based because - after all - that's what this forum is all about; and one is based on an extreme case comparing a 12MP camera (the Nikon D3) to a 36MP camera (the D800). Charts below. Fire away...

I have developed a reluctance to shoot at small apertures, largely as a result of reading posts about diffraction limits. However, when I look at a large number of pictures in my library, I fairly often wish I had shot at a smaller aperture for greater DOF. I rarely wish I had shot at a larger aperture, the exception being motion blur. My conclusion is that controlling DOF trumps any consideration of diffraction.

There's always a balance at play. Assuming you want the entire frame to be as sharp as possible, you want to stop down for a greater DOF, as elements of the scene outside the DOF will, by definition, be soft.

It's relatively easy to see DOF differences - this part of the photo is sharper than that part of the photo. It's much harder to see diffraction differences, because they cause an overall decrease in sharpness, leaving no 'this' and 'that' to compare when looking at the photo. Comparative differences are easier to discern.

That's why I tend to agree with thk0.

We also want to stop down since this reduces the aberrations in the lens. This is why, for example, a lens is sharper at f/2.8 than f/1.4. However, at some point, the increasing diffraction softening outweighs the lessening lens aberrations (usually by f/4 - f/5.6, with the edges typically lagging behind the center by a stop).

Aside from diffraction softening, stopping down also results in a lower shutter speed (at the risk of more motion blur and/or camera shake) or less light falling on the sensor, which increases noise. So, for example, while someone might shoot a scene at f/5.6 in good light, they would shoot the same scene at f/2.8 in poor light, even though f/5.6 will give the deeper DOF as well as being at a sharper aperture.

Bob's got it right, but I can reconcile the two camps - those who believe pixel density has no impact on diffraction and those who are convinced that it does.

Yes, you are missing something, which is that using a sensor with a finer pixel pitch does not change the f-number at which the resolution starts to drop due to diffraction. It changes the resolution that you get, but the drop starts at exactly the same f-number. Here for example is the MTF50 of a Nikon lens on two different Nikon cameras with different pixel pitch:

Notice that in both the resolution falls away after f/5.6. The 24MP camera extracts more resolution from the lens than the 12MP one.

Notice also that there is no defined 'limit' where the resolution suddenly falls due to diffraction; it is a smooth and even drop-off. The 'limit' is just a bogus idea. McHugh has taken a well-defined optical term - a 'diffraction-limited' system is one so good that diffraction is the only limit on its performance - turned it inside out, and made it into something senseless.
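Bob's point can be reproduced with a toy model: treat diffraction blur as growing linearly with f-number, aberration blur as shrinking with it, add the pixel as a third blur combined in quadrature, and look for the peak. All numbers below are illustrative assumptions, not measurements:

```python
import numpy as np

f_stops = np.array([1.4, 2.0, 2.8, 4.0, 5.6, 8.0, 11.0, 16.0])

def blur(n, pixel_um):
    diffraction = 1.5 * n          # diffraction blur grows with f-number
    aberration = 30.0 / n          # aberration blur shrinks when stopping down
    # Combine independent blurs in quadrature
    return np.sqrt(diffraction**2 + aberration**2 + pixel_um**2)

# "Resolution" as the reciprocal of total blur, for two pixel pitches
res_fine = 1.0 / blur(f_stops, pixel_um=4.8)    # finer pitch (higher MP)
res_coarse = 1.0 / blur(f_stops, pixel_um=8.4)  # coarser pitch (lower MP)

peak_fine = f_stops[np.argmax(res_fine)]
peak_coarse = f_stops[np.argmax(res_coarse)]
# Both peaks fall at the same f-number; the finer pitch is sharper at every stop
```

The f-number that minimizes the lens-side blur does not involve the pixel term at all, which is why both pitches peak at the same stop while the finer pitch resolves more everywhere.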

Bob

If you examine a flat view of the prior thread and search all the text on all eight pages, you will not find any of these strings:

And that's what's been missing from this discussion - a consideration of the impact of enlargement factor on diffraction's ability to inhibit a desired print resolution.

Bob's statement is absolutely true as long as the enlargement factor imposed by the ratio of the final image size to sensor size remains constant. If someone decides to exploit a higher pixel count on a like-sized sensor to make a larger print (without simultaneously enforcing a proportionately greater viewing distance), otherwise using the same f-Number, the Airy disk diameters at the sensor will suffer a greater enlargement factor in the final image and thus could inhibit your desired print resolution.

It's the larger enlargement factors that one might use with higher pixel densities on a like-sized sensor (without somehow enforcing proportionately greater viewing distances), that can require a diffraction-savvy shooter to use a wider aperture, to maintain a given print resolution. If you're making the same-sized print, from the same-sized sensor, to be viewed at the same viewing distance, pixel density has no impact on diffraction. At best, pixel density's impact on diffraction is only incidental with the higher enlargement factors that are encouraged.

It's funny that no one seems to notice that Sean McHugh's advanced-mode Diffraction Limit Calculator includes, as its first variable, a user-specifiable input for "Max Print Dimension." Ask yourself why that variable is there. Are we all making 10-inch prints (the default value)?

His "Eyesight" variable is an attempt at determining your desired print resolution.

"Camera Type" is going after your sensor size.

Max Print Dimension / Max Sensor Dimension = Enlargement factor.

Yes - enlargement factor must be considered when determining the f-Number at which diffraction will begin to inhibit a desired print resolution.

And yet, almost no one includes enlargement factor or viewing distance in discussions of either DoF or diffraction. Why?

Small sensors can give us the same DoF and diffraction as larger sensors, but with faster shutter speeds at smaller f-Numbers, and thus fewer "diffraction-free" f-Numbers from which to choose.

Large sensors can give us the same DoF and diffraction as smaller sensors, but with slower shutter speeds at larger f-Numbers, and thus more "diffraction-free" f-Numbers from which to choose.

It's the higher enlargement factor required by small sensors having the same pixel count as larger sensors that forces the use of smaller apertures (smaller Airy disks at the sensor) before magnification, to produce like-sized, like-resolution prints.
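The enlargement-factor argument can be turned into a rough calculation of the kind McHugh's calculator performs. This is only a sketch under stated assumptions (550 nm light, Airy disk diameter 2.44 * wavelength * N, and an eye that resolves about 5 lp/mm on the print); the calculator's exact formula may differ:

```python
# Assumptions: 550 nm light, Airy disk diameter = 2.44 * wavelength * N,
# eye resolves ~5 lp/mm on the print at the chosen viewing distance
def max_f_number(sensor_width_mm, print_width_mm, eye_lp_per_mm=5.0):
    enlargement = print_width_mm / sensor_width_mm
    # Largest tolerable blur on the print, referred back to the sensor
    max_blur_mm = (1.0 / eye_lp_per_mm) / enlargement
    wavelength_mm = 550e-6
    return max_blur_mm / (2.44 * wavelength_mm)

# Same 250 mm wide print from Four Thirds (17.3 mm) vs full frame (36 mm)
n_m43 = max_f_number(17.3, 250.0)   # roughly f/10
n_ff = max_f_number(36.0, 250.0)    # roughly f/21
```

For a fixed print size, the allowable f-number scales directly with sensor width, which is the "fewer diffraction-free f-Numbers" point above.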

To answer the OP (of the original thread), Jim Pilcher, see this post and those that follow:

By the way, I absolutely love the way Jim included this text in his OP:

"I'm unaware of the calculations and perceptual considerations."

You are not alone, Jim, but at least you intuitively understand that perception is something that must be taken into consideration! (Hint: Enlargement factor, viewing distance, and desired print resolution.)

The prior thread on this interesting if somewhat heated discussion maxed out before I could post some additional information. I'll leave it to others to address the theoretical differences. I want to return to the issue of evidence. What evidence is there to support the proposition that pixel density does or does not affect the optimal aperture setting for a given sensor size? Some (including the calculator at the Cambridge in Colour website) indicate that the effect of diffraction sets in sooner for sensors with smaller (more) pixels than for sensors with larger (fewer) pixels. Others, including such luminaries as Bobn2, Anders, and Great Bustard, say "no, no, no": pixel size is not a meaningful factor. So we have a pretty clear-cut difference of opinion that should be relatively easy to resolve with objective lens tests performed on same-sized sensors with differing pixel sizes/densities using the same lens tested at various aperture settings. Right?

I noted several times in the prior thread that DXOMark provides those tests. DXOMark is unusual in that most of its lens tests are conducted on a range of cameras. The problem is that it's clunky to find the appropriate data points and plot them. I also noted that in every case I've looked at, the DXOMark data supports the proposition that pixel size is a non-factor. I've looked at over a dozen lenses tested on several dozen cameras and have yet to find contrary examples. However, it's a big database and I'd encourage others to look as well.

One objection that's been raised is that the DXOMark measurements are too coarse because they're only taken at full f/stop settings. The speculation is that the peak acutance might be occurring somewhere between the measured stops and therefore we can't rely on the DXOMark data. Having looked at enough examples, this struck me as pretty preposterous. Surely, the peaks wouldn't always average out exactly to the same major f-stop setting. Putting that aside, I thought that the data points we do have should be sufficient to infer with some degree of certainty exactly where the peaks occur. I'm certainly no math whiz (far from it!) but I can throw data into an Excel chart and see how the trendlines curve. Below are two examples chosen to illustrate the point. One is M43-based because - after all - that's what this forum is all about; and one is based on an extreme case comparing a 12MP camera (the Nikon D3) to a 36MP camera (the D800). Charts below. Fire away...

While I think the DxO data can be used to test whether the peak is independent of sensor resolution, the way I would do it would be the following:

Take a sample of lenses. Record for each where the peak occurs for bodies with different pixel counts along with the pixel count of the body in question. Run a regression with some measure of peak f-stop (e.g., log2((1/f-number)^2) as the dependent variable and some measure of sensor resolution (e.g., the square root of the pixel count) as the independent. If we (you, I, and a whole bunch of others) are right, the relationship should be approximately zero. Ideally, the regression should take the form of so-called multi-level analysis to get the standard errors right.
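A bare-bones version of that regression (skipping the multi-level correction, and with placeholder data standing in for actual DxO peak readings) might look like:

```python
import numpy as np

# Each row: (pixel count in MP, f-number of the measured sharpness peak)
# -- placeholder values standing in for DXOMark readings
data = np.array([
    [12.0, 4.0], [12.0, 5.6],
    [16.0, 4.0], [16.0, 5.6],
    [24.0, 4.0], [24.0, 5.6],
    [36.0, 4.0], [36.0, 5.6],
])

x = np.sqrt(data[:, 0])               # linear measure of sensor resolution
y = np.log2((1.0 / data[:, 1]) ** 2)  # suggested measure of peak f-stop

# Simple OLS; "pixel count doesn't matter" predicts a slope near zero
slope, intercept = np.polyfit(x, y, 1)
```

If pixel count doesn't move the peak, the fitted slope should sit near zero, as it does with these placeholder values.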

Not that I'd really find it worth the effort (or I'd do it myself). In my view, we already have the evidence to tell which theory is right.

I think part of the problem in the previous thread is people talking about different things using the same terms, or the same thing using different terms, or disagreeing about which tiny mathematical improvements should be regarded as real-world differences and which should be viewed as negligible changes. Rather than rehash the existing discussion, I'd like to take a slightly different tack.

If we take the system resolution equation r = (l^-2 + s^-2)^-1/2, where r is the system resolution, l is the lens resolution, and s is the sensor resolution, then we can draw the following conclusions:

In other words, as you improve sensor resolution, the overall system resolution increases, but can never be greater than the lens resolution. Similarly, as lens resolution improves, overall system resolution improves, but can never exceed the sensor resolution. In other words, there are hard limits on the resolution of the system.

Now imagine a system with resolution r_1 featuring a lens with resolution l_0, and a sensor with resolution s_1 = l_0 * 10^1/2. Also imagine a system with resolution r_2, featuring the same lens, but a different sensor with resolution s_2 = l_0 * 10^-1/2. Therefore s_1 = 10 * s_2.

r_1 = l_0 * (10/11)^1/2

r_2 = l_0 * (1/11)^1/2 = s_2 * (10/11)^1/2

The resolution of r_1 is effectively equal to the resolution of the lens, while the resolution of r_2 is effectively equal to the resolution of the sensor s_2.

Increasing the resolution of the sensor by an order of magnitude results in a swing from system resolution being determined predominantly by the resolution of the sensor to the system resolution being determined primarily by the lens.

Both systems should have their peak sharpness at about the same aperture, but the limiting factor in the resolution will be different for the two systems. Increase sensor resolution beyond s_1 and you won't get a higher resolution than l_0, so it's effectively diffraction limited. Decrease the resolution below s_2 and it's predominantly the sensor resolution that determines the system resolution, so diffraction is largely irrelevant. Hence some of us saying that r_1 is more limited by diffraction than r_2 even if peak sharpness is at the same aperture and r_1 is always greater than r_2.
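The worked example above is easy to verify numerically. A minimal check, using the quadrature rule 1/r^2 = 1/l^2 + 1/s^2 that the worked numbers follow, with an arbitrary lens resolution of 100:

```python
# Quadrature combination of lens and sensor resolution:
# 1/r^2 = 1/l^2 + 1/s^2
def system_resolution(l, s):
    return (l**-2 + s**-2) ** -0.5

l0 = 100.0                  # arbitrary lens resolution
s1 = l0 * 10**0.5           # high-resolution sensor
s2 = l0 * 10**-0.5          # low-resolution sensor, so s1 = 10 * s2

r1 = system_resolution(l0, s1)   # = l0 * (10/11)**0.5, about 95% of l0
r2 = system_resolution(l0, s2)   # = l0 * (1/11)**0.5 = s2 * (10/11)**0.5
```

Swapping the sensor from s_1 to s_2 moves the system from lens-limited to sensor-limited, but the combining rule itself never changes.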

I think part of the problem in the previous thread is people talking about different things using the same terms, or the same thing using different terms, or disagreeing about which tiny mathematical improvements should be regarded as real-world differences and which should be viewed as negligible changes. Rather than rehash the existing discussion, I'd like to take a slightly different tack.

If we take the system resolution equation r = (l^-2 + s^-2)^-1/2, where r is the system resolution, l is the lens resolution, and s is the sensor resolution, then we can draw the following conclusions:

In other words, as you improve sensor resolution, the overall system resolution increases, but can never be greater than the lens resolution. Similarly, as lens resolution improves, overall system resolution improves, but can never exceed the sensor resolution. In other words, there are hard limits on the resolution of the system.

Now imagine a system with resolution r_1 featuring a lens with resolution l_0, and a sensor with resolution s_1 = l_0 * 10^1/2. Also imagine a system with resolution r_2, featuring the same lens, but a different sensor with resolution s_2 = l_0 * 10^-1/2. Therefore s_1 = 10 * s_2.

r_1 = l_0 * (10/11)^1/2

r_2 = l_0 * (1/11)^1/2 = s_2 * (10/11)^1/2

The resolution of r_1 is effectively equal to the resolution of the lens, while the resolution of r_2 is effectively equal to the resolution of the sensor s_2.

Increasing the resolution of the sensor by an order of magnitude results in a swing from system resolution being determined predominantly by the resolution of the sensor to the system resolution being determined primarily by the lens.

Both systems should have their peak sharpness at about the same aperture, but the limiting factor in the resolution will be different for the two systems. Increase sensor resolution beyond s_1 and you won't get a higher resolution than l_0, so it's effectively diffraction limited. Decrease the resolution below s_2 and it's predominantly the sensor resolution that determines the system resolution, so diffraction is largely irrelevant. Hence some of us saying that r_1 is more limited by diffraction than r_2 even if peak sharpness is at the same aperture and r_1 is always greater than r_2.

Glad to see that you finally worked out the implications of the formula I gave you.

Substantively, I have only two comments: That peak sharpness will occur at exactly rather than approximately the same aperture and that "my/our" side is hardly the one to blame for any conceptual or terminological misunderstandings.

I think part of the problem in the previous thread is people talking about different things using the same terms, or the same thing using different terms, or disagreeing about which tiny mathematical improvements should be regarded as real-world differences and which should be viewed as negligible changes. Rather than rehash the existing discussion, I'd like to take a slightly different tack.

If we take the system resolution equation r = (l^-2 + s^-2)^-1/2, where r is the system resolution, l is the lens resolution, and s is the sensor resolution, then we can draw the following conclusions:

In other words, as you improve sensor resolution, the overall system resolution increases, but can never be greater than the lens resolution. Similarly, as lens resolution improves, overall system resolution improves, but can never exceed the sensor resolution. In other words, there are hard limits on the resolution of the system.

Now imagine a system with resolution r_1 featuring a lens with resolution l_0, and a sensor with resolution s_1 = l_0 * 10^1/2. Also imagine a system with resolution r_2, featuring the same lens, but a different sensor with resolution s_2 = l_0 * 10^-1/2. Therefore s_1 = 10 * s_2.

r_1 = l_0 * (10/11)^1/2

r_2 = l_0 * (1/11)^1/2 = s_2 * (10/11)^1/2

The resolution of r_1 is effectively equal to the resolution of the lens, while the resolution of r_2 is effectively equal to the resolution of the sensor s_2.

Increasing the resolution of the sensor by an order of magnitude results in a swing from system resolution being determined predominantly by the resolution of the sensor to the system resolution being determined primarily by the lens.

Both systems should have their peak sharpness at about the same aperture, but the limiting factor in the resolution will be different for the two systems. Increase sensor resolution beyond s_1 and you won't get a higher resolution than l_0, so it's effectively diffraction limited. Decrease the resolution below s_2 and it's predominantly the sensor resolution that determines the system resolution, so diffraction is largely irrelevant. Hence some of us saying that r_1 is more limited by diffraction than r_2 even if peak sharpness is at the same aperture and r_1 is always greater than r_2.

Glad to see that you finally worked out the implications of the formula I gave you.

There's nothing there that I hadn't already said to you in other ways.

Substantively, I have only two comments: That peak sharpness will occur at exactly rather than approximately the same aperture and that "my/our" side is hardly the one to blame for any conceptual or terminological misunderstandings.

Anders, I avoided assigning blame to anyone and put it down to misunderstanding. Don't be an ass in response.

If you substantively agree then you'll also agree that when the sensor resolution drops below a certain level, the aperture size won't perceptibly reduce the system resolution in the same way as a system with a sensor resolution similar to the lens resolution. And once you get to a high enough sensor resolution, diffraction overwhelmingly determines system resolution so that the whole system is effectively diffraction limited from wide-open.

I tried to put that to you earlier, but you seemed dismissive of the idea. I'm unclear about whether that is still the case.

I think part of the problem in the previous thread is people talking about different things using the same terms, or the same thing using different terms, or disagreeing about which tiny mathematical improvements should be regarded as real-world differences and which should be viewed as negligible changes. Rather than rehash the existing discussion, I'd like to take a slightly different tack.

If we take the system resolution equation r = (l^-2 + s^-2)^-1/2, where r is the system resolution, l is the lens resolution, and s is the sensor resolution, then we can draw the following conclusions:

In other words, as you improve sensor resolution, the overall system resolution increases, but can never be greater than the lens resolution. Similarly, as lens resolution improves, overall system resolution improves, but can never exceed the sensor resolution. In other words, there are hard limits on the resolution of the system.

Now imagine a system with resolution r_1 featuring a lens with resolution l_0, and a sensor with resolution s_1 = l_0 * 10^1/2. Also imagine a system with resolution r_2, featuring the same lens, but a different sensor with resolution s_2 = l_0 * 10^-1/2. Therefore s_1 = 10 * s_2.

r_1 = l_0 * (10/11)^1/2

r_2 = l_0 * (1/11)^1/2 = s_2 * (10/11)^1/2

The resolution of r_1 is effectively equal to the resolution of the lens, while the resolution of r_2 is effectively equal to the resolution of the sensor s_2.

Increasing the resolution of the sensor by an order of magnitude results in a swing from system resolution being determined predominantly by the resolution of the sensor to the system resolution being determined primarily by the lens.

Both systems should have their peak sharpness at about the same aperture, but the limiting factor in the resolution will be different for the two systems. Increase sensor resolution beyond s_1 and you won't get a higher resolution than l_0, so it's effectively diffraction limited. Decrease the resolution below s_2 and it's predominantly the sensor resolution that determines the system resolution, so diffraction is largely irrelevant. Hence some of us saying that r_1 is more limited by diffraction than r_2 even if peak sharpness is at the same aperture and r_1 is always greater than r_2.

Glad to see that you finally worked out the implications of the formula I gave you.

There's nothing there that I hadn't already said to you in other ways.

There most certainly is: The recognition that the point along the aperture range where peak image resolution occurs is independent of sensor resolution.

Substantively, I have only two comments: That peak sharpness will occur at exactly rather than approximately the same aperture and that "my/our" side is hardly the one to blame for any conceptual or terminological misunderstandings.

Anders, I avoided assigning blame to anyone and put it down to misunderstanding.

Yes I saw that. So I pointed out what was missing.

Don't be an ass in response.

I am not being an ass. You decidedly are by calling me one for absolutely no good reason.

If you substantively agree then you'll also agree that when the sensor resolution drops below a certain level, the aperture size won't perceptibly reduce the system resolution in the same way as a system with a sensor resolution similar to the lens resolution. And once you get to a high enough sensor resolution, diffraction overwhelmingly determines system resolution so that the whole system is effectively diffraction limited from wide-open.

What I substantively agree with is the following:

When the sensor resolution is much lower than lens resolution, variations in lens resolution will have but a small impact on image resolution (but still always such that increased lens resolution leads to increased image resolution). When it is the other way around (lens resolution much lower than sensor resolution), variations in lens resolution will have a much stronger impact on image resolution.

I tried to put that to you earlier, but you seemed dismissive of the idea. I'm unclear about whether that is still the case.

Where was I dismissive about the idea as I spelled it out above? Please provide specific references (the post/posts you have in mind and the passage/passages in those posts).

One objection that's been raised is that the DXOMark measurements are too coarse because they're only taken at full f/stop settings. The speculation is that the peak acutance might be occurring somewhere between the measured stops and therefore we can't rely on the DXOMark data. Having looked at enough examples, this struck me as pretty preposterous. Surely, the peaks wouldn't always average out exactly to the same major f-stop setting. Putting that aside, I thought that the data points we do have should be sufficient to infer with some degree of certainty exactly where the peaks occur.

There certainly is not sufficient data here to draw any such inference. You have 4 data points, and the curves you depict are at least cubics, requiring the estimation of 4 parameters. This problem has zero degrees of freedom and is not amenable to statistical analysis. You can draw no meaningful inferences.

Even if you were to restrict yourself to a quadratic approximation (with 1 degree of freedom), you would end up with completely meaningless (and completely insignificant) statistical results. Here is a regression using a quadratic:

As you can see, the only significant coefficient is the constant term (t = 12.257), while the f-stop terms are completely statistically insignificant with t's, respectively, at 0.559 and -0.760. With their insignificance, so also goes any inference about the maximum.

Substantively, I have only two comments: That peak sharpness will occur at exactly rather than approximately the same aperture.

It's not actually quite true, or at least not always quite true - because the system MTF is the product of the lens and camera MTFs (their PSFs convolve), and the MTF of a birefringent AA filter can, I believe, be a bit aperture dependent. But in reality, such an effect is too small to worry about.

I think part of the problem in the previous thread is people talking about different things using the same terms, or the same thing using different terms, or disagreeing about which tiny mathematical improvements should be regarded as real-world differences and which should be viewed as negligible changes. Rather than rehash the existing discussion, I'd like to take a slightly different tack.

If we take the system resolution equation r = (l^-2 + s^-2)^-1/2, where r is the system resolution, l is the lens resolution, and s is the sensor resolution, then we can draw the following conclusions:

In other words, as you improve sensor resolution, the overall system resolution increases, but can never be greater than the lens resolution. Similarly, as lens resolution improves, overall system resolution improves, but can never exceed the sensor resolution. In other words, there are hard limits on the resolution of the system.

Now imagine a system with resolution r_1 featuring a lens with resolution l_0, and a sensor with resolution s_1 = l_0 * 10^1/2. Also imagine a system with resolution r_2, featuring the same lens, but a different sensor with resolution s_2 = l_0 * 10^-1/2. Therefore s_1 = 10 * s_2.

r_1 = l_0 * (10/11)^1/2

r_2 = l_0 * (1/11)^1/2 = s_2 * (10/11)^1/2

The resolution of r_1 is effectively equal to the resolution of the lens, while the resolution of r_2 is effectively equal to the resolution of the sensor s_2.

Increasing the resolution of the sensor by an order of magnitude results in a swing from system resolution being determined predominantly by the resolution of the sensor to the system resolution being determined primarily by the lens.

Both systems should have their peak sharpness at about the same aperture, but the limiting factor in the resolution will be different for the two systems. Increase sensor resolution beyond s_1 and you won't get a higher resolution than l_0, so it's effectively diffraction limited. Decrease the resolution below s_2 and it's predominantly the sensor resolution that determines the system resolution, so diffraction is largely irrelevant. Hence some of us saying that r_1 is more limited by diffraction than r_2 even if peak sharpness is at the same aperture and r_1 is always greater than r_2.

Glad to see that you finally worked out the implications of the formula I gave you.

There's nothing there that I hadn't already said to you in other ways.

There most certainly is: The recognition that the point along the aperture range where peak image resolution occurs is independent of sensor resolution.

Substantively, I have only two comments: That peak sharpness will occur at exactly rather than approximately the same aperture and that "my/our" side is hardly the one to blame for any conceptual or terminological misunderstandings.

Anders, I avoided assigning blame to anyone and put it down to misunderstanding.

Yes I saw that. So I pointed out what was missing.

Don't be an ass in response.

I am not being an ass. You decidedly are by calling me one for absolutely no good reason.

If you substantively agree then you'll also agree that when the sensor resolution drops below a certain level, the aperture size won't perceptibly reduce the system resolution in the same way as a system with a sensor resolution similar to the lens resolution. And once you get to a high enough sensor resolution, diffraction overwhelmingly determines system resolution so that the whole system is effectively diffraction limited from wide-open.

What I substantively agree with is the following:

When the sensor resolution is much lower than lens resolution, variations in lens resolution will have but a small impact on image resolution (but still always such that increased lens resolution leads to increased image resolution). When it is the other way around (lens resolution much lower than sensor resolution), variations in lens resolution will have a much stronger impact on image resolution.

I tried to put that to you earlier, but you seemed dismissive of the idea. I'm unclear about whether that is still the case.

Where was I dismissive about the idea as I spelled it out above? Please provide specific references (the post/posts you have in mind and the passage/passages in those posts).

I think that the semantic argument that this discussion tends to end up in (maybe when the 'peak aperture shifts' people realise that they are wrong) is missing the point of the misinformation and damage to practice that this meme causes. Whatever they wish to decide that they really meant in this abstruse semantic discussion, there are many photographers who look at sites such as CiC, and posts here inspired by it, and end up believing that a low pixel count camera will give them sharper results at small apertures than will a high pixel count camera (absurdities like 'D800 unusable above f/5.6'). While I'm quite prepared to believe that wasn't what they really meant, I'd be more impressed had they made that point in the original posts where they claimed there was a 'diffraction limit'.

The prior thread on this interesting if somewhat heated discussion maxed out before I could post some additional information. I'll leave it to others to address the theoretical differences. I want to return to the issue of evidence. What evidence is there to support the proposition that pixel density does or does not affect the optimal aperture setting for a given sensor size? Some (including the calculator at the Cambridge in Colour website) indicate that the effect of diffraction sets in sooner for sensors with smaller (more) pixels than for sensors with larger (fewer) pixels. Others, including such luminaries as Bobn2, Anders, and Great Bustard, say, "no, no, no," pixel size is not a meaningful factor. So we have a pretty clear-cut difference of opinion that should be relatively easy to resolve with objective lens tests performed on same-sized sensors with differing pixel sizes/densities using the same lens tested at various aperture settings. Right?

I noted several times in the prior thread that DXOMark provides those tests. DXOMark is fairly unique in that most of its lens tests are conducted on a range of cameras. The problem is that it's clunky to find the appropriate data points and plot them. I also noted that in every case that I've looked at, the DXOMark data supports the proposition that pixel size is a non-factor. I've looked at over a dozen lenses tested on several dozen cameras and have yet to find contrary examples. However, it's a big database and I'd encourage others to look as well.

One objection that's been raised is that the DXOMark measurements are too coarse because they're only taken at full f/stop settings. The speculation is that the peak acutance might be occurring somewhere between the measured stops and therefore we can't rely on the DXOMark data. Having looked at enough examples, this struck me as pretty preposterous. Surely, the peaks wouldn't always average out exactly to the same major f-stop setting. Putting that aside, I thought that the data points we do have should be sufficient to infer with some degree of certainty exactly where the peaks occur. I'm certainly no math whiz (far from it!) but I can throw data into an Excel chart and see how the trendlines curve. Below are two examples chosen to illustrate the point. One is M43-based because - after all - that's what this forum is all about; and one is based on an extreme case comparing a 12mp camera (the Nikon D3) to a 36mp camera (the D800). Charts below. Fire away...
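The trendline idea can be made a bit more formal: fit a parabola to the whole-stop samples (in stop, i.e. log2, space) and read off its vertex. The acutance numbers below are invented stand-ins, not actual DXOMark measurements:

```python
import numpy as np

# Hypothetical acutance readings at whole stops (invented, not DXOMark data).
f_numbers = np.array([2.8, 4.0, 5.6, 8.0])
acutance = np.array([78.0, 82.0, 81.0, 74.0])

stops = np.log2(f_numbers)                # work in stop (log-aperture) space
a, b, c = np.polyfit(stops, acutance, 2)  # least-squares quadratic fit
peak_stop = -b / (2 * a)                  # vertex of the fitted parabola

print(f"inferred peak near f/{2 ** peak_stop:.1f}")  # lands between f/4 and f/5.6
```

With even three or four clean samples, a peak sitting between two measured stops shows up as an off-center vertex, so the coarseness objection is testable rather than fatal.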

Diffraction softening is unavoidable at any aperture, and worsens as the lens is stopped down. However, other factors mask the effects of the increasing diffraction softening: the increasing DOF and the lessening lens aberrations. As the DOF increases, more and more of the photo is rendered "in focus", making the photo appear sharper. In addition, as the aperture narrows, the aberrations in the lens lessen since more of the aperture is masked by the aperture blades. For wide apertures, the increasing DOF and lessening lens aberrations far outweigh the effects of diffraction softening. At small apertures, the reverse is true. In the interim (often, but not always, around a two stop interval), the two effects roughly cancel each other out, and the balance point for the edges typically lags behind the balance point for the center by around a stop (the edges usually suffer greater aberrations than the center). In fact, it is not uncommon for diffraction softening to be dominant right from wide open for lenses slower than f/5.6 equivalent on FF, and thus these lenses are sharpest wide open (for the portions of the scene within the DOF, of course).

In other words, there is absolutely no reason, whatsoever, to expect that an 85 / 1.4G would peak at the same aperture as a 25 / 1.4, regardless of the format or pixel size.

What you would want to do is compare the same lens on different sensors. What you will find is that not only is the peak aperture the same regardless of pixel size, but that the higher pixel count sensor will resolve more at every aperture.

In other words, pixel size has no effect on diffraction, but pixel count has a definite effect on resolution, although the resolution advantage of more pixels asymptotically vanishes as you stop the lens down.
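The diffraction half of the trade-off above is easy to put numbers on: the Airy disk's first-minimum diameter is 2.44·λ·N, a standard optics result that depends only on wavelength and f-number, not on the sensor behind the lens:

```python
# Airy disk first-minimum diameter: d = 2.44 * wavelength * N.
# It grows with f-number and is entirely independent of pixel size;
# smaller pixels merely sample the same blur more finely.
wavelength_um = 0.55  # green light, in micrometres

for n in (2.8, 4, 5.6, 8, 11, 16):
    d = 2.44 * wavelength_um * n  # micrometres
    print(f"f/{n}: Airy disk diameter ~ {d:.1f} um")
```

At f/8 the disk is already around 10.7 µm across, larger than the pixel pitch of most modern sensors, which is why finer pixels "see" the blur sooner without being any more blurred.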

I think that the semantic argument that this discussion tends to end up in (maybe when the 'peak aperture shifts' people realise that they are wrong) is missing the point of the misinformation and damage to practice that this meme causes. Whatever they wish to decide that they really meant in this abstruse semantic discussion, there are many photographers who look at sites such as CiC, and posts here inspired by it, and end up believing that a low pixel count camera will give them sharper results at small apertures than will a high pixel count camera (absurdities like 'D800 unusable above f/5.6'). While I'm quite prepared to believe that wasn't what they really meant, I'd be more impressed had they made that point in the original posts where they claimed there was a 'diffraction limit'.

I don't recall anyone saying that a low pixel count camera will be sharper than a high res camera (if by sharpness you mean overall image detail). In fact I recall plenty of instances where people have affirmed that although diffraction is visible earlier in high pixel count cameras, they still have more overall detail than a low pixel count camera. What you're saying sounds more like misinformation by suggesting that anyone is making that claim.

I think that the semantic argument that this discussion tends to end up in (maybe when the 'peak aperture shifts' people realise that they are wrong) is missing the point of the misinformation and damage to practice that this meme causes. Whatever they wish to decide that they really meant in this abstruse semantic discussion, there are many photographers who look at sites such as CiC, and posts here inspired by it, and end up believing that a low pixel count camera will give them sharper results at small apertures than will a high pixel count camera (absurdities like 'D800 unusable above f/5.6'). While I'm quite prepared to believe that wasn't what they really meant, I'd be more impressed had they made that point in the original posts where they claimed there was a 'diffraction limit'.

I don't recall anyone saying that a low pixel count camera will be sharper than a high res camera (if by sharpness you mean overall image detail).

maybe that hasn't been said explicitly (can't be bothered to go through the whole thread to search) but undoubtedly that is how this bogus 'diffraction limit' gets understood.

In fact I recall plenty of instances where people have affirmed that although diffraction is visible earlier in high pixel count cameras, they still have more overall detail than a low pixel count camera. What you're saying sounds more like misinformation by suggesting that anyone is making that claim.

I'm basing on what I see understood by it. If we all agree that you still get more detail, what is the point of worrying about this 'diffraction limit' at all?

I think that the semantic argument that this discussion tends to end up in (maybe when the 'peak aperture shifts' people realise that they are wrong) is missing the point of the misinformation and damage to practice that this meme causes. Whatever they wish to decide that they really meant in this abstruse semantic discussion, there are many photographers who look at sites such as CiC, and posts here inspired by it, and end up believing that a low pixel count camera will give them sharper results at small apertures than will a high pixel count camera (absurdities like 'D800 unusable above f/5.6'). While I'm quite prepared to believe that wasn't what they really meant, I'd be more impressed had they made that point in the original posts where they claimed there was a 'diffraction limit'.

I don't recall anyone saying that a low pixel count camera will be sharper than a high res camera (if by sharpness you mean overall image detail).

maybe that hasn't been said explicitly (can't be bothered to go through the whole thread to search) but undoubtedly that is how this bogus 'diffraction limit' gets understood.

Exactly. For example, to the uneducated on diffraction, if you say one system is "diffraction limited at f/4" and another system is "diffraction limited at f/5.6", they take it to mean that the system that is "diffraction limited at f/4" will deliver a lower resolution photo at f/5.6 than the system that is "diffraction limited at f/5.6".

In fact I recall plenty of instances where people have affirmed that although diffraction is visible earlier in high pixel count cameras, they still have more overall detail than a low pixel count camera. What you're saying sounds more like misinformation by suggesting that anyone is making that claim.

I'm basing on what I see understood by it. If we all agree that you still get more detail, what is the point of worrying about this 'diffraction limit' at all?

Exactly. First of all, for a given lens, if the peak aperture is f/4 on one sensor, then it will be f/4 on all other sensors. More importantly, however, is that it doesn't matter what the peak aperture is unless the whole of what you want in the DOF is within the DOF.

Specifically, if you need f/8 to get everything you want within the DOF, then f/8 resolves better than f/4, even if f/4 is the peak aperture of the system. On the other hand, if f/4 resolves significantly better than f/8, then the photographer may choose instead to use focus stacking, scene permitting.

So, to recap, if the peak aperture of a lens on one sensor is f/4, then it will be f/4 for all other sensors, but the higher MP sensor will resolve more detail, all else equal. However, if DOF constraints require a smaller aperture, then the smaller aperture will give better results than the peak aperture for a single exposure.

There's nothing there that I hadn't already said to you in other ways.

There most certainly is: The recognition that the point along the aperture range where peak image resolution occurs is independent of sensor resolution.

I never denied that. I said that at low resolutions it's more of a plateau than a peak, so you effectively get the same resolution at smaller apertures.

Substantively, I have only two comments: That peak sharpness will occur at exactly rather than approximately the same aperture and that "my/our" side is hardly the one to blame for any conceptual or terminological misunderstandings.

Anders, I avoided assigning blame to anyone and put it down to misunderstanding.

Yes I saw that. So I pointed out what was missing.

Don't be an ass in response.

I am not being an ass. You decidedly are by calling me one for absolutely no good reason.

You felt it necessary to assign blame and point fingers when I had hoped the conversation could have a fresh start.

If you substantively agree then you'll also agree that when the sensor resolution drops below a certain level, the aperture size won't perceptibly reduce the system resolution in the same way as a system with a sensor resolution similar to the lens resolution. And once you get to a high enough sensor resolution, diffraction overwhelmingly determines system resolution so that the whole system is effectively diffraction limited from wide-open.

What I substantively agree with is the following:

When the sensor resolution is much lower than lens resolution, variations in lens resolution will have but a small impact on image resolution (but still always such that increased lens resolution leads to increased image resolution). When it is the other way around (lens resolution much lower than sensor resolution), variations in lens resolution will have a much stronger impact on image resolution.

I tried to put that to you earlier, but you seemed dismissive of the idea. I'm unclear about whether that is still the case.

Where was I dismissive about the idea as I spelled it out above? Please provide specific references (the post/posts you have in mind and the passage/passages in those posts).

I'm not interested in dissecting the previous discussion. I'd rather go forwards than backwards. What I said before, and think is most relevant to the present discussion, is that lower resolution sensors have more of a plateau than a peak, so you effectively get the same resolution at smaller apertures as you would at the theoretical peak resolution aperture. Therefore a low res sensor isn't diffraction limited at the same aperture as a high res camera.

Let me illustrate what I mean. Here's a hypothetical lens attached to a number of different hypothetical sensors covering a large range of resolutions. The units for resolution are the same throughout and the table below calculates the systems resolution for each aperture for each sensor.

When we plot the data on a chart, it looks like this:

The highest resolution sensors are virtually indistinguishable from the lens resolution. Below is a table showing what percentage of lens resolution is reached at different apertures by different sensors.

With the low res sensors, resolution is pretty flat across the range. The table below shows the percentage of sensor resolution used at each aperture by each sensor.

If we looked at the numbers with a high enough precision, we would indeed find peak resolution at the same aperture for every lens. In practice though, there is no noticeable difference in resolution at any aperture for the lowest resolution sensors. With high res sensors, you'll gain sharpness by using the peak aperture, but as sensor resolution decreases, you get less and less advantage from the peak aperture and suffer less of a penalty for stopping down.

Here's one last table that shows the % of peak resolution that you get at each aperture:

If we said that a resolution difference of 0.1% was the limit of perception, then for all practical purposes, peak resolution is indeed a plateau that stretches over several apertures for lower resolution sensors.

Mathematically of course, there is always a peak aperture independent of the sensor. And in practice, the relationship between lens and sensor resolution for a system may be such that no combination of lens and sensor produces a plateau.
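Since the tables themselves were posted as images, here is a small reproduction of the kind of calculation being described, again assuming a quadrature combination of lens and sensor resolution and entirely invented numbers:

```python
import math

def system_res(lens, sensor):
    # Quadrature combination (illustrative approximation only).
    return 1.0 / math.sqrt(1.0 / lens**2 + 1.0 / sensor**2)

lens = {2.8: 90, 4: 100, 5.6: 95, 8: 80}  # hypothetical lens, peak at f/4

for sensor in (20, 50, 400):  # same sensor size, different resolutions
    res = {f: system_res(l, sensor) for f, l in lens.items()}
    peak = max(res.values())
    pct_of_peak = {f: round(100 * r / peak, 2) for f, r in res.items()}
    print(f"sensor {sensor}: {pct_of_peak}")
```

The lowest-resolution sensor stays within roughly 1% of its peak across the whole range (the plateau), while the highest-resolution sensor swings by around 20%, even though the peak aperture is f/4 in every row.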

I think that the semantic argument that this discussion tends to end up in (maybe when the 'peak aperture shifts' people realise that they are wrong) is missing the point of the misinformation and damage to practice that this meme causes. Whatever they wish to decide that they really meant in this abstruse semantic discussion, there are many photographers who look at sites such as CiC, and posts here inspired by it, and end up believing that a low pixel count camera will give them sharper results at small apertures than will a high pixel count camera (absurdities like 'D800 unusable above f/5.6'). While I'm quite prepared to believe that wasn't what they really meant, I'd be more impressed had they made that point in the original posts where they claimed there was a 'diffraction limit'.

I don't recall anyone saying that a low pixel count camera will be sharper than a high res camera (if by sharpness you mean overall image detail).

maybe that hasn't been said explicitly (can't be bothered to go through the whole thread to search) but undoubtedly that is how this bogus 'diffraction limit' gets understood.

The diffraction limit exists, it's just not well understood.

In fact I recall plenty of instances where people have affirmed that although diffraction is visible earlier in high pixel count cameras, they still have more overall detail than a low pixel count camera. What you're saying sounds more like misinformation by suggesting that anyone is making that claim.

I'm basing on what I see understood by it. If we all agree that you still get more detail, what is the point of worrying about this 'diffraction limit' at all?

I don't worry about it personally. But someone asked a question and I found it interesting to think and write about.

It is a genuine concern for those working with telescopes and microscopes more so than ordinary photographers.

The prior thread on this interesting if somewhat heated discussion maxed out before I could post some additional information. I'll leave it to others to address the theoretical differences. I want to return to the issue of evidence. What evidence is there to support the proposition that pixel density does or does not affect the optimal aperture setting for a given sensor size? Some (including the calculator at the Cambridge in Colour website) indicate that the effect of diffraction sets in sooner for sensors with smaller (more) pixels than for sensors with larger (fewer) pixels. Others, including such luminaries as Bobn2, Anders, and Great Bustard, say, "no, no, no," pixel size is not a meaningful factor. So we have a pretty clear-cut difference of opinion that should be relatively easy to resolve with objective lens tests performed on same-sized sensors with differing pixel sizes/densities using the same lens tested at various aperture settings. Right?

I noted several times in the prior thread that DXOMark provides those tests. DXOMark is fairly unique in that most of its lens tests are conducted on a range of cameras. The problem is that it's clunky to find the appropriate data points and plot them. I also noted that in every case that I've looked at, the DXOMark data supports the proposition that pixel size is a non-factor. I've looked at over a dozen lenses tested on several dozen cameras and have yet to find contrary examples. However, it's a big database and I'd encourage others to look as well.

One objection that's been raised is that the DXOMark measurements are too coarse because they're only taken at full f/stop settings. The speculation is that the peak acutance might be occurring somewhere between the measured stops and therefore we can't rely on the DXOMark data. Having looked at enough examples, this struck me as pretty preposterous. Surely, the peaks wouldn't always average out exactly to the same major f-stop setting. Putting that aside, I thought that the data points we do have should be sufficient to infer with some degree of certainty exactly where the peaks occur. I'm certainly no math whiz (far from it!) but I can throw data into an Excel chart and see how the trendlines curve. Below are two examples chosen to illustrate the point. One is M43-based because - after all - that's what this forum is all about; and one is based on an extreme case comparing a 12mp camera (the Nikon D3) to a 36mp camera (the D800). Charts below. Fire away...

Diffraction softening is unavoidable at any aperture, and worsens as the lens is stopped down. However, other factors mask the effects of the increasing diffraction softening: the increasing DOF and the lessening lens aberrations. As the DOF increases, more and more of the photo is rendered "in focus", making the photo appear sharper. In addition, as the aperture narrows, the aberrations in the lens lessen since more of the aperture is masked by the aperture blades. For wide apertures, the increasing DOF and lessening lens aberrations far outweigh the effects of diffraction softening. At small apertures, the reverse is true. In the interim (often, but not always, around a two stop interval), the two effects roughly cancel each other out, and the balance point for the edges typically lags behind the balance point for the center by around a stop (the edges usually suffer greater aberrations than the center). In fact, it is not uncommon for diffraction softening to be dominant right from wide open for lenses slower than f/5.6 equivalent on FF, and thus these lenses are sharpest wide open (for the portions of the scene within the DOF, of course).

In other words, there is absolutely no reason, whatsoever, to expect that an 85 / 1.4G would peak at the same aperture as a 25 / 1.4, regardless of the format or pixel size.

I wasn't comparing the Nikkor to the PanaLeica. They're two separate charts with the comparison in each chart between two cameras with the same sensor size but different pixel sizes/counts. I picked the Nikon comparison because of the significant difference in pixel size between the D3 and the D800 and then picked the Oly comparison because this is an M4/3 site.

That's not the error in my analysis. The error is just plain bad math as Golly and Anders have informed me. That's what I get for trying to waltz with the leftbrains here!

What you would want to do is compare the same lens on different sensors. What you will find is that not only is the peak aperture the same regardless of pixel size, but that the higher pixel count sensor will resolve more at every aperture.

Which is what I tried to show with the vertical blue line in each chart.

In other words, pixel size has no effect on diffraction, but pixel count has a definite effect on resolution, although the resolution advantage of more pixels asymptotically vanishes as you stop the lens down.