As always we have descended into "I-need-120MP-on-a-tiny-area-cuz-I-can-argue-it-will-work".

Prior to the 1DX and 5D-III release, the same crowd (you know who you are, you have spent hours typing pages on here selling the same old 3 day old fish) screamed for 40+ MP and were bitterly disappointed when Canon went the low MP route for both bodies.

It's not about what you "want"... it's about what they can sell profitably in a competitive market. Most pros own a 1DX... not a 7D... so much for the high-MP whining. Every flagship that Nikon and Canon have released so far has been lower MP, while they release high-MP APS-C and consumer-grade bodies for the "My-MP-is-Bigger-than-your-MP" crowd.

I guess learning comes a tad slow... but there is no harm in asking....please continue

I wouldn't disagree with that. Optical resolution specifically refers to a "system's" ability to resolve detail of "some subject". If, by magnifying the subject, the system can resolve part of the subject in more detail, then yes, optical resolution increased. That is, effectively, magnification. However, the spatial resolution of the system does not necessarily change, and it is spatial resolution we are discussing.

Technically speaking, if you can increase focal length while maintaining or increasing aperture, then spatial resolution could concurrently increase along with optical resolution (magnification). But the process of adding a TC precludes that option, since focal length increases but the entrance pupil remains the same, therefore necessitating a smaller relative aperture, which in turn dictates a lower spatial resolution for the lens (which is only one component of the system... sensor spatial resolution remains the same, thus... sensor outresolves lens.)

...If you use a TC or multiple TC's that reduce your aperture to f/8, then according to the laws of physics spatial resolution becomes limited (specifically to around 86lp/mm)

You're referring to diffraction artifacts here, but, for example, a bad 1.4x TC on an f/2.8 lens introduces new optical elements, and hence the potential for a reduction in the resolution of the optical system, while still avoiding any diffraction-related side effects caused by a narrow aperture.

The additional optics have the potential to introduce new optical aberrations, which themselves have the potential to reduce spatial resolution. Assuming the optics of a bad TC introduce enough optical aberrations to overpower the effects of diffraction...well, that only means that you have even LESS resolution than if you were working with a diffraction-limited lens, not more. Optical aberrations have the potential to be far more devastating to IQ than diffraction, so even with an f/2.8 lens, if you are using a crappy 1.4x TC with the lens wide open, I'd expect the results to be worse than if you used a good TC on an f/4 lens.

As for the question posed by PerfectSage, there does appear to be a real and practical answer, or at least a rule of thumb, which would guide one toward the goal of taking advantage of all of that 116lp/mm resolving power of the 7D sensor, and that is to choose optics that will present an image to the sensor with enough inherent detail. If the source image truly does not contain the detail, the sensor will not find any that isn't there. Whether that goal is a good one can of course be debated.

Technically speaking, there is an asymptotic relationship in terms of spatial resolution. You can never actually achieve the same spatial resolution as the highest resolving component in an optical system. As you approach it, you begin to experience diminishing returns. Let's say you have a lens capable of resolving 86lp/mm. Nothing you ever do can allow you to resolve 86.1lp/mm... your upper bound is the resolution of the lens itself. At best, you could reach 85.99999999999... lp/mm, assuming you had a sensor with literally infinite resolution. You would need something like an f/0.3 lens to resolve around 115lp/mm, and approach the 116lp/mm of the 7D. Total "system spatial resolution" is derived from the RMS of the "blur circle" of each component in an optical system: the size of the Airy disc at a given aperture in the lens, the blur introduced by any and all TC's, the size of a pixel in the sensor, and, if you want to get really accurate, the size of the blur introduced by the low-pass and IR cut filters. Taking the RMS of each of those will give you the size of the blurry disc of a single point light source resolved by the entire system. Taking the reciprocal of that divided by two will give you the spatial resolution of the system as a whole in lp/mm.
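The RMS combination described above can be sketched in a few lines of Python (a hypothetical helper, using the post's own convention that resolution in lp/mm is the reciprocal of twice the blur-circle diameter in mm):

```python
import math

def blur_mm(lp_per_mm):
    """Blur-circle diameter (mm) for a component resolving lp_per_mm,
    using the post's convention: resolution = 1 / (2 * blur)."""
    return 1.0 / (2.0 * lp_per_mm)

def system_lp_per_mm(*component_lp_mm):
    """Combine component blurs in quadrature (RMS) and convert back
    to a single system spatial resolution in lp/mm."""
    total_blur = math.sqrt(sum(blur_mm(r) ** 2 for r in component_lp_mm))
    return 1.0 / (2.0 * total_blur)

# An 86lp/mm (f/8 diffraction-limited) lens on a 116lp/mm sensor:
print(round(system_lp_per_mm(86, 116), 1))  # → 69.1
```

Note how the system lands well below the weaker component on its own, consistent with the 70-90lp/mm ballpark quoted for a good lens on the 7D.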

With a good lens and the 7D, the actual system spatial resolution is only going to be around 70-90lp/mm at best, and probably closer to 50lp/mm on average (accounting for varying apertures and varying lenses of varying quality.)

in which Jrista reports that for the 116 lp/mm resolving power of the 18mp sensor itself, "The extremely high resolution of the 7D also means that outside of the best of the most recent Canon L-series lenses, namely Mark II's and new designs like the 8-15mm L Fisheye, the 7D is very likely outresolving most lenses except for their very centers" Sorry Jrista I yanked that out of the above thread without quoting properly.

It's baloney, though. I use TCs on lenses on those 18MP 1.6-crop sensors, and the TCs greatly enhance detail retained compared to the bare lens. I've used 1.4x and 2x stacked on a T2i with a 70-200/2.8L IS II - that's the equivalent of a (18MP)*(1.6^2)*(2^2)*(1.4^2) = 369MP full-frame camera shooting through the bare lens. Even a 100-400L will retain more detail using a 2x versus a 1.4x on that sensor. That's like a 184MP full-frame sensor.
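Lee Jay's equivalent-megapixel arithmetic can be checked directly (a hypothetical sketch; the 369MP figure works out if the "1.4x" TC is treated as exactly √2):

```python
def equivalent_ff_mp(sensor_mp, crop_factor=1.0, tc_factor=1.0):
    """Full-frame megapixels needed to match the pixel density
    (arc-seconds per pixel) of sensor_mp behind the given crop
    factor and total teleconverter magnification."""
    return sensor_mp * crop_factor**2 * tc_factor**2

# 18MP 1.6-crop body, 2x and 1.4x (sqrt(2)) TCs stacked:
print(round(equivalent_ff_mp(18, 1.6, 2 * 2**0.5)))  # → 369
# Same body, 2x TC only:
print(round(equivalent_ff_mp(18, 1.6, 2)))           # → 184
```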

Secondly, adding pixels will always retain more detail through the same optics than having fewer. The function is asymptotic. There's no "the sensor is out-resolving and so there's no point" type limit beyond which you can't cross.

Hello there, and as we say in Sweden, I recognize you again.

No Jrista, you can get out more resolution from a good lens

Again, you are missing the point of debate. I'm not talking about simply getting more resolution by using a better lens. I'm debating the notion that adding a TC is the same exact thing as using a higher resolution sensor, or that the two can be discussed in terms of megapixels.

Am happy that this is in the 1D-style bodies, as I believe such high MP is for high-end use and pretty useless for general photographers.

What elitist hogwash! -100 for such an arrogant and ill-conceived post. I for one don't want an overbulky 1D-style body, and many others do not want them either; there is no reason or need for them these days. In fact, the smaller 5D-style body is better for a high-MP audience, since it's more compact and lighter: less bulk and weight to carry around hiking to get to that magic landscape shooting location to take advantage of the billions of pixels. And in the studio, with that many megapixels, it's going to be shot like medium format: on a tripod and tethered, capturing the whole area and then cropping later as desired.

Both Canon and Nikon could put out a sensor with higher resolution, but what would they gain from it?

My argument was not to support the insane MP requests by a vocal group of enthusiasts... the post was to say that pros and Canon know MP evolves in the context of sensor size and the demands of the pro market's quality needs (including DR, fps, ISO performance). Both Nikon and Canon have shown by their choice of low MP for pro bodies which faction drives their innovation on flagships. It's NOT high MP.

They do release "enthusiast" bodies with pumped-up MP... but apparently they aren't moving quickly enough to cram more MP into that tiny wafer of an APS-C sensor for the high-MP crowd.

My point is Canon will move at its own pace, not based on the vocal few who think more is always better.

You still have a very skewed idea, or simply bad terminology, in describing what you are actually experiencing with a TC, though, Lee Jay. Your previous argument in that other thread, that the virtual image of the sensor shrinks when it is observed by looking through the lens into the camera is not indicative of what is really occurring. A teleconverter does not change how many megapixels you have, nor does it change the resolution of the lens.

No, but it has the same effect as doing either one.

The effects are different.

They are the same. Smaller pixels and longer focal length, both given the same aperture diameter, do the same thing. See for yourself:

If you have an 18mp sensor and a 36mp sensor and use the same lens on both, switching from the 18mp sensor to the 36mp sensor potentially doubles spatial resolution for the entire area of the object being photographed. (Let's assume for a moment that you have a perfect lens at a very wide aperture, so diffraction is not a problem.) On the other hand, adding a 2x teleconverter has the effect of enlarging the subject, such that a smaller area of that subject is being photographed at the same spatial resolution.

That's a separate issue (FOV/vignetting) having nothing to do with resolving power.

As we discussed in our last debate, the spatial resolution of whatever is projected by the lens, as well as the spatial resolution of the sensor, are pretty limited. If you use a TC or multiple TC's that reduce your aperture to f/8, then according to the laws of physics spatial resolution becomes limited (specifically to around 86lp/mm), which is WELL below the fixed luminance spatial resolution of pretty much any APS-C sensor these days.

I think what you are doing is accounting for the "entire" size of your subject. If you magnify a part of the moon such that only that one part fits on an 18mp sensor, the "effective size of the whole moon if it were to be measured in megapixels would require a 184mp FF sensor to image in its entirety." You could look at it that way, but it is extremely confusing, and running about stating "It's like having a 369MP FF sensor" is not really true, and I WILL argue that point whenever you bring it up.

That's fine, and you'll be wrong each time. This is the way people do it in astrophotography, where resolution is what you are after. "Image scale" is determined by arc-second per pixel, and the lens is measured by aperture diameter. TCs leave the aperture unchanged and decrease arc-seconds per pixel. More pixels leave the aperture unchanged and decrease arc-seconds per pixel. Same thing.
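The image-scale bookkeeping both sides are using can be verified with the standard astrophotography formula (a sketch; image scale in arcsec/pixel = 206.265 × pixel pitch in µm ÷ focal length in mm, with 4.3µm / 400mm chosen purely for illustration):

```python
def arcsec_per_pixel(pixel_um, focal_mm):
    """Astro image scale: 206265 arcsec per radian, scaled by
    pixel pitch over focal length."""
    return 206.265 * pixel_um / focal_mm

base        = arcsec_per_pixel(4.3, 400)   # bare lens
with_2x_tc  = arcsec_per_pixel(4.3, 800)   # same lens + 2x TC
half_pixels = arcsec_per_pixel(2.15, 400)  # same lens, half the pixel pitch
print(round(base, 2), round(with_2x_tc, 2), round(half_pixels, 2))  # → 2.22 1.11 1.11
```

The 2x TC and the half-pitch sensor do indeed give identical arc-seconds per pixel; the remaining dispute in the thread is about field of view, not this number.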

The arc seconds per pixel remain the same, but the result is not the same.

As I demonstrated above with actual samples, it is.

In one case, arc seconds per pixel decrease for the same number of pixels. In the other case, arc seconds per pixel decrease for MORE PIXELS. They are definitely different things.

Only because of FOV, which is another topic than resolving power.

The only case where they would be the same is if you always and explicitly included the notion that you were CROPPING the larger sensor's image to the same area and dimensions of the smaller sensor. In which case, and only in which case, would the results be exactly the same thing.

Good, I'm glad you finally agree with me. Seems we're done here.

In every case, using a better sensor that actually has more resolution will always be better than using a teleconverter, because you can resolve more detail of the same subject.

And because the smaller pixels don't have the optical aberrations of a teleconverter. However, given that I can't do anything to my camera to reduce its pixel size, adding a teleconverter is the closest thing to simulating the same effect, albeit with a slightly lower performance due to aberrations.

You're also missing a crucial bit of this.

Let's say your lens is capable of 100lp/mm at MTF 50. If your sensor already resolves 100lp/mm, you constantly imply that there's not much point in going to smaller pixels. That is utterly and totally false.

My point is Canon will move at its own pace, not based on the vocal few who think more is always better.

And more power to them, too! I'd rather have a well-thought-out high-MP camera that offered users useful features than something that turned out to be far too difficult to use, or produced images of such an immense size as to be unusable for most photographers.

Question from an 18MP user who rarely feels the need for more. How many megapixels is enough for the high-MP hogs in the 35mm sensor format? Let's take two L lenses specifically... the EF 24mm f/1.4L II and the 70-200mm f/2.8L II. (What you will use it for is a totally different question, but let's keep it simple.)

I find it funny that people are saying pros would not want this. Anyone that has been in a working media centre at a major international sporting event knows just how much cropping often goes into the pics they use.

Plus, computers these days - even laptops - can handle such large sizes. And if you are spending US$6-10,000 on a body you most likely can afford to get a decent laptop in the config needed for such file sizes. Remember, you may take thousands of pictures at an event, but you do not use them all!!

I would welcome such a camera in a 1-series body. The main thing that would interest me is how far they can push the fps in such a monster. If it is just 4-5 I would not be interested, but if they can push it to 6-8 or even more ;-) then ok.

You still have a very skewed idea, or simply bad terminology, in describing what you are actually experiencing with a TC, though, Lee Jay. Your previous argument in that other thread, that the virtual image of the sensor shrinks when it is observed by looking through the lens into the camera is not indicative of what is really occurring. A teleconverter does not change how many megapixels you have, nor does it change the resolution of the lens.

No, but it has the same effect as doing either one.

The effects are different.

They are the same. Smaller pixels and longer focal length, both given the same aperture diameter, do the same thing. See for yourself:

You are only thinking pixel size, which I guess is one way to look at it. The OUTPUT of the two systems is entirely different, though. In the case of a denser sensor, you get a more detailed image of a LARGER area of your subject. In the case of a less dense sensor combined with a TC, you get a more detailed image of a SMALLER area of your subject. The two are not the same, even if in an abstract context the arc seconds per pixel is equivalent. In terms of the actual product of the two systems, the higher density sensor is always the better system. Additionally, adding a TC does not increase "spatial" resolution, it increases "system" resolution, which is a different concept.

If you have an 18mp sensor and a 36mp sensor and use the same lens on both, switching from the 18mp sensor to the 36mp sensor potentially doubles spatial resolution for the entire area of the object being photographed. (Let's assume for a moment that you have a perfect lens at a very wide aperture, so diffraction is not a problem.) On the other hand, adding a 2x teleconverter has the effect of enlarging the subject, such that a smaller area of that subject is being photographed at the same spatial resolution.

That's a separate issue (FOV/vignetting) having nothing to do with resolving power.

If you are referring to optical system resolution, rather than spatial resolution, then I agree. However you keep applying the units "lp/mm" to system resolution, which feels like a major conflation to me. Assuming the optical spatial resolution of the entire lens setup (original lens + TC) remains the same (which is generally impossible when adding a TC, as it reduces your RELATIVE aperture, which implicitly means the optical spatial resolution of THE ENTIRE LENS SETUP is reduced), the final system spatial resolution will be lower than that of the lens or the sensor, as it is the root mean square of the blur of each individual component.

As we discussed in our last debate, the spatial resolution of whatever is projected by the lens, as well as the spatial resolution of the sensor, are pretty limited. If you use a TC or multiple TC's that reduce your aperture to f/8, then according to the laws of physics spatial resolution becomes limited (specifically to around 86lp/mm), which is WELL below the fixed luminance spatial resolution of pretty much any APS-C sensor these days.

f = 1/(0.00055mm × 8) = 227lp/mm at MTF = 0. Using MTF 50, as you did above, is arbitrary and of little value in this context.
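Both the 227lp/mm cutoff quoted here and the 86lp/mm MTF 50 figure used earlier in the thread fall out of the same diffraction arithmetic (a sketch; the 0.38 scaling factor for MTF 50 is a commonly cited approximation for a diffraction-limited lens, not an exact constant):

```python
def diffraction_cutoff_lpmm(wavelength_mm, f_number):
    """Cutoff frequency (MTF = 0) of a diffraction-limited lens: 1/(lambda*N)."""
    return 1.0 / (wavelength_mm * f_number)

def diffraction_mtf50_lpmm(wavelength_mm, f_number):
    """Approximate MTF 50 frequency, ~0.38/(lambda*N) (assumed scaling)."""
    return 0.38 / (wavelength_mm * f_number)

# Green light (550nm = 0.00055mm) at f/8:
print(round(diffraction_cutoff_lpmm(0.00055, 8)))  # → 227
print(round(diffraction_mtf50_lpmm(0.00055, 8)))   # → 86
```

So the two camps are quoting the same physics at different contrast thresholds, which is much of what this argument is about.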

The notion that a consumer-grade camera can resolve anything at MTF ZERO is ludicrous.

Yeah...that's the definition of MTF 0. It's the asymptote.

Yes, it is the asymptote. It is also a purely theoretical construct. I read an interesting quote today:

Quote from: various

In theory there is no difference between theory and practice. In practice there is.

You have to take into account the realities that exist in practice that don't exist in pure theory. In reality, the average consumer-grade camera couldn't resolve detail at MTF 0% with a strong enough signal for it to be differentiated from noise (photon noise). At MTF 9%, you would be hard pressed to know for certain what detail was noise and what wasn't; the two are going to interfere with each other a lot.

In the context of scientific astrophotography, the imaging devices used are orders of magnitude more expensive than a consumer-grade sensor. They are supercooled, have quantum efficiencies that surpass 80%, and S/N ratios that would make some of the Nikon D800 fanboys' eyes pop out of their skulls. The analysis of star Airy patterns at MTF 0% requires some pretty sophisticated, and incredibly expensive, equipment. It isn't valid in the context of discussions about consumer-grade gear that is barely reaching 50% Q.E. and has relatively atrocious S/N ratios.

The notion that a camera can usefully resolve anything at MTF 9% (Reighley) is also pretty ridiculous.

Except that we do so all the time, in astro stuff.

You need to back that up with some actual examples that are properly analyzed for MTF. MTF 0% means 0% contrast. At that point, you are literally analyzing the specific shape of the spot resolved for a point light source like a star to make EDUCATED GUESSES about the nature of a star. Is that star a single star? Is it a binary star? Might it be a tertiary star system? Those analyses are also performed algorithmically by computers, and the stars need to be isolated against a dark backdrop, so the shape and waveform of the Airy disc that was resolved is as clear and separate from background noise as possible. It has no application in general "photography", where we are resolving a system of point light sources to create a continuous signal. Even in the case of hobbyist astrophotography, you are not resolving point light sources for a scientific purpose... you are resolving stars (plural), nebulae, galaxies, novae, etc. to produce a photograph for aesthetic purposes.

Different contexts. And, therefore, different STANDARD systems by which we measure spatial resolution. I use MTF 50% because it is the industry-standard MTF that major products, like Imatest, use.

MTF 50% is of specific value because MTF 50% IS STILL and WILL CONTINUE to be used today as the standard benchmark for image resolution of meaningful sharpness, either by a lens or a sensor.

Meaningful image sharpness is a meaningless term. People who espouse this ridiculous property also claim that reducing pixel count gets you a sharper image, which is impossible.

If no one ever complained about the sharpness of properly stabilized photos taken with a camera like the 7D, then I would agree with you. The simple fact of the matter is that once your sensor's spatial resolution starts to outresolve your subject, details DO appear less sharp than if the same photo, with the same lens, were taken with a sensor with larger pixels. It is an AESTHETIC, real-world thing, not a theoretical thing. It is a matter of perception, not statistical measurement. Statistically, no matter how you slice and dice it, the 7D resolves more, and has the capability to resolve more detail, as sharply as a sensor with larger pixels. Perceptually, the 7D tends to produce softer results in a non-normalized context (i.e. pixel peeping) than sensors with larger pixels.

That is 100% factually incorrect. The optics DID change...you added a teleconverter.

I added it behind the lens. It doesn't change the performance of the lens at all.

Ok, now you are mincing words. Let's be specific and accurate here. The sensor, that tiny, wondrous little device sitting inside the mirror box of your camera... that is what is actually resolving the image projected by your "lens setup". It doesn't care if the "original lens" remained unchanged. It cares about the entire "lens setup", which includes not only the "original lens", but also a "teleconverter". The entire "lens setup" is what matters in the context of the discussion at hand... the SPATIAL resolution of sensors, lens setups, and the optical system as a whole.

In the context of the SENSOR, if you add a TELECONVERTER to a LENS, the optics ABSOLUTELY DO CHANGE. The sensor doesn't sit between the lens and the TC... the sensor sits behind BOTH the lens and the TC. Let's stop playing games now.

The only case where they would be the same is if you always and explicitly included the notion that you were CROPPING the larger sensor's image to the same area and dimensions of the smaller sensor. In which case, and only in which case, would the results be exactly the same thing.

In every case, using a better sensor that actually has more resolution will always be better than using a teleconverter, because you can resolve more detail of the same subject.

And because the smaller pixels don't have the optical aberrations of a teleconverter. However, given that I can't do anything to my camera to reduce its pixel size, adding a teleconverter is the closest thing to simulating the same effect, albeit with a slightly lower performance due to aberrations.

You're also missing a crucial bit of this.

I'm not missing anything. You seem to still be missing my point. Using a teleconverter will produce the same arc seconds per pixel, but for a smaller area of the subject. My debate with you is the way you directly equate the use of a TC with the use of a higher resolution sensor. The two are not equivalent, not by orders of magnitude. One may "simulate" the other, however it is quite plain and simply not "as good" as the alternative. You lose something with a TC that you do not lose with a higher resolution sensor. Claiming that a TC is "the exact same thing" based solely on the notion that arc seconds per pixel is the same is factually incorrect, and highly misleading.

Let's say your lens is capable of 100lp/mm at MTF 50. If your sensor already resolves 100lp/mm, you constantly imply that there's not much point in going to smaller pixels. That is utterly and totally false.

No, I don't imply that there is no point in going to smaller pixels. I believe I even did the math on this the last time we had this debate. My only argument is that you begin to experience diminishing returns by going to smaller pixels, particularly within the context of lenses below a certain aperture (I believe I used f/4 as a rough marker for the cutoff point, below which you definitely get diminishing returns as sensor resolution increases.)

For discussion's sake, let's take your 100lp/mm lens and 100lp/mm sensor. In terms of individual blur, both exhibit a 5 micron blur circle. The system blur would be:

sqrt(0.005^2 + 0.005^2)

That comes out to a system blur of 0.007mm, or in terms of lp/mm, about 70lp/mm. Now, let's say we double the sensor's resolution. We now have a sensor capable of 200lp/mm. Our system blur changes, but not by the same ratio: it's 0.0056mm, or about 89lp/mm. Let's say we switch to a 400lp/mm sensor. We are up to 97lp/mm. We still haven't reached 100lp/mm, but our sensor is now a 552mp sensor with 1.25um pixels! Diminishing returns. Even at a mere 200lp/mm, our sensor is 138mp with 2.5um pixels.
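The megapixel figures above can be reproduced from the lp/mm numbers (a hypothetical sketch, assuming a 36x24mm full-frame sensor and the Nyquist relation pixel pitch = 1/(2 × lp/mm)):

```python
def ff_megapixels(lp_per_mm):
    """Full-frame (36x24mm) pixel count, in megapixels, for a sensor
    whose Nyquist limit is lp_per_mm (pitch = 1 / (2 * lp_per_mm) mm)."""
    pitch_mm = 1.0 / (2.0 * lp_per_mm)
    return (36.0 / pitch_mm) * (24.0 / pitch_mm) / 1e6

print(round(ff_megapixels(200)))  # ~138 MP at 2.5um pixels
print(round(ff_megapixels(400)))  # ~553 MP at 1.25um pixels
```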

My argument before was that it is a highly costly endeavor to improve system resolution by increasing sensor resolution. Especially given the fact that by the time we reach around f/3.5, diffraction (in a literally perfect lens) is limiting the maximum spatial resolution of whatever optical setup we are using to below 200lp/mm MTF 50 anyway, so using a 500mp sensor is rather meaningless outside of the context of optically perfect (diffraction-limited) ultra-fast lenses. Even a $2500 50mm f/1.2 lens exhibits some optical aberrations as narrow as f/3.5-f/4.

Oh, and BTW... astrophotography, assuming that with a subject such as the moon you could resolve details of lower contrast than 50% and not end up having to pose the question "Is that detail noise, or is it the moon?", is a rather specialized case for a small niche of photographers. In the broader context of "all photography", MTF 50 has to be the baseline for measurement, so insisting we use MTF 0% or MTF 9% is asking me to make evaluations or apply mathematics to a very tiny percentage of photographers overall. (Assuming you don't actually have to pose that question... noise or subject?)

I think what you are doing is accounting for the "entire" size of your subject. If you magnify a part of the moon such that only that one part fits on an 18mp sensor, the "effective size of the whole moon if it were to be measured in megapixels would require a 184mp FF sensor to image in its entirety." You could look at it that way, but it is extremely confusing, and running about stating "It's like having a 369MP FF sensor" is not really true, and I WILL argue that point whenever you bring it up.

That's fine, and you'll be wrong each time. This is the way people do it in astrophotography, where resolution is what you are after. "Image scale" is determined by arc-second per pixel, and the lens is measured by aperture diameter. TCs leave the aperture unchanged and decrease arc-seconds per pixel. More pixels leave the aperture unchanged and decrease arc-seconds per pixel. Same thing.

The arc seconds per pixel remain the same, but the result is not the same.

As I demonstrated above with actual samples, it is.

The example you posted is not what you think it is. Multiple things changed between those two images. For other readers' sake, from the following link:

The physical lens and the sensor changed in those two sample images for the express purpose of maintaining framing, which is not the same thing as what we are discussing here. In the context of your sample image, sensor resolution increased while focal length decreased. In the context of our discussion, sensor resolution either remains the same while focal length increases, or sensor resolution increases while focal length remains the same. We are discussing the following:

Camera A: 400mm lens + 2x TC, 18mp sensor with 4.3 micron pixels

Camera B: 400mm lens, 36mp sensor with 2.15 micron pixels

And the question in context is: Is the output of those two systems the same?

The answer is NO. Simply adding a TC does not change the number of megapixels your camera has. It only magnifies the subject. The output of Camera A will be a PART of the subject, in high detail. The output of Camera B will be THE WHOLE subject, in high detail.

(Sorry for all the posts...damnable security block seems to trigger on large replies.)

Since I currently use the 5D2 for my fashion and portrait photography, I would need something that's a significant upgrade to make it worthwhile. The 5D3 is a great camera but, at the end of the day, isn't much of an upgrade from the 5D2. The 1DX is interesting, but what am I buying? FPS. And that really doesn't matter much to me. A 50MP sensor with 16-bit DR would be super. I think Canon could blow everyone, including the MF manufacturers, out of the water if they put some real muscle into their R&D for this camera.

But this next camera will be a total pro camera, and unless you're making lots of money from these pics, it is not in the arena for most photographers.