I suspect Canon will venture into the c. 40 MP market, but not for any of the reasons yet mentioned. I think they will do it because it will sell lenses. Put some of the older L-series lenses (let alone non-L) onto a 40+ MP body and you will soon be screaming for better glass.

And no, I can't scientifically back that statement up, but I experienced first hand the IQ "old" lenses could produce on the 18 MP 7D when I upgraded to it.

I am not sure this makes any sense. Whether you stick a 300mm f/2.8L II lens on a 5 MP body or a 22 MP body, it is a great lens. Stick it on a 40 MP body and I think the same result happens.

I can see IQ being a factor for some bodies (mirrorless more than anything), but for SLRs I don't think current lenses with higher MP count sensors (40+ as you alluded to) would alter IQ.

Am I wrong here?

Increased pixel density means the sensor is putting more stress on the resolving power and aberration correction of the lens; in other words, more pixels are reserved for showing each and every bit of aberration. Furthermore, lenses have a resolution limit, expressed in l/mm (lines per millimeter) or lp/mm (line pairs per millimeter). Take a 36mm wide sensor and put 8000 pixels on the wide side, and your lens will need to resolve 1.425x as many line pairs per millimeter as it would for a 21 MP sensor, or the image will look softer. Someone else can probably explain it better, but the basic idea is: pixels / sensor size = pixel density. The bigger the pixel density, the smaller the pixels. What comes with smaller pixels you can look up elsewhere; I don't know how to explain it without writing a thousand pages on it.
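The arithmetic behind that 1.425x figure can be sketched in a few lines (Python purely for illustration; 5616px is the width of a typical 21 MP full-frame sensor, 8000px the hypothetical higher-resolution one):

```python
# Sketch: sensor Nyquist limit in lp/mm for a 36mm-wide full-frame sensor.
# One line pair needs two pixels, hence the division by 2.
SENSOR_WIDTH_MM = 36.0

def nyquist_lp_per_mm(pixels_wide):
    """Maximum line pairs per mm the sensor grid can sample."""
    return pixels_wide / 2 / SENSOR_WIDTH_MM

old = nyquist_lp_per_mm(5616)  # ~78 lp/mm (21 MP class)
new = nyquist_lp_per_mm(8000)  # ~111 lp/mm (~48 MP class)
print(f"{old:.0f} lp/mm -> {new:.0f} lp/mm, ratio {new / old:.3f}x")
```

The ratio is just 8000/5616, since sensor width cancels out.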

LOL

I don't know how many times I'll have to debunk this myth. But here it goes again. First off, let's define a few things.

Lens resolution: The spatial resolving power of the lens (in lp/mm)
Sensor resolution: The spatial resolving power of the sensor (in lp/mm)
System (or output or image) resolution: The measurable spatial resolution of the images produced by lens+sensor (in lp/mm)

System resolution is the result of a convolution of what the lens resolves with the spatial grid of the sensor. Both components have an intrinsic blur. This blur is generally approximated by a Gaussian function, a spot of light that follows some kind of bell curve (peaked in the middle, with falloff as you move away from the middle of the spot). To compute the REAL system resolution of a lens and sensor, you would need to know the actual PSFs, or Point Spread Functions, of both. That kind of information is difficult to come by, and it greatly complicates the math for a small amount of additional precision. We can approximate system resolution by combining the two blurs in quadrature:

sysblur = sqrt(lensblur^2 + sensorblur^2)
sysres = 1 / (2 * sysblur)

Blur here is the spot size corresponding to a given spatial resolution. A 100lp/mm sensor, for example, has a blur of:

blur = 1l / (100lp/mm * 2l/lp)
blur = 1l / (200l/mm)
blur = 0.005mm

So, to directly derive the measurable spatial resolution of an output image from the spatial resolutions of a lens and a sensor, we simply combine these two formulas. First, let's assume a diffraction limited lens at f/8. Since it is diffraction limited, the lens will be exhibiting perfect behavior, so we'll be getting 86lp/mm. We have a 5µm pixel pitch in our sensor...let's just assume the sensor is monochrome for now, which means our sensor is 100lp/mm. If we run the formula:

sysblur = sqrt(0.00581^2 + 0.005^2) = 0.00766mm
sysres = 1 / (2 * 0.00766mm) = 65lp/mm

The image resolution with a diffraction limited f/8 lens and a 5 micron pixel pitch is 65lp/mm. That is a low resolution lens. One which most people would claim is "outresolved by the sensor". Such terminology is a misnomer...sensors don't outresolve lenses, lenses don't outresolve sensors...the two work together to produce an image...the convolution of the two produces the output resolution, the resolution of our actual images, and it is that output that we really care about.
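As a sketch, the blur-quadrature approximation above translates to a few lines of Python (the 86lp/mm and 100lp/mm inputs are the figures from this example; treat it as an illustration, not a precise optical model):

```python
import math

def blur_mm(res_lp_mm):
    """Spot size in mm for a given spatial resolution:
    one line pair spans two lines, so blur = 1 / (2 * resolution)."""
    return 1.0 / (2.0 * res_lp_mm)

def system_res(lens_lp_mm, sensor_lp_mm):
    """Combine lens and sensor blur in quadrature, convert back to lp/mm."""
    total = math.sqrt(blur_mm(lens_lp_mm) ** 2 + blur_mm(sensor_lp_mm) ** 2)
    return 1.0 / (2.0 * total)

# Diffraction limited f/8 lens (86lp/mm) on a 5 micron mono sensor (100lp/mm):
print(round(system_res(86, 100)))  # ~65 lp/mm
```

Note the formula is symmetric: it doesn't matter which component you call "lens" and which "sensor"; neither one "outresolves" the other, they simply combine.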

So, let's assume we now have a diffraction limited f/4 lens. Our lens spatial resolution is now 173lp/mm. Quite a considerable improvement over our f/8 lens. It is actually double the resolving power of an f/8 lens. Same formula:

Our image resolution with a diffraction limited f/4 lens is 87.7lp/mm. That is a 35% improvement. In this case, most people would say the "lens outresolves the sensor". But again, that is a misnomer. The two are still working together in concert to produce an image, and the results have improved. Now, let's say we still have our f/8 lens, and we now have a sensor with half the pixel pitch. We're using 2.5 micron pixels. Same formula:

Our image resolution jumps to 79.4lp/mm. Well, supposedly, the sensor is "far outresolving the lens" at this point...and yet, the spatial resolution of our images has still improved considerably. By over 22%, to be exact. The fact that our sensor is capable of resolving considerably more detail than our lens does make the lens the limiting factor...however, it does NOT mean that using "the same old crappy lens" is useless on a newer, higher resolution sensor. Our results have still improved, by a meaningful amount. It is not necessary to build a new lens to take advantage of our improved sensor.

Let's take this one step further. We are still using the same lens, the one that is diffraction limited at f/8. It isn't a great lens, but it's decent for its generation. At f/4 it is not diffraction limited, but it performs pretty well. Let's assume it is capable of resolving 150lp/mm instead of 173lp/mm. If we run our formula again:

Wow. Our crappy old lens which isn't even diffraction limited at f/4, combined with our greatly improved ultra high resolution sensor, is still giving us a lot of bang for our buck! Our image resolution is up to a whopping 122lp/mm! That is an improvement of over 53% over our f/8 performance. Well, let's say we finally break down and buy a better lens, one that is diffraction limited at f/4:

Hmm...well, things haven't changed much. Relative to our older lens, we now have 133lp/mm. Unlike the previous jump of 53%, we have now gained a 9.5% improvement in resolving power. Ten percent isn't something to sneeze at, but our previous lens, which isn't even diffraction limited at f/4, still performs remarkably well on our ultra high resolution sensor. To eke out any more performance, we would have to get a lens that is diffraction limited at a wider aperture. At apertures wider than f/4, optical aberrations begin to dominate, and achieving significantly improved results is more difficult. Additionally...you only get the improved resolving power at apertures wider than f/4...if you regularly shoot scenes at diffraction limited apertures of f/4 and smaller, then the only real way to improve the resolution of your photographs is with a higher resolution sensor.
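Running the quadrature approximation over all of the scenarios above reproduces the progression (a sketch only; the small differences from the quoted figures come down to how the lens numbers were rounded):

```python
import math

def system_res(lens, sensor):
    # blur = 1/(2*res) in mm; combine in quadrature; convert back to lp/mm
    total = math.sqrt((1 / (2 * lens)) ** 2 + (1 / (2 * sensor)) ** 2)
    return 1.0 / (2.0 * total)

# (description, lens lp/mm, sensor lp/mm) for each scenario in the post
scenarios = [
    ("DL f/8 lens, 5um sensor",       86, 100),
    ("DL f/4 lens, 5um sensor",      173, 100),
    ("DL f/8 lens, 2.5um sensor",     86, 200),
    ("non-DL f/4 lens, 2.5um sensor", 150, 200),
    ("DL f/4 lens, 2.5um sensor",    173, 200),
]
for name, lens, sensor in scenarios:
    print(f"{name}: {system_res(lens, sensor):.1f} lp/mm")
```

The point survives the rounding: every upgrade to either component improves the output, and the "crappy" 150lp/mm lens on the 200lp/mm sensor lands within about 10% of the diffraction limited lens.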

Pushing sensor resolution to obscene levels is a lot easier than pushing lens resolving power to obscene levels. Upping sensor resolution is the far more cost effective means, and therefore the one that tends to appeal to the masses (regardless of whether they know why).

WOW! Good explanation...

Now I just have to wait for Canon to make a FF camera with the same pixel size as an iPhone... 445Megapixels!

I don't know how many times I'll have to debunk this myth. But here it goes again. First off, let's define a few things.

By all means. I did say "maybe someone else can explain it better."

Don't take it personally. You're not the first to assume the "lens outresolves sensor" myth, and this won't be the last time I have to debunk it either...although I may just bookmark this page so I can copy and paste in the future.

...Pushing sensor resolution to obscene levels is a lot easier than pushing lens resolving power to obscene levels. Upping sensor resolution is the far more cost effective means, and therefore the one that tends to appeal to the masses (regardless of whether they know why).

I'm not going to even check your math, I trust it's correct! Well, that's good then. I'm all for higher resolution, I just had the feeling many lenses wouldn't be able to keep up. I don't mind being wrong on that one!

One additional little tidbit I forgot to mention before. Image/output/system resolution is ultimately limited by the lowest common denominator. If your lens can only resolve 86lp/mm, it ultimately does not matter how far you push sensor resolution...you'll never resolve more detail than 85.99999999999999999999...lp/mm. System resolution has an asymptotic relationship with the resolving power of the least capable component of the system. Now, in the original example, an f/8 lens and a 5 micron pixel pitch, output resolution was 65lp/mm. Doubling sensor resolution pushed us up to 79.4lp/mm. Doubling sensor resolution again would get us much closer to 86lp/mm. We're at 1.25µm pixels now...that's pretty small. If we wanted to "double" resolution again, we would have 0.625µm, or 625nm, pixels. Those are too small. We're reaching the point where we are beginning to filter out red light.
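The asymptote is easy to see by sweeping sensor resolution with the lens fixed at 86lp/mm (same quadrature approximation sketched earlier; the sensor values correspond to 5µm, 2.5µm, 1.25µm, and 0.625µm pixels):

```python
import math

LENS = 86.0  # lp/mm, diffraction limited at f/8

def system_res(lens, sensor):
    # blur = 1/(2*res) in mm; combine in quadrature; convert back to lp/mm
    total = math.sqrt((1 / (2 * lens)) ** 2 + (1 / (2 * sensor)) ** 2)
    return 1.0 / (2.0 * total)

# Each halving of pixel pitch doubles the sensor's lp/mm
for sensor in (100, 200, 400, 800):
    print(f"sensor {sensor:4d} lp/mm -> system {system_res(LENS, sensor):.1f} lp/mm")
```

Each doubling of sensor resolution buys a smaller gain, and the system never crosses the 86lp/mm line: diminishing returns in action.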

You eventually reach the point of diminishing returns with sensor resolution if the lens is the limiting factor. Now, it doesn't matter how good the lens is...if you need to use f/8, you need to use f/8, and you'll never get more than 86lp/mm even with the best lens and the best sensor humanity is ever capable of producing. The only option at that point to achieve more resolution is to start taking more radical measures. Use f/4 and stack for focus. Maybe build a camera capable of always using a lens at its fastest diffraction limited aperture, and use clever post-lens optics and software algorithms to produce whatever depth of field you need at the resolution of that maximum diffraction limited aperture. This is kind of where Lytro is pioneering something new. Their concept was consumerized, but it is possible they have the foundation of the future of ultra high resolution photography in their pockets (I don't know for sure; it depends on exactly how their technology works and how applicable it is to different kinds of cameras).


Okay, I got it. That makes sense. I think there would be no shortage of optical problems if the camera had pixels the size of the longer wavelengths of the light they're collecting! And I don't even want to imagine the S/N ratio... I read about Lytro recently; it was fascinating! Maybe I'm being sentimental, but to me that would feel like "faking DOF"! The significance is huge, but it would feel so different if I had to use it in practice. Personally I prefer everything to happen optically that can happen optically!


With Lytro it does happen optically. There is actually a special optical array in front of the sensor. They do longer exposures, and over the duration of the exposure time, they are actually gathering information in "three" dimensions. A Lytro image is not just a bunch of pixels in two dimensions; it actually contains more information, which allows their software to do its thing. It isn't just software trickery; it is a combination of optical ingenuity and software algorithms that achieves the ability to change DOF in post.

Lytro is a limited application of the concept, though. If you play with some of their examples, you'll find that there are a number of discrete options for DOF; it isn't really a continuum. Improvements on the technology could make it more effective, bringing in enough information that you could indeed have a more continuous three dimensional field to tweak in post. The raw data file sizes would become considerably larger; however, as time trudges on, processing speed and storage capacity are improving considerably (e.g. CFast 2). I don't think the Lytro concept will ever become a mainstream, frequently used thing...it would be one of those niche options for people who really need it.

And there are actually already some options for solving some of these problems. Not quite the way an infinite-field Lytro-style device does, but tilt/shift lenses can be used to great effect to control your focus. You can either constrain DOF, or expand it such that you could photograph a landscape scene at f/4 or even f/2.8 and have the entire depth of field in focus and at high resolving power. Again, though, this is a purely optical solution, and as such you tend to pay more for it, especially if you need the capability at multiple focal lengths...so a Lytro-type solution could still offer something in a cheaper package.


I think it's just a novelty thing, at least until it's developed considerably more, and even then it might be more useful to scientists than photographers. I can't remember ever looking at a photo I've taken and thinking "I wish I could go back and use a different aperture instead". If you didn't get it right the first time, you shouldn't even keep the image, so there's nothing to hope to change! Of course, if it allowed T/S-type manipulation as well, then it would be more interesting for photographers too, no doubt. But you can buy a good T/S lens for $1500 or even $1000 (!); I don't know how much the Lytro costs, but I don't think any dedicated camera would be competitive in comparison. A T/S lens is more fun to use than some sliders anyway!