Author
Topic: Canon's 120mp APS-H Sensor (Read 16847 times)

that1guy

Wow...that is crazy. I also thought this was a joke at first. Some interesting speculation from everyone here...kinda fun to read. I was going to complain, but this is so over the top, and my understanding of physics and computers is so small that I think I'm just going to sit back and watch for a bit before making my decision. I'm sure Canon engineers are smarter than me, and they are obviously up to something. Fun times!

canon rumors FORUM

scalesusa

I'd buy one, but we aren't likely to see it in production for years if ever.

There are always those who long for the good old 1mp sensors. They are still around, and they are almost free.

There are also those who think a sensor affects lens diffraction. I assume they got their education from some other poster who is equally uninformed. Diffraction is set by the lens aperture; the sensor does not change it, not ever. Lenses exhibit diffraction at all apertures; it doesn't start at some magic number.

Now, it is true that a higher resolution sensor can see the diffraction better, just as it provides better resolution of the subject, but a sensor does not affect diffraction.

A 120 MP APS-H sensor has about the same pixel density as the 10 MP G11 / S90. The lenses seem to work fine on it, and their maximum apertures are not as wide as those available on DSLRs.

120 MP / 5.19 cm² = 23.1 MP per square cm, approximately the same as the 10 MP G11.
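A quick script can sanity-check the density figure. The APS-H dimensions of 28.7 × 18.1 mm (≈5.19 cm²) and the G11's 10 MP, 7.6 × 5.7 mm sensor are assumptions on my part; if you instead use the roughly 29.2 × 20.2 mm dimensions reported for the announced sensor, the figure comes out nearer 20 MP/cm².

```python
# Back-of-the-envelope check of the pixel-density claim above.
# Assumed dimensions: 28.7 x 18.1 mm for APS-H (~5.19 cm^2) and a
# 10 MP, 7.6 x 5.7 mm (1/1.7") sensor for the G11-class compact.

def density_mp_per_cm2(megapixels, width_mm, height_mm):
    """Pixel density in megapixels per square centimetre."""
    area_cm2 = (width_mm / 10.0) * (height_mm / 10.0)
    return megapixels / area_cm2

aps_h = density_mp_per_cm2(120, 28.7, 18.1)   # ~23.1 MP/cm^2
g11   = density_mp_per_cm2(10, 7.6, 5.7)      # ~23.1 MP/cm^2

print(f"120 MP APS-H: {aps_h:.1f} MP/cm^2")
print(f"G11 compact:  {g11:.1f} MP/cm^2")
```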


ELK

Wow...that is crazy. I also thought this was a joke at first. Some interesting speculation from everyone here...kinda fun to read. I was going to complain, but this is so over the top, and my understanding of physics and computers is so small that I think I'm just going to sit back and watch for a bit before making my decision. I'm sure Canon engineers are smarter than me, and they are obviously up to something. Fun times!

+1


zaph

Looking forward though, doesn't this have the potential to solve some of the physics involved in making smaller zoom lenses, or actually eliminating them altogether in some cases? If you can have a sensor that is 4-6 times bigger than is actually needed for the final output image (let's say 12 MP, for example), then you could have a single prime and just isolate a 12 MP section of the sensor to "zoom" in on that area. So rather than shipping with an 18-55 zoom kit, it would just ship with an 18mm, and the rest would be done on the sensor. (I know that's oversimplifying it, but hey)

(And that is an actual question rather than a comment - could it be used that way?)
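For what it's worth, the crop-"zoom" idea can be put in numbers. The sketch below assumes angle of view scales with the linear crop factor, which is the square root of the pixel-count ratio:

```python
import math

def crop_zoom_equivalent(focal_mm, sensor_mp, crop_mp):
    """Equivalent focal length when a crop_mp region is cut from a
    sensor_mp sensor: the framing tightens by the linear crop factor,
    i.e. the square root of the pixel-count ratio."""
    linear_factor = math.sqrt(sensor_mp / crop_mp)
    return focal_mm * linear_factor

# An 18 mm prime on a 120 MP sensor, cropped to 12 MP output:
print(crop_zoom_equivalent(18, 120, 12))   # ~56.9 mm equivalent framing

# The same trick on a 200 mm lens:
print(crop_zoom_equivalent(200, 120, 12))  # ~632 mm equivalent framing
```

So a 120 MP sensor cropped to 12 MP buys roughly a 3.2x "zoom", at the cost of throwing away 90% of the pixels.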

I'd buy one, but we aren't likely to see it in production for years if ever.

There are always those who long for the good old 1mp sensors. They are still around, and they are almost free.

There are also those who think a sensor affects lens diffraction. I assume they got their education from some other poster who is equally uninformed. Diffraction is set by the lens aperture; the sensor does not change it, not ever. Lenses exhibit diffraction at all apertures; it doesn't start at some magic number.

I respectfully disagree. Such a sensor will possibly, if not probably, be implemented in the 1D Mark V. Canon is not revealing which technology is being introduced in this sensor, but advances in this regard are constantly in progress. Every now and then a revolution is introduced in sensor manufacturing technology, and we don't know what's behind this 120 MP monster. We know it has that number of photosites, but we don't know yet how the information is processed.

I don't know if anyone really exists who longs for the good old 1 MP sensors, but surely there's someone who longs for 3 MP sensors... Ehm... KR... mhm... Well, not me, even if I would have preferred a 12-14 MP sensor in the 60D with the current technology. But please, let's drop this argument before an endless and useless discussion begins again; after all, we don't know its specs yet.

DLA... Scalesusa, I'm not sure if you're referring to my post above, since I mentioned DLA. If so, please note that in no way did I say that the sensor affects lens diffraction. I'm perfectly aware of what diffraction is and of how, and by what, this physical phenomenon is generated. I simply stated that it may be possible, sooner or later, to overcome this phenomenon by 'interpreting', or 'differentially reading' (sorry if I'm not able to express myself in proper English), the diffraction pattern projected by the lens onto the sensor, and to extrapolate and correct it to reconstruct the missing information.

After all, in the past the most authoritative experts claimed that it would not be possible to photolithograph structures smaller than the wavelength of the UV light used in steppers; well, 32nm transistors are currently available and 22nm is the next step. We use a... workaround. Others claimed that we would never be able to 'see', let alone manipulate, single atoms, both because of their nature as quantum objects and because they're smaller than the shortest usable wavelength for probing a surface. But, 20 years ago, the scanning tunneling microscope became a reality, and today we can actually see, and manipulate, literally doing a 'drag and drop' on, single atoms: we use a workaround. I'm sure that, sooner than expected, we will get rid of DLA effects not by eliminating diffraction, but by reconstructing the missing information.

pierlux, I think scalesusa was referring to my question about DLA; like I said, I don't fully understand it and would not be averse to some schooling in light physics.

I like where you're taking this train of thought with the pixel binning down to a 30 MP sensor, and I think chrome_dude has some interesting thoughts in terms of pixels with different ranges of sensitivity even in the same color channel. It makes this bit of news seem a lot less like Canon losing its marbles and more like an earnest scientific (and economic) endeavor.

I am amazed that they're calculating 9.5 fps with that number of pixels; is the pixel binning occurring on-sensor to allow this kind of throughput?
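Nobody outside Canon knows how (or whether) the binning happens on-chip, but a generic 2x2 average, sketched here with NumPy, shows the idea: a 120 MP frame becomes 30 MP, and averaging four pixels improves shot-noise SNR by roughly a factor of two.

```python
import numpy as np

def bin2x2(raw):
    """Average each 2x2 block of a raw frame. A 120 MP frame binned this
    way yields 30 MP; averaging n pixels improves shot-noise SNR ~sqrt(n).
    (A generic sketch, not Canon's actual readout pipeline.)"""
    h, w = raw.shape
    return raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

frame = np.arange(16, dtype=float).reshape(4, 4)
print(bin2x2(frame))  # 2x2 output; each value is the mean of a 2x2 block
```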

Seriously though, as some others have suggested, this could be a real game-changer if it gets implemented. Think of what it would mean for sports and wildlife shooters. Use a 200mm f/2.8 lens and then crop the image for the equivalent of an 800mm (or longer).

I'm wondering what the Canon lens division thinks of this. Would the "Big White Lenses" become obsolete?

Would sports shooters need to tether an auxiliary memory drive to their cameras, comparable to the bulk film magazines they used to use with motor drives?

Imagine the fits this would give professional sports teams if any fan could shoot high quality action shots from the stands.

No doubt there are a lot of obstacles standing in the way of implementation. Complementary technologies (memory, lenses, etc. etc.) need to catch up. So, we probably won't see this for several years. But, clearly the march to ever-increasing resolution is accelerating.

Lee Jay

I know I'm not nearly good enough with the physics to know this for sure, but wouldn't a 120 MP APS-H have a ridiculously low bar for lens diffraction?

Adding pixels to the same sensor size and lens combination can never, ever make diffraction worse. In fact, it makes it better.

There are certainly diminishing returns to adding pixels as diffraction takes its bite out of your resolution, but in my own tests of where the point of no further visual benefit from more pixels lies, the 7D's sensor is capable of extracting all the detail visually available from an f/20 lens. That means pixels could get a heck of a lot smaller than even those tiny 7D pixels before you'd be sucking the life out of an f/2.8 supertele. This is consistent with astrophotography experience, where people routinely use extremely slow f-numbers to extract all the detail they can get from planetary images. I was surprised (shocked, really) at how much more detail I could get at 3800mm and f/30 compared with 1900mm and f/15, even with 40D-sized pixels. Have a look for yourself: http://photos.imageevent.com/sipphoto/samplepictures/Jupiter%20f30%20versus%20f15%20comparison.jpg

That's f/30 on the left. While you might have thought 40D-sized pixels were already diffraction-limited at f/15, obviously going to f/30 still extracted a lot more detail. That switch is the same as making the pixels half as big.
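The Airy-disk arithmetic is consistent with that experience: even at apertures where the diffraction spot spans several pixels, finer sampling can still pull out detail. A small sketch, assuming 550 nm green light (the ~4.3 µm 7D pitch and ~2.2 µm 120 MP APS-H pitch mentioned for comparison are my own estimates):

```python
def airy_diameter_um(f_number, wavelength_nm=550):
    """Diameter of the Airy disk (to the first minimum) in micrometres:
    d = 2.44 * lambda * N, for wavelength lambda and f-number N."""
    return 2.44 * wavelength_nm * 1e-3 * f_number

# Compare the diffraction spot with pixel pitches of ~4.3 um (7D)
# and ~2.2 um (a hypothetical 120 MP APS-H):
for f in (2.8, 8, 15, 30):
    print(f"f/{f}: Airy disk ~ {airy_diameter_um(f):.1f} um")
```

At f/15 the spot is already ~20 µm wide, several 40D-sized pixels across, yet the f/30 image above still shows more detail, which is the point: the nominal "diffraction limit" is not a hard cutoff.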

I'd buy one, but we aren't likely to see it in production for years if ever.

There are always those who long for the good old 1mp sensors. They are still around, and they are almost free.

There are also those who think a sensor affects lens diffraction. I assume they got their education from some other poster who is equally uninformed. Diffraction is set by the lens aperture; the sensor does not change it, not ever. Lenses exhibit diffraction at all apertures; it doesn't start at some magic number.

Now, it is true that a higher resolution sensor can see the diffraction better, just as it provides better resolution of the subject, but a sensor does not affect diffraction.

A 120 MP APS-H sensor has about the same pixel density as the 10 MP G11 / S90. The lenses seem to work fine on it, and their maximum apertures are not as wide as those available on DSLRs.

120 MP / 5.19 cm² = 23.1 MP per square cm, approximately the same as the 10 MP G11.

When I did the math on sensor size vs. MP, I got 20 MP/cm² for the newly announced APS-H, slightly lower than the S90/G11/S95, but maybe I made a mistake somewhere. Nevertheless, I agree, it's in about the same ballpark as existing cameras in terms of pixel density.

The challenge as I see it is designing the rest of the camera to go around that. 120MP at 10fps is 1.2GP/s. The current 1D4 is about 1/5th of that in terms of processing speed.
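As a rough sketch of what that implies coming off the sensor (the 14-bit sample depth is an assumption; nothing has been said about bit depth):

```python
# Rough data-rate estimate for 120 MP at 10 fps.
megapixels = 120e6
fps = 10
bits_per_sample = 14  # assumed raw bit depth

pixels_per_second = megapixels * fps                 # 1.2e9 -> 1.2 GP/s
bytes_per_second = pixels_per_second * bits_per_sample / 8

print(f"{pixels_per_second / 1e9:.1f} GP/s")
print(f"{bytes_per_second / 1e9:.1f} GB/s of raw data off the sensor")
```

That is on the order of 2 GB/s before any compression, which makes the question of on-sensor binning look less like a luxury and more like a necessity.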


Inst

It's probably just a technology demonstrator, showing that their engineers can scale up the S10/S11 technology to APS-H size. Actually mass-producing such a device would most likely be outlandishly expensive with current fabrication processes; I wouldn't be surprised to see $500 or more per sensor. Also, can you imagine the MTF needed on lenses for this system? You'd be lens-limited until Canon could hire Carl Zeiss away from Sony to produce $10,000 lenses. And how about post-processing the damn thing? You will be dealing with 120 MP raw files. You're probably already shelling out four digits for EIZO LCDs; are you looking forward to buying dual GTX 480s ($500 or $600 a pop) to fuel CUDA-based Photoshop CS5, along with a $1,000 CPU and probably 16 GB of RAM?

I think the only way to bring this to market would be to market it as a ridiculous sensor toy in an MF-like modular camera; you'd get upgradeable DIGIC processors starting at 1 fps, and then later on as processing improves you'd move up to 8 or 10 fps. Your lenses would be completely inadequate, but if you own a very high-end outfit I suppose you could wow clientele with your $100,000 wanna-be medium format.

Dynamic range does correlate with pixel size, but it's not as bad as people make it out to be; 120 MP is more than 6.6 times the pixel count of the 7D, with the corresponding per-pixel signal-to-noise penalty, setting aside differences in sensor size.

Ehhh...

Looking at the G11's DxOMark data, it would have a third of a stop less full-picture performance than the existing 1D4 (or identical performance at ISO 3200, depending on how you treat manufacturer ISOs), and between one stop and five-thirds of a stop less full-picture performance than the D3s, depending on whether you account for the D3s's manufacturer ISO 3200 being somewhat below the G11's. In dynamic range, using SNR comparisons (typically valid for "usable" dynamic range, but not total dynamic range), the 120 MP super-G11 would scale to about 14 EV, which is above the D3x, I believe.

DLA for the 120mp APS-H should kick in around f/4, I believe, so everyone would be advised to use f/2.8 lenses, further adding to the cost.
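That f/4 figure is consistent with the usual rule of thumb, which takes DLA as the f-number where the Airy disk spans two pixel pitches. The ~2.2 µm pitch below is my own estimate from the reported sensor dimensions, and other rule-of-thumb definitions will give somewhat different numbers:

```python
def dla_f_number(pitch_um, wavelength_nm=550):
    """Rule-of-thumb diffraction-limited aperture: the f-number at which
    the Airy disk diameter (2.44 * lambda * N) equals two pixel pitches."""
    return 2 * pitch_um / (2.44 * wavelength_nm * 1e-3)

# Estimated pitch: ~29 mm across ~13,000 pixels -> roughly 2.2 um.
print(f"DLA ~ f/{dla_f_number(2.2):.1f}")   # roughly f/3.3
```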

Everything considered, I would be willing to buy a 120mp APS-H provided that I had the million dollars needed to pay for the camera, the lens, the computer, and nothing better to spend it on. Even if I were severely lens-limited, compared to the existing cameras, think of the ISO 50 resolution on this baby! It would be able to resolve at least twice the MTF detail over existing devices, and when sensors do catch up you'd be able to do micro-biology with macro lenses.