Yes, I follow your argument apart from one step which actually represents my original question:

> For semiconductor sensors what matters is the total number of photons per photosite, pretty much regardless of the site area. Make two sensors: say an APS-C and an FF one, both of say 16 megapixels. Expose them to the same scene for the same duration using 56/1.2 and 85/1.8 lenses respectively. Each photosite will receive the same NUMBER of photons (the number per unit area is 2.25 times larger in the first case, but the area itself is smaller by the same factor).

I agree with the area and scaling factor part, but my original question was whether it is true that the 56/1.2 will deliver the same number of photons per unit area on the APS sensor as the 85/1.8 lens on a FF sensor.

No, it will receive 2.25 times more photons PER UNIT AREA.

Totally agree that the 56/1.2 will deliver more photons per unit area than the 85/1.8 - and this is my question: if that is the case, how is the 56/1.2 equivalent to the 85/1.8 in terms of the light delivered per unit area? Your statement above says they are:

"Make two sensors: say an APS-C and an FF one, both of say 16 megapixels. Expose them to the same scene for the same duration using 56/1.2 and 85/1.8 lenses respectively. Each photosite will receive the same NUMBER of photons (the number per.."

Seems to be a contradiction.....

Where is the contradiction? Each photosite of the full-frame sensor would have 2.25 times the area (same number of megapixels, not the same density), the number of photons _per_unit_area_ is 2.25 times smaller, so the TOTAL NUMBER of photons per photosite is the same. What is so difficult?
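
The bookkeeping above can be checked numerically. A toy Python sketch (the photon density is an invented number, just to show the two 2.25 factors cancelling):

```python
# Toy check: f/1.2 on APS-C vs f/1.8 on FF, same scene, same shutter,
# both sensors 16 MP.
CROP = 2.25                      # area crop factor; also (1.8 / 1.2) ** 2
PIXELS = 16_000_000              # same pixel count on both sensors

ff_area = 864.0                  # mm^2 (36 x 24 frame)
aps_area = ff_area / CROP        # APS-C frame is 2.25x smaller

ff_density = 1.0e9               # hypothetical photons per mm^2 from the 85/1.8
aps_density = ff_density * CROP  # the f/1.2 delivers 2.25x more per unit area

ff_per_site = ff_density * ff_area / PIXELS
aps_per_site = aps_density * aps_area / PIXELS

print(ff_per_site, aps_per_site)  # identical: the density and area factors cancel
```

The higher per-unit-area density on the smaller sensor and the smaller area per photosite cancel exactly, which is the whole point of the argument.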

I read the review before Laroque removed it. What I can see already from his shots is that the 56/1.2 has a certain character and is sure to become one of the famous lenses around. I can see the character and it reminds me of another lens, but I can't put my finger on which. Perhaps something Leica-like in just how the lens renders?

Understand that fine...I was not really thinking about the pixel size. That is the bit I was missing. Just curious - do FF pixels typically have 2.25 times the area of APS ones?

Some do, some don't. Nikon's Df and D4 have 16 MP, just like the APS-C Fuji. Others (D610, Sony A7, Canons) are around 20-24 MP, and two others (Nikon D800E, Sony A7R) have 36 MP. On the other hand, recent Nikon APS-C cameras also have 24 MP, and Samsung 20 MP -- so yes, pretty similar pixel counts and hence photosite areas. Figure a factor of 2-2.5 area advantage for most FF pixels. An additional advantage of fatter pixels is that there is less area lost at the pixel edges.

I assume that if the FF pixels were the same size as the APS ones, then my argument would be correct (just so I can be sure I understand correctly).

No, it wouldn't, but for a different reason. Yes, in this case every pixel of an FF sensor would be noisier than that of an APS-C sensor, but the total number of pixels would be greater. Once you scale the larger image down (downsample) to the same size as the APS-C one, the noise of the final image would be similar again.
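
That downsampling argument can be simulated in a few lines. A rough sketch, with made-up signal, noise, and bin-size values: averaging K equally noisy pixels reduces the noise by about sqrt(K).

```python
import random
import statistics

random.seed(42)
SIGNAL, SIGMA, K = 100.0, 10.0, 9   # K noisy pixels averaged per output pixel

# Per-pixel readings: same underlying signal, independent noise on each pixel.
pixels = [random.gauss(SIGNAL, SIGMA) for _ in range(90_000)]

# "Downsample": average non-overlapping groups of K pixels.
binned = [statistics.fmean(pixels[i:i + K]) for i in range(0, len(pixels), K)]

sd_single = statistics.stdev(pixels)  # close to SIGMA, i.e. ~10
sd_binned = statistics.stdev(binned)  # close to SIGMA / sqrt(K), about a third
```

So even if each small FF pixel is individually noisier, averaging more of them into each output pixel brings the final image noise back down.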

Depends how you define "exposure". In the old film days, it was indeed exposure PER UNIT AREA, regardless of frame size. Deliver too few photons and the film is underexposed, milky, whatnot. Deliver too many, and chemical changes turn the entire frame black.

For semiconductor sensors what matters is the total number of photons per photosite, pretty much regardless of the site area. Make two sensors: say an APS-C and an FF one, both of say 16 megapixels. Expose them to the same scene for the same duration using 56/1.2 and 85/1.8 lenses respectively. Each photosite will receive the same NUMBER of photons (the number per unit area is 2.25 times larger in the first case, but the area itself is smaller by the same factor). Each photosite (given equal quantum efficiency) will produce the same number of electron-hole pairs (photocurrent), ergo the same signal and the same shot noise. If you read each sensor with the same preamplifier, equal read noise will be introduced in both cases, as would the required amplifier gain!
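
Since shot noise is Poissonian, the per-pixel shot-noise SNR follows directly from the photon count. A one-line sketch (the photon count is hypothetical):

```python
import math

# Shot noise on N collected photons is sqrt(N), so SNR = N / sqrt(N) = sqrt(N).
# Equal photons per photosite therefore means equal per-pixel shot-noise SNR,
# whatever the format.
photons_per_site = 54_000                 # hypothetical, same on both sensors
shot_noise = math.sqrt(photons_per_site)
snr = photons_per_site / shot_noise       # = sqrt(54000), roughly 232
```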

However, a photometrist would assign the first case, say, ISO 100, and the second ISO 225, based on exposure PER UNIT AREA, even though the physical amplifier gain is exactly the same in both cases.

The origin of your confusion is extrapolating the ISO sensitivity concept to modern sensors -- an irrelevant one for digital sensors; the proper metric would be "light delivered to the whole sensor".

I'm pretty much at a decision point here, because I'd like to invest in a kit of prime lenses and have yet to decide which system. I have a D600 with the 2.8 zooms as workhorses. From the primes I want a decidedly different shooting experience. So it may very well become the Fuji with the 23/1.4, 35/1.4 and a 90/2 they'll hopefully release someday (but sadly not before 2016, according to the roadmap - why they'd rather stick to big tele zooms I don't know).

Or I chicken out and invest in Sigma, the 35 already being excellent, the 50 announced yesterday, and a 135 very much on the horizon.

But honestly the Fuji route is somehow more tempting, a system built more around what I'd expect from prime-lens shooting. Aperture rings and stuff. But there is one issue: the XF primes' aperture rings have until now looked very floppy to me. So I very much welcome that the 56's announcement states "The aperture ring is designed to ensure it's easy to detect 'clicks' between f-stops".

I guess that stems from user feedback, a thing Fuji is very much known to take seriously. So do you think they'll facelift the existing lenses with a stiffer aperture ring? IMO they could go from 1/3 to 1/2 stop detents at the same time, but a ring that can't be turned by the tip of the little finger or by accident would be nice enough for me.

Ok - in the second question I actually forgot to say that I meant the same number of pixels (though I realize this would result in a lot of dead space on the sensor - it was a hypothetical question).

Now that I've thought a bit more, I think there is another source of confusion... I have convinced myself again that, whilst I accept the noise levels would differ, if I took a photo with my Fuji and the 56/1.2 at ISO 200, and did the same with my (now sold) 5D Mk II with the 85/1.2 at ISO 200, both in aperture priority mode, they would both select much the same shutter speed (assuming Canon and Fuji agree on the definition of ISO).

To you (I think) equivalence also includes achieving equivalent noise levels. Pixels are buckets for collecting photons and converting them into electron-hole pairs (as you said). Obviously a bigger bucket can collect more photons (and generate more electron-hole pairs). But the actual number of electron-hole pairs is not used to form the image directly - rather, the signal is digitized when it is read. If both the APS and FF sensors are 14 bit, the underlying analog signal is converted into one of 2 ** 14 levels. Further, I assume that the levels (at base ISO or amplification) are spread (non-linearly, I know) between 0 and 'full' (the maximum electron-hole pair capacity for that pixel).
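
That digitization step can be sketched as a simple linear ADC (real sensors apply gain and tone curves, and the full-well number here is invented for illustration):

```python
FULL_WELL = 40_000   # hypothetical max electron-hole pairs for this pixel
BITS = 14
LEVELS = 2 ** BITS   # 16384 output levels

def adc(electrons):
    """Map a (clipped) electron count linearly onto the 14-bit scale."""
    electrons = max(0, min(electrons, FULL_WELL))
    return round(electrons / FULL_WELL * (LEVELS - 1))
```

So an empty well reads 0, a full well reads 16383, and every sensor format maps its own full-well capacity onto the same 16384-level scale.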

To me, a given exposure would result in the same output level regardless of sensor format - I think this is because of the supposedly common definition of ISO.

The advantage of a larger pixel size is the confidence level you have in the level you measure after conversion. More exactly (I work with statistics...), you would be confident that if you repeated the measurement with 10 different pixels they would report the same level for that actual exposure. For example, a uniform area of blue sky - with bigger pixels, you have much more confidence that adjacent pixels would measure the same level after conversion to digital. You have less confidence with smaller pixels - and in fact this lower confidence manifests itself in adjacent pixels reporting more variation in the level they measure. We see this as noise.

Even simpler example - if the read error (I think this is what it is called) is + or - 10 electron-hole pairs for both APS and FF sized pixels, and the APS pixel has a maximum capacity of 100 vs the FF pixel's 1,000 (making the numbers up for illustration), then the likelihood of the analog signal being converted to exactly the same level in the blue-sky example is lower for the APS sensor (+ or - 10 out of 100 is a much bigger relative error than + or - 10 out of 1,000).

The comparison of an APS-C sensor and an FF sensor is a complicated one, because sensor performance from one generation to the next can be very different.

For example, the high ISO performance of the X-Trans CMOS I sensor is on the same level as the FF sensor in the 5D2, even though that sensor is 2.25x larger.

Moving along to the sensor in the 5D3, high ISO noise is much lower. So when one compares, the comparison needs to be made against roughly the same generation of sensor.

Then there is the number of pixels on the sensor. If you have a 36MPix sensor such as the one in the D800, the pixel size is the same as that of a 16MPix APS-C sensor, because 36 / 2.25 = 16. So the number of pixels within an APS-C crop of the D800 sensor is the same as that of an X-Trans sensor.
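
The arithmetic in a couple of lines (the 2.25 area factor is the usual APS-C crop approximation):

```python
# An APS-C crop covers 1/2.25 of the full-frame area, so a 36 MP FF sensor
# leaves 36 / 2.25 = 16 MP inside the crop -- the X-Trans pixel count,
# which is why the pixel pitch comes out the same.
ff_megapixels = 36.0
crop_area_factor = 2.25
crop_megapixels = ff_megapixels / crop_area_factor  # 16.0
```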

Since the pixel size is the same, and the D800 sensor is roughly the same generation as the X-Trans I, the quantum efficiency of the two sensors should be roughly the same.

Yet if you examine it at the pixel level, you'll see that a 36MPix D800 sensor is actually noisier than the X-Trans I, even at ISO 1600. I believe having two extra green pixels in a 6x6 grid helps lower the luminance noise.

Now if you compare against the next-generation 36MPix sensor in the A7R, you'll see that its high ISO noise is comparable to the X-Trans CMOS II.

Another reason why a full-frame sensor appears to have lower noise when viewed on screen is pixel binning. You scale a 36MPix image down to fit your 3MPix screen. It is going to look better than a 16MPix image scaled down to 3MPix, both in terms of noise and detail.

So it all boils down to how large a print you are going to make. It is not going to make a whole lot of difference even if you print at 16x20.

I totally agree that things are complicated when comparing noise capabilities across different generations of sensor. And also that up to a certain print size no one really cares (I even remember shooting ISO 1600 film that looks noisier on a 7 x 5 print than my X-Pro or X-E2 at 6400!).

So my question to you is... assuming I am not making huge prints and don't care about the small noise difference: if I set the 5D Mk III to ISO 200 with the 85/1.2 lens, and take the identical image with the X-E2 and 56/1.2, also at ISO 200 (position adjusted to give the same framing), how would the exposure time differ between the two? (Assuming Canon and Fuji have the same definition of ISO - I know that isn't the case, but I want to understand the ideal case before the more complicated reality.)
