NikoDoby said:
Actually Jonny my theory is that DxO shows pictures of bananas to a bunch of chimps and based on which photo is chosen they rank the cameras. Though I could be wrong and they might actually be using groups of gerbils instead.

jonnyapple - your theory, even if not completely right, provides a nice model to work from.
Question 2, then:
Why are they correcting for resolution, then? I don't see how your model is resolution-dependent.
Question 3:
What is the evidence against a custom lens (or any lens, really) placed a precise distance from the sensor bypassing the various mounts and any inconsistency which lies therein?

Edit:
Your model is consistent with all the evidence on their site, though. They clearly state that they detect pre-raw noise reduction not by measuring decreases in resolving power, but through differences in per-channel SNR and/or through noise autocorrelation checks (which is cool, because that's how we detect the centers of checkerboard-style targets in 3D point cloud data, but that's OT). (This edit might well answer question 3.)
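For anyone curious about the autocorrelation check, here's a minimal sketch of the idea in Python (my own illustration, not anything from DxO): noise reduction smooths neighboring pixels together, which introduces pixel-to-pixel correlation that pure sensor noise doesn't have, and a lag-1 autocorrelation measurement picks that up.

```python
import numpy as np

rng = np.random.default_rng(0)

def neighbor_correlation(img):
    """Correlation between horizontally adjacent pixels (lag-1 autocorrelation)."""
    a = img[:, :-1].ravel() - img.mean()
    b = img[:, 1:].ravel() - img.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a**2) * np.sum(b**2)))

# Pure sensor noise: uncorrelated from pixel to pixel.
raw = rng.normal(0.0, 1.0, size=(256, 256))

# Simulated in-camera noise reduction: a simple 3x3 box blur.
padded = np.pad(raw, 1, mode="edge")
blurred = sum(
    padded[i:i + 256, j:j + 256] / 9.0
    for i in range(3) for j in range(3)
)

print(neighbor_correlation(raw))      # near 0
print(neighbor_correlation(blurred))  # clearly positive
```

Whatever DxO actually does is surely more sophisticated, but the principle is the same: a flat patch of raw data that shows significant neighbor correlation has probably been filtered before the file was written.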

by mounting a lens not to the body itself, but to a test frame, and positioning the lens "inside" the camera at a precisely measured location. This technique would eliminate all mount/adapter variables and allow the same lens to be used easily on every camera of a given type.

It isn't like they need AF or electronic aperture control.

From the above link.

"a set of precisely-described and bias-free test protocols for each measurement category which:
* strictly accounts for all physical parameters that influence measurements"
is where I got the idea.

soap said:
jonnyapple - your theory, even if not completely right, provides a nice model to work from.
Question 2, then:
Why are they correcting for resolution, then? I don't see how your model is resolution-dependent.
Question 3:
What is the evidence against a custom lens (or any lens, really) placed a precise distance from the sensor bypassing the various mounts and any inconsistency which lies therein?

2. I didn't mention it, but they normalize the noise for each sensor by dividing by sqrt(Npixels/Nref), where Nref is 8MP. Here's their explanation.
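That normalization is a one-liner; here's my reading of it in Python (the 8 MP reference is from their explanation; the sample numbers are made up). Averaging k uncorrelated pixels reduces noise by sqrt(k), so a higher-resolution sensor gets credit for its extra pixels when everything is scaled to a common 8 MP output.

```python
import math

N_REF = 8_000_000  # 8 MP reference resolution

def normalized_noise(measured_noise, n_pixels):
    """Scale per-pixel noise to what an 8 MP downsample would show.

    Averaging k uncorrelated pixels cuts noise by sqrt(k), so dividing
    by sqrt(n_pixels / N_REF) credits the extra resolution.
    """
    return measured_noise / math.sqrt(n_pixels / N_REF)

# A hypothetical 24 MP sensor with the same per-pixel noise as an
# 8 MP one scores sqrt(3) ~ 1.73x better after normalization.
print(normalized_noise(1.0, 24_000_000))  # ~0.577
print(normalized_noise(1.0, 8_000_000))   # 1.0 (reference unchanged)
```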
3. There is no evidence except this: I think it would create bias in the data for different sensor sizes. Why? Because you would need to use exactly the same lens (or system of lenses) to compare different sensor formats (APS, FF, and MF). That means changing the image circle by moving the lens system closer to or further from the test pattern to change its magnification on the sensor, so that each sensor you test is filled with exactly the same pattern. That doesn't only change the magnification, though; it also changes the angle at which the light hitting the sensor is incident on it. The other option is to leave the lens in place and just change which sensor looks at the focused image, but that would be completely unfair to smaller formats: a 12MP APS sensor would show flaws in the lens that a 12MP FF sensor wouldn't, for example. They may do it. I'm giving them the benefit of the doubt by claiming they don't have any optics in their tests, since I can't see a fair way to do it.

soap said:
by mounting a lens not to the body itself, but to a test frame, and positioning the lens "inside" the camera at a precisely measured location. This technique would eliminate all mount/adapter variables and allow the same lens to be used easily on every camera of a given type.

It isn't like they need AF or electronic aperture control.

From the above link.

"a set of precisely-described and bias-free test protocols for each measurement category which:
* strictly accounts for all physical parameters that influence measurements"
is where I got the idea.

soap said:
by mounting a lens not to the body itself, but to a test frame, and positioning the lens "inside" the camera at a precisely measured location. This technique would eliminate all mount/adapter variables and allow the same lens to be used easily on every camera of a given type.

I don't think that would work. The cameras and sensors are so varied that the test rig would have to be highly specialized.

Since they are a software company I think the data is just compiled traditionally and then analyzed as jonny says.

But the only ones who know for sure are a bunch of French guys from DxO who are reading this and laughing at us. Or eating bananas.

1 - I don't see a single "traditional" aspect to jonny's model. ;)
2 - The method of analysis I thought was well documented. It's the method of collection which is the subject of the conversation, no?
3 - How complicated do you think a test rig is that consists of two clamps - one for the lens and one for the body - which can be moved precisely in relation to each other? We do this all the time when calibrating our various remote-sensing instruments (from lidar to conventional total stations).

Soap, what I meant was that I don't think it's as complicated as you think. By "traditionally" I mean they mount a 50mm lens and shoot a controlled scene under controlled lighting. The complicated part is the analysis.

soap said:
1 - I don't see a single "traditional" aspect to jonny's model. ;)
2 - The method of analysis I thought was well documented. It's the method of collection which is the subject of the conversation, no?
3 - How complicated do you think a test rig is which consists of two clamps - one for the lens and one for the body - which can be moved precisely in relation to each other?

But I guess I don't see a traditional way to measure what they claim to measure without changing variables between formats. If they don't control for everything across the different sensor sizes, they shouldn't put comparisons of different formats on the same list. It's like comparing bananas and seeds.

NikoDoby said:
Soap what I meant was that I don't think it's as complicated as you think. By traditionally I mean they mount a 50mm lens and shoot a controlled scene with controlled lighting. The complicated part is in the analysis.

Even that wouldn't be fair, Niko. A 50mm Nikkor might have better optical quality than a 50mm Minolta for a Sony (of course, I assume it's that way and not the other way round). If they're photographing scenes, they'd better be using the same lens, like soap says.

jonnyapple said: Because you would need to use exactly the same lens (or system of lenses) to compare different sensor formats (APS, FF, and MF). That means changing the image circle by moving the lens system closer to or further from the test pattern to change its magnification on the sensor, so that each sensor you test is filled with exactly the same pattern. That doesn't only change the magnification, though; it also changes the angle at which the light hitting the sensor is incident on it.

Understood. But why not move the pattern (the target) and adjust the light accordingly?

This is harder to imagine with the real differences between FF and APS, so I'm going to exaggerate to make it easier (I'm a physicist, I can do that, right? ;-).

A train leaves Chicago bound for New York at 5:20 pm CST. At 6:30 EST a train... just kidding.

Let's say I have a sensor the same size as the pattern I want to focus onto it. I need to put the lens a distance 2*f from the pattern and the sensor a distance 2*f behind the lens to get the focus right. That gives me some maximum angle for the light incident on the sensor in that situation.

Now let's say I'm going to test a sensor 1/10 the size of the first one. I won't do the math here, but if I did it right, the lens now needs to be placed 11*f from the pattern while the sensor needs to be 1.1*f behind the lens. If you draw it out, you can see that the angles are different.

BTW, for the adventurous, I was only using 1/f = 1/do + 1/di and m = -(di/do), where
f = focal length
do = object-to-lens distance
di = lens-to-image distance
m = magnification
and m = -1 for the first case, m = -1/10 for the second.
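The two cases can be checked numerically with the thin-lens equation; here's a quick Python sketch (the 50mm focal length is just a hypothetical number to plug in).

```python
def conjugates(f, m):
    """Object and image distances for a thin lens at magnification m.

    From 1/f = 1/do + 1/di and m = -(di/do), solving gives:
        do = f * (1 - 1/m),   di = f * (1 - m)
    """
    do = f * (1 - 1 / m)
    di = f * (1 - m)
    return do, di

f = 50.0  # mm, hypothetical focal length
for m in (-1.0, -0.1):
    do, di = conjugates(f, m)
    print(f"m = {m}: do = {do / f:g}*f, di = {di / f:g}*f")
# prints do = 2*f, di = 2*f for m = -1,
# and do = 11*f, di = 1.1*f for m = -0.1
```

So the numbers above check out: equal sensor and pattern sizes put the lens at 2*f on each side, while a 1/10-size sensor needs the lens 11*f from the pattern and 1.1*f from the sensor, and the cone of light reaching the sensor is correspondingly narrower.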

edit: The changing m is what I meant when I was talking about the image circle. For those who don't want to wrap their heads around the math, you've seen this in real-world shooting: if you have a prime lens and switch from DX to FX, you need to move closer to get the same framing (I'm assuming you're taking pictures of a wall here, so perspective doesn't enter the picture like it did in the 70-200 argument). m = -1 was my exaggerated FX and m = -1/10 my exaggerated DX.

Oh come on, it still relates, doesn't it? I'm sure the fine people at DxO Labs know what they're doing, and as long as Nikon cameras hold the top DSLR spots we accept their rankings. Once Canon or Sony get past them, then we can cry foul ball! :^)