Pathetic regurgitation of the band-of-brothers mantra
Posted by David Franklin
Date/Time: 5:59:42 PM, Tuesday, March 06, 2012 (GMT)

Quote

Look, I am not a defender or "fan" of Canon in general, and certainly not of the 5DIII, which I have never even held, much less shot and tested. But I am a working pro who shoots mostly at base or low ISO, sometimes of very technically difficult subjects with difficult lighting and lots of PP. I have never, ever, in tens of thousands of exposures using a 1DsIII (supposedly very close in quality to the 5DII), had a single issue with "banding." I'm not saying that banding is impossible to show after torturing a file to an extent that is beyond the realm of reasonably expected real-life use, perhaps from shooting a very difficult image very, very badly, with no planning or execution of alternate exposures to fall back on. But, in the history of photography, this has been called failure on the part of the photographer, rather than an issue with which to pillory a camera (or sensor, or film) company. Showing banding is apparently a very nice tech hobby for some, but not a terribly important issue in cases of even decent, much less good, photography.

And, by the way, just for the heck of it, I decided to do a small experiment. Because I don't have access to any raw file or truly perfected raw converter for a Canon 5DIII, I decided to look at that paragon of sensor goodness, the Nikon D800. I picked, pretty much at random, the only D800 file I had on my hard drive: the low ISO "Library" sample, an amazing image for its great detail, with very limited deep shadows - not the best example in which to look for banding faults. When I merely applied the 100% shadow lift in PS CS5, looked at the deepest shadow areas (very small areas in an image that is otherwise very evenly lit for mid and low tones), and examined them at 200-300%, voila: what appears to me to be cross-hatched shadow banding.

So let's wait for some real verifiable testing with real verifiable Canon raw files, and consider the extreme nature of the image manipulation, before we pronounce some unverifiable and dubiously achieved judgment on the ultimate quality of 5DIII files.

KeithR

It's basing a conclusion on a questionable analysis of a single image.

On that...

Something that's creasing me up about yer man's assertions is that I've been able to torture-test umpteen D7000/K-5 files into banding of a sort practically indistinguishable from that the pixel-peepers on DPR have dragged out of the 5D Mk III file - so what does that prove about the Sony sensor?

Nothing, Real World.

I've also been able to drag 7D files up by four or five stops without any problems with the banding that some "experts" insist is guaranteed from that camera in such circumstances, and I'll bet a large lump of cash that I'll be able to do likewise with 5D Mk III files.

It's actually a really easy issue to deal with if you simply use the right converter - for example, have you seen how much clean detail Lr 4 can bring out with the Shadows slider? I was playing last night with some 7D files (shrubs in bright light with black shadows, as it happens), and could render the shadow areas almost "daylight" with no banding penalty.

DPP and - of all things - Photodirector, are also excellent in how they deal with shadows without banding.

And who on Earth needs (emphasis intentional) that many stops of shadow recovery anyway?

It's only 4:30 in the morning here, so don't jump all over me, but in terms of a digital image, just thinking logically, there's a very finite white and a very finite black. In Photoshop, for instance, black is 0 and white is 255. You cannot extend that number range any more than that. Given that a stop is, in essence, 100% more light than the prior stop, I suppose camera manufacturers can in effect desensitize files - make the white detail even more subtle, and the shadows even more subtle, giving the illusion of more DR, more stops - but A) there's no getting around a digital file's limits, and B) there will be even less information in each stop's range. Am I right?
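To put rough numbers on that intuition: in a linear encoding, the brightest stop uses half of all the available levels, the next stop half of what's left, and so on, so an N-bit linear file can distinguish at most N stops before the bottom stop is down to a single level. A quick sketch (illustrative only, not tied to any particular camera):

```python
# Levels available in each successive stop of a linear N-bit file,
# brightest stop first. Each stop down halves the remaining levels.
def levels_per_stop(bits):
    top = 2 ** bits
    out = []
    while top > 1:
        out.append(top - top // 2)  # levels occupied by this stop
        top //= 2
    return out

print(levels_per_stop(8))        # prints [128, 64, 32, 16, 8, 4, 2, 1]
print(len(levels_per_stop(14)))  # prints 14
```

This is also why linear RAW files (12 to 14 bits) hold far more shadow information than an 8-bit output file, even before tone curves enter the picture.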

So... what are you doing? Blowing the highlights and measuring, blowing the shadows and measuring, and computing the magnitude between them? If you repeat the same calcs for every 5D3 sample do you get the same thing? If you repeat for every 5D2 image do you get the same 11.2? Or are you just measuring the DR of a single image?

I'm not doing anything - I'm using IR's files. Thankfully they blew the highlights on some specular highlights, so that is where I got the raw saturation levels from. The dark current noise I measured from the masked area of the file that was cut off from light. It seems to be around +/- 0.1 stops across three quick tries on files. Doing the same thing, my 5D2 values happen, by chance, to match DxO exactly to the tenth. Different copies might vary by +/- 0.15 or so, unless you got a really weird copy.
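For anyone trying to follow the arithmetic: the engineering-DR figure being discussed is just log2 of the usable signal range over the dark-noise floor. A minimal sketch with made-up numbers (the saturation, black level, and noise values below are hypothetical, not measurements from IR's files):

```python
import math

# Engineering dynamic range in stops:
# log2((saturation point - black level) / dark-noise standard deviation)
def dr_stops(saturation, black_level, dark_noise_std):
    return math.log2((saturation - black_level) / dark_noise_std)

# Hypothetical 14-bit values: saturation 15000 ADU, black level 1024 ADU,
# dark-current noise standard deviation of 6 ADU.
print(round(dr_stops(15000, 1024, 6.0), 1))  # prints 11.2
```

Note how sensitive the result is to the noise estimate: since the noise term sits inside a log2, being off by a factor of two in the noise floor shifts the answer by a full stop, which is why the choice of "dark" pixels matters so much.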

Setting aside the log function to represent it in stops, I'm just trying to understand the calculation.

Correct me if I'm wrong: masked area = some part of the sensor that is physically blocked from light but still records brightness values. By definition, this would be the darkest part of any exposure.

So when you say you're measuring noise, are you reading random values from that black area (which in theory should be 0) and determining the minimum level at which noise no longer occurs?
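For what it's worth, the usual way to turn a dark patch into a noise figure is not to find a level "at which noise no longer occurs," but to take the standard deviation of the pixel values around their mean (the mean being the black level). A toy sketch with invented ADU values, not data from any actual file:

```python
import statistics

# Hypothetical raw values sampled from a dark/masked region of a sensor.
dark_patch = [1021, 1025, 1024, 1027, 1022, 1024, 1026, 1023]

black_level = statistics.mean(dark_patch)   # the "zero" offset
read_noise = statistics.stdev(dark_patch)   # spread around that offset

print(black_level, read_noise)
```

The black level is an offset to subtract; the standard deviation is the noise floor that goes into a DR calculation.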

Correct me if I'm wrong: masked area = some part of the sensor that is physically blocked from light but still records brightness values. By definition, this would be the darkest part of any exposure.

Allow me to correct you. Although the OP certainly implies that is the case, the 'masked area' the OP is referring to is not physically blocked from light - lenses project an image circle, and for an EF lens, a FF sensor is the largest 3:2 rectangle that can be inscribed within that circle, such as the red box below:

So, what he's calling the 'side masking area' - the regions to the left and right of the red box - are actually being illuminated by light from the lens. Why, then, are those regions 'black' in the RAW file? Because those regions of the sensor are electronically turned off (technically, set to an arbitrary value). Given that, I remain unconvinced that the method described by the OP is valid.
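The inscribed-rectangle geometry is easy to check: a rectangle inscribed in a circle has the circle's diameter as its diagonal, so for a 3:2 aspect ratio the sides come out to 3d/sqrt(13) and 2d/sqrt(13). A quick sketch (the 43.27 mm circle diameter below is just the full-frame diagonal, used for illustration):

```python
import math

# Largest 3:2 rectangle inscribed in a circle of diameter d:
# the rectangle's diagonal equals d, so scale a 3x2 box accordingly.
def inscribed_3_2(d):
    k = d / math.hypot(3, 2)  # hypot(3, 2) == sqrt(13), the unit diagonal
    return 3 * k, 2 * k

w, h = inscribed_3_2(43.27)
print(round(w, 1), round(h, 1))  # prints 36.0 24.0
```

Which is exactly the 36x24 mm full-frame format, so any sensor area outside that rectangle but still inside the image circle is indeed receiving light.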

Those regions are being illuminated, but what's there to record those values beyond the area of the sensor?

Does the sensor physically extend beyond the 36x24 area, with a band of pixels outside the perimeter disabled? I guess that makes sense from a design standpoint (to ensure you get the full desired frame, you cut the sensor a little larger and hold back the outliers).

I'd never really given a lot of thought to sensor design before. Thanks for bearing with me.

Numbers, numbers, numbers. Go outside and take pictures. If you are happy and your customer is happy, then who cares if the DR is 10 or 11?

The same story every time a new camera is about to be released.

Guys, I agree with both of you - it's all about the image.

Now let's get it straight. This is the Canon Rumors forum - a specific brand's product discussion point. It's not about the trade, it's about the tool. If I want to talk about photography, I'll go and discuss it in a photography forum. If I want to talk about the tools, then here is the right place.

Those regions are being illuminated, but what's there to record those values beyond the area of the sensor?

Does the sensor physically extend beyond the 36x24 area and they disable the band of pixels outside the perimeter? I guess that makes sense from a design standpoint (to ensure you get the full desired frame, you cut sensor a little larger and hold back the outliers).

No, technically the sensor is physically 36x24mm (well, approximately 36x24 as Canon states), but not all of that is active space - the edges aren't used. So, the 5DIII is '22 MP' but if you look at the detailed specs, it's actually specified as 22.3 million 'effective pixels' but 23.4 million 'total pixels'. It's those extra 1.1 million pixels (the non-effective ones) that the OP is sampling from in the 'side masking area'. The point is, though, that they are pixels which are being illuminated, but not read out in the RAW file - so, I don't see how it's valid to assume those pixels are representative of image pixels exposed to no light.

Well, for whatever reason, if they do indeed merely use software to black out border pixels that have been exposed, then it seems those pixels have little bearing on actual DR. As they say in engineering analysis: garbage in = garbage out.

Maybe there is a correlation between actual exposed black and what value Canon chooses to apply to those pixels, however, in which case the method may translate to actual DR.

(And if you look at all the Canon press, they kept talking about it being better... at mid and high ISOs. I was hoping they just forgot to mention improved low ISO, but I guess they didn't mention it because they didn't do much there this time. At least the high ISO stuff should be better, though.)

Just shoot at a higher ISO - moar DR, problem solved. There's more than one way to starch your nose.

No, technically the sensor is physically 36x24mm (well, approximately 36x24 as Canon states), but not all of that is active space - the edges aren't used. So, the 5DIII is '22 MP' but if you look at the detailed specs, it's actually specified as 22.3 million 'effective pixels' but 23.4 million 'total pixels'. It's those extra 1.1 million pixels (the non-effective ones) that the OP is sampling from in the 'side masking area'. The point is, though, that they are pixels which are being illuminated, but not read out in the RAW file - so, I don't see how it's valid to assume those pixels are representative of image pixels exposed to no light.

This is exactly correct. All Canon sensors have two areas of "black masked pixels" at the left and right edges of the sensor. Based on the CR2 format, there are three columns of pixels on each side. There have been many reports that the base value of these pixels is fixed at 1024, with a typical deviation of about 3, itself varying by 2 to 3 units (so 0 to 6 units in all); you might therefore see values here between 1018 and 1030. The deviation around 1024 may be due to read noise - I can't say for sure. Either way, using the masked pixels to determine black level will most likely not produce valid results. (Why Canon does this, or exactly how these masked pixels are intended to be used, is not entirely known; the general assumption is that they are for calibration purposes. If one were willing to dig into Canon's public RAW processing algorithms, the reason could probably be determined. I have not done so myself, and none of the resources I've found have any definitive information about exactly how these masked pixels are used.)

Correct me if I'm wrong: masked area = some part of the sensor that is physically blocked from light but still records brightness values. By definition, this would be the darkest part of any exposure.

Allow me to correct you. Although the OP certainly implies that is the case, the 'masked area' the OP is referring to is not physically blocked from light - lenses project an image circle, and for an EF lens, a FF sensor is the largest 3:2 rectangle than can be inscribed within that circle, such as the red box below:

So, what he's calling the 'side masking area' - the regions to the left and right of the red box - are actually being illuminated by light from the lens. Why, then, are those regions 'black' in the RAW file? Because those regions of the sensor are electronically turned off (technically, set to an arbitrary value). Given that, I remain unconvinced that the method described by the OP is valid.

Ok, but the sensor is 36x24mm, so whatever light hits the sensor is your image. The sensor is not round. Again, correct me if I'm wrong.