The P30 is new to me, and I found Phase One's labeling of a RAW file as ".tif" strange. I also found that, because of this, Breeze Browser doesn't handle the files correctly. Both ACR and C1 v4 seem to handle the conversion OK. I am liking the added dynamic range of the P30; shadows are much cleaner than on the 5D, but getting colors looking good is a bit of a challenge.

Marc


Marc,

I also downloaded your images of the P30, 5D and G9. Thanks! I've played around with them a bit and tried various processing methods, and the exercise has been very informative.

First just a few comments on choice of lens and matching FoV.

(1) You've matched the vertical heights between the P30 and 5D, though the P30 is tilted down a bit. That's fine, although perhaps it would have been kinder to the 5D to match the horizontal dimensions of the FoV because of the huge difference in pixel count between these two cameras. In other words, the P30 can afford to be converted to a 35mm aspect ratio.

(2) You've adopted a similar approach matching the FoV of the G9 in the sense that you've given the G9 a disadvantage in relation to the 5D in the same way that the 5D is at a disadvantage in relation to the P30. In other words, you've matched the horizontal FoVs between the G9 and the 5D, which effectively crops the G9 aspect ratio to that of the 5D for comparison purposes.

This is not at first clear because you've swung the G9 FoV a bit to the right.

(3) You've matched the f-stops okay, choosing f6.3 for the 5D and f9 for the P30. That's close enough, although theoretically it should be f9.45, but that can't be helped because we don't have such precise f-stop settings.

Unfortunately, such matching is invalid because you haven't matched the focal lengths. You've used a 48mm lens with the 5D and a 55mm lens with the P30.

If you are matching vertical heights (the shorter dimension) the ratio between the two formats is 36/24 = 1.5. This is the multiplier to convert both FL and f stop to the larger format, if shooting from the same distance.

Having used a 48mm lens with the 5D, you should be using a 72mm lens with the P30, or as close as practicable.
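The conversion used in this discussion, matching the shorter dimension from the same distance so that both focal length and f-number scale by 36/24 = 1.5, can be sketched in a few lines (the helper name is mine; the numbers are the ones quoted above):

```python
# Sketch of the format-equivalence arithmetic described above: when matching
# the vertical FoV from the same shooting distance, both focal length and
# f-number scale by the ratio of the sensors' short sides (36/24 = 1.5 here).

def equivalent_settings(focal_mm, f_number, short_small_mm, short_large_mm):
    crop = short_large_mm / short_small_mm
    return focal_mm * crop, f_number * crop

# 48mm at f/6.3 on the 5D corresponds to roughly 72mm at f/9.45 on the P30:
fl, fn = equivalent_settings(48, 6.3, 24, 36)
print(f"Use about a {fl:.0f}mm lens at f/{fn:.2f}")
```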

The consequence of this effective mis-match of lenses is that the P30 image is clearly OoF at and beyond the kitchen window. However, in this particular case it's not a serious issue because of the huge discrepancy in pixel count between the two cameras. We're not primarily comparing resolution here.

(Edit: I should put in a further word of explanation here. Because you've not matched FoV through a choice of appropriate FL, shooting from the same distance, but have shot from a closer distance with the P30, the f stops chosen are not appropriate to give equal DoF.)

(4) With regard to dynamic range, it's clear to me that the 5D is on a par with the P30. The shadows appear to be showing the same amount of broad detail, except the P30, with a much higher pixel count, is revealing finer detail.

Perhaps this is a good example of what Jonathan Wienke has been trying to get across with his method of photographing his DR Test Chart.

However, this doesn't really make sense, does it? The 5D having a DR equal to the P30? There has to be some other factor which is skewing the result. And so there is.

The P30 image appears to be at least 1 stop underexposed, according to ACR. The 5D appears to be also underexposed but only very slightly, by about 1/4 of a stop, as is the G9.

So one could deduce from this that the P30 has around one stop more DR than the 5D.

As one would expect, the G9 image is clearly noisier than the 5D image, but still a good result considering the huge difference in sensor size.

Here are a few 100% crops comparing the P30 with the 5D and the G9 with the 5D.

I first processed the images (using identical adjustments for each image) so that the shadows are unnaturally light and one can see all the detail there is to be seen. I converted all images in ACR using exactly the same settings regarding WB, linear contrast curve, zero contrast and zero shadows, no NR and no sharpening. But I did apply an auto-adjust first to each image as a starting point, to get the exposures looking similar.

The following images are self-explanatory. They all have titles. Maximum quality (minimum JPEG compression) has been applied.

Quote

So one could deduce from this that the P30 has around one stop more DR than the 5D.

You can't deduce anything about DR with these images, because you don't know where the top of the range is. You need to look at the RAWs, see where they clip, and see at what levels the details get lost. You can use conversions, but the RAWs are much simpler without any tone curves and clipping artifacts.

Ray,

If I had time I'd use Jonathan's test; perhaps in the future. Give me the f-stops and focal lengths appropriate for all 3 cameras and I'll reshoot the three. I was using the 2 kitchen posts as a framing guide. I ETTR'd until minor clipping (on the rear display) on all 3, with different results. Should I expose more into clipping?

Marc

OK, I'll agree to that (as a means to determine if filtering has occurred, even if not to the rest of your reasoning for testing pixels as the definitive DR), but in a way, the *image* resolution is still the bottom line. For example, with a 1.28GP camera, would you care if the noise was filtered as much as you would with a 1.3MP camera?

Probably not. But the real issue with your logic is that binning or downsampling isn't really that effective a means to increase DR. It's a gimmicky trade-off that is only acceptable in a small subset of circumstances.

A Phase One P45+ has less-than-4x the pixel count of the G9, 1Ds, and 5D, and thus has less than a stop's worth of DR advantage over any of those cameras on the basis of additional pixels. Where the real difference lies (several stops worth if the MFDB shooters are credible, and I'm not going to presume to contradict them without tangible evidence) is in the quality of the MFDB pixels vs the smaller-format cameras. The you-can-get-extra-DR-from-extra-pixels argument is true in theory, but in practice, it's bulls**t. In most instances, trading away 75% of your pixels for a measly 1-stop DR increase is a waste of resolution that doesn't solve the DR limitation of your camera anyway. And it relegates the file to Web JPEG and small print usage which may well defeat the purpose of why you bought a camera with more pixels. Buying a camera with better pixels is going to make a bigger difference in image quality than just buying a camera with more pixels.

Comparing cameras on a pixel-quality basis makes the most sense, because when you add additional pixels, you expect to get more resolution and image quality, and in real life, the differences between lens quality, AA filter strength, and sensor quality on individual pixel quality are far more significant than the effect of throwing another megapixel or two at the image to brute-force another 1/4-stop or so of DR. Once you meaningfully and consistently evaluate per-pixel quality, comparing total image quality is a simple matter of multiplying pixel quality by pixel quantity.

I analyzed the three images. It would have been better to keep and post series of shots 1/3 EV apart, so that one can bring them to a common denominator with the highest possible exposure without "exposure correction" of the raw data.

I "equalized" the exposures with +2/3 EV for the P30, 0 EV for the 5D and +1/2 EV for the G9. This brings them to the same level at highlights close to the clipping.

Increasing the exposure of the raw data is not fair, but the adjustments are not high. Note that the same adjustment would appear as +1 2/3 EV in ACR, because it reduces the exposure by 1 EV for whatever reason.

I created a many-layered TIF with the screen captures of the analysis (30 megabytes); it can be downloaded from here

It requires lots of explanation; I make that effort only if someone is interested in it.

My findings are:

1. not only that the G9 is no match for the 5D, but the 5D is very far from the P30,

2. the difference between the P30 and the 5D is not due to the dynamic range (I don't see a big difference there), but in the details,

3. I think (but I am not sure yet) that the 5D's problem is the lack of levels. It generates only about 3570 levels over a relatively large dynamic range, in contrast to the 65536 levels of the P30. The latter is probably overkill, but the former is a show stopper.

4. the 5D creates less noise than the P30. I am not sure if that is due to the lack of levels, to some in-camera noise reduction, or to both. Anyway, the lower noise level on its own would indicate a high dynamic range (a measurement with the Stouffer wedge would probably show a high DR for the 5D), but that is eyewash.

Jonathan's sample image would be more helpful in this situation, because that accounts for the presence of details as well.

--------------------

Some layers of the file deal with a particular detail, which makes the 5D appear as a P&S camera beside the P30. It would be better if the fields of view were the same. The field of view in the 5D's image is larger than that of the P30, and the P30 has many more pixels, so every object consists of 72% more pixels (linearly!) in the P30 image than in the 5D image. That is not negligible for the details, but I don't think pixel count equalization would make up for the difference in quality.

Quote

It would be better if the fields of view were the same. The field of view in the 5D's image is larger than that of the P30, and the P30 has many more pixels, so every object consists of 72% more pixels (linearly!) in the P30 image than in the 5D image. That is not negligible for the details, but I don't think pixel count equalization would make up for the difference in quality.

Gabor,

The field of view is the same for the P30 and 5D. The problem is that the aspect ratios of the two cameras are different, so the field of view can only be equal in one dimension, which in this case is the height, although the P30 is tilted down more than the 5D.

The great inequality here is the huge difference in pixel count. After the 5D has been cropped to the same aspect ratio as the P30, we're comparing a 10.8mp image with a 30mp image. That's almost equivalent to comparing a 6mp Canon D60 image with a 16.7mp 1Ds2 image.

Quote

I "equalized" the exposures with +2/3 EV for the P30, 0 EV for the 5D and +1/2 EV for the G9. This brings them to the same level at highlights close to the clipping.

I've had another look at the exposure levels in Camera Raw 4.3 and my figures are slightly different but broadly in the same direction. Basically the 5D image has the greatest exposure, the G9 second and the P30 last.

However, it appears that the 5D shot is very slightly overexposed. Examining the brightest part of the image, the horizontal white rail outside, the RGB values are mostly identical, like 226, 226, 226, which tends to indicate that either the white post is a perfectly neutral grey, unlikely, or one or more of the channels is blown (more likely) and ACR has done the usual great reconstruction job.

As can be seen in the screen captures below, getting the histograms looking the same in all 3 images requires a -0.75EV adjustment for the 5D, a +1.0EV adjustment for the P30 and a -0.20EV adjustment for the G9.

Making an assessment that the 5D image is overexposed by 1/4 of a stop and assuming that a P30 shot fully exposed to the right would require a -0.50EV adjustment, I'd give the P30 a 1.5 stops DR advantage over the 5D.

The G9 could probably be given an extra 1/4 stop exposure, but the fact is, one can't be this precise in the field. In my opinion, the exposures for both the 5D and G9 are pretty accurate ETTRs; as close as matters. It's the P30 which is definitely underexposed.

Quote

It requires lots of explanation; I make that effort only if someone is interested in it.

I'll download your 30MB analysis, but my feeling is one has to expose in relation to the RAW converter one prefers to use. In my case, that's ACR. An overexposure is fine if the image looks right. If the important detail is in the shadows, then the fact that a white post seen through the window appears a perfectly neutral grey (white) and has lost a bit of barely perceptible tonality is unimportant.

Quote

The field of view is the same for the P30 and 5D. The problem is the aspect ratio of the two cameras is different so the field of view can only be equal in one dimension, which in this case is the height, although the P30 is tilted down more than the 5D.

Due to the difference in the aspect ratios AND to the landscape orientation of the shots, they can be brought closer to each other if the *horizontal* FoVs match. In these shots the 5D's horizontal FoV is 13% more than that of the P30. Of course, the bulk of the difference comes from the pixel count.

Quote

However, it appears that the 5D shot is very slightly overexposed. Examining the brightest part of the image, the horizontal white rail outside, the RGB values are mostly identical, like 226, 226, 226, which tends to indicate that either the white post is a perfectly neutral grey, unlikely, or one or more of the channels is blown (more likely) and ACR has done the usual great reconstruction job.

The first three layers in the TIFF I posted indicate the clipping *after the exposure equalization*. As I left the 5D unchanged, the extra colors show the original clipping. The clipped pixels have been substituted by null in these images; magenta indicates the lack of green, and red shows that both the blue and the green clipped.

Quote

As can be seen in the screen captures below, getting the histograms looking the same in all 3 images requires a -0.75EV adjustment for the 5D, a +1.0EV adjustment for the P30 and a -0.20EV adjustment for the G9

As I posted, ACR is fooling you with the exposure. The +1 EV for the P30 brings it to 0 EV. Unfortunately, ACR does not indicate at all that it has made a pre-adjustment, in this case -1 EV.

Quote

It's the P30 which is definitely underexposed

The P30 shot was as close to the right as possible. There was no clipping at all, but a 1/3 stop higher exposure would have resulted in 0.15% of the greens clipping, all of that on the rail. In a real photo situation, if shot with exposure bracketing, I would ignore this small clipping and pick the 1/3 stop higher exposure.

Quote

As I posted, ACR is fooling you with the exposure. The +1 EV for the P30 brings it to 0 EV. Unfortunately, ACR does not indicate at all that it has made a pre-adjustment, in this case -1 EV.

The P30 shot was as close to the right as possible. There was no clipping at all, but a 1/3 stop higher exposure would have resulted in 0.15% of the greens clipping, all of that on the rail. In a real photo situation, if shot with exposure bracketing, I would ignore this small clipping and pick the 1/3 stop higher exposure.

If that really is the case, it's difficult to explain why the DR of the P30 does not appear to be better than that of the 5D, unless it's due to the fact that the P30 was not used at base ISO, which I assume is ISO 50. Using the P30 at ISO 50 would give it at least one stop more DR than the 5D at ISO 100 (which is really ISO 125), would it not?

Quote

If that really is the case, it's difficult to explain why the DR of the P30 does not appear to be better than that of the 5D, unless it's due to the fact that the P30 was not used at base ISO, which I assume is ISO 50. Using the P30 at ISO 50 would give it at least one stop more DR than the 5D at ISO 100 (which is really ISO 125), would it not?

What I have learned from this exercise is that the 5D (9 micron photosites) has better, or the same, pixel dynamic range (14 stops according to Clarkvision) but, with a 12-bit A/D, fewer gradations. The P30 (7.2 micron photosites) has lower pixel dynamic range (or the same) but, with a 16-bit A/D, more gradations. However, the per-frame dynamic range of the P30 is better than the 5D's because of its better resolving power. Until now I had never correlated dynamic range to resolution and detail. The net result is better IQ from the P30: ((pixel quality + bit depth) x pixel quantity).

Marc

Quote

What I have learned from this exercise is that the 5D (9 micron photosites) has better, or the same, pixel dynamic range (14 stops according to Clarkvision) but, with a 12-bit A/D, fewer gradations. The P30 (7.2 micron photosites) has lower pixel dynamic range (or the same) but, with a 16-bit A/D, more gradations. However, the per-frame dynamic range of the P30 is better than the 5D's because of its better resolving power. Until now I had never correlated dynamic range to resolution and detail. The net result is better IQ from the P30: ((pixel quality + bit depth) x pixel quantity).

That's interesting! I wonder if Jonathan Wienke would like to comment on this assessment of the situation.

I can tell with certainty that the higher number of levels of the P30 does not play any role here. This does not mean that more levels are not useful in interpolation-aggressive post-processing.

Following is a layered comparison of the P30 image. There are three layers, all in channel view, no de-mosaicing.

The crop shows part of the kitchen island with some bottles; the labels on the bottles are illegible without brightness adjustment, as one layer shows. Another layer is the same with a +6 EV adjustment, and a third is with +6 EV too, but it is not the same image: the 65536 levels have been reduced to 4096.

There is no relevant difference between the versions with the higher and the lower bit depth. (The version with 4096 levels is somewhat darker in the dark regions; this is natural, because +6 EV made those bits effective which do not make any difference without a huge brightness adjustment, and which are now zero in the manipulated image.)

As to the 5D: unfortunately no closer comparison can be made, for the focusing of that shot is very different. The P30 shot is focused on one of the chairs at the table, while the 5D shot is focused on the window frame; this makes everything on the table blurry in the 5D shot. But one thing is clear: the noise of the 5D is very low, even in the very darkest region.

A more conclusive comparison could be made by focusing on fine details, which then become very dark.

1. Re the framing: there are never-ending debates about what to compare: whether the field of view should be equal or the number of pixels on selected objects. Which one is "right" depends on one's intention with the image. I don't see this as important from the point of view of DR, though the preserved details of course depend on the pixel count over such details.

2. The highlights are necessary only to identify the top of the brightness for the DR; fine details are not important.

3. The deep shadows should include some uniformly lit, uniformly colored, smooth surface in order to be able to determine the noise in terms of standard deviation. Focusing on such spots is irrelevant. For example, the shaded side of the metal pan with the snowflake-like pattern (on the kitchen island, behind the mandarins) is suitable for that.

On the other hand, some fine details in the very dark areas are important, as *focused*, so that the other aspect, namely retaining details, can be compared. The label on the salt dispenser is a good example for that (but it was out of focus on the 5D shot).

4. The shutter speed is irrelevant, as long as it does not go into a region which causes extra noise. The apertures should be selected so that the sharpness of the shots, independent of the dynamic range, is close (you don't want to compare the quality of the lenses in this test, do you?).

5. It would help if you shot several frames with 1/3 EV apart and uploaded all of them, so that a pair can be selected with the closest exposure of the highlights.

I would not bother with the G9. I don't think one can gain any interesting information from it. Not only is it in a different class, but the same spots which are useful when comparing the P30 and the 5D may be totally out of the range of the G9.

Marc,

If you are willing to do the test again, why not match horizontal FoVs this time? All comparisons between 35mm and DBs I've seen so far effectively crop the smaller 35mm sensor to the same aspect ratio as the larger sensor, as though a 4:3 aspect ratio is always better or preferred.

If you are matching horizontal FoVs (36mm 5D sensor versus 48mm) then you should use the multiplier of 1.33 for both focal length and f stop, shooting from the same position.

Quote

Probably not. But the real issue with your logic is that binning or downsampling isn't really that effective a means to increase DR. It's a gimmicky trade-off that is only acceptable in a small subset of circumstances.

I don't know where you get the idea that I am advocating binning or downsampling. I think images should be left in their original RAW resolution until just before they get forced into a display.

My main point throughout all of this has been that binning and downsampling are BS. They throw away resolution, and gain nothing of value. Ditto for bigger pixels, in the same size sensor.

Perhaps you didn't see the numbers in the crop of Ray's under-exposure, and binned versions of it, which I linked to in that other thread, but at least you should have seen that all the versions looked to have about the same amount of noise, even though the standard deviations varied wildly. My point was that image noise does not equal pixel noise, and image DR does not equal (nor is it solely limited by) pixel DR.

Quote

A Phase One P45+ has less-than-4x the pixel count of the G9, 1Ds, and 5D, and thus has less than a stop's worth of DR advantage over any of those cameras on the basis of additional pixels. Where the real difference lies (several stops worth if the MFDB shooters are credible, and I'm not going to presume to contradict them without tangible evidence) is in the quality of the MFDB pixels vs the smaller-format cameras.

That's what I've been saying all along, but several stops? No way. 4x as many pixels means about 1 stop more DR, AOTBE (same pixel quality).

Quote

The you-can-get-extra-DR-from-extra-pixels argument is true in theory, but in practice, it's bulls**t. In most instances, trading away 75% of your pixels for a measly 1-stop DR increase is a waste of resolution that doesn't solve the DR limitation of your camera anyway.

Again, I'm not advocating binning or downsampling, and haven't in a couple of years or so, since I realized that they were false economy (unless the optics are so poor that it incurs no significant loss of detail, in which case it saves you storage without much compromise).

However, if you're going to compare two cameras, one with 4x the pixel density of the other in the same size sensor, then a 2x2 binning or downsampling of the 4xMP camera will, in all likelihood, have the same shot noise as the other, but as little as 50% of the read noise. Bigger pixel counts are just dirtier to read, relative to the number of captured photons, period. The range of read noises, at least at low ISOs, is not that great amongst cameras, and there is no strong correlation to pixel size. Some of the highest read noises in electrons are in DSLRs, like the D2X, not in compact P&S sensors.
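This claim can be checked with a toy Monte Carlo. All noise figures below are made up for illustration, and the "as little as 50%" outcome assumes the small pixels read at 1/4 of the big pixels' read noise, since four reads add in quadrature:

```python
import numpy as np

rng = np.random.default_rng(42)

n = 1_000_000       # number of output pixels simulated
photons = 4000      # mean photons collected per big pixel (illustrative)
read_big = 8.0      # big-pixel read noise, electrons (illustrative)
read_small = 2.0    # small-pixel read noise, 1/4 of the big pixel's

# Big-pixel sensor: one read per site.
big = rng.poisson(photons, n) + rng.normal(0.0, read_big, n)

# Small-pixel sensor: 4 sites each collecting 1/4 the photons, then 2x2 binned.
small = (rng.poisson(photons / 4, (n, 4))
         + rng.normal(0.0, read_small, (n, 4))).sum(axis=1)

# Shot noise is identical (same total photons). Binned read noise adds in
# quadrature: sqrt(4) * 2.0 = 4.0 e-, i.e. 50% of the big pixel's 8.0 e-.
print("big-pixel  std:", big.std())    # ~ sqrt(4000 + 8**2) = 63.75
print("binned 2x2 std:", small.std())  # ~ sqrt(4000 + 4**2) = 63.37
```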

Quote

And it relegates the file to Web JPEG and small print usage which may well defeat the purpose of why you bought a camera with more pixels. Buying a camera with better pixels is going to make a bigger difference in image quality than just buying a camera with more pixels.

Not at all. I keep hearing you and others say this, but no one has ever done a thing to prove it, except for proving something else, instead (like Roger Clark's S60 vs 1Dmk2 comparison, which proves that bigger sensors can capture more light, and tells absolutely nothing about pixel density values).

Yet, when I cut to the heart of the matter, with my binnings of Ray's under-exposure, or my equal-sized crop comparisons of DSLR vs FZ50, everyone just shrugs their shoulders, doesn't bother to think about what they mean, and then a few days later repeats the myths which my demos should have destroyed.

Quote

Comparing cameras on a pixel-quality basis makes the most sense,

It makes sense, but it is meaningless unless you also include the quantity as part of the specs, and if the reader of the specs understands that pixel quality in and of itself is worthless. You need a significant number of quality pixels for them to mean anything at all, and more lower-quality pixels can result in a higher quality image.

I've never implied that pixel quality should not be measured; what I have stressed is that it does not describe or limit the image quality. Therefore, I would not recommend just testing the pixels as you suggest. Such a test should also be accompanied by image-level testing.

Quote

because when you add additional pixels, you expect to get more resolution and image quality, and in real life, the differences between lens quality, AA filter strength, and sensor quality on individual pixel quality are far more significant than the effect of throwing another megapixel or two at the image to brute-force another 1/4-stop or so of DR.

Lens quality puts a limit on image MTF, but you can oversample the optics by a good margin before there are no remaining benefits. Oversampled images are the easiest to correct for CA, perspective and geometric distortion, rotation, etc, and they result in finer CFA artifacts and reduce the negative aspects of AA filtering.

What you call "brute force" is really just better design. Your negative connotations are an illusion. More pixels on the same sensor size do not come at the expense of any of the qualities you mention, only at the expense of pixel quality, which does not have to come at the expense of image quality.

Quote

Once you meaningfully and consistently evaluate per-pixel quality, comparing total image quality is a simple matter of multiplying pixel quality by pixel quantity.

No, it is a matter of the square root of pixel quantity. IOW, if you double the amount of pixels, then you can take an increase of a half stop in pixel noise and maintain the same image noise, and the same DR, or with the same pixel quality, you can get 1/2 stop more DR or less noise.
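The square-root scaling claimed here reduces to one line of arithmetic (this is just a numerical restatement of the claim, not a measurement):

```python
import math

# If image-level noise ~ pixel noise / sqrt(pixel count) at a fixed output
# size, each doubling of pixel count buys sqrt(2) less image noise, i.e.
# half a stop of DR, all other things being equal.
for factor in (1, 2, 4):
    stops = 0.5 * math.log2(factor)
    print(f"{factor}x pixels -> {stops:.1f} stop(s) of DR gained")
```

Note this also reproduces the "4x pixels means about 1 stop more DR" figure used earlier in the thread.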

You seem to be able to comprehend better image quality by extending the pixels, but you think that with lower pixel quality, you can never extend the pixels enough to get image quality as good as or better than an image with even slightly better pixels? That is mathematically and physically preposterous. You seem to be stuck on the idea that a pixel's quality has an absolute limit as to what it can do for an image. That's just a pure falsehood.

We look at things all day long with individual photons striking surfaces in random locations, and it is not considered to be noise, nor do we consider it to limit DR. It *is* what is really there. Our retinas bin photons out of convenience, but we really don't need for them to be "pre-binned" with a loss of information (which is what is really happening with big pixels with low pixel noise and high pixel DR); our cameras do that out of necessity, because of storage/transfer limitations and physical obstacles, and a lack of vision on the part of designers as well. We won't know how much until the physical obstacles are not as much of an issue.

Quote

What I have learned from this exercise is the 5D (9 micron photosites) has better or the same pixel dynamic range (14 stops according to Clarkvision) but with a 12-bit A/D fewer gradations.

That's never available to the user, of course; that's just a projection of what ISO 50 would be like if it had the same read noise in electrons as ISO 1600 does, but it doesn't; not by a long shot. The read noise in electrons is about 15x as high at ISO 50 as it is at ISO 1600. That's because there are noise components, introduced after well capture in the readout process and ADC, that are not functions of gain.

12 bits is not a limit to DR in the 5D; the analog noise added in readout is. The analog noise is too great for 12 bits to limit DR.

If cameras only had noise from photon counting statistics, DR would be proportional to the total number of photons that could be captured.

Quote

My point was that image noise does not equal pixel noise, and image DR does not equal (nor is it solely limited by) pixel DR.

Quote

However, if you're going to compare two cameras, one with 4x the pixel density of the other in the same size sensor, then a 2x2 binning or downsampling of the 4xMP camera will, in all likelihood, have the same shot noise as the other, but as little as 50% of the read noise. Bigger pixel counts are just dirtier to read, relative to the number of captured photons, period. The range of read noises, at least at low ISOs, is not that great amongst cameras, and there is no strong correlation to pixel size. Some of the highest read noises in electrons are in DSLRs, like the D2X, not in compact P&S sensors.

There may be but a weak correlation between pixel size and read noise expressed in electrons. However, when we look at an image, we are interested in noise as expressed in data numbers (DN), or pixel values. Again, looking at Roger Clark's data for the Canon 1DMII and Canon S70, both at ISO 100, we see that read noise in electrons is 13.02 for the 1DMII and 1.03 for the S70. However, the camera gain is much greater for the 1DMII and, when expressed in 16-bit data numbers, the noise is 41 for the 1DMII and 267 for the S70. See Table 4 in Roger's post: http://www.clarkvision.com/imagedetail/does.pixel.size.matter/index.html
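The conversion behind those numbers is a single division: noise in DN = noise in electrons / gain (electrons per DN). The gains below are back-computed from the quoted figures rather than taken from the table, so the snippet only checks internal consistency of the cited values:

```python
# Read noise expressed in output data numbers (DN) depends on camera gain:
#   noise_DN = noise_e / gain_e_per_DN
def noise_in_dn(read_noise_e, gain_e_per_dn):
    return read_noise_e / gain_e_per_dn

# Gains in e- per 16-bit DN, back-computed from the figures quoted above:
gain_1dm2 = 13.02 / 41
gain_s70 = 1.03 / 267

# Low electron noise with very low gain still yields high noise in DN:
print(round(noise_in_dn(13.02, gain_1dm2)))  # 41
print(round(noise_in_dn(1.03, gain_s70)))    # 267
```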

Quote

Yet, when I cut to the heart of the matter, with my binnings of Ray's under-exposure, or my equal-sized crop comparisons of DSLR vs FZ50, everyone just shrugs their shoulders, and don't bother to think about what they mean, and then a few days later repeat the myths which my demos should have destroyed.

Quote

It makes sense, but it is meaningless unless you also include the quantity as part of the specs, and if the reader of the specs understands that pixel quality in and of itself is worthless. You need a significant number of quality pixels for them to mean anything at all, and more lower-quality pixels can result in a higher quality image.

On the usenet, Roger summed up your arguments best:

"There are a number of flaws in your argument, and you present no actual data to prove your position. You simply state results, but again, with no actual data to prove your position.

Let's take your small pixel to a logical end: pixels so small the well depth is 1 photon (electron), and with read noise of 1 electron. So every pixel has a maximum signal-to-noise ratio of 1; dynamic range is 1."

Your arguments are not convincing because you do not fully describe your methods or provide data to back up your assertions. You make assertions that no one can comprehend, so they ignore them, and then you repeat the same assertions that were previously ignored. I would like to see your mathematical model for taking pixel quantity into account. Certainly, in the case Roger cited, an S:N of 1:1 and a dynamic range of 1:1 for the large-pixel-count camera is not impressive.

Quote

No, it is a matter of the square root of pixel quantity. IOW, if you double the amount of pixels, then you can take an increase of a half stop in pixel noise and maintain the same image noise, and the same DR, or with the same pixel quality, you can get 1/2 stop more DR or less noise.

No, I am correct. You're completely misunderstanding my 0-1 Pixel Quality scale; specifically that it is NOT the same as the noise level. This scale is defined from an information theory perspective where pixel quality = 1 when the actual image data (as opposed to noise, sharpening artifacts, etc) is expressed in the smallest number of pixels possible without losing any true image detail. If we call this number the Effective Pixel Count, and Pixel Count is the number of pixels in the original image, then

(Pixel Quality) = (Effective Pixel Count) / (Pixel Count)

And when Pixel Quality is defined this way, then the equation

(Image Quality) = (Pixel Quality) * (Pixel Count)

is perfectly mathematically valid. If you double the Pixel Count while maintaining a constant Pixel Quality level, you now have twice the resolved image detail in the image, and can double the area of a print while maintaining the same image quality per unit of print area. You cannot ignore Pixel Quality when trying to quantify Image Quality any more than you can ignore aperture when trying to quantify exposure. In the same way that shutter speed, aperture, and ISO/sensitivity are all equally important aspects of exposure, Pixel Quality and Pixel Quantity are equally important aspects of Image Quality.
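The two definitions above translate directly into code; note that under them, Image Quality works out to be simply the Effective Pixel Count (this is a restatement of the equations, nothing more):

```python
def pixel_quality(effective_pixel_count, pixel_count):
    """Pixel Quality = Effective Pixel Count / Pixel Count (0 to 1)."""
    return effective_pixel_count / pixel_count

def image_quality(effective_pixel_count, pixel_count):
    """Image Quality = Pixel Quality * Pixel Count."""
    return pixel_quality(effective_pixel_count, pixel_count) * pixel_count

# Doubling Pixel Count at constant Pixel Quality doubles Image Quality:
base = image_quality(8_000_000, 10_000_000)      # Pixel Quality 0.8
doubled = image_quality(16_000_000, 20_000_000)  # Pixel Quality still 0.8
print(doubled / base)  # 2.0
```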

Noise level affects Pixel Quality, but not in a linear manner. At low levels, the effect of noise on Pixel Quality is minimal. But once noise starts compromising resolution, then each additional doubling of the noise level decreases Pixel Quality by a factor of four.

Other things affect pixel quality besides noise; lens quality, the anti-aliasing filter, Bayer interpolation artifacts, camera shake, focusing errors, etc. What I am trying to do is devise a way to quantify the net effect of all these things on Pixel Quality in as objective a manner possible. Once Pixel Quality has been quantified, comparing the product of Pixel Quality and Pixel Count between two cameras is trivial.

Quote

Again, I'm not advocating binning or downsampling, and haven't in a couple of years or so, since I realized that they were false economy (unless the optics are so poor that it incurs no significant loss of detail, in which case it saves you storage without much compromise).

Again, you are wrong. That is exactly what you are doing in effect if you shoot my DR test chart full-frame and use the smallest-text-legibility threshold test, because the only way you can achieve the DR you'll measure using that methodology in actual practice is to downsample the entire image down to 800 pixels in the smallest dimension. Here's why:

For the purpose of dynamic range testing, I'm proposing a Pixel Quality value of ~0.25 as the criterion for defining the noise floor, as the normal statistical method (S/N ratio) is wildly optimistic for predicting photographically useful dynamic range. With the center square 100 pixels wide, the legibility threshold of the smallest chart text (as shown in this image) represents a Pixel Quality of about 0.25 and an Effective Pixel Count of 160,000 (400x400) for the 4 quadrants of the chart. The part I don't think you understand is that if you shoot the chart full-frame and use the smallest-text legibility threshold to define your noise floor, the Effective Pixel Count of the chart quadrants will remain at ~160,000, no matter how many pixels are actually on the chart in the capture. If you shoot the chart full-frame, you're skewing the test results to the point where they are no longer meaningful.
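Numerically, the skew works like this. Only the 400x400 = 160,000 figure and the ~0.25 threshold come from the description above; the pixel counts are made-up examples:

```python
# If the smallest-text legibility threshold pins the Effective Pixel Count of
# the four chart quadrants at ~160,000 (400x400), then the Pixel Quality the
# test implies falls as more sensor pixels land on the chart.
EFFECTIVE_PIXELS = 400 * 400

def implied_pixel_quality(pixels_on_quadrants):
    return min(1.0, EFFECTIVE_PIXELS / pixels_on_quadrants)

# 160,000 / 640,000 = 0.25, the intended threshold; framing the chart with
# more and more pixels drives the implied Pixel Quality far below it.
for n in (640_000, 2_560_000, 10_240_000):
    print(n, implied_pixel_quality(n))
```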