By far the biggest factor in the output color response is the spectral transmission of the Bayer color filters. As far as color reproduction goes (*), I've measured some CCDs with great spectral transmission curves and other CCDs with not-so-great ones. The same with CMOS sensors.

Eric

(*) i.e., getting a good match on the so-called Luther-Ives condition

Eric,

Apropos of nothing, you might be amused to hear that for some time, whenever I saw your username "madmanchan" I kept reading it as "Mad Manchán" - Manchán is an old and uncommon Irish forename (pronounced Man-KHAWN). Your signature alerted me to the fact that it's really "Madman Chan"!

Anyway, I'm interested in your statement that you've measured several sensor transmission curves. What's your measurement setup? Have you seen any significant deviations from the corresponding curves in data sheets?

How much "better" would/could the color response have been if one removed the CFA and used a set of 3 purpose-made color filters that were inserted one at a time for 3 separate exposures (assuming that the scene did not run and hide in-between exposures)?

So perhaps the question could be paraphrased: "how big a limitation is it that the spectral filtering carried out by the CFA has to have really small features and be economically/practically feasible?"

I believe that color wheels are commonly used for multi-spectral cameras. I would guess that with 7 or 10 wisely chosen bandpass filters, you would have a lot of options not available to regular cameras.

The difference with an RGB filter array is, of course, that when one filter is in front of the camera, you benefit from photons captured by all the sensels, roughly doubling the practical green QE versus a Bayer mosaic and quadrupling red and blue. This comes, of course, with the penalty of having to make three exposures. The problems for photography are essentially that conditions change when you take frames in succession: the camera moves a tiny bit, the lighting changes, etc., and you end up with three images that aren't identical. Even the focal plane can shift its position somewhat (it's dramatic in an achromat, tolerable in an apochromat). Assuming a star is red, you'll also get, even if the focus is perfect, a larger diameter in the red channel than in the green channel, and you'll have to handle that in some way. It's very bad for bright point sources, maybe a bit less so for full images with smaller luminosity differences, but you don't want to have to deal with all those issues shooting pictures.

The way those cameras are actually used, when they aren't used with standard RGB filters (whose only purpose is producing "pretty pictures" of no scientific value), is with filters whose bandwidth is very well defined (http://en.wikipedia.org/wiki/Photometric_system).

BTW, that tri-color filter process was used in the early 20th century: http://en.wikipedia.org/wiki/Sergei_Mikhailovich_Prokudin-Gorskii
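
The photon-budget arithmetic above can be sketched in a few lines. The fractions below assume an idealized RGGB mosaic and perfectly transmissive filters (no real filter passes 100%), so treat the 2x/4x figures as upper bounds:

```python
# Toy photon-budget comparison: Bayer mosaic vs. three sequential
# full-sensor exposures through R, G, B filters. Idealized numbers;
# real filters overlap and transmit less than 100%.

def bayer_fraction(channel):
    """Fraction of sensels devoted to a channel in an RGGB Bayer mosaic."""
    return {"R": 0.25, "G": 0.50, "B": 0.25}[channel]

def sequential_fraction(channel):
    """With a full-sensor filter in place, every sensel sees the channel."""
    return 1.0

for ch in ("R", "G", "B"):
    gain = sequential_fraction(ch) / bayer_fraction(ch)
    print(f"{ch}: sequential capture gathers {gain:.0f}x the photons per channel")
```

This is per equal exposure time per filter, of course; the three exposures still cost three times the wall-clock time.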

And of course, a variation of that is the triple CCD in some video cameras

I run a variety of microscope cameras with color filters (commonly the same filtration as the Bayer filters) and Bayer patterns. The color is, practically speaking, the same. The only advantage of the filtered camera is the possibility of tuning the filters -- color filter wheels are a little old-fashioned and LCD tunable filters work better. However, with broad bands there is not much benefit in tuning the filters; that is usually left to very narrow bands measured in angstroms.

As far as the loss of resolution due to the Bayer pattern goes, it is insignificant. The benefit of unfiltered monochrome sensors is really in sensitivity.

As far as the sensor is concerned, the important factor (as noted above by others) in color reproduction is not the density of the color filters (or their spatial arrangement in a mosaic pattern such as Bayer), but rather the shapes of the transmission curves and how they relate to each other. Ideally, from a color perspective, you'd want the transmission curves to be the same as the human cone responses (in the eye) or a linear transformation thereof. But there is a tradeoff in terms of color vs noise, and of course there are other practical constraints due to materials, manufacturing, costs, etc., so in practice this technical condition is not satisfied. As I mentioned earlier, this is rather a separate issue from the choice of CCD vs CMOS.
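
A rough way to illustrate that "linear transformation" condition (the Luther-Ives condition mentioned earlier) is to fit a 3x3 matrix from the camera curves to the observer curves by least squares and look at the residual. The Gaussian curves below are made-up stand-ins, not real sensor or cone data:

```python
import numpy as np

# Sketch of checking the Luther-Ives condition: a sensor satisfies it
# when its channel sensitivities are a linear transform of the
# observer's. These Gaussian "curves" are invented placeholders.

wl = np.linspace(400, 700, 61)                    # wavelengths, nm
def band(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

observer = np.stack([band(600, 40), band(550, 40), band(450, 30)])  # 3 x N
camera   = np.stack([band(610, 45), band(545, 42), band(455, 32)])  # 3 x N

# Best 3x3 matrix M with M.T applied to the camera curves approximating
# the observer curves, in the least-squares sense.
M, *_ = np.linalg.lstsq(camera.T, observer.T, rcond=None)
residual = np.linalg.norm(M.T @ camera - observer) / np.linalg.norm(observer)
print(f"relative residual: {residual:.3f}")  # 0 would mean the condition holds
```

A residual of exactly zero would mean the condition holds; real cameras land somewhere above zero and rely on profiling to paper over the difference.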

But as far as photography is concerned, my experience has been that color response differences from system to system have less to do with the sensor, and more to do with the software rendering applied in post-processing (even if the user never touches any sliders or controls). Example: Canon has various Picture Styles (such as Portrait and Landscape) available in their software for their cameras, some of which have CMOS sensors, some of which have CCD sensors. The difference in visual appearance between these software-based styles is far greater than the actual differences in the color filters!

Ray, I generally measure camera optical systems with a monochromator to estimate the transmission curves over the visible and near-IR range. However, I don't have manufacturer data sheets for most of the systems I measure (and even for those for which I do, the maker's data is usually for the sensor alone, whereas I prefer to measure sensor + lens combinations, so comparisons are hard). And you never know -- maybe I was a mad Irishman in a previous life!!
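
For what it's worth, the data reduction for such a monochromator sweep is simple once the hardware part is done: divide the camera's dark-subtracted signal by a calibrated reference detector's reading at each wavelength, then normalize to the peak. The numbers here are invented placeholders, not anyone's measurements:

```python
# Reduce a monochromator sweep to a relative spectral response.
# All values below are illustrative stand-ins for real readings.

wavelengths   = [450, 500, 550, 600, 650]             # nm
camera_signal = [120.0, 310.0, 520.0, 400.0, 90.0]    # raw DN, dark-subtracted
reference     = [0.80, 0.95, 1.00, 0.97, 0.85]        # calibrated reference detector

# Correct for the lamp/monochromator output, then normalize to the peak.
response = [c / r for c, r in zip(camera_signal, reference)]
peak = max(response)
response = [v / peak for v in response]

for w, v in zip(wavelengths, response):
    print(f"{w} nm: {v:.2f}")
```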

And of course, a variation of that is the triple CCD in some video cameras

But then each photon is counted (at least in theory). For Bayer and color-wheel solutions, only e.g. a third of the photons hitting the sensor during the total exposure time are counted; the rest are absorbed in the spectral bandpass filters.

As far as the sensor is concerned, the important factor (as noted above by others) in color reproduction is not the density of the color filters (or their spatial arrangement in a mosaic pattern such as Bayer), but rather the shapes of the transmission curves and how they relate to each other.

Sure, but my gut feeling is that whenever you have to do something really tiny, complex, and economical, you lose something. If Canon and Nikon are in fact free to make whatever spectral response they see fit (keeping in mind the color response vs noise issue you mentioned), then that gut feeling is simply wrong.

But then each photon is counted (at least in theory). For Bayer and color-wheel solutions, only e.g. a third of the photons hitting the sensor during the total exposure time are counted; the rest are absorbed in the spectral bandpass filters.

I don't have a well-defined opinion on the efficiency of splitting vs filtering, but I am under the impression that the sensors in 3CCD or 3MOS cameras don't get all the photons. If they did, it would be a mess to colour balance, imho.

How should I interpret those figures in light of your statement?

I am by no means an expert on this topic. But it seems to me that if such a thing as "perfect" splitting of light based on wavelength exists (I am sure that it does not, but perhaps as an approximation), then some kind of 3-band bandpass filtering might be possible with a "3CCD" solution. It might not provide the _desirable_ shape of spectral selectivity, and it may have all kinds of practical/economical drawbacks, but I think this is an interesting aspect of it.

It all boils down to doing spectral selection using spectral absorption vs spectral reflectance - at least on the level of physics that I am able to follow :-)

It seems that multi channel dichroic prisms are, in theory at least, better than wideband RGB filters, in the sense that there are no holes, spikes, overlaps, etc... in the transmission band. The incoming light is split, you characterize it and that's it. I guess this could also allow for different distances for the three focal planes to compensate for chromatic aberration.

But in practice, I have only worked with wide and narrow band filters and therefore will try to keep my foot out of my mouth, waiting for someone more competent in those matters to eventually jump in ;-)

As far as the sensor is concerned, the important factor (as noted above by others) in color reproduction is not the density of the color filters (or their spatial arrangement in a mosaic pattern such as Bayer), but rather the shapes of the transmission curves and how they relate to each other. Ideally, from a color perspective, you'd want the transmission curves to be the same as the human cone responses (in the eye) or a linear transformation thereof. But there is a tradeoff in terms of color vs noise, and of course there are other practical constraints due to materials, manufacturing, costs, etc., so in practice this technical condition is not satisfied. As I mentioned earlier, this is rather a separate issue from the choice of CCD vs CMOS.

An example of these tradeoffs is discussed in the DXO paper comparing the Nikon D5000 with the Canon EOS 500D. The Canon has poor color depth due to the characteristics of its CFA filters. The problem lies mainly in the red CFA filter, which is actually more sensitive to green than to red, as DXO's measured spectral curves show. This necessitates a large coefficient in the color matrix, which adds noise. In contrast, the Nikon has a better red response and a greater color depth.

CCD sensors can also have unfavorable CFA characteristics, as shown by the DXO analysis of the Phase One P45+, where the red channel is also more sensitive to green than red. The camera has a poor metamerism index of 72, as compared to an index of 83 for the D5000. The P45+ is an older camera, and the situation is much improved with the newer P40+. These studies indicate that CCDs do not necessarily have better color depth than CMOS designs.
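
The "large coefficient adds noise" point is easy to quantify: if the per-channel sensor noise is independent with equal variance, each output channel's noise scales with the root-sum-square of the corresponding matrix row. Both matrices below are illustrative, not measured from any real camera:

```python
import numpy as np

# Noise amplification by a color correction matrix: with independent,
# unit-variance noise per input channel, output noise per channel is
# the root-sum-square of that row's coefficients.

near_identity = np.array([[ 1.2, -0.1, -0.1],
                          [-0.1,  1.2, -0.1],
                          [-0.1, -0.1,  1.2]])

strong_correction = np.array([[ 2.1, -1.3,  0.2],   # a desaturated CFA needs
                              [-0.4,  1.8, -0.4],   # bigger off-diagonal terms
                              [ 0.1, -0.9,  1.8]])

def noise_gain(matrix):
    """Per-output-channel noise amplification for unit input noise."""
    return np.sqrt((matrix ** 2).sum(axis=1))

print("near-identity matrix:", noise_gain(near_identity))
print("strong correction   :", noise_gain(strong_correction))
```

A desaturated CFA forces the large off-diagonal terms, and the root-sum-square figures rise accordingly.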

Do you think that this is a trade-off of achromatic SNR vs color noise, or sensor cost/performance vs color noise?

-h

The article states, "This comparison is a bit surprising with respect to the previous SNR 18% results. Why such a difference? Color sensitivity is impacted by noise curves and spectral responses. If SNR curves are close, most of the divergence observed must be due to a difference in spectral sensitivities, which implies very different color processing for each sensor."

I conclude that the difference is largely due to color noise.

Regards,

Bill

I should have phrased my question differently. Given that Canon has a less spectrally selective CFA than Nikon, and thereby a color correction matrix that is further from the identity matrix and more color-noise prone:

- Did they do this because they think that having wider filters, passing more photons, gives them an advantage when shooting spectrally broad/flat scenes?
- Or does Canon have a sensor with a disadvantage in the first place, with spectrally wide filters used to hide its flaws?

Or perhaps this is a feature of the silicon process that Canon uses, linked perhaps to microlenses etc.?

I have heard that Sony Alpha DSLRs follow a radically different philosophy (closer to the CIE standard observer, at the cost of more noise)?

It seems that multi channel dichroic prisms are, in theory at least, better than wideband RGB filters, in the sense that there are no holes, spikes, overlaps, etc... in the transmission band. The incoming light is split, you characterize it and that's it. I guess this could also allow for different distances for the three focal planes to compensate for chromatic aberration.

You know a lot more about the technology than me.

What I will say is I have seen 3-chip HD video vs 1-chip HD video. The 3-chip systems look way better, maybe an order of magnitude better. Go to your local electronics store and compare the cameras for yourself. Panasonic makes a nice 3CMOS camcorder. Compare it to any manufacturer using one chip of similar size. Not the Sony NEX; that has a much bigger chip.

Edit: by compare I mean shoot video in the store with each, and output it to an HDTV.