Are the only colors you see rgb? No, there are other colors that are combinations. Remember Newton and his prism?

So what you good people are suggesting is using the total spectrum energy to catch certain wavelengths only. As far as I know this (recording one color) can be done only by filtering the spectrum to isolate the wanted frequency. It does not matter if we use RGB or not. Picking even more specific colors than R, G and B would mean even narrower filters, "wasting" even more of the poor photons. Is somebody forgetting basic physics? Me, maybe? Please inform.

Well, let's see. What colors would the Bayer array absorb from a yellow sandy beach scene? Red from some spots, green from some spots, blue almost 100% rejection. The Foveon would take both, so the exposure would be 50% higher.

You take a picture of classical Greek buildings. They are a light grey marble, almost white. The Foveon would absorb photons on all 3 channels; the Bayer would take 1/3 at each.

What you really want are micro prisms over angled RGB, so you can split out what is there over smaller sub-pixels. Easy to say; it has probably proven very difficult to make in many an R&D lab. You almost need curved rings of pixels.

Now how is that? If we use a strong red filter to capture only the red rays, it is the non-red wavelengths we are rejecting, which we do not want in the first place. So where is 60% of the light lost? That 60% cannot be used to describe red, as it is not red.

Joofa covered this nicely. "That 60%" can be used to describe "green" or "blue". Why would we not want those described in a general camera?

If e.g. 60% of the photons hitting a sensor are reflected or absorbed in color filters, this is real information about the scene that is wasted. Clearly, an ideal sensor would not throw away information.

You take a picture of classical Greek buildings. They are a light grey marble, almost white. The Foveon would absorb photons on all 3 channels; the Bayer would take 1/3 at each.

and interpolate and add the missing 2/3rds of the spectrum when demosaicing.

In both cases the green channels record similar (say 1/3rd of the total) amounts of light (give or take quantum efficiency differences) at each corresponding spatial sampling position, the red channels record similar amounts of light at their corresponding positions, and the blue channels record similar amounts of light at their corresponding positions. The only difference is that demosaicing will fill in the blanks on the image taken with a Bayer CFA.
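To make "filling in the blanks" concrete, here is a toy sketch (my own illustration, not from this thread) of the simplest possible demosaicing step: bilinear interpolation of the green channel at a red site in an RGGB mosaic. The scene is assumed flat grey, so the interpolated value happens to be exact.

```python
# Toy sketch: a Bayer sensor records one color per site; demosaicing
# estimates the other two by interpolating from neighbouring sites.
# Assumes an RGGB layout and a flat mid-grey scene (value 100 everywhere).

def bayer_color(row, col):
    """Color recorded at a sensel in an RGGB mosaic."""
    if row % 2 == 0:
        return 'R' if col % 2 == 0 else 'G'
    return 'G' if col % 2 == 0 else 'B'

size = 4
mosaic = [[100 for _ in range(size)] for _ in range(size)]  # one value per site

def interp_green_at(row, col):
    """Bilinear estimate of green at a non-green site: average the
    4-connected neighbours, which are all green in an RGGB pattern."""
    neighbours = [(row - 1, col), (row + 1, col), (row, col - 1), (row, col + 1)]
    vals = [mosaic[r][c] for r, c in neighbours if 0 <= r < size and 0 <= c < size]
    return sum(vals) / len(vals)

# At the red site (0, 0) green is never measured; it is estimated:
print(interp_green_at(0, 0))  # 100.0 -- exact only because the scene is flat
```

On real detail the neighbours disagree, and that disagreement is where the interpolation error (and the resolution debate below) comes from.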

Cheers, Bart

The Bayer image would be based on fewer total counted photons (given certain idealized conditions that we seem to take for granted, but which are certainly not true for real Foveon sensors).

Given a smooth, featureless scene, sampling its value using 1/3 of the photons would give a higher uncertainty ("more noise"), no matter what interpolation and demosaicing are used.
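The uncertainty argument follows directly from Poisson statistics, and can be sketched in a few lines (my illustration; the photon counts are made-up round numbers):

```python
import math

# Photon shot noise is Poisson-distributed: for a mean of N counted photons
# the standard deviation is sqrt(N), so the relative noise is 1/sqrt(N).
# Counting only 1/3 of the photons therefore raises the relative noise by
# sqrt(3) ~ 1.73x, no matter how cleverly the missing data is interpolated.

def relative_shot_noise(n_photons):
    return 1.0 / math.sqrt(n_photons)

full = relative_shot_noise(9000)    # all photons counted (ideal layered sensor)
third = relative_shot_noise(3000)   # CFA passes roughly 1/3

print(round(third / full, 3))  # 1.732, i.e. sqrt(3) more relative noise
```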

If e.g. 60% of the photons hitting a sensor are reflected or absorbed in color filters, this is real information about the scene that is wasted. Clearly, an ideal sensor would not throw away information.

-h

If we want red color information from the scene, we ARE throwing away a lot of information. The same goes for any color. I see no way around it with present technology. This hypothetical sensor with 100% efficiency would work only for luminance, i.e. B&W photographs.

If we want red color information from the scene, we ARE throwing away a lot of information. The same goes for any color. I see no way around it with present technology. This hypothetical sensor with 100% efficiency would work only for luminance, i.e. B&W photographs.

Do you want to image just the "red" information in a scene? Or do you want to take general images? If you, like me, want a general image of a usual scene, those scenes tend to contain most of their information (= hard-to-predict sharp edges, fractal-like structures) in the luminance channel, and after white-balancing, all primary color channels tend to contribute to the printed image.

Camera tech like the prism used in "3-CCD" video cameras could (in principle) capture all of the photons projected by the lens. In practice, I am sure that such prisms have losses, non-ideal wavelength selectivity etc. The point is that instead of converting "photons of the wrong color" to heat, they are diverted to the sensor that can properly count them. An ideal Foveon-like sensor would do the same.

I tend to believe that in the competitive camera market, manufacturers tend to make the kinds of products that are possible today, that minimize R&D/production cost and maximize customer willingness to pay. I really don't have the economic and technical knowledge to claim that they are wrong.

The Bayer image would be based on fewer total counted photons (given certain idealized conditions that we seem to take for granted, but which are certainly not true for real Foveon sensors).

Given a smooth, featureless scene, sampling its value using 1/3 of the photons would give a higher uncertainty ("more noise"), no matter what interpolation and demosaicing are used.

Indeed, for total luminosity, but not lower sensitivity as some seem to believe. The (broadly speaking) 1/3rd of the spectrum caught by a Bayer CFA at a given sampling position is the same 1/3rd of the spectrum recorded by a Foveon like sensor. By adding the interpolated missing data, the full RGB luminosity is restored (with a slightly lower accuracy).

The fact that fewer photons in total are actually counted also allows utilizing the full available well depth for the spectral band that was recorded. It doesn't have to share the silicon real estate with 2 other spectral bands for the same sampling position. It also allows writing the Raw data to a file faster (because there is less data, just a single band), and file sizes are much smaller.
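The file-size point is easy to quantify with back-of-envelope arithmetic (my numbers, not from the thread): one band per site versus three bands per site, at an assumed 14 bits per sample, ignoring headers and compression.

```python
# Per-frame Raw payload for a 16 MP single-band (Bayer) readout vs a
# 16 MP x 3 layered readout, both at 14 bits per sample.
# Assumptions: 16 million sites, no metadata, no compression.

sites = 16_000_000
bits_per_sample = 14

bayer_bits = sites * 1 * bits_per_sample    # one spectral band per site
layered_bits = sites * 3 * bits_per_sample  # three stacked bands per site

print(bayer_bits // 8 // 1_000_000)    # 28 (MB)
print(layered_bits // 8 // 1_000_000)  # 84 (MB)
```

So under these assumptions the layered readout carries three times the data per frame, which is the whole point about write speed and file size.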

Indeed, for total luminosity, but not lower sensitivity as some seem to believe.

By "sensitivity" do you mean the number of electrons delivered to a hypothetical analog front-end/ADC for a given lens/sensor area?

Compare e.g. the Leica M9 to the M9-m (or whatever they called the monochrome (achromatic?) version), sharing the same sensor and electronics, differing only in the latter having no color filter. Then each sensel would receive e.g. 3 times as much light as a CFA-filtered sensel, assuming that the CFA removes 2/3 of the photons hitting it, and that exposure parameters are held constant.
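Expressed in photographic stops, that assumed 3x per-sensel light difference works out as follows (a sketch under the same 1/3-transmission assumption as above, not a measured figure for any real CFA):

```python
import math

# If the CFA passes ~1/3 of the photons at each sensel, an unfiltered
# (monochrome) version of the same sensor receives ~3x the light per
# sensel at identical exposure.  In stops that difference is log2(3).

transmission_cfa = 1 / 3              # assumed average CFA transmission
light_ratio = 1 / transmission_cfa    # unfiltered vs filtered sensel

stops = math.log2(light_ratio)
print(round(stops, 2))  # 1.58 stops
```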

By "sensitivity" do you mean the number of electrons delivered to a hypothetical analog front-end/ADC for a given lens/sensor area?

By sensitivity I mean required exposure time at a given ISO setting. Some are suggesting that up to a stop can be gained by not filtering out 2/3rds of the spectrum at a given sampling position, which is not true because the 2/3rds are added through interpolation instead of being sampled directly. The Bayer CFA has a reasonably high transparency at the spectral band-pass wavelengths, although some loss takes place, but the Foveon also has less than 100% efficiency, with lots of non-sensitive area per photosite due to connectors/gates.


Compare e.g. the Leica M9 to the M9-m (or whatever they called the monochrome (achromatic?) version), sharing the same sensor and electronics, differing only in the latter having no color filter. Then each sensel would receive e.g. 3 times as much light as a CFA-filtered sensel, assuming that the CFA removes 2/3 of the photons hitting it, and that exposure parameters are held constant.

but the Foveon also has less than 100% efficiency, with lots of non-sensitive area per photosite due to connectors/gates.

What about microlenses? The Foveon should have those too, and when you chemically wash the CFA off a non-Foveon Bayer CFA sensor (unless it was a monochrome sensor by design), you also wash off the microlenses that sit on top of the CFA...

Now how is that? If we use a strong red filter to capture only the red rays, it is the non-red wavelengths we are rejecting, which we do not want in the first place. So where is 60% of the light lost? That 60% cannot be used to describe red, as it is not red.

Of course it cannot be used to describe red, but it could be used to describe blue or green at that location, which would add useful information, and that is what an X3 sensor does. In an X3 sensor, that light of other colors is also recorded, giving blue and green signals along with the red signal at that location. This avoids the need to interpolate those colors in from nearby locations. So if you compare sensors with the same number of photosite locations (like about 16 million on the current Foveon sensors vs a 16MP Bayer CFA sensor), the avoidance of interpolation increases resolution, while if instead the CFA sensor has more photosites to equalize resolution (very roughly 32MP in my example), then the larger area of the X3 photosites is potentially gathering more light at equal exposure index, reducing the effects of photon shot noise on the overall S/N ratio. (Comparing sensors of equal size, of course.)

Of course, this makes the big assumption of comparable quantum efficiency in each color channel, whereas the current Foveon approach seems distinctly worse than the Bayer CFA "state of the art" in that respect, cancelling out the potential for lower noise at high exposure index. And for all I know, lower "QE per color" may be an unavoidable disadvantage of measuring with three vertically stacked detectors.

P. S. As Erik pointed out, the actual human fovea uses single-color photodetectors more like a CFA sensor: cones, each of which gives a signal for one of red, blue, or green ... along with a few pure luminosity signals from rods. Not quite the RGBW tried by Kodak and Sony at times, but closer to that than to Foveon X3.

By adding the interpolated missing data, the full RGB luminosity is restored (with a slightly lower accuracy).

That "slightly lower accuracy" due to interpolation is in fact quite substantial if comparing with an equal number of photosite locations. The current "16MP X 3" Foveon sensor has distinctly more resolution than a 16MP CFA sensor. On the other hand, given our eyes' far lower resolution of color detail than luminosity detail, this advantage for X3 might be less in practice than in theory. And certainly far less than some Foveon fanboys make out with their trick of red-blue resolution charts!

The fact that fewer photons in total are actually counted also allows utilizing the full available well depth for the spectral band that was recorded.

That is a good point: at base sensitivity, where you can get close to filling the wells at highlight locations, using a single monochromatic well at each location might work better. I do not know enough about the methods of constructing these three-layer sensors with readouts from three levels; that could introduce even more constraints that make things even worse for any X3 sensor architecture.

That "slightly lower accuracy" due to interpolation is in fact quite substantial if comparing with an equal number of photosite locations.

Actually, on some of the tests of the Bayer CFA's effect on resolution that I did many moons ago, I found that the loss in luminosity resolution is relatively limited, some 6.4%.


The current "16MP X 3" Foveon sensor has distinctly more resolution than a 16MP CFA sensor.

The main reason for that stems from comparing the lack of an Optical Low-Pass Filter (OLPF), which potentially creates other issues, to an Anti-Aliasing filtered image (and often one not Capture sharpened correctly either). Add a few demosaiced Bayer CFA pixels and apply proper sharpening, then compare sharpness and artifacts again ...


On the other hand, given our eyes' far lower resolution of color detail than luminosity detail, this advantage for X3 might be less in practice than in theory. And certainly far less than some Foveon fanboys make out with their trick of red-blue resolution charts!

While that's correct, there is a small benefit to full RGB sampling at each output pixel position, especially when magnifying the image and when per pixel color accuracy is very important. But that also requires other factors to be exactly right, e.g. noise and color separation, which is a bit of an issue with the Foveons.

Indeed, for total luminosity, but not lower sensitivity as some seem to believe. The (broadly speaking) 1/3rd of the spectrum caught by a Bayer CFA at a given sampling position is the same 1/3rd of the spectrum recorded by a Foveon like sensor. By adding the interpolated missing data, the full RGB luminosity is restored (with a slightly lower accuracy).

The fact that fewer photons in total are actually counted also allows utilizing the full available well depth for the spectral band that was recorded. It doesn't have to share the silicon real estate with 2 other spectral bands for the same sampling position. It also allows writing the Raw data to a file faster (because there is less data, just a single band), and file sizes are much smaller.

Cheers, Bart

Ok, but how much is "slightly lower accuracy"? Having real data vs. guesstimated would be a big deal on fine, changing detail. It would be irrelevant on something like a car shot that is basically all the same color.

In the current season, a landscape of a mountainside with snow-covered trees would show a big difference in the detail of the shot. Bayer shots like this have always looked artificial to me. Typically you have to downsample much more on natural textures.

P. S. As Erik pointed out, the actual human fovea uses single-color photodetectors more like a CFA sensor: cones, each of which gives a signal for one of red, blue, or green ... along with a few pure luminosity signals from rods.

The "luminosity" signal in human vision comes mostly from cones, not rods, under ordinary conditions, unless the illumination is really low.

The "luminosity" signal in human vision comes mostly from cones, not rods, under ordinary conditions, unless the illumination is really low.

Thanks for the clarification: that is what I was trying to indicate with the word "few". For one thing, AFAIK the fovea does not have many rods. All of which makes the fovea even closer to being a three color CFA sensor.

Actually, on some of the tests of the Bayer CFA's effect on resolution that I did many moons ago, I found that the loss in luminosity resolution is relatively limited, some 6.4%.

That test is with a pure grayscale subject, so that CFA pixels of any color are near-perfect proxies for luminance measurements, making it an unnaturally easy case for retaining resolution in demosaicing. The real-world issue is that sharp luminance boundaries are likely to go with shifts in color, causing more color moiré issues and so needing a stronger OLPF or stronger postprocessing to avoid aliasing artifacts. I prefer the rather consistent subjective observation with natural subject matter that it takes about twice as many Bayer CFA photosites as X3 photosites to get comparable perceived sharpness.

The whole game shifts, of course, if we get to the regime of oversampling, with resolution limited almost entirely by the lens, not the sensor. Then the main potential advantage of X3 over CFA is the effect on low-light performance of counting most of the received photons versus only about 40% of them.

Ok, but how much is "slightly lower accuracy"? Having real data vs. guesstimated would be a big deal on fine, changing detail. It would be irrelevant on something like a car shot that is basically all the same color.

It depends on the level of detail in the original signal, and the amount of (photon shot) noise. Light doesn't have an absolute brightness, only an average; there is always noise involved. Detail doesn't have an absolute level either; there is also lens/diffraction/(de)focus involved, and subject motion and camera shake, and usually an OLPF to reduce the tendency for aliasing inherent in discrete sampling. So when it's not easy to say what the signal was, it's also not so easy to say how much exactly the signal was off.

So the 'slightly lower' can only be answered in a statistical sense, or compared to an ideal laboratory setup, and the answer will not be a simple number but rather something like an MTF curve which needs interpretation.


In the current season, a landscape of a mountainside with snow-covered trees would show a big difference in the detail of the shot. Bayer shots like this have always looked artificial to me. Typically you have to downsample much more on natural textures.

Looks are hard to comment on, but from my experience it often has to do with inadequate Capture sharpening (assuming good technique was used to get the shot). Digital cameras just record what's thrown at them, and the sensor technology just makes a difference in which artifacts are tolerated or suppressed, but they all produce artifacts, no exception.

Bayer CFA sensors have a slightly lower Chrominance than Luminance resolution, but Chrominance usually has a lower level of signal detail anyway compared to Luminance (just look at the color channels of an image in e.g. in Lab mode). Also, the different sampling density between Red/Blue and Green can cause False color artifacts.

Sensors without an OLPF by definition exhibit more aliasing artifacts (although DOF/defocus can function as a low-pass filter), and sensors that sample R/G/B for each pixel will only have Luminance aliasing. In the Foveon design especially, the channel separation is not that simple (the Raw sensor data is almost grayscale), and it requires significant processing, which boosts noise and limits high-ISO robustness, also because of the small well depth per channel.
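The aliasing point generalizes beyond sensors: any discrete sampling folds frequencies above the Nyquist limit back into lower ones. A one-dimensional sketch (my own illustration, with made-up frequencies) shows why detail finer than the sampling grid masquerades as coarser detail:

```python
import math

# Aliasing in 1-D: a 7-cycle/frame signal sampled at 8 samples/frame
# (Nyquist limit = 4 cycles/frame) produces exactly the same samples as
# a 1-cycle/frame signal of opposite sign.  An OLPF exists to remove
# such above-Nyquist content before it is sampled.

fs = 8  # samples per frame
k = range(fs)

high = [math.sin(2 * math.pi * 7 * i / fs) for i in k]    # above Nyquist
alias = [-math.sin(2 * math.pi * 1 * i / fs) for i in k]  # what the samples suggest

print(all(abs(a - b) < 1e-12 for a, b in zip(high, alias)))  # True
```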