Blue, then Green, then Red - just like I said. Thank you for again proving my point.

I don't care what the subsequent processing does. I'm much more interested in the source transducer as this is where the magic (light to electron/voltage conversion) happens. All is done for the R and G and B wavelengths for each pixel (for this sensor).

Practically all colour video camera processing is done in YUV (saving bandwidth), but their sensors still convert the R & G & B wavelengths at the pixel photo sites.

I return to my simple logic: what do your yellow and red layers 'see' if the top layer extracts the "white"?

(edit) oh and re your response to someone else's 67% comment. There won't be any change in exposure times, it will just be a change in gain applied to the chip if you gather more light. In theory you could net more DR.

If you read what I actually said "(all else equal)", you will realise that my wording was spot on. To get the same image spatial detail, you need only one third of the pixel resolution; therefore each resulting pixel can be three times larger to achieve the same detail; therefore three times the light gathering capability for the same spatial detail.

Or from an alternative angle: you have the same exposure time for three times the detail; or you could reduce image noise by 1.5 stops.
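The arithmetic behind these claims can be sketched quickly. A minimal Python sketch (the "stops" figure is just log base 2 of the light ratio, assuming shot-noise-limited pixels):

```python
import math

# A full-RGB sensor needs roughly 1/3 the pixel count of a Bayer sensor
# for similar spatial detail, so each pixel can have roughly 3x the area
# (and hence roughly 3x the light gathered per pixel).
area_ratio = 3.0

# Extra light per pixel, expressed in photographic stops (log base 2).
extra_stops = math.log2(area_ratio)
print(f"extra light per pixel: {extra_stops:.2f} stops")  # ~1.6 stops
```

This is where the roughly-1.5-stops figure comes from: log2(3) is about 1.58.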

All of this makes perfect sense when you realise the simple fact that all current bayer-based colour image sensors discard (i.e. waste) two-thirds of the incoming light. Do you dispute this?

Like I also said: 10MP with true RGB per pixel is enough for me (the image details would be on par with my 21MP 5DII).

Blue, then Green, then Red - just like I said. Thank you for again proving my point.

Your point is incorrect. The first layer is sensitive to all visible light. The blue just doesn't travel any deeper. Seriously, go read some Foveon white papers. There's no secret sauce, no color filter, between layers. You must allow all light to enter the sensel in order to record more than one wavelength. This light can get picked off in the wrong layers, and a sizable portion does. That's why blue isn't 'blue' in the sensel.

Quote

I don't care what the subsequent processing does. I'm much more interested in the source transducer as this is where the magic (light to electron/voltage conversion) happens. All is done for the R and G and B wavelengths for each pixel (for this sensor).

Practically all colour video camera processing is done in YUV (saving bandwidth), but their sensors still convert the R & G & B wavelengths at the pixel photo sites.

I return to my simple logic: what do your yellow and red layers 'see' if the top layer extracts the "white"?

I didn't say it extracts the white. I said it was sensitive to it. Big difference between the two. And that's why the Foveon concept is difficult from a color perspective. There is a lot of color bleed between the layers. Certain red and green wavelength photons get picked off in the 'blue' layer, so you end up doing really nasty math to *try* to compensate for it. It's the exact reason you sometimes get those greenish casts in skin tones with a Foveon.

(edit) oh and re your response to someone else's 67% comment. There won't be any change in exposure times, it will just be a change in gain applied to the chip if you gather more light. In theory you could net more DR.

If you read what I actually said "(all else equal)", you will realise that my wording was spot on. To get the same image spatial detail, you need only one third of the pixel resolution; therefore each resulting pixel can be three times larger to achieve the same detail; therefore three times the light gathering capability for the same spatial detail.


I wasn't disagreeing with you. I was adding to what you said for the point of clarity.

Quote

Or from an alternative angle: you have the same exposure time for three times the detail; or you could reduce image noise by 1.5 stops.

All of this makes perfect sense when you realise the simple fact that all current bayer-based colour image sensors discard (i.e. waste) two-thirds of the incoming light. Do you dispute this?

No, I don't, nor did I. I merely stated that exposure times wouldn't change, only the applied gain. Not quite sure where the chip on your shoulder came from, but you're picking arguments that don't even exist.

Quote

Like I also said: 10MP with true RGB per pixel is enough for me (the image details would be on par with my 21MP 5DII).

Possibly. It would depend on how you use your Bayer sensor and what wavelengths are dominant in your images. The nature of current Foveon sensors is such that the blue output is really pretty murky. Some of what Canon is trying to do is deal with some of those issues.

FWIW, there's a lot of color bleed with the Bayer mask on current Canon dSLRs, too. For example, if you illuminate the sensor with red light, both the red and green 'channels' are activated (the green channel slightly more than the red channel, in fact). The RAW conversion engine has to sort all that mixing during the demosaicing process. See the DxOMark article on this issue.
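The mixing described here is typically undone during RAW conversion by multiplying each pixel's raw (R, G, B) triple by a 3x3 color-correction matrix. A minimal sketch with made-up mixing coefficients (illustrative values, not real Canon data):

```python
import numpy as np

# Hypothetical spectral mixing: rows map true RGB to raw channel
# response. Note the raw G channel picking up a lot of red light,
# as in the red-illumination example above. Values are made up.
mixing = np.array([
    [0.8, 0.1, 0.0],   # raw R = 0.8*trueR + 0.1*trueG
    [0.3, 0.7, 0.1],   # raw G responds strongly to red
    [0.0, 0.1, 0.9],   # raw B
])

correction = np.linalg.inv(mixing)  # the color-correction matrix

true_rgb = np.array([1.0, 0.0, 0.0])   # pure red light
raw = mixing @ true_rgb                # what the sensor records
recovered = correction @ raw           # after correction
print(np.round(recovered, 3))          # ≈ [1, 0, 0]
```

In practice the correction is folded into the demosaicing/color pipeline rather than applied as a bare matrix inverse, but the principle is the same.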

smeggy

This is true, but it doesn't mean the top layer is as sensitive to red and green wavelength photons as it is to blue; hence your statement of it being sensitive to 'white' is very misleading.

I already did. I will quote from them so that you (and the reader) can read the relevant passages:

SIGMA_WHITE_PAPER_SD14:

Quote

As a result, light in the blue wavelengths, which have the highest energy, tend to be absorbed by the silicon very quickly, generating image-forming electrons in the top layer. Light in the lower-energy red wavelengths tends to penetrate further, to the bottom layer, before generating electrons, and intermediate-energy green light tends to produce electrons in the middle layer.

Color_Alias_White_Paper_FinalHiRes:

Quote

Foveon X3 sensors take advantage of the natural light absorbing characteristics of silicon. Light of different wavelengths penetrating the silicon is absorbed at different depths -- high energy (blue) photons are absorbed near the surface, medium energy (green) photons in the middle, and low energy (red) photons are absorbed deeper in the material.
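The depth-dependent absorption both passages describe follows the Beer-Lambert law: the fraction of light absorbed within depth d is 1 - exp(-d/L), where L is the wavelength-dependent absorption length in silicon. A sketch with rough, illustrative absorption lengths (ballpark figures, not taken from the white papers):

```python
import math

# Rough absorption lengths in silicon, in micrometres.
# Illustrative ballpark values only.
absorption_length_um = {"blue": 0.4, "green": 1.5, "red": 3.0}

def absorbed_fraction(depth_um, length_um):
    """Beer-Lambert: fraction of photons absorbed within depth_um."""
    return 1.0 - math.exp(-depth_um / length_um)

# Fraction of each colour absorbed in a hypothetical 0.5 um top layer:
for colour, length in absorption_length_um.items():
    frac = absorbed_fraction(0.5, length)
    print(f"{colour}: {frac:.0%} absorbed in the top 0.5 um")
```

With numbers like these, the top layer catches most of the blue but also a non-trivial share of green and red, which is exactly the cross-layer bleed being argued about in this thread.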

You must allow all light to enter the sensel in order to record more than one wavelength. This light can get picked off in the wrong layers, and a sizable portion does. That's why blue isn't 'blue' in the sensel.

Yes there is colour bleed, but I put it to you that the top blue layer is considerably more sensitive to blue wavelengths than red ones - do you agree? If not, can you quote white papers supporting your position?

It amounts to the same thing (both being wrong). Why? Because it is not possible to measure photon counts without absorbing them. Does silicon non-invasively measure light intensity?

All the white papers match my claim. Thus far you have not shown any link or paper that supports yours, despite my request. There is no point continuing with you unless you post something of substance. Until then I think it best that we leave the reader to ponder the direct evidence and explanations that have been given, such that they can draw their own conclusions.

FWIW, there's a lot of color bleed with the Bayer mask on current Canon dSLRs, too. For example, if you illuminate the sensor with red light, both the red and green 'channels' are activated (the green channel slightly more than the red channel, in fact). The RAW conversion engine has to sort all that mixing during the demosaicing process. See the DxOMark article on this issue.

Oh, agreed, and it's going to get worse.. but it's a different, more consistent sort of thing with the Bayer process. What's really funny is: Bayer is closer to how our eyes *actually* work than Foveon is. The holy grail (in terms of replicating our eyes) would be not needing a CFA and having the sensels 'naturally' sensitive to specific spectra, but the manufacturing of that would be a nightmare.

This is true, but it doesn't mean the top layer is as sensitive to red and green wavelength photons as it is to blue; hence your statement of it being sensitive to 'white' is very misleading.

In the interest of helping those following this thread have an accurate understanding of the Foveon concept, I'll grant you that the use of 'white' on my part is not the best term to describe the spectrum captured at that layer. It was lazy. However, it's not a blue-sensitive-only sensing 'site' (ugh, we need a whole new lexicon for Foveon discussions). Unlike with a CFA-based filter, there is nothing to *stop* other wavelengths being sensed at that 'location/depth', and weirdness ensues.

I already did. I will quote from them so that you (and the reader) can read the relevant passages:

SIGMA_WHITE_PAPER_SD14:

Quote

As a result, light in the blue wavelengths, which have the highest energy, tend to be absorbed by the silicon very quickly, generating image-forming electrons in the top layer. Light in the lower-energy red wavelengths tends to penetrate further, to the bottom layer, before generating electrons, and intermediate-energy green light tends to produce electrons in the middle layer.

Color_Alias_White_Paper_FinalHiRes:

Quote

Foveon X3 sensors take advantage of the natural light absorbing characteristics of silicon. Light of different wavelengths penetrating the silicon is absorbed at different depths -- high energy (blue) photons are absorbed near the surface, medium energy (green) photons in the middle, and low energy (red) photons are absorbed deeper in the material.

I'll agree with what's written there, for obvious reasons. However, both of those are still really generalized. Have you (honest question, not an attack) looked at the response curve for a Foveon sensel? I used to have a link to a spectral response graph.. it was ugly.

Both of those are for marketing purposes, and dumbed down, which is why I said white papers. Foveon isn't incorrect (I suppose) in saying the top layer records blue, but it doesn't do so by only capturing blue photons. A lot of work is done to get just the blue signal (or an approximation thereof). Foveons have worse color accuracy than Bayer sensors if you look at the response charts and gamut.

You must allow all light to enter the sensel in order to record more than one wavelength. This light can get picked off in the wrong layers, and a sizable portion does. That's why blue isn't 'blue' in the sensel.

Yes there is colour bleed, but I put it to you that the top blue layer is considerably more sensitive to blue wavelengths than red ones - do you agree? If not, can you quote white papers supporting your position?

I will find a paper this weekend that details what the percentages are. From memory, more than half the red light is lost in the other two layers. That is significant. Obviously the 'green' layer is a bigger mess than the 'blue' layer, but there is significant 'green' contamination in the 'blue' layer. It may take me a while because, frankly, I bookmarked it so long ago I think it was 3 PCs back.

It amounts to the same thing (both being wrong). Why? Because it is not possible to measure photon counts without absorbing them. Does silicon non-invasively measure light intensity?

All the white papers match my claim. Thus far you have not shown any link or paper that supports yours, despite my request. There is no point continuing with you unless you post something of substance. Until then I think it best that we leave the reader to ponder the direct evidence and explanations that have been given, such that they can draw their own conclusions.

As I said above, I will provide you a link to scientific documentation. However, in the interest of discussing this, I would appreciate it if you didn't insinuate that I'm putting forth concepts that I haven't (such as non-invasive conversion of photons to electrons, as that is pretty much impossible.. although it might be possible in some extremely weird quantum cases I would rather not ponder right now).

Furthermore, I maintain the statement isn't wrong, as there is nothing stopping the top 'layer' from absorbing wavelengths intended for the other layers. There's a pretty heavy amount of 'bleed', if we want to use that term, between the layers.. and there has to be, by the nature of it.


smeggy

In the interest of helping those following this thread have an accurate understanding of the Foveon concept, I'll grant you that the use of 'white' on my part is not the best term to describe the spectrum captured at that layer. It was lazy. However, it's not a blue-sensitive-only sensing 'site'

...

Foveon isn't incorrect (I suppose) in saying the top layer records blue, but it doesn't do so by only capturing blue photons.

...

Furthermore, I maintain the statement isn't wrong, as there is nothing stopping the top 'layer' from absorbing wavelengths intended for the other layers. There's a pretty heavy amount of 'bleed', if we want to use that term, between the layers.. and there has to be, by the nature of it.

As has been said already: this and other colour systems (Bayer) have an amount of bleed. The fact is that the top layer is intended to capture blue, and it generally does. Sure, it's not a perfect blue cutoff, but what is? Our eyes aren't great either!

You gotta admit: the wiki link you posted earlier on during our little skirmish perfectly matched what was said in the white papers - R & G & B; nothing said about white, yellow, and red. Therefore, your earlier claim that my "point is incorrect" was itself incorrect. Hence you might want to be more careful with what proof you post, too.

Have you (honest question, not an attack) looked at the response curve for a Foveon sensel? I used to have a link to a spectral response graph.. it was ugly.

Yup, and before my previously posted response too. I grant that the colour bands for the layered imager are not as well defined as those for a filtered Bayer system, but as has been said: that one bleeds too. Either way, "white" implies luminosity (a la Y[UV]), which is not the same as 'blue with some green and a bit of red'. If this is the root cause of our difference (and I suspect it is), then I think this issue can now be closed.

However, in the interest of discussing this, I would appreciate it if you didn't insinuate that I'm putting forth concepts that I haven't (such as non-invasive conversion of photons to electrons, as that is pretty much impossible..).

I don't see how else you can reconcile how a silicon photo site can be 'sensitive' to a certain range of wavelengths without absorbing those photons.

As has been said already: this and other colour systems (bayer) have an amount of bleed.

Foveon color-bleeding is much worse than Bayer's. Bayer sensors use optical color filters, which are much, much better than silicon at filtering 'undesired' wavelength bands.

In a Bayer sensor there's color bleeding because of 'cross-talk'. Cross-talk is when light passing through the color filter of a pixel excites electrons in neighboring pixels. This kind of bleeding is perfectly correctable, and there are different techniques that do it.

In contrast, the color bleeding in a Foveon sensor is uncorrectable. Like I said, silicon is much worse than an optical filter at filtering light of certain wavelengths.

(It's completely another matter that manufacturers use wider bands for the color filters in order to improve sensitivity - which, of course, results in poor color separation.)

Quote

You gotta admit: the wiki link you posted earlier on during our little skirmish, perfectly matched what was said in the white papers - R & G & B; nothing said about white, yellow and red.

Osiris is actually right about this one. What you see on the Foveon web site is a logical diagram of how light is filtered. The diagram certainly does not represent how the filtering is performed in practice.

As Osiris said, all of the layers in a Foveon sensor are equally sensitive to all wavelengths. But based on the depth of a layer, only light of a certain wavelength band is supposed to be absorbed by this particular layer. In practice, though, the absorption is far less than ideal, so color separation is (much) worse compared to using an optical filter.
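One way to quantify why poorer absorption-based separation costs image quality: recovering RGB means inverting the spectral mixing matrix, and the more the layers' responses overlap, the more that inversion amplifies noise. A toy comparison with made-up matrices (the condition number bounds the worst-case noise amplification):

```python
import numpy as np

# Illustrative spectral mixing matrices: rows give each channel's
# response to true R, G, B. Both sets of values are made up purely
# to show the trend, not measured from real sensors.
bayer_like = np.array([
    [0.9, 0.1, 0.0],   # optical filters: little overlap
    [0.1, 0.8, 0.1],
    [0.0, 0.1, 0.9],
])
foveon_like = np.array([
    [0.4, 0.3, 0.3],   # depth-based separation: heavy overlap
    [0.2, 0.5, 0.3],
    [0.1, 0.3, 0.6],
])

# A larger condition number means unmixing amplifies noise more.
print("bayer-like condition number: ", round(np.linalg.cond(bayer_like), 1))
print("foveon-like condition number:", round(np.linalg.cond(foveon_like), 1))
```

Both matrices are invertible, so color can be recovered in either case; the difference is how much sensor noise gets multiplied along the way.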

« Last Edit: July 08, 2011, 10:30:13 PM by x-vision »


tikappa

Edit: Sorry, freepatentsonline did not find the Japanese patents that have not yet been translated. The patent exists, and a machine-translated version is available through the website of the Japanese Industrial Property Digital Library at http://www.ipdl.inpit.go.jp/homepg_e.ipdl . Using the search service there is described at http://www.jpaa.or.jp/english/patent/how_to_search.html . As far as I can understand the claims, the patent describes a back-side-illuminated CMOS sensor with stacked detectors to detect different wavelengths. It is a kind of back-side-illuminated Foveon sensor that, according to the patent claim, 'concerning this invention, it is possible to improve the color separation characteristic.'

You gotta admit: the wiki link you posted earlier on during our little skirmish, perfectly matched what was said in the white papers - R & G & B; nothing said about white, yellow and red.

Osiris is actually right about this one. What you see on the Foveon web site is a logical diagram of how light is filtered.

I have cause to disagree. Then it can be said that the white papers Osiris referred to showed the "logical" diagrams too; he accepted these as R & G & B. The wiki link he gave to support his argument showed exactly the same (R & G & B).

As Osiris said, all of the layers in a Foveon sensor are equally sensitive to all wavelengths.

I am looking at the spectral sensitivity of the top layer of an early X3 - it is very much blue-weighted; it certainly isn't what anyone could call "equally sensitive to all wavelengths". I have hosted the response of the top layer, from the Foveon Inc document:

If all layers really are equally sensitive to each other (spectra-wise), then I would say Foveon missed a trick, as we know that wavelength absorption depends on the thickness of the silicon layers - it makes perfect sense to make the top layer (for blue) thinner than the next (for green), which in turn would be much thinner than the next (for red). So if the X3 design really did use layers of equal depth (one of the wiki links given earlier in this thread weakly indicates otherwise), then Canon has considerable scope for improvement - which is good news!

Sigma says 46MP because their sensor has 46 million photosites: approx. 15 million red, 15 million blue, and 15 million green. The red, blue, and green photosites are stacked on top of each other, which yields 15 million true RGB pixels.

A conventional 15MP sensor has 15 million photosites with a red, blue, or green Bayer filter in front of each. This results in 3.75 million red, 3.75 million blue, and 7.5 million green photosites. Then a process called demosaicing is used to calculate an RGB color value for each of the final 15 million pixels. The result is color that is not 100% accurate at the pixel level. The results are stunning nevertheless, as I'm sure you've seen with your 5D Mk II (I own one as well).

I should point out that there are shortcomings to the Foveon technology that are usually considered enough to make it a niche technology. It will be interesting to see what the new sensor can deliver. And I am very anxious to see where Canon's patent will lead. If they have perfected the true-color-at-each-pixel technology, it could be a revolution.
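The photosite arithmetic above can be sketched directly (the 46-million and 15-million figures come from the post itself):

```python
# Bayer: one photosite per output pixel, filtered in a 1R : 2G : 1B pattern.
bayer_photosites = 15_000_000
bayer_r = bayer_photosites // 4        # red-filtered sites
bayer_b = bayer_photosites // 4        # blue-filtered sites
bayer_g = bayer_photosites // 2        # green-filtered sites

# Foveon-style: three stacked photosites per output pixel.
stacked_photosites = 46_000_000        # Sigma's headline figure (approx.)
full_rgb_pixels = stacked_photosites // 3

print(f"Bayer:  {bayer_r:,} R / {bayer_g:,} G / {bayer_b:,} B sites")
print(f"Stacked: {full_rgb_pixels:,} true-RGB pixels")
```
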

Abstract: A microlens condenses incident light to an opening. Light passed through the opening reaches a first dichroic mirror. The first dichroic mirror passes blue light and reflects green and red light. Only the blue light is incident on a first light receiving surface. The first dichroic mirror leads the green and red light to a second dichroic mirror. The second dichroic mirror passes the green light and reflects the red light. Only the green light is incident on a second light receiving surface. The second dichroic mirror leads the red light to a third dichroic mirror. The third dichroic mirror reflects the red light. Therefore, the red light is incident on a third light receiving surface.