How does one extract just a channel? There still has to be some sort of conversion. Raw formats aren't actual file formats. Something still has to be done to change the raw data into a visible image. Given the makeup of the typical sensor, how do you end up with a continuous tone image? Why does it not have gaps? That is, if you take just the green pixel information, why are there not gaps where the red and blue pixels would normally be?

There are no gaps because you only take the pixels of the desired channel: R, G1, G2 or B. Maybe this illustrates:

Left: Bayer mosaic, Right: just the B channel

There is no conversion: you simply put the chosen raw numbers on a bitmap and save it in any desired image format. The raw numbers are viewable in a straightforward way, and a raw histogram can be plotted from those numbers as well, without any image existing at all.
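To illustrate, here is a minimal numpy sketch using a toy 4x4 array (real raw data would be 12- or 14-bit values read from the file): extracting one channel is just strided slicing of the mosaic, and the result has no gaps.

```python
import numpy as np

# Toy 4x4 Bayer mosaic (RGGB pattern): each 2x2 cell holds R, G1, G2, B.
bayer = np.array([
    [10, 20, 11, 21],
    [22, 30, 23, 31],
    [12, 24, 13, 25],
    [26, 32, 27, 33],
])

# Taking every second sample in each direction yields a gap-free,
# half-resolution image for one channel -- no interpolation involved.
r  = bayer[0::2, 0::2]   # red sites
g1 = bayer[0::2, 1::2]   # first green site
g2 = bayer[1::2, 0::2]   # second green site
b  = bayer[1::2, 1::2]   # blue sites

print(b)   # a dense 2x2 array of just the blue samples
```

Each channel comes out at half the linear resolution of the sensor, which is why there is nothing to fill in.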

Something still has to be done to change the raw data into a visible image.

The same is true for JPG, etc. Something has to be done to change the data inside a .jpg file into a visible image. In fact, you have less math to do to display something visible from most raw files than from a .jpg.

There are no gaps because you only take the pixels of the desired channel: R, G1, G2 or B. Maybe this illustrates:

Left: Bayer mosaic, Right: just the B channel

There is no conversion: you simply put the chosen raw numbers on a bitmap and save it in any desired image format. The raw numbers are viewable in a straightforward way, and a raw histogram can be plotted from those numbers as well, without any image existing at all.

OK, but you can't use it straight out of the camera. There is still some intermediary work that has to happen.

Vladimirovic, yes software has to 'read' the JPEG but a JPEG can still be used straight out of the camera. A raw file can't. JPEG is an actual image file format. Raw files aren't.

Oh yes, and to view a JPEG file intermediary work has to happen: a software engine must run decompression algorithms to convert the JPEG file's numbers (which are not an image) into a displayable bitmap. In fact a JPEG file stores frequency values rather than the spatial luminosity values that a raw file consists of, so strictly speaking the raw data are closer to a final image than the JPEG data.
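The frequency-versus-spatial point can be illustrated with a JPEG-style 2D DCT. This is only a sketch (it assumes scipy is available, and a flat 8x8 block stands in for a patch of uniform luminance), but it shows how the stored coefficients are nothing like displayable pixel values until an inverse transform is run:

```python
import numpy as np
from scipy.fft import dctn, idctn

# An 8x8 block of uniform luminance (spatial values, as in raw data).
block = np.full((8, 8), 100.0)

# JPEG-style forward 2D DCT: spatial values become frequency coefficients.
coeffs = dctn(block, norm='ortho')

# A flat block has all its energy in the DC term; every other
# coefficient is numerically zero.
print(int(round(coeffs[0, 0])))            # 800 (the DC coefficient)
print(np.allclose(coeffs.ravel()[1:], 0))  # True

# "Decoding" means running the inverse transform back to spatial values.
restored = idctn(coeffs, norm='ortho')
print(np.allclose(restored, block))        # True
```

A real JPEG decoder additionally undoes quantization, entropy coding and chroma subsampling, so the raw file's direct spatial samples really do involve less work to display.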

Vladimirovic, yes software has to 'read' the JPEG but a JPEG can still be used straight out of the camera. A raw file can't. JPEG is an actual image file format. Raw files aren't.

Used by what, exactly? By some software that knows how to make the 0s and 1s in a JPG file appear on your screen as an image you can understand (what was in that shot)... I have some news for you: the same goes for raw files. You also need software that knows how to make the 0s and 1s in a raw file appear on your screen... there is no difference between "JPG" and "raw" except your mistaken perception that a JPG contains an image and raw does not. The mere fact that your web browser, or whatever, does not know how to display an image stored in a raw file does not make it a non-image.

Sure, clipping is when your capture/display device does not capture or display the full range of luminosity/color present in reality. But isn't that a uniquely human judgement? After all, how does the software know that the white of a cloud is clipped and that it really wasn't that white? We know because a real cloud has many tonal variations, and when we see a cloud that looks like a white paper cutout, we say the cloud had its whites clipped. But how does software know to put blinkies in the cloud? Or, for that matter, how does a camera know to do that in the LCD or its electronic viewfinder? If the camera can 'show' that it's not capturing certain tones, then how is it detecting those tones?

Is the algorithm simply looking for consecutive pixels of exactly the same tone and assigning the clipping indicator to them? (After all, there is no such homogeneity in the real world, right?) I guess I am asking: how does software define clipping?
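For what it's worth, the usual answer is simpler than tone analysis: the software knows the maximum value its encoding range can hold and flags pixels at (or very near) that ceiling. A minimal sketch, assuming a 12-bit range with 4095 as the white point:

```python
import numpy as np

# "Blinkies" sketch: software does not judge tones the way a human does.
# It just compares each value against the known maximum of the range.
CLIP_LEVEL = 4095          # assumed white point for 12-bit data

raw = np.array([[1200, 4095, 4095],
                [3900, 4095,  800]])

blinkies = raw >= CLIP_LEVEL   # boolean mask of "clipped" pixels
print(blinkies.sum())          # 3 pixels flagged
```

So no detection of lost tones is involved: a value sitting at the ceiling *might* have been exactly that bright, but the software cannot tell, and it warns anyway.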

Perhaps better understanding can be gained by considering scene capture by the sensor and conversion to a color image as separate subjects? And the use of the word 'clipping' itself can be questionable, IMHO.

The sensor is sometimes said to have a linear gain characteristic (curve) with respect to incident illuminance, but we all know that it does not - it has an 'S'-shaped curve with a fairly linear portion in the middle. For example, one of my cameras has a sensor well capacity of 77,000 electrons but is stated to have acceptable linearity only up to 40,000 electrons or so. Thus we see that there is a 'headroom' of some 37,000 electrons, but one with a decreasing gain (electrons/lux-sec) as the sensor approaches saturation. For such a camera, any level between 40 and 77 thousand electrons could be chosen as the 'clipping' signal, but clipping per se does not occur. Even 77,000 is only an average value for saturation, varying as it does according to the laws of probability and the tolerances of sensor manufacture.

Onwards to the conversion of the sensor signals into a color image. Taking the example of flower shots with their highly saturated colors, often shot in bright conditions, most sensors will successfully capture most of the reflected color gamut. However, many of the captured colors lie outside the gamut covered by the RGB or CMYK color spaces used for image output - monitors, printers, OLEDs, etc. Unfortunately for highly saturated images, the conversion process uses color compression (perceptual intent) or just plain color clipping (colorimetric intent), which this time is real clipping of a digital nature. So an on-board JPEG histogram, or a blinkie function, will show clipping even when the sensor signals themselves are not clipped. The camera is doing its internal conversion to JPEG (sRGB or aRGB) and is showing when the conversion is clipping, not when the sensor is saturated. Indeed, this is the basis of 'highlight recovery'.

Even if a RAW image is shot and the file converted into a wide-gamut working space such as Kodak RIMM/ROMM (ProPhoto), or good old Adobe "Melissa", the time comes when the image must be converted into a smaller color gamut - and that is often when clipping happens. For yellow flowers, I find that blues are often reduced to 0 and reds pushed to 255 when converting from either RAW or ProPhoto to sRGB; those values are truly clipped and not easily retrievable, if at all.
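That colorimetric-intent clipping is just a clamp at the destination gamut boundary. In this minimal sketch the input triplet is hypothetical - a saturated yellow expressed in sRGB's linear primaries after conversion from a wider space, with blue landing below 0 and red above 1:

```python
import numpy as np

# Hypothetical result of converting a saturated yellow from a wide
# working space into sRGB primaries: blue goes negative, red exceeds 1.
out_of_gamut = np.array([1.18, 0.95, -0.07])

# Colorimetric gamut clipping (no compression): clamp to [0, 1].
clipped = np.clip(out_of_gamut, 0.0, 1.0)
print((clipped * 255).round().astype(int))   # [255 242 0]
```

This matches the yellow-flower symptom above: blue forced to 0, red pinned at 255, and the original ratios between channels are lost.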

Perhaps better understanding can be gained by considering scene capture by the sensor and conversion to a color image as separate subjects? And the use of the word 'clipping' itself can be questionable, IMHO.

The sensor is sometimes said to have a linear gain characteristic (curve) with respect to incident illuminance but we all know that it does not - it has an 'S' shaped curve with a fairly linear portion in the middle. For example, one of my cameras has a sensor well capacity of 77,000 electrons but is stated to have acceptable linearity in the range of 40,000 electrons or so.

Xpat,

You have one strange sensor if it gives a sigmoidal characteristic curve. How did you determine this? Most digital sensors are linear, and this can be shown by photographing a step wedge and observing the pixel values in RawDigger. For example, I used the Stouffer T4110, which has density steps of 0.1, corresponding to 1/3 EV. With the current version of RawDigger one can superimpose a grid over the wedge, take readings, and save them for analysis in Excel or some other program.

Here is the wedge with the grid:

And here are the pixel values. Note that the green channels are clipped in the two brightest steps, as shown by maximal pixel values of 15778 and decreased standard deviations (when the channel is completely clipped the SD is zero; clipping begins when the right side of the bell-shaped curve imposed by shot noise reaches the clipping point of the sensor).

Here is the plot from Excel for the Green1, red, and blue channels. The red and blue are not clipped and are linear within the limits of the wedge and illumination. The green channel is entirely clipped in step 1 and partially clipped in step 2.
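The clipping signatures described here - a spike at the maximum value and a collapsing standard deviation - can be sketched with simulated patches. The clip value 15778 is the green-channel maximum quoted above; the patch means and noise widths are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
CLIP = 15778   # observed maximum green-channel value

# Simulated 1000-pixel patches from three wedge steps: fully clipped,
# partially clipped (the shot-noise bell curve runs into the clip
# point), and comfortably unclipped.
step1 = np.full(1000, float(CLIP))                        # fully clipped
step2 = np.minimum(rng.normal(15500, 400, 1000), CLIP)    # partially clipped
step3 = rng.normal(8000, 90, 1000)                        # unclipped

for patch in (step1, step2, step3):
    at_max = (patch >= CLIP).mean()   # fraction of pixels at the clip point
    print(f"max={patch.max():.0f}  sd={patch.std():.1f}  frac_at_max={at_max:.2f}")
```

A fully clipped step shows SD = 0 and every pixel at the maximum; a partially clipped step shows a reduced SD plus a pile-up of pixels sitting exactly at the clip point, which is how RawDigger-style statistics reveal clipping.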

GL and Vladimirovich, you both know full well what I mean and am saying.

Yes, we know - you have decided (for your own convenience) to define an image only as something that certain programs of your own choosing can display on your screen as a "beautiful" picture... that's it...

You have one strange sensor that gives a sigmoidal characteristic curve. How did you determine this? Most digital sensors are linear and this can be shown by photographing a step wedge and observing the pixel values in Rawdigger.

Bjanes (or may I call you Bill?),

Funny you should say "sigmoidal"! The sensor I quoted is in fact the Foveon F7 as used in the Sigma SD9 DSLR, and the numbers came from a paper by Gilblom, et al, "Operation and performance of a color image sensor with layered photodiodes":

Quote

4.3 Performance

The total quantum efficiency of the F7 at 625nm is approximately 49% including the effects of fill factor. Total quantum efficiency is over 45% from about 530nm to beyond 660nm. Testing is underway to establish the limits of wavelength response. The F7 is expected to have useful sensitivity extending from below 300nm to 1000nm or higher. Well capacity is approximately 77,000 electrons per photodiode but the usual operating point (for restricted nonlinearity) corresponds to about 45,000 electrons. Photo response non-uniformity (PRNU) is less than ±1%.

I see that, in my earlier post, I said 40,000 electrons not 45,000 as above. Poor memory, sorry.

I have neither Stouffer wedges nor RawDigger, so I myself have determined nothing. Perhaps I was misled by the somewhat sigmoidal graphs for DR found in places like this http://www.dpreview.com/reviews/sigmasd1/12. They're nowhere near as linear as yours.

I have neither Stouffer wedges nor RawDigger, so I myself have determined nothing. Perhaps I was misled by the somewhat sigmoidal graphs for DR found in places like this http://www.dpreview.com/reviews/sigmasd1/12. They're nowhere near as linear as yours.

Hi Ted,

Those curves are the result of gamma correction and tone-mapping (and of the influence of the lens, veiling glare and such). In a practical sense, the sensors in most cameras are close to perfectly linear in their response to light. This is also the result of the camera electronics/ADC, which apply a black-point and a white-point to the response curve to produce that linearity. Since we can only access the data as recorded after the ADC, that is all that matters in a practical sense.
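That black-point/white-point scaling amounts to a simple linear remap of the recorded values. A minimal sketch (the levels 512 and 16383 are illustrative, not from any particular camera):

```python
import numpy as np

# Assumed levels: black level 512, white level 16383 (14-bit full scale).
BLACK, WHITE = 512, 16383

raw = np.array([512, 2048, 8192, 16383])

# Map the usable range onto 0..1; after this step the recorded values
# are (very nearly) linear in exposure for most sensors.
linear = (raw - BLACK) / (WHITE - BLACK)
print(linear)   # runs from 0.0 at the black point to 1.0 at the white point
```

Everything downstream of this (gamma, tone curve) is what bends the published DR graphs into their S shapes; the data at this stage are still linear.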

I like to think of 'Blowing' as applying to photosites and Full Well Count, a function of Exposure - if we saturate the photosites any highlights higher than that are gone forever.

Conversely I like to think of 'Clipping' as possibly also being caused by processing - we've reached the upper limit of the Raw or other data in the processing chain, while perhaps information at the sensor is not 'Blown', so possibly recoverable with different processing.

Vladimirovic, yes software has to 'read' the JPEG but a JPEG can still be used straight out of the camera. A raw file can't. JPEG is an actual image file format. Raw files aren't.

What is it that you are trying to prove?

A JPEG file is a file. A raw file is a file. Humans cannot look at files as if they were images. Files have to be decoded, interpreted and rendered in order to make sense as something visual. The JPEG format is notorious for being open to interpretation by the decoder (the Independent JPEG Group produces a decoder that is the de facto reference for how a given JPEG file should decode into raw RGB values, simply because the standard itself can't easily or reliably be used for that). Then you have to convert something akin to a *.bmp file into the colored, space-variant luminance/reflectance that is to appear in front of the viewer.

A raw file developed by the manufacturer's own raw developer tends to look visually nearly identical to the JPEG file coming directly from the camera.

...What happens on the sensor can't be taken in isolation, though, because, except when shooting JPEG, we can't use what comes off the sensor without conversion. The entire chain has to be taken as a whole.

We can do statistics on raw files, interpreted in isolation, to predict how a differently exposed raw file of the same scene would behave. That is clumsy language for saying that raw histograms are all that is needed for adjusting exposure, as long as you are shooting raw only.

You have been shown that an image can be formed by sampling one pixel value in every 2x2 raw sensel array. The fact that you seemingly did not know this makes me think that your confident assertions about raw files are poorly founded.

Funny you should say "sigmoidal"! The sensor I quoted is in fact the Foveon F7 as used in the Sigma SD9 DSLR, and the numbers came from a paper by Gilblom, et al, "Operation and performance of a color image sensor with layered photodiodes":

I see that, in my earlier post, I said 40,000 electrons not 45,000 as above. Poor memory, sorry.

I have neither Stouffer wedges nor RawDigger, so I myself have determined nothing. Perhaps I was misled by the somewhat sigmoidal graphs for DR found in places like this http://www.dpreview.com/reviews/sigmasd1/12. They're nowhere near as linear as yours.

In this case, the clip level would be 40,000 - that is, the gain of the A/D converter would be set such that its maximum output corresponds to the equivalent of 40,000 electrons. E.g., for a 12-bit camera, 4095 = 40,000 electrons. This is done exactly to keep the sensor operating in its linear range.
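Sandy's point can be sketched in a few lines (the 40,000-electron figure is from the discussion above; the mapping itself is a generic illustration, not any specific camera's calibration):

```python
# Map electron counts to 12-bit data numbers, with the ADC gain set so
# that the top of the sensor's linear range hits full scale.
FULL_SCALE_E = 40_000   # top of the assumed linear range, in electrons
ADC_MAX = 4095          # 12-bit full scale

def electrons_to_dn(e):
    dn = e * ADC_MAX / FULL_SCALE_E
    return int(min(round(dn), ADC_MAX))

print(electrons_to_dn(20_000))   # 2048 -- mid-scale
print(electrons_to_dn(40_000))   # 4095 -- the clip level
print(electrons_to_dn(77_000))   # 4095 -- headroom above 40k never reaches the file
```

So even though the photodiode itself may respond (nonlinearly) up to 77,000 electrons, nothing above the chosen operating point survives digitization, which is why recorded raw data look hard-clipped rather than S-shaped.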

Thanks, gentlemen, for setting me straight on linearity as it appears in files versus how it is at the sensor.

I take it then that camera sensors are different to photodiodes, with respect to saturation? Some Googling seems to indicate that ordinary photodiodes, unlike sensor photodiodes, do have a soft saturation characteristic, for example:

Which I naively assumed would also apply to camera sensors.

Quote

the sensitivity of the A/D converter would be set such that maximum would be at the equivalent of [45,000]. E.g., for a 12-bit camera, 4095 = [45,000] electrons. This is done exactly to keep the sensor operating in its linear range.

Sandy,

The early Sigma cameras work a little differently in that respect. The sensor has three analog outputs, which are presented to three A/D converters. Although the converters themselves are 12-bit, the camera firmware writes higher bit-count numbers (14 or 16?) to the X3F raw file, giving values much greater than 4095 decimal - e.g. somewhere around 10,000 for a saturated sensor. But the firmware also writes three metadata tags called 'saturation' (somewhere between 5,500 and 7,000), one for each of the three channels, presumably for some arcane use by the raw converter - and, this being Sigma, that use will probably not be obvious. I think they are also used in-camera for the LCD preview image (an sRGB JPEG thumbnail), which can show blinkies if so selected.

Off topic, but the Foveon sensors do also get quite a lot of trimming on-sensor by means of mainboard-generated sensor inputs - presumably due to production variability of the technology in practice.

Vladimirovic, yes software has to 'read' the JPEG but a JPEG can still be used straight out of the camera. A raw file can't. JPEG is an actual image file format. Raw files aren't.

Hi Bob,

The trouble with all-encompassing statements is that they can sometimes be wrong.

Here is an image made from a Sigma X3F raw data file with no interpolation (i.e. no de-mosaicing).

Of course, to appear on our monitors, it has been gamma'd and brightened . . . and yes, it's a JPEG but it looked just the same as a TIFF on my screen. It was produced by DCraw in "document" mode (no interpolation).

I do see what you're driving at, but the days are gone when double-clicking on a raw file only worked with the camera manufacturer's RAW converter.