Given that the main purpose of demosaicing is to recover colour as accurately as possible, would there be any advantage to a "black and white only" demosaic algorithm? That is, instead of first recovering the colour and then converting black and white, might it be better to convert the RAW file directly to black and white?

I'm particularly interested in the image quality (i.e. dynamic range and sharpness). On a related note, which common demosaicing algorithms are most amenable for black and white conversion?

Color is an intrinsic property of a RAW image created by a color Bayer sensor. The problem with converting it to grayscale is that each pixel holds luminance for only a single color. Whether you treat each pixel as a luminance value or as a color value, it represents only approximately 1/3rd of the total luminance that was incident on that pixel at the time of exposure. Demosaicing is really unnecessary for grayscale images; however, to get ideal grayscale images you would want to use a grayscale sensor... without the Bayer filter at all!
–
jrista♦ Feb 6 '13 at 19:21

As for which demosaicing algorithms are ideal for B&W conversion when using a color camera... I would say the simplest form: your standard quad interpolation. Most of the more advanced demosaicing algorithms are designed to minimize color moire and other color-related artifacts. If all you care about is B&W, then standard 2x2 pixel interpolation will preserve the most detail.
–
jrista♦ Feb 6 '13 at 19:23

@jrista I'm not sure why a naïve interpolation would preserve more detail than one of the more advanced algorithms, which attempt to distinguish between changes in brightness and changes in colour. In any case, colour artifacts can show up in black and white images as well, depending on how the conversion is done.
–
Matt Grum Feb 6 '13 at 19:42

Well, I guess I'm basing that primarily on AHD (Adaptive Homogeneity-Directed demosaicing), which tends to soften detail. At least, the implementation in Lightroom produces slightly softer results than the algorithm used by Canon DPP, which produces very crisp, sharp results from a simpler demosaicing algorithm (although I guess not as simple as your basic 2x2).
–
jrista♦ Feb 6 '13 at 20:14

4 Answers

There is no way to convert a RAW file directly to black and white without recovering the colour first, unless your converter takes only one of the R,G,B pixel sets to produce an image. This approach would result in a substantial loss of resolution.

In order not to lose resolution when converting to black and white, you have to use all of the R, G, and B pixels, which implicitly means colour calculations must be performed; at that point you might as well use one of the advanced colour demosaicing algorithms and then convert the result to black and white.
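A minimal numpy sketch of the resolution cost of the single-channel approach, assuming a hypothetical 4x4 sensor readout with an RGGB Bayer layout (the layout and the array values are illustrative, not from any real camera):

```python
import numpy as np

# Hypothetical 4x4 RGGB mosaic readout: red photosites sit at even
# rows and even columns, blue at odd rows and odd columns.
raw = np.arange(16, dtype=float).reshape(4, 4)

# Option 1: take only the red photosites. One value survives per 2x2
# quad, so the output is half the width and half the height.
red_only = raw[0::2, 0::2]

# Option 2: demosaic, using every photosite, and keep full resolution.
print(red_only.shape)  # (2, 2) -- a quarter of the pixels
print(raw.shape)       # (4, 4)
```

The same slicing pattern generalizes to any sensor size, which is why the single-channel shortcut always costs a factor of four in pixel count.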

Halving the resolution by extracting one colour, without a weighted average of the quads, would not give the expected greyscale image; it would be like putting a green, red, or blue filter on a monochrome camera. And a philosophical question: dividing each axis by 2 reduces the Mp count by 4, and I would call this half resolution. But you seem to call sqrt(2) per axis (Mp count / 2) "half resolution". Which definition is technically correct? If resolution is the ability to resolve, then width/2 and height/2 is half resolution in a 2D system where you want to preserve rotational invariance.
–
Michael Nielsen Feb 7 '13 at 8:15

Extension of my view on resolution: I think that Mp is not the resolution, it's a photography marketing number. As an image processing engineer, I give resolution as w x h.
–
Michael Nielsen Feb 7 '13 at 8:34

@MichaelNielsen What "expected greyscale image"? There are many different methods to convert to greyscale, the question didn't specify an equal weighting approach. Secondly, if you had a linear detector and halved the number of samples, the resolving power, i.e. the maximum amount of detail detectable would halve, you wouldn't say it reduced by a factor of root 2. From that it stands to reason that if you have a 2D field of detectors (such as an image sensor) and halve the number of samples in both directions, leaving you with one quarter, you'd say the resolution was reduced by a factor of 4.
–
Matt Grum Feb 7 '13 at 9:27

If you halve only the x or y axis, you have different resolutions in each direction, defeating the ability to state a total resolution in Mp and compute a single "/2 resolution" factor. Of course lenses don't have equal resolution either, but sensor manufacturers are pretty proud to announce that nowadays their pixels are square, yielding equal resolution in both directions; this means a resolution of 640x = 480y. See how the pixel number itself means nothing: resolution 640 is the SAME resolution as 480.
–
Michael Nielsen Feb 7 '13 at 9:45

Greyscale: I didn't say equal weighted. And I know there are many different greyscale versions, but I can bet you that R, G, or B alone is not one of the ones the OP expects. The most probable one is the 0.11*B + 0.59*G + 0.3*R version.
–
Michael Nielsen Feb 7 '13 at 9:49
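The weighted conversion mentioned in the comment above can be sketched in a few lines; the weights below are the standard Rec. 601 luma coefficients (0.299/0.587/0.114, i.e. the 0.3/0.59/0.11 figures rounded):

```python
def luma(r, g, b):
    """Rec. 601 weighted greyscale: 0.299 R + 0.587 G + 0.114 B."""
    return 0.299 * r + 0.587 * g + 0.114 * b

# Pure green contributes far more to perceived brightness than pure
# blue, which is exactly why a plain R, G, or B plane is not the
# greyscale image a viewer expects.
print(luma(0, 255, 0))  # ~150
print(luma(0, 0, 255))  # ~29
```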

A reason for that is quite simple: otherwise you'd get sub-pixel artifacts all over the place. You need to realize that the image recorded by the sensor is quite messy. Let's take a look at the sample from Wikipedia:

Now imagine we don't do any demosaicing and just convert RAW into grayscale:

Well... you see the black holes? The red pixels didn't register anything in the background.
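The "holes" are easy to reproduce numerically. A sketch, assuming an RGGB layout and a uniform pure-blue scene: every red and green photosite reads zero, so treating the mosaic directly as greyscale leaves three dark pixels in every 2x2 quad.

```python
import numpy as np

# A uniform pure-blue patch seen through an RGGB Bayer mosaic.
h, w = 4, 4
blue_scene = np.zeros((h, w, 3))
blue_scene[..., 2] = 1.0  # pure blue everywhere

# Each photosite records only its own colour channel.
mosaic = np.zeros((h, w))
mosaic[0::2, 0::2] = blue_scene[0::2, 0::2, 0]  # R sites -> 0
mosaic[0::2, 1::2] = blue_scene[0::2, 1::2, 1]  # G sites -> 0
mosaic[1::2, 0::2] = blue_scene[1::2, 0::2, 1]  # G sites -> 0
mosaic[1::2, 1::2] = blue_scene[1::2, 1::2, 2]  # B sites -> 1

# Read directly as greyscale, 3 of every 4 pixels are black "holes":
print(mosaic)
```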

Now, let's compare that with the demosaiced image converted to grayscale (on the left):

You basically lose detail, but you also lose a lot of the artifacts that make the image rather unbearable. The image bypassing demosaicing also loses a lot of contrast, because of how the B&W conversion is performed. Finally, shades that fall between the primary colours may be represented in rather unexpected ways, while large surfaces of red and blue will be 3/4 blank.

I know that this is a simplification, and you might aim at creating an algorithm that is simply more efficient at converting RAW to B&W, but my point is that:

The good way to do B&W photography is by removing the colour filter array completely, as Leica did in the M Monochrom, not by changing the RAW conversion. Otherwise you get artifacts, or false shades of grey, or a drop in resolution, or all of these.

Add to this the fact that the RAW → Bayer → B&W conversion gives you far more options to enhance and edit the image, and you have a pretty much excellent solution that can only be beaten by a dedicated sensor design. That's why you don't see dedicated B&W RAW converters that don't fall back on demosaicing somewhere in the process.

Machine vision cameras with Bayer filters can deliver greyscale images directly, but they do it by demosaicing, converting to YUV, and sending only the Y (luminance) channel (the ones I normally use, at least). If they had a better way that bypassed this colour reconstruction, I think they would use it, as they are constantly pushing frame rates (the typical camera I use runs at 100 FPS, for example).

If it were to skip the colour-based demosaicing, it could halve the resolution and take a weighted average of each 2x2 quad, but if you want full resolution it is better to use the normal colour demosaicing algorithm, which tries to preserve edges better. If we know we want greyscale, we just get a monochrome camera from the start and slap on a colour filter if we are looking for a certain colour. This setup is vastly superior in image quality, reducing the need for resolution oversampling, which in turn allows the use of a fast, low-resolution sensor with larger pixels, which in turn gives an even better image.
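The half-resolution quad-averaging option described above can be sketched as follows, assuming an RGGB layout and the common 0.299/0.587/0.114 luma weights (the weighting scheme is an assumption; the answer only says "weighted average"):

```python
import numpy as np

def quad_to_grey(raw):
    """Collapse each RGGB 2x2 quad into one grey pixel.

    The two green photosites are averaged first, then the usual
    luma weights are applied. Output is half the width and half
    the height of the input mosaic.
    """
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    return 0.299 * r + 0.587 * (g1 + g2) / 2 + 0.114 * b

mosaic = np.ones((4, 6))   # a flat mid-grey scene
grey = quad_to_grey(mosaic)
print(grey.shape)          # (2, 3) -- half resolution per axis
```

Because the weights sum to 1.0, a flat scene comes out flat; real detail between quads is what gets lost, which is why full-resolution demosaicing is preferred when edges matter.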

The effect of the color filters over each pixel well in the Bayer layer is the same as shooting B&W film with color filters over the lens: they change the relationship of the gray levels of the various colors in the scene being photographed. To get an accurate luminance level for all colors in the scene, the signals from each pixel must be demosaiced. As others have mentioned, a sensor with no Bayer layer would yield a monochrome image that need not be demosaiced. This should result in better image sharpness if the circle of confusion from the lens is equal to or less than the width of each pixel.

White Balance adjustment can effect a change in overall perceived luminance in the same way that contrast adjustment can. As such, it can be used to fine tune contrast.

White Balance will also have an effect on the relative luminosity of different colors in the scene. This can be used to fine-tune the application of "Orange", "Yellow", "Red", etc. filter effects. Red seems to be the most affected by this and is much darker at 2500K than at 10000K. Surprisingly, at least to me, blue tones do not demonstrate the reverse.
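The interaction between white balance and the grey rendering of a color can be illustrated numerically. A sketch only: the channel multipliers below are made up for illustration (not taken from any real camera profile), and the greyscale mix uses the standard luma weights.

```python
def grey_after_wb(r, g, b, wb=(1.0, 1.0, 1.0)):
    """Apply hypothetical white-balance channel multipliers,
    then mix down to grey with Rec. 601 luma weights."""
    r, g, b = r * wb[0], g * wb[1], b * wb[2]
    return 0.299 * r + 0.587 * g + 0.114 * b

# A pure-red patch under two invented multiplier sets:
warm = grey_after_wb(200, 0, 0, wb=(0.7, 1.0, 1.6))  # low-K setting damps red
cool = grey_after_wb(200, 0, 0, wb=(1.4, 1.0, 0.8))  # high-K setting boosts red
print(warm < cool)  # red renders darker at the warm setting
```

This matches the observation above that red tones come out much darker at 2500K than at 10000K.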

Since for all practical purposes there is no chrominance noise in a B&W photo, it can be left at "0".

The unsharp mask tool will give much more control over sharpness than the simpler "Sharpness" slider. Especially if you have a few "warm" or "hot" pixels in the image, you can increase overall sharpness without emphasizing them.
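For reference, the principle behind unsharp masking is simple: add back the difference between the image and a blurred copy of it. A minimal sketch (a box blur stands in for the Gaussian blur real tools use, and `radius`/`amount` are simplified stand-ins for a real tool's parameters):

```python
import numpy as np

def unsharp_mask(img, radius=1, amount=1.0):
    """Basic unsharp mask: sharpened = img + amount * (img - blurred).
    Uses a simple box blur with edge padding instead of a Gaussian."""
    k = 2 * radius + 1
    pad = np.pad(img, radius, mode="edge")
    blurred = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            blurred += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    blurred /= k * k
    return img + amount * (img - blurred)

# A vertical step edge gets overshoot on both sides (exaggerated
# contrast), while flat regions away from the edge are untouched:
step = np.array([[0., 0., 1., 1.]] * 4)
print(unsharp_mask(step))
```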

Below are two versions of the same exposure, shot on a Canon 7D with an EF 70-200mm f/2.8L IS II lens and a Kenco C-AF 2X Teleplus Pro 300 teleconverter. The image was cropped to 1000x1000 pixels. The first was converted using the in-camera settings shown below it. The second was edited with the settings shown in the screenshot. In addition to the settings on the RAW tab, a Luminance Noise Reduction setting of 2 was applied, as was a Chromatic Aberration value of 99.