Is this question on-topic? It would probably be more on-topic on Biology.
– damned truths Jan 30 '13 at 9:25


What do you mean that the camera isn't able to replicate it? That's what auto white balance is about. It's not perfect, but it's remarkably good in a lot of cameras and the human brain can be fooled as well.
– John Cavan♦ Jan 30 '13 at 11:58


I think these sorts of question are on topic because understanding them better can help understand what cameras do (and how it's different — directly part of the question in this case). That can help in composition, in visualization of intended results, and in post-processing. My question Michael linked to really didn't "work" because it was too broad; more focused questions like this one may do better.
– mattdm Jan 30 '13 at 13:47

4 Answers

The human vision system is complex, involving both photoreceptors and neurons in the eye itself, in addition to extensive processing in the brain. The main relay for information from the retina is the lateral geniculate nucleus, a part of the thalamus located pretty much in the center of your head. This, in turn, routes information to the primary visual cortex and to the extrastriate visual areas. The primary visual cortex processes the information into a three-dimensional model of the world. For this question, it's the areas known as V2 and V4 which are important.

The cells in V2 process orientation, spatial frequency, and color. Together with the primary visual cortex (that's V1), V2 registers wavelength, hue, and luminance. At this level, the brain probably does "white balance adjustment" very similar to what a camera does: basically, normalizing each channel.
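That per-channel normalization can be sketched with the classic "gray world" assumption: that the scene averages out to neutral gray, so each channel is scaled until its mean matches the overall mean. This is an illustrative sketch only, not what any particular camera firmware actually does:

```python
import numpy as np

def gray_world_wb(image):
    """White-balance an RGB image (H x W x 3, floats in [0, 1]) by
    scaling each channel so its mean matches the overall mean.
    A crude stand-in for the per-channel normalization described above."""
    means = image.reshape(-1, 3).mean(axis=0)   # per-channel means
    gains = means.mean() / means                # scale factors per channel
    return np.clip(image * gains, 0.0, 1.0)
```

Feed it an image with a warm color cast and the channel means come out equal; real cameras use far more elaborate heuristics, but the principle is the same.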

It's V4 that does the real trick: color constancy based on color memory. We remember what objects are supposed to look like, and subconsciously adjust so that they do look like that. We know that roses are a certain kind of red, that snow is white, that certain fruit looks a certain way. This seems pretty incredible, but a lot of research has gone into demonstrating that it's so. For humans, skin tones are a particular set of "memory colors", and this is one reason white balance is so important to get right in portraits.

Automatic white balance could be implemented in a similar way, just as "matrix metering" works by comparing scene information to a database. However, the state of object recognition and scene processing is far below what would be necessary to implement this in a useful way, both in algorithms and in processing power. So, for now, when there's no known neutral reference in the scene, we're stuck with simple auto-WB, presets, and adjusting by eye, using our own processing of memory colors.
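When there is a known neutral reference in the scene (a gray card, say), the correction becomes trivial: measure the card's color, and compute the per-channel gains that make it neutral. A minimal sketch under that assumption (real raw converters do this in a camera-specific color space, not display RGB):

```python
import numpy as np

def wb_from_reference(image, patch):
    """Compute white-balance gains from a patch known to be neutral
    (e.g. a gray card), then apply them to the whole image.
    `image` and `patch` are float RGB arrays with values in [0, 1]."""
    ref = patch.reshape(-1, 3).mean(axis=0)   # measured color of the card
    gains = ref.mean() / ref                  # gains that make it neutral
    return np.clip(image * gains, 0.0, 1.0)
```

This is exactly why a gray card in one frame of a shoot is so valuable: it replaces guesswork with a measurement.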

I have not studied this in school; the above is a lay summary of my understanding based on my own research. It's also my understanding that the state of the art is only at a tentative understanding of a complicated topic. What part of the brain does what, exactly, is still under investigation. But the basic concept is sound: your brain uses awareness of what it's seeing to "adjust" the scene, and cameras just aren't that smart — yet.

+1 It'd be good to add that the fovea, the part of the retina that creates the sharpest image and the main part of the eye that's sensitive to color, covers only one or two percent of the retina. A big difference between the eye and a camera is that a camera records an entire full color image at once, while your eyes and brain build and constantly update the image over time.
– Caleb Jan 30 '13 at 22:30

@Caleb: definitely important to the overall understanding of human vision and photography, but I'm not sure how it relates to white balance / color constancy....
– mattdm Jan 30 '13 at 22:59

The brain does the white balancing for us, and it's far more advanced than any software. For example, the brain can do partial white balancing, applying a different white balance to different areas of the scene.

The brain also does object recognition on what you see, which is even more advanced, and it uses that information in the white balancing. That is, a white wall looks white mainly because you recognise it as a white wall, not because you measure that its colour is white.

In a few words, the eye is more detailed and has far more processing power (via the brain) available to it than any camera. "Seeing" is one of the most energy-consuming tasks we perform, because we constantly process so much visual information.

Image data acquired by sensors – either film or electronic image sensors – must be transformed from the acquired values to new values that are appropriate for color reproduction or display. Several aspects of the acquisition and display process make such color correction essential – including the fact that the acquisition sensors do not match the sensors in the human eye, that the properties of the display medium must be accounted for, and that the ambient viewing conditions of the acquisition differ from the display viewing conditions.
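In practice, the transformation the quote describes is often a 3×3 matrix applied to the sensor's raw RGB values, mapping them toward the display color space. A toy illustration with a made-up matrix (real matrices are measured per camera model, typically by photographing a color chart):

```python
import numpy as np

# Hypothetical camera-to-display correction matrix; the values here
# are invented for illustration. Each row sums to 1.0 so that
# neutral (gray) inputs stay neutral after correction.
CAM_TO_SRGB = np.array([
    [ 1.6, -0.4, -0.2],
    [-0.3,  1.5, -0.2],
    [ 0.0, -0.5,  1.5],
])

def correct_colors(raw_rgb):
    """Map raw sensor RGB values to display RGB with a 3x3 matrix."""
    rgb = raw_rgb.reshape(-1, 3) @ CAM_TO_SRGB.T
    return np.clip(rgb, 0.0, 1.0).reshape(raw_rgb.shape)
```

The off-diagonal negative terms are the interesting part: they compensate for the overlap between the sensor's color filters, which is one of the mismatches with the human eye that the quote mentions.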

In addition, Ken Rockwell has done a nice write-up on how we see and how it relates to photography; it's worth a good read here

There's also that little matter of the brain. Most colourblind people don't see monochromatically, and many don't know they have a problem at all until they fail the Ishihara test for the first time. A lot of that "auto white balance" comes from knowing what you're seeing.
– user2719 Jan 30 '13 at 7:29

As usual, Ken's article is entertaining, has some good insight, and does not particularly concern itself with accuracy. Some of it is just completely wrong.
– mattdm Jan 30 '13 at 17:13

The brain is remarkably good at correcting colours - or, better put, at seeing what it wants or thinks it should see.

Try the following experiment: using a colour printer, print a photo that includes some white areas, but print it on pastel-coloured paper (pink, light green, etc.). When you look at the print, you will still see "white" where the photo calls for it.