His best advice (as I remember it) was to yank on the sliders until the image "looked" like you thought it looked. If you know Stephen, you would realize that what you thought you saw, what you wanted to see, and what you ended up capturing were all very different. At some point you need to get over what you thought you saw and make the adjustments needed to make it look like what you want it to look like.

BRAVO! At some point we need to stop thinking "Perfect" color, and create with our mind's eye. Be the Artist and interpret what we Feel and not just what we see.

Peter

For artistic photography, Jeff's advice is well taken. However, for documentary photography one aims to reproduce either the scene itself or the perception of the scene. For the former, an accurate white balance is needed; for the latter, one must take human perception into account. The OP wanted to reproduce the scene as he perceived it. The scene was relatively dark, likely in the range of mesopic vision. As Jeff pointed out, color perception decreases as one goes into darkness. Also, if one wants to reproduce the appearance of the scene, chromatic and dark adaptation must be taken into account. His empirical approach was relatively crude and limited by the camera's LCD, but it got the job done. The Photoshop CIECAM02 plugin can help in adjusting the appearance of the reproduction.

For artistic photography, Jeff's advice is well taken. However, for documentary photography one aims to reproduce the scene or the perception of the scene.

Perception: key word above. I suspect the same is true for non-documentary photography, and neither the subject nor the photographer's intent changes the fact that there's nothing accurate here, certainly beyond someone saying they feel it's accurate. We could both be at the same scene, shooting it together with identical cameras and settings. You could say what you captured is "accurate" or looks as you recall it; I could say you're way off base. Neither of us can provide anything scientific to say our perception of either the scene or the capture is more accurate than the other's. The concepts don't belong here. They might belong in a post about capturing data using some kind of sound, scientific method and some sound, analytically useful tools.

Even IF you could prove that your capture was more colorimetrically accurate than mine, if I like my capture better, the story is over. Accuracy loses to desired color appearance for the vast majority of the people making images who come to LuLa.

People can discuss how many ICC profiles can theoretically dance on the head of a pin, or what CCT value on a camera produces "accurate" colors; I don't see how this in any way aids in creating preferable-appearing images.

What's more accurate: Fujichrome, Ektachrome or (ugh) Agfachrome? Answer: it doesn't matter. What matters is which film stock you visually prefer and thus buy and use.

Hi Bill, so it was you who posted that graph in that old WB thread of mine. The cached version I found through Google is stripped of images. Thank you for providing it, then and now.

The value of 6,000 K (close to 5,500) did not work well in my test, as can be seen.

> Also, if one wants to reproduce the appearance of the scene, chromatic and dark adaption must be taken into account. His empiric approach was relatively crude and limited by the camera's LCD, but it got the job done.

Exactly. I am of course aware of the limitations of the camera screen and the in-camera JPEG, and the precision level of 1,000 K can be raised to 100 K (on the Canon 5D2). But even at 1,000 K, my "method" could do something no other method I know of can. One may see it as a crutch for a freehand estimate of the Kelvins, including not only the light but also its reflections through the scene, and phenomena specific to human vision. Just like the grayscale step wedge in the old days was for estimating the EV.

> Even IF you could prove that your capture was more colorimetrically accurate than mine, if I like my capture better, the story is over.

Andrew, you are insinuating 2 things I have never claimed nor am interested in. 1-I am not talking about colorimetry. I want to reproduce the scene as I saw it. But to me this is far different from sheer arbitrariness. I just don't buy it. 2-I don't want to urge anybody to use my method. If your preferred rendering of that scene is, say, the one resulting from the AWB, you're welcome to it. All I wanted to do was for once give back a little to this forum where I have learned so much.

> Neither of us can provide anything scientific to say our perception of either the scene or the capture is more accurate than the other.

No, we can't. The kind of science that might apply here might be something like this: I should gather xx naturalistic painters on my porch, have them look at the scene, then at the images on my calibrated screen, and have them select the one they found most natural. Until I get that done, I will proudly point to Tim Lookingbill (post #4), who picked the same image as I did even though he was not at the scene.

1-I am not talking about colorimetry. I want to reproduce the scene as I saw it.

That is a perceptual phenomenon: the end result of the excitation of photoreceptors, followed by retinal processing, ending in the visual cortex. We define colors based on perceptual experiments.

As I said, we can both be at the same scene and disagree about what we perceive. Colorimetry is the yardstick by which we could attempt to say whether a person's subsequent capture is as you or the camera "saw it"; otherwise this is no more accurate than an assumption.

> The kind of science that might apply here might be something like this: I should gather xx naturalistic painters on my porch, have them look at the scene, then at the images on my calibrated screen, and have them select the one they found most natural. Until I get that done, I will proudly point to Tim Lookingbill (post #4), who picked the same image as I did even though he was not at the scene.

You mean Tim and you agree about a preferred rendering? It says nothing about accuracy, my original point.

In my o.p. I did not ask which rendering anybody preferred, but which one anybody found most natural, and that was the question Tim answered. It says nothing about which rendering he preferred.

The original quote I see is:

> I had a bright moment and have figured out how to set the natural (visually correct) white balance at shooting time. But before I reveal the secret, you have to guess:

Visually correct is what you prefer. I've said that now more than once. It has nothing to do with the numbers you set on the camera, see on the back of the LCD or, in reality, what you remember you saw when you took the capture. OK, done.

> In my o.p. I did not ask which rendering anybody preferred, but which one anybody found most natural, and that was the question Tim answered. It says nothing about which rendering he preferred.

The illumination of the scene appears to be quite blue; perhaps most of it is coming from the skylight, which could easily have a CCT of 10,000 K even towards dusk. One can estimate the illumination by looking at the roof of the structure, which is covered with snow.

The WB with a CCT of 10,000 K gives the most neutral result, and it is not surprising that it was favored by several observers.

One can open one of the images in ACR and white balance on the roof as shown. This could be used as a starting point from which adjustments can be made to obtain the best match to the perceived scene.
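Clicking a near-neutral patch like that snowy roof amounts to computing per-channel gains that make the patch read equal in R, G and B. A minimal sketch of that arithmetic, with hypothetical patch values (real converters do this in their own linear space and with their own normalization):

```python
import numpy as np

def gray_point_gains(patch_rgb):
    """Per-channel gains that neutralize a sampled near-neutral patch.

    patch_rgb: mean linear RGB of the clicked region (e.g. the snowy roof).
    Gains are anchored to the green channel, as many raw pipelines do.
    """
    r, g, b = patch_rgb
    return np.array([g / r, 1.0, g / b])

# Hypothetical bluish roof patch from a dusk scene
patch = np.array([0.42, 0.50, 0.61])
gains = gray_point_gains(patch)
balanced = patch * gains  # the roof patch is now neutral: R == G == B

# Applying the same gains to the whole image gives the starting point
# from which further adjustments toward the perceived scene can be made.
```

The same idea is what "click white balance" tools implement; everything after that click is the perceptual adjustment discussed above.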

I disagree with your interpretation of the OP's quote. The way I interpret Hening's revelation, he wants a reasonably close representation of what he saw at the time of shooting, encoded as "As shot". Period. Where that initial white balancing will ultimately lead is not yet decided. That seems to be a perfectly valid approach, because it provides a much more reasonable starting point than we could get from an "Auto" setting or from our memory. It could well be something close to 5500K. As a minimum it would probably reduce a significant percentage of magenta sunsets (sunsets are usually not magenta), and produce a somewhat more realistic-looking thumbnail in the Raw file.

I think that the images presented weren't the best to prove or disprove anything. Aesthetically not at all pleasing. Better images might have helped? Hening's idea isn't going to become popular, mainly because the LCD on a camera isn't calibrated, is too small, and is notoriously difficult to see. An "accurate" WB, as Andrew points out, is meaningless. If there were only one small flaw in Hening's idea, it would be possible to argue a good case, but there is more than one, so it isn't going to happen? Like 99% of Heureka - I think he means eureka - moments, it is doomed to failure.

Bill, setting the color temp in my cameras, the D700 and D800E, does not give identical results. That is why I stopped working with a set color temp; I just use AWB and, where possible, a CCP image, or more than one if the light situation changes.

WB can get quite philosophical. Do you want each color rendered as closely as possible to how the physical scene was (i.e., a spectrophotometer would have readings as similar as possible in the original scene vs. the recreation)? Do you want to remove the spectral bias caused by the scene illumination? Do you want to replace it with the illumination used in the room where the image is to be viewed? Do you want image "white" to match passe-partout white?

I use the WB picker on something perceived as "white", then push the sliders until I see something that I like on screen, then do a few iterations of prints until I am happy (LR soft proofing is only slightly helpful to me). I also have profiles for my camera, displays and printer/paper.

> Where that initial white balancing will ultimately lead to is not yet decided.

I must admit, though, that my intention is to leave it there - I have nothing better to come up with. Every change would be merely arbitrary.

Hi Stamper

> Aesthetically not at all pleasing.

Agreed! I had no artistic ambition with the image. The scene has what I needed: a tricky WB - that's all.

> I think he means eureka

Indeed, it turns out that is the English spelling. The original Greek word contains a little diacritic mark which is the equivalent of a leading "h", and so the quote is spelled that way in my native German.

> the lcd on a camera isn't calibrated, too small and notoriously difficult to see.

All sadly true. Furthermore, the Canon in-camera JPEG is over-steepened in contrast and over-saturated in colors. If memory serves, my Nikon D200 was more civilized. But alas - we have to live with the tools we can get... And I still think that despite all of the above, my "method" can do something I have not found elsewhere.

Hi Christmas Dwarf!

My WB philosophy: I want to reproduce the scene as I perceived it at shooting time. That is not a colorimetric representation, but the same biased by human vision, herein the AWB of our brain - and the limits of that AWB. I do not want to remove the spectral "bias" caused by the illumination - that is the approach valid for catalog shooting and art reproduction. To me, this "bias" is part of the "subject" I want to reproduce. At shooting time, I cannot take into account the illumination in any room. I, too, have profiles for my camera and my monitor. For printing I use a service which has good profiles for their printers. My passe-partouts are black.

> My WB philosophy: I want to reproduce the scene as I perceived it at shooting time. That is not a colorimetric representation, but the same, biased by human vision, herein the AWB of our brain - and the limits of that AWB. I do not want to remove the spectral "bias" caused by the illumination - that is the approach which is valid for catalog shooting and art reproduction.

The WB model we use with our digital cameras works well at relatively high levels of illumination and with illuminants whose CCT does not deviate too far from daylight. At low levels of illumination, the Purkinje effect occurs, and this is not taken into account by the usual WB algorithm. Also, chromatic adaptation is never complete with illuminants much below 3000 K or thereabouts. Candlelit scenes will always appear reddish.
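The "never complete" part has a standard numerical form: von Kries-style adaptation with a degree-of-adaptation factor D, as used in CIECAM02 (D shrinks at low luminance). A sketch with the Bradford matrix, assuming illuminant A stands in for the tungsten/candlelight source; at D < 1 part of the reddish cast survives the adaptation, which is the effect described above:

```python
import numpy as np

# Bradford cone-response matrix (standard values, as used by the ICC and CIECAM02)
BRADFORD = np.array([
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
])

def adapt(xyz, white_src, white_dst, D=1.0):
    """von Kries-style chromatic adaptation with degree of adaptation D.

    D = 1 is complete adaptation; D < 1 leaves part of the source cast in
    place, which is why candlelit scenes keep looking reddish.
    """
    rgb_ws = BRADFORD @ white_src
    rgb_wd = BRADFORD @ white_dst
    # Blend full adaptation with no adaptation according to D
    scale = D * (rgb_wd / rgb_ws) + (1.0 - D)
    return np.linalg.inv(BRADFORD) @ (scale * (BRADFORD @ xyz))

# Illuminant A (tungsten, ~2856 K) and D65 white points in XYZ
A   = np.array([1.0985, 1.0000, 0.3558])
D65 = np.array([0.9505, 1.0000, 1.0888])

sample  = np.array([0.5, 0.5, 0.2])      # hypothetical tungsten-lit color
full    = adapt(sample, A, D65, D=1.0)   # complete adaptation
partial = adapt(sample, A, D65, D=0.7)   # incomplete, low-light case
```

With D = 1, the source white maps exactly onto the destination white; with D = 0.7 it lands short of it, i.e. some of the warm cast remains in the reproduction.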

Because of these complications, your empirical approach is necessary at low levels of illumination and with low-CCT illuminants. Some have criticized your use of the camera LCD for this purpose but have offered no solution other than to adjust the WB after the fact using one's memory of the scene. However, memory is imperfect. Another approach would be to use live view and a field monitor such as this. I don't know if this model can be calibrated. If so, one would have to decide on what luminance and CCT to use for the white point.

Anyway, thanks for a useful post that has led to some interesting discussion despite some naysayers. As per an old saying, no good deed goes unpunished.

Thank you for your encouragement. I am happy to read that you find my post useful. As said, I felt that for once I might be able to give back a little to this forum, where I find myself on the asking side for the most part. Almost all my education in digital photography has happened here.

As for a field monitor: thanks for the link! I would order this one at once if it did not weigh 860 grams - and it is not specified whether that includes batteries - or does 'battery adaptor' mean that one needs a wall socket?

I have long wondered why nobody makes an adaptor to use the iPhone for this purpose. That weighs only 100 grams...

Don't know if anyone's tried this and arrived at the same results, but I've noticed something while processing a thousand or so raws shot under various non-pro/non-studio light conditions. After setting WB in ACR by clicking on a WhiBal card target included in the scene, I can adjust the Temp/Tint sliders to noticeably change the overall color cast of the image with little if any change to the R=G=B readouts on the gray portion of the WhiBal card, barring any noise-related sampling-point issues.

What does that say about scientifically setting accurate WB according to a spectrally flat target?
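One hypothetical explanation (a toy sketch with made-up numbers, not a claim about ACR's internals): small slider moves change the linear channel gains by a fraction of a percent, which can vanish in the rounding of an 8-bit readout while still tinting large smooth areas, since the eye integrates a cast over a whole sky or snowfield:

```python
def readout_8bit(linear):
    """Quantize a 0..1 linear value to the 0..255 readout shown in a sampler."""
    return round(linear * 255)

gray = 0.50        # balanced gray-card value, identical in all channels
tiny_gain = 1.003  # ~0.3% red gain from a small Temp nudge (made up)

# The card readout is unchanged by the nudge...
print(readout_8bit(gray), readout_8bit(gray * tiny_gain))  # prints: 128 128
# ...yet a 0.3% channel bias applied uniformly across a large smooth area
# can still read as a faint overall cast.
```

If that is what is happening, the spectrally flat target is still doing its job; the readout is simply too coarse to register the slider move that the eye can.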

Anyway - thanks again -

and a happy new year to all of you!

Hening

Helicon does make an app for Android devices (Helicon Remote) that is primarily for focus stacking, but it might be used to preview the camera image. An iPhone/iPad version is planned, but it will only be able to communicate with the camera via WiFi, since the Apple system is closed and the necessary WiFi transmitter is expensive. For Nikon, Control My Nikon enables tethered operation and is very inexpensive. Unfortunately it requires a PC or Mac. Perhaps one could use one of the small notepad PCs. If anyone has any experience with the above, please share your findings.

That is really good news! The Google Nexus 7 looks affordable and not too heavy (12 ounces, 340 grams). I am investigating the connectivity. Both the tablet and the camera have micro-USB sockets, but I cannot find a cable that has a micro-USB male on either end. I am not familiar with WiFi. Can one set up a connection just between the camera and the tablet, without a router and outside Internet coverage? Good new year! - Hening

> The magenta-green axis

I am aware that this is to be considered. But how much does the daylight change during the day along this axis? In another thread on this forum, which I fail to retrieve right now, I was told that it changes very little, and I have since set the Tint in the raw converter to zero.

While the magenta/green axis doesn't change a lot in normal blackbody radiators, a sensor's magenta/green response can. ... To have zero Tint is likely to be wrong in 90%+ of cases...

The canned settings for Daylight, Cloudy and Shade in ACR are indeed based on a fixed value and offset for Tint, which is suggested to be +10, thus adding some magenta to neutralize the slight green cast of the standardized D illuminants relative to the Planckian locus (the so-called daylight locus runs a tad, about 0.003 delta-uv, above the blackbody curve in the CIE xy chart). ACR Daylight: 5500 / +10, Cloudy: 6500 / +10, Shade: 7500 / +10.
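For reference, the chromaticities of the standardized D illuminants that those presets aim at can be computed from the CCT with the CIE daylight-locus polynomial. A sketch (valid for 4000-25000 K; note that the published D55/D65 coordinates use slightly corrected CCTs, e.g. 6504 K rather than 6500 K, so the values here land very close to but not exactly on them):

```python
def daylight_xy(cct):
    """CIE daylight-locus chromaticity (x, y) for a CCT in 4000..25000 K."""
    t = float(cct)
    if 4000 <= t <= 7000:
        x = -4.6070e9 / t**3 + 2.9678e6 / t**2 + 0.09911e3 / t + 0.244063
    elif 7000 < t <= 25000:
        x = -2.0064e9 / t**3 + 1.9018e6 / t**2 + 0.24748e3 / t + 0.237040
    else:
        raise ValueError("CCT outside the CIE daylight range")
    # The daylight locus parabola in CIE xy
    y = -3.000 * x * x + 2.870 * x - 0.275
    return x, y

# The three ACR preset temperatures
for cct in (5500, 6500, 7500):
    print(cct, daylight_xy(cct))
```

The green offset being neutralized by the Tint value is exactly the gap between this locus and the Planckian curve at the same CCT.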

Aside from possible deviations in the chromaticities of natural daylight, we have seen this +10 value to be a reasonable assumption for one camera, whereas it was found to be persistently off target and should be more like -10 for another camera (referring to different camera models).

Hence there can be a need to calibrate the ACR arithmetic to the camera/sensor, for example by creating one's own Presets in ACR for the various temperatures but with a different offset for Tint. Or by just keeping this Tint offset in mind when starting to edit with camera AWB as shot.

Adjusting to taste, warmer or colder, is typically easier when the green/magenta balance is already somewhat in the ballpark.