Hey, thanks for the tip. Do you have WIN7? Mine was ok until I went to WIN7.

Something must be wrong. It darkens and desaturates the image far more than I have ever seen on an image prepared for the web. If I modified my image to look good in this color space, I hate to think what it would look like on the web. I would need to brighten maybe a whole stop and push the saturation way up.

My cal settings are

White point D65
Gamma 2.2
150 cd/m^2
Monitor default contrast.

I have sometimes used 120 for brightness which helps for printing, but I had a custom print profile made that matches a screen brightness of 150.

ben egbert wrote:
I have sometimes used 120 for brightness which helps for printing, but I had a custom print profile made that matches a screen brightness of 150.

A printer profile has two roles in a color-managed workflow: 1) fitting the file values to the gamut of the printer's paper and ink, taking into account mechanical variables; and 2) soft proofing on the monitor to simulate how the image will look on the print.

The limiting factor when printing is the gamut the paper/ink combination can create. It is what it is; it is better on some printers than others, and it is a DIFFERENT SHAPE than the gamut of RGB monitors.

What do I mean by shape? Color-managed color is mathematically modeled and manipulated on the axes of the Lab coordinate system. The gamut of Lab was defined by testing in the 1930s as the range of color a "standard observer" (i.e., an average human with no color vision defects) can detect.

The ColorSync utility in OSX allows comparison of gamuts as 3D wireframes. Here's the comparison of my iMac's calibration profile and the profile for HP glossy paper on the 8/C printer next to it, seen from the "top" and "bottom" of the color space along the "L" axis:

The areas where the screen gamut hangs outside are colors the monitor can display with more saturation than the printer can match. The yellows and greens hanging outside the monitor's gamut are colors the printer can reproduce "better" than you'll ever see them on your screen, in absolute terms of color you can see.

That physical difference, and the fact that the printer can reproduce some colors better than a monitor, is the reason you don't want your default monitor calibration to match the printer. Note the direction: it's the monitor that gets adjusted to match the print, not the other way around.

The advantage of a wide gamut monitor is that it can display more of the printer's gamut accurately. For example, here's the same printer profile compared to AdobeRGB in 3D, from the top and bottom of the L axis (the north and south poles of the 3D color space):

There are still a few colors the printer can print with more saturation than AdobeRGB can display accurately.

Most people starting out think in opposite terms and come to FM asking, "How can I get my printer to match my screen?" The answer is, "You can't." The printer's gamut is what it is. You can't increase the maximum saturation of the color it will print; you are just messing with the balance and rendering of the less-than-100% percentages of CMYK ink the printer can lay down on the paper.

If you have a less expensive monitor or laptop, you aren't seeing how the file will look when printed because of the limits of the monitor, and you are more likely to make poor, uninformed decisions simply because you are flying in a cloud on instruments.

What calibrating your monitor does is balance the RGB pixels so 255,255,255 and 128,128,128 and 16,16,16 all look neutral. But neutral perceptually is a moving target because the brain adapts to expectations. If you calibrate the monitor to a D50 / 5000K white point, which was common in graphic arts in the 90s, the WB and color will look the same as with a D65 white point because your brain will adapt your color perception. But if you take a file edited on a D50 monitor and display it on a D65-calibrated one, it will look different. That was the source of the Mac vs. PC debates in the early days. Macs used a "paper white" 5000K white point, while PC monitors were usually uncalibrated with a native WP around 9000K. The gamma on early Mac screens was adjusted to 1.8 vs. 2.2 on a PC.

What sorted things out was everyone agreeing to the same standards for viewing files on screen: D65 with 2.2 gamma.

What sorted things out in commercial offset publishing was all printers and color separators proofing using SWOP or other standard inks and papers and adjusting press profiles from that standard baseline.

But there is no standard baseline for digital printing because the gamut of each printer is different, and it changes every time a different paper stock is used.

If a printer is sent a 255,0,0 RGB file value, the color management engine in the printer (printer manages color) or in Photoshop will convert it to M=100% + Y=100%. How that looks will vary between printers because they use different pigments. So one part of the conversion is mapping the extremes of the file values to the limits of the printer's gamut.
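The naive core of that conversion can be sketched in a few lines of Python. This ignores profiles entirely and just inverts the channels (the helper name is mine, and real color management is far more involved than this), but it shows why pure red lands on 100% M + 100% Y:

```python
def rgb_to_cmy_percent(r, g, b):
    """Naive device conversion: invert 0-255 RGB into 0-100% CMY ink.

    Real color management converts through an ICC profile; this toy
    only illustrates why 255,0,0 red maps to 100% Magenta + 100% Yellow.
    """
    c = (255 - r) / 255 * 100
    m = (255 - g) / 255 * 100
    y = (255 - b) / 255 * 100
    return round(c), round(m), round(y)

print(rgb_to_cmy_percent(255, 0, 0))  # pure red -> (0, 100, 100): M + Y only
```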

The other part is getting the less saturated colors looking "perceptually" correct. If you have a 128,128,128 gray in the file, you calibrate your monitor to make it look gray. What the printer profiling process does is print that 128,128,128 gray and figure out why it isn't being printed as gray when color management is turned off.

If you use the "North America Web / Internet" selection in the Color Settings preferences, Photoshop inserts "SWOP Web Coated" as the default CMYK space. If you go to the color picker and enter 128,128,128 as the RGB coordinates, you'll see 52% C, 43% M, 43% Y, 8% K as the "recipe" for "SWOP middle gray". Note there is more Cyan required than Magenta or Yellow to achieve "Gray Balance".

Now go back to Color Settings, change the CMYK default to "U.S. Sheetfed Coated", and then look at the CMYK equivalents for 128,128,128 RGB gray. They change to 48%, 37%, 37%, 5%. The balance of C vs. Y/M is still unequal, and the values are lower, meaning less ink is needed to create the same shade of gray on a sheetfed vs. a web press.

The difference? The paper brightness and absorption characteristics. What is assumed, but not obvious, with those presets is that the papers and inks used to print the targets which created those profiles were SWOP and Sheetfed standard papers as defined in printing industry specs. It was the establishment of standards like SWOP that allowed an advertisement for Coke to wind up looking "Coke Red" regardless of the press it was printed on.
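As a quick sanity check on those two recipes, here's a small Python sketch that totals the ink coverage quoted above. The percentages come straight from Photoshop's color picker as described; only the tiny helper is invented:

```python
# CMYK "recipes" for 128,128,128 RGB gray under the two press profiles,
# as read from Photoshop's color picker (values quoted above).
swop_gray = {"C": 52, "M": 43, "Y": 43, "K": 8}
sheetfed_gray = {"C": 48, "M": 37, "Y": 37, "K": 5}

def total_ink(recipe):
    """Total ink coverage: the sum of the four separations."""
    return sum(recipe.values())

# Sheetfed paper needs noticeably less ink for the same perceived gray:
print(total_ink(swop_gray), total_ink(sheetfed_gray))  # 146 127
```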

All of this is obvious to me because I worked in printing and started dealing with color management in the mid-70s, long before digital. When you print, you need to manage the color backwards from the press.

First you print a target with known values: max 100% CMYK (as much ink as the printer can put on the paper without it dripping off onto the floor), neutral R=G=B patches, and lighter color combinations.

The printed sheet is then evaluated with a colorimeter. The 128,128,128 RGB patch will look too red and too light in tone because of pigment impurities, so the profile-creation software adds more Cyan and a bit of Black to the recipe for CMYK middle gray. Since the paper and inks are different for web vs. sheetfed, the recipes for gray and all the other less-than-max % colors wind up different.
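A toy version of that gray-balance step might look like the following. The measured values, step sizes, and one-step logic are all invented for illustration; real profiling software solves this across hundreds of patches at once:

```python
def gray_balance_step(measured_rgb, recipe):
    """One invented step of gray-balance correction: if the printed gray
    patch measures reddish, add cyan; if it measures too light, add black.
    recipe is (C, M, Y, K) percentages."""
    c, m, y, k = recipe
    r, g, b = measured_rgb
    if r > (g + b) / 2:        # reddish cast -> more cyan
        c += 2
    if (r + g + b) / 3 > 128:  # prints too light -> a touch more black
        k += 1
    return (c, m, y, k)

# printed 128-gray patch measured slightly red and light:
print(gray_balance_step((140, 130, 128), (43, 43, 43, 5)))  # (45, 43, 43, 6)
```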

The profiling process is the same for inkjet and photo printers. When you get a custom profile made, you are asked to print a standard target, which is sent off and read on the same type of analyzer we used at the printing plant to profile our large presses.

The printer profile is the road map for shifting the colors. Printing 255 Red as 100% Y+M is pretty much a no-brainer on any printer and paper. Where the profile is critical, and varies with paper, is getting the neutral grays and less saturated skin tones looking the same on the print as on the screen PERCEPTUALLY.

When you put a print next to a monitor and compare them, they will never match in absolute terms in the most saturated colors. But go back up and look at the 3D wireframes. See how most of the colors overlap? The overlap in the wireframe render means those colors are the same in both gamuts and don't change much between screen and print.

So if you have a photo of a woman in a purple dress and print it, the face will look similar on print and screen, but you should expect the color of the dress to change. Why? It is physically impossible for the printer to create a purple that is as saturated and bright as your RGB monitor can display.

Adobe, understanding that monitor and print can never fully match, created the "Soft Proofing" view mode to allow the user to simulate how the colors and contrast will change when an image is printed. It is done by inserting the printer's profile into the loop between the RGB values in the file and the RGB calibration profile, making the image look perceptually correct within the monitor's gamut.
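A crude way to picture what soft proofing shows is to clamp colors that exceed a hypothetical printer saturation limit. Real soft proofing converts file RGB through the printer profile and back through the monitor profile; the `printer_max_sat` parameter below is purely an invention for illustration:

```python
def soft_proof(rgb, printer_max_sat=0.75):
    """Toy stand-in for soft proofing: desaturate any color whose
    HSV-style saturation exceeds an invented printer gamut limit."""
    mx, mn = max(rgb), min(rgb)
    if mx == 0:
        return rgb
    sat = (mx - mn) / mx
    if sat <= printer_max_sat:
        return rgb  # "in gamut": unchanged on screen
    # pull each channel toward the max to bring saturation down to the limit
    scale = printer_max_sat / sat
    return tuple(round(mx - (mx - v) * scale) for v in rgb)

print(soft_proof((255, 0, 255)))    # saturated purple gets dulled
print(soft_proof((200, 180, 170)))  # a skin tone passes through unchanged
```

The out-of-gamut warning is essentially flagging the pixels that fail that saturation test.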

What that tells me, because I understand how to interpret it, is that the face will print OK and not change, because all the colors that are not grayed out by the out-of-gamut warning are the same in absolute terms in the printer and screen gamuts.

What would happen if I soft proofed on a wider gamut monitor? It wouldn't change the fact that the purple background looks duller on the screen, because that's due to what the inks can print. The only difference is that the monitor would be able to display that fact more accurately. I'd see the same color shift, but less of it would be grayed out by the "out of gamut" warning, which tells me what areas my monitor isn't simulating accurately.

What will happen when I print the file? Photoshop will send 255,0,255 file values in the purples, and the printer will put as much C + M ink there as it can, but the result will only look about as saturated as the screen would look if I were to reduce saturation in the colors that triggered the out-of-gamut warning during soft proofing.

The printer can't match the normally calibrated screen gamut because it can't physically create the same purples. What injecting the printer profile does is reduce the output of the RGB drivers on the monitor to degrade the image to look as "bad" as the printed image will look.

Why do that? Because the point of the soft proofing exercise isn't to make the image look its best on screen; it is to PREDICT THE OUTCOME OF THE PRINTING WITHOUT NEEDING TO WASTE INK AND PAPER.

At best, soft proofing is a rough simulation of the actual results. As with most situations where actual results don't match expectations, your brain will learn to correlate what is seen on screen in soft proofing mode in the out-of-gamut warning areas with how much those areas will shift in color when printed.

But more importantly, the soft proofing process changes the CONTRAST of the screen to simulate the lower contrast seen when the image is printed. You can't increase the color saturation of the inks in the printer, but you can change the contrast within the file to improve its appearance PERCEPTUALLY by tweaking the contrast range of the file.

Again, because I managed reproduction workflows as my profession, I'm aware of what makes images change perceptually when printed and what can be done to fix it. The change is due to the difference in overall contrast between white and black. The white of the paper doesn't seem as bright as the white on a monitor, and the blacks don't seem as dark, because there are physical limits to how much CMYK ink can be stacked on top of the paper.

You can't control the white of the paper except by changing paper types, or the max density the printer can print, to change the appearance of the print perceptually.

The print should now better match the screen. The problem wasn't the color gamut in a photo like that, because all the colors seen on screen fit into the printer's color gamut. The problem was the print has less inherent contrast than the screen.

The takeaway? You need to edit contrast differently for printing than for display on screen. After getting the highlight and shadow values correct per the detail seen, you need to adjust the midtones until they "look right" when printed.

Next, take the file that looks good on screen and soft proof it. You should see the image change and show the same too-dark results you encountered when printing the file. Looking at the file in soft proof mode, open Levels and adjust the middle slider until it looks "right" on the screen. Save it as a copy and print it. The print should better match what is seen on screen with soft proofing off.
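For anyone curious what that middle slider actually does: it is a gamma adjustment with the black and white points pinned. A minimal Python sketch (the 1.2 value is just an example tweak, not a recommendation):

```python
def levels_midtone(value, gamma):
    """Photoshop's Levels middle slider as math: a gamma curve that
    pins 0 and 255 and moves the midtones. gamma > 1 lightens them."""
    return round(255 * (value / 255) ** (1 / gamma))

print(levels_midtone(128, 1.2))                          # 144: midtones lifted
print(levels_midtone(0, 1.2), levels_midtone(255, 1.2))  # 0 255: endpoints fixed
```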

If you take the time to do that, Ben, you may realize you shouldn't be changing the brightness level of the base calibration of your monitor to match the contrast of your print results; you should calibrate the screen so the image looks normal and then SOFT PROOF TO SIMULATE HOW THE CONTRAST WILL CHANGE WHEN PRINTED. Then, while in soft proofing mode, make a midtone tweak, sharpen, etc., then save as a new file with the suffix "_printer".

3) A screen copy, resized and converted to 8-bit sRGB first, then tweaked and sharpened to optimize appearance.

4) A print copy, resized and converted to however you send the file to the printer (e.g., Level 10 JPG), tweaked IN SOFT PROOFING MODE USING THE PRINTER PROFILE and sharpened to optimize appearance.

When I print in different sizes, I start with the master edit in step 2, resize to the print dimensions x output resolution (e.g., 2400 x 3000 for a 300ppi 8 x 10, 1200 x 1800 for a 4 x 6), then sharpen and do the final tweaks in soft proofing.

I edit different size prints individually because I know print size and viewing distance affect viewer perception of them. That's the piece of the reproduction puzzle I take for granted due to my background that others miss.

Reproducing an image is, for the most part, an exercise in "faking it to make it seem real". What looks real on a print is different from what looks real by eye because the brain can't interpret the clues the same way. Variables like image size and viewing distance are not factors when looking at a landscape in person, but they are when looking at a photograph of the landscape.

The learning curve in reproduction is discovering what technical factors, such as shifting the midtones and sharpening, are needed to convert what the camera captured into what the viewer will be able to see at that viewing distance from the print. Beyond reading distance the brain shifts gears and relies less on the texture on the front of objects and more on their overall contrasting shape against the background to identify them.

If you look at a photo of a forest and a sailboat on the ocean from across the room, you'll identify and relate to the sailboat faster because it has large contrasting geometric shapes. That dynamic is why this pretty picture works. The contrast of the V tree line and the ^ mountains with the sky creates geometric shapes that are easily seen and processed.

But stand up and walk across the room and look at it, or apply the Gaussian blur test I did on your other images. How does it work then? That will predict how well it will work printed 30 x 40 and hung over the fireplace...

But all things considered, nowadays it's simpler and looks better to just hang a wide screen TV over the fireplace and run a slideshow on it. Then you don't need to worry about the print matching the monitor.

Thanks for the technical explanation. I am a retired mechanical engineer who spent the last 10 years of my career designing printers. They were monochrome, but I still understand some of this.

My prints and screen have a wonderful match, perhaps because my color sensitivity is not as well developed as others'. The issue I always had with printing was brightness. The print was always too dark. I had a custom profile made for my paper that lightens the print to match brightness when I use 150 for the monitor.

The real issue for folks who post on the web is that there are so many standards, or a lack of standards.

Not everyone uses a calibrated monitor or views in a color-aware browser. I know better but still use Chrome because of its speed. But I switch to Safari for critical viewing.

Worse yet, people use monitors that change drastically with viewing angle. My laptop for example.

I have seen images that are garish and oversaturated when viewed on my WG monitor in Chrome but which look fine in Safari. I have been told the reason for this is that the out-of-gamut colors are randomly assigned in a non-color-aware system. Whatever the reason, I have seen it.

But even for a calibrated system, you have values to choose for calibration. It seems to me that D65 and 2.2 are pretty settled, but I have never seen a standard for brightness.

If I use 120, as I once did to get decent prints, my web images would be too bright. This is why I changed to 150 and got the special print profile. It was to allow setting my monitor to 150.

Not knowing what other users were using for brightness, I had to guess at this. But I knew that 120 was probably on the dark side and that people who used defaults might be up in the 180 range, so I chose 150 as a sort of compromise.

I never had trouble viewing at 120. I work in a dark basement with one 40-watt screw-in fluorescent bulb overhead behind me. I do critical viewing with an Ott lamp.

So what would you hazard is the proper brightness for post processing for web viewing?

Edit: by the way, I used Levels to compensate for print brightness prior to getting the custom profile.

We are getting pretty far off topic here, but I want to keep going on this since I have an expert audience.

When Auntipod suggested that I soft proof with Internet sRGB, I tried it and got terrible desaturation. I thought something was drastically wrong. For one thing, my sRGB conversions usually look very close to my ProPhoto PSD files. I sometimes lose a bit in the red/yellow sunsets, but never a wholesale desaturation.

Just now I did an experiment. I converted a full-size 16-bit ProPhoto RGB image to sRGB. The change was barely noticeable. Then when I soft proofed this, there was no change. But if I soft proof it while still in ProPhoto, the desaturation is huge.

I think I have the answer and want to check if it's correct.

The soft proof needs to be done after converting to the final color space to be of any use. When I soft proof a ProPhoto RGB file using Internet sRGB, I am shown what the image will look like if posted on the web.

I need to first convert to sRGB, then soft proof, to see what the web display will look like.

I was not converting the color space before soft proofing. That's the reason I had such a big change.

The fact monitors aren't calibrated isn't as big a factor as you might think. TVs are not calibrated either and that doesn't create a problem.

When sitting in front of a monitor, your brain adapts color and contrast perception to that gamut. If the screen is uncalibrated and has a slight green cast, your brain, expecting the content it sees on the screen to be neutral, shifts perception of the screen gamut to make the stuff it sees on it neutral. It's only when the error is gross, or the ambient lighting is significantly different in white point, that a calibration error will be noticed.

There is a standard on the web. As mentioned, back in the 80s before the Internet, Macs were 5000K and 1.8 gamma to simulate a printed sheet of paper. MicroSloth took a different approach and standardized on the physical characteristics of a CRT gamut, resulting in a huge difference in the appearance of files edited on Macs and PCs when people started sharing them over the net in the mid-90s on web pages.

Since PCs outnumbered Macs, the default standard became sRGB, which is more similar to the native CRT gamut than Apple's "paper white" perceptual-match approach was. The result?

Now everyone sees similar color on their screen if the images are converted to sRGB. But the Catch-2.2 of the de facto sRGB standard is that the 2.2 gamma, which is part of the sRGB and other RGB monitor standards, doesn't match the lower contrast of a print, which is what Apple was trying to match with the lower 1.8 gamma on the screen.

Why didn't 1.8 gamma become the standard? Because while it matched the contrast of ink on a printed page, it doesn't look normal on a monitor. That's the crux of the screen vs. print match problem. Ink on paper has lower contrast than the 2.2 calibration standard of the monitor. You can change the gamma of the screen to match the ink and paper (which is what soft proofing does), but you can't increase the contrast of the print. All you can do is shift the midtones, which affects how what is between black and white is perceived on the print.
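The practical difference between the two gammas is easy to see with a simple power-law model (this ignores the linear toe segment of the real sRGB curve):

```python
def displayed_luminance(value, gamma):
    """Relative luminance (0-1) a monitor produces for an 8-bit value
    under a pure power-law gamma (sRGB's linear toe is ignored)."""
    return (value / 255) ** gamma

mid = 128
mac_18 = displayed_luminance(mid, 1.8)  # old Mac "paper white" gamma
pc_22 = displayed_luminance(mid, 2.2)   # CRT/sRGB standard gamma
# same file value, but the 2.2 display renders the midtone darker:
print(round(mac_18, 3), round(pc_22, 3))  # 0.289 0.22
```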

Unless one has an understanding of reproduction variables, it's not easy to look at an image and determine whether the overall difference in appearance perceptually is caused by color variance (gamut mismatch) or by a difference in contrast. The brain adapts perception of both to the overall contrast range of whatever is being looked at (print or screen) individually, and when both are seen at the same time, the one with greater contrast is used for comparison.

That problem is exacerbated with outdoor digital captures because, SOOC, a file exposed for highlight detail has a loss of shadow detail and midtones darker than seen by eye. That's a contrast problem. The photographer lowers contrast so it looks good on screen, but then when printed it gains contrast again perceptually due to mechanical variables of printing that a screen image doesn't have: overlapping stochastic dot patterns and ink spread/absorption. Inkjet printers have different mechanical variables than laser printers, which fuse toner to the paper, or photo printers, which similarly expose the paper with lasers or LEDs. So file values must be tailored to how the printer variables affect contrast. That's what the check box for dot gain does. It anticipates the midtones will wind up looking too dark due to dot gain, so part of the conversion is a reduction of the midtone contrast.
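That dot-gain behavior, zero at the endpoints and largest in the midtones, can be sketched with a single-parameter model. The parabolic form is my own simplification for illustration; real presses are characterized by measuring printed targets, not by a formula this simple:

```python
def apply_dot_gain(coverage, gain_at_50=0.10):
    """Toy dot-gain model: gain is zero at 0% and 100% coverage and
    peaks in the midtones. gain_at_50=0.10 reproduces a 40% film dot
    printing near 50% on press."""
    c = coverage / 100
    gain = 4 * gain_at_50 * c * (1 - c)  # parabola: 0 at ends, max at 50%
    return round((c + gain) * 100)

print(apply_dot_gain(40))                      # 50: the 40% -> 50% shift
print(apply_dot_gain(0), apply_dot_gain(100))  # 0 100: endpoints unchanged
```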

Back in the mid-70s I did QC for production of National Geographic maps. Tint values on the maps depicting water depth had to be exact, within 2% of the specified values. But to wind up with 50% on the press we had to start with a 40% screen when making the films, because each duplication of the master screen changed its value. My job was testing to determine how much it changed. What I learned doing that is that midtone values shift more, as a result of the uncontrollable reproduction variables, than the end points, because at the ends of the tonal range the paper is either completely covered with ink or plain paper. As a result, the amount of variance between the starting screen values and the targets on press varied considerably and was greatest in the midtones.

The same is true in digital images when printed. You are more likely to see the change in the midtones vs. the shadows, where there is max ink coverage, and the highlights, where there is very little. That's why the middle slider in Levels, or pulling the middle of the curve up or down in Curves, affects the way the image looks so much. If an image is captured with a neutral (or normal for the scene context) WB and a reasonably full range of detail, the only PP correction needed is a tweak to the midtone values to get it to match what is seen by eye.

It's a trick that makes the viewer focus more attention on the foreground and not notice the loss of shadow detail elsewhere that they might in a wide shot without well-exposed foreground details. Here's the test scene with dual flash assist, matching the key angle of the flash with the 45/45 angle of the sun-cast shadows:

Cropped tight to eliminate the underexposure caused by the sensor range the flash didn't reach, can you tell it was flash vs. direct sunlight? No, because it has all the same clues as a sunlit shot seen by eye.

On overcast days there is the opposite problem. The contrast of the lighting is so low that the camera sensor records the midtones and shadows lighter than perceived by eye, and the resulting image looks "flat".

That's the opposite of the correction I'd make in a sunny scene, where the exposure set for highlight detail would render the midtones darker than seen by eye. For a sunny scene I'll adjust brightness and fill in the RAW file, which has the same net effect on the midtones in RAW as the middle slider movement in Levels: making the overall tonal range the camera was able to capture more closely match my impression by eye.

cgardner wrote:
The fact monitors aren't calibrated isn't as big a factor as you might think. TVs are not calibrated either and that doesn't create a problem.

When sitting in front of a monitor, your brain adapts color and contrast perception to that gamut. If the screen is uncalibrated and has a slight green cast, your brain, expecting the content it sees on the screen to be neutral, shifts perception of the screen gamut to make the stuff it sees on it neutral. It's only when the error is gross, or the ambient lighting is significantly different in white point, that a calibration error will be noticed.

There is a standard on the web. As mentioned, back in the 80s before the Internet, Macs were 5000K and 1.8 gamma to simulate a printed sheet of paper. MicroSloth took a different approach and standardized on the physical characteristics of a CRT gamut, resulting in a huge difference in the appearance of files edited on Macs and PCs when people started sharing them over the net in the mid-90s on web pages.

Since PCs outnumbered Macs, the default standard became sRGB, which is more similar to the native CRT gamut than Apple's "paper white" perceptual-match approach was. The result?

Now everyone sees similar color on their screen if the images are converted to sRGB. But the Catch-2.2 of the de facto sRGB standard is that the 2.2 gamma, which is part of the sRGB and other RGB monitor standards, doesn't match the lower contrast of a print, which is what Apple was trying to match with the lower 1.8 gamma on the screen.

Why didn't 1.8 gamma become the standard? Because while it matched the contrast of ink on a printed page, it doesn't look normal on a monitor. That's the crux of the screen vs. print match problem. Ink on paper has lower contrast than the 2.2 calibration standard of the monitor. You can change the gamma of the screen to match the ink and paper (which is what soft proofing does), but you can't increase the contrast of the print. All you can do is shift the midtones, which affects how what is between black and white is perceived on the print.

On overcast days there is the opposite problem. The contrast of the lighting is so low that the camera sensor records the midtones and shadows lighter than perceived by eye, and the resulting image looks "flat".

That's the opposite of the correction I'd make in a sunny scene, where the exposure set for highlight detail would render the midtones darker than seen by eye. For a sunny scene I'll adjust brightness and fill in the RAW file, which has the same net effect on the midtones in RAW as the middle slider movement in Levels: making the overall tonal range the camera was able to capture more closely match my impression by eye.

Unless one has an understanding of reproduction variables, it's not easy to look at an image and determine whether the overall difference in appearance perceptually is caused by color variance (gamut mismatch) or by a difference in contrast. The brain adapts perception of both to the overall contrast range of whatever is being looked at (print or screen) individually, and when both are seen at the same time, the one with greater contrast is used for comparison.

That problem is exacerbated with outdoor digital captures because, SOOC, a file exposed for highlight detail has a loss of shadow detail and midtones darker than seen by eye. That's a contrast problem. The photographer lowers contrast so it looks good on screen, but then when printed it gains contrast again perceptually due to mechanical variables of printing that a screen image doesn't have: overlapping stochastic dot patterns and ink spread/absorption. Inkjet printers have different mechanical variables than laser printers, which fuse toner to the paper, or photo printers, which similarly expose the paper with lasers or LEDs...

In the end, the biggest issue is using cheap monitors with viewing-angle issues. I can get any degree of brightness or saturation desired with a fractional head movement when viewing my laptop. This is not something I can adapt to without using a head restraint.

For monitors with wide viewing angles, I agree that most times the eye adapts and the issues are small. But I have seen the garish color problem at FM on other people's images, and a few times on mine. My color almost always looks oversaturated when viewed in Chrome. Should I then process for Chrome? Then the images would be flat for wallpaper or prints.

Next, I would suggest that brightness is a web issue, though perhaps a minor one. I have no printing issues.

For people who use wide-gamut calibrated systems, there is always the nagging doubt that we are showing work on the web that we are not able to proof as viewed by others. Soft proofing does not show me what Chrome shows, only what a color-aware browser will show. This would not be an issue, but I see with my own eyes a garish difference between browsers.

ben egbert wrote:
In the end, the biggest issue is using cheap monitors with viewing-angle issues. I can get any degree of brightness or saturation desired with a fractional head movement when viewing my laptop. This is not something I can adapt to without using a head restraint.

For monitors with wide viewing angles, I agree that most times the eye adapts and the issues are small. But I have seen the garish color problem at FM on other people's images, and a few times on mine. My color almost always looks oversaturated when viewed in Chrome. Should I then process for Chrome? Then the images would be flat for wallpaper or prints.

Next, I would suggest that brightness is a web issue, though perhaps a minor one. I have no printing issues.

For people who use wide-gamut calibrated systems, there is always the nagging doubt that we are showing work on the web that we are not able to proof as viewed by others. Soft proofing does not show me what Chrome shows, only what a color-aware browser will show. This would not be an issue, but I see with my own eyes a garish difference between browsers.

The solution to the problem all those monitor variables create is to learn to base reproduction decisions "by the numbers" rather than on what is seen by eye on the monitor.

The logic is this:

Absent a color cast in the light source, a photo will look "normal" SOOC if it is recorded with Custom WB off a gray card, 255 specular highlights, 250 solid highlights, middle gray in the 128 range, and shadow detail around 20-30, with 0-detail only in voids where detail isn't seen by eye either.

Those criteria combine to define "optimal" capture conditions. Can you always achieve them? No. But when you can't, knowing what varies from optimal tells you how to compensate for the less-than-optimal TECHNICAL capture by manipulating the sub-par capture PERCEPTUALLY to trick the brain of the viewer into thinking it is normal.
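Those numeric criteria are easy to turn into a mechanical check. In the sketch below the sample readings are invented, and the tolerance bands around each target are my own guesses; only the target numbers themselves come from the criteria above:

```python
samples = {                      # hypothetical eyedropper readings
    "specular_highlight": 255,
    "solid_highlight": 250,
    "gray_card": 129,
    "deep_shadow": 24,
}

def capture_report(s):
    """Check a capture 'by the numbers' against the stated targets.
    The tolerance ranges here are illustrative, not official."""
    return {
        "specular_255": s["specular_highlight"] == 255,
        "highlight_detail": 245 <= s["solid_highlight"] <= 252,
        "middle_gray_128ish": 120 <= s["gray_card"] <= 136,
        "shadow_detail_20_30": 20 <= s["deep_shadow"] <= 30,
    }

print(capture_report(samples))  # all True: an optimal capture
```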

By setting custom WB and exposing my highlights just under clipping in the brightest area of the scene, I get two-thirds of the variables needed for a normal result SOOC perfect. The uncontrollable variable is scene contrast vs. sensor range, which determines how the shadows will be recorded SOOC.

Could I bias exposure to make the midtones and shadows correct? Yes, but that's robbing Peter to pay Paul because it blows the highlights. Instead I either use flash in the foreground, a solution for most photos I take outdoors, or put the camera on a tripod and take two exposures: one for highlights under clipping, the second +4 stops to ensure I have recorded the darkest non-void shadows with detail well above the noise threshold of the sensor.

Being anal-retentive about recording a full tonal range and using a neutral baseline for color at capture doesn't make what I capture with the camera any more or less "creative"; it simply improves the delivery of the message I create by giving me the raw materials to make it look "normal" regardless of how it is displayed. My photos are boring because I shoot boring stuff most of the time.

By using targets in test images, I can easily measure whether or not the capture was optimal with nothing more than the eyedropper tool in ACR and Photoshop.

Clicking on a gray card in the image and seeing whether it changes on screen tells me what the WB at capture was relative to neutral, giving me a baseline I can trust to adjust from. Even if my eyes are telling me something different, I trust and adjust from that baseline.

After getting the WB neutral, the colors on the MacBeth chart tell me, visually and by measurement, how close the camera came to recording perceptually important colors like skin tones, foliage, and sky accurately, more objectively than I could judge by looking at the same objects in the photo without the target reference beside them.

But in the end it's not accuracy I want; it's what looks best. The technical "by the numbers" baseline is just a way to control the start of the process from the same place. It's the jumping-off point for evaluation by eye, not what controls the final outcome.

Profile-based color management was designed by smart, clever people to make using it a no-brainer. Back in the days of shooting JPG, if you used Custom WB and the camera's clipping warning and histogram to optimize the capture, you could bypass the computer (and your faulty judgment) completely, put the CF card in the printer, print directly letting the printer manage color, and get a normal-looking print. That happened because the WB and exposure were optimal and the printer converted the optimal file values optimally via its internal profile for the paper being used.

Shooting RAW and trusting what you see by eye on the monitor adds more variables. What happens when you add more variables to a process? The odds of it varying from optimal increase.

If the results of an optimal capture wind up less than optimal, it's not a "hardware" problem but rather a "meatware" problem.

I almost always shoot for blending, making sure I have highlights and shadows covered. I even shoot in burst mode to assure rapid capture and reduce the effects of subject motion. Hand blending two images sometimes works because I can make sure a single image is used in areas with motion.

I often find that even cloud motion, and certainly vegetation or water, spoils blends unless it is really still.

Using ND grads, including a reverse grad, also helps with DR, but I can often see the blend line now.

I usually don't have any shadow-area noise in my ISO 100 shots from my 1DS Mk3 when using CS6. I am gravitating towards using single exposures, preferably without grads. Now I am bracketing at 1/3 stops and typically choose the brightest exposure that does not have any clipping.

I have a MacBeth color checker on order to replace my 4-year-old checker. I will make a new custom color profile for my camera and use the device in the field to get WB.

I agree that for a color-challenged person like myself, I need to go by the numbers. I don't do people, so skin color is not an issue, but landscapes still need some help.