One common technique for boosting color saturation is manipulating the a and b channels in Lab color mode, for example with Curves in Photoshop. It seems to be a question of personal preference whether people use

A) an S-curve for this or

B) just move the edges of the adjustment panel inward to get a straight line.

I would like to know how many of you are following "A" or "B" (or "C": use completely different approaches to increasing color saturation), and most importantly on what rationale you base your curves, lines, values etc.

To increase colour saturation using the a and b channels of Lab, you must move each curve the same distance in from the top and bottom corners, keeping it linear (no S, no other shape), and both curves must pass through the middle, creating a uniform, symmetrical "X". Anything else will muck up your colour balance big time. These are very sensitive adjustments with large impacts.
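As a numerical sketch of that symmetric linear adjustment (assuming Photoshop's 8-bit encoding, where a and b are stored 0-255 with 128 as neutral; `boost_ab` is a hypothetical helper for illustration, not anyone's actual tool):

```python
def boost_ab(value, strength=1.25):
    """Apply a symmetric linear stretch to an 8-bit a or b channel value.

    The curve pivots on 128 (neutral), so neutrals stay neutral; moving
    both endpoints in by the same amount is equivalent to one slope factor.
    """
    stretched = 128 + (value - 128) * strength
    return max(0, min(255, round(stretched)))  # clip to the 8-bit range

print(boost_ab(128))                  # neutral stays 128
print(boost_ab(148), boost_ab(108))   # 148 -> 153, 108 -> 103: symmetric
```

Because the same pivot and slope apply above and below 128 in both channels, the two curves form the uniform "X" described above; any asymmetry would move the neutral point and shift the colour balance.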

All that said, this is most often a needless process. For 99% of all practical purposes, it will do just fine to increase the saturation of an RGB file by leaving it in RGB colour space and using the conventional tools provided in Lightroom, Camera Raw or Photoshop for increasing saturation globally or selectively. They are more than adequate.
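For comparison, a plain RGB-side saturation boost needs no Lab round trip at all. A minimal sketch using Python's standard `colorsys` module (a stand-in for the dedicated saturation tools mentioned, not their actual algorithm):

```python
import colorsys

def boost_saturation(rgb, factor=1.3):
    """Scale the HLS saturation of an RGB triple (components in 0.0-1.0)."""
    h, l, s = colorsys.rgb_to_hls(*rgb)
    s = min(1.0, s * factor)  # clamp so we never overshoot full saturation
    return colorsys.hls_to_rgb(h, l, s)

# A dull red gains saturation; neutral gray has s == 0 and is unchanged.
print(boost_saturation((0.8, 0.2, 0.2)))
print(boost_saturation((0.5, 0.5, 0.5)))
```

Note that neutrals are automatically protected because their saturation is zero to begin with, which is one reason a global RGB-side boost is hard to get badly wrong.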

Wouldn’t a symmetrical S-curve that hits the middle keep the balance, too?

My experience is that most images which need a boost in saturation have values in a and b that are not far from the middle. To stretch those near-neutral values I have to build a curve that has a steeper slope where those values sit. In theory I could move the lines' beginnings as far in as the most extreme value in one of the four colors (that is, stop just before clipping something off). So the most extreme color value determines the manipulation in both channels, on both sides of the middle. Of course these curves are likely to have such an impact that we won't like the image afterwards anymore, but on this basis we are free to decrease the opacity of our curves layer.

My guess is that most people prefer to make this kind of adjustment with actions that move the four end points in, so they don't have to go into each channel every time, click and drag or type in values four times, check the values, perhaps do some mental arithmetic - and then find out that the change was not what they had hoped for.

Anyway, what made me start this thread was to find out the rationale behind the values people chose (maybe just values they start the process with, something based on profound experience?).

Now about the S-curve: It does not seem to harm the image too often, probably because most A/B values are close to the middle where the S-curve is hardly any different from a straight line. Only relatively extreme values (in the unaltered image file that is) will experience a different adjustment: they will not be boosted as much. But, and this might be a good reason to use an S-Curve in the first place, they also will not be cut by moving the ends too far as might happen in the "straight line workflow".
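That trade-off can be made concrete with a toy comparison (a tanh-shaped S-curve, chosen purely for illustration - Photoshop's Curves are splines, not tanh):

```python
import math

def linear_curve(v, slope=1.4):
    """Straight-line stretch pivoting on the neutral point 128; clips at the ends."""
    return max(0.0, min(255.0, 128 + (v - 128) * slope))

def s_curve(v, shape=1.2):
    """A tanh-shaped S-curve through (128, 128) that reaches 255 exactly at 255,
    so extreme values are compressed instead of clipped."""
    x = (v - 128) / 127.0
    return 128 + 127.0 * math.tanh(shape * x) / math.tanh(shape)

# Near the middle the two behave almost identically; near the ends the
# S-curve compresses where the straight line clips.
for v in (138, 200, 250):
    print(v, round(linear_curve(v), 1), round(s_curve(v), 1))
```

This matches the observation above: values close to the middle see nearly the same boost either way, while only the relatively extreme values are treated differently - less boosted, but never cut off.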

So the S-curve might work as a more comfortable way of working on a lot of images?

No, it is not just a matter of personal preference - it is technical. ANY departure from linearity will impact colour balance, because you are altering the relationship between green and magenta on the a curve, and between blue and yellow on the b curve, at different luminosity levels depending on where the non-linearity sets in; and if those linear curves do not intersect in the middle, the colour balance of the whole image is thrown off. Lab conversion is a needless and cumbersome complication from the get-go for 99% of the image editing the great majority of us ever need to do. There are many things in Photoshop that don't work in Lab mode, so most often you will need to convert back to RGB, and once you do this your Lab adjustments are baked in and non-reversible, unless you keep a separate duplicate copy with the Lab adjustments converted back to RGB and layered in. Lab has certain specialized uses that justify its inclusion in the Photoshop arsenal, but making simple adjustments to saturation is not one of them, when there are much more straightforward ways of doing this in RGB.

All that said, this is most often a needless process. For 99% of all practical purposes, it will do just fine to increase the saturation of an RGB file by leaving it in RGB colour space and using the conventional tools provided in Lightroom, Camera Raw or Photoshop for increasing saturation globally or selectively. They are more than adequate.

Amen to that! Plus if there is an unattractive color shift, just Fade using Luminosity. Better yet, do all this in the raw processor like ACR or Lightroom.

In addition to all the wonderful discussion points, I believe that any conversion to and from different colour spaces is going to result in conversion/rounding errors, even if at a very minute amount. Therefore this alone may negate any potential slight benefit of increasing saturation via Lab instead of RGB.

Anything else will muck-up your colour balance big time. These are very sensitive adjustments with large impacts.

It's a relief someone else notices this besides me. I used to edit my scanned color negatives in Lab and I liked how it gave me strange, glorious colors very quickly until I noticed after some lengthy tweak sessions I virtually destroyed the natural color constancy and color palette of the original scene to where it now looked cartoonish.

For instance, forest green shrubs/trees lit by golden-hour sunlight shouldn't look cooler with added cyan, but warmer with a bit of added orange. If you aren't careful and aware of the human visual system's adaptive nature to cool and warm colors when editing in Lab space, you can end up re-editing the entire image after taking a break and returning to something butt-ugly that you thought looked great before.

RGB, by its very three-letter description, follows a color sensibility with regard to color constancy that more closely matches the behavior of the rods and cones of our eyes and how humans perceive color changes due to changing light on a scene.

Jay’s worry about conversion/rounding errors made me do a little experiment this morning. I built a Photoshop document filled with neutral gray (128,128,128) and did some conversions back and forth. I'll only report my most significant results here:

sRGB 16-bit > Lab 16-bit > sRGB 16-bit = no error, even after 200 cycles of converting back and forth

Same result in ProPhoto RGB for those who wonder. So I guess with a 16bit workflow there is no need to worry about rounding errors.
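The round-trip result can be reproduced in code. Below is a self-contained sketch of the standard sRGB (D65) to CIELAB conversion and back, in double-precision floats - which is even more forgiving than Photoshop's 16-bit integer encoding, so treat it as an upper bound on how clean the trip can be, not a model of Photoshop's exact engine:

```python
def srgb_to_lab(r, g, b):
    """Convert 8-bit sRGB values to CIELAB (D65), per the standard formulae."""
    def inv_gamma(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = inv_gamma(r), inv_gamma(g), inv_gamma(b)
    # linear sRGB -> XYZ (D65)
    x = 0.4124564 * rl + 0.3575761 * gl + 0.1804375 * bl
    y = 0.2126729 * rl + 0.7151522 * gl + 0.0721750 * bl
    z = 0.0193339 * rl + 0.1191920 * gl + 0.9503041 * bl
    xn, yn, zn = 0.95047, 1.0, 1.08883  # D65 reference white
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def lab_to_srgb(L, a, b):
    """Convert CIELAB (D65) back to 8-bit sRGB floats."""
    fy = (L + 16) / 116
    fx, fz = fy + a / 500, fy - b / 200
    def finv(t):
        return t ** 3 if t > 6 / 29 else 3 * (6 / 29) ** 2 * (t - 4 / 29)
    xn, yn, zn = 0.95047, 1.0, 1.08883
    x, y, z = xn * finv(fx), yn * finv(fy), zn * finv(fz)
    rl = 3.2404542 * x - 1.5371385 * y - 0.4985314 * z
    gl = -0.9692660 * x + 1.8760108 * y + 0.0415560 * z
    bl = 0.0556434 * x - 0.2040259 * y + 1.0572252 * z
    def gamma(c):
        return (12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055) * 255.0
    return gamma(rl), gamma(gl), gamma(bl)

# 200 round trips of neutral gray: the drift stays far below one 8-bit step.
rgb = (128.0, 128.0, 128.0)
for _ in range(200):
    rgb = lab_to_srgb(*srgb_to_lab(*rgb))
print(rgb)
```

With high enough numeric precision the trip is essentially lossless for an in-gamut neutral, which is consistent with the 16-bit result reported above; at 8-bit integer precision the same loop quantizes on every cycle, which is where the rounding errors come from.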

Then I was curious about what exactly happens with luminosity when increasing color saturation in RGB or LAB. Not color shifts, just luminosity. So I took an image and added a layer of +30 saturation in RGB. On four random points with different luminosity and color I measured what happened to the values, then had a look at the individual points as well as at the average deviation. This is my ranking of methods in sRGB according to deviation in luminosity from the greatest (A) to the smallest amount (D):

Interesting: even the last workflow in the list, the one with the least deviation in luminosity, produced some deviation. That being said, I guess that nobody would notice any of the measured amounts of deviation in luminosity in real life, no matter how aesthetically sensitive they are.
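The kind of deviation measured above can be illustrated with a toy experiment, pairing an HLS-based saturation boost with Rec. 709 luma weights applied to the encoded values - a rough stand-in for a luminosity readout, not Photoshop's actual math:

```python
import colorsys

def luma(rgb):
    """Approximate luminosity via Rec. 709 weights on the encoded values."""
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def boost(rgb, factor=1.5):
    """Boost HLS saturation of an RGB triple (components in 0.0-1.0)."""
    h, l, s = colorsys.rgb_to_hls(*rgb)
    return colorsys.hls_to_rgb(h, l, min(1.0, s * factor))

before = (0.8, 0.2, 0.2)
after = boost(before)
# HLS preserves its own L (lightness), but not luma: boosting the
# saturation of this red measurably darkens the weighted luminosity.
print(round(luma(before), 4), round(luma(after), 4))
```

So even a saturation tool that holds its own lightness channel constant still shifts a perceptually weighted luminosity, which is consistent with every workflow in the ranking producing at least some deviation.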

Now for the Lab workflow. I did not try to boost saturation by exactly the same amount as in RGB; actually, I lack the knowledge to relate "+30 saturation" in RGB to the slope of a curve in the a/b channels of Lab. But nevertheless I could see something interesting in my experiment:

I guess I have learned a lesson about the importance of a 16bit workflow.

Well, last but not least, I converted my Lab images 1)–4) back to sRGB: luminosity had changed in all cases. The values are not really comparable to the RGB workflow due to the lack of consistency, but the deviation values were somewhere between the (B) and (C) values.

I will think about what all this means for my workflow. My feeling is that all the measured deviations are much smaller than the difference between my subjects' colors and what I get from my RAW developer as a TIF file. So as long as I like the result of a certain manipulation in Photoshop, it might be okay for me to work in whatever color space is available. As long as it's 16-bit, that is.

I will think about what all this means for my workflow. My feeling is that all the measured deviations are much smaller than the difference between my subjects' colors and what I get from my RAW developer as a TIF file. So as long as I like the result of a certain manipulation in Photoshop, it might be okay for me to work in whatever color space is available. As long as it's 16-bit, that is.

Hi,

You may also want to consider that converting to Lab, and back to RGB, will change/lose colors, even in a 16-bit workflow. Here is a test file that contains 'all' possible colors to test with (although not all images will have those colors that will be affected). One particular problem that can hurt is a shift of Blue to Purple (will create ugly looking skies).

You may additionally want to consider that the "RGB to Lab to RGB" conversion is often avoidable, by staying in RGB and using dedicated tools that avoid perceptual color shifts.

One must remember that 16777216 "colours" in RGB are in fact not unique colours, but all possible combinations of the encoding space.

ie there are not 16777216 perceivable colours in the sRGB space.

LAB space on the other hand is based on perceived colours and is used to calculate Colour Difference. It is an attempt to represent Human Colour Vision in a perceptually uniform manner.

sRGB etc is not based on human colour vision but on a real or ideal device.

Hence the reduction from 16777216 to '2,186,578 unique colors' as Bruce states.

One must remember that 16777216 "colours" in RGB are in fact not unique colours, but all possible combinations of the encoding space.

ie there are not 16777216 perceivable colours in the sRGB space

Hi Iain,

Where the sRGB colorspace is concerned, all of the 16+ million color combinations are well within the gamut that human vision can see, and as such are colors. When we assign a different colorspace, e.g. ProPhoto RGB, to the same test file, a number of those coordinates fall outside the human range of perception, and as such they cannot be called colors.

Quote

LAB space on the other hand is based on perceived colours and is used to calculate Colour Difference. It is an attempt to represent Human Colour Vision in perceptually uniform manner.

CIE Lab is not perceptually uniform, unless a theoretical transformation is performed as Bruce describes here. As he states, Lab "was not designed to have the perceptual qualities needed for gamut mapping". It was indeed "designed to measure color differences". Using Lab for anything other than measurements is fraught with difficulties.

Quote

sRGB etc is not based on human colour vision but on a real or ideal device.

Hence the reduction from 16777216 to '2,186,578 unique colors' as Bruce states.

That's not what Bruce states, though; he says that "All of the loss may be attributed to quantization, that is, multiple unique RGB colors collapsing into a single Lab color". Besides, sRGB is only a small subset of all humanly visible colors; that's one of the reasons it was used as a lowest common denominator for various hardware devices (camera and scanner files/displays/printers/etc.).

I did mention that not all of the color coordinates in the test file are present in an average image, so some images will lose more color precision (multiple colors collapsing in a single coordinate) than others.

Hi,

As sRGB is a device space, the 16-million-plus coordinates do not represent unique perceived colours. There are a lot of RGB triplets that produce the same colour perception, or "holes". The holes are already there in sRGB; it's just that you don't "see" them until you convert to, say, LAB.

So to go from device space to perceptual space of course there will be a reduction.

I said that LAB is an attempt to produce a uniform color space. CIECAMUCS is a better attempt.

Converting from RGB to Lab and back in 8-bit will, depending on the original color space, lose a decent number of values, and it's unnecessary. In 16-bit the same is true, but the rounding errors are so tiny we don't detect them - partially thanks to the lack of precision of Photoshop's number read-out <g>. If you feed color lists to ColorThink, you'd see this. Keep in mind the role of Dither on 8-bit per channel data (check the color settings)! One way to 'see' the effect of this kind of conversion is to use the Apply Image command to subtract two iterations; the role of dither here should be visible:

CIE Lab is not perceptually uniform, although that was the idea <g>. The creators of Lab probably couldn't imagine the use of the color model today in apps like Photoshop, and would probably question some of those uses as well.

As sRGB is a device space, the 16-million-plus coordinates do not represent unique perceived colours. There are a lot of RGB triplets that produce the same colour perception, or "holes". The holes are already there in sRGB; it's just that you don't "see" them until you convert to, say, LAB.

In perusing Bruce's page on RGB working space information, I see that the gamut of sRGB is 832,000 cubic ΔE units. A ΔE of 1.0 is the smallest color difference the human eye can see. If I interpret this correctly, this means that sRGB contains 832,000 perceivable colors. The RGB file in question contains 16M values, and many adjacent values are not differentiated by the human visual system and therefore are not unique colors in the strictest sense, since color is a perceptual phenomenon and not a physical entity. If the round trip retains 2,186,578 colors, perhaps this is sufficient. The L*a*b* gamut is 2,381,085 cubic ΔE units.
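For reference, the ΔE figure being discussed here is the CIE76 color difference: the plain Euclidean distance between two points in Lab, which is why a gamut's size can be quoted as a volume in cubic ΔE units. A minimal version:

```python
import math

def delta_e76(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two Lab points.
    A value around 1.0 is commonly quoted as the just-noticeable difference."""
    return math.dist(lab1, lab2)

# Two near-identical grays fall below the JND; gray vs. a vivid red does not.
print(delta_e76((50.0, 0.0, 0.0), (50.5, 0.3, -0.2)))
print(delta_e76((50.0, 0.0, 0.0), (55.0, 60.0, 40.0)))
```

Because Lab is not actually perceptually uniform, later formulas (ΔE94, ΔE2000) weight the terms differently, but CIE76 is the one that makes "gamut volume in ΔE units" a simple geometric statement.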

In perusing Bruce's page on RGB working space information, I see that the gamut of sRGB is 832,000 cubic ΔE units. A ΔE of 1.0 is the smallest color difference the human eye can see. If I interpret this correctly, this means that sRGB contains 832,000 perceivable colors.

Hi Bill,

I don't think that is the correct way of looking at it. It is more that many colors fall below the JND (just noticeable difference) threshold, but that's only in a side by side attempt to differentiate between them. They are all unique colors.

It's most certainly not the reason for the encoding precision losses that Bruce mentioned.

Quote

The RGB file in question contains 16M values, and many adjacent values are not differentiated by the human visual system and therefore are not unique colors in the strictest sense, since color is a perceptual phenomenon and not a physical entity.

It is indeed the adjacent colors that cannot be discriminated from each other, but they are unique (in fact they are practically infinitely analog). The colors themselves can be perceived, but not discriminated between when seen side-by-side.

The encoding losses have to do with the differences in gamut size and an encoding in integer coordinate space.

Remember that colour is a perception. It is wrong to think of device values as separate colours unless each individual triplet produces a different perception, i.e. a unique colour. In sRGB this is not the case.

"adjacent colors that cannot be discriminated from each other, but they are unique" - no they are the same colour!

Device values and encoding values are just that - values, not colours, until the device is viewed.

Don't forget that the PCS is either LAB or XYZ, and to go from sRGB to CMYK inkjet, for example, you pass through LAB, so the sRGB gets mapped to LAB values anyway.

As for the OP: working in LAB is maybe not as intuitive as an RGB space, but it can be done with a bit of care. Most of the time working in RGB will suffice.

You may also want to consider that converting to Lab, and back to RGB, will change/lose colors, even in a 16-bit workflow. Here is a test file that contains 'all' possible colors to test with (although not all images will have those colors that will be affected). One particular problem that can hurt is a shift of Blue to Purple (will create ugly looking skies).

You may additionally want to consider that the "RGB to Lab to RGB" conversion is often avoidable, by staying in RGB and using dedicated tools that avoid perceptual color shifts.

Cheers, Bart

Below is a screengrab in Photoshop, off my calibrated sRGB-ish Dell 2209WA LCD, of a 100% cropped view of an image of a decorative deep blue glass crystal ball, captured in Raw under outdoor daylight shade and processed in ACR in 16-bit ProPhotoRGB. It demonstrates why sRGB sucks as a technical quantitative comparator, and even as a working space, no matter how many perceptual levels of colors it encompasses.

It shows what happens when I convert from ProPhotoRGB to sRGB in both ACR and Photoshop (the same thing happens converting to Lab and then to sRGB). Converting to AdobeRGB shows a noticeably smaller shift, but still a shift from blue to purple. Editing in ACR in sRGB would not let me correct the blue-to-purple color shift; none of the hue/saturation tools made a dent. ProPhotoRGB made it easy.

ACR/LR has so many tools that can bring out saturated facets like in the blue crystal that Lab tools can't even come close to...

2. Saturation sliders in "Camera Profile" panel. Don't underestimate their power when editing vibrant jewelry such as the blue crystal. They act upon color quite differently than HSL, Vibrance and Saturation sliders.

3. HSL, Vibrance and Saturation.

All of them are immediate and quickly accessed. All you have to do is play with them like you're playing a video game - it's that fast. Working in Lab requires going into and saving out of too many dialog boxes, not to mention having to deal with layers.

Below is a screengrab in Photoshop, off my calibrated sRGB-ish Dell 2209WA LCD, of a 100% cropped view of an image of a decorative deep blue glass crystal ball, captured in Raw under outdoor daylight shade and processed in ACR in 16-bit ProPhotoRGB. It demonstrates why sRGB sucks as a technical quantitative comparator, and even as a working space, no matter how many perceptual levels of colors it encompasses.

You may be correct, but your demonstration is a bit lacking in detail. This is what happened in the Unconverted image:

1) opened a raw file in humongous ProPhoto
2) automatic/manual adjustments were applied in the raw converter, spinning colors around
3) color management then tried to squeeze these wide-ranging ProPhoto colors into your monitor's presumed color space using unspecified parameters
4) the video card driver and LUT performed more squeezing - a screen capture was taken
5) the monitor displayed its best (sRGB-ish) rendition of such corrected colors

In the Converted image you went back to step 2) and inserted step 2a) after it in the list:

2a) conversion to sRGB color space using unspecified parameters.

The two screen shots were compared, but there are many unknowns for the comparison to be meaningful.

Since in 99% of cases people are either printing or viewing in sRGB, it would be interesting to know whether it makes a practical difference in day-to-day use to work in it from the very start, or rather to convert to it at the end of post-processing. Squeeze at the beginning or at the end, but always squeeze we (non-pros) must.

My feeling is that most of the color differences/shifts are introduced during conversion into the final color space (sRGB), and that the more numerous and the more extreme the adjustments one makes in a larger color space, the greater the chance that final colors will be wild guesstimates - which may then need to be corrected again after conversion. On the other hand, when there are minimal adjustments, there is no reason why Camera Space --> XYZ --> ProPhoto --> XYZ --> sRGB at the very start should not result in very similar values to Camera Space --> XYZ --> sRGB, assuming proper conversion parameters. If I understand correctly, this is what is shown in your attachment (the conversion to sRGB-ish in the left image capture performed by your CM system).

So where do the differences come from? The first suspect that comes to mind is the chosen intent/compensation; the second is the adjustments performed in ProPhoto. BTW, the more direct approach should in theory result in less noise.

To check this with your difficult test file, you could open the Raw file directly into sRGB with an sRGBV4 profile (Nikon's is good) using a variety of intents with/without blackpoint compensation and produce a series of comparisons to ProPhoto+sRGB in the same conditions. Better yet you could post the Raw file and let us have a go at it with our own raw converters and CM workflows.

Jack

[EDIT] I viewed this page in Chrome, which is not color managed. Here is a screen capture of tlooknbills' message and thumbnail in Chrome, superimposed with the same file open in color-managed CS5. It lost some of its purpliness already. I wonder what it would look like if originally opened in sRGB direct from Raw by a well-behaved Raw converter other than LR.

Thanks to all contributors for this enlightening discussion. Although I have to admit that some arguments are beyond my full understanding. But that's okay, because I'm here to learn.

Two thoughts of mine on the recent posts:

- I think I do not care too much if my eyes can differentiate between colors/values. I would appreciate it if those different values are there in the first place and stay in my file as long as possible, because you never know, one day you might want to stretch, bend or blow up those values until they are perceivable as different colors.

- I have experienced the blue-purple shift quite often when I tried to desaturate images in CaptureOne Pro 7 ("saturation -30", for example). Is this due to CO working in some Lab-ish colorspace that is not perceptually uniform? Or could it be just an effect on my monitor (sRGB-ish, profiled with a Spyder3)? What would be a safe workaround?

- I have experienced the blue-purple shift quite often when I tried to desaturate images in CaptureOne Pro 7 ("saturation -30", for example). Is this due to CO working in some Lab-ish colorspace that is not perceptually uniform? Or could it be just an effect on my monitor (sRGB-ish, profiled with a Spyder3)? What would be a safe workaround?

It was explained by Bart's link above to Bruce Lindbloom's ("Blue turns Purple") graphical analysis of the non-uniform definition of color within Lab space: the mathematics involved map certain colors (blue) along an arc rather than a straight Euclidean line when converting from one color space (Lab) into another (sRGB/AdobeRGB), and the errors show up on an 8-bit video preview.

As other contributors have said in this thread, Lab space was created as a color "difference" description model and is now used as the mathematical reference point, the Profile Connection Space (PCS), operating under the hood of color management processes. It was never designed to be an editing space, though tools have been built for it, because before the color management of digital processes it (along with Monitor RGB) was the only intuitive space to work in for commercial press color correction. The scanner likewise served as a color space for source media such as film.

Everything was proprietary back then, and so everyone had their own secret sauce for maintaining quality color reproduction. Now color reproduction integrity is controlled with math and algorithms, so WYSIWYG is assured across a wide range of devices, not controlled by one company with their own secret sauce. Blue turning purple is just one of the imperfections we have to cope with in signing on to color management processes.

Quote

You may be correct, but your demonstration is a bit lacking in detail. This is what happened in the Unconverted image:

1) opened a raw file in humongous ProPhoto
2) automatic/manual adjustments were applied in the raw converter, spinning colors around
3) color management then tried to squeeze these wide-ranging ProPhoto colors into your monitor's presumed color space using unspecified parameters
4) the video card driver and LUT performed more squeezing - a screen capture was taken
5) the monitor displayed its best (sRGB-ish) rendition of such corrected colors

Jack Hogan, I'll go with Bruce Lindbloom's explanation. The limiter when editing in any color space, be it Raw or JPEG, is the display. There's no such thing as twisting, distorting and squeezing color on a display. To test, view the Photoshop Color Picker for each of two new documents, one in ProPhotoRGB, then switch to an sRGB document; they will look drastically different. In the ProPhotoRGB Color Picker, select the most intense blue, fill a selection, and then convert to sRGB. It turns purple even in a simple color fill. It has nothing to do with getting crazy with color in ProPhotoRGB. It's just about finding the easiest way to get all the color a display can deliver, and ProPhotoRGB is the space to do it. It's that simple.
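The blue-to-purple failure can also be seen numerically: take a Lab blue just outside the sRGB gamut and push it through the standard Lab → XYZ → linear sRGB math (D65, standard matrices; the specific Lab values here are arbitrary picks for illustration). The linear channels land outside 0-1, and however the CMM then clips or compresses them, the ratios between channels - i.e. the hue - have to change:

```python
def lab_to_linear_srgb(L, a, b):
    """CIELAB (D65) -> linear sRGB; out-of-gamut colors yield values outside 0..1."""
    fy = (L + 16) / 116
    fx, fz = fy + a / 500, fy - b / 200
    def finv(t):
        return t ** 3 if t > 6 / 29 else 3 * (6 / 29) ** 2 * (t - 4 / 29)
    x, y, z = 0.95047 * finv(fx), 1.0 * finv(fy), 1.08883 * finv(fz)
    return (3.2404542 * x - 1.5371385 * y - 0.4985314 * z,
            -0.9692660 * x + 1.8760108 * y + 0.0415560 * z,
            0.0556434 * x - 0.2040259 * y + 1.0572252 * z)

# A very saturated blue, more chromatic than the sRGB blue primary
# (which sits near Lab(32, 79, -108)):
r, g, b = lab_to_linear_srgb(30.0, 90.0, -130.0)
print(r, g, b)  # red goes negative, blue overshoots 1.0

# Naive clipping forces the channels back into range, but changes their
# ratios, shifting the reproduced hue -- the "blue turns purple" effect.
clipped = tuple(min(1.0, max(0.0, c)) for c in (r, g, b))
print(clipped)
```

Real conversions use smarter rendering intents than per-channel clipping, but the underlying problem is the same: a color the destination space cannot encode must land somewhere else, and for saturated Lab/ProPhoto blues that somewhere is visibly toward purple in sRGB.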