First, we see more and more wide gamut monitors (even in offices ... mostly not calibrated/color managed, though). Second, one of the classical targets of a photo is a print. Sure, many mass printing services require sRGB... but there is a growing number of photo enthusiasts printing at home and using the canned profiles paper vendors provide. Printers/papers produce a larger gamut than sRGB. sRGB is just the lowest common denominator, and even as such it is pretty dated.

I viewed this page in Chrome, which is not color managed. Here is a screen capture of tlooknbill's message and thumbnail in Chrome, superimposed with the corresponding file open in color-managed CS5. It has lost some of its purpleness already. I wonder what it would look like if opened into sRGB directly from the Raw by a well-behaved Raw converter other than LR.

One can test his/her browser by viewing an image on the ICC site. I viewed this site using the latest versions of Chrome, Firefox, Internet Explorer, and Safari. Chrome is partially color managed, and the other browsers now appear fully managed, as judged by this test. With the wide availability of color managed browsers, there is little need to post images in sRGB if your audience is at least slightly sophisticated in color management (which I think would apply to most members of this forum).

Bill, regardless of browser, what one sees still depends on the constraint of the display gamut. Many people are using displays that don't "see" more than sRGB, notwithstanding the increasing preference - in the photographic community - for wide gamut displays that see almost all of ARGB(98).

@tlooknbill
Do you mean that there is no way to avoid blue-purple shifts because programs like RAW converters are based on color spaces that are not perceptually uniform? There's this little thingy on Bruce's website that looks like a cure for LAB's perceptual shortcomings. It's downloadable for free. But maybe I got it all wrong.

@new_haven
Thanks for the link. Looks like fun. I'll play around with that method one of these days.

@ the general public

I see two main paths:

#1 - Go to the smallest color space as soon as possible. Most probably that's your monitor's sRGB-ish gamut. Others may have a little more territory, like 98% of AdobeRGB(1998). Look for your weakest link. Only this can give you some sort of WYSIWYG during adjustments. And you might find that most of your output profiles are sRGB anyway.

#2 - Go to a really huge color space for adjustments. Play around and maybe even jump from one space to another, because errors from conversion are likely negligible. If you don't agree, then don't jump. But in the end you will have to have faith in your last conversion - the one that compresses your files' most extreme values into what your printer, your mother's monitor or your client's web browser can handle. Could be that it's sRGB most of the time.

Any which way you choose, you will have to live with something that looks almost like a blue crystal, but surely isn't one if you take a closer look.


Mark,

Users whose monitor gamut is limited to sRGB or thereabouts would experience clipping if the gamut of an image with a tagged color space exceeded their monitor's gamut, but at least an image in ProPhotoRGB would not appear washed out.


Or you can do what Michael and I do: shoot raw, do the heavy lifting for tone/color correction in the raw processor (going to Photoshop only for specific retouching or compositing needs), maintain images in 16-bit ProPhoto RGB as RGB Master images, and transform to specific output profiles for prints or web/multimedia. Considering my ink jet printers can print colors outside of Adobe RGB, let alone outside of sRGB, I think it's a mistake to cut your gamut early in processing. Both printers and displays are increasing their ability to reproduce colors, so I think it's foolish not to take advantage of larger color gamuts for as long as you can. ProPhoto RGB is the only color space (currently in ACR/LR) that can keep all the colors your camera can capture and the colors your printer can print.

As for L*a*b*, it's an interesting color space that allows some unusual color manipulations not possible in RGB, but it's not interesting enough to make me move out of RGB processing...but hey, do whatever it takes to get your images the way you want them.

If I start in sRGB I need to do this re-opening in less than 10% of the files.

You are not starting with sRGB if you are working with raw data and some processor working in some RGB processing space. Think of sRGB as just another one of many possible color spaces you can end up with.

sRGB is not a print output color space. There are no sRGB printers. There are printers you can feed sRGB and then that data is converted into another color space for that output device. Today, the only reasonable use of sRGB is to show an image to someone on a display. That's what sRGB is based upon (albeit a very old CRT circa early 1990s).

Your final may be sRGB or Epson 3880 Luster RGB. You treat this process the same. The data doesn't have to be in sRGB or Epson 3880 to do 95%+ of the work while maintaining the widest gamut data you are forced to use in the converter.

If you really want sRGB, or at least the closest to it that can be produced on your end, set the camera to sRGB.

Quote

The limiting factor when editing in any color space, be it Raw or Jpeg, is the display. There's no such thing as twisting, distorting and squeezing color on a display...

It has nothing to do with getting crazy with color in ProPhotoRGB. It's just about finding the easiest way to get all the color a display can deliver, and ProPhotoRGB is the space to do it. It's that simple.

Quote

Thanks for your opinion, tlooknbill, but I'd like to see for myself, ideally with a difficult file like your blue ball one. And so far I have not seen a practical advantage to going through ProPhoto when the final output space is sRGB - in a Raw-file-to-sRGB monitor/print workflow.

Quote

PS For those who are wondering, ProPhoto/aRGB from start to finish give clearly better colors on my U2410 monitor than the sRGB/ProPhoto+sRGB workflows discussed above. This is especially evident in gollywop's sunset image.

I can see from your statement that we share the same opinion about large color spaces giving better colors on our displays.

I can also see I wasted my time uploading the blue crystal ball Raw image so you could see the blue to purple shift. I thought you were going to process the Raw to get results similar to what I showed in my screengrab. You just did a screengrab of an unprocessed Raw file, which isn't going to show anything. I don't even see the point of you posting the screen shots.


I thought I was clear in my objective: comparing images processed/rendered in sRGB only versus opened and processed in ProPhoto and converted to sRGB at the end. The differences appear to be minimal, as I mentioned, at least with the three example files and adjustments I tried. In the case of your file, the purple shift was there in both workflows, so I would have had to correct for it either way - except that in the ProPhoto workflow I would have realized it only at the end of the session.

Thank you for sharing your file, it was useful at least to me. Apologies if I have wasted your time.

I think what we need, and maybe Adobe will include this at some point in their development of LR or ACR, is the ability to preview or predetermine the image colour gamut. The user could then select the appropriate colour space depending on the image colour gamut, if so desired.

The image is in XYZ at some point in the pipeline, so a quick subsampling to LAB for gamut comparison would be quite straightforward.

For example, if an image contains a very small colour gamut, would it need to be rendered to ProPhoto, or would sRGB suffice? Or vice versa.

Either way, I think it is important to match the image gamut to the rendered colour space as closely as possible.
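The subsampling idea can be sketched directly: since Lab values are device-independent, you can test whether they fit a candidate output space by converting Lab to XYZ to linear RGB for that space and looking for out-of-range channels. Below is a minimal pure-Python sketch, not anything from ACR/LR; it uses the standard D65 sRGB and CIELAB constants, and the function names and tolerance are my own choices:

```python
# Hypothetical gamut-fit check: convert Lab -> XYZ -> linear sRGB and
# flag any channel that falls outside [0, 1].

def lab_finv(t):
    # inverse of the CIELAB f() function
    d = 6 / 29
    return t ** 3 if t > d else 3 * d * d * (t - 4 / 29)

def lab_to_xyz(L, a, b, wp=(0.95047, 1.0, 1.08883)):
    # wp is the D65 white point used by sRGB
    fy = (L + 16) / 116
    return (wp[0] * lab_finv(fy + a / 500),
            wp[1] * lab_finv(fy),
            wp[2] * lab_finv(fy - b / 200))

def xyz_to_linear_srgb(x, y, z):
    # sRGB D65 matrix (IEC 61966-2-1)
    return (3.2404542 * x - 1.5371385 * y - 0.4985314 * z,
            -0.9692660 * x + 1.8760108 * y + 0.0415560 * z,
            0.0556434 * x - 0.2040259 * y + 1.0572252 * z)

def fits_srgb(lab_colors, tol=1e-3):
    """True if every (L, a, b) triple lands inside the sRGB gamut."""
    return all(
        -tol <= c <= 1 + tol
        for L, a, b in lab_colors
        for c in xyz_to_linear_srgb(*lab_to_xyz(L, a, b))
    )
```

An image whose subsampled Lab values pass `fits_srgb` could be rendered to sRGB without saturation clipping; one that fails would argue for Adobe RGB or ProPhoto.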


Iain

What is the purpose of having a bunch of Raw/Jpeg files saved in various output color spaces? Schewe, Rodney and others have already given the simplest workflow solution, and that is to keep everything in ONE COLOR SPACE: 16-bit ProPhotoRGB.

You don't need to know about gamut compression. You've got a preview to tell you all you need to know. The "Blue Turns Purple" issue is not a deal breaker. It was just a demonstration to show how editing in sRGB hobbles the color you paid good money for in the form of a capture device and a wide gamut display.

Raw images are nothing but grayscale luminance representations of voltage measurements from charged photo cells. The display and the color space give the photographer the means to decide what colors to turn those grayscale values into. sRGB is the worst place to find that out.

Adobe could improve the accuracy and usefulness of an OOG overlay. ColorThink has been doing it better than the old PS OOG overlay for years. It would be useful to see, in real time, what's out of your current display gamut as you edit, without having to cover the image with one color at 100% opacity.

I think what we need, and maybe Adobe will include this at some point in their development of LR or ACR, is the ability to preview or predetermine the image colour gamut. The user could then select the appropriate colour space depending on the image colour gamut, if so desired.

The image is in XYZ at some point in the pipeline, so a quick subsampling to LAB for gamut comparison would be quite straightforward.

For example, if an image contains a very small colour gamut, would it need to be rendered to ProPhoto, or would sRGB suffice? Or vice versa.

I don't know what type of tool you would like to have, but both ACR and LR have the capability to determine if the gamut of the image fits into a selected color space for the final rendered image. In ACR you can set the preference to render into sRGB. If gamut clipping is apparent as in the histogram of the first image below, select AdobeRGB or ProPhotoRGB to eliminate the saturation clipping.

In LR, the rendering into the selected color space is deferred until the export stage. One can use soft proofing to see if there is clipping when exporting into the final color space as shown in the second image below.

Either way, I think it is important to match the image gamut to the rendered colour space as closely as possible.

If you use 16 bits per pixel, I see no disadvantage in using ProPhotoRGB. If 8 bit output is desired, then a smaller space would be preferable. Personally, I would not take the trouble to go through dozens of images, selecting the appropriate space for each, but would simply use ProPhoto as Jeff recommends. Otherwise, you could use a space that contains the real world surface colors that occur in nature. The size of such a space is shown in the third image below, courtesy of Gernot Hoffmann. Bruce Lindbloom's BetaRGB might fit the bill.

@tlooknbill
Could not sleep after what had been said and did some research. The color-space/-conversion basis of Photoshop indeed seems to be screwed up. For example, I found quite informative what Rags Gardner wrote (http://www.rags-int-inc.com/PhotoTechStuff/ColorCalculator/AdobeMath.html). Funny to read that he hopes that Adobe will finally get things straight in CS3 - we're at CS6 right now.

So my understanding is that even if CIELab might not be perceptually uniform (Wikipedia claims it is: "Lab color is designed to approximate human vision. It aspires to perceptual uniformity"), it takes just a bit of math to change that and meet the Munsell criteria. The know-how is there, and the computational capacity of our Macs/PCs could manage that trick. It's a shame that Adobe does not seem to care.

Furthermore, the biggest player is unable to get some rather primitive value conversions right, which altogether makes changing color spaces between RGB and LAB in Photoshop so lossy that you'd better not do it, no matter at what bit depth.

None of this has to be this way, and I am waiting for software that gets it right, because I still prefer how changing saturation in LAB does not affect luminance (color-psychological effects aside). In RGB my images lighten up when boosting saturation, and I need to apply another curve to bring those values down again.
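The luminance point is easy to verify numerically: scaling a* and b* at constant L* cannot change Y (relative luminance), because Y depends on L* alone, whereas a naive RGB-side saturation boost shifts luminance. The sketch below uses HSV as a stand-in for a generic RGB saturation control; the 30% boost and the test color are arbitrary illustration values of mine, and the conversions are the standard sRGB (D65) and CIELAB formulas:

```python
import colorsys

WP = (0.95047, 1.0, 1.08883)  # D65 white point

def to_linear(c):
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def to_gamma(c):
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def srgb_to_xyz(r, g, b):
    r, g, b = to_linear(r), to_linear(g), to_linear(b)
    return (0.4124564 * r + 0.3575761 * g + 0.1804375 * b,
            0.2126729 * r + 0.7151522 * g + 0.0721750 * b,
            0.0193339 * r + 0.1191920 * g + 0.9503041 * b)

def xyz_to_srgb(x, y, z):
    r = 3.2404542 * x - 1.5371385 * y - 0.4985314 * z
    g = -0.9692660 * x + 1.8760108 * y + 0.0415560 * z
    b = 0.0556434 * x - 0.2040259 * y + 1.0572252 * z
    return tuple(to_gamma(c) for c in (r, g, b))

def lab_f(t):
    d = 6 / 29
    return t ** (1 / 3) if t > d ** 3 else t / (3 * d * d) + 4 / 29

def lab_finv(t):
    d = 6 / 29
    return t ** 3 if t > d else 3 * d * d * (t - 4 / 29)

def xyz_to_lab(x, y, z):
    fx, fy, fz = lab_f(x / WP[0]), lab_f(y / WP[1]), lab_f(z / WP[2])
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def lab_to_xyz(L, a, b):
    fy = (L + 16) / 116
    return (WP[0] * lab_finv(fy + a / 500),
            WP[1] * lab_finv(fy),
            WP[2] * lab_finv(fy - b / 200))

def luminance(rgb):
    return srgb_to_xyz(*rgb)[1]  # Y of XYZ is relative luminance

rgb = (200 / 255, 100 / 255, 50 / 255)

# 1) +30% chroma in Lab: L* (and therefore Y) is untouched by construction
L, a, b = xyz_to_lab(*srgb_to_xyz(*rgb))
rgb_lab = xyz_to_srgb(*lab_to_xyz(L, a * 1.3, b * 1.3))

# 2) +30% saturation in HSV: luminance is NOT preserved
h, s, v = colorsys.rgb_to_hsv(*rgb)
rgb_hsv = colorsys.hsv_to_rgb(h, min(1.0, s * 1.3), v)
```

For this color the HSV boost drops relative luminance noticeably, while the Lab boost leaves it essentially unchanged - which matches the complaint that an RGB saturation move needs a compensating curve afterwards.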


First, human visual perception is non-linear. I don't know what you mean by "perceptually uniform".

Second, if you want to separate luminance from color in RGB you can do it with Photoshop's Blend Modes and don't need conversion to Lab. But every time you do it you will find that after changing contrast in Luminance mode, you will need to add saturation to make the image look "natural". So it's largely pointless for most intents and purposes.

Third, working in 16-bit, the mathematical rounding errors from RGB>Lab conversion will most likely be un-noticeable 99% of the time. But that's not the point.

The main issue is reversibility of image editing. Lab conversion complicates reversibility, and creating these complications is largely needless. Yes, there are certain ways of using Lab to make selections based on manipulation of the channels that don't require selection tools, but this isn't the workaday requirement for most intents and purposes. It's good to use when nothing else works as well, but that has been decreasingly the case for years now. No one should make a theology of this. It's just another tool to use - selectively - when nothing else can do the job as easily.
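The size of those rounding errors is easy to put numbers on. The sketch below simulates integer Lab encodings (L* on a 0-255 or 0-65535 grid, a*/b* offset by 128, loosely modeled on Photoshop's integer modes but my own simplification, not its exact internals) and measures the worst 8-bit sRGB channel error after an sRGB -> quantized Lab -> sRGB round trip over a coarse sample of colors:

```python
WP = (0.95047, 1.0, 1.08883)  # D65 white point

def to_linear(c):
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def to_gamma(c):
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def lab_f(t):
    d = 6 / 29
    return t ** (1 / 3) if t > d ** 3 else t / (3 * d * d) + 4 / 29

def lab_finv(t):
    d = 6 / 29
    return t ** 3 if t > d else 3 * d * d * (t - 4 / 29)

def srgb8_to_lab(r8, g8, b8):
    r, g, b = (to_linear(c / 255) for c in (r8, g8, b8))
    x = 0.4124564 * r + 0.3575761 * g + 0.1804375 * b
    y = 0.2126729 * r + 0.7151522 * g + 0.0721750 * b
    z = 0.0193339 * r + 0.1191920 * g + 0.9503041 * b
    fx, fy, fz = lab_f(x / WP[0]), lab_f(y / WP[1]), lab_f(z / WP[2])
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def lab_to_srgb8(L, a, b):
    fy = (L + 16) / 116
    x = WP[0] * lab_finv(fy + a / 500)
    y = WP[1] * lab_finv(fy)
    z = WP[2] * lab_finv(fy - b / 200)
    r = 3.2404542 * x - 1.5371385 * y - 0.4985314 * z
    g = -0.9692660 * x + 1.8760108 * y + 0.0415560 * z
    bl = 0.0556434 * x - 0.2040259 * y + 1.0572252 * z
    return tuple(min(255, max(0, round(to_gamma(max(0.0, c)) * 255)))
                 for c in (r, g, bl))

def max_round_trip_error(levels, step=17):
    """Worst 8-bit channel error over a grid of sRGB colors when Lab is
    quantized to `levels` steps per channel (256 ~ 8-bit, 65536 ~ 16-bit)."""
    q = levels - 1
    worst = 0
    for r in range(0, 256, step):
        for g in range(0, 256, step):
            for b in range(0, 256, step):
                L, a, bb = srgb8_to_lab(r, g, b)
                L = round(L / 100 * q) / q * 100              # quantize L*
                a = round((a + 128) / 255 * q) / q * 255 - 128  # quantize a*
                bb = round((bb + 128) / 255 * q) / q * 255 - 128
                rt = lab_to_srgb8(L, a, bb)
                worst = max(worst, *(abs(u - v) for u, v in zip((r, g, b), rt)))
    return worst
```

Under these assumptions the 8-bit round trip loses at least a code value on some colors (dark shadows are the usual victims), while the 16-bit round trip returns every sampled color exactly - consistent with the point that 16-bit Lab conversions are rarely the real problem.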

So my understanding is that even if CIELab might not be perceptually uniform (Wikipedia claims it is: "Lab color is designed to approximate human vision. It aspires to perceptual uniformity")

That might have been the goal, but it isn't the reality. CIELAB was 'recommended' by the CIE in 1976 to address a specific problem: while identical XYZ values could tell you when two stimuli would be experienced as the same 'color' by most observers, they did not tell you how 'close' two colors were if they were not exactly the same XYZ value. Where Lab is useful is for predicting the degree to which two sets of tristimulus values will match under defined conditions; it is not anywhere close to being an adequate model of human color perception. It works well as a reference space for colorimetrically defining device spaces, but as a space for image editing, it has many problems. There are a slew of other perceptual effects that Lab ignores. Lab assumes that hue and chroma can be treated separately, but numerous experimental results indicate that our perception of hue varies with the purity of color. Mixing white light with a monochromatic light does not produce a constant hue, but Lab assumes it does! This is seen in Lab modelling of blues. It's the cause of the dreaded blue-magenta color issues or shifts. Lab is no better, and in many cases can be worse than a colorimetrically defined color space based on real or imaginary primaries.
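The blue behavior is reproducible with nothing but the Lab formulas: hold L* and the Lab hue angle of pure sRGB blue fixed, reduce chroma (as a perceptual-intent gamut mapping might), and the resulting sRGB values pick up a pronounced red component relative to green - the numeric signature of the 'constant hue' line through blue drifting toward purple. A sketch with the standard D65 constants; halving the chroma is just my example value:

```python
WP = (0.95047, 1.0, 1.08883)  # D65 white point

def lab_f(t):
    d = 6 / 29
    return t ** (1 / 3) if t > d ** 3 else t / (3 * d * d) + 4 / 29

def lab_finv(t):
    d = 6 / 29
    return t ** 3 if t > d else 3 * d * d * (t - 4 / 29)

def srgb_blue_lab():
    # sRGB blue (0, 0, 1) is linear (0, 0, 1); its XYZ is the third
    # column of the sRGB->XYZ matrix
    x, y, z = 0.1804375, 0.0721750, 0.9503041
    fx, fy, fz = lab_f(x / WP[0]), lab_f(y / WP[1]), lab_f(z / WP[2])
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def lab_to_srgb(L, a, b):
    fy = (L + 16) / 116
    x = WP[0] * lab_finv(fy + a / 500)
    y = WP[1] * lab_finv(fy)
    z = WP[2] * lab_finv(fy - b / 200)
    r = 3.2404542 * x - 1.5371385 * y - 0.4985314 * z
    g = -0.9692660 * x + 1.8760108 * y + 0.0415560 * z
    bl = 0.0556434 * x - 0.2040259 * y + 1.0572252 * z
    gamma = lambda c: 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055
    return tuple(gamma(max(0.0, c)) for c in (r, g, bl))

L, a, b = srgb_blue_lab()                    # roughly L* 32, a* +79, b* -108
r, g, bl = lab_to_srgb(L, a * 0.5, b * 0.5)  # halve chroma at constant L*, hue
```

At half chroma the red channel ends up well above green, which is what the eye reads as purple rather than desaturated blue.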

Quote

Furthermore, the biggest player is unable to get some rather primitive value conversions right, which altogether makes changing color spaces between RGB and LAB in Photoshop so lossy that you'd better not do it, no matter at what bit depth.

That's simply not accurate. Rather, the need to convert to Lab has been oversold, and most just accept this rather than investigate how and why the tools we have work the way they do!

None of this has to be this way, and I am waiting for software that gets it right, because I still prefer how changing saturation in LAB does not affect luminance (color-psychological effects aside). In RGB my images lighten up when boosting saturation, and I need to apply another curve to bring those values down again.

Hi,

Topaz Labs "Adjust" could be what you are looking for. It's a Photoshop plug-in, but can also be used without Photoshop in Topaz Labs "photoFXlab", which is a kind of command central for all plugins, but it also offers e.g. layers and masking functionality. 'Adjust' offers, amongst others, Adaptive Saturation, regular saturation, and low Saturation Boost controls, while attempting to leave Luminosity alone.

I am waiting for software that gets it right, because I still prefer how changing saturation in LAB does not affect luminance (color-psychological effects aside). In RGB my images lighten up when boosting saturation, and I need to apply another curve to bring those values down again.

Combinations of Saturation, Vibrance, HSL and Calibration Panel sliders in ACR/LR can do the same Sat/Lum disconnect as in Lab.

The Blue channel Saturation slider in Calibration Panel really is the worst while Sat slider in Basic Panel is the best. No need to go into Lab in Photoshop to apply something as simple as increased saturation. What a cumbersome workflow to have to go into Lab just to add saturation.

I couldn't imagine having to do that on my 3000 Raw shots.

Quote

Mixing white light with a monochromatic light does not produce a constant hue, but Lab assumes it does! This is seen in Lab modelling of blues. It's the cause of the dreaded blue-magenta color issues or shifts. Lab is no better, and in many cases can be worse than a colorimetrically defined color space based on real or imaginary primaries.

Interesting. Never heard of monochromatic light, Andrew. What is it and how does it change hue perception that's different from "White Light" Lab doesn't take into account? How does that fit into Bruce Lindbloom's "Blue turns purple" analysis posted earlier? Or are you referring to something different from Bruce's explanation.