gollywop wrote: I'm wondering, however, where the XYZ came from unless you're talking about OOC jpegs.

Hi gollywop. As I understand it, all conversions (including from camera space, unless short-circuited by vendors' proprietary Raw converters) need to go through the Profile Connection Space (XYZ D50). Come to think of it, the two color-space flows should probably look like this:

It's my understanding that the camera's jpeg process begins with a trip to XYZ. I wouldn't take bets, but I don't believe ACR does and I'm of an even stronger belief that RPP doesn't.

Regarding white balancing processes specifically:

Iliah Borg wrote:

The usual way to achieve WB is to equalize response of all colour channels over a grey (or synthetic grey, as in the grey world model) and is performed in RGGB domain. This is based on the theory of equalizing colour channel sensitivities using achromatic colours as a reference.

The "chromatic adaptation" way of achieving white balance is acting over XYZ, that is the ACR/LR way.
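The first method Iliah describes - equalizing channel responses over a grey reference - can be sketched in a few lines. Everything below (the patch values, the choice of green as the reference channel) is made up purely for illustration:

```python
import numpy as np

# Toy illustration of the "equalize channels over a grey reference" approach
# to white balance, applied per channel in the raw RGGB domain.
# The patch values are invented for the sketch, not from any real camera.

# Mean raw values measured over a grey patch, one per Bayer channel (R, G1, G2, B).
grey_patch_means = np.array([480.0, 1000.0, 1010.0, 620.0])

# Scale every channel so its grey response matches the average green response.
green_ref = grey_patch_means[1:3].mean()
wb_gains = green_ref / grey_patch_means

# After applying the gains, all four channels report the same value for grey.
balanced = grey_patch_means * wb_gains
```

The grey-world variant mentioned above works the same way, except the "patch" means are taken over the whole image on the assumption that it averages to grey.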

All raw converters ultimately have to take the demosaiced raw camera coordinates (RGB or GMCY) and map them to XYZ or Lab, though it may be only an intermediate step. This is especially true for raw converters that use ICC profiles, where either XYZ or Lab D50 is the working space. The final mapping from XYZ to any of the standard RGB spaces is just a linear transform plus an encoding curve. The lookup tables provide a mechanism for non-linear corrections (e.g., if you need to tweak deep saturated reds differently than less-saturated reds, something the earlier matrix-only profiles obviously could not do).
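The "linear transform plus an encoding curve" step can be sketched as follows. This uses the standard XYZ(D65)-to-sRGB matrix and the piecewise sRGB transfer curve; a real ICC pipeline starting from the D50 PCS would apply a chromatic adaptation to D65 first, which is omitted here:

```python
import numpy as np

# Converting linear CIE XYZ to sRGB: a 3x3 linear transform followed by the
# sRGB encoding curve. Matrix values are the standard XYZ(D65)->sRGB ones;
# chromatic adaptation from the D50 PCS is deliberately left out of the sketch.

XYZ_TO_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def srgb_encode(c):
    """Apply the piecewise sRGB transfer curve to linear values in [0, 1]."""
    c = np.clip(c, 0.0, 1.0)
    return np.where(c <= 0.0031308, 12.92 * c, 1.055 * c ** (1 / 2.4) - 0.055)

def xyz_to_srgb(xyz):
    linear = XYZ_TO_SRGB @ np.asarray(xyz)
    return srgb_encode(linear)

# The D65 white point should map to (1, 1, 1), i.e. pure white.
white = xyz_to_srgb([0.9505, 1.0, 1.0890])
```

The non-linear lookup-table corrections mentioned above would sit on top of this; the matrix alone can only move colors linearly.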

Thanks, xpatUSA. Yes, those are the choices for an output space. So it would appear that XYZ is a possible output space, but it is not employed as a matter of course during processing, which is what I'd figure. It's like RPP allows LAB as a choice for an output space, but it doesn't figure in any of the basic rendering operations.

My mistake. As I understand color theory, XYZ is a color model and, as you probably know, color spaces are defined within a color model. XYZ itself is, as it were, dimensionless - often normalized so that reference white is 1,1,1, for example. What Coffin does (at least with my X3Fs) is transform the 3-channel raw data into 16-bit XYZ numbers, i.e. 1.0 = 65,535. Theoretically he should do this internally anyway, irrespective of the output space, but he's not forced to and I don't know if he does. When displayed on the monitor, his XYZ images come out very unsaturated, as one would expect. One of Coffin's profiles is ProPhoto linear gamma, so I suspect he may use Kodak RIMM/ROMM as the working space. ArvoJ probably knows more about that than I.
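The 16-bit scaling here is just a normalization choice - 1.0 (reference white) maps to 65,535, the largest 16-bit value. A trivial sketch:

```python
import numpy as np

# Normalized XYZ values in [0, 1] stored as 16-bit integers,
# with 1.0 (reference white) encoding to 65,535.

xyz = np.array([0.25, 0.50, 0.75])                 # normalized XYZ triple
encoded = np.round(xyz * 65535).astype(np.uint16)  # 16-bit integer form
decoded = encoded / 65535.0                        # back to normalized values
```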

Similarly, CIELAB is also a color model, not a space, so I'm not sure how it could be chosen as an output space although it is most certainly a PCS for some profiles.

A very well written article on the key elements related to color management. You hit the high points of a technically complex issue. A good contribution for enthusiasts seeking good background information on the topic. Thanks for the work.

I've got a number of sunset shots with saturated oranges that I've processed with both sRGB and Adobe RGB, and the differences are quite noticeable from my Canon PixmaPro 9000 II (particularly when using Red River papers). The sRGB results are dull and disappointing.

Is there any advantage for skin tones (black, brown, white/pink, and yellowish) in using color spaces larger than sRGB? Or is sRGB sufficient for skin tones?

I'm usually not that interested in videos - but this tutorial is worth checking out. Some interesting comparative 3-D looks at ProPhoto, Adobe, and sRGB spaces, and some looks at a few images to see how these color-spaces encompass (or fail to encompass) the recorded RAW color information.

sRGB is fine for skin tones; they fall within the gamut.

Indeed, sRGB may be preferable if you're working with 8-bit files. Because sRGB is a narrower space than Adobe RGB but uses the same number of values (256) for each component of the RGB triple, the "distance" between adjacent colors is smaller in sRGB, so you get finer color gradations. However, if you're working with 16-bit files, this is a non-issue.
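As a back-of-the-envelope illustration of the argument (the gamut "width" of 1.4 below is a made-up number for the sketch, not a measured sRGB/Adobe RGB ratio):

```python
# Every 8-bit channel has only 256 codes regardless of gamut size, so
# stretching the same codes over a wider space makes each step a bigger
# jump in color. At 16 bits the steps are fine enough that it stops mattering.
# The 1.4 "width" of the wider space is an invented number for illustration.

levels_8 = 2 ** 8    # 256 codes per channel
levels_16 = 2 ** 16  # 65,536 codes per channel

step_srgb_8bit = 1.0 / (levels_8 - 1)    # per-code step in the narrower space
step_wide_8bit = 1.4 / (levels_8 - 1)    # same code count, wider span: bigger step
step_wide_16bit = 1.4 / (levels_16 - 1)  # 16 bits: far finer than 8-bit sRGB
```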

You're most welcome, Doug. I'm glad it was of value, and I appreciate your comment.

Starting from Raw, when you know your final image needs to be in sRGB, does it make more sense to

1) open it and perform PP in a much larger color space like ProPhoto RGB, converting to sRGB at the very end; or 2) open the Raw file directly in sRGB and stay in it throughout?

A few years ago I used Melissa D65 as my primary working color space, fine-tuning images in that large space and only converting to sRGB at the end of the workflow if needed. Often the sRGB version then required additional fine-tuning, but at least the original with all my adjustments would be in Melissa D65 for archival purposes, and I wouldn't have to revisit it in the future if/when monitors/media improved. Or so went the theory.

It worked well, except that I realized I was spending a lot of time re-fine-tuning most of my keepers because the vast majority of them needed to be turned into sRGB after all. Statistics to the rescue: 90+% of my keepers need to be in sRGB because someone wants a copy via email or a Walmart print today - only one or so a month gets the special fine-art treatment, eventually being printed large to perfection.

And I started thinking: if red flowers clip when going straight to sRGB, they probably still will when ending up in sRGB after ProPhoto - except that in the latter case you'd only realize you are clipping them at the end of your session, adding extra PP time to get them the way you want. So I now do it the other way around: sRGB throughout for most keepers, and aRGB or a larger space only for the very few images that I print large. CNX2, which I use on 100% of my captures (90% of the time ending there, without needing a trip to CS5), makes it easy to make this change after the fact while leaving all other adjustments intact. This thread gave me the impulse to revisit this decision.
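The point about only discovering clipping at the end of a session suggests checking for out-of-gamut pixels up front. A rough sketch of such a check; the identity matrix below is a stand-in for illustration, not a real working-space-to-sRGB profile matrix:

```python
import numpy as np

# Early out-of-gamut check: given linear RGB pixels in a wide working space
# and a 3x3 matrix into linear sRGB, report the fraction of pixels that will
# clip when the final conversion happens. The matrix here is a stand-in
# (identity), not taken from any real profile.

def clipped_fraction(pixels, to_srgb_linear):
    """pixels: (N, 3) array of linear RGB values in the working space."""
    out = pixels @ to_srgb_linear.T
    clipped = np.any((out < 0.0) | (out > 1.0), axis=1)
    return clipped.mean()

M = np.eye(3)                          # stand-in conversion matrix
sample = np.array([[0.2, 0.3, 0.4],    # in gamut
                   [0.9, 0.1, 1.3]])   # blue channel will clip
```

Running this on an image at the start of a session would tell you immediately whether the "ProPhoto now, sRGB later" route is storing up clipping trouble for the end.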

Thanks to Tim Lookinbill's Blue Ball and gollywop's sunset and flower Raw files, all of which have histograms that clip in sRGB but not in ProPhoto in the areas of interest, I was able to use ACR 6.7 and CS5 to investigate the differences to be expected between 1) and 2) above.

If opened with the neutral camera profile and no adjustments are applied between opening and converting, the images resulting from the two workflows are virtually indistinguishable. Here Tim's blue ball is shown with workflow 2), workflow 1), and plain ProPhoto, left to right, on my calibrated/profiled Dell U2410 monitor run by Win7, which covers 95% of aRGB.

Very slight differences between the two workflows became apparent when adjustments were introduced. Something as simple as changing the ACR 6.7 camera profile from Camera Neutral to Camera Landscape caused some slight but visible differences to appear in the two resulting images, neither necessarily better than the other. Switching back to Camera Neutral and pushing a more aggressive adjustment (CEP3 Tonal Contrast at default settings) resulted in this comparison:

There are tiny localized differences (sRGB only to the left, ProPhoto converted to sRGB to the right). I know where they are, so I can spot them easily. But neither image is clearly more accurate or preferable to the other.

So unless someone has a good argument otherwise, I think I am going to stick with my current approach: sRGB as my day-to-day working color space, and aRGB/Melissa D65 for the few occasions when I feel it is necessary - as opposed to the other way around.

Cheers, Jack

PS For those who are wondering, ProPhoto/aRGB from start to finish give clearly better colors on my U2410 monitor than the sRGB/ProPhoto+sRGB workflows discussed above. This is especially evident in gollywop's sunset image.

PPS Apologies for cross posting on LuLa, these two threads came about independently and at the same time.

Thanks for that comparison, Jack.

The strategy that you suggest is more or less where I've come down over the years as well, particularly since I've been far more web-oriented in recent times. I take my web-destined dngs into ACR and process them for optimal (my taste) results in sRGB and make a snapshot (labelled, of course, "sRGB"). This dng becomes my archive. I then either save a jpeg out of ACR or, more often, take the psd into PS for some further PP, particularly Shadows/Highlights (sometimes), sizing/resampling, and output sharpening. If there is anything particularly arduous or non-obvious about the PP, I create a "settings" text file with a description of what was done. Using the shorthand I've developed over the years, these entries are quick and easy and allow very quick and sure replication if needed.

[I've adopted a similar workflow when using RPP now that Andrey has added the possibility to output in sRGB. That was a nice move.]

If I now want a print-destined version, I simply go back to ACR and revamp things using either Adobe RGB or ProPhotoRGB for the working space. It rarely takes much change to get an "optimal" production in the broader space (usually just less reduction in Highlights, a touch higher White, and perhaps a little more aggression with Clarity and Vibrance). I then save this version as another snapshot (yep, you've guessed the name). The result then goes into PS for appropriate PP.

I find that, for my print-destined PP, relative to the web-destined PP, it helps to pull the center of the curve up just slightly (a couple of points vertically), add a touch more saturation (4-8), and give it some Local Contrast (Hiraloam - but employing an inverted Luminance mask -- I've written an action that does this quite nicely). I then tweak things with a soft proofed duplicate using the printer/paper profile. Then I resize/resample, output sharpen, and print.

...this is something I've always wondered about. Whenever you expand the range of colors, you necessarily decrease the distance between colors, for a given bit depth. So, the question, then, is when the range of colors matters more than the gradations of color, and vice-versa, as a function of existing colorspaces.

The simple solution, of course, is to use 16-bits. That does fine even with ProPhoto RGB.

The simple solution, of course, is to use 16-bits.

I take that as a given!

That does fine even with ProPhoto RGB.

So 16 bits per channel gives at least enough color separation in PPRGB? Is the implication, then, that 16 bits is overkill for less expansive color spaces like sRGB?

Well, there's more to it than that – as you well know, there always is. The real advantage of 16-bit images is the ability of the colors to survive tonal manipulations without posterization. This would apply equally well to sRGB as to a broader space. I doubt you can ever consider the potential of 16-bits overkill, but I suspect, even with sRGB you can consider the potential of 8-bits to be underkill.

What I'm asking, I guess, is if 16 bit sRGB will ever have any practical advantage over a larger 16 bit colorspace due to the finer gradations of the colors it does represent.

Well, I have certainly encountered significant posterization in large expanses of blue skies under aggressive tone mapping when PP-ing an 8-bit sRGB jpeg. This is not an uncommon experience.

And, unfortunately, once an image starts life as an 8-bit image, you don't gain a heck of a lot just converting to 16 bits; all those gradations in between don't suddenly get created. If you're going to use 16-bits with sRGB, you want to shoot raw and process with 16-bits from the outset.
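This is easy to verify: scaling an 8-bit ramp up to 16 bits leaves exactly the same 256 distinct levels, just spread further apart.

```python
import numpy as np

# Widening an 8-bit image to 16 bits does not invent intermediate gradations:
# the number of distinct values is unchanged, only the spacing of the codes.

ramp_8bit = np.arange(256, dtype=np.uint8)      # every possible 8-bit value
ramp_16bit = ramp_8bit.astype(np.uint16) * 257  # the usual 8-to-16-bit scaling

unique_levels = len(np.unique(ramp_16bit))      # still only 256 distinct levels
```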

As to just what color differences humans can see, and how fine practical gradations can be: we humans are apparently much more sensitive to small shifts in blues than in reds, and more in reds than in greens. So the whole notion of JND (just noticeable difference) in color is very wavelength-dependent. Clearly, going to 16 bits is a huge boost: it gives a 256-fold increase in the number of divisions over 8 bits, and none of the color spaces is anywhere near 256 times larger in linear dimension.

What I'm asking, I guess, is if 16 bit sRGB will ever have any practical advantage over a larger 16 bit colorspace due to the finer gradations of the colors it does represent.

Well, I have certainly encountered significant posterization in large expanses of blue skies under aggressive tone mapping when PP-ing an 8-bit sRGB jpeg. This is not an uncommon experience.

For sure. But I'm talking about 16 bit files.

And, unfortunately, once an image starts life as an 8-bit image, you don't gain a heck of a lot just converting to 16 bits; all those gradations in between don't suddenly get created. If you're going to use 16-bits with sRGB, you want to shoot raw and process with 16-bits from the outset.

That's what I mean. Will sRGB ever have an advantage over aRGB or PPRGB due to its finer gradations under those circumstances?

As to just what color differences humans can see, and how fine practical gradations can be: we humans are apparently much more sensitive to small shifts in blues than in reds, and more in reds than in greens. So the whole notion of JND (just noticeable difference) in color is very wavelength-dependent. Clearly, going to 16 bits is a huge boost: it gives a 256-fold increase in the number of divisions over 8 bits, and none of the color spaces is anywhere near 256 times larger in linear dimension.

But beware, in assessing the diagram they give there, that they have exaggerated the sizes of the ellipses ten-fold.

I'm fully aware of the advantages of 16 bits vs 8 bits. What I'm wondering is if 16 bits representing a less expansive colorspace might not have advantages, on occasion, over 16 bits representing a larger colorspace, due to the finer gradations of the smaller colorspace.