I couldn’t find a good answer via google. Should I be using 2.2 or sRGB when creating my profile? The only major difference I’ve noticed is that the darks have much lower gamma with sRGB. I think I want the setting that matches more closely with what people see, which I assume is 2.2.

But, what is your thought? Which is more common, and does the less common option have any advantages?

The reason you didn’t get an answer is that the entire context of the question is nonsensical…

sRGB is effectively gamma 2.2. So you should get the same tonal results from DisplayCAL for sRGB and G2.2.

If you see a difference, something is going wrong.

As to CRTs: the concept of gamma is defined and understood in terms of the relationship between our visual response to lightness and the physics of CRTs, because electronic displays originated with television. The sRGB/G2.2 tonal response is a compromise between television and computer-graphics practice. The only way to understand why this is so is to recount the history of color science, TV, and computers. Super complex.

For people just getting started with DisplayCAL, it’s best to simplify your environment down to one decision: are you primarily interested in web or video?

If web, you want sRGB.

If video, you want Rec.709, which shares sRGB’s primaries but targets a display gamma of about 2.4 (per BT.1886).

If your display has settings pertaining to either of these options, make your choice and select them before running DisplayCAL.

This is an oversimplification, but the place to start.

If you have a wide-gamut display or UHDTV, you’re in a whole other realm. Try to simplify, if you can, by setting your display per the above until you can learn more about the implications, which depend on many factors.

sRGB is effectively gamma 2.2. So you should get the same tonal results from DisplayCAL for sRGB and G2.2.

No. As noted by the OP, sRGB has a linear segment near black, so in a color managed environment, if you assign a Rec. 709 “pure” gamma 2.2 profile to an sRGB image, you’ll see darker near-black tones (and vice versa).
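The near-black divergence is easy to quantify. A minimal sketch (function names are mine; the constants are those of IEC 61966-2-1):

```python
# Compare the true sRGB EOTF (piecewise, with a linear toe)
# against a pure 2.2 power curve for dark 8-bit code values.

def srgb_eotf(v):
    """Encoded sRGB value in [0, 1] -> linear light."""
    if v <= 0.04045:
        return v / 12.92                     # linear segment near black
    return ((v + 0.055) / 1.055) ** 2.4

def gamma22_eotf(v):
    """Pure power-law decoding."""
    return v ** 2.2

for code in (2, 5, 10, 20):
    v = code / 255.0
    # the pure 2.2 curve yields less light here, i.e. darker near-blacks
    print(code, srgb_eotf(v), gamma22_eotf(v))
```

The gap is largest in the toe and fades by the midtones, which matches the observation that only the darks differ.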

Yes I am aware of the distinction but felt it was a distraction to mention it.

For most users the effect will not be visible.

For anyone who cares, with access to Photoshop, you can see for yourself.

Take the Adobe-supplied sRGB profile, which is a true one with the split (sRGB IEC61966-2.1), and set Edit > Color Settings > Working RGB to it. Then use the same setting again but choose Custom RGB. This will create a “simplified” sRGB version using pure gamma 2.2. Save it.

Now bring a black patch test image, like the ones at lagom.nl/lcdtest, into Photoshop and assign the true sRGB to it, then assign or soft proof the simplified sRGB. If you can tell the difference I will be very surprised.

You can demonstrate there is a difference by duplicating the black patch test, assigning one copy the true sRGB and the other the simplified sRGB, converting the second back to true sRGB, and overlaying the two versions in Difference mode: a barely discernible difference is present, and it can be enhanced with a Levels adjustment layer.
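The same Difference-mode comparison can be approximated numerically without Photoshop. A sketch, assuming 8-bit quantization (function names are mine):

```python
# Numeric analogue of the Photoshop difference-mode test:
# encode a dark-to-light linear ramp with the true sRGB curve
# and with pure gamma 2.2, then look at the 8-bit difference.

def srgb_oetf(l):
    """Linear light in [0, 1] -> encoded sRGB value."""
    if l <= 0.0031308:
        return 12.92 * l                     # linear segment
    return 1.055 * l ** (1 / 2.4) - 0.055

def gamma22_oetf(l):
    return l ** (1 / 2.2)

max_diff = 0
for code in range(256):
    l = code / 255.0                         # treat as linear light
    a = round(255 * srgb_oetf(l))
    b = round(255 * gamma22_oetf(l))
    max_diff = max(max_diff, abs(a - b))

# only a handful of code values, concentrated in the shadows
print("max 8-bit difference:", max_diff)
```

The disagreement amounts to a few code values near black, which is why it only shows up on a patch image under controlled conditions.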

If ordinary users can see the difference between true sRGB and G2.2 profiles in their photo work, I will be dumbfounded to hear it! Though no doubt there are golden eyes out there.

In my experience, if the original poster, based on the way he posed his question and conclusion, can see a difference, something else has gone wrong.

Also, doesn’t Rec.709 use a split curve too?
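It does. Sketching the BT.709 camera OETF with its published constants shows the same kind of split (function name is mine):

```python
# BT.709 camera OETF: linear below 0.018, power 0.45 above,
# analogous to sRGB's split at the bottom end.

def rec709_oetf(l):
    """Scene-linear light in [0, 1] -> encoded Rec.709 value."""
    if l < 0.018:
        return 4.5 * l                       # linear segment
    return 1.099 * l ** 0.45 - 0.099

# the two pieces meet (approximately) at the breakpoint
print(rec709_oetf(0.017999), rec709_oetf(0.018))
```

Note this is the camera-side encoding; the display-side EOTF for Rec.709 content is given separately by BT.1886.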

The conventional wisdom is that these splits are attributable to the limits of arithmetic units in embedded microprocessors in early digital devices, and according to C. Poynton in his Gamma FAQ, a linear-light segment at the bottom end helped control sensor noise in early designs. Both sound plausible to me.

I take back what I just said about discernible differences between true sRGB and G2.2. I just went through the exercise I described in the previous post, and soft proofing G2.2 against true sRGB creates a readily visible difference!

FWIW, for anyone finding this post: I debated the sRGB issue back and forth for a while. What finally sold me was that the way Photoshop displayed an image with a profiled display never really changed, while the gamma/LUT table setting (2.2 vs. sRGB curve) did make a difference. I settled on the sRGB curve since it was visibly the smallest difference between Photoshop and non-color-managed programs. (And since the web and basic images are “supposed” to be in the sRGB color space, not just gamma 2.2, I’m fairly confident that this is the correct setting.) OTOH, your mileage may vary if you are trying to match what most people might see on a non-calibrated system; they might be seeing things closer to gamma 2.2. I decided there’s a standard for a reason, and maybe someday everybody will adhere to it ;), so sRGB it is for me (and I’d rather my photos look the same everywhere I view them on my system).

For me, I also favored going into Advanced in the Profile tab and changing the gamut mapping from perceptual to relative colorimetric, and the target viewing conditions to a darkened work environment. But that’s just me…

I don’t think most web -audiences- are looking at the srgb tone curve.

All the recent TVs and monitors I’ve plugged the PC into do not exhibit the sRGB tone curve natively. They’re all closer to gamma 2.2 (relative) with a black output offset.

Could you say a bit more about how you determined this? You are saying this is your visual experience. What sort of an image do you use to make the assessment?

When you say “black output offset” do you mean the brightness control?

I don’t know precisely when device makers made the switch, but sRGB is definitely not the standard tone curve on devices, even if it is the standard image profile/working space in Photoshop.

“Devices” vary a lot. No, they vary insanely a lot! We all see this or we wouldn’t bother trying to align them. So to say “…device makers made the switch…” is an odd turn of phrase. Can you say more about this?

For example, if devices perfectly targeted standards, which they usually don’t for many reasons, today there are so many standards it will make your head swim: sRGB, Rec709, AdobeRGB, DCI-P3, DisplayP3, Rec2020 (yikes this last one has to have support for a virtual gamut because no actual device can display its full gamut). Add gimmicky intermediates like BT.1886, HLG, S-Log, and on and on. Then compound it with profile formats, OS support, app support, colorimeters that can’t handle the light emitted by LEDs made from the ambergris of the White Whale… My god.

As a further aside, this creates a true conundrum. With so much device variation, and various standards to target, how do we agree? Anyone who has spent any time on this has seen that color management cannot solve this problem. It can help you align to a standard, and characterize a transformation from one regime to another. But there are always one-way streets and barriers. In a perverse sense, the promise of color management has always been a lie: you can never achieve repeatability across all devices without a lowest common denominator. And the industry is forever improving things! Where does that leave you? Chasing after the latest stuff. Don’t get me wrong, this can be fun.

And this makes sRGB very valuable. It’s a good cut for a lowest common denominator. Notwithstanding the Rec.709 TRC for video, arrrgh! See? Pick any standard you like and you are likely looking at some content made for a different assumption: choose sRGB assuming web graphics, then play a video in VLC. Look at Display P3 photos from a phone camera, etc. Maybe you shoot raw and edit in Adobe RGB to preserve all the ju-ju, then export to sRGB to post online. And so forth.

Something I’ve never heard anyone say, ever, is “my photos just looked so drab when I exported them to sRGB.” Something I hear people say all the time is “my Adobe RGB photos look drab to other people on the web.” This is the true promise of color management: to increase the accuracy by which you make mistakes <wink>

But back to topic at hand.

To me, the distinction between sRGB’s hybrid response curve and pure G2.2 is noticeable only under highly controlled conditions. Yes, the difference can be seen!

But in my experience this difference is overwhelmed by other factors in out-of-box scenarios. I pay attention to image fidelity more than the average bear, and I would never be able to pick G2.2 vs. true sRGB by surfing web images on tumblr or some such. I wonder in amazement at an eye so well trained that it could, unless it was looking at a specific image with traits chosen for the purpose.

So, a web artist working in this mismatched environment must be compensating for it; I find that many websites look more -correct- without Firefox’s color management turned on.

So it’s this specific statement that has me replying here…

What are the mismatched environments?

What’s a “web artist”, and what distinguishes his process?

How do they compensate?

How does Firefox color management differ from other applications?

I ask all this because I suspect you may be struggling with the details of your specific display rather than observing general principles of ICC color management.

For what it’s worth, Chrome users should probably target the sRGB TRC when calibrating (at least if the difference with gamma 2.2 matters to them), since: https://crrev.com/c/1592680

Well, if you care about accurate color, you shouldn’t be using Chrome, plain and simple. All the issues they had with regard to color management were due to their own incompetence on the implementation side. Basically, they were ripping out parts of profiles without actually understanding what those parts were for (which is a borderline insane idea to begin with; there are libraries and tools for this, and they could have consulted experts for help), and trying to derive their own interpretation of how these things should work in order to simplify the color transform (I assume for performance reasons). This then exploded in their face (no surprise at all here), and they basically ended up removing their “color management” (which never really was proper color management to begin with). I’m somewhat glad they mostly got rid of their dysfunctional concoction in spring this year; watching the rot was painful.

My guess is that when sRGB was defined, the slope of the curve near black was considered so subtle compared to the chosen power curve (an L* approximation) that no one would notice. And I’ll venture to say that I think such an assumption generally holds true these days.

It’s well-understood that the point of the linear slope and discontinuity of the bottom-end of the curve was to make certain calculations practical for the slow, low precision (integer arithmetic) micro-controllers that were becoming popular in imaging gear around the time the standard was defined. And—according to the Guru of gamma C. Poynton—it helped mitigate sensor dark noise typical of early CCDs.

If you accept these explanations then you see that the visual vs. functional trade-offs of sRGB vs G2.2 must be regarded as truly academic features of history.
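The integer-arithmetic argument is easy to see concretely: a pure power encoding has unbounded slope at black, which is hostile to fixed-point hardware, while sRGB’s linear segment caps the slope at exactly 12.92. A sketch (function names are mine):

```python
# Why the linear segment helps fixed-point hardware: the pure
# power encoding l ** (1/2.2) has unbounded slope at black,
# while the sRGB encoding's slope is capped at 12.92.

def gamma22_oetf(l):
    return l ** (1 / 2.2)

def srgb_oetf(l):
    if l <= 0.0031308:
        return 12.92 * l                     # bounded slope at black
    return 1.055 * l ** (1 / 2.4) - 0.055

h = 1e-8
print("pure 2.2 slope near 0:", gamma22_oetf(h) / h)   # very large
print("sRGB slope near 0:   ", srgb_oetf(h) / h)       # 12.92
```

A bounded slope means a fixed number of integer code values per unit of linear light near black, instead of a precision requirement that blows up as the signal approaches zero.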

Now let’s consider your conjecture that display makers tend to deploy G2.2. While it might be true, it seems to contradict the engineering justifications for sRGB’s spliced transfer function. So your conjecture really needs to be demonstrated. In my experience, out-of-box display gamma suffers from wild inaccuracies far beyond the sRGB discontinuity.

The actual effects you are considering are engineering artifacts, not stylistic conventions, and they are often overwhelmed by factors you haven’t mentioned.

So I think your idea of “artistic intent” only has bearing in this matter as your personal way of adapting to unknowns about viewing conditions.

Nonetheless, if I follow your thinking about artistic fidelity, I see how you have to make a choice, both about how to render the images you view, and how to code your images for viewing by others.

The de facto standard is sRGB. When images are handled according to the web standards it yields results that are entirely correct, and it is an effective compromise assumption for your “uncalibrated” use case.

Uncalibrated is not a standard. It’s a category of the unknown. Therefore trying to second-guess G2.2 as a convention is a double standard.

Start with the principle that you can’t control what others do with their displays. You can control yours and your images, then make a choice about what standard to follow. From here, sRGB must be the obvious selection. And if you are doing sRGB, then you need to use that janky spliced TRC.

If you are doing natural scene photography, the difference between the sRGB and G2.2 TRC is so subtle it’s typically overwhelmed by other factors. If you wanna use G2.2 for some local reason, no one will ever notice, at least not any more so than they would notice or question the auto-exposure of your camera or your development aesthetic.

If you design graphics which are composited on the web, or deal in spot colors, etc., or have a numerical view of color, then you very much care about the difference between the sRGB and G2.2 curves. In this case, you will have to conform with the dictates of your project and team.