Unified Color Technologies (UCT), in their line of HDR image processing software, uses a color space they call "Beyond RGB". Just what is that?

Well, it turns out that the native file format for BRGB is BEF, and I suspect that describes a color space as well (Bef, actually).

One chart that appears here and there in the UCT documentation, "illustrating" the BRGB color space, is labeled "Bef2" (maybe an editorial slip). So maybe "Beyond RGB" is also "(a little) beyond Bef".

It appears that this color space is either a luminance-chrominance color space or a pseudo-luminance, pseudo-chrominance color space (I actually suspect the former). For reference, the L*a*b* color space is a pseudo-luminance, pseudo-chrominance color space.

Evidently, the coordinates of the color space are B, e, and f (fancy that). B is apparently the luminance (or pseudo-luminance) coordinate; the symbol is doubtless evocative of "brightness".

The coordinates e and f are apparently broadly similar in concept to the coordinates a* and b* of the L*a*b* color space (often called just "a" and "b").

Apparently 32-bit floating point representations of the coordinates are used (probably per IEEE-754).

The chart mentioned earlier is apparently a section through the Bef2 color space at some value of B (a chrominance plane). Apparently it shows a section through the sRGB color space as plotted in the Bef2 color space. Its outer boundary is apparently the boundary of the gamut of human visibility at that value of B.

In all their literature, they use the word "color" to mean either chromaticity or chrominance. (As I say in my papers on color models, "as lay people, unaware of the technical meaning of color, do".)

In one piece, they say:

Beyond RGB is similar in concept to the Lab color space, in that it is also a three dimensional space with brightness or luminance on one axis and color information on the other two. In practical application however they differ significantly. [red] Brightness changes in Lab can often introduce changes in color too [/red], where as [green] Beyond RGB maintains the integrity of the color data as brightness is changed [/green]. [Color keying added; since the colors do not survive here, the keyed passages are marked in brackets.]

What does the red passage mean? It might mean this:

In the L*a*b* color space, if we change L* but leave a* and b* the same, the chromaticity of the represented color changes.

[True.]

or maybe this:

In the L*a*b* color space, if we change L* but leave a* and b* the same, the chrominance of the represented color changes.

[A little true.]

Or maybe something else.

What does the green passage mean? It might mean this:

In the BRGB color space, if we change B but leave e and f the same, the chromaticity of the represented color does not change.

[Hard to believe, if e and f define a chrominance plane.]

or maybe this:

In the BRGB color space, if we change B but leave e and f the same, the chrominance of the represented color does not change.

[Easy to believe, but so what.]

Or maybe something else (but no doubt very desirable).

It's also possible that e and f are chromaticity, rather than chrominance, coordinates. In that case, presumably, if we change B but not e or f, the chromaticity of the represented color does not change.

Beyond RGB is similar in concept to the Lab color space, in that it is also a three dimensional space with brightness or luminance on one axis and color information on the other two.

That they lump brightness (not Lightness) and luminance in the same sentence as if they are the same is enough to warrant suspicion here!

From what I understand, the L in Lab is Lightness (a property of color). Brightness is a human perception, not the same. Luminance is the property of an emissive surface or reflection from a surface (cd/m^2).

A distinction well taken.

The notion of lightness flows in part from the original orientation of the L*a*b* color space as a descriptor of "reflective color" (for paint finishes and the like).

But once we follow it to its later role for describing the color of light, the quantity L* tracks more closely with brightness (in the sense you mentioned) than luminance.

Just a small detail - the dimensionality of luminance is not luminous flux per unit area (as your unit suggests), but rather is luminous flux per unit solid angle per unit area, so its unit is:

cdsr^-2m^-2

(I have to use that form to avoid ambiguity when we have more than one "per".)

Another way to write it unambiguously (not considered editorially polite, because of the parentheses) is:

(lm/sr)/m^2

***********

Sparing no expense for our collective enlightenment, I have just sprung for the $18.00 cost of a paper in the SPIE journal by three Russian guys that discusses at some length the Bef color space and a later derivative, LinLogBef, comparing them with other color spaces of interest in the field of HDR imaging.

Interestingly, their paper shows the same figure I showed, and mentions that it is in fact a slice of the color space of the Bef color space for B= <black>. (I don't yet know how the coding works, so I cannot say what the numerical value of B is for <black>.)

We often run into this presentation when we show "chrominance" or "chromaticity" planes. The one shown is in fact most often for luminance=0 (although this is rarely mentioned). That of course seems paradoxical, since there is only one color with luminance=0 ("black"), for which chromaticity or chrominance is "undefined".

But in fact, an infinitesimal distance up the luminance axis, chromaticity or chrominance is meaningful. That is, a color with a very tiny, but not zero, luminance has a valid chromaticity or chrominance. Its range is the so-called "zero-luminance chromaticity [or chrominance] gamut". Formally (and less paradoxically), it is the limit of the chromaticity [chrominance] gamut as luminance approaches zero.

In fact, for the sRGB color space, the chromaticity gamut usually shown is that gamut (although it also applies through a finite range of luminance).

If you consider color in the CIE xyY system (the usual chromaticity presentation is an x-y plane of that three-dimensional space for some unmentioned value of Y), the sRGB gamut is of course a three-dimensional solid. If we look at its bottom (not the projection of the entire solid on that plane; just the "face" we would paste felt on so it did not scratch our coffee table), that is the sRGB gamut we usually see. And that face is in fact the "zero luminance chromaticity gamut" (indeed a mathematical fiction).

Well, the $18.00 paper was not of too much help (although certainly interesting).

But some clues in it allowed me to find another paper by two of the same authors that has been most enlightening.

What follows is my quick interpretation of what I have read so far.

**********

The Bef color coordinate system is a transform of another color coordinate system, DEF.

That coordinate system may be grasped from a figure in the paper (which I think is considered to be "drawn" in the CIE XYZ color space). It is a three-axis system:

 Axis D is the axis along which lie the color representations of all colors whose chromaticity is that of standard illuminant D65. (Yes, "D" is evocative of "daylight".)

 Axis E is the axis along which lie the color representations of all colors whose chromaticity is that of monochromatic (spectral) light of wavelength 700 nm.

 Axis F is orthogonal to the first two (that is, at right angles to both).

Now, there is a transform of this color coordinate system into one whose coordinates relate to a certain meaning of brightness (B), hue (H), and chroma (C). (We can inexactly think of chroma as being like saturation.)

For a color at point S, B is the length of the vector from the origin to S; H is the angle the projection of that vector onto the E-F plane makes with the E axis; and C is the angle that vector makes with the D axis.
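The geometry just described can be sketched in a few lines of Python (the function name and the degree convention are my own assumptions, not something from the UCT material):

```python
import math

def def_to_bhc(D, E, F):
    # A sketch of the B, H, C geometry described above; names are mine.
    B = math.sqrt(D * D + E * E + F * F)   # length of the vector from the origin to S
    H = math.degrees(math.atan2(F, E))     # angle of the projection on the E-F plane vs. the E axis
    C = math.degrees(math.acos(D / B))     # angle of the vector vs. the D axis
    return B, H, C
```

As a sanity check, a color on the D axis (E = F = 0) comes out with C = 0.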

When C=0, we have a color along the D axis, where chromaticity is always that of D65; that is, such a color would have zero saturation (assuming a white point of D65).

Neat!

Note that C and H together define the chromaticity of the color.

B is something like luminance, but not exactly.

Now, the Bef color coordinate system transforms the DEF coordinate system into a form slightly reminiscent of the L*a*b* color space.

B is the same B we just saw in the BHC color space. Algebraically, from the DEF model:

B=sqrt(D^2+E^2+F^2)

Also:

e = E/B

f = F/B

Note that in this model, if we start with a certain color (a certain B, e, f) and change B (to make the "brightness" of the color greater or less, but not to change its chromaticity), the values of e and f do not change.

Thus, the e-f plane becomes a plane of chromaticity.
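A minimal sketch of that transform in Python, illustrating the point just made: scaling the DEF vector (greater or lesser "brightness", same chromaticity) leaves e and f untouched. (The function name is mine.)

```python
import math

def def_to_bef(D, E, F):
    # The Bef transform given above: B = sqrt(D^2 + E^2 + F^2), e = E/B, f = F/B
    B = math.sqrt(D * D + E * E + F * F)
    return B, E / B, F / B

# The same chromaticity at two "brightnesses": the DEF vector is simply scaled.
B1, e1, f1 = def_to_bef(0.5, 0.3, 0.1)
B2, e2, f2 = def_to_bef(1.0, 0.6, 0.2)
# e1 == e2 and f1 == f2, while B2 is twice B1
```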

Why do I not speak of this as the "Bef color space"? Two reasons:

 I prefer, in this kind of work, to use the term color space in its original technical meaning: the "realm" of possible values of the color coordinates in a color representation system.

 I have not so far given (nor seen, actually) definitions of how the numerical values of B, e, and f are to be determined (part of the definition of a color space in the modern meaning of the term).

A parallel is that "RGB" is a color coordinate system ("color model"); sRGB is a color space.

*********

Now how this exactly relates to the Beyond RGB color space (GMWAS), I'm not sure.

By the way, the "2" in "Bef2" reminds me of the "DEF2" color coordinate system described by the authors of the paper I have been discussing. It means a DEF color coordinate system that is specifically predicated on the "2° standard observer" visual response (the one on which the 1931 CIE XYZ color space is predicated). "Bef2" may have the same significance.

And perhaps "Beyond RGB" is just a new marketing name (or accolade) for the Bef color space. "Where's the Be(e)f?"

Our purpose is getting to an impressive photograph. So we encourage browsing and then feedback. Consider a link to your galleries annotated, C&C welcomed. Images posted within OPF are assumed to be for Comment & Critique, unless otherwise designated.

In another paper by Bezryadin (the principal author of the papers I have been discussing), he shows that the DEF coordinate system is in fact a linear transform of the CIE XYZ color space (and gives the explicit transform matrix).

Now "according to whom" remains unclear. But in fact, the DEF space is likely a creature of his own work. And in fact I think he is the father of the Bef coordinate system.

It looks as if his company (KWE International Inc.) may be the source of some of the underlying technology of Unified Color Technologies' HDR stuff.

The firm seems to have Russian ties (and staff), but is owned by Kedah Wafers Emas Sdn Bhd in Malaysia.

Bezryadin points out in this paper that the coordinate B ("brightness") of the Bef coordinate system is dramatically different from luminance (in terms of its consistency over different chromaticities). Nevertheless, he suggests that it correlates well with "human perception". I haven't followed his detailed sleight of hand yet.

I find further evidence that the "2" of "Bef2" is like the "2" of "DEF2", probably alluding to the use of the CIE 2° standard observer model.

I've been spending a little time with various papers and presentations by Sergey Bezryadin, evidently the chief boffin at KWE/Unified Color. The papers are very technical, and involve some concepts in which I am not fluent, but they are well written and generally easy to follow. Although Sergey is Russian, his technical English is very good (if you don't always expect there to be definite articles in the expected place), and he has a good didactic approach.

Every time I read a statement in one of his PPS presentations, and thought, "Well, I think that would only be so if . . .", his next slide says, "Well, actually that is only so if . . ."

The DEF color space (which is essentially the starting place for the Bef color space, which is what Beyond RGB is, or is derived from) is based on what are called the Cohen metrics, which we can think of as defining a color space. The name refers to work by Jozef Cohen, who pointed out that a certain color space (or class of color spaces) has fundamental properties that commend it for certain work. They are, in a sense, "our native color space".

Most prominently, the "Cohen metrics" color spaces have the property that, overall, distances between points in the color space (using that term in its original meaning) are consistent with perceived color difference.

(I believe it was Bart who commented earlier that he thought the color space here was somehow connected with color difference formulas. I think that was right on.)

Now the DEF color space is evidently a transform of the "Cohen metrics" color space such that it is orthonormal with respect to the Cohen space.

That means that whatever the distance between the points representing two colors in one color space, there will be the same distance between the points representing those colors in the other color space.

In one of his presentations, Sergey in fact neatly shows that this is not true of a comparison between a linear version of sRGB (what I call srgb) and the CIE XYZ color space (the one used for "scientific" description of color).

In any case, it seems that the DEF color space is defined so as to meet that criterion of orthonormality with respect to the Cohen metrics plus these arbitrary (but certainly sensible) criteria:

 The D axis is the locus of all colors whose chromaticity is that of standard illuminant D65.

 The E axis is at right angles to that* and "heads toward" the chromaticity of monochromatic light of wavelength 700 nm.

 The F axis is at right angles to those two*.

*In the context of orthonormality with respect to Cohen metrics.

I've ordered Cohen's book.

All very interesting.

Next I will be attacking 32- and 16-bit floating point numbers, and denormalization, and all that. Many with serious computer science experience will already know about all that.

I'm so proud of you to be fathoming all this out AND presenting it in a lucid fashion. I can follow you here as easily on a professional guided tour of a city I've heard of but never visited before. I'd like to know if the Cohen book is as approachable!

Think that Dan Siman merely presented a Parakeet with overblown color and that had me trying to redo that and that's how we entered this rabbit's warren! We are now into color mapping on axes that relate to our perception, as Bart points out. Great potential for us here!

What a right bunch we have here!

The end result, I hope, is a better approach to our use of color correction, especially at all the perimeters of our Adobe RGB or ProPhoto RGB and other 3D-mapped color spaces we use.

Quote:

I'm so proud of you to be fathoming all this out AND presenting it in a lucid fashion. I can follow you here as easily on a professional guided tour of a city I've heard of but never visited before.

Thank you so very much.

Quote:

I'd like to know if the Cohen book is as approachable!

I just got a notice that it has shipped. I'll let you know.

I looked at a snippet of it on Google Books and its tone and style look promising.

Quote:

Think that Dan Siman merely presented a Parakeet with overblown color and that had me trying to redo that and that's how we entered this rabbit's warren! We are now into color mapping on axes that relate to our perception, as Bart points out. Great potential for us here!

What a right bunch we have here!

Absolutely.

By the way, the ongoing interaction with (in particular) Bart and Cem (I have called it "triangulation") has been especially profitable on many of these technical issues.

I discussed orthonormality in an earlier note, and the reader might wish to review that. Nevertheless, hopefully my explanation to follow will be self-contained with respect to the current issue.

For our purposes, the important aspect of a coordinate system that is said to be orthonormal is that (a) the abstract distance between two sets of values (considered to be represented by two points in the "number space" of the coordinate system) will correspond to (b) the actual distance between the two points. Let's look at those two things.

By "abstract distance" I mean what is formally called the Euclidean distance. If the space of the coordinate system were an actual, physical space (a meaningful metaphor for two- or three-dimensional coordinate systems), where the three axes are all mutually at right angles, then the Euclidean distance is in fact the physical distance between the points representing the sets of values. For the points (X1, Y1, Z1) and (X2, Y2, Z2), that distance, Se, is calculated as:

Se=sqrt((X2-X1)^2+(Y2-Y1)^2+(Z2-Z1)^2)
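For concreteness, that formula takes only a couple of lines of Python (a sketch; the function name is mine):

```python
import math

def euclidean_distance(p, q):
    # Se, per the formula above; p and q are coordinate tuples
    return math.sqrt(sum((b - a) ** 2 for a, b in zip(p, q)))

euclidean_distance((0, 0, 0), (1, 2, 2))   # -> 3.0
```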

Ok. But what would it mean for that to be the same as the "true" distance between the points?

If only this coordinate system exists, then it has no meaning.

But in the situations of interest to us, it means that the distance would be the same as the distance between the points reckoned under some other stated coordinate system (by implication, one perhaps considered "fundamental").

But then what does "Cohen metrics" mean? For our purposes, we can consider that as describing a "color space" (or one of a family of color spaces) described by Jozef B. Cohen as being "fundamental". That means that they have certain properties that qualify them to be considered "the native color space of human vision" (my description, not Cohen's).

I have not yet read Cohen's book, but based on what Bezryadin says, I suspect that among the pivotal properties of the Cohen color space(s) is that distances in them correlate well with the magnitude of perceived color difference (this is related to the matter of color difference formulas).

Thus, by choosing for his color space a coordinate system that is orthonormal to that of the "Cohen metrics" color space, Bezryadin preserves that desirable situation.

If the other ingredients of the color space play along, this might mean that in image editing systems using that color space, we would have a roughly uniform "perceived precision of color" across the entire gamut.

One promotional piece on the Unified Color Bef color space says that it has a dynamic range of 1076.

Wow! I just ignored that.

Just a minute ago, I calculated the "dynamic range" of an IEEE-754 32-bit floating point number using only normal, not denormal, values (the latter are really reserved for tiny intermediate calculation results, not for "data").

It is approximately 2.88 x 10^76.

You got that: "10^76"!

"1076"!

Oh, brother!

Wow, this scientific stuff is tough.
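The arithmetic behind that 2.88 x 10^76 figure can be checked in a few lines of Python (normal binary32 values only, per IEEE-754):

```python
import math

# IEEE-754 binary32 extremes, normal numbers only (denormals excluded):
max_normal = (2.0 - 2.0 ** -23) * 2.0 ** 127   # about 3.40e38
min_normal = 2.0 ** -126                       # about 1.18e-38
ratio = max_normal / min_normal                # about 2.88e76
print(math.log10(ratio))                       # about 76.46 -- "10^76", not "1076"
```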

Hi Doug,

It won't be the last time something gets lost in translation between the source and marketing ...
One could also argue that a colorspace doesn't have a dynamic range, but that the encoding precision allows to capture a certain range.

As a small side note, here is some information about other HDR file formats, their Dynamic range capabilities, and their precision. It also stipulates that while the dynamic range capabilities are more than adequate for most purposes, the precision of a 96 bits per pixel floating point TIFF is a bit of a waste because probably most of lower significant bits will just hold an accurate encoding of noise (given the input sources). It also makes for poor compression capability. That's probably also why the standalone Unified Colors software allows to save a (lossy ;-)) compressed version of the images in the BEF format.

One could also argue that a colorspace doesn't have a dynamic range, but that the encoding precision allows to capture a certain range.

Oh, quite so. I always try to maintain that distinction in my writing (but in some cases I don't, just for the sake of "colloquy").

Quote:

As a small side note, here is some information about other HDR file formats, their Dynamic range capabilities, and their precision.

Thanks for the link to that most useful paper. I think I have not seen it before (although the "spiral slice" illustrations - Fig. 15 in particular - seem strangely familiar - perhaps the format is not original with that author).

Quote:

It also stipulates that while the dynamic range capabilities are more than adequate for most purposes, the precision of a 96 bits per pixel floating point TIFF is a bit of a waste because probably most of lower significant bits will just hold an accurate encoding of noise (given the input sources).

Indeed.

Quote:

It also makes for poor compression capability. That's probably also why the standalone Unified Colors software allows to save a (lossy ;-)) compressed version of the images in the BEF format.

Yes, and I'm not sure what kind of non-reversible [;-)] compression they use.

Circling back to the notation front, I note that Holzer uses "dynamic range" to mean the encoding range of a coding scheme, and uses "accuracy" when "precision" is probably meant.

In the Bef scheme, it is not clear that the 32-bit floating point representation is really warranted for the two chromaticity coordinates e and f (that plane is in fact chromaticity, not chrominance, as I understand the derivation of the coordinates). It seems that Bef can encode about 4.6E+18 different values of chromaticity.

Which might seem overkill, although we're talking about linear gamma values that will undergo significant changes during the inevitable tonemapping process. The built-in encoding capabilities are probably exploited to a large extent, otherwise they would have tried to reduce the processing cost of manipulating such large numbers. Perhaps they use it to avoid cumulative errors building up to anything significant. Maybe they are just making things future proof (it can save a lot of cost if one doesn't need to re-write large amounts of code to accommodate larger values, think Y2K issues).

I'm a little baffled by Holzer's metric of "dynamic range", in particular by the value he assigned to sRGB.

He gives that as 1.6, and evidently his metric is the log10 of some ratio. That ratio would then be about 39.8. I have no idea where that comes from.

In sRGB, if we arbitrarily consider only achromatic colors (R=G=B), the ratio of the luminance implied by the largest value to that implied by the smallest value (or the steps between the lowest values) is about 3300.
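That ~3300 figure can be reproduced from the standard sRGB decoding function (a sketch in Python; the function name is mine):

```python
def srgb_decode(code):
    # sRGB piecewise transfer function: 8-bit code value -> relative luminance
    v = code / 255.0
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

ratio = srgb_decode(255) / srgb_decode(1)   # about 3295
```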

According to a reference Holzer used as one of his main sources (Reinhard, Erik, et al., High Dynamic Range Imaging: Acquisition, Display, and Image-Based Lighting), the concept of the dynamic range of sRGB referred to is the ratio of the highest luminance it can encode to the luminance at the point on the scale where the "error" becomes 5%. This is said to be the point at which "banding" due to quantization becomes noticeable; thus, lower luminance values are "damaged goods".

I suspect that a quantizing step of 5% is in fact what is meant - badly named as "error". (The quantizing error there would be ±2.5%.)

But based on the "quantizing step=5%" premise, I can't get Reinhard's result (as stated by Holzer). In the sRGB encoding system, the luminance step size becomes 5% of the luminance value at about RGB=37, at which point the relative luminance is 0.0185, for a ratio to maximum luminance of 54. That would have a log10 value of about 1.7.
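Those figures can be checked numerically (a sketch in Python, again using the standard sRGB decoding function; names are mine):

```python
import math

def srgb_decode(code):
    # sRGB piecewise transfer function: 8-bit code value -> relative luminance
    v = code / 255.0
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

y37 = srgb_decode(37)                      # about 0.0185
rel_step = (srgb_decode(38) - y37) / y37   # a bit under 5% at this point
ratio = srgb_decode(255) / y37             # about 54
print(math.log10(ratio))                   # about 1.73
```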