The rainbow is dead…long live the rainbow! – Perceptual palettes, part 3

Introduction

Matteo, so would I be correct in assuming that the false structures that we see in the rainbow palette are caused by inflection points in the brightness? I always assumed that the lineations we pick out are caused by our flawed color perception but it looks from your examples that they are occurring where brightness changes slope. Interesting.

As I mention in my brief reply to the reader’s comment, I’ve done some reading and more experiments to better understand the reasons behind the artifacts in the rainbow, and I am happy to share my conclusions. This is also a perfect lead-in to the rest of the series.

Human vision vs. the rainbow – issue number 1

I think there are two issues that make us see the rainbow the way we do; they are connected, but more easily examined separately. The first is that, at a given light level, we humans perceive some colors as lighter (for example, green) and some as darker (for example, blue). This is because of differences in the fundamental color response of the human eye to red, green, and blue light (the curves describing these responses are the cone sensitivity curves).

There is a well-written explanation of the phenomenon on this website (and you can find here color matching functions similar to those used there to create the diagram). The difference in sensitivity of our cones explains why, in the ROYGBIV color palette (from the second post in this series), violet and blue appear darker to us than red, and red in turn darker than green and yellow. The principle ‘… applies also to mixes involving the various cones (colours), hence the natural brightness of yellow which stimulates the two most reactive sets of cones in the eye’. We could call this a flaw in color perception (I am not certain what the evolutionary advantage might be), and it is responsible for the erratic appearance of the lightness (L*) plot for the palette shown below (if you would like to know more about this plot, and get the code to make it and evaluate color palettes, please read the first post in this series).
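The lightness values behind a plot like this can be computed directly from a palette’s RGB samples. As a rough sketch (this is my own illustration, not the code from the first post), here is a minimal NumPy conversion from sRGB to CIELAB lightness L*, assuming the standard sRGB transfer function and D65 reference white; it is enough to confirm that blue is perceived far darker than green, and yellow lightest of all:

```python
import numpy as np

def srgb_to_lightness(rgb):
    """Convert sRGB values in [0, 1] to CIELAB lightness L* (D65 white)."""
    rgb = np.asarray(rgb, dtype=float)
    # Undo the sRGB gamma to get linear RGB
    linear = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # Relative luminance Y: middle row of the sRGB-to-XYZ matrix
    y = linear @ np.array([0.2126, 0.7152, 0.0722])
    # CIE L* from Y (Yn = 1 for the reference white)
    return np.where(y > (6 / 29) ** 3, 116 * np.cbrt(y) - 16, (29 / 3) ** 3 * y)

# Pure blue reads far darker than pure red, green darker than yellow
print(srgb_to_lightness([[0, 0, 1], [1, 0, 0], [0, 1, 0], [1, 1, 0]]).round(1))
```

Running this on the four primaries above prints L* in increasing order from blue to yellow, which is exactly the “flaw” responsible for the erratic L* profile of ROYGBIV.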

So to answer Steve, I think yes, the lineations we pick out in the rainbow are caused by inflection points in the lightness profile, but those in turn are caused by the differences in color responses of our cones. But there’s more!

Human vision vs. the rainbow – issue number 2

The second issue with human vision is that our ability to perceive CHANGES in hue is also variable, depending on the wavelength. This is illustrated by the hue discrimination curve shown in the figure below, which plots, for each wavelength of light, the smallest observable difference in hue (expressed as a wavelength difference). The figure is from Gregory’s 1964 book Eye and Brain (which, by the way, is a wonderful read; I highly recommend it to anyone interested in human vision, together with The Vision Revolution by Mark Changizi).

There is a ‘digital era’ version of the curve by Dawson in Figure 13 of this color perception review, and some very interesting material here, but I prefer to quote directly from Gregory. According to him, the curve in the figure shows that hue discrimination is ‘… smallest – best possible hue discrimination – where the response curves have their steepest opposite (one going up, the other down) slopes … [and] … we should thus expect hue discrimination to be exceptionally good around yellow – and indeed this is so’.

That’s it! It is all in this one paragraph. For example, yellow is such a hard edge not only because it is the lightest color (issue 1), but also because it is the one for which we can see changes most easily (issue 2); when color palettes are built by interpolating linearly between hues, this makes things worse.

Finding out about this discrimination curve also gave me an insight into a possible way to correct the rainbow. It pushed me to ask: “can I use the curve to dynamically stretch the rainbow where transitions are too sharp (most evidently around the yellow, green, and blue), compared to everywhere else?” And the answer was: ¡sí se puede! It can be done, with a correction function calculated directly from the discrimination curve, and this is how I did it.

The top panel in the figure below is again the ROYGBIV rainbow color palette from the second post of the series. The second panel is a plot of the lightness L* corresponding to each sample in the color palette (x is sample number). In the third panel I reproduced Gregory’s wavelength discrimination curve (x is wavelength in nm). Notice that there are three major changes of gradient in the lightness profile, and that they correspond to three highs and lows in the hue discrimination curve. It was this observation that brought the Eureka moment.

The fourth panel is my correction function, which is essentially an inverted and rescaled version of the discrimination curve. I used the function to resample the color palette at varying, non-integer sampling rates: up to 1.5 samples/nm in the yellow area, unmodified at 1 sample/nm in the blue area, and less than 1 sample/nm everywhere else, with the total number of samples remaining 256. This resulted, for example, in a far greater number of samples around the yellow than in the other areas. The next step was to force these new samples back to a rate of 1 sample/nm, achieving a continuous, dynamic stretch and squeeze. This broadened the yellow area and eliminated the sharp edge, as can be observed in the resulting color palette in the fifth panel (please notice that this palette is no longer to scale with wavelength).
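The stretch-and-squeeze step can be sketched in a few lines of NumPy. This is a simplified illustration, not my original script: the `rate` array stands in for the correction function derived from Gregory’s curve (which I am not reproducing here), and the resampling is done by reparametrizing the palette with the cumulative sum of the rates and then reading it back off a uniform grid:

```python
import numpy as np

def stretch_palette(palette, rate, n_out=256):
    """Resample a palette with a locally varying sampling rate.

    palette: (n, 3) array of RGB rows; rate: length-n array of relative
    sampling densities (e.g. ~1.5 near yellow, 1 near blue). High-rate
    regions are sampled more densely; forcing the result back onto a
    uniform grid then stretches those regions in the output palette.
    """
    palette = np.asarray(palette, dtype=float)
    rate = np.asarray(rate, dtype=float)
    # Cumulative position of each input sample under the varying rate;
    # normalizing to [0, 1] keeps the output at exactly n_out samples
    pos = np.concatenate([[0.0], np.cumsum(rate[:-1])])
    pos /= pos[-1]
    # Read each channel back off a uniform grid in the warped coordinate
    u = np.linspace(0.0, 1.0, n_out)
    return np.column_stack([np.interp(u, pos, palette[:, c]) for c in range(3)])
```

With a toy two-color palette and a rate twice as high over the first color, that color ends up occupying roughly two thirds of the output, which is the broadening effect described above.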

First impressions: the palette looks better in the area between the green, the yellow, and the red. The sharp edge at the yellow is gone, as mentioned, and the green area is less isoluminant. Now let’s look at the lightness L* profile for this palette, shown in the last panel. This is definitely a more perceptual profile in that area, with smoother, gentler transitions and a compressive character. To me this is a very good result in principle, even though it’s not perfect in practice. For one, we’re now using up a lot of the L* contrast between blue and green; and while the yellow is no longer an edge, that came at the cost of a loss of contrast (and from the feedback I got on a Matlab forum, this was exacerbated for viewers with color vision deficiencies).

Part of the problem might be that human wavelength discrimination curves are empirical, and there are many (you can find them in Wyszecki and Stiles), so none gave peaks and troughs in the correction function that fit perfectly with the edges in the L* profile of ROYGBIV. The function derived from Gregory’s curve is the one that gave me the best result, however. Perhaps I could reduce the amount of stretching and squeezing a bit, say, constrain it to something between 0.6 and 1.2 samples/nm.

But part of the problem is that my idea was only ever going to address issue 2. After all my experiments I am now convinced issue 1 with ROYGBIV is insurmountable: we certainly can’t make red lighter than yellow. And we can’t make blue a bit less dark than green. Or can we? An idea started to form in my mind at this point. What if I tried to fit a straight line, monotonically increasing from low L* values to high L* values, assigning to each L* from scratch a hue with that particular lightness?
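A rough sketch of how such an assignment could work, assuming we already have a pool of candidate hues with known L* values (the candidates and their lightnesses below are illustrative, not the ones I actually used): for each target lightness on a straight ramp, pick the candidate color whose L* is closest.

```python
import numpy as np

def monotonic_l_palette(candidates, candidate_l, n_out=256):
    """Build a palette whose lightness increases along a straight line.

    candidates: (m, 3) array of candidate colors; candidate_l: their CIELAB
    L* values. For each target L* on a linear ramp from the darkest to the
    lightest candidate, pick the candidate closest in lightness.
    """
    candidates = np.asarray(candidates, dtype=float)
    candidate_l = np.asarray(candidate_l, dtype=float)
    ramp = np.linspace(candidate_l.min(), candidate_l.max(), n_out)
    # Index of the candidate whose L* best matches each ramp value
    idx = np.abs(candidate_l[None, :] - ramp[:, None]).argmin(axis=1)
    return candidates[idx]
```

By construction the assigned lightness can only go up along the palette, which is the whole point: the L* profile becomes monotonic no matter how erratic the hues’ natural lightnesses are.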

I will describe my efforts to produce a new, perceptual rainbow palette based on this premise in the last part of this series. But prior to that, in the next two short posts, I will discuss two really good perceptual palettes that are already available.

11 responses to “The rainbow is dead…long live the rainbow! – Perceptual palettes, part 3”

Thank you. I’m a mapmaker and this explains something that has always troubled me. To my eye, the transition between green and blue needs to be spread out, maybe by reducing the blue and violet areas. But maybe adjusting the brightness first will fix this. I look forward to the results of your continued efforts.

I agree with you on the green to blue transition. I could do it with a stretch of the correction function – for example, by pinning the peak at the green and sliding the valley at the blue to the right – but then we’d lose the connection to the discrimination curve it was derived from. Does it matter? I’m not sure. But the reality is that I’ve tried a few things and didn’t get very far.

What I like about starting from scratch is that I make the rules of the game.
Stay tuned.
Matteo

Great to see that you are back from sabbatical. I think what you are showing here is that if we want to eliminate the false edges from our color palettes, we cannot rely on just changing the palettes themselves. We are going to have to take into account both the lightness profiles and the hue discrimination profiles. That much is now becoming clear.

The problem, of course, is that human color perception at its best is very poor and it can be highly variable between individuals. Perhaps it will be possible to develop lightness and hue separation curves for individuals with varying degrees of color perception. Our eye is such an excellent edge detector and these false edges that we introduce into our images attract our focus when they really don’t mean anything at all. Perhaps by adjusting the transfer functions dynamically rather than the palettes themselves, we can even things out.

One of the issues that we have using color for seismic data is that seismic has both positive and negative amplitudes. We generally need to use colors with a high degree of contrast for the different polarities (i.e. red = positive, blue = negative), which works well in a gross sense when all we want to do is identify broad markers. When it comes to communicating fine-scale information, however, this red/blue split is problematic. We get good hue perception in the green-yellow-red range but very poor hue perception in the blue. As a result, we can really only focus on one polarity at a time.

Evening out the palette using your transfer functions will help us in the red areas but can it do anything for us in the blue or can we do anything at all with the blue? As you point out, we certainly can’t make blue as bright as yellow but we could make yellow as dark as blue. I wonder what that would look like, perhaps dark and foreboding?

Anyway, nice to see you back and as always, I will look forward to your next posts.

In terms of getting a rainbow that works, that is coming in the last few posts of the series. Although you already know what it looks like.

I like your idea of lightness and hue separation curves for individuals with varying degrees of color perception. I imagine it would take a massive effort with viewer experiments.

As for color palettes for seismic:
Here is the standard red-white-blue divergent color palette you talked about

And here is what I think is an improved version:

The latter is based on two ideas. The first, inspired by the 2006 The Leading Edge article by Welland et al. [1], is to remap the red-white-blue color palette so that samples are equidistant in perceptual space, which they call psychological space. The authors do not clarify which space this is, so I used CIELab. Notice how the blue and red ramps are now more perceptual. The second idea is to account precisely for our poor perception of blue hues: to compensate, I allowed a greater L* range to be used for the blue. I compared the two palettes using some seismic maps (which I am unfortunately unable to publish) and I was happier with this new palette.
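The equidistant-in-CIELAB remapping can be sketched as follows. This is my own illustration of the idea, not Welland et al.’s code, assuming the standard sRGB (D65) to CIELAB conversion: measure the perceptual arc length between consecutive palette samples, then re-interpolate the palette so each step covers the same arc length.

```python
import numpy as np

# sRGB-to-XYZ matrix (D65); the white point is the XYZ of sRGB (1, 1, 1)
M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])
WHITE = M.sum(axis=1)

def srgb_to_lab(rgb):
    """Convert sRGB rows in [0, 1] to CIELAB (D65 reference white)."""
    rgb = np.asarray(rgb, dtype=float)
    linear = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    xyz = linear @ M.T / WHITE
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz), xyz * (29 / 6) ** 2 / 3 + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def equalize_perceptual_spacing(palette):
    """Remap a palette so consecutive samples are equidistant in CIELAB."""
    palette = np.asarray(palette, dtype=float)
    lab = srgb_to_lab(palette)
    # Cumulative CIELAB arc length along the palette, normalized to [0, 1]
    step = np.linalg.norm(np.diff(lab, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(step)])
    s /= s[-1]
    # Re-read each channel at uniform arc-length intervals
    u = np.linspace(0.0, 1.0, len(palette))
    return np.column_stack([np.interp(u, s, palette[:, c]) for c in range(3)])
```

Applied to a red-white-blue palette built by linear RGB interpolation, this evens out the perceptual step sizes while keeping the endpoint colors unchanged.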

However, there are still two problems. One, red and blue are not a great choice – in fact, they are very confusing for dichromats. Two, using white in the middle is distracting and obfuscates important details. My solution is to use yellow and blue going through an achromatic grey in the middle, and to make a fundamental modification to the lightness profile. But that’s the topic of a future series, so I won’t spoil the suspense.

Go ahead if you want to use my code, modify it, improve it, for non-commercial AND for commercial use. You are also welcome to download and reuse my media files - unless otherwise stated. With both code and images, please give full and clear credit to Matteo Niccoli as the author and mycarta.wordpress.com as the source.
WordPress bloggers are welcome to reblog my posts. For republishing outside of WordPress or any other request, please e-mail me at: matteo@mycarta.ca