Introduction

Well, it's been quite a while since I wrote one of these. As work is a little slow at the moment, I thought I'd do one on an important topic, that of color. An important caveat - I am color blind. So when I talk about how the colors work, I'm largely taking other people's word for it.

Background (optional)

I'm sure most of us are aware that when you need to specify a color to your PC, you do it with an RGB triple, or ARGB if you want to specify transparency. What this in essence means is that your CRT has three color guns, and each pixel of your LCD is made up of a set of three colored lights. Merging differing levels of these three colors produces the range of colors that can be displayed by your computer, like this:
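To make the machine-friendly representation concrete, here is a small sketch ( in Python, purely for illustration - it is not from the article's code ) of how an ARGB quadruple packs into a single 32-bit value, much as .NET's Color.ToArgb does:

```python
def to_argb(a, r, g, b):
    """Pack four 0-255 channel values into one 32-bit ARGB integer."""
    return (a << 24) | (r << 16) | (g << 8) | b

def from_argb(argb):
    """Unpack a 32-bit ARGB integer back into its four channels."""
    return ((argb >> 24) & 0xFF, (argb >> 16) & 0xFF,
            (argb >> 8) & 0xFF, argb & 0xFF)

opaque_red = to_argb(255, 255, 0, 0)   # alpha fully opaque, pure red
```

Each channel occupies one byte, which is why every displayable color is just a point in a 256 x 256 x 256 cube.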

This is, however, not the only possible way to describe color, nor is it a method that makes much sense to humans. If I were to ask you how to describe orange, or yellow, or purple, using RGB, chances are you'd have to undergo some trial and error to work it out. HSL is the most common color system that exists to be human friendly, rather than machine friendly.

As RGB stands for red, green, blue, so too HSL is an acronym, in this case for hue, saturation, luminance. The three components of color are best specified in that order, as they represent a constant refining of the value ( that is, saturation and luminance values are close to meaningless without a hue ).

Hue

The hue is the actual base color being used, free of any modification to brightness or strength. It is commonly represented as a circle, in which the hue value ( which ranges from 0 to 360 ) indicates the angle in degrees of the color as present on the wheel. The following image shows the hue circle, with constant luminance and saturation at 50%.

Saturation

Saturation describes how 'colorful' a color is; for example, a fluorescent color would have a high saturation. In order to demonstrate this, I have provided three screenshots, all of the hue wheel with .5 luminance, and with .25, .5 and .75 saturation. A saturation of 0 makes an image greyscale ( as it has no color in it ), and I don't provide an image at full saturation, because I wanted the range shown to be even.

Luminance

Luminance describes how bright a color is, so that full luminance is always white, and no luminance is always black. In order to demonstrate this, I have provided three screenshots, all of the hue wheel with .5 saturation, and with .25, .5 and .75 luminance.

The color chart

The sample application continues to build on previous installments, and thus builds the code base for use in other projects. There is now a new menu called 'colorspaces', with a view to expanding it to cover other color spaces in the future. The 'HSL Chart' menu item brings up a dialog with a hue circle, and a slider on the side, like this:

The hue wheel modifies either saturation or luminance from the centre to the outside, and the slider then modifies the other accordingly. This should give you a really good idea of exactly how these parameters work, and how they modify colors.

Using the code

So, we have this color space, but what do we do with it ? Well, two things pop immediately to mind. First, we can provide means for a user to select colors using this color space, instead of having to provide an RGB triple. Secondly, we can provide image filters that allow modification of an image based on these three values. But in order to do any of this ( or even to do what you've seen already ) we need to be able to move within this color space; that is, we need to be able to convert between HSL and RGB. In order to do this, the first component we will examine is the HLS class.

HLS class

All the new code is in the ColorSpace.cs file. The first class in there is called YUV, another color system that I wrote a class for, but do not examine here. Next is the HLS class, which encapsulates an HLS color. It keeps these values in private members and exposes them through properties, so that we can correct out-of-bounds values. I chose not to throw an exception, because when we use this class with filters, we will almost certainly pass in out-of-bounds values.

The constructor with no arguments is private, so that we can't construct an HLS object without specifying its values. We also provide a property called RGB, which returns a Color that maps to the current HLS values.

In addition, two static methods are provided, which return an HLS object from either a Color, or specified red, green and blue values. Our filters will use these static methods to build an HLS object, then modify one of its values before requesting the modified color.
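The conversions those methods perform follow the standard HSL/RGB formulas. Here is a sketch of that math ( in Python, with channels normalized to [0, 1] and hue in degrees - the article's C# differs in types and naming, but not in the arithmetic ):

```python
def rgb_to_hsl(r, g, b):
    """Convert r, g, b in [0, 1] to (hue in degrees, saturation, luminance)."""
    mx, mn = max(r, g, b), min(r, g, b)
    l = (mx + mn) / 2
    if mx == mn:
        return 0.0, 0.0, l               # achromatic: hue is undefined, use 0
    d = mx - mn
    s = d / (2 - mx - mn) if l > 0.5 else d / (mx + mn)
    if mx == r:
        h = (g - b) / d % 6              # between yellow and magenta
    elif mx == g:
        h = (b - r) / d + 2              # between cyan and yellow
    else:
        h = (r - g) / d + 4              # between magenta and cyan
    return h * 60, s, l

def hsl_to_rgb(h, s, l):
    """Convert (hue in degrees, saturation, luminance) back to r, g, b."""
    c = (1 - abs(2 * l - 1)) * s         # chroma
    x = c * (1 - abs((h / 60) % 2 - 1))  # second-largest component
    m = l - c / 2                        # amount to add to match luminance
    r, g, b = [(c, x, 0), (x, c, 0), (0, c, x),
               (0, x, c), (x, 0, c), (c, 0, x)][int(h // 60) % 6]
    return r + m, g + m, b + m
```

Pure red, for instance, is ( 0, 1, .5 ) - hue at zero degrees, full saturation, half luminance.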

Color Picker

Most HSL color pickers present a hue wheel with saturation varying from the centre to the edge, and then a slider to set luminance for the chosen hue/saturation combination. I don't like this format, because values towards the centre of the circle are naturally underrepresented, and harder to pick. Instead, I propose a system of three sliders, one each for hue, saturation and luminance. Saturation and luminance on the hue slider are set to .5, as is luminance on the saturation slider. This makes it intuitive to move from left to right and select a color.

As you can see, text boxes are provided as well as sliders, the selected color is shown on the right, and its RGB values are also displayed. When OK is pressed, the test application remembers the selected color, and initialises the dialog with it next time. The HSLColorPicker has a SelectedColor property, which can be set before displaying the dialog, and which returns the selected color after the dialog is closed. It returns a Color rather than an HSL object, but this can easily be changed if desired.

Image filters

Three image filters are provided, one each for hue, saturation and luminance. The filters take a float and multiply the value being filtered by that number, so that 1 is an identity transform. This causes all values to trend evenly, but has the side effect of stopping values of 0 from changing at all. There are numerous ways around this, including adding a small number to values before multiplication, or accepting a value to add as well as one to multiply by. The hue filter is kind of odd; given the nature of the hue wheel, it simply changes the colors to unrelated values. The saturation and luminance filters are, however, quite useful, and worth incorporating into any image processing library.
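As a sketch of what such a filter does per pixel, here is a saturation filter in Python, using the stdlib colorsys module in place of the article's HLS class ( the structure, not the code, is what carries over ):

```python
import colorsys

def filter_saturation(pixels, factor):
    """Multiply each pixel's saturation by factor; 1.0 is the identity
    transform, 0.0 turns the image greyscale. Channels are in [0, 1]."""
    out = []
    for r, g, b in pixels:
        h, l, s = colorsys.rgb_to_hls(r, g, b)  # note stdlib order: H, L, S
        s = max(0.0, min(1.0, s * factor))      # clamp instead of throwing
        out.append(colorsys.hls_to_rgb(h, l, s))
    return out
```

The clamp on the saturation value mirrors the out-of-bounds correction the HLS properties perform, which is exactly why the class corrects rather than throws.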

As always, I present my son as a model for my filters. From top to bottom is the normal image, the hue filter, the saturation filter, and the luminance filter. I've tried to use extreme values to exaggerate the effect and make it obvious. The saturation effect in particular is not that obvious, because his car is fluorescently colored anyhow.

Conclusion

There are numerous ways to represent color; in this article I have focused on one that is commonly used in paint programs and the like, and which translates easily to human understanding. This means both that it's a good way of asking people to select a color, and that filtering by enhancing or suppressing these values results in an effect that has uniform meaning to the human eye. Anyone who needs to ask a user to select a color should consider using HSL as the means of doing so.

History

1.0 First release. Also fixed a bug: I thought that the .Save method for an image would save in the correct format for the file extension, but it seems it always saves as PNG. The code now works out the encoder on its own.

About the Author

Programming computers ( self taught ) since about 1984 when I bought my first Apple ][. Was working on a GUI library to interface Win32 to Python, and writing graphics filters in my spare time, and then building n-tiered apps using asp, atl and asp.net in my job at Dytech. After 4 years there, I've started working from home, at first for Code Project and now for a vet telemedicine company. I owned part of a company that sells client education software in the vet market, but we sold that and I worked for the owners for five years before leaving to get away from the travel, and spend more time with my family. I now work for a company here in Hobart, doing all sorts of Microsoft based stuff in C++ and C#, with a lot of T-SQL in the mix.

Hey there... great article! This may be slightly off topic, but... I'm trying to quantize an image using the median cut algorithm, and I'm getting stuck on the ordering of the colors in the image. Basically I take all the colors present in the image and then need to sort them. I was trying to order them using their HSL values, like you would in a color picker, but that mixes greys and whites in with regular colors, because they have the same hue but very low saturation. I've been searching for other ways to do it but have come up short so far. You've written a lot of good articles on image processing, so I was thinking you may know of a better way to do it. Any ideas?

How do I change it from a per-file mode into more of a batch mode, doing the same tasks for each file in a directory? Does anybody have the time and knowledge to change the code to use all these filters in a batch "folder to folder" mode?

So I need to use an open directory dialog, and then for each file use the method you present, and I get a table of strings? I once tried this with "mass renamer", yet the "mass" didn't correspond well with the actual results - which were simply nonexistent... to be exact, I hadn't renamed anything.

So any add-on for the above app with batch mode functionality would be great...

I once tried this with "mass renamer", yet the "mass" didn't correspond well with the actual results - which were simply nonexistent... to be exact, I hadn't renamed anything.

Then I suggest posting your code in the C# forum, to find out what you did wrong.

emkarwin wrote:

so any addon for the above-given app with batch mode functionality would be great...

The point of the articles is to teach some concepts of image processing. Getting a list of file paths and processing them is a simple enough exercise, it doesn't warrant an installment. I suggest trying to write it, and posting your code in the C# forum if it doesn't work. I'll probably answer you there, anyhow :P

Christian Graus - C++ MVP

'Why don't we jump on a fad that hasn't already been widely discredited ?' - Dilbert

I'll try to upload and (first of all) polish my code today once I get back from the doctor's... I hope you'll be able to help me - again, I'm really new to programming... so be gentle.

Ah, there is a tiny error within the code - I believe in the random jitter method, where you call the Math. something (tested on vs2.5 with .NET 2.0). But all you've got to do is add a variable which is a double, do the math on it, and then pass it into the method you call... that's just for information.

Hey,
So I'm new to image processing in VB, and I was wondering: since Image.FromStream can accept any System.IO.Stream (image or otherwise), is it possible to convert any data file to an image (albeit an ugly one)? For instance, convert a long text file to a pictorial representation of the same data?

No way. The only way to do this is to write the text onto a bitmap yourself, then save the bitmap. A text file contains completely different data ( such as ANSI text ) to a bitmap ( either compressed or uncompressed pixel data ). That's why the bitmap is so much bigger than the text file.

Actually, despite what Christian said, I think what you are asking is possible. Since a stream of bytes is a stream of bytes whether it is interpreted as an image or a text file, you might be able to "incorrectly" interpret a text file as an image and get a garbage (noise) image.

However, since the alignment of the text file is not the same as an image, the rendering engine may crash or fail because the data is not in an expected format.

Hello,
I've converted some of your filters to C++ (the 3x3 convolution filters and a few others) and I was thinking of adding them to a SmartWin++ example project in a future version of my C++ IDE: http://sallyide.sourceforge.net/
It will be a tutorial on how to use a worker thread.

I don't like to see my code appearing in other articles, generally, because I think it dilutes their appeal here. Having said that, if the filters are going into an actual project ( that is, they are not going to appear in a 'how to do filters' article ), then that's fine. A SourceForge project isn't really an article anyhow. Apart from not wanting to see my code appear in other articles, there are no restrictions: you can use it in your project freely, and distribute it wherever you like.

Basically, the conversion happens in a space that requires ints; a byte goes from 0-255, which is the range of color we can show on a PC, and the range used to store color in a bitmap. So, we're casting down and clamping to make sure our values end up coming back into the range that is acceptable for a pixel under Windows.

That is how I understood your code when I first started reading it, too.

My problem is that the very last operation, the cast to byte, works in your case as a clamp operation too, doing the work of “Min” and “Max” at the same time. As a result, the “Min” and “Max” operations are unnecessary here. They are just “overkill”.

So my original question could be rephrased this way: “Is there any difference in the final result between your original variant

X = (byte)Max(Min(Z)) ,

on the one hand, and noticeably simpler and faster variant

X = (byte)Z ,

on the other hand?

Max and Min are _function_ calls, in contrast to the arithmetic operations calculating the color conversion itself, which are done in registers only (no stack involved). In mass image processing operations in C#, this may slow down the whole procedure.

Am I correct?

I see one potentially important reason to include the Min and Max operators in your code: if your code is considered as a template for variants of YUV conversion different from the variant used in your example. Including the Min and Max operations reminds people of the necessity to clamp final values IF IT IS REQUIRED. In your case you can do clamping as a side effect of down-casting to byte, IMPLICITLY. In many other situations clamping HAS to be done via standalone, EXPLICIT clamp operations.

But if the above assumption is correct, then another important remark automatically appears: “You did not give a definition of the VARIANT of YUV conversion used in your code”.

I recalculated the integer coefficients used in your code into floating point ones, and found that you use an approximation to the RGB-YCbCr conversion defined in recommendation “ITU-R BT.601-5”. That variant uses normalization of values: RGB [0, 255], Y [16, 235], U and V [16, 240].
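For readers following along, the BT.601 8-bit “studio range” equations being referred to can be sketched like this ( Python, coefficients as published for the 8-bit form of ITU-R BT.601, with R, G, B normalized to [0, 1] ):

```python
def rgb_to_ycbcr_bt601(r, g, b):
    """ITU-R BT.601 8-bit 'studio range' conversion: R, G, B in [0, 1],
    yielding Y in [16, 235] and Cb, Cr in [16, 240]."""
    y  =  16 +  65.481 * r + 128.553 * g +  24.966 * b
    cb = 128 -  37.797 * r -  74.203 * g + 112.000 * b
    cr = 128 + 112.000 * r -  93.786 * g -  18.214 * b
    return y, cb, cr
```

White maps to (235, 128, 128) and black to (16, 128, 128), which is exactly the normalization the comment describes.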

Your integer coefficients provide accuracy of about 1 percent at each stage of conversion (RGB-YCbCr or YCbCr-RGB). This accuracy is more or less acceptable for average quality image processing, but is not enough for higher level operations. That level of accuracy is not enough to convert images taken with modern photo and video recording equipment, even consumer grade, without degrading the original quality of the images. It is just amazing what modern equipment offers us. Imagine what will be available soon…!

And here is a problem. Now everybody can go to the store, buy a piece of engineering marvel, and take very high quality digital photos or video pictures. But very often that quality will never be seen at its original fidelity as a picture on the screen, just because the original data will be distorted or rejected by the image processing software at the very first stages of color space conversion…;o))..!

So my original question could be rephrased this way: “Is there any difference in the final result between your original variant

X = (byte)Max(Min(Z)) ,

on the one hand, and noticeably simpler and faster variant

X = (byte)Z ,

How are they different ?

If you run this code:

int n = 300;
byte b = (byte) n;

you will find that b = 44, which is what I would expect. So, the Max/Min operations clamp the values. A cast does NOT clamp the values. Instead, if you go over 255, you get the value in the bottom byte; the higher bytes are tossed away. That's most likely to turn, say, 256 into 0. The cast is added simply because C# is very particular about forcing you to cast a lot.
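The wrap-vs-clamp distinction being described is easy to check in any language; a quick Python sketch ( Python has no byte type, so the C# cast is modelled as masking the bottom 8 bits ):

```python
def cast_to_byte(n):
    """What (byte)n does in C# (unchecked context): keep the bottom 8 bits."""
    return n & 0xFF

def clamp_to_byte(n):
    """What the Max/Min pair does: saturate out-of-range values to 0 or 255."""
    return max(0, min(255, n))

cast_to_byte(300)    # wraps around, losing the high bits
clamp_to_byte(300)   # saturates at the top of the range
```

Wrapping turns a slightly-too-bright pixel into a nearly black one, which is why clamping matters for image code.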

V.S. wrote:

In your case you can do clamping as a side effect of down casting to byte, IMPLICITLY

No, I can't.

V.S. wrote:

Imagine what will be available soon…!

Yes, it's exciting to think that before too long we'll have display hardware capable of dealing with HDR images. I wonder how long it will take for Windows to support them.

1. As far as I understand the whole context of your code, you CAN avoid using Max/Min clamping without risking a loss of quality.

2. n = 300…..? Where did that “Beast” come from?

Of course down casting

int n = 300
byte b = (byte)n;

gives 44 in most languages, not just in C#.

The question is: “How can your equations produce that sort of value?”
And I mean the set of YOUR equations placed inside the same class.

Your equations for the direct conversion RGB to YUV have to yield values within the ranges Y [16, 235], U and V [16, 240]. I did not test your equations statistically, but because you use low accuracy integer coefficients, I expect that some yielded Y, U, V values can be out of range, by about plus/minus 1. “Out of range” is always a problem for RGB/YUV conversion, even when much higher accuracy coefficients are used: quantization from the continuous range of arguments to the discrete range may generate out-of-range values. In your case, normalization and quantization are made from the continuous RGB [0, 1] range to the discrete 8-bit [0, 255] range. Usually out-of-range artifacts can be excluded by CLAMPING. For your RGB to YCbCr conversion it can be something like
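( The snippet the comment refers to appears to have been lost; a clamp of the kind being described would presumably look something like this sketch, in Python, using the BT.601 studio ranges quoted above: )

```python
def clamp_ycbcr(y, cb, cr):
    """Clamp Y to [16, 235] and Cb/Cr to [16, 240] after the RGB
    to YCbCr conversion, so no out-of-range values survive."""
    clamp = lambda v, lo, hi: max(lo, min(hi, v))
    return clamp(y, 16, 235), clamp(cb, 16, 240), clamp(cr, 16, 240)
```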

But you DO NOT do that. You just down-cast values to the type byte…!

Can your equations for the direct RGB to YUV conversion, used without clamping, yield values like 300? Definitely NOT…!

Is it justifiable to omit clamping? Yes, of course, especially if special care is taken about “Their Majesty SPEED”. Your code gives clear indications that you did care about speed. The facts that you use ratios of integer coefficients instead of float values, that you replace division by shifts, and that you OMIT clamping of the yielded Y, U and V values, have an obvious meaning.

All that means, to me at least, that you simplified your code for the sake of speed by sacrificing some accuracy. Fine, it is an acceptable compromise…

Remark:
----------
In professional code, a different technique is usually used. All values of summands like [Kr * R] are precalculated and placed into integer arrays. As a result, ALL multiplications are excluded completely: a multiplication is replaced by finding an element in an array, which is much faster. Clamping is often done with the same precalculated-array technique.
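A minimal sketch of that precalculated-array technique ( Python; the coefficients are the BT.601 luma weights scaled by 2^16, chosen here for illustration - they are not taken from the article's code ):

```python
SHIFT = 16
ROUND = 1 << (SHIFT - 1)
# Luma weights 0.2568, 0.5041, 0.0979 (BT.601 8-bit form) scaled to integers.
KR, KG, KB = 16829, 33039, 6416
# Precompute one table per channel: table[v] holds K * v for v in 0..255.
TAB_R = [KR * v for v in range(256)]
TAB_G = [KG * v for v in range(256)]
TAB_B = [KB * v for v in range(256)]

def luma(r, g, b):
    """Table-driven Y: three lookups and two adds replace three multiplies."""
    return 16 + ((TAB_R[r] + TAB_G[g] + TAB_B[b] + ROUND) >> SHIFT)
```

Black lands on 16 and white on 235, the studio-range endpoints, without a single multiplication at conversion time.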

Obviously, it is natural to expect that your code will be consistent, and that the set of rules defined for the RGB to YUV conversion will be applied to the inverse YUV to RGB conversion too.

Your equations for the direct RGB to YUV conversion cannot generate “wild” YUV values. “Wild” here means a combination of YUV values which may yield, in the inverse YUV to RGB conversion, an RGB value lying far out of range, like 300 or 500. That sort of combination can be produced artificially or in real life - for example, in NOISY YUV data transmission lines, or as a result of the degradation over time of an analog store of YUV data, like magnetic or photo tape. But that is an absolutely different case from your situation.

In that case, filtration of out-of-range values has to be done at the very first stage of the YUV to RGB conversion, before calculating the equations. And what is important, simple clamping of “wild” values does not usually give much improvement. More complicated logical filters have to be used to sieve the grain from the chaff, and, if possible, restore some chaff back into grain.

Make a simple test. Take some statistically regular picture and compare the PSNR values for that picture produced by two variants of the RGB to YUV and back YUV to RGB conversion. Use your equations, of course. The first variant is with Max/Min clamping and the second variant is without clamping. I am almost sure that the difference between the PSNR values will be small.

“Almost” means that I did not test your coefficients, and I know well what a wild beast RGB/YUV conversion can be when low accuracy coefficients are used. Surprises are possible…
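( The comparison being proposed only needs a PSNR helper; a minimal Python sketch, assuming 8-bit values with a peak of 255: )

```python
import math

def psnr(original, processed, peak=255):
    """Peak signal-to-noise ratio, in dB, between two equal-length
    sequences of pixel values. Higher means closer; identical is infinite."""
    mse = sum((a - b) ** 2 for a, b in zip(original, processed)) / len(original)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(peak ** 2 / mse)
```

Run the round trip with and without clamping over the same image and compare the two PSNR figures, as the comment suggests.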

I have to admit that it's been years since I wrote this. I haven't looked over it for some time, but it would have been built out of existing reference material that I purchased. It's possible that the clamping I am doing is not needed; if that's the case, feel free to remove it.

Hello Christian,
the article and the code provided are just what I was searching for.
Your code sample includes, for example, everything MS Photo Editor can do.
The code is very compact and easy to understand.
I would like, with your permission, to use the processing engine (CSharpFilters) in my current project at http://doodle.go-on-software.de.

That's fine.
I will add a credit to my web site.
Currently I am developing an AJAX-based web application that brings some image and photo editing functions to the web. At the moment I support base functionality like resize, rotate and crop, and base filters for brightness, hue and saturation.
With your code (e.g. edge detection) I hope to add more sophisticated functionality to my application.
If you are interested, feel free to have a look at it. I would be pleased to get your opinion.