6 bit gives you 64 levels of tonality per channel. 8 bit gives you 256 levels per channel. That's 262 thousand colours versus 16.77 million colours. Seems pretty clear which spec will give you more tonal separation, and tonal separation is what we want for seeing detail anywhere in the luminosity scale.
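The arithmetic behind those figures is just powers of two; a quick sketch (generic Python, not tied to any particular display):

```python
# Tonal levels per channel and total addressable colours for a given
# per-channel bit depth (three channels: R, G, B).

def levels(bits: int) -> int:
    """Distinct tonal levels one channel can address."""
    return 2 ** bits

def total_colours(bits: int) -> int:
    """Distinct colours addressable across three channels."""
    return levels(bits) ** 3

print(levels(6), total_colours(6))  # 64 262144 (~262 thousand)
print(levels(8), total_colours(8))  # 256 16777216 (~16.77 million)
```

Whether a panel can actually render all of those levels is a separate question, as comes up later in the thread.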

Can't disagree with that, Mark.

So, having established that the monitor is capable of 256 levels per channel, we can perhaps ignore the CR spec? Or perhaps we should try to ensure the contrast ratio spec does not exceed that of a glossy print, otherwise it might be difficult to calibrate the monitor for soft-proofing purposes? Is that correct?

OK Ray, I've been talking about the fineness of the steps within the scale, and you're asking about the size of the scale itself. Re the latter, we're trying to relate the contrast ratio of the display with the dynamic range of the print. One can go about this mathematically, empirically, or both together. I simply haven't done the research to relate the numerical specs from one form of output to the other in terms of their impact on comparative appearance. That would be a considerable exercise. I'm not sure either how fruitful it would be.

Most of our displays probably have more potential or actual contrast than most printers and papers can reproduce, hence the challenge is to "dumb down" the former so that image appearance will be reliable relative to printed output. This is something I do with the calibration settings in ColorEyes Display, and then visually compare appearances between monitor and print. It's totally judgmental and totally non-mathematical (in my head, not under the hood, where it is only mathematical), but it works pretty well.

My waste ratio (discarded material out of the printer) is in the range of 10%, and more often than not this is a result of "pilot error" rather than miscues from the display. Pilot error usually arises because of that inherent difference between the appearance given from transmitted versus reflected light. This can be bridged to a great extent with appropriate calibration settings and by soft-proofing, but not totally, so depending on the image, stuff can emerge from the printer which appears less vibrant than it looked on display.

Of course if I thought that by changing my calibration settings I could get the waste ratio down further I would do so, but there seems to be a rather narrow range which works for "most" images, and reliability deteriorates once I move outside that range in any direction.

I would amend Mark's statement to read 6 bits gives a possible 64 levels of tonality, but that is merely the width of the channel and that does not mean that the device can actually reproduce those levels. The situation is similar with cameras. A camera may have a 14 bit ADC, but that does not mean that its dynamic range is greater than in 12 bit mode. With many cameras, the extra bit depth is wasted and is used mainly to digitize noise.

The number of bits determines the potential number of levels, but the distribution of levels in the various zones is also critical. With the linear capture used in digital sensors, most of those bits are wasted in the highlight zones and the shadows are relatively impoverished in levels. The use of a gamma 2.2 tone curve redistributes some of those levels to the shadows where they are needed. For an explanation, see the table on Norman Koren's web site. Some users calibrate their monitors to an L* TRC with equal visual perceptual steps between the levels, and this may be slightly better than a gamma 2.2 TRC. In any case, 6 bits is insufficient to reproduce good gradation in the shadows.
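That redistribution can be illustrated with a toy calculation (an idealised gamma 2.2 encode of an 8-bit signal; not any particular monitor's LUT):

```python
# Compare how many 8-bit code values land in the bottom 1% of the
# luminance range under linear vs gamma 2.2 encoding. With gamma
# encoding, code = luminance**(1/gamma) (both normalised to 0..1).

GAMMA = 2.2
MAX_CODE = 255  # 8-bit

def codes_below(luminance_fraction: float, gamma: float) -> int:
    """Codes devoted to luminances below the given fraction of full scale."""
    return round((luminance_fraction ** (1.0 / gamma)) * MAX_CODE)

linear_codes = codes_below(0.01, 1.0)    # linear: only ~3 codes for deep shadows
gamma_codes = codes_below(0.01, GAMMA)   # gamma 2.2: ~31 codes
print(linear_codes, gamma_codes)  # 3 31
```

The same bottom stop of luminance gets roughly ten times as many code values once the gamma curve is applied, which is the point about shadow gradation made above.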

Bill

Bill, I have no doubt that 6 bits per channel from a monitor is inadequate for critical photography. The sorts of questions I'm asking are as follows.

(1) Is a 6 bit per channel monitor with a high contrast ratio better than a 6 bit monitor with a low CR?

(2) Do manufacturers of monitors match the bit depth to the contrast ratio so that monitors with a 6 bit output will always have a 'real' CR which is lower than that of a monitor with an 8 bit output?

(3) Do manufacturers of monitors spend resources in producing a higher 'real' CR than is useful for photographic purposes, as a sales technique?

(4) If so, does the higher than useful CR present a disadvantage for calibration purposes in relation to print output?

-> (1) Is a 6 bit per channel monitor with a high contrast ratio better than a 6 bit monitor with a low CR?

Depends on your use. Most monitors achieve high contrast ratios by having the maximum luminance so high that you need to wear sunglasses. LCD contrast ratio is governed by how little light leaks through when all the filters are active (black level) and how bright the backlight is when the filters are turned off (white level). Hitting 1000:1 or higher contrast ratios may well entail a white luminance of at least 200 cd/m2. That's bright.
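In other words, contrast ratio is simply white level divided by black level; a minimal sketch with made-up panel numbers (not measurements of any real display):

```python
# Contrast ratio = white luminance / black luminance (both in cd/m2).
# The figures below are illustrative only.

def contrast_ratio(white_cdm2: float, black_cdm2: float) -> float:
    return white_cdm2 / black_cdm2

# A panel leaking 0.25 cd/m2 at black needs a 250 cd/m2 white to hit 1000:1:
print(contrast_ratio(250.0, 0.25))  # 1000.0
# Dim the backlight to 125 cd/m2 with the same leakage and the ratio halves:
print(contrast_ratio(125.0, 0.25))  # 500.0
```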

-> (2) Do manufacturers of monitors match the bit depth to the contrast ratio so that monitors with a 6 bit output will always have a 'real' CR which is lower than that of a monitor with an 8 bit output?

No. 6-bit displays are used instead of 8-bit because it reduces response time. Making higher bit depth displays that are also visually appealing for watching video or action games gets expensive.

-> (3) Do manufacturers of monitors spend resources in producing a higher 'real' CR than is useful for photographic purposes, as a sales technique?

Photography is a niche market. Video games that have extensive dark, muddy scenes can benefit from a screen running at blazingly bright levels. Likewise, working in a brightly lit office environment is easier with a display set to higher luminance than one wants for an extended photo editing session. That said, there certainly is a sales factor at work as well. Judging from the contents of my junk mail folder, bigger numbers are the key to a happy, fulfilling life.

-> (4) If so, does the higher than useful CR present a disadvantagefor calibration purposes in relation to print output?

It could well. Dialing down the backlight of an LCD too far usually results in a smaller color gamut, increased banding, and other such goodies. Note, however, that the ISO spec for print viewing calls for an illuminance of 500 lux. This translates into a white level of 160 cd/m2 on your monitor for exact matching.
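The 500 lux to roughly 160 cd/m2 conversion follows from the Lambertian relationship, luminance = illuminance x reflectance / pi; a quick check assuming a near-perfect white reflector:

```python
import math

# Convert a viewing illuminance (lux) into the luminance (cd/m2)
# of a diffusely reflecting print, via L = E * R / pi.

def reflected_luminance(lux: float, reflectance: float = 1.0) -> float:
    return lux * reflectance / math.pi

# ISO 500 lux on an ideal white reflector:
print(round(reflected_luminance(500.0)))  # 159, i.e. roughly the 160 cd/m2 quoted
```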

-> (1) Is a 6 bit per channel monitor with a high contrast ratio better than a 6 bit monitor with a low CR?

Depends on your use. Most monitors achieve high contrast ratios by having the maximum luminance so high that you need to wear sunglasses. LCD contrast ratio is governed by how little light leaks through when all the filters are active (black level) and how bright the backlight is when the filters are turned off (white level). Hitting 1000:1 or higher contrast ratios may well entail a white luminance of at least 200 cd/m2. That's bright.

Good response. So, comparing two 6 bit monitors which are equally bright, which would you prefer, the one with the higher CR, or the one with the lower CR? (Everything else being the same, of course).

-> (2) Do manufacturers of monitors match the bit depth to the contrast ratio so that monitors with a 6 bit output will always have a 'real' CR which is lower than that of a monitor with an 8 bit output?

No. 6-bit displays are used instead of 8-bit because it reduces response time. Making higher bit depth displays that are also visually appealing for watching video or action games gets expensive.

The one with the higher CR would still be preferred for the niche market of photography, no?

-> (3) Do manufacturers of monitors spend resources in producing a higher 'real' CR than is useful for photographic purposes, as a sales technique?

Photography is a niche market. Video games that have extensive dark, muddy scenes can benefit from a screen running at blazingly bright levels. Likewise, working in a brightly lit office environment is easier with a display set to higher luminance than one wants for an extended photo editing session. That said, there certainly is a sales factor at work as well. Judging from the contents of my junk mail folder, bigger numbers are the key to a happy, fulfilling life.

Nevertheless, it would always be preferable for purposes of the niche market of photography to get the monitor with the higher CR, all else being equal, would it not?

-> (4) If so, does the higher than useful CR present a disadvantage for calibration purposes in relation to print output?

It could well. Dialing down the backlight of an LCD too far usually results in a smaller color gamut, increased banding, and other such goodies. Note, however, that the ISO spec for print viewing calls for an illuminance of 500 lux. This translates into a white level of 160 cd/m2 on your monitor for exact matching.

Dialling down the backlight may be necessary if the screen is too bright. Contrast ratio is a different spec from luminance, though. If two monitors have equal maximum luminance, but one has a higher contrast ratio, which would you prefer? (All else being equal.)

Thanks for the added dimensions, Bill. Extracted above is basically what I was getting at.

Hey! Mark,

We all know here that 6 bits per channel is not ideal for photography. The issue is 'contrast ratio' and any disadvantages a specific and particularly high contrast ratio may have for photography and calibration purposes.

For the same reason having a 12 stop scene range and a 6 stop capture device is problematic. Or a 10,000:1 display contrast ratio trying to soft proof a print that has a 250:1 ratio.

On the other hand, one would have no trouble reproducing a 6 stop scene with a 12 stop capture device, provided that the 12 stop device uses enough bits to get smooth gradation of tones. This reverse analogy is more appropriate to the case being discussed here. Certainly, 8 bits would be insufficient to prevent banding with a CR of 10,000:1. In the case of a 12 stop capture device, one would likely want to use some type of HDR Encoding. A CR of 10,000:1 is about 13.3 stops.
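The stop figures convert via base-2 logarithms, since each stop doubles the ratio (stops = log2(contrast ratio)); for instance:

```python
import math

# Contrast ratio <-> photographic stops: stops = log2(ratio).

def stops(contrast_ratio: float) -> float:
    return math.log2(contrast_ratio)

print(round(stops(10_000), 1))  # 13.3 -- the 10,000:1 display above
print(round(stops(250), 1))     # 8.0  -- a typical 250:1 glossy print
print(2 ** 12)                  # 4096 -- a 12-stop scene expressed as a ratio
```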

We all know here that 6 bits per channel is not ideal for photography. The issue is 'contrast ratio' and any disadvantages a specific and particularly high contrast ratio may have for photography and calibration purposes.

Let us assume, as Andrew says, that the contrast ratio of a print from a specific printer/paper combination is 250:1. Your display has a contrast ratio of 500:1. Firstly, from the point of view of appearance, I'm not sure whether the human eye/brain would perceive the display to be twice as contrasty as the print. One needs to get that relationship settled before being able to quantify much else with any precision.

Secondly, if the display has a higher DMax than the paper there will be a limitation to the accuracy of soft-proofing: depending on the bit depth, display quality etc., there may be more deep shadow detail from the display than in the print, the display black would look darker than the print black, and the display white whiter than paper white (depending on display temperature) absent soft-proofing. When we soft-proof with "Simulate Paper Color" and BPC checked, the purpose is to bridge these perceptual gaps by displaying the output-related portion of the luminance range, hence I would think the appropriate contrast ratio and display settings depend essentially on which calibration parameters provide the most successful and reliable soft-proof.

Logically one would think the results will be more reliable with a lower rather than higher contrast ratio in the display, given what the printer/paper can reproduce. I couldn't put specific numbers to it, and those numbers would vary depending on the image and the output characteristics. Not having tried to quantify these relationships, I depend on a combination of logic and looking at inputs and outputs, and calibrate my display accordingly.

-> Hitting 1000:1 or higher contrast ratios may well entail a white luminance of at least 200 cd/m2. That's bright.

That's usually the problem. Turns out, though, that my couple-years-old MVA Westinghouse 24" is calibrated to 1300:1 at 124 cd/m^2. Utterly unexpected, and I didn't have the equipment to measure it until recently. IPS panels don't do nearly as well for contrast.

Regarding contrast: it seems to me that, as long as one is calibrating with a hardware LUT of sufficient bit depth (e.g. ~10 bits), a high contrast ratio could simply be calibrated away if it were not desired.

-> (MarkDS, Dec 20 2009) Firstly, from the point of view of appearance, I'm not sure whether the human eye/brain would perceive the display to be twice as contrasty as the print.

I don’t know that it would appear twice as contrasty, but I’m pretty darn sure they wouldn’t match.

Currently we have two ways to attempt to simulate the contrast ratio on screen. One is to soft proof using the simulate options, which are problematic because Adobe (no one) can as yet control the “paper white” or “ink black” of the UI, meaning you’re going to have to view in full screen mode with everything but the image under simulation blacked out (or, as LR does, using Lights Out). The other way, or in combination with the above, which I suspect is necessary, is to alter the contrast ratio of the display itself. My take is, the more you do in the latter, the less needs to be accomplished in the former. So having the ability to control the display contrast ratio seems useful, and probably why the high-end reference displays have provided this since (if memory serves) the Sony Artisan (I don’t recall being able to do this in the old days on my Barco).

Better still would be the ability to calibrate numerous contrast ratios for the type of print work you are currently soft proofing, and to update this (and the matching ICC display profile) on the fly.

Let us assume as Andrew says that the contrast ratio of a print from a specific printer/paper combination is 250:1. Your display has a contrast ratio of 500:1.

It is not necessary to use the full contrast ratio of the monitor. For example, consider an idealized print such as used for the ICC PRMG. That print has a 288:1 dynamic range, having a neutral reflectance of 89% and a darkest printable colour having a neutral reflectance of 0.30911%. If the print is viewed under the recommended illuminance of 500 lux and behaves as a Lambertian reflector, the paper base would have a reflected luminance of 500*0.89/Pi = 141 cd/m2. The darkest printed colour would have a reflected luminance of 0.5 cd/m2.

To reproduce the print as best as possible on the screen, one could calibrate the white point of the monitor to 141 cd/m2 and the black point to 0.5 cd/m2. The effective contrast ratio of the monitor would then be 282:1. Reflective and emissive sources may be perceived somewhat differently, but I would think the match would be reasonably close.
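Those figures can be checked in a few lines (constants taken from the post above; the Lambertian assumption is as stated):

```python
import math

# Reproduce the ICC PRMG idealised-print numbers: reflectances from the
# post above, viewed at the recommended 500 lux, Lambertian reflection.

ILLUMINANCE = 500.0              # lux
PAPER_REFLECTANCE = 0.89         # neutral reflectance of the paper base
DARKEST_REFLECTANCE = 0.0030911  # darkest printable neutral colour

def luminance(lux: float, reflectance: float) -> float:
    """Reflected luminance in cd/m2: L = E * R / pi."""
    return lux * reflectance / math.pi

white = luminance(ILLUMINANCE, PAPER_REFLECTANCE)    # ~141.6 cd/m2
black = luminance(ILLUMINANCE, DARKEST_REFLECTANCE)  # ~0.49 cd/m2
print(round(white, 1), round(black, 2), round(white / black))  # 141.6 0.49 288
```

Note that the white/black ratio comes out at 288:1 regardless of the illuminance chosen, since it is just the ratio of the two reflectances.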

It is not necessary to use the full contrast ratio of the monitor. For example, consider an idealized print such as used for the ICC PRMG. That print has a 288:1 dynamic range, having a neutral reflectance of 89% and a darkest printable colour having a neutral reflectance of 0.30911%. If the print is viewed under the recommended illuminance of 500 lux and behaves as a Lambertian reflector, the paper base would have a reflected luminance of 500*0.89/Pi = 141 cd/m2. The darkest printed colour would have a reflected luminance of 0.5 cd/m2.

To reproduce the print as best as possible on the screen, one could calibrate the white point of the monitor to 141 cd/m2 and the black point to 0.5 cd/m2. The effective contrast ratio of the monitor would then be 282:1. Reflective and emissive sources may be perceived somewhat differently, but I would think the match would be reasonably close.

That sounds like a very useful approach, Bill. Reading that ICC reference, it appears to be a "virtual print", the file for which I could not find. How representative do you think this would be given the large variety of image characteristics and printer/paper combinations we need to deal with - would you say this is a good portrayal of approximate boundary conditions? And from what you are saying here, we need not worry about the contrast ratio of our displays as long as they are at least 288:1 and we calibrate properly according to these calculations?

It is not necessary to use the full contrast ratio of the monitor. For example, consider an idealized print such as used for the ICC PRMG. That print has a 288:1 dynamic range, having a neutral reflectance of 89% and a darkest printable colour having a neutral reflectance of 0.30911%. If the print is viewed under the recommended illuminance of 500 lux and behaves as a Lambertian reflector, the paper base would have a reflected luminance of 500*0.89/Pi = 141 cd/m2. The darkest printed colour would have a reflected luminance of 0.5 cd/m2.

To reproduce the print as best as possible on the screen, one could calibrate the white point of the monitor to 141 cd/m2 and the black point to 0.5 cd/m2. The effective contrast ratio of the monitor would then be 282:1. Reflective and emissive sources may be perceived somewhat differently, but I would think the match would be reasonably close.

Interesting. I have a NEC P-221 and SpectraView software. The software allows you to set the white point and contrast ratio but not the black point. I can adjust the contrast ratio, and perhaps that would affect the black point. When I first got the monitor and software I didn't pay much attention to the contrast ratio, allowing it to be set at the maximum. I found that there was a pretty significant mismatch when using the full contrast, and changed the setting to 450:1. From the ICC site it looks like I might be able to go a bit lower. I'll try that the next time I calibrate and see what the black point ends up being. I do have a lower white point than above because of the lighting in my "work room." Very useful reference, and thanks for posting.

The software allows you to set the white point and contrast ratio but not the black point.

When you ask for a specific contrast ratio, the software is adjusting the black point (and luminance) to hit that desired target (as close as possible while still maintaining the other target calibration aim points like the cd/m2 you asked for).

That sounds like a very useful approach, Bill. Reading that ICC reference, it appears to be a "virtual print", the file for which I could not find. How representative do you think this would be given the large variety of image characteristics and printer/paper combinations we need to deal with - would you say this is a good portrayal of approximate boundary conditions? And from what you are saying here, we need not worry about the contrast ratio of our displays as long as they are at least 288:1 and we calibrate properly according to these calculations?

Mark, if you look at the white paper by Karl Lang on the Adobe site, the best photographic or inkjet prints can have a CR of 275:1, but more typically 250:1. Of course, matte papers will have a lower DMax. My post was more in the order of a thought experiment, and I haven't done such a calibration as I cannot adjust the black point of my monitor. I would think that a monitor that displays an accurate rendering of the image at a CR of 288:1 would be adequate for soft proofing, but I would like to hear from Ethan Hansen or others who have actual experience in this area. Until someone convinces me otherwise, I think that a high CR is an advantage for a monitor.

When you ask for a specific contrast ratio, the software is adjusting the black point (and luminance) to hit that desired target (as close as possible while still maintaining the other target calibration aim points like the cd/m2 you asked for).

That's what I figured after reading the documentation. Right now I have the contrast set at 450:1 which makes the black point about 1/2 of what is suggested in the ICC article previously posted. I'll back the contrast down some and see what happens.