Not sure if I can help you with the color science stuff yet, but I did have a question. How did you determine that the NES uses the Y'IQ colorspace instead of Y'UV? To the best of my knowledge, Y'IQ hasn't been used since the 1970s due to decoder cost. And I think you had the in-phase and quadrature-phase components backwards on page 2. Composite color for Y'IQ should be: C = Q∙sin(ωt + 33°) + I∙cos(ωt + 33°)

I'm using YIQ because the NTSC standard uses YIQ for composite color encoding in all the documentation I can find. Even if modern TVs use YUV decoders, YIQ and YUV are essentially the same; the only difference is that YIQ's axes are rotated slightly. A simple tweak of the hue knob could convert between the two encodings, hypothetically.
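
That rotation can even be written down directly. Here's a sketch (the sign and angle conventions are my assumption, since references differ on them):

```python
import math

def uv_to_iq(u, v, angle_deg=33.0):
    """Rotate the (U, V) chroma axes by ~33 degrees to get (I, Q).
    Sign/angle conventions vary between references."""
    a = math.radians(angle_deg)
    i = -u * math.sin(a) + v * math.cos(a)
    q = u * math.cos(a) + v * math.sin(a)
    return i, q
```

Since this is a pure rotation, the chroma amplitude (saturation) is preserved and only the hue reference changes, which is why a hue-knob tweak can approximate the conversion.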

Moreover, the NES doesn't actually "use" YIQ or YUV in the way you might be thinking. There's Y = luminance, and then both I/Q and U/V are sine waves superimposed on the Y signal. If you disregard the exact definitions of I/Q and U/V, each scanline is just a sine wave where the phase is the hue, the amplitude of the sine wave is the chroma, and the DC bias is the luminance. This is all the NES is doing; it's outputting a hue and a luminance, not converting RGB to YIQ first or anything like that.
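
To sketch that "bias plus sine wave" picture in code (a toy model only; `f_sc` is the NTSC color subcarrier frequency, and none of these names are anything the NES actually computes):

```python
import math

def composite_sample(luma, chroma, hue_deg, t, f_sc=3.579545e6):
    """One instant of the composite signal: a sine wave whose
    phase is the hue, whose amplitude is the chroma, and whose
    DC bias is the luminance."""
    omega = 2.0 * math.pi * f_sc
    return luma + chroma * math.sin(omega * t + math.radians(hue_deg))
```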

The reason I'm working with color spaces is because the way my TV displays rgb[FF,AA,00] (for example) is different from how rgb[FF,AA,00] appears on my computer screen, and I want to be able to display the color my TV makes, but on my computer screen. However, I'm having a LOT of trouble getting it to look right.

The biggest problem I have is that YIQ generates a lot of colors that are out of the RGB range. I need to do something called "gamut mapping" in order to make an approximation of the intended color, but I have NO idea how to do this correctly, and Google doesn't turn up any programmer-friendly help. Hence, I've been stuck.
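
For concreteness, the kind of thing I've been trying looks like this: desaturate an out-of-range color toward a gray of the same luma until every channel fits. It's a naive sketch (assuming linear-light RGB and the NTSC luma weights), and I have no idea if it's the "correct" gamut mapping:

```python
def clip_toward_gray(r, g, b):
    """Naive gamut mapping: if any channel is outside [0, 1],
    desaturate toward the gray of the same luma until all
    channels fit. Preserves hue and (roughly) luminance."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    y = min(max(y, 0.0), 1.0)
    # Find the largest t in [0, 1] such that gray + t*(color - gray)
    # stays inside the cube on every channel.
    t = 1.0
    for c in (r, g, b):
        d = c - y
        if c > 1.0:
            t = min(t, (1.0 - y) / d)
        elif c < 0.0:
            t = min(t, (0.0 - y) / d)
    return (y + t * (r - y), y + t * (g - y), y + t * (b - y))
```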

I think what you're supposed to do is take the voltage difference between $0F and $20 to represent only 0% to 75% or 80% luma (in sRGB, #000000 to #BFBFBF or #CCCCCC). That'll darken the overall picture but give headroom for over-bright colors.
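
In code, that might look like this (a sketch; the voltage endpoints and the 75% figure are the assumptions here):

```python
def luma_with_headroom(v, v_black, v_white, white_level=0.75):
    """Map the black..white voltage range onto 0..white_level of
    the output scale, so voltages above 'white' still have
    headroom before clipping at 1.0."""
    t = (v - v_black) / (v_white - v_black)
    return min(max(t * white_level, 0.0), 1.0)
```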

Drag wrote:

The reason I'm working with color spaces is because the way my TV displays rgb[FF,AA,00] (for example) is different from how rgb[FF,AA,00] appears on my computer screen, and I want to be able to display the color my TV makes, but on my computer screen. However, I'm having a LOT of trouble getting it to look right.

Can you elaborate on this? All I can think of is gamma. Does it display say rgb[FF,00,00] the same, and rgb[00,AA,00] the same, but not when combined? I know there was talk here many years ago about TVs doing some adjustment of hues in the skin-tone range of hues.

Well, that was just an example, really; the idea was that the colors that my (or anyone's) TV displays are different from the colors that a computer displays. TVs are a real crapshoot when it comes to standards, because every manufacturer uses a different gamut, which means the colors will always be slightly different from TV to TV. However, the colors seem to be consistently different in the same way, when you compare to a computer or a digital (LCD) tv.

Gamma probably has a huge role in this difference, and it has to be applied individually to the R, G, and B channels, which means the hue shifts as the brightness changes. I still haven't figured out a good way to simulate this with the CIE graph.

As far as "looking right", I'm mostly talking about colors $x2, $x8, and $xC. $x2 ends up looking too purple in a lot of the available NES palette generators; $x8 is a brilliant "Simpsons" marigold color which gets browner as it gets darker (and $08 is the darkest color on the NES palette; very close to black, actually); and $xC is cyan when it's light, and turns much bluer as it gets dark. These are the biggest problem points I have right now.

Drag wrote:

Gamma probably has a huge role in this difference, and it has to be applied individually to the R, G, and B channels, which means the hue shifts as the brightness changes. I still haven't figured out a good way to simulate this with the CIE graph.

As stupid as it sounds, in what other way could one end up possibly implementing it? Doing the calculation on each component as-is seems the easiest way o_O;

Quote:

As stupid as it sounds, in what other way could one end up possibly implementing it? Doing the calculation on each component as-is seems the easiest way o_O;

What I meant was:

YIQ -> CIE -> RGB

I wanted to apply the gamma to the color while it was still in the CIE stage. That way, the gamma is applied with respect to the red, green, and blue primaries defined by the FCC. If I apply the gamma after I convert to RGB, by applying it to R, G, and B, I'm applying it to the primaries defined by sRGB instead of the ones defined by the FCC.

The reason I don't know how to do it is because of the way YIQ -> CIE works; I start with a luminance and then add the chroma to it. This is different from RGB, where you start with black, and then add color to it. To simulate the gamma, I need to change the luminance as well as the chroma (I think), but I don't know the way I need to do this.

And while we're on the topic, gamma is not an unwanted side-effect; it's a deliberate scheme for encoding luminance into an electrical/digital signal that exploits the human eye's greater sensitivity to luminance variations at the darker end than at the lighter end. A linear encoding would waste accuracy in the light tones and bring out more noise/quantization effects in the darker tones.

The 2.odd gamma characteristic was a fortunate side-effect of the roughly quadratic response of CRT kinescopes. The picture signal represents voltage, but light emission is roughly proportional to beam power, which in turn is proportional to the square of voltage: P = I²R = V²/R.

Gamma should be applied to the RGB values, nothing else. The whole point about gamma is how the screen shows the RGB ramp, it has nothing to do with YUV or stuff like that.

Ok, maybe I'm not explaining myself clearly.

The YIQ decoder generates a signal for R that goes to the electron gun, and similar signals for G and B. The red, green, and blue phosphors of the screen are excited, and the level of light they emit, compared to the voltage being sent to the gun, follows a gamma curve of roughly 2.2.

This is what I'm trying to simulate; 3 phosphors, each where the input generates an output with a gamma curve of 2.2, combining to form a color which I can plot on the CIE graph, to be converted to sRGB to display on a computer. I'm not an idiot trying to do something incorrect with the gamma, I assure you; I want the resulting CIE color to represent the thing I just mentioned. Converting YIQ -> CIEXYZ -> sRGB, and applying the gamma right before I display it on screen is not what I'm trying to do, because the gamma is not relative to the TV's phosphors in that case.

As far as I can tell, YIQ produces a linear value for red, green, and blue. The gamma curve comes from the phosphor. If I'm rendering YIQ directly to CIEXYZ (or CIELuv, like I'm actually doing), the color I get does not take the phosphor's gamma into account. This is the problem I'm having, and why I can't get the colors to look right.

YIQ can produce negative values for R, G, and B. I have no clue how to apply a gamma curve to this kind of output, nor how a negative value affects the overall color.

Drag wrote:

As far as I can tell, YIQ produces a linear value for red, green, and blue. The gamma curve comes from the phosphor. If I'm rendering YIQ directly to CIEXYZ (or CIELuv, like I'm actually doing), the color I get does not take the phosphor's gamma into account. This is the problem I'm having, and why I can't get the colors to look right.

This is not quite right. The R, G, and B components represent perceptually linear brightness (or something close to it), not linear light intensity. That means a 50% R component produces roughly 22% of the light of a 100% R component if the CRT has a gamma of 2.2. It also means it's impossible to convert YIQ directly to CIELuv without first converting it to some intermediate RGB colorspace to do gamma correction.
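
Concretely, the order of operations would be something like this sketch (the matrix coefficients are the commonly quoted FCC ones; negative drive values are clamped to zero before the power law, since a gamma curve isn't defined for them):

```python
def yiq_to_rgb_prime(y, i, q):
    """FCC YIQ to gamma-corrected (nonlinear) R'G'B', using the
    commonly quoted inverse-matrix coefficients."""
    r = y + 0.956 * i + 0.621 * q
    g = y - 0.272 * i - 0.647 * q
    b = y - 1.106 * i + 1.703 * q
    return r, g, b

def crt_linearize(c, gamma=2.2):
    """Nonlinear drive value -> linear light, clamping negative
    drive to zero (it can't excite the phosphor)."""
    return max(c, 0.0) ** gamma
```

With gamma 2.2, a 50% drive comes out near 22% linear light, matching the figure above. Only after this step would you convert linear RGB through the FCC primaries to XYZ/Luv.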

Drag wrote:

YIQ can produce negative values for R, G, and B. I have no clue how to apply a gamma curve to this kind of output, nor how a negative value affects the overall color.

Those RGB values are used to drive the CRT. (Actually, they're adjusted by the contrast and brightness knobs, and then used to drive the CRT, so they may not be negative by the time they get to the cathode.) Any negative values in a properly-adjusted TV set will represent a voltage too low to drive the phosphors, and are equivalent to zero. It is possible for negative values to affect the picture: way back when the local cable company broadcast the SMPTE color bars on channel 70, the darker-than-black portion of the PLUGE pulse would indeed appear darker than the black level if you turned the brightness up too high.

What does it mean to say that something "uses" a particular color space?

The NES uses a particular color space in the sense that if the PPU engineer assumed that the displaying CRT used NTSC phosphors, he would have chosen resistors for the color-signal-generating resistor ladder that produce a lower saturation level than if he assumed that the displaying CRT used SMPTE-C phosphors.

Drag wrote:

Even if modern TVs use YUV decoders, YIQ and YUV are essentially the same; the only difference is that YIQ's axes are rotated slightly.

The main difference between YUV and true YIQ decoding is that with the latter, the I signal is assumed to occupy a 1.5 MHz bandwidth (Q a 0.5 MHz bandwidth), whereas with YUV decoding, both are assumed to occupy a 0.5 MHz bandwidth. Unless you are decoding with different bandwidths for I and Q, you are not truly doing YIQ decoding, but merely "rotated YUV" decoding.
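
As an illustration only (a box filter is a crude stand-in for a real decoder filter, and the window widths are invented), the difference comes down to how aggressively each chroma axis is low-passed:

```python
def box_lowpass(samples, width):
    """Crude box-filter lowpass: a wider window means a narrower
    bandwidth. Illustrates filtering I (wide band) less
    aggressively than Q (narrow band)."""
    out = []
    n = len(samples)
    for k in range(n):
        lo = max(0, k - width // 2)
        hi = min(n, k + width // 2 + 1)
        out.append(sum(samples[lo:hi]) / (hi - lo))
    return out

# True YIQ decoding: i = box_lowpass(i_samples, width=3)  # wide, ~1.5 MHz
#                    q = box_lowpass(q_samples, width=9)  # narrow, ~0.5 MHz
# "Rotated YUV": both axes filtered with the narrow window.
```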

blargg wrote:

Can you elaborate on this? All I can think of is gamma. Does it display say rgb[FF,00,00] the same, and rgb[00,AA,00] the same, but not when combined? I know there was talk here many years ago about TVs doing some adjustment of hues in the skin-tone range of hues.

Television sets differ in many ways other than gamma. They differ at least also in

phosphor chromaticities ("So what does 100% red look like?")

white point ("What does 100% white look like?")

behavior with out-of-spec signals.

The first two points define the color space. Three hexadecimal RGB values say nothing about how a color actually looks until you specify a color space. If none is specified, modern systems assume sRGB. The last point is of particular importance for home computers and consoles. Signals can be out-of-spec with regards to

the peak-to-peak signal level

the sync level relative to blanking level

the white level relative to blanking and/or sync

the color burst amplitude

oversaturated red/green/blue levels after decoding

In the case of the NES, pretty much everything except the peak-to-peak signal level is wrong. Since the NTSC standard does not define how to deal with non-standard signals, this should be the greatest source of variation between different television sets and computer images. In particular, an analogue CRT can be expected to react very differently to a >100% red/green/blue signal than a digital device (which will probably just clip).

For now, I've updated the palette generator. Some of the new tweak settings are from suggestions. The hue tweak was mine, because even though a hue tweak of 0.0 should represent the exact hues being sent, it still doesn't look right unless I shift them by -0.15. Your mileage may vary.

I agree. With the color burst being at $x8, there is no red. Just pink and orange, with nothing in between.

Quote:

The main difference between YUV and true YIQ decoding is that with the latter, the I signal is assumed to occupy a 1.5 MHz bandwidth (Q a 0.5 MHz bandwidth), whereas with YUV decoding, both are assumed to occupy a 0.5 MHz bandwidth. Unless you are decoding with different bandwidths for I and Q, you are not truly doing YIQ decoding, but merely "rotated YUV" decoding.

Some machines even use 1.5 MHz for U and V, which is pretty stupid since you are left with only 2 MHz of Y bandwidth, and both color axes have a cropped upper sideband due to the 4.2 MHz channel limit.

Has it been confirmed that color burst is always at the same phase as color $8? I'm asking because of the several Famicom and Twin Famicom consoles I have, each outputs colors with a slightly different hue shift. In particular, while the -G 2C02 revision seems to place burst at color 8 +/- 0 degrees, the -H 2C02 (in an AN-505BK Twin Famicom) seems to output burst at 8 -10 degrees, equivalent to a palette setting of +10 degrees, which is very greenish.

psycopathicteen wrote:

which is pretty stupid since you are left with only 2 MHz of Y bandwidth

Not with a notch or comb filter.

psycopathicteen wrote:

and both color axes have a cropped upper sideband, due to the 4.2 MHz channel limit

... which doesn't exist in studio applications. (Leaving me to wonder what a broadcast TV station does when playing back a composite studio tape --- just crop the upper sideband, or decode and reencode with narrowband color difference signals before transmission?)

Edit: found the answer myself.

SMPTE EG 27 wrote:

When this signal is transmitted, a low-pass filter in the transmitter bandwidth limits the luminance (Y) signal and the upper sidebands of the color-difference signals (either B-Y and R-Y or I and Q) to 4.2 MHz. Transmission of equal-bandwidth color-difference signals to the receiver has the effect of limiting the recoverable chroma bandwidth to 0.6 MHz as a result of the truncation of the upper sidebands of the chroma modulation in the transmitter’s 4.2 MHz filter.

Quote:

Has it been confirmed that color burst is always at the same phase as color $8? I'm asking because of the several Famicom and Twin Famicom consoles I have, each outputs colors with a slightly different hue shift. In particular, while the -G 2C02 revision seems to place burst at color 8 +/- 0 degrees, the -H 2C02 (in an AN-505BK Twin Famicom) seems to output burst at 8 -10 degrees, equivalent to a palette setting of +10 degrees, which is very greenish.

In the digital domain, it's definitely always phase $8, and there's no ability to specify anything other than some multiple of 30°. Whether subsequent analog effects skew the phase afterwards is something still under discussion (e.g. viewtopic.php?t=10101).
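
In other words, the phase relationship reduces to something like this (the helper is hypothetical, and the direction of hue rotation is a convention):

```python
def nes_hue_phase_deg(hue_index, burst_index=0x8):
    """Phase of an NES hue relative to colorburst, in degrees.
    Hues advance in 30-degree steps and burst sits at hue $8
    (per the digital-domain claim above)."""
    return ((hue_index - burst_index) % 12) * 30
```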

As far as I've been able to tell from the discussions, it really looks like there shouldn't be an appreciable phase error between colorburst and other voltages. Due to the common collector amplifier that's certainly present in the NES, and in the schematic for the Famicom, the output impedance shouldn't vary appreciably by output voltage.
