
ArmageddonLord writes with this news from the IEEE Spectrum, reporting on display industry gathering Display Week: "Liquid crystal displays dominate today's big, bright world of color TVs. But they're inefficient and don't produce the vibrant, richly hued images of organic light-emitting diode (OLED) screens, which are expensive to make in large sizes. Now, a handful of start-up companies aim to improve the LCD by adding quantum dots, the light-emitting semiconductor nanocrystals that shine pure colors when excited by electric current or light. When integrated into the back of LCD panels, the quantum dots promise to cut power consumption in half while generating 50 percent more colors. Quantum-dot developer Nanosys says an LCD film it developed with 3M is now being tested, and a 17-inch notebook incorporating the technology should be on shelves by year's end."

This technology is nothing new. It's been used heavily since the sixties to bring out vivid colors in all manner of displays (it's actually even older than traditional color TV displays). Sometimes they refer to the technology as microdots [wikipedia.org]. I'm not sure I need an LSD screen yet, or one that uses a PCB bus instead of a PCI bus.

Because the energy levels of the electrons are at quantum levels. They transition between these levels and emit light. This is an absolutely correct usage of the word "quantum". You are a foolish troll.

How is this different from anything else that absorbs and releases photons as electrons move up and down energy levels? In other words, how is this distinguished from everything else that has atoms and electrons? Don't the electrons in neon jump between quantum levels when neon is excited? So do I have a quantum beer sign? Still seems to be a buzzword.

The term is related to Quantum Well and Quantum Wire. A quantum well is a system where particles (electrons) are confined to move in 2D by two very large potential barriers on either side of the well. It's generally one of the first systems studied in quantum mechanics. Quantum wires are like quantum wells except the potential barriers also exist in a second dimension, so that the particle is confined to move in 1D along the "wire". A quantum dot is a small box which is confined by potential barriers in all directions so that the electron can only exist within the extremely small dot.

Obviously quantum dots are going to be around the nm range so that they can actually confine the particles in any meaningful sense, but the point is the effects that QM predicts for that particular configuration. The size and shape of the dot allows us to precisely tweak the energy levels and wavefunction symmetries involved, something fairly particular to the "nano 3D potential barrier" system.
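The size-tunability described above can be sketched with the textbook particle-in-a-box model. This is a deliberately crude illustration, not a real quantum-dot calculation: it uses the bare electron mass and ignores the semiconductor band gap, so the absolute wavelengths come out far redder than real dots emit. The point is only the 1/L² scaling: shrink the box, raise the energy, blue-shift the light.

```python
# Sketch: infinite square well ("particle in a box") energies, the
# simplest model of carrier confinement in a quantum dot.
# E_n = n^2 h^2 / (8 m L^2) -- energy rises as the box shrinks,
# which is why smaller dots emit bluer light.
# Caveat: bare electron mass, no band gap; absolute numbers are toy values.

H = 6.626e-34      # Planck constant, J*s
M_E = 9.109e-31    # electron mass, kg
C = 2.998e8        # speed of light, m/s
EV = 1.602e-19     # J per electron-volt

def well_energy_ev(n, width_m):
    """Energy of level n for an electron in a 1D infinite well, in eV."""
    return n**2 * H**2 / (8 * M_E * width_m**2) / EV

def transition_wavelength_nm(width_m, n_hi=2, n_lo=1):
    """Photon wavelength for the n_hi -> n_lo transition, in nm."""
    de = (well_energy_ev(n_hi, width_m) - well_energy_ev(n_lo, width_m)) * EV
    return H * C / de * 1e9

for width_nm in (3.0, 4.0, 5.0):
    lam = transition_wavelength_nm(width_nm * 1e-9)
    print(f"{width_nm:.0f} nm well -> {lam:.0f} nm photon")
```

Running it shows the smaller well emitting the shorter wavelength, which is the whole trick behind tuning dot color by size.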

Well, yes and no: the chart is technically not wrong if you have a single-frequency light source like a laser. The trouble is that most real-world objects emit a spectrum of light. This chart [wikimedia.org] shows the cone response relative to frequency, so a cone's response is an integral of spectrum × sensitivity. The problem is that in all common current display technologies (CRT, LCD, LED, OLED, 3-chip DLP) you only have a fixed number of frequencies to work with. For example, say you have red (600nm), green (540nm) and blue (440nm). It turns out you can't actually produce all combinations with just three wavelengths the way real-world objects do with infinite wavelengths.

The reason for this, if you look at the response chart, is that the curves overlap; you can't simply decompose them into three components you can set individually. Any wavelength you send to stimulate the M cones also stimulates the S or L cones. And our vision is particularly good at picking up on those differences; it's a two-stage process, as illustrated here [wikimedia.org]. Even if the mix in the SML cones is mostly right, the Cg and Cb cells are extremely good at picking up on differences in the relative mix. Ideally you'd like more wavelengths, or white light plus a color wheel as used in single-chip DLP, but it's not that easy, and you need a signal with the extended information, like xvYCC.
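The "integral over spectrum × sensitivity" point can be made concrete with a small numeric sketch. The Gaussian curves below are rough stand-ins for the real S/M/L sensitivities (the peak positions and widths are assumptions, not measured data), but they show the overlap problem: even a pure green laser line lights up the L cones almost as much as the M cones.

```python
# Sketch: a cone's response is the integral of the incoming spectrum
# weighted by that cone's sensitivity curve. The Gaussians below are
# rough stand-ins for the real S/M/L responses, not measured data.
import math

def gaussian(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

# Assumed peak sensitivities (nm): S ~ 445, M ~ 540, L ~ 565.
CONES = {"S": 445.0, "M": 540.0, "L": 565.0}
SIGMA = 35.0  # assumed spectral width of each curve

def cone_response(spectrum, cone):
    """Numerically integrate spectrum(lambda) * sensitivity(lambda)."""
    mu = CONES[cone]
    total, step = 0.0, 1.0  # 1 nm steps over the visible range
    lam = 380.0
    while lam <= 700.0:
        total += spectrum(lam) * gaussian(lam, mu, SIGMA) * step
        lam += step
    return total

# A narrow "green" laser line vs. a broad reflected spectrum:
laser = lambda lam: 1.0 if abs(lam - 532.0) < 1.0 else 0.0
broad = lambda lam: gaussian(lam, 532.0, 60.0)

for name, spec in (("532 nm laser", laser), ("broad green", broad)):
    s, m, l = (cone_response(spec, c) for c in "SML")
    print(f"{name}: S={s:.2f} M={m:.2f} L={l:.2f}")
```

Even with these toy curves, the laser's M response comes with a large L response riding along, which is exactly why the three channels can't be set independently.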

Are you sure about this? While you can't create an arbitrary response with just a single frequency, different linear combinations of three single frequencies should be enough to create all possible responses. This is basic linear algebra, and it is equivalent to saying that you only need three linearly independent vectors to span a 3-dimensional space. The lum, Cg, Cb cells only process the *output* of the trichromatic cells, and so do not affect this picture. Of course, if you had tetrachromatic vision, one

That is true, but your analysis is wrong; it is not the mathematical equivalent, because we can only send light with positive intensity. Say you had f1 = [1,0,0], f2 = [0,1,1/2] and f3 = [0,0,1]. With a linear combination of those vectors you can express [0,1,0], but only by using a negative intensity, which is impossible.

I don't think that's quite right. If you had three sources of perfectly pure colours positioned at the very peaks of the responses of our rods and cones, we could approximate very close to every colour with a combination of the three. Our eyes effectively only pick up 3 colours, like a camera: monochrome sensors with responses based on the curves above.

As such our interpretation of a pure cyan at say 500nm can be made up of appropriate peaks at 440nm, 530nm, and 590nm as the eye will simply integrate the r

Oh, and even if what you were saying were true, it wouldn't really change the resolution at all. That's not how sampling works. If your display is 1024x768, you have that many pixels; making each pixel able to show any color wouldn't really increase the resolution. Your ability to resolve spatial changes in color is lower than in intensity, so adding "color spatial resolution" is not equivalent to adding "intensity spatial resolution". This is why many encoding schemes use more bits for intensity than for color information: it's more efficient.
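The "more bits for intensity than color" trick is exactly what 4:2:0 chroma subsampling does in common video codecs. Here's a minimal sketch, assuming the BT.601 full-range RGB-to-YCbCr conversion: luma stays at full resolution while each 2x2 block of chroma collapses to one averaged sample.

```python
# Sketch: 4:2:0-style chroma subsampling -- keep luma (Y) at full
# resolution, average each 2x2 block of chroma. This is the standard
# trick behind "more bits/pixels for intensity than for color".

def rgb_to_ycbcr(r, g, b):
    """BT.601 full-range RGB (0..255) -> YCbCr conversion."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def subsample_420(plane):
    """Average each 2x2 block; plane dimensions assumed even."""
    h, w = len(plane), len(plane[0])
    return [
        [
            (plane[y][x] + plane[y][x + 1]
             + plane[y + 1][x] + plane[y + 1][x + 1]) / 4.0
            for x in range(0, w, 2)
        ]
        for y in range(0, h, 2)
    ]

# A 2x2 image: four full-resolution luma samples, one averaged chroma sample.
pixels = [[(255, 0, 0), (0, 255, 0)], [(0, 0, 255), (255, 255, 255)]]
ycc = [[rgb_to_ycbcr(*p) for p in row] for row in pixels]
cb_plane = [[px[1] for px in row] for row in ycc]
cb_small = subsample_420(cb_plane)
print(f"4 luma samples kept, chroma reduced to "
      f"{len(cb_small) * len(cb_small[0])} sample(s)")
```

For this 2x2 image the four Cb values collapse to a single average, halving the chroma data in each dimension while the luma plane is untouched.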

If you're going to call somebody out for being wrong, you might want to actually do some research. Those 1024x768 pixels are made up of basically triple that in terms of red, green and blue sites that emit the actual light. If you replace those with ones that can handle the entire gamut you would need a third of them and you lose the overhead from having to have individual shutters on each one.

Which would then give you enough space to triple the resolution (which may be what the GGP was driving at) or, if not, would increase sharpness in any case. As demonstrated by the effectiveness of Cleartype, RGB subsampling does have an impact on perceived resolution, and RGB subpixel techniques can be applied to colour images as well as text.

I guess you think the whole bit about primary colors is just made-up stuff, huh? So when you split up white light with a prism you get an almost infinite range of colors only 1nm apart, instead of 7 very definite colors (Red Orange Yellow Green Blue Indigo and Violet).

What good is light with a narrow spectral bandwidth?? The point of a TV is to make images life-like. Light sources in real life have wide bandwidth, and objects generally reflect relatively large swaths of frequency. It would be a nightmare to produce images using lots of pixels with 1 nm bandwidth... it's much better to just choose 3 or 4 primaries and mix them... but mixing works just fine with wide bandwidth primaries.

Mixing works much better with tight primaries. sRGB cannot correctly depict the selective yellow headlights of an old French car, the ubiquitous green LEDs of early '90s electronics, the GaN blue LEDs Shuji Nakamura cursed us with since, nor an LPS streetlight. Not what I'd call "just fine".

A single frequency peak: that is a pure colour. When you look at a typical incandescent light, it is a broadband signal spread across the visible range and well into infrared (hence they're inefficient at lighting a room despite being a very efficient way of converting electrical energy into photons). For an LCD displaying pure red, the peaks actually look rather fat around the red, with minor peaks in the green and blue ranges, as well as the backlight bleeding through the display. These imperfections are what makes t

Soooo, any idea what they mean by "50% more colours"? Do these allow the screen to display a wider set of the visible spectrum than LCD screens? Do they allow the same set but at a higher bitrate? Do they simply display the desired colour more precisely? Is this "extra" in the range that consumer GPUs and OSes can display?

The whole field of computing is built on three-primary color specification anyway: either RGB, or HSV, or YUV, or some variant of them. Or CMYK, in which the K is really a fudge factor used to account for real inks not behaving like mathematically ideal inks. So even if someone built a display with a wider gamut, good luck finding any content to use it. I suspect this is just marketing being allowed to write the press report.
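The "K as fudge factor" point shows up directly in the textbook naive RGB-to-CMYK conversion (the simple formula, not what a real print workflow with ICC profiles does): K is extracted as the gray component that C, M and Y share, precisely because laying down all three real inks doesn't produce a true black.

```python
# Sketch: the textbook naive RGB -> CMYK conversion. K is extracted as
# the common "gray" component of C, M, Y -- the fudge factor that
# compensates for real inks not mixing to a true black.

def rgb_to_cmyk(r, g, b):
    """r, g, b in 0..1. Returns (c, m, y, k), each in 0..1."""
    c, m, y = 1.0 - r, 1.0 - g, 1.0 - b
    k = min(c, m, y)                 # shared gray -> print as black ink
    if k == 1.0:
        return 0.0, 0.0, 0.0, 1.0    # pure black: K only
    return ((c - k) / (1 - k), (m - k) / (1 - k), (y - k) / (1 - k), k)

print(rgb_to_cmyk(1.0, 0.0, 0.0))   # pure red: no K needed
print(rgb_to_cmyk(0.5, 0.5, 0.5))   # mid gray: everything moves into K
```

Note how a neutral gray ends up entirely in the K channel, while a saturated primary uses no K at all.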

RGB has nothing to do with computing, but everything to do with the physics of light. Printing uses CMYK, also because of the physics of light. The difference is that RGB applies when light is emitted and CMYK when it is reflected. That is why blue and yellow paint make green, but blue and yellow light make white. With light, mixing colors is additive; with painting/printing, it is subtractive.
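The additive/subtractive distinction can be modeled in a few lines: light sources add per channel, while pigments multiply reflectances (each pigment absorbs what the other let through). The reflectance values below are illustrative guesses, not measured pigment data; the green result depends on real "blue" paint reflecting a little green as well.

```python
# Sketch: additive vs. subtractive mixing. Light adds per channel;
# pigments multiply reflectances (each absorbs what the other passed).
# The reflectance values below are illustrative, not measured.

def add_light(a, b):
    """Additive mix of two RGB light sources (clamped to 1.0)."""
    return tuple(min(1.0, x + y) for x, y in zip(a, b))

def mix_pigment(a, b):
    """Subtractive mix: the light surviving both pigments' absorption."""
    return tuple(x * y for x, y in zip(a, b))

blue_light, yellow_light = (0.0, 0.0, 1.0), (1.0, 1.0, 0.0)
print("light:", add_light(blue_light, yellow_light))    # -> white

# Real "blue" paint reflects a little green too, hence the green result.
blue_paint, yellow_paint = (0.0, 0.4, 0.9), (0.9, 0.9, 0.1)
print("paint:", mix_pigment(blue_paint, yellow_paint))  # green dominates
```

Same two "colors", opposite mixing rules, different results: white from light, a muddy green from paint.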

The RGB wavelengths were chosen only because, by mixing them in appropriate ratios, it is possible to reproduce the perception of most colors to human vision. If there were any non-human animals smart enough to judge, they'd tell you that all the colors on television look wrong. Humans see subjective colors, not spectrographs. To represent a color with precision would require storing the entire spectrum, which is impractical.

The mathematics of CMYK say that if you have full use of C, M and Y all absorbing yo

I have a buddy who used to teach ophthalmic surgery at Georgetown U. and did research in this area. He also did computer animation as a hobby (one that actually made him good money, to the point where I think his teaching later became the hobby). I wish I could locate one of the papers, but most of his work was done pre-WWW and probably has never been put up. His info showed that most people can resolve 8 bits of red, 9 bits of green, and 8 bits of blue. That extra bit sucks from a memory-usage point of view, though,

But expensive to buy for sure. And will only be slightly cheaper when the next superior tech is at the door. Rinse and repeat...

Well, yes, that's how capitalism works. Someone invents something useful, and then they try to maximize the profit from their labor by selling it for as much as the market will bear. Eventually the price comes down due to competition. You can either pay top dollar for the new hotness now, or wait a while for the price to come down, your choice.

It's a feature. Note that you can buy a $99 LCD display at Walmart today that performs better in all respects than the $9,000 LCD display of the same size you cou

Gamut is about being able to represent all colors that are perceptible in the real world. You're making an argument that because of immature technology, we should handicap our displays. It's the dumbest thing I've read in a long time on this site. The difficulties with neutral color are based on 1) poor calibration, and, more importantly, 2) insufficient quantization -- due to the extended gamut you need around 10 to 12-bit quantization _per channel_ to have sufficient precision. Considering most LCDs and O

Why do we need this? The power savings is a plus, but the human brain can only "see" and distinguish an estimated 10 million colors ( http://hypertextbook.com/facts/2006/JenniferLeong.shtml [hypertextbook.com] ), and current display technology produces 16.7M colors (24-bit True Color). Having a display show 24M colors (a 50% increase) won't look any different, since current technology already exceeds our ability to perceive the differences.

You answered your own question. It's worth it for the power savings, IMO; the fact that it shows colors possibly better than we can see them is just a bonus.

Apparently, from all the other posts, the 16.7M colors we can get now do not overlap 100% with the 10M colors we can see. I believe this is called the gamut of colors being produced vs. the gamut we can see.

Supposedly these light emitters can create a gamut of light frequencies (colors) that overlaps more, and thus can produce more colors (that we can see).

Because the gamut [wikipedia.org] of 24-bit RGB doesn't cover the entire range of visible colors and intensities. While we can only distinguish ~ 8M colors, we can distinguish a huge range of intensities. 24-bit displays cover 16M colors AND intensities, so in this case, 16M is not > 8M because they're counting different things.

While current displays are adequate for most purposes, they do not display all of the colors we can see, nor all the intensities we can see. Typical displays only cover 45%-75% of the AdobeRGB (1998) color-space [wikipedia.org], which itself is a subset of the visible gamut. Some (more expensive) displays cover a greater percentage of the visible range, but none cover the entire range.
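The coverage comparison above can be made concrete with the shoelace formula over the CIE xy chromaticities of each space's primaries. Triangle area in xy is only a crude proxy for perceptual gamut size (a uniform space like CIELAB would be fairer), but it does show that AdobeRGB's triangle is substantially larger than sRGB's.

```python
# Sketch: comparing color-space sizes via the shoelace formula over the
# CIE xy chromaticities of each space's primaries. Area in xy is a crude
# proxy for gamut size, but shows AdobeRGB covers more than sRGB.

def triangle_area(p1, p2, p3):
    """Shoelace formula for the area of a triangle in the xy plane."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2.0

# Standard primary chromaticities: R, G, B as (x, y).
SRGB = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]
ADOBE_RGB = [(0.64, 0.33), (0.21, 0.71), (0.15, 0.06)]

a_srgb = triangle_area(*SRGB)
a_adobe = triangle_area(*ADOBE_RGB)
print(f"sRGB area: {a_srgb:.4f}, AdobeRGB area: {a_adobe:.4f}")
print(f"AdobeRGB triangle is {a_adobe / a_srgb:.0%} the size of sRGB's")
```

The two spaces share the same red and blue primaries; all of AdobeRGB's extra area comes from pushing the green primary further out.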

As stated in another post, the color problem you are referencing is one of physics -- producing the various wavelengths. What we see, however, is one of biology and the human brain cannot differentiate between similar wavelengths. Therefore, including all of them does not mean that we will see the image any better. Intensity is an issue, but the summary is talking about color, not intensity, although they are related.

The limiting factor in all of this is not going to be the production of the visible wavel

Yes, the human eye and the brain are going to be the limits. And given the range of intensities (e.g. contrast) one can see at any given time, and the ability to discern continuous color gradients, it appears that we'll need somewhere between 24 and 36 bits driving displays with contrast ~5000:1, using at least 3 narrow-band color sources centered on the frequencies to which the eye's cones respond, and capable of delivering more than 1000 lux [wikipedia.org] (the brightness of an overcast day) at the viewer's position
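The bit-depth side of that estimate can be sanity-checked with Weber's law. Assuming adjacent brightness levels must differ by about 1% to avoid visible banding (the 1% just-noticeable-difference figure is an assumption, not a measurement), a 5000:1 contrast range needs roughly 10 bits per channel, consistent with the 10-to-12-bit figure mentioned elsewhere in the thread.

```python
# Sketch: bits per channel needed for a 5000:1 contrast range if,
# per Weber's law, adjacent levels must differ by ~1% to be invisible.
# The 1% just-noticeable difference is an assumption, not a measurement.
import math

def bits_needed(contrast_ratio, weber_fraction=0.01):
    """Levels spaced by (1 + JND) from darkest to brightest."""
    steps = math.log(contrast_ratio) / math.log(1.0 + weber_fraction)
    return steps, math.ceil(math.log2(steps))

steps, bits = bits_needed(5000.0)
print(f"{steps:.0f} distinguishable steps -> {bits} bits per channel")
```

Push the contrast to HDR-like ranges or tighten the JND and the same formula lands in 12-bit territory.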

This is one of the dumbest comments I've read on slashdot. You're confusing quantization with extent. The article is very obviously talking about covering a larger part of the visible color gamut. RGB is represented by the triangle in this graph: http://upload.wikimedia.org/wikipedia/commons/8/8f/CIExy1931_sRGB.svg [wikimedia.org] You'll note it doesn't even cover 50% of visible colors. Most TVs and displays can't even reproduce the full RGB space. The 24-bit/16.7M merely refers to the number of colors and affects how smooth gradients are, and has nothing to do with the range of colors that can be reproduced.

For fuck's sake, I didn't expect this level of stupidity from someone with a sub-1M user ID!

It has nothing to do with how much the TV or screen can reproduce. It has everything to do with how well the brain can discriminate the various wavelengths. So while it is theoretically true that the technique may produce more colors, whatever that means exactly, if the human brain cannot discriminate between them, what good does it do?

This is not an issue of physics, but of biology. But then maybe I'm just too much of a dumb fuck to know what I'm talking about.

It _is_ an issue of biology. And that's exactly what the larger encompassing graph represents: the perceptual color space for humans. Humans can see all colors in that; RGB can only represent the colors in the triangle, and most monitors are a subset of the triangle. This has nothing to do with physics so I'm not sure why you brought physics into the discussion. Next time I recommend counting to 10 before letting an itchy Submit-clicking finger take action. It gives you time to save later embarrassment.

It's not the number of colors but the color gamut. You seem to lack reading comprehension. The issue is not quantization (bit depth) but the saturation that can be achieved. One is completely unrelated to the other.

It's not the number of colors but the color gamut. You seem to lack reading comprehension. The issue is not quantization (bit depth) but the saturation that can be achieved. One is completely unrelated to the other.

And you seem to lack comprehension of simple concepts. I said I want pixels, not color. Hell, give me monochrome, but give me 19200x12000.