No illusions: toward brighter flat displays

EET: What aspects of the eye's physiology helped you construct subpixel-rendering algorithms?

Elliott: For one thing, the focal plane is different for the red, green and blue receptors. Deep blues can be as much as 1.5 diopters out of focus, so they are of limited value in making a display appear sharp. The spatial-frequency response of the perceptual channels also shows that red-, green- and blue-cell outputs vary with how densely the cells are packed, which in turn depends on their location on the retina. The eye is less sensitive to blue resolution because it has no blue receptors in the fovea, the highest-resolution region of photoreceptors at the center of the retina. The fovea is also covered by a yellow filter, called the macula lutea, that blocks blue light that might otherwise stimulate the green receptors there. If there were blue cells in the fovea, their output would form a negative image, because the density of cells there is too high: the optical modulation transfer function goes negative for blue wavelengths at the high spatial frequencies found in the fovea. The peripheral-vision cells, however, have wide enough spacings to tolerate the negative dips, so the blue cells sit on the periphery. The lesson? Blue subpixels don't count in high resolution.

EET: In your lectures, you say there are 20 red and 10 green cells for every blue retinal cell in the human eye. Is that ratio in the eye directly transferred to the ratio of different-colored subpixels in PenTile technology?

Elliott: No. Our displays are more closely aligned with the resolution of the three perceptual channels that come out of the retinal processing: luminance, red vs. green chrominance and yellow vs. blue chrominance. These channels, at their peak usefulness, are more like 5:2:1. Our layouts match this very well.

Though our subpixel layouts may use fewer blues, we also add a white subpixel. So we have four colors, red, green, blue and white. The white subpixels stimulate both the red- and green-sensitive photoreceptors to give higher resolution in the luminance channel, as well as add brightness to the display. Our algorithms are also matched to the perceptual-channel resolutions.

EET: So you take whatever percentage is common to the red, green and blue channels and transfer it to the white pixel, making the image appear brighter.

Elliott: Approximately, but not exactly, because there are many more factors involved. For instance, higher perceived brightness enables the brain to integrate over smaller areas, thereby achieving greater sharpness. At the lower luminance of traditional LCDs [that are built] without a white pixel, the brain is forced to integrate over more retinal cells to interpolate, which makes our displays appear sharper even though we use fewer pixels. Also, to keep the color the same, one must adjust the red, green and blue values as one adjusts the white, boosting some and reducing others.
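The "common percentage" idea in the question can be sketched as a naive RGB-to-RGBW mapping. This is a textbook illustration only, not the actual PenTile algorithm, which, as Elliott notes, also rebalances the remaining channels:

```python
def rgb_to_rgbw(r, g, b):
    """Naive RGB->RGBW conversion (illustrative, not PenTile's
    proprietary method): the component common to all three channels
    is carried by the white subpixel instead.
    Inputs and outputs are floats in [0, 1]."""
    w = min(r, g, b)  # the "common percentage" of R, G and B
    return r - w, g - w, b - w, w
```

For a mid-gray input such as `(0.5, 0.5, 0.5)`, everything moves to white: `(0.0, 0.0, 0.0, 0.5)`. Because the white subpixel is typically more efficient than color-filtered subpixels, this is also where the brightness gain comes from.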

This also means that there is more than one combination of RGBW that gives the same perceived color and brightness. These different combinations are called metamers. Our algorithms adjust the metamer value to further increase the sharpness of the image, moving values between the red, green and blue subpixels and the white, depending on the image and how it lands on the subpixel mosaic, so that each subpixel reconstructs the luminance channel of the image.
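The metamer family Elliott describes can be illustrated with a toy parameterization (hypothetical, for intuition only): sliding a fraction of the common RGB component into the white subpixel leaves the reconstructed color unchanged.

```python
def metamer(r, g, b, alpha):
    """One member of the RGBW metamer family for a target RGB color.
    alpha in [0, 1] moves shared luminance from the colored subpixels
    (alpha=0) toward the white subpixel (alpha=1). The reconstructed
    color (r'+w, g'+w, b'+w) is the same for every alpha -- these are
    metamers. Toy model, not the actual PenTile algorithm."""
    w = alpha * min(r, g, b)
    return r - w, g - w, b - w, w
```

A subpixel-rendering algorithm is then free to pick, per pixel, whichever member of the family best places luminance energy on the local subpixel mosaic.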

EET: You say that PenTile displays use 33 to 50 percent fewer pixels yet can render an equal number of closely spaced lines as normal, striped RGB displays. How does that math work?

Elliott: We use an average of just two subpixels, or even one and a half, instead of three for red, green and blue, but we also have a white pixel in the matrix. But the layout is in the form of a checkerboard mosaic [rather than stripes], interweaving the colors, so that full-color lines may be rendered with fewer subpixels in all orientations. The other side of the math equation is the subpixel-rendering algorithms that place the luminance information onto every subpixel, down to the subpixel resolution, using the same difference-of-Gaussians (DoG) wavelet function as the eye, but spread the chrominance information over a larger number of subpixels. [This ensures] proper color appearance, which matches the way that the eye perceives luminance and chrominance resolution. The new layouts work hand in hand with the new algorithms. And both work hand in hand with the retinal processing of the eye.
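The difference-of-Gaussians function Elliott mentions is a standard model of the center-surround receptive fields in retinal processing. As a rough 1-D illustration only (the actual PenTile filter kernels are not public), such a band-pass kernel might be sketched like this:

```python
import numpy as np

def dog_kernel(sigma_center, sigma_surround, radius):
    """1-D difference-of-Gaussians: a narrow excitatory center minus a
    wider inhibitory surround, each normalized to unit area so the
    kernel has zero DC response -- it passes edges and fine detail,
    not flat fields. Illustrative model, not PenTile's actual filter."""
    x = np.arange(-radius, radius + 1, dtype=float)
    center = np.exp(-x**2 / (2 * sigma_center**2))
    surround = np.exp(-x**2 / (2 * sigma_surround**2))
    return center / center.sum() - surround / surround.sum()

# Band-pass filter a row of luminance samples, as a subpixel-rendering
# pipeline might before resampling onto the subpixel mosaic.
luma = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0])
filtered = np.convolve(luma, dog_kernel(1.0, 2.0, 6), mode="same")
```

Because the kernel integrates to zero, it responds strongly at the edges of the bright bar and not at all in the uniform regions, mirroring how the eye's luminance channel emphasizes spatial detail over absolute level.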