The evolution of computer displays

We've come a long way since the days of blinking lights and teletypes. Ars …

Jim Belcher
- Jan 24, 2011 4:00 am UTC

Microprocessors show the way forward

What was really needed was something with more flexibility that could read the textual input from a computer, and create the necessary contents in RAM. Better yet, perhaps it could also read the input from a keyboard, and serve as a terminal, both sending text to the computer and displaying it.

They just needed one more new development: the microprocessor. Intel announced the first unit in 1971: the 4004, which was a rather sickly 4-bit unit. But it was still a processor, and it still fit on one integrated circuit, although a number of peripheral chips would be necessary to actually perform any useful function.

The 4004 did find some hardware control applications, but when the 8008 was introduced in 1972, and the 8080 in 1974, things took off. Here were units that could read an 8-bit ASCII code in, look up the bit pattern which should be displayed on the screen, and write it into RAM. From there, hardware could scan it and output a video signal to a display.
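
That lookup-and-write step is easy to sketch in code. Below is a minimal illustration, assuming a hypothetical 8x8 font table and a one-bit-per-pixel framebuffer; the names and layout are illustrative, not taken from any actual terminal's firmware.

```c
#include <stdint.h>

#define COLS     40   /* character cells per row       */
#define ROWS     24   /* character rows on screen      */
#define GLYPH_H   8   /* scan lines per character cell */

/* One byte per scan line: bit 7 is the leftmost dot of the glyph. */
static const uint8_t glyph_A[GLYPH_H] = {
    0x18, 0x24, 0x42, 0x42, 0x7E, 0x42, 0x42, 0x00
};

/* Framebuffer laid out as it will be scanned: one byte holds 8 horizontal dots. */
static uint8_t framebuffer[ROWS * GLYPH_H][COLS];

/* Copy a glyph's bit pattern into the character cell at (col, row).
 * Display hardware then scans framebuffer[] top to bottom, shifting each
 * byte out as video dots. */
static void draw_glyph(int col, int row, const uint8_t glyph[GLYPH_H])
{
    for (int line = 0; line < GLYPH_H; line++)
        framebuffer[row * GLYPH_H + line][col] = glyph[line];
}

int main(void)
{
    draw_glyph(0, 0, glyph_A);   /* put an 'A' in the top-left cell */
    return 0;
}
```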

This opened a couple of other possibilities. Some manufacturers incorporated special characters that could be used to build up an image. True, the image was crude, but it was better than no image at all.

Such characters would, for example, take the shape of a square, a rectangular horseshoe (facing up, down, right, or left), or a simple "L" with roughly equal-length sides, also facing up, down, left, or right. Add to these horizontal and vertical lines, plus slants leaning left or right, and an image of sorts could be built up (see the sketch below). Early graphics were costly, and if the application was cost-sensitive, such primitive images were acceptable to some users.
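
As a toy illustration of building a picture from such characters, this sketch assembles a rectangle in a character-cell grid. The actual block-graphics codes varied from machine to machine, so plain ASCII stand-ins are used here.

```c
#include <stdio.h>

#define W 20   /* character cells across */
#define H 6    /* character rows down    */

int main(void)
{
    char screen[H][W + 1];

    /* Fill the character grid with blanks. */
    for (int y = 0; y < H; y++) {
        for (int x = 0; x < W; x++)
            screen[y][x] = ' ';
        screen[y][W] = '\0';
    }

    /* Build a rectangle from corner, horizontal, and vertical characters. */
    for (int x = 1; x < W - 1; x++) {
        screen[0][x] = '-';
        screen[H - 1][x] = '-';
    }
    for (int y = 1; y < H - 1; y++) {
        screen[y][0] = '|';
        screen[y][W - 1] = '|';
    }
    screen[0][0] = screen[0][W - 1] = '+';
    screen[H - 1][0] = screen[H - 1][W - 1] = '+';

    for (int y = 0; y < H; y++)
        puts(screen[y]);
    return 0;
}
```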

But where alphanumerics are, graphics often follow. The same microprocessor that could look up the pixel arrangement that represented a character could also calculate which pixels to draw to make straight or curved lines. In fact, the same algorithms could be used that were used with the pen plotters. Surfaces could be drawn, and the area inside filled.
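
The classic example of such a plotter-derived algorithm is Bresenham's line algorithm, which steps along a line using only integer additions. A minimal sketch, assuming a small one-bit framebuffer and an illustrative set_pixel() helper (not any particular machine's routine):

```c
#include <stdint.h>
#include <stdlib.h>

#define WIDTH  320
#define HEIGHT 200

static uint8_t framebuffer[HEIGHT][WIDTH / 8];   /* 1 bit per pixel */

static void set_pixel(int x, int y)
{
    if (x >= 0 && x < WIDTH && y >= 0 && y < HEIGHT)
        framebuffer[y][x / 8] |= (uint8_t)(0x80 >> (x % 8));
}

/* Integer-only Bresenham line from (x0, y0) to (x1, y1). */
static void draw_line(int x0, int y0, int x1, int y1)
{
    int dx = abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;

    for (;;) {
        set_pixel(x0, y0);
        if (x0 == x1 && y0 == y1)
            break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }
        if (e2 <= dx) { err += dx; y0 += sy; }
    }
}

int main(void)
{
    draw_line(10, 10, 300, 150);   /* one diagonal across the framebuffer */
    return 0;
}
```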

Vector graphics had never been able to draw solid objects satisfactorily, nor do an adequate gray scale. Raster-scan graphics could. With a memory plane of several bits, instead of the single bit needed for alphanumerics, it was easily done. Memory contents could be read out into a digital to analog converter, and, voilà! Grayscale video!

But why stop there? Why not output color? Add more planes of memory, for a total of 12, and output the equivalent of a grayscale in each of the primary colors. Color raster scan computer graphics was born!
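
A quick back-of-the-envelope shows what those 12 planes buy: 4 bits per primary gives 16 levels each of red, green, and blue, or 16 x 16 x 16 = 4,096 distinct colors. The sketch below packs three such 4-bit intensities into a single 12-bit pixel value; the names are illustrative.

```c
#include <stdint.h>
#include <stdio.h>

/* Pack three 4-bit intensities into one 12-bit value; during scan-out each
 * nibble would feed its own digital-to-analog converter. */
static uint16_t pack_rgb444(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r & 0xF) << 8) | ((g & 0xF) << 4) | (b & 0xF));
}

int main(void)
{
    printf("levels per channel: %d, total colors: %d\n", 16, 16 * 16 * 16);
    printf("mid gray: 0x%03X\n", (unsigned)pack_rgb444(8, 8, 8));
    return 0;
}
```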

A separate memory plane was sometimes used to provide a cursor, so cursor movement could be made independent of the image of primary interest. Drawing the image was still very time-consuming, and moving the cursor meant redrawing parts of the image.
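
One way to picture that separate cursor plane: during scan-out, the cursor plane's bits are simply merged over the image plane, so the cursor can move without touching the picture underneath. The sketch below is an illustration under those assumptions, not any particular machine's hardware.

```c
#include <stdint.h>

#define WIDTH  320
#define HEIGHT 200
#define STRIDE (WIDTH / 8)

static uint8_t image_plane[HEIGHT][STRIDE];    /* the picture itself     */
static uint8_t cursor_plane[HEIGHT][STRIDE];   /* just the cursor shape  */

/* Produce one scan line for the display: wherever the cursor plane has a
 * bit set, it is merged (here, simply ORed) over the image plane. */
static void scan_line(int y, uint8_t out[STRIDE])
{
    for (int x = 0; x < STRIDE; x++)
        out[x] = (uint8_t)(image_plane[y][x] | cursor_plane[y][x]);
}

int main(void)
{
    uint8_t line[STRIDE];
    cursor_plane[100][20] = 0xFF;   /* a small cursor mark */
    scan_line(100, line);
    return 0;
}
```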

Motorola, with its 6800 family, offered a single integrated circuit (the 6847) that drew simple images and text, synchronized with the clock of its microprocessor. Other manufacturers did similar things. Although these were separate graphics processors, they were still very primitive units, and no one would mistake their displays for commercial television imagery.

Memory was less expensive, but the cost was still a consideration. Some commercial raster scan graphics techniques used a look-up table. Smaller numbers of memory planes could be used; each bit pattern would feed a lookup table that would output the large number of bits needed for a specific color. This limited the total number of colors on the screen at any one time, but it permitted selecting those colors from a much larger palette.
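
In code, the lookup-table scheme is straightforward. The sketch below assumes four memory planes (a 4-bit index per pixel) feeding a 16-entry table of 12-bit colors, so only 16 colors are on screen at once, but each can be any of 4,096; the sizes and names are illustrative.

```c
#include <stdint.h>

static uint16_t palette[16];   /* 16 entries, each a 12-bit RGB (4:4:4) value */

/* During scan-out, each pixel's 4-bit index is replaced by the palette
 * color that actually drives the DACs. */
static uint16_t lookup_color(uint8_t index)
{
    return palette[index & 0x0F];
}

int main(void)
{
    palette[1] = 0xF00;                /* entry 1: full-intensity red */
    uint16_t rgb = lookup_color(1);    /* 0xF00 would drive the DACs  */
    (void)rgb;
    return 0;
}
```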

But speed was still an issue. Early microprocessors used clocks in the 2MHz range, or less. Several clock cycles were needed to execute an instruction, and several instructions, sometimes many, were needed to draw a single element of an image. Displaying text was no longer a major challenge, but images with good resolution and color had not yet been conquered.
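
As a rough, illustrative calculation (the exact figures varied widely by machine): if writing one pixel took on the order of ten instructions at roughly five cycles each, that is about 25 microseconds per pixel on a 2MHz processor. A modest 320 by 200 image holds 64,000 pixels, so simply touching every pixel once would take around a second and a half, before any of the arithmetic needed to decide what each pixel should actually be.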

Drawing a color image with these chipsets could take several minutes. The choice was really between a good, detailed color image that would take a relatively long time to draw, and a vector image with less detail that could be drawn rapidly and manipulated in real time.

Color vector graphics existed, but it was never very satisfactory. Color could be used to distinguish between different classes of lines, for example, but it simply wasn't as useful as raster-scan color. Worse, it slowed the drawing process, allowing less detail to be created in the time available in a refresh cycle.

While vector graphics became an almost de facto standard for some applications, raster scan wasn’t standing still. Processor speed was improving: Intel introduced the 8086, and Motorola introduced the 68000. Memory speed and density grew, while prices dropped. Relatively large arrays became possible with enough speed to support a full standard TV color image. But movement, or animation, at least of a useful sort, remained elusive.

The GPU arrives

Falling back to the lessons learned from vector graphics, it became increasingly evident that the main processor in a computer needed to be general purpose. It made more and more sense to have a totally separate graphics processor, with instructions geared solely towards creating imagery. Those instructions could be smaller, simpler, and faster.

While the professional world was concerned with the commercial and industrial use of graphics, something very different was stirring public interest. The arcade game Pong hit the streets. Pong was really a hardware solution, in which a very limited number of black-and-white objects were created in raster-scan graphics; it was designed around the limited graphics a few hardware chips could generate.

The movement was limited, but the imagination was not. Suddenly, the idea existed, even amongst some of the general public, that computer graphics were possible.

This helped create a market for computer games, which increased the volume of parts sold, causing prices to drop. Computer games became not just a reason to buy standalone games, but a reason to own personal computers. The excuse for buying a personal computer may have been that it would help balance the checkbook, but buyers usually looked at the list of games available for the computer before purchasing.

Computer games needed graphics. Lots of graphics, with color and movement.

As personal computers came on the scene, they initially repeated many of the lessons that had been learned with other computers, including graphics. Finally, in 1985, the Amiga improved on the idea of Motorola’s separate graphics chip, and introduced a machine in which graphics were an integral part, with its own very carefully designed and thought-through graphics processor. Full-color animation was suddenly possible on a personal machine, right in a user's own home!

The Amiga was arguably the first computer with a standard resolution full color display

Jim Belcher

The Amiga became known for its ability to play games in full color, with better quality animation than the other machines then available. The Amiga, incidentally, used a system clock whose frequency was based on that used by commercial color television in the United States. The Amiga didn't just generate pretty pictures; it was designed to display them on standard TV monitors. It could also generate color video to commercial TV standards. Some Amigas found use at cable TV outlets, generating various forms of imagery. Graphics generated by the Amiga began to appear on network television shows.

This did not escape the notice of a small startup company, NewTek, which created something called the “Video Toaster.” The Toaster took advantage of the video properties of the Amiga to create video switching and transitions using the Amiga, with some special-purpose hardware. The Toaster/Amiga combination became a standard in small video studios. Computer-generated graphics in commercial television had arrived.

A 1992 demo of the Video Toaster 2.0

There was another, parallel development. Hard copies of computer graphics had always been a handy thing to have. High-end solutions had hard-copy devices based on various technologies. But while they sometimes produced very high quality images, they tended to be expensive.

New printer technologies, such as ink jet and laser, began to emerge that could print not only text, but imagery as well. If an array can be created in memory that can be scanned to produce an image on a screen, it can be scanned and formatted for output to a printer. These devices began to emerge, in somewhat primitive form, in the early 1980s. By the 1990s, ink jet and laser printers were spitting out computer-generated drawings and text in numbers not imagined a few years before.
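
That last point is simple enough to sketch: the same memory array that is scanned out to a display can be walked row by row and handed to a printer as raster data. In the sketch below, emit_raster_line() is a stand-in for whatever the printer's actual raster protocol required (ESC/P, PCL, and so on); the layout is assumed for illustration.

```c
#include <stdint.h>
#include <stdio.h>

#define WIDTH  320
#define HEIGHT 200
#define STRIDE (WIDTH / 8)

static uint8_t framebuffer[HEIGHT][STRIDE];   /* 1 bit per pixel */

/* Placeholder: a real driver would wrap each row in the printer's
 * raster-graphics escape sequence and send it out the port. */
static void emit_raster_line(const uint8_t *row, int bytes)
{
    fwrite(row, 1, (size_t)bytes, stdout);
}

int main(void)
{
    /* Walk the same array the display scans, top row to bottom. */
    for (int y = 0; y < HEIGHT; y++)
        emit_raster_line(framebuffer[y], STRIDE);
    return 0;
}
```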