The XGA Graphics Chip

After covering the 8514/A and its clones, it’s only appropriate to write a few words about the XGA (eXtended Graphics Array), IBM’s final attempt at establishing a PC graphics hardware standard.

The XGA was introduced on October 30, 1990, around the same time that several other companies were just starting to sell their own 8514/A clones. The XGA was a combination and superset of the VGA and 8514/A: a VGA-compatible, high-resolution, accelerated graphics chip. Initially, an XGA chip was built into the new PS/2 Model 90 and 90 XP, and it was also available as a stand-alone upgrade for existing PS/2 systems in the form of the “IBM PS/2 XGA Display Adapter/A” (a typical IBM product name). The initial price was $1,095 for an XGA with 512KB VRAM, plus an additional $350 for a memory upgrade to 1MB VRAM.

OS/2 1.3, which was announced on the same day as XGA, shipped with built-in XGA drivers. IBM also supplied drivers for Windows 2.1 and 3.0, OS/2 1.2, and several popular software packages such as AutoCAD. The XGA also shipped with an implementation of the AI (Adapter Interface). Existing applications written to the AI and supported on the 8514/A continued to work on the XGA.

In mid-1992, IBM released an updated version called XGA-2 or XGA-NI (Non-Interlaced), with significantly more flexible display support and several other enhancements.

XGA vs. 8514/A

Perhaps the biggest architectural change compared to the 8514/A was that the XGA integrated a VGA subsystem. In a way this was an admission of defeat: IBM’s earlier strategy of providing an on-board VGA chip with an additional high-resolution accelerator such as the 8514/A clearly hadn’t worked out.

In terms of capabilities, an XGA was very much like a VGA + 8514/A. In addition to standard VGA functionality, there was a new 132-column text mode. With 1MB VRAM, there was also a new high-color mode with 640×480 resolution and 65,536 colors (16 bits per pixel).

Accelerated mode support was otherwise the same as on the 8514/A: 640×480, and 1024×768 in interlaced form only. IBM did not support the increasingly popular 800×600 resolution. The likely reason was that at the time, IBM didn’t sell multi-frequency monitors; as a consequence, the original XGA only supported four pixel clock frequencies (the four separate oscillator chips are clearly visible near the center of the board).

The XGA draw engine was very similar to the 8514/A in terms of capabilities (hence the AI level compatibility), but was not compatible on the register level. A significant change was that IBM fully documented the XGA register interface, something that was never officially available for the 8514/A.

XGA Hardware

The VGA compatibility circuitry on the XGA was nothing new, but the accelerator engine and the overall architecture were significantly different from the 8514/A.

A major new feature (if not a mainstream one) was that up to eight XGA boards could coexist in a system, with a single one providing VGA compatibility. On system start-up, each XGA was assigned an instance number which determined the I/O addresses used by the board.
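The mapping from instance number to I/O addresses was simple and regular — the XGA’s direct-access registers reportedly occupied ports 21x0h through 21xFh, where x is the instance number. A minimal sketch (port layout per the above, which should be verified against IBM’s documentation):

```python
# Sketch of XGA instance numbering: instance n reportedly selects
# I/O ports 21n0h-21nFh for the board's direct-access registers.
def xga_io_base(instance):
    if not 0 <= instance <= 7:
        raise ValueError("XGA supports instances 0-7")
    return 0x2100 + instance * 0x10

for n in range(8):
    base = xga_io_base(n)
    print(f"XGA instance {n}: ports {base:04X}h-{base + 0xF:04X}h")
```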

Unlike the 8514/A, the entire XGA framebuffer could be directly accessed by the host CPU. There were three options: a 4MB aperture could be mapped anywhere in the 4GB address space, ideal for 32-bit operating systems and any number of XGAs; a 1MB aperture could be mapped below 16MB for 16-bit protected-mode operating systems; and a 64KB window was optionally available for a single XGA at the VGA A000h or B000h segment (real-mode accessible).

It should be noted that some XGA drivers did not require any memory aperture to be enabled, resulting in a framebuffer not addressable by the host CPU at all, similar to the 8514/A. Due to the bus-mastering capabilities of the XGA, the inability to access the framebuffer directly did not necessarily cause any performance loss, whereas on the 8514/A transfers to/from video memory required programmed I/O and hence CPU attention.

The look-up table (LUT) on the XGA was still essentially the same as on the VGA, with the addition of a direct-color 65,536-color mode.

A new feature was a 64×64 hardware sprite, almost exclusively used as a mouse cursor. On previous devices, including the EGA, VGA, and 8514/A, a mouse cursor had to be managed entirely in software, a non-trivial implementation which incurred a sizable performance hit when the cursor was moving quickly. With the 8514/A, the implementation was less costly as offscreen memory could be used to save the underlying image as well as provide a cursor image to be drawn with the BitBLT engine, but it still hardly counted as simple.

With the XGA, the mouse cursor could be implemented as a completely independent sprite displayed on top of the framebuffer contents. Moving the cursor was simply a matter of updating the X/Y position registers; more importantly, the cursor no longer needed to be hidden and redrawn every time a drawing operation intersected it.
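The difference is easy to see in code. Below is a minimal sketch (simulated framebuffer, made-up cursor shape, hypothetical register representation) contrasting the software save-under/restore dance with a hardware sprite whose move is just two register writes:

```python
# Contrast of a software cursor (save-under/restore, as on VGA/8514/A)
# with an XGA-style hardware sprite. Framebuffer and cursor shape are
# simulated; the "registers" are hypothetical stand-ins.
WIDTH, CUR = 320, 16
fb = bytearray(WIDTH * 200)                 # simulated 8bpp framebuffer
shape = bytes([1] * (CUR * CUR))            # illustrative solid cursor

def sw_move_cursor(saved, old, new):
    """Software cursor: three framebuffer passes per move."""
    if saved is not None:                   # 1. restore under old position
        ox, oy = old
        for y in range(CUR):
            for x in range(CUR):
                fb[(oy + y) * WIDTH + ox + x] = saved[y * CUR + x]
    nx, ny = new
    newly_saved = bytearray(CUR * CUR)
    for y in range(CUR):
        for x in range(CUR):
            # 2. save under new position, 3. draw the cursor
            newly_saved[y * CUR + x] = fb[(ny + y) * WIDTH + nx + x]
            fb[(ny + y) * WIDTH + nx + x] = shape[y * CUR + x]
    return newly_saved

# XGA-style sprite: the framebuffer is never touched, only two registers.
sprite_pos = {"x": 0, "y": 0}
def hw_move_cursor(new):
    sprite_pos["x"], sprite_pos["y"] = new

saved = sw_move_cursor(None, None, (10, 10))
saved = sw_move_cursor(saved, (10, 10), (50, 50))
hw_move_cursor((50, 50))
```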

To support saving and restoring of the XGA state, all registers were readable (while many 8514/A registers were write-only). The XGA could even save and later continue the operation currently in progress, a potentially useful feature as blitting a large rectangle could take a relatively long time.

The Draw Engine

The XGA draw engine was very similar to the 8514/A but enhanced in several respects. A new (for IBM) concept was that of maps. An XGA map provided a translation between linear memory and a 2D bitmap. The XGA supported four maps, each with a given starting address, pixel depth, width, and height. An unusual feature of the XGA was that maps could reside both in video memory and in system memory. The XGA could not only copy between system and video memory but also draw into system RAM using bus mastering. In a multi-XGA system, one XGA could potentially even access another XGA’s memory without host CPU intervention.
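A map can be sketched as a small descriptor translating (x, y) coordinates into linear addresses. The field layout below is purely illustrative, not the actual XGA register format:

```python
# Illustrative sketch of an XGA-style map: starting address, dimensions
# and pixel depth together define a 2D view of linear memory.
from dataclasses import dataclass

@dataclass
class XgaMap:
    base: int      # starting linear address (video or system memory)
    width: int     # pixels per scanline
    height: int
    bpp: int       # pixel depth

    def pixel_addr(self, x, y):
        """Return (byte address, bit offset) of pixel (x, y)."""
        bit = (y * self.width + x) * self.bpp
        return self.base + bit // 8, bit % 8

screen = XgaMap(base=0x0, width=1024, height=768, bpp=8)
print(hex(screen.pixel_addr(3, 2)[0]))   # 0x803 = 2*1024 + 3
```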

A similar implementation of maps was found in the earlier Intel 82786 graphics coprocessor (1986), one of Intel’s several unsuccessful attempts to enter the graphics hardware market.

With the 8514/A, there was essentially a single fixed map covering the entire framebuffer. The XGA enabled much more flexible use of offscreen memory because linear (one-dimensional) memory management was possible. The 8514/A forced rectangle-based 2D memory management which is much more complex to implement and less efficient.
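As a sketch of why linear management is simpler: offscreen VRAM can be handed out with a trivial bump allocator, something a rectangle-constrained framebuffer does not permit. The sizes below assume an 8bpp 1024×768 screen in 1MB of VRAM:

```python
# Sketch of linear offscreen management: a bump allocator over the
# VRAM left past the visible screen. Sizes are assumptions.
VRAM = 1024 * 1024
SCREEN = 1024 * 768            # visible framebuffer occupies the start

free_ptr = SCREEN
def alloc_offscreen(nbytes):
    """Hand out nbytes of offscreen VRAM; returns a linear offset."""
    global free_ptr
    if free_ptr + nbytes > VRAM:
        raise MemoryError("out of offscreen VRAM")
    addr, free_ptr = free_ptr, free_ptr + nbytes
    return addr

cursor_save = alloc_offscreen(64 * 64)        # e.g. a save buffer
glyph_cache = alloc_offscreen(256 * 8 * 16)   # e.g. cached font glyphs
```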

The pixel pipeline and drawing operations in the XGA were similar to the 8514/A, with mixes, color comparisons, plane mask, etc., although the hardware interface was different and used 32-bit memory-mapped registers—very typical for 1990s 2D accelerators.
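A mix is simply a boolean function combining the source and destination pixels, applied under a plane (bit) mask. A few representative mixes are sketched below; the actual hardware offered a larger fixed set:

```python
# Sketch of 8514/A/XGA-style pixel "mixes": the written pixel is a
# boolean function of source and destination. Names are illustrative.
MIXES = {
    "SOURCE": lambda s, d: s,
    "AND":    lambda s, d: s & d,
    "OR":     lambda s, d: s | d,
    "XOR":    lambda s, d: s ^ d,
}

def write_pixel(dst, offset, src, mix, plane_mask=0xFF):
    result = MIXES[mix](src, dst[offset]) & 0xFF
    # bits outside the plane mask are left untouched in the destination
    dst[offset] = (dst[offset] & ~plane_mask & 0xFF) | (result & plane_mask)

fb = bytearray([0b10101010])
write_pixel(fb, 0, 0b11110000, "XOR")
print(hex(fb[0]))   # 0x5a
```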

The XGA supported Bresenham lines, short stroke vectors, rectangle fills, and BitBLTs just like the 8514/A. One new feature was the mask map, which allowed an arbitrary mask bitmap to be applied to operations, making it possible to draw arbitrarily-shaped objects. The mask map could also be left empty/undefined with only its dimensions used, in which case it worked much like the 8514/A scissors (clip rectangle).
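The line algorithm in question is the classic integer-only Bresenham method; a straightforward software rendition is below (on the real hardware, the driver typically supplied precomputed step and error terms rather than endpoints):

```python
# The classic Bresenham line algorithm, which both the 8514/A and XGA
# implemented in hardware. Pure-integer: no multiplication or division
# in the inner loop, which is what made it attractive for hardware.
def bresenham(x0, y0, x1, y1):
    points = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return points

print(bresenham(0, 0, 4, 2))  # [(0,0), (1,1), (2,1), (3,2), (4,2)]
```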

Another new feature was support for patterns (“brushes” in Windows GDI parlance, “tiles” and “stipples” in X11 terminology), both monochrome and color. The patterns were quite naturally specified through the use of XGA maps. On the 8514/A, the same effect could be achieved but required more effort.
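A monochrome pattern fill can be sketched as follows: each destination pixel indexes the pattern bitmap modulo its size, selecting between a foreground and a background color. The 8×8 pattern, colors, and sizes below are illustrative:

```python
# Sketch of a monochrome pattern ("brush"/"stipple") fill: one bit per
# pattern pixel selects foreground or background color.
PAT = [  # 8x8 checkerboard, one byte per row
    0b10101010, 0b01010101, 0b10101010, 0b01010101,
    0b10101010, 0b01010101, 0b10101010, 0b01010101,
]

def pattern_fill(fb, pitch, x0, y0, w, h, fg, bg):
    for y in range(y0, y0 + h):
        row = PAT[y % 8]
        for x in range(x0, x0 + w):
            bit = (row >> (7 - (x % 8))) & 1
            fb[y * pitch + x] = fg if bit else bg

fb = bytearray(16 * 16)
pattern_fill(fb, 16, 0, 0, 16, 16, fg=0x0F, bg=0x00)
```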

XGA Virtual Memory

The most exotic feature of the XGA was probably its support for virtual memory. The XGA essentially replicated the 386 memory-management unit (MMU). Just like the 386, it used physical addresses by default, but virtual memory (i.e. paging) could be enabled. The XGA had its own PDBR (Page Directory Base Register) corresponding to the CR3 register in the x86 architecture.

With virtual memory enabled, the XGA used the host’s page tables and was able to process page faults. Protection violations and non-present pages encountered while the XGA accessed system memory were reported via an interrupt. The equivalent of the CR2 register also existed on the XGA in order to report the fault address.
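The translation scheme is the ordinary two-level 386 walk: a 32-bit virtual address splits into a 10-bit page-directory index, a 10-bit page-table index, and a 12-bit page offset. A sketch, using a Python dict as toy physical memory:

```python
# Sketch of the two-level 386-style translation the XGA replicated.
# PDBR points at the page directory; each present entry points at a
# page table, whose present entries point at 4KB page frames.
def translate(pdbr, mem, vaddr):
    """Walk the page directory/table; return the physical address."""
    dir_idx = (vaddr >> 22) & 0x3FF
    tbl_idx = (vaddr >> 12) & 0x3FF
    offset  =  vaddr        & 0xFFF

    pde = mem.get(pdbr + dir_idx * 4, 0)          # page-directory entry
    if not (pde & 1):                             # present bit clear
        raise RuntimeError("page fault (PDE not present)")
    pte = mem.get((pde & ~0xFFF) + tbl_idx * 4, 0)  # page-table entry
    if not (pte & 1):
        raise RuntimeError("page fault (PTE not present)")
    return (pte & ~0xFFF) | offset

# Toy setup: map virtual 00401xxxh to physical 90xxxh.
mem = {}
pdbr = 0x1000
mem[pdbr + 1 * 4] = 0x2000 | 1        # PDE 1 -> page table at 0x2000
mem[0x2000 + 1 * 4] = 0x90000 | 1     # PTE 1 -> frame at 0x90000
print(hex(translate(pdbr, mem, 0x00401ABC)))   # 0x90abc
```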

The same basic idea was used by Intel several years later in the form of the AGP GART, except that the XGA went much further. Both solved the same underlying problem, namely taking discontiguous physical memory pages and presenting them as contiguous regions that the graphics adapter could access. The XGA also made it possible to support task switching and easy access from user-mode applications.

The XGA virtual memory feature was used by IBM’s OS/2 2.x VDM (Virtual DOS Machine) subsystem, where it provided support for DOS software which used XGA bus mastering. It is unclear whether any other drivers used the virtual memory features of the XGA. Since no hardware besides the XGA supported such technology, it was of no use to general-purpose software which also needed to run on non-IBM hardware.

The XGA-2

On September 21, 1992, IBM announced the XGA-2, an improved XGA with support for non-interlaced 1024×768 resolution and 1MB VRAM standard. The XGA-2 sported a programmable PLL circuit and handled pixel clocks up to 90MHz; this enabled support for up to 75Hz refresh rate at 1024×768 resolution. Finally, the 800×600 resolution was also supported, at up to 16bpp.
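The 90MHz figure is easy to sanity-check: refresh rate is the pixel clock divided by the total pixel count per frame, where the totals include horizontal and vertical blanking. The timings below are the VESA-style figures for 1024×768 at 75Hz, used here as an assumption; IBM’s exact XGA-2 timings may differ slightly:

```python
# Back-of-the-envelope refresh rate check: clock / (htotal * vtotal),
# with totals including blanking. Timings assumed (VESA-style 1024x768@75).
def refresh_hz(pixel_clock, htotal, vtotal):
    return pixel_clock / (htotal * vtotal)

hz = refresh_hz(78_750_000, htotal=1312, vtotal=800)
print(f"{hz:.1f} Hz")   # 75.0 Hz -- comfortably under the 90MHz limit
```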

The XGA-2 had an improved DAC with 8 bits per channel, rather than 6 bits like the original XGA (as well as the VGA and 8514/A). The draw engine was enhanced to support 16bpp maps, and performance was generally increased by using faster VRAM and several minor optimizations. At introduction, the “IBM PS/2 XGA-2 Display Adapter/A” cost a mere $360.

XGA Clones

For the XGA, IBM chose a novel strategy: instead of keeping the hardware specifications secret, the register interface was fully documented; in addition, IBM licensed the XGA chip design to SGS-Thomson (inmos) and Intel. Worthy of note is that Radius manufactured ISA-based XGA-2 adapters built around chips from inmos (as usual, IBM didn’t bother with non-MCA adapters).

There were no clones to speak of in the true sense of the word, i.e. non-licensed chip designs based on reverse-engineering the IBM originals. IBM’s strategy was not particularly successful—while the XGA-2 was a decent chip, there were very capable alternatives already available (from S3, Tseng, and others), and implementing the bus-mastering design in an ISA environment was troublesome.

The XGA Legacy

The XGA architecture was a very modern design, with a linear framebuffer aperture, highly flexible bus-mastering draw engine, and hardware cursor. It was released at a time when most PC graphics cards were dumb framebuffer SuperVGAs limited to banked memory access; it took the rest of the PC graphics hardware industry several years to catch up with XGA’s capabilities. In many ways, the XGA was a classic 1990s design even if it never reached its full potential (it could have easily supported up to 4MB VRAM as well as 24/32bpp True Color pixel formats).

However, measured against IBM’s own goals, the XGA was a failure. Not only was the XGA unsuccessful in establishing a hardware standard, but the experience also persuaded IBM to quit the PC graphics market altogether and rely on graphics chips from companies like Cirrus Logic or S3 for its own systems.

There is little doubt that Microsoft Windows and IBM’s own OS/2 were a major cause of this failure. When a graphics chip vendor only needed to supply one or two drivers for widely used GUI environments (rather than a dozen or more, with custom driver development often needed), there simply wasn’t enough value in register-level compatibility. The hardware was evolving too quickly for that.

As a consequence of the XGA’s failure to establish itself, the VGA has remained the only PC graphics hardware standard to date, a sad fact more than 25 years after its introduction.