You may recall the recent OSNews article about Linux Fund getting donations to supply developers with OGD1 boards. (OGD1 is what you might call an "open source graphics card," with all designs, documentation, and source code available under Free Software licenses. Technically, however, OGD1 is an FPGA-based prototyping platform with memory and video encoders on it. See the Wikipedia article.) Since then, the FSF got involved and is asking for volunteers to help with the OGP wiki. The OGP had shown OGD1 driving a graphics display back in 2007 at OSCON, and now the OGP has just announced technical success with the rather difficult challenge of emulating legacy VGA text mode. They even put up a video on YouTube of a display, driven by OGD1, showing a PC booting into Gentoo.

As the original architect of the way VGA is done on this board, perhaps I can offer an explanation.

There is perhaps a more straightforward way of implementing VGA than the way we did it. The direct route would require two components. One piece is the host interface, which interprets I/O and memory accesses from PCI and manipulates graphics memory appropriately. The other piece is a specialized video controller that can translate text (each character cell is encoded in two bytes: a character code and an attribute byte holding the color indices) into pixels in real time as they're scanned out to the monitor. This is actually how others still do it.
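To make the two-byte cell format concrete, here is a small sketch of how one such cell could be unpacked. The struct and function names are mine for illustration; the bit layout (low nibble = foreground index, bits 4-6 = background index, bit 7 = blink) is the standard VGA text-mode attribute byte:

```c
#include <stdint.h>

/* One decoded VGA text-mode cell. Each cell in the text buffer is
   two bytes: a character code and an attribute byte. */
typedef struct {
    uint8_t ch;    /* character code (code page 437) */
    uint8_t fg;    /* foreground palette index, 0-15 */
    uint8_t bg;    /* background palette index, 0-7  */
    uint8_t blink; /* 1 if the cell should blink     */
} TextCell;

TextCell decode_cell(uint8_t ch, uint8_t attr) {
    TextCell c;
    c.ch    = ch;
    c.fg    = attr & 0x0F;        /* bits 0-3 */
    c.bg    = (attr >> 4) & 0x07; /* bits 4-6 */
    c.blink = (attr >> 7) & 0x01; /* bit 7    */
    return c;
}
```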

To us, VGA is legacy. It should be low-priority and have minimal impact on our design. We didn't want to hack up our video controller in nasty ways (or include alternate logic) for such a purpose, and we didn't want to dedicate a lot of logic to it. Doing it the usual way was going to be too invasive and wasteful. Also, we eventually want to do PCI bus-mastering, which requires some high-level control logic, typically implemented in a simple microcontroller.

So we thought, if we're going to have a microcontroller anyhow, why not give it a dual purpose? When in VGA mode, the uC we designed (which we call HQ) intercepts and services all PCI traffic to OGD1. Microcode we wrote interprets the accesses and stores text appropriately in graphics memory. Then, to avoid hacking up the video controller, we actually have HQ perform a translation from the text buffer to a pixel buffer, over and over, in the background. Its input is VGA text. Its output is pixels suitable for our video controller.
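The background translation step can be sketched in C. This is only an illustration of the idea, not OGP's actual microcode: the function name, the 8x16 font-bitmap layout, and the packed-RGB palette are all assumptions. It walks the 80x25 text buffer and expands each cell, through a font glyph, into pixels:

```c
#include <stdint.h>

#define COLS 80
#define ROWS 25

/* Expand a VGA text buffer into a pixel framebuffer.
   text:    COLS*ROWS*2 bytes (char, attribute) pairs
   font:    8x16 glyph bitmaps, one byte per glyph row, MSB = leftmost pixel
   palette: 16 packed RGB colors
   fb:      output framebuffer; pitch is its width in pixels */
void render_text(const uint8_t *text,
                 const uint8_t font[256][16],
                 const uint32_t palette[16],
                 uint32_t *fb, int pitch)
{
    for (int row = 0; row < ROWS; row++)
        for (int col = 0; col < COLS; col++) {
            uint8_t ch   = text[(row * COLS + col) * 2];
            uint8_t attr = text[(row * COLS + col) * 2 + 1];
            uint32_t fg = palette[attr & 0x0F];
            uint32_t bg = palette[(attr >> 4) & 0x07];
            for (int y = 0; y < 16; y++) {
                uint8_t bits = font[ch][y]; /* one row of the glyph */
                for (int x = 0; x < 8; x++)
                    fb[(row * 16 + y) * pitch + col * 8 + x] =
                        (bits & (0x80 >> x)) ? fg : bg;
            }
        }
}
```

On real hardware this loop simply runs forever in the background, so host writes to the text buffer show up on screen a frame or so later.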

Aside from the logic reduction, this has other advantages. The screen resolution as seen by the host is decoupled from the physical display resolution. So while VGA thinks it's 640x400, the monitor could be at 2560x1600, without the need for a scaler. It's easily programmable, and we have complete control over how the text is processed into pixels; for instance, we could have HQ do some scaling, or use a higher-resolution font than the one the host thinks it's using.
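Because HQ renders the pixels itself, placing the 640x400 VGA image on a larger mode is just arithmetic. A minimal sketch (the function name and the integer-only-scaling policy are my assumptions, not how OGP necessarily does it) that picks the largest integer scale that fits and centers the result:

```c
/* Placement of a source image on a larger physical display:
   integer scale factor plus centering offsets in pixels. */
typedef struct { int scale, x_off, y_off; } Placement;

Placement place(int src_w, int src_h, int dst_w, int dst_h) {
    int sx = dst_w / src_w;            /* max integer scale that fits horizontally */
    int sy = dst_h / src_h;            /* ... and vertically */
    int s = sx < sy ? sx : sy;
    if (s < 1) s = 1;                  /* never shrink below 1:1 */
    Placement p;
    p.scale = s;
    p.x_off = (dst_w - src_w * s) / 2; /* center the scaled image */
    p.y_off = (dst_h - src_h * s) / 2;
    return p;
}
```

For example, 640x400 on a 2560x1600 panel scales exactly 4x with no borders, while on 1920x1080 it scales 2x and gets centered.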

We call it emulation because, in a way, our VGA is implemented entirely in software, albeit as microcode loaded into our own microcontroller.