
What is: X.Org? X11? Gallium3D? Mesa? Etc...

01-03-2008, 09:06 PM

Here's my 2 cents.

X11 is the 11th major revision of the X window system protocol. I believe it also covers mice, keyboards, etc., because input from those devices is intimately tied to windowing: input events need to be fed to whatever application owns the window where the input is directed, and only the windowing system knows that.

Xorg is the group of people who manage the direction of the most common implementation of the X windowing system -- also, confusingly, called Xorg. In general "xorg" refers to the windowing system while "x.org" refers to the people.

The X protocol was designed around "indirect rendering", where the application could be on one computer, the display server on another computer, and all user interaction was routed through the X11 protocol (and, typically, through a network). The downside of this is that there is some overhead from the protocol, although it's apparently pretty tiny these days. Originally X just handled 2d-type graphics and the concept of "direct rendering" didn't really exist.

3d rendering can be done in (at least) two ways -- "indirect" through the same X protocol as text and 2d graphics, or "direct" which bypasses the X protocol and lets the application talk directly to the graphics driver. AIGLX is "Accelerated Indirect GLX", while "normal" 3d is direct rendering. DRI is the protocol that allows the X server (which knows about the windows) and the direct rendering 3d stack (which actually draws on the screen) to stay in touch so that when windows move the 3d rendering knows to draw in a new place.

Most of the X driver code actually runs in user mode and accesses hardware by getting generous privileges from the OS. DRM is a "kernel component" for graphics which was, I believe, originally developed as part of the Direct Rendering Infrastructure initiative. It mostly handles the really performance-critical work associated with pushing commands into graphics chips, getting results back, and deciding what to do next.

If you are not using 3D then all of the acceleration is done by the X driver, so it is possible to run without DRM on many GPUs. If, however, the 3D (mesa) driver is being used (which always communicates with hardware through DRM) then the X driver acceleration functions must go through DRM as well to avoid confusing the GPU. This means that the acceleration code in the X driver needs to have two different code paths, one for "no DRM" and one for "with DRM". In our case we found that using the 3D engine without DRM (and the DMA command buffer which DRM manages) was unreliable on 5xx and above, so today we only use the 3D engine through DRM on newer GPUs.

Mesa is an implementation of a 3d GL stack, and is the standard for pretty much all open source 3d graphics. AFAIK it is still not strictly considered "OpenGL" because it does not include compliance with the (relatively expensive) OpenGL validation suite as one of its requirements. I forget the details, but basically the Mesa developers would have had to pay a bunch more money to officially use the OpenGL name.

Mesa was written as a pure software renderer at first, then added acceleration at a time when most high performance graphics chips implemented a programming model similar to "the openGL pipeline", ie a fairly well defined transform, clipping, lighting and rendering pipeline. Since then, graphics chips have evolved into much more general purpose parts with large numbers of parallel processors generically called "shaders". This is IMO partly because of the influence of DirectX and partly because process shrinks allowed much higher transistor counts so fixed function blocks could be replaced with larger but more versatile programmable blocks.

Over the last 10-odd years, the major GPU vendors moved to 3d driver architectures which were better suited for modern GPU designs and could extract the most performance from those chips. Gallium is a similarly new architecture which, among other things, is being used as a replacement for the existing Mesa driver model.

One abbreviation you didn't mention, but is pretty important, is "TTM". The abbreviation originally stood for "translation table maps" but has kinda become a proposed standard for memory management in the DRM code, allowing all video memory to be managed in the kernel code rather than by the X server as it is today. Good and flexible memory management is one of the keys to making a high performance 3d driver, so as I understand it getting TTM into DRM is one of the pre-requisites for rolling Gallium out into production. I'm not sure about this last point but that's the way it looks. Note that the TTM API is being largely replaced by a new API proposed by Intel, called "GEM". While GEM is being used as-is on Intel parts (which have no local video RAM), the current direction for Radeon parts is to use the GEM API but to retain many of the TTM concepts for the internal implementation.

In order to get a complete display system running, you need the X server (xorg), an X driver (eg radeonhd), a DRM driver and Mesa. If you look at the source code on freedesktop.org, you'll see that each X driver has its own tree, while DRM and Mesa each have a single tree with per-chip drivers buried further down.
