Linux-GGI Project

The Linux-GGI Project's goals are explained: what it intends to accomplish and how it will do so.

Introduction

In this article, we will explain the intentions and goals of
the Linux-GGI Project, along with the basic concepts used by the GGI
programmers to allow fast, easy-to-use access to graphical
services, to hide hardware-level issues from applications and to
introduce extensible support for multiple displays under Linux. The
Linux-GGI Project aims to establish a General Graphical Interface for
Linux that will allow easy use of graphical hardware and input
facilities under the Linux OS. Existing solutions and
standards such as X or OpenGL do deal with graphics issues, but their
current implementations under Linux have several (sometimes
serious) drawbacks:

Console switching is not deadlock-free: the kernel asks a
user-mode application for permission to switch, which is also a
security problem. Since any user-mode application can lock
the console, the kernel has to rely on the application to allow a
user-invoked switch. On a stand-alone machine, if an application
locks the console without ever allowing a switch, the system must
be rebooted.

The Secure Attention Key (SAK), which kills all
processes associated with the current virtual console, might help
with the above problem, but for graphics applications the machine
might still remain locked, because the kernel has no way to do a
proper reset of the console; after all, it has no idea which video
hardware is present.

Any application accessing graphical hardware at a
low level has to be trusted as it
needs to be run by root to gain
access to the graphical hardware. The kernel relies on the
application to restore the hardware state when a console switch is
initiated. Relying on the application might be okay for an X server
that needs superuser rights for other reasons, but most of us would
not want to trust a game that is available to us only in binary
form.

Input hardware (such as a mouse or a joystick) can
be accessed using the current approach, but it can't easily be
shared between several virtual consoles and the applications using
it.

No clean way is available to use more than one
keyboard-and-monitor combination. You might think this is not
possible on PC hardware anyway, but in fact, ways exist to build
multi-headed PCs with currently available hardware, and the
soon-to-be-introduced USB peripheral bus may allow for multiple
keyboards and the like. Besides, other architectures do support
multiple displays, and if Linux did too, that would be a good reason
to choose Linux for applications such as CAD/CAE.

Games cannot use the existing hardware at maximum
performance, because they must either use X, which introduces
considerable overhead (from a game programmer's point of view), or
access the hardware directly, which requires separate drivers for
every type of hardware they run on.

GGI addresses all these points and several more in a clean
and extensible way. (GGI does not wish to be a substitute for these
existing standards nor does it want to implement its graphical
services completely inside the kernel.) Now, let's have a look at
the concepts of GGI—some of which have already been implemented
and have shown their usability.

Video Hardware Driver

The GGI hardware driver consists of a kernel space module
called Kernel Graphical Interface (KGI) and a user space library
called libGGI. The KGI part of GGI will consist of a display
manager that takes care of accessing multiple video cards and does
MMU-supported page flipping on older hardware. This method allows
for incredibly fast access to the frame buffer from user space
whenever possible. (This technique has already been proven: the
GO32 graphics library for DJGPP, the port of the GNU C compiler to
DOS, uses this method and delivers astonishingly fast graphics on
older hardware.) If this memory-mapped access method can be used in
GGI, there will be no loss in performance, as the application reads
or writes the pixel buffer directly.

Each type of video card in the system has its own driver, a
simple loadable module that registers as many displays as the card
can address. (Video cards exist that support two monitors or a
monitor and a TV screen.) The driver module gives the system the
information needed to access the frame buffer and to access special
accelerated features, the setup of a certain video mode and the
limits of the hardware (e.g., the graphic card, the monitor, and
any other part of the display system). The module can either be
obtained from a single source file or be linked using precompiled
subdrivers for each graphical hardware subsystem (ramdac, monitor,
clock chip, chipset, accelerator engine). The latter option is the
favoured approach, since it allows support for new cards to be
added quite easily: only the subdrivers for hardware not already
supported need to be implemented and tested. (The others are
already in use, and bug fixes to them improve all drivers that use
them.) This scheme has been used to derive support for many of the
S3 accelerator-based cards, and has proved to be very efficient and
easy to use. It also allows for efficient simultaneous development
for several graphic cards. The subdrivers to be linked together are
now selected at configuration time, but they can also be selected
after automatic detection or according to a database (not yet
built). Note that the subdrivers do not need to be in source form;
as a result, precompiled subdriver object files can be linked
together during installation.

As each subdriver knows the hardware exactly, it can prevent
the display hardware from being damaged due to a bad configuration
and make suggestions about the optimal use of the hardware. For
example, the current implementation has drivers for fixed- and
multisync monitors that allow optimal timings for any resolution to
be calculated on the fly without any further configuration. Of
course, completely user-configurable drivers are also possible. In
short, in addition to the hardware level code, the subsystem
drivers provide as much information about the hardware as possible.
This way the kernel will have sufficient methods to initialize the
card, to reset consoles and video modes when an application gets
terminated, and to make optimal use of the hardware. The KGI
manager will allow a single kernel image to support GGI on all
hardware, as any hardware-specific code is in the loadable module
and only common services (such as memory mapping code) are provided
from the kernel. The KGI manager will also provide data structures
and support for almost any imaginable kind of input device.

The user space library, called libGGI, will implement an
abstract programming interface to applications. It interfaces to
the kernel part using special device files and standard file
operations. Applications should use this interface (or APIs
provided by applications based on it) to gain maximum performance;
however, other APIs can be built accessing the special files
directly. Understand that in this case the X server will just be a
normal application in terms of graphics access. Since X is
considered to be the main customer for graphical services, the API
will be designed according to the X protocol definition and will
implement a set of low level drawing routines required by X
servers. The library will use accelerated functions whenever
possible and emulate features not efficiently supported by the
hardware found. An important feature of future-generation graphical
hardware is 3D acceleration, which fits easily into the GGI design.
We plan to provide support for 3D features based on Mesa, which is
close to OpenGL and ensures compatibility with platforms other than
Linux.

Another issue when dealing with graphics is game programming,
as games need the highest possible performance. They also need
special support by the video hardware to produce flicker-free
animation or realistic images. The current approaches can't support
this need in a reasonable way, since they cannot get help from the
kernel (e.g., to use retrace interrupts). GGI can provide this
support easily and give maximum hardware support to all
applications.
