Manuel Teira wrote:
> Have you got errors related to the Glide library?
> Perhaps you should comment out the line:
> #define HasGlide3 YES
> in the host.def file.
> Or perhaps it would be good to comment it out in our mach64 branch.
oops. That's likely the problem. I got so used to configure-like
scripts to determine what I have installed that I just skipped the
Glide stuff in host.def. This might actually help, yes. :-)
> What I made for the tests was using:
> export LD_PRELOAD=/usr/X11R6-DRI/lib/libGL.so
ok, sure that'll work.
> > That is probably due to my card not being an AGP variant (though my
> > mainboard does have a - currently empty - AGP slot).
>
> I don't know. We are not using any AGP feature just now. What processor does
> your computer have? I'm getting about 215-220 fps in hw mode and no more than
> 100 (not exactly) in software mode.
ah, this is gears fps now, not gltron, right? OK, gears does 160 fps in
software on my Duron 800, while with Mach64 acceleration it does 260.
gltron does 5-15 fps on mach64, and 5-15 on plain Mesa too, although it
subjectively seems to be a bit jerkier. Anyway, with the old Utah
code I got more (at least 20 fps, but on a K6-2 333); that can
wait, though. I'm more concerned about glxgears: in software mode, it shows
the three gears moving; in hardware mode, it just shows a huge
close-up of the red one moving. Strange, since gltron looks almost
equivalent under both modes, with hardware having slightly better
texture filtering IMHO. BTW, why does inserting the mach64 module
fail when agpgart isn't loaded, if it doesn't use any AGP
features?
HTH, Yours Malte #8-)
PS: no need to Cc me, I'm on this list.

> From: Sottek, Matthew J [mailto:matthew.j.sottek@...]
[...]
> #1 A kernel API for mode setting, mmaping of the framebuffer and
> video memory management.
Truly needed. Something like a Linux version of the VESA interface.
I think the Linux framebuffer project took this as its basic idea.
> #2 A kernel api for only the most basic drawing. i.e. Blit and
> data copy.
I personally don't think these specific tasks necessarily need
to be implemented in the kernel itself. In normal operation
most of the rendering will be initiated from userland, so I
would rather see graphics adapter programming moved out of
kernel space. That would speed everything up, because fewer
kernel calls would be needed for the same work.
A userland library would fit here much better. For those
who do need graphics at boot time, some sort of initial ram
disk should serve them much better.
From what I have read, there will be further improvements to the
concept of the initial ramdisk when the kernel 2.5.x branch is launched.
Okay, DRI/DRM already makes use of several kernel modules,
but here I was rather thinking of userland modules.
So it's really a question of which kernel services (e.g. syslogging
or oopses) are so important that they need direct access
to routines that can, say, blit a sequence of character bitmaps
from main memory to the framebuffer. It may sound odd, but I
could imagine some sort of library that is "mirrored" between
kernel and userland. (Compare: Atari TOS had only one sprintf
implementation for every protection ring in the whole system.)
> #3 A framework to allow the implementation of the other hardware
> specific functions.. basically the drm. So that higher level
> interfaces can use them. (Mesa and X)
Again, this sounds to me much more like a userland library.
Hmm, the Mach system (microkernel architecture) sounded
pretty reasonable to me because of its strict concept of resource
servers that provide anything to anyone with sufficient
access privileges. Such a server is just the keymaster to
the resources - and that is what I suppose is
the best way to evolve. The backend of the X11 support
for any device would then be built from just a specification
that describes the hardware and the resources that the server
should deliver. If the concept is sound, then things
like the VT system should attach to this lock as well and
register a callback that lets the VT component
"shut down" usage of the textmode framebuffer in favor
of some memory-based character buffer. (I think this scheme
could even work for hotplugging.)
Anybody who has ever tried to run X11 dual-headed or with
multiple users may have found out that it's not really
that pleasant as soon as third-party components like the
adapters' or the mainboard's BIOS come into play
at special moments like mode switching. A single
resource server might help fix up such critical phases
a bit better by providing an optional locking scheme
between multiple adapters where that is found to be critical.
> 3) It is easier for everyone writing graphics applications if they
> don't have to debug drivers. Having drivers in 3 places already
> (framebuffer, drm, XFree) plus any other upcoming api's isn't
> helping.
OpenGL is a good standard for a big bunch of targets,
but the question has already been raised whether the "ever moving target"
DirectX has outdone this concept simply because it has
implemented several more or less important features of current boards.
(Okay, only OpenGL has extensions, but is that idea a good one?)
There was the idea of "Fahrenheit", but I assume it was dead before it started.
With the Video4Linux folks we are facing another halfway sane
attempt at bringing interesting stuff into the Linux world. And
as a counterpart, again, there is the DirectX implementation.
Of course I think it's nice to have an API that anybody interested
is allowed to play with, but from the application
programmer's point of view, it's much more important to have a reliable,
simple and flexible interface that stays as it is. (GDI is such a thing,
of course for the Windows world.) The current question is not so
much about changing the interfaces; simply merging them into a single
API would be enough, if that is seen to be required at all.
But the real work here has to go on under the hood.
Once the device is claimed and the application has its handle,
we just need ways to let an OpenGL context coexist with an X11
window and any sort of video that the device can do. I would
base the whole system on memory management: if an application
opens a specific interface that refers to the hardware,
it can only operate on the hardware if it was granted the respective
resources, either as shared access or exclusive (e.g. a piece
of framebuffer memory).
> Most of the replies have been addressing why putting X in the
> kernel is a bad idea without addressing the real (unstated) problem.
> Linux doesn't have a graphics architecture that handles
> the basic needs that should be provided by a kernel. As a result
> the basics get reimplemented in incompatible ways every time
> someone tries something new.
So I consider this a lack in the design of the Linux kernel, and
the X11/DRI programmers cannot be blamed for finding ways to
work around that lack at first. In the long term, someone has
to step up and find ways to solve it cleanly, at the
level that serves the purpose best.
> In my opinion the drm should become _the_ interface for graphics
> on Linux (and other kernels). The kernel should use drm interfaces
> for console drawing, and user libraries should only access the
> device through the drm.
I don't care about this question for now, at least as far as it concerns
the proposed winner of this contest. If we can find an alternative
approach that fits the needs much better, while throwing some other
historic baggage overboard, we can then give it a different name.
At this point I have to say that I am not really satisfied with
the way the current XFree86/DRM searches for devices and matches
those devices with device nodes. Not that it doesn't work in the
end, but I am sure that there are nicer and more generic ways
of doing device management. If this finally removes the need
for a single big XF86Config file by splitting it up, that
would be another advantage.
Regards AlexS.
PS: don't flame me about Outlook, I know how to manually line wrap.
--- these are all just personal thoughts in a public discussion ---

On Mon, 22 Oct 2001 17:52, Malte Cornils wrote:
> Manuel Teira wrote:
> > If you find any problem compiling the new branch, please let me know.
>
> OK, let me see. With regards to that libXau problem: it's sufficient
> to just copy /usr/X11R6/lib to /usr/X11R6-DRI/lib; the rest of the
> tree isn't necessary. Otherwise, I followed the DRI compilation guide
> under "Documentation".
O.K. This is just an issue derived from the trimming of the DRI trunk, I hope.
>
> The build (or rather, the make install) failed until I removed tdfx
> from line 821 in file
> X11R6-DRI/build/xc/lib/GL/mesa/src/drv/Makefile.
Have you got errors related to the Glide library?
Perhaps you should comment out the line:
#define HasGlide3 YES
in the host.def file.
Or perhaps it would be good to comment it out in our mach64 branch.
>
> The instructions for making the nls stuff seem to be outdated, since
> there no longer is any xc/nls in CVS.
>
> taking /usr/X11R6-DRI/lib into ld.so.conf doesn't help for libGL and
> libGLU, since those should already exist from any previous X
> installation in /usr/lib, and /usr/lib is implicitly given preference
> over anything from ld.so.conf. I had to move the old ones away and
> symlink/copy over the new ones.
What I did for the tests was:
export LD_PRELOAD=/usr/X11R6-DRI/lib/libGL.so
or
export LD_LIBRARY_PATH=/usr/X11R6-DRI/lib
>
> Unfortunately, I have a PCI Mach64; modprobe mach64 failed without a
> helpful error message since agpgart wasn't installed into the kernel.
> After modprobing agpgart, then modprobing mach64 (that last one is
> probably also handled automagically at X startup), glxinfo showed the
> value "Direct Rendering enabled". And it was; small differences in the
> display of 3D apps showed that. However, performance was about as slow
> as software rendering; at least for gltron, I got about the same
> average fps as with software mesa.
>
> That is probably due to my card not being an AGP variant (though my
> mainboard does have a - currently empty - AGP slot).
I don't know. We are not using any AGP feature just now. What processor
does your computer have? I'm getting about 215-220 fps in hw mode and no
more than 100 (not exactly) in software mode.
>
> That's about it - I tested 3D with gears, gltron and blender and all
> "worked" with a few glitches (not important right now).
>
> So, I hope you'll find my report useful. It certainly was fun for
> me, believe it or not.
Thank you for your report.

>>I'm really concerned about your answer. There was a whole thread
>>on the linux-kernel mailing list about the hypothesis of the
>>release of an X-Kernel, a kernel which would include built-in
>>desktop support. Most people answered, no, this would be
>>ridiculous; others said, yes, but hardware manufacturers are
>>too unhelpful, therefore this would be a totally unstable
>>release. Others said.. other various things.
>>
>> So, what do you think?
Linux is badly in need of some sort of sane kernel graphics
architecture, but certainly the answer is not a kernel version of
X. In order to do a good driver model you need both a kernel api
and a client api. The client api is implemented via a library that
is flexible enough to handle differing kernel api's. (This is how
libGL works) The client API, opengl, is the same for everyone. But,
the kernel->library api can be hardware dependent. As Daryll said
the kernel driver should provide the leanest possible interface to
the hardware, the library should then smooth out the hardware
differences into a common API.
So putting the X api in the kernel isn't a good idea. Just as
putting the opengl API in the kernel isn't a good idea. Daryll
said as much here:
>No, I don't think it is a good idea. Kernels should provide
>the minimum layer needed to securely and efficiently implement
>solutions in user space. The DRI has a kernel component to
>access the graphics hardware. The rest of OpenGL is in user
>space.
I do want to argue that the kernel has another role just as important
as security: resource allocation. Video resource locking is handled via
the DRM, but there is no kernel-level resource allocation for video
memory, modes, etc.
I really think that the concept of framebuffer (The concept, not the
implementation) and the concept of the drm need to be combined such
that we have the following:
#1 A kernel API for mode setting, mmaping of the framebuffer and
video memory management.
#2 A kernel api for only the most basic drawing. i.e. Blit and
data copy.
#3 A framework to allow the implementation of the other hardware
specific functions.. basically the drm. So that higher level
interfaces can use them. (Mesa and X)
Daryll wrote:
>1) The kernel remains small. No wasted memory. Less security
> problems.
>2) You can layer different graphics systems on top of the same
> kernel interface. (For example the Xv guys wanting to use it)
>3) It's easier to change, debug, etc.
Allowing resource management (via a common api) and drawing
(via a device specific api) makes all 3 of these things better than
they are today.
1) The kernel remains small. Only a little added code since a lot
of people have drm and framebuffer already. The added size is as
small as possible. Security is much improved. Having a huge setuid
root binary that accepts remote connections is not a good security
model. XFree is pretty good about having tight security, but the
model is broken from the beginning.
2) You cannot layer anything on top of what we have today. You have
to totally reimplement a 2d driver with complete mode setting,
drawing and memory management. Only then can you play nice with the
3d interfaces in the drm. If hardware specific drawing api's were
in the kernel then everyone could layer on top of them. X, Mesa,
and any new graphics library. All without reimplementing the basics.
3) It is easier for everyone writing graphics applications if they
don't have to debug drivers. Having drivers in 3 places already
(framebuffer, drm, XFree) plus any other upcoming api's isn't
helping.
>There's essentially no advantage to having X or OpenGL in the
>kernel. Do you really need 3D during boot? I'd say no. It can
>wait until you mount a file system. If you want to get graphics
>running earlier in the boot sequence, go right ahead and work
>on that.
Most of the replies have been addressing why putting X in the
kernel is a bad idea without addressing the real (unstated)
problem. Linux doesn't have a graphics architecture that handles
the basic needs that should be provided by a kernel. As a result
the basics get reimplemented in incompatible ways every time
someone tries something new.
In my opinion the drm should become _the_ interface for graphics
on Linux (and other kernels). The kernel should use drm interfaces
for console drawing, and user libraries should only access the
device through the drm.
-Matt

From: Brian Paul <brian_e_paul@...>
Date: Mon, 22 Oct 2001 10:16:38 -0700 (PDT)
Jeff's in the process of moving from Colorado to Oklahoma. I'm sure
he'll tend to this when he gets settled in.
I already fixed the problem in current 2.4.13-preX linux sources.
The FFB DRI driver is in fully working condition once again.
Franks a lot,
David S. Miller
davem@...

Jeff's in the process of moving from Colorado to Oklahoma. I'm sure
he'll tend to this when he gets settled in.
-Brian
--- Leif Sawyer <lsawyer@...> wrote:
> Don't know if this will get through or not, but since Jeff doesn't seem
> to (want to?) respond directly, perhaps somebody on this list can take
> a look at this issue.
>
>
> -----Original Message-----
> From: David S. Miller [mailto:davem@...]
> Sent: Thursday, October 11, 2001 4:07 PM
> To: lsawyer@...
> Cc: linux-kernel@...; jhartmann@...;
> gareth.hughes@...
> Subject: Re: [BUG] Linux-2.4.12 does not build (Sparc-64 & DRM)
>
>
> From: Leif Sawyer <lsawyer@...>
> Date: Thu, 11 Oct 2001 15:52:01 -0800
>
> Just a quick bug report -- I haven't had time
> to track this one down yet.
>
> Enabling DRM/DRI support on a Sparc64 kernel
> with Creator/Creator3D graphics does not build
> correctly:
>
> I've tried to contact the DRM folks (specifically Jeff Hartman) on
> many occasions (at least 3 times) about the fact that using
> virt_to_bus/bus_to_virt generically in the DRM broke the build on
> several platforms.
>
> As stated often, virt_to_bus/bus_to_virt are deprecated interfaces.
> Yet, it is used explicitly in the debugging macros.
>
> Not only has it not been fixed, all of my queries to Jeff have fallen
> on deaf ears and I get no response whatsoever.
>
> Franks a lot,
> David S. Miller
> davem@...
>
> _______________________________________________
> Dri-devel mailing list
> Dri-devel@...
> https://lists.sourceforge.net/lists/listinfo/dri-devel

Don't know if this will get through or not, but since Jeff doesn't seem
to (want to?) respond directly, perhaps somebody on this list can take
a look at this issue.
-----Original Message-----
From: David S. Miller [mailto:davem@...]
Sent: Thursday, October 11, 2001 4:07 PM
To: lsawyer@...
Cc: linux-kernel@...; jhartmann@...;
gareth.hughes@...
Subject: Re: [BUG] Linux-2.4.12 does not build (Sparc-64 & DRM)
From: Leif Sawyer <lsawyer@...>
Date: Thu, 11 Oct 2001 15:52:01 -0800
Just a quick bug report -- I haven't had time
to track this one down yet.
Enabling DRM/DRI support on a Sparc64 kernel
with Creator/Creator3D graphics does not build
correctly:
I've tried to contact the DRM folks (specifically Jeff Hartman) on
many occasions (at least 3 times) about the fact that using
virt_to_bus/bus_to_virt generically in the DRM broke the build on
several platforms.
As stated often, virt_to_bus/bus_to_virt are deprecated interfaces.
Yet, it is used explicitly in the debugging macros.
Not only has it not been fixed, all of my queries to Jeff have fallen
on deaf ears and I get no response whatsoever.
Franks a lot,
David S. Miller
davem@...

On Mon, Oct 22, 2001 at 05:48:56AM +0100, MichaelM wrote:
> Would you consider it a good idea to make DRI part of the source of a
kernel? Direct 3d graphics supported from the boot sequence.
>
> I'm really concerned about your answer. There was a whole thread on
the linux-kernel mailing list about the hypothesis of the release of
an X-Kernel, a kernel which would include built-in desktop
support. Most people answered, no, this would be ridiculous; others
said, yes, but hardware manufacturers are too unhelpful, therefore this
would be a totally unstable release. Others said.. other
various things.
>
> So, what do you think?
No, I don't think it is a good idea. Kernels should provide the minimum
layer needed to securely and efficiently implement solutions in user
space. The DRI has a kernel component to access the graphics
hardware. The rest of OpenGL is in user space.
There are lots of advantages to doing it this way:
1) The kernel remains small. No wasted memory. Less security
problems.
2) You can layer different graphics systems on top of the same
kernel interface. (For example the Xv guys wanting to use it)
3) It's easier to change, debug, etc.
There's essentially no advantage to having X or OpenGL in the
kernel. Do you really need 3D during boot? I'd say no. It can wait until
you mount a file system. If you want to get graphics running earlier in
the boot sequence, go right ahead and work on that.
- |Daryll

Manuel Teira wrote:
> If you find any problem compiling the new branch, please let me know.
OK, let me see. With regards to that libXau problem: it's sufficient to
just copy /usr/X11R6/lib to /usr/X11R6-DRI/lib; the rest of the tree
isn't necessary. Otherwise, I followed the DRI compilation guide under
"Documentation".
The build (or rather, the make install) failed until I removed tdfx
from line 821 in file
X11R6-DRI/build/xc/lib/GL/mesa/src/drv/Makefile.
The instructions for making the nls stuff seem to be outdated, since
there no longer is any xc/nls in CVS.
taking /usr/X11R6-DRI/lib into ld.so.conf doesn't help for libGL and
libGLU, since those should already exist from any previous X
installation in /usr/lib, and /usr/lib is implicitly given preference
over anything from ld.so.conf. I had to move the old ones away and
symlink/copy over the new ones.
Unfortunately, I have a PCI Mach64; modprobe mach64 failed without a
helpful error message since agpgart wasn't installed into the kernel.
After modprobing agpgart, then modprobing mach64 (that last one is
probably also handled automagically at X startup), glxinfo showed the
value "Direct Rendering enabled". And it was; small differences in the
display of 3D apps showed that. However, performance was about as slow
as software rendering; at least for gltron, I got about the same
average fps as with software mesa.
That is probably due to my card not being an AGP variant (though my
mainboard does have a - currently empty - AGP slot).
That's about it - I tested 3D with gears, gltron and blender and all
"worked" with a few glitches (not important right now).
So, I hope you'll find my report useful. It certainly was fun for
me, believe it or not.
Thanks for the great work so far,
Yours Malte #8-)

On Mon, 22 Oct 2001, Peter Surda wrote:
> On Sun, Oct 21, 2001 at 10:01:33PM -0700, Jeffrey W. Baker wrote:
> > Send us a mail that isn't from a windows machine, and you might get an
> > interesting discussion. As it stands, I can barely tell what you are going
> > on about.
> Dude, I think that Outlook is crap too, I had to administer a couple
> of them for a year and it was a nightmare. But that isn't a reason to
> flame. Any decent mailclient (such as mutt I'm using) can display
> mails with lines longer than 72 chars and html attachments without
> hassle. I'm pretty sure there is a way to tell your pine to do that as
> well. If there isn't, "use the source" and "make it so" :-).
There is, but that isn't what I'm talking about. I don't want that
pointless wankerfest to spread from linux-kernel to every other mailing
list I am on.

On Mon, 22 Oct 2001, Peter Surda wrote:
> On Mon, Oct 22, 2001 at 05:48:56AM +0100, MichaelM wrote:
> > Would you consider it a good idea to make DRI part of the source of a
> > kernel? Direct 3d graphics supported from the boot sequence.
> Hmm I thought DRI is part of the kernel? Perhaps you meant the DRM part of it.
>
> > I'm really concerned about your answer. There was a whole thread on
> > the linux-kernel mailing list about the hypothesis of the release of
> > an X-Kernel, a kernel which would include built-in desktop support.
> I think it is a great idea to have a kernel implementation of Xserver. But it
> would have to be more modular than current XF86, and also have a highly
> flexible structure, so that adding new types of devices and functionality
> wouldn't pose problems. I think this is currently XF86's biggest drawback.
XFree86 can run on top of the framebuffer (fbdev I think, but maybe
vesafb or something else - I haven't been keeping up).
Last time I looked there was a specific accelerated framebuffer interface
for MGA cards, so there may be a problem making the interface sufficiently
general for acceleration on all cards.
Provided that this can be done, it seems to me that fbdev + DRI could
be the basis for a kernel level graphics driver, with a user level X
server on top.
I believe I read that SGI Irix works like this (or did once), and
I believe it is also the model that GGI is aiming for.
However, moving all the hardware drivers from the Xserver to the
kernel will be a big job (it took 3-4 years to move them all from
XFree86 3.3 to 4.x). Even if this kernel graphics system works on
Linux and the *BSD OSes, XFree86 runs on another dozen unixes,
not to mention OS/2 and Win32, and possibly other non-unix platforms.
I think that most active developers would find that they had to
concentrate on either this kernel based graphics, or the platform
neutral user level XFree86. Dividing development like this would be
bad for both projects.
> Oh and one more thing: the driver should autodetect if it is running on
> the same videocard as the virtual terminal stuff, so that the first card
> will simply open a new VT but secondary card will run independently of
> this VT stuff. This would finally allow a decent way to concurrently run
> 2 separate X sessions on the same machine using local hardware.
I'm convinced that the solution to that is for the kernel VT support to
support multiple sessions. Then the user-level X server can just take over
a single VT session (possibly via fbdev).
--
Dr. Andrew C. Aitchison Computer Officer, DPMMS, Cambridge
A.C.Aitchison@... http://www.dpmms.cam.ac.uk/~werdna

> we move the whole driver structure to kernel? Drivers for every other device
Not really.
> STRUCTURE. For a great UI, we need DMA, vsync and devices communicating with
> each other directly or with little overhead. Why insist on doing this in
A video driver has to have extremely good latency, syscalls are overhead that
you generally do not want. There are specific things you want kernel help
with - agp management (and thus AGP DMA), context switching on DRI and maybe
some day interrupt handling for video vsync events and wiring them into
the XSync extension.
The rest is a bit questionable as a kernel space candidate, but if you
want it in kernel go ahead - XFree86 supports both models.

On Mon, Oct 22, 2001 at 02:27:23AM -0400, volodya@... wrote:
> The biggest reason against this is that X (as it is now) supports not
> only Linux but many other OSes: in particular the BSDs and Solaris.
> Moving stuff into the Linux kernel creates a fork of the drivers,
> which is undesirable.
That's a lame excuse. I'm using Linux so I won't suffer from Windows, why
should I suffer because of BSD or Solaris?
<Rant>
About the precise vsync thingy we're talking about in xpert: we need kernel
support anyway. So why instead of calling a video driver in kernel "lame" and
"uncool" and adding a strange inflexible function god-knows-where, shouldn't
we move the whole driver structure to kernel? Drivers for every other device
type are in kernel. What would the anti-video-in-kernel-guys think if I
claimed that network cards should have userspace "drivers" in sort of "uber
daemon" and if an app wants to make a TCP connection it should contact this
"uber daemon"? I don't want to have staroffice in kernel, but the DRIVER
STRUCTURE. For a great UI, we need DMA, vsync and devices communicating with
each other directly or with little overhead. Why insist on doing this in
userspace? The reasons to put it into the kernel aren't speed, but that it's
much easier to add/maintain drivers, add functionality, share code and do
fancy stuff. DRI is a very good example of what I mean.
</Rant>
Short explanation of "the precise vsync thingy": for fluent video playback it
is necessary to precisely coordinate the number of frames the monitor
displays. It is very visible on a TV. When I have a 25fps video, it should be
EXACTLY "one frame of data == one frame on TV". Currently, I can tell the
card (ATI) to blit on vsync (so it won't tear), but I can't tell it "don't
miss a frame" or "block until vsync". This results in visible "jumps" when
suddenly the same picture stays on screen for twice as long as the others;
it sucks and I can't do anything about it without SOME kernel support.
Telling the Xserver to poll for vsync and eat CPU is lame.
> Vladimir Dergachev
Bye,
Peter Surda (Shurdeek) <shurdeek@...>, ICQ 10236103, +436505122023
--
Disc space - The final frontier.

On Mon, Oct 22, 2001 at 05:48:56AM +0100, MichaelM wrote:
> Would you consider it a good idea to make DRI part of the source of a
> kernel? Direct 3d graphics supported from the boot sequence.
Hmm I thought DRI is part of the kernel? Perhaps you meant the DRM part of it.
> I'm really concerned about your answer. There was a whole thread on
> the linux-kernel mailing list about the hypothesis of the release of
> an X-Kernel, a kernel which would include built-in desktop support.
I think it is a great idea to have a kernel implementation of Xserver. But it
would have to be more modular than current XF86, and also have a highly
flexible structure, so that adding new types of devices and functionality
wouldn't pose problems. I think this is currently XF86's biggest drawback.
It would allow many cool things that XF86 is now struggling with (e.g. check
xpert mailing list for thread about precise vsync coordination).
Each device would have flags like:
- can the device serve as a keyboard?
- can the device serve as a pointer (mouse, joystick, touchpad, ...)
- can it be used for video output?
- can it grab/capture?
- can it convert between colorspaces?
- can it do DMA?
This would make it easy to write drivers and also to support combined devices
(keyboard+touchpad, video+capture, ...).
Second: provide data structures
- keypress
- mouse movement
- image
- font
etc.
and hooks for these devices to:
- input data (e.g. keypress).
- output data (e.g. draw pixel)
- transfer data (from/to other devices, system RAM, etc).
- combination of those (e.g. transfer an image from system ram and draw it)
- process data internally (e.g. deinterlace?)
- report status (refresh rate, vertical retrace, ...)
- do something (e.g. wait for nth vsync)
(think ioctl). Currently in XF86 (IMHO) a new standard has to be made for
each new type of use. In this "ioctl" version you simply define a new value
and add a function to the driver that should handle it. Other drivers, or an
older X (as they would have something like "switch (ioctl) default: return
E_UNSUPPORTED;"), will return an error, but nothing will crash or cease to
work.
Another thing is code reuse, so that several drivers can call generic
functions for doing the same thing (I think the "combination of transfer +
output" is a very good candidate for this). This is also a problem in XF86
imho.
> Most people answered, no, this would be ridiculous,
I wouldn't put it on a server, because IMHO a server shouldn't even have a
monitor (mine don't). But for embedded and desktop, all the way.
But supposing you want to use a graphical interface on a box, then this kind
of stuff simply DOES belong in the kernel (no, I'm not an idiot, and I don't
have MSWindows anywhere on my computers).
> others said, yes, but hardware manufacturers are too unhelpful, therefore
> this would be a totally unstable release.
There isn't a reason why Xserver in kernel should be more unstable than
user-space Xserver. Both have direct access to all memory and hardware and can
lock up the machine.
One thing though: There should be an interface to reload a driver that is
currently in use, so that when developing it I wouldn't have to reboot
every time I recompile it.
Oh and one more thing: the driver should autodetect if it is running on the
same videocard as the virtual terminal stuff, so that the first card will
simply open a new VT but secondary card will run independently of this VT
stuff. This would finally allow a decent way to concurrently run 2 separate X
sessions on the same machine using local hardware.
> Others said.. other various things.
Ok I'll check the thread.
> So, what do you think?
So, what do YOU think? :-)
Bye,
Peter Surda (Shurdeek) <shurdeek@...>, ICQ 10236103, +436505122023
--
Reboot America.

On Mon, 22 Oct 2001, MichaelM wrote:
> Would you consider it a good idea to make DRI part of the source of a kernel? Direct 3d graphics supported from the boot sequence.
>
> I'm really concerned about your answer. There was a whole thread on the linux-kernel mailing list about the hypothesis of the release of an X-Kernel, a kernel which would include built-in desktop support. Most people answered, no, this would be ridiculous; others said, yes, but hardware manufacturers are too unhelpful, therefore this would be a totally unstable release. Others said.. other various things.
>
> So, what do you think?
>
The biggest reason against this is that X (as it is now) supports not only
Linux but many other OSes: in particular the BSDs and Solaris. Moving
stuff into the Linux kernel creates a fork of the drivers, which is
undesirable.
Vladimir Dergachev

On Sun, Oct 21, 2001 at 10:01:33PM -0700, Jeffrey W. Baker wrote:
> Send us a mail that isn't from a windows machine, and you might get an
> interesting discussion. As it stands, I can barely tell what you are going
> on about.
Dude, I think that Outlook is crap too, I had to administer a couple of them
for a year and it was a nightmare. But that isn't a reason to flame. Any
decent mailclient (such as mutt I'm using) can display mails with lines longer
than 72 chars and html attachments without hassle. I'm pretty sure there is a
way to tell your pine to do that as well. If there isn't, "use the source" and
"make it so" :-).
> -jwb
Bye,
Peter Surda (Shurdeek) <shurdeek@...>, ICQ 10236103, +436505122023
--
There's no place like ~

On Mon, 22 Oct 2001, MichaelM wrote:
> Would you consider it a good idea to blah blah blah....?
Send us a mail that isn't from a windows machine, and you might get an
interesting discussion. As it stands, I can barely tell what you are
going on about.
-jwb

Would you consider it a good idea to make DRI part of the source of a
kernel? Direct 3D graphics supported from the boot sequence.
I'm really concerned about your answer. There was a whole thread on the
linux-kernel mailing list about the hypothesis of the release of an
X-Kernel, a kernel which would include built-in desktop support. Most
people answered, no, this would be ridiculous; others said, yes, but
hardware manufacturers are too unhelpful, therefore this would be
a totally unstable release. Others said.. other various things.
So, what do you think?