This was a topic on today's IRC session. Here's a very rough list of issues
and goals to consider. After we get a good list we can move onto implementation
proposals. If we're really going to do this, let's be sure we do it right.
Please contribute comments, new issues, etc.
-Brian
1. Single-copy textures
Don't want to have every texture duplicated in two places: client
memory (libGL) and on the card.
If a texture is only present in the card's memory, what happens
when we need to (re)move it to make room for new stuff?
Need to make sure we never lose or corrupt a texture image.
Consider glCopyTexImage(): we never want to lose the contents of
texmem since we have no backup of the image.
2. Share texmem among N OpenGL clients.
This works in recent DRI drivers, but is kind of clunky. Basically,
if the working set of textures for all clients can simultaneously fit
in texture memory, we don't want to reload textures when we context
switch.
3. Dynamic allocator, to accommodate vertex buffers, pbuffers, etc.
Beyond textures, there are vertex buffers, pbuffers, back buffers,
depth buffers, etc that may be competing for card memory.
4. AGP texturing (i.e. textures reside in AGP memory).
Any circumstances when we'd have to move the textures to card memory
or vice versa? Render to texture?
5. Render to texture.
Can cards render to AGP memory? Yes?
This interacts with pbuffers (bind pbuffer to texture, render to the
pbuffer texture).
6. GL_SGIS_generate_mipmaps
Use h/w image scaler to generate filtered mipmap levels?
Or, for NxN texture, render a (N/2) x (N/2) polygon? (w/ render-to-texture)
7. Allen Akin's memory management proposal: 'pinned' textures, etc.
If we ever expose memory management to the user (beyond texture priorities)
we want to be sure our allocator is designed with this in mind.
8. 1-D, 3-D, cube maps, texture rectangles, compression, etc.
Don't forget that there's more than just traditional 2-D textures.
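The eviction concern in item 1 could be sketched roughly like this in C. All names here (struct tex, evict_texture) are invented for illustration, not DRI code: the point is only that a texture whose sole copy lives on the card must be copied back to system memory before its card memory is reclaimed.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical single-copy texture record: the image lives either in
 * card memory or in a system-memory backup, never in both at once. */
struct tex {
    void  *card_mem;  /* non-NULL while resident on the card */
    void  *backup;    /* non-NULL while evicted to system memory */
    size_t size;
};

/* Copy the image back to system memory before giving up its card
 * memory, so a texture whose only copy is on the card (e.g. the
 * result of glCopyTexImage) is never lost or corrupted. */
static void evict_texture(struct tex *t)
{
    if (!t->card_mem)
        return;                   /* already evicted */
    t->backup = malloc(t->size);
    memcpy(t->backup, t->card_mem, t->size);
    free(t->card_mem);            /* stands in for freeing card memory */
    t->card_mem = NULL;
}
```

A real allocator would also have to handle the reverse path (re-upload on next bind) and the shared-texmem case from item 2, but the invariant is the same: never free the last copy.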

Michel Dänzer wrote:
> On Mon, 2002-09-30 at 06:13, Jason Cook wrote:
>
>>Nicholas,
>>
>>I have followed the advice that others gave you in their replies and
>>have been able to find the info on environmental variables but I have
>>not looked in the right place for XF86Config Options. Do you know
>>where to find them?
>>
>>I use for my Radeon VIVO (QD)
>>
>> Option "AGPMode" "4"
>
>
> Can cause instabilities or even failures, and doesn't provide too much
> benefit in my experience.
>
>
>> Option "AGPSize" "64"
>
>
> This one doesn't do much for the radeon driver currently, but might
> become important when we start using the AGP memory manager, like the
> r200 driver already does. (Increase the default to 16?)
>
>
>> Option "RingSize" "8"
>> Option "BufferSize" "2"
>
>
> You shouldn't have to change the defaults for these (though we might
> want to increase the default for the latter to try and avoid
> intermittent radeon_freelist_get failures?). In particular, the ring is
> plenty big by default I think.
>
>
>> Option "EnableDepthMoves" "true"
>
>
> This one's gone, plus it had a comment about being slow in the source...
> (which is probably why it was an option in the first place)
>
>
>> Option "EnablePageFlip" "true"
>
>
> This one is only an option until all the issues around it are resolved.
>
>
>> Option "AGPFastWrite" "1"
>
>
> This one is an option because it causes failures on some systems.
>
>
>
>>Are there others I've missed? Are these Radeon specific?
>
>
> Most are.
>
>
> As you see, most options are options because the defaults should be fine
> and/or they might cause problems.
>
>
Okay. The default for the RingSize is 1 MB; the default for the BufferSize
is 2 MB. It seems the BufferSize cannot be set any higher than this: the
X log shows the option enabled at a higher setting, but later states
that the vertex/indirect buffer is only the default 2 MB. Does
Option "BufferSize" not function at the moment? Or does this setting
refer to a different buffer?
Also, my xserver would complain if pageflipping was not enabled and then
it would segfault. So I can't turn it off at the moment (not that I'd
want to). Kinda strange...

I'm pretty unfamiliar with OpenGL programming. I have an idea for an
XFree86 module that I suspect would not be too hard to implement, but I
wanted to get some other opinions on it. What I'd like to do is create
a module, called perhaps ogl-xv or glx-xv, that would provide a generic
Xv adapter on the front end and on the back end would implement it
using OpenGL calls: basically, create an RGB or YUV texture and render
the video to it. This would have the advantage of acceleration on cards
with accelerated 3D, would provide generic Xv support to cards
lacking an overlay engine by using software Mesa, and could provide
more than one Xv adapter, so you could theoretically have more than one
Xv stream at a time.
Alex

On Mon, 2002-09-30 at 19:04, Jeff Hartmann wrote:
> I know we have talked about this issue before, but I want to rehash. The
> statement that only what the bios programs is valid is not entirely correct,
> but this does hold true for some chipsets where we don't know all the
> details of agp mode switching.
OK. So why do we honour AGPMode for the others ?
> correctly. According to the agp spec, whatever capability bits are available
> are the modes that are valid for use. The agp kernel module will look at
> the capability bits and only allow bits that are set in the capability
> register to be used. It was designed this way to protect the user against
> setting agp modes that are not supported by the hardware; thus a user can't
> set agp mode to 4x if 4x isn't in the capability register. According to the
> agp spec, that's how mode setting is supposed to work.
Ok
> memory. This is why the Xserver defaults to agp 1x in almost all cases,
> since it's the most reliable setting. This variable actually can be tuned,
> but unless you are agp bandwidth limited (which is not the common case,
I quite often get reports from people (because agp is clearly the kernel's
territory) where setting AGP to the mode currently active according to the
cap registers (typically 2x) works and 1x hangs randomly.
> If someone will give me a list of chipsets (pci vendor/device pairs) that
> we know require the agp mode to be programmed by the bios, I will write an
> override function for their agp drivers so they will not set the graphics
> mode to anything but what is required. This is the simple solution to the
> problems people are having, and I can get a patch ready very shortly.
I will try and get PCI idents with future bug reports. Actually I seem
to remember at least one recent report like this to dri-devel as well ?
> I hope this clears up the issue, because I know it has been a source of pain
> for Alan and many other kernel developers. I wish this was a simple issue,
> but unfortunately it is not.
Thanks. As ever the truth and the specification tend to be different
things 8(

Alan,
I know we have talked about this issue before, but I want to rehash. The
statement that only what the bios programs is not entirely correct, but this
does hold true for some chipsets where we don't know all the details of agp
mode switching. In fact the agp specification does not require that the
bios setup and enable the agp aperture on bootup. So if we stick to your
premise that the bios has to deal with it, we will break on such setups. On
the broken chipsets we need to override the code in the setmode function,
but we can also get the details from the vendor to support this properly. I
would recommend that these chipsets override the base set mode function and
not allow any mode switching until we find out the details on how to do it
correctly. According to the agp spec whatever capability bits are available
are the modes that are valid for use. The agp kernel module will look at
the capability bits and only allow bits that are set in the capability
register to be used. It was designed this way to protect the user against
setting agp modes that are not supported by the hardware, thus a user can't
set agp mode to 4x if 4x isn't in the capability register. According to the
agp spec, thats how mode setting is supposed to work.
The biggest problem with only using the bios default is that there is alot
of motherboard and graphics card combinations which don't work reliably at
anything but AGP 1x. The timings for 2x and 4x if they are off by even a
little bit, the graphics card will not be able to reliably DMA from main
memory. This is why the Xserver defaults to agp 1x in almost all cases,
since its the most reliable setting. This variable actually can be tuned,
but unless you are agp bandwidth limited (which is not the common case,
especially if your rendering tri/line strips) this variable will not get you
much performance improvement. However if you are doing alot of blits to and
from agp mapped memory, or other extremely bandwidth intensive applications
the agp mode setting can completely detrimine your performance.
So if the bios says we want agp 2x but the card in their agp slot only
works at agp 1x with that motherboard then we have broken someones
configuration that was working before. agp 1x is the only safe setting IF
we are sure we have the mode setting code correct for that chipset. If we
don't know that ALL bets are off. For tuning purposes depending on they
usage of agp bandwidth, agp mode 1x is not always the optimal setting. I
wish we could just say always go to the max agp setting or whatever the
current chipsets mode register is set at. Unfortunately that will not work
in many cases. Just going with what is already programmed in the mode
register is going to break peoples configurations that worked fine before.
If someone will give me a list of chipsets (pci vendor/device pairs) that
we know require the agp mode to be programmed by the bios I will write an
overrided function for their agp drivers so they will not set the graphics
mode to anything but what is required. This is the simple solution to the
problems people are having, and I can get a patch ready very shortly.
I hope this clears up the issue, cause I know it has been a source of pain
for Alan and many other kernel developers. I wish this was a simple issue,
but it is not unfortunately.
Hope this clears up some things,
-Jeff
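The capability-register masking Jeff describes could be sketched like this in C. The rate bit values follow the AGP status register layout, but the function name and the fallback policy are illustrative assumptions, not the actual agpgart code:

```c
#include <assert.h>

/* AGP status/command register rate bits (AGP 2.0 register layout). */
#define AGP_RATE_1X 0x1
#define AGP_RATE_2X 0x2
#define AGP_RATE_4X 0x4

/* Mask a requested rate against the capability register so the user
 * can never enable a mode the hardware doesn't advertise. If the
 * requested rate isn't advertised, fall back to the fastest
 * advertised rate below it (a sketch of the kernel module's policy). */
static unsigned agp_mask_rate(unsigned requested, unsigned cap_rates)
{
    unsigned r = requested & cap_rates &
                 (AGP_RATE_1X | AGP_RATE_2X | AGP_RATE_4X);
    if (!r) {
        unsigned bit;
        for (bit = AGP_RATE_4X; bit; bit >>= 1)
            if (bit < requested && (cap_rates & bit))
                return bit;      /* best supported rate below request */
        return 0;                /* nothing usable advertised */
    }
    return r;
}
```

This is exactly the protection Jeff mentions: requesting 4x against a chipset that only advertises 1x/2x yields 2x, never an unsupported mode.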

Hi everybody,
I just remembered that I had similar symptoms to the ones frequently
described on the dri-users list lately (screen goes to power saving
mode, system locked). It was after the cvs update that introduced
interrupt controlled frame throttling.
I noticed that my BIOS assigned IRQ 10 to the graphics card but lspci
and the Xserver log reported IRQ 5. I solved the problem by removing an
ACPI patch from my kernel (2.4.19) which obviously reassigned the VGA
interrupt. Since then everything works fine with IRQ 10.
Best regards,
Felix

Ian Romanick wrote:
> On Mon, Sep 30, 2002 at 09:22:00AM -0600, Brian Paul wrote:
>
>>Ian Romanick wrote:
>>
>>>Hello all.
>>>
>>>I noticed that NV_texture_rectangle appeared in the R100 driver string after
>>>the recent R200 merge. I did some looking around, and found that it is
>>>explicitly enabled in the R200 driver and has code to support rectangular
>>>textures. It seems that it is enabled by default in Mesa. Is this correct?
>>
>>It's enabled for the R200 driver and software rendering only.
>
>
> That may be the desired result, but extras/Mesa/src/extensions.c, line 113
> tells me differently:
>
> { ON, "GL_NV_texture_rectangle", F(NV_texture_rectangle) },
>
> It shows up in the extension string from glxinfo on the R100 driver.
Oops, that's an accident. I'll fix it.
>>>I didn't dig deep enough in Mesa to see if it would somehow automatically
>>>convert a rectangle to a power-of-two, so this might be okay.
>>
>>It does not do that. The texture targets GL_TEXTURE_2D and
>>GL_TEXTURE_RECTANGLE_NV are distinct.
>
>
> Okay. That's what I figured. Thanks for saving my dig time. :)
>
> [snip]
>
>
>>Simulating NPOT textures with conventional targets could be pretty tricky.
>>Note that NPOT texture coordinates range from [0,Width]x[0,Height], not
>>[0,1]x[0,1] as normal textures do. Some clamp/repeat modes aren't supported
>>either.
>
>
> Right. Mipmap (and aniso?) filter modes are also forbidden. I gave this a
> bit more thought after my last message. I don't think this is something
> that Mesa could automatically do. It would have to be done on a per-driver
> basis. You'd have to change the texture upload routines (easy) and the emit
> routines (hard). If NV_texture_rectangle still used [0,1]x[0,1], it would
> be pretty easy.
I suppose we could multiply the S and T coords by the texture size, but that
might upset the LOD calculation which is needed to choose between the
minification and magnification filter (even though we don't have mipmaps).
-Brian
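For what it's worth, the coordinate-convention gap being discussed can be shown with two trivial helpers. Both names are hypothetical, not Mesa functions: NV_texture_rectangle addresses texels in [0,Width]x[0,Height], while conventional targets use normalized [0,1], so emulation in either direction is a scale by the texture dimensions.

```c
#include <assert.h>

/* Texel-space rectangle coordinate ([0,W]) -> normalized [0,1],
 * what you'd need to feed a conventional GL_TEXTURE_2D target. */
static float texels_to_normalized(float coord_texels, int size)
{
    return coord_texels / (float)size;
}

/* Normalized [0,1] -> texel space, the "multiply the S and T coords
 * by the texture size" direction mentioned above. */
static float normalized_to_texels(float coord, int size)
{
    return coord * (float)size;
}
```

The scale itself is trivial; as noted above, the real difficulty is that it would have to happen in each driver's emit path, and it interacts with the hardware's LOD calculation.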

On Mon, Sep 30, 2002 at 09:22:00AM -0600, Brian Paul wrote:
> Ian Romanick wrote:
> > Hello all.
> >
> > I noticed that NV_texture_rectangle appeared in the R100 driver string after
> > the recent R200 merge. I did some looking around, and found that it is
> > explicitly enabled in the R200 driver and has code to support rectangular
> > textures. It seems that it is enabled by default in Mesa. Is this correct?
>
> It's enabled for the R200 driver and software rendering only.
That may be the desired result, but extras/Mesa/src/extensions.c, line 113
tells me differently:
{ ON, "GL_NV_texture_rectangle", F(NV_texture_rectangle) },
It shows up in the extension string from glxinfo on the R100 driver.
> > I didn't dig deep enough in Mesa to see if it would somehow automatically
> > convert a rectangle to a power-of-two, so this might be okay.
>
> It does not do that. The texture targets GL_TEXTURE_2D and
> GL_TEXTURE_RECTANGLE_NV are distinct.
Okay. That's what I figured. Thanks for saving my dig time. :)
[snip]
> Simulating NPOT textures with conventional targets could be pretty tricky.
> Note that NPOT texture coordinates range from [0,Width]x[0,Height], not
> [0,1]x[0,1] as normal textures do. Some clamp/repeat modes aren't supported
> either.
Right. Mipmap (and aniso?) filter modes are also forbidden. I gave this a
bit more thought after my last message. I don't think this is something
that Mesa could automatically do. It would have to be done on a per-driver
basis. You'd have to change the texture upload routines (easy) and the emit
routines (hard). If NV_texture_rectangle still used [0,1]x[0,1], it would
be pretty easy.
--
Smile! http://antwrp.gsfc.nasa.gov/apod/ap990315.html

Ian Romanick wrote:
> Hello all.
>
> I noticed that NV_texture_rectangle appeared in the R100 driver string after
> the recent R200 merge. I did some looking around, and found that it is
> explicitly enabled in the R200 driver and has code to support rectangular
> textures. It seems that it is enabled by default in Mesa. Is this correct?
It's enabled for the R200 driver and software rendering only.
> I didn't dig deep enough in Mesa to see if it would somehow automatically
> convert a rectangle to a power-of-two, so this might be okay.
It does not do that. The texture targets GL_TEXTURE_2D and
GL_TEXTURE_RECTANGLE_NV are distinct.
> In the R100 driver, we could be in one of two situations.
>
> 1. NV_texture_rectangle is exported, but not really supported. This is bad.
> If this is the case, then extras/Mesa/src/extensions.c should be changed
> to mark NV_texture_rectangle OFF by default. I'd really hate to see a
> crash, incorrect rendering, or a SW fallback for a non-required extension.
>
> 2. Mesa automagically does some stuff to make rectangular textures work on
> hardware that only supports power-of-two textures. In this case, there
> could be a lot of wasted space on R100 (which does support rectangular
> textures). In this case, Keith: what do I need to do to the R100 driver
> to make it work? I don't want to have to just look for differences from
> the R200 driver, as that is a lot of (error prone) work. :)
NPOT textures are really supported in the R200. It's a feature we exposed
for the Weather Channel project.
Simulating NPOT textures with conventional targets could be pretty tricky.
Note that NPOT texture coordinates range from [0,Width]x[0,Height], not
[0,1]x[0,1] as normal textures do. Some clamp/repeat modes aren't supported
either.
-Brian

Hello all.
I noticed that NV_texture_rectangle appeared in the R100 driver string after
the recent R200 merge. I did some looking around, and found that it is
explicitly enabled in the R200 driver and has code to support rectangular
textures. It seems that it is enabled by default in Mesa. Is this correct?
I didn't dig deep enough in Mesa to see if it would somehow automatically
convert a rectangle to a power-of-two, so this might be okay.
In the R100 driver, we could be in one of two situations.
1. NV_texture_rectangle is exported, but not really supported. This is bad.
If this is the case, then extras/Mesa/src/extensions.c should be changed
to mark NV_texture_rectangle OFF by default. I'd really hate to see a
crash, incorrect rendering, or a SW fallback for a non-required extension.
2. Mesa automagically does some stuff to make rectangular textures work on
hardware that only supports power-of-two textures. In this case, there
could be a lot of wasted space on R100 (which does support rectangular
textures). In this case, Keith: what do I need to do to the R100 driver
to make it work? I don't want to have to just look for differences from
the R200 driver, as that is a lot of (error prone) work. :)

On Mon, Sep 30, 2002 at 01:42:59PM +0100, Alan Cox wrote:
> AGPMode has to match the chipset setting (see lspci -v). There is no
> other correct setting or 'tuning'.
I've been wondering about this one. No BIOS I've seen allows you to set a
specific mode. On my computer I can only select if 4x is supported. 1x and
2x are always visible in lspci -v output.
--
Ville Syrjälä
syrjala@...
http://www.sci.fi/~syrjala/

On Mon, 2002-09-30 at 13:10, Michel Dänzer wrote:
> > Option "AGPMode" "4"
>
> Can cause instabilities or even failures, and doesn't provide too much
> benefit in my experience.
AGPMode has to match the chipset setting (see lspci -v). There is no
other correct setting or 'tuning'.
Alan

On Mon, 2002-09-30 at 06:13, Jason Cook wrote:
> Nicholas,
>
> I have followed the advice that others gave you in their replies and
> have been able to find the info on environmental variables but I have
> not looked in the right place for XF86Config Options. Do you know
> where to find them?
>
> I use for my Radeon VIVO (QD)
>
> Option "AGPMode" "4"
Can cause instabilities or even failures, and doesn't provide too much
benefit in my experience.
> Option "AGPSize" "64"
This one doesn't do much for the radeon driver currently, but might
become important when we start using the AGP memory manager, like the
r200 driver already does. (Increase the default to 16?)
> Option "RingSize" "8"
> Option "BufferSize" "2"
You shouldn't have to change the defaults for these (though we might
want to increase the default for the latter to try and avoid
intermittent radeon_freelist_get failures?). In particular, the ring is
plenty big by default I think.
> Option "EnableDepthMoves" "true"
This one's gone, plus it had a comment about being slow in the source...
(which is probably why it was an option in the first place)
> Option "EnablePageFlip" "true"
This one is only an option until all the issues around it are resolved.
> Option "AGPFastWrite" "1"
This one is an option because it causes failures on some systems.
> Are there others I've missed? Are these Radeon specific?
Most are.
As you see, most options are options because the defaults should be fine
and/or they might cause problems.
--
Earthling Michel Dänzer (MrCooper)/ Debian GNU/Linux (powerpc) developer
XFree86 and DRI project member / CS student, Free Software enthusiast

On Mon, 2002-09-30 at 06:40, Nicholas Leippe wrote:
> On Sunday 29 September 2002 10:13 pm, Jason Cook wrote:
> > Nicholas,
> >
> > I have followed the advice that others gave you in their replies and
> > have been able to find the info on environmental variables but I have
> > not looked in the right place for XF86Config Options. Do you know
> > where to find them?
>
> Well, it appears that the driver-specific man-pages have them:
>
> http://www.xfree86.org/current/manindex4.html
>
> However, some man pages are missing, notably ati radeon.
There will hopefully be one in 4.3.0 at the latest, someone posted a
draft to the Xpert list a while ago.
> Also, these only cover what was available at the 4.2.1 release--not what is
> currently in DRI CVS. (does dri-cvs contain the man page sources?--I
> didn't think it did.)
Yes it does, programs/Xserver/hw/xfree86/drivers/<driver>/<driver>.man,
which gets processed to <driver>._man .

On Mon, 2002-09-30 at 13:28, Martin Spott wrote:
> > I just uploaded a set of binary snapshots built from the CVS head
> > using RedHat's compat-gcc-7.3-2.96.110 package (which produces code compatible
> > with the gcc bundled with the RedHat 7.3 and is the same which was producing
> > the snapshots before).
>
> Unfortunately this appears to be not very helpful for those of us who
> test-run the snapshots on a regular basis against known OpenGL programs. This
> is from the radeon-20020930 binary snapshot:
>
> libGL: OpenDriver: trying /usr/X11R6/lib/modules/dri/radeon_dri.so
> libGL error: dlopen failed: /lib/libc.so.6: version `GLIBC_2.3' not found (required by /usr/X11R6/lib/modules/dri/radeon_dri.so)
>
> _I_ don't have glibc-2.3 on my system and I believe, others don't either. So
> this _might_ render the binary snapshots pretty useless.
But so the 2D driver from that snapshot works for you?

> I just uploaded a set of binary snapshots built from the CVS head
> using RedHat's compat-gcc-7.3-2.96.110 package (which produces code compatible
> with the gcc bundled with the RedHat 7.3 and is the same which was producing
> the snapshots before).
Unfortunately this appears to be not very helpful for those of us who
test-run the snapshots on a regular basis against known OpenGL programs. This
is from the radeon-20020930 binary snapshot:
libGL: OpenDriver: trying /usr/X11R6/lib/modules/dri/radeon_dri.so
libGL error: dlopen failed: /lib/libc.so.6: version `GLIBC_2.3' not found (required by /usr/X11R6/lib/modules/dri/radeon_dri.so)
_I_ don't have glibc-2.3 on my system and I believe, others don't either. So
this _might_ render the binary snapshots pretty useless.
Cheers,
Martin.
--
Unix _IS_ user friendly - it's just selective about who its friends are !
--------------------------------------------------------------------------

Hi!
Sun 29, 15:53:34 -0700, Linus Torvalds (torvalds@...) wrote:
>
> On 29 Sep 2002, Jay Phelps wrote:
> >
> > It looks to me like DRI claims to be starting up A-OK. However, glxinfo
> > reports no and gears FPS is as such that it's certainly not using DRI,
> > I'm including my log file for examination.
>
> I had something similar the other week. XFree86.log showed that X had
> enabled DRI fine, but no acceleration worked. Enabling LIBGL_DEBUG showed
> that any GLX app was unable to load the r200_dri.so file, even though
> stracing the binary clearly showed that the open of the file (and mmap)
> succeeded cleanly. R200_DEBUG showed absolutely nothing.
>
> Doing a "make clean + make World" fixed it for me - there's probably
> something wrong with the dependencies in some makefile.
>
> Linus
Hmm... What about the binaries from dri.sf.net? Users who use them don't use
CVS :) Maybe stop producing "bad" code until the source is fixed?
--
WBR, Konstantin
ZAO ELKATEL Network/Security assistant
--------------------------------------------------------
...The information is like the bank... (c) EC8OR

On Sun, Sep 29, 2002 at 09:03:12PM -0700, Jason Cook wrote:
>Jose,
>
>Unfortunately that doesn't fix the problem for me. I get the same
>results as before. I lose all visual context, X segfaults and all my
>VTs are black. I reboot and restore.
>
>Sorry I can't give feedback on the other cards. What could the problem
>be? Do the latest snapshots work on your machine? Maybe some other
>change has happened in the CVS that significantly alters things for
>the snapshots?
My machines all have Mach64, which lives in a separate branch now, so
they are unaffected.
Well, there are two things that we could try: make a series of snapshots
without merging the XFree 4.2.0 code in, or make a series of snapshots
on a native gcc 2.9x machine.
But personally I'm more inclined to go find the reason for the problem
directly than to test every combination of parameters in the hope of
finding the answer.
Jason, is it possible for you to download the XFree86 GDB build
(http://www.dawa.demon.co.uk/xfree-gdb/ ), start the X server remotely
by doing
gdb XFree86
and perhaps start some applications from another remote terminal by
first setting the DISPLAY environment variable? XFree86 should then segfault
and you should be able to get a stack backtrace by typing 'bt' at
gdb's command line.
Also, I'm not up to speed on the changes in XAA that caused this
(please correct me if I'm wrong), but if the problem is actually there,
shouldn't disabling XAA avoid the segfault?
>Incidentally, why are the gcc 3.x.x snapshots almost twice as large?
As Felix pointed out, it seems that the new gcc uses a different debug
format. Whether that alone is the single reason is the mystery...
José Fonseca

Felix Kühling wrote:
> On Sun, 29 Sep 2002 22:37:36 +0100
> Keith Whitwell <keith@...> wrote:
>
>
>>Felix Kühling wrote:
>>
>>>On Sun, 29 Sep 2002 23:25:03 +0200
>>>Dieter Nützel <Dieter.Nuetzel@...> wrote:
>>>
>>>
>>>
> [snip]
>
>>>>Is r100/r200 a completely different thing?
>>>>If not why not a patch against both?
>>>>Then the testing audience should be much "wider".
>>>>
>>>>
>>>Sure. As far as I could see the code is very similar. However, this:
>>>
>>> rmesa->do_irqs = (0 &&
>>> rmesa->dri.drmMinor >= 6 &&
>>> !getenv("R200_NO_IRQS") &&
>>> rmesa->r200Screen->irq);
>>>
>>>looks like IRQs are turned off by default on R200. So my code wouldn't
>>>be used. Is the reason for IRQs being disabled that the frame throttling
>>>is not implemented properly or are there lower level problems with IRQs?
>>>
>>No, this is a hangover from the bugs last week. It can be removed now.
>>
>
> Ok, I just saw your commit. I'm working on it now. It will take a while,
> though. The code is ready but I want to compile it at least and I haven't
> enabled compiling the r200 driver. Is there a faster way than doing a
> make world after changing config.cf?
>
cd lib/GL/mesa/src/drv
make Makefile
make Makefiles
make depend
make
make install
Should work...
Keith

Brian Paul wrote:
> Felix Kühling wrote:
>
>> Hello,
>>
>> Modifying the frame throttling code in r200_ioctl.c I removed
>> R200_MAX_OUTSTANDING which is no longer needed there. It is, however,
>> still used in r200Clear:
>>
>> if ( rmesa->sarea->last_clear - clear <= R200_MAX_OUTSTANDING+1 ) {
>> break;
>> }
>>
>> The corresponding radeonClear uses a macro RADEON_MAX_CLEARS. There is a
>> macro R200_MAX_CLEARS defined in r200_ioctl.c, too. But it is never used.
>> Did I step on a bug here? Should I change this to
>>
>> if ( rmesa->sarea->last_clear - clear <= R200_MAX_CLEARS ) {
>> break;
>> }
>>
>> Regards,
>> Felix
>>
>
> What's the story with throttling in glClear? I hope we're not using
> glClear as a frame counter of some sort. Applications don't necessarily
> have to call glClear at all. Other apps may call glClear several times per
> frame.
No, this is code by Gareth, I think, to deal with apps like 'clearspd' that
just queue up clears in a tight loop. Without throttling, the behaviour is bad.
However, I can think of a dozen different ways to get similar bad behaviour
without calling glClear either.
Keith

Felix Kühling wrote:
> Hello,
>
> Modifying the frame throttling code in r200_ioctl.c I removed
> R200_MAX_OUTSTANDING which is no longer needed there. It is, however,
> still used in r200Clear:
>
> if ( rmesa->sarea->last_clear - clear <= R200_MAX_OUTSTANDING+1 ) {
> break;
> }
>
> The corresponding radeonClear uses a macro RADEON_MAX_CLEARS. There is a
> macro R200_MAX_CLEARS defined in r200_ioctl.c, too. But it is never used.
> Did I step on a bug here? Should I change this to
>
> if ( rmesa->sarea->last_clear - clear <= R200_MAX_CLEARS ) {
> break;
> }
If you want. I think the number should be '1' in both cases. I don't really
see the need for a macro, even, as we've pretty much narrowed down the only
acceptable value.
Keith
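The throttle check being discussed reduces to a one-line predicate. This is an illustrative sketch with invented names and simplified counter semantics, not the actual r200_ioctl.c code: a new clear may be queued only while the number of clears still outstanding is within the limit.

```c
#include <assert.h>

#define MAX_CLEARS 1  /* '1' is the only acceptable value, per Keith */

/* Sketch: 'submitted' counts clears we've queued, 'completed' counts
 * clears the hardware has retired.  Unsigned subtraction keeps the
 * comparison correct across counter wraparound. */
static int clear_may_proceed(unsigned completed, unsigned submitted)
{
    return (submitted - completed) <= MAX_CLEARS;
}
```

With the limit at 1, an app like 'clearspd' that issues clears in a tight loop stalls as soon as one clear is in flight, instead of queueing an unbounded backlog.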

On Sunday 29 September 2002 10:13 pm, Jason Cook wrote:
> Nicholas,
>
> I have followed the advice that others gave you in their replies and
> have been able to find the info on environmental variables but I have
> not looked in the right place for XF86Config Options. Do you know
> where to find them?
Well, it appears that the driver-specific man-pages have them:
http://www.xfree86.org/current/manindex4.html
However, some man pages are missing, notably ati radeon. Also, these
only cover what was available at the 4.2.1 release--not what is
currently in DRI CVS. (does dri-cvs contain the man page sources?--I
didn't think it did.)
> I use for my Radeon VIVO (QD)
>
> Option "AGPMode" "4"
> Option "AGPSize" "64"
> Option "RingSize" "8"
> Option "BufferSize" "2"
> Option "EnableDepthMoves" "true"
> Option "EnablePageFlip" "true"
> Option "AGPFastWrite" "1"
I only knew about AGPMode and the last two. What do the others do?
> Are there others I've missed? Are these Radeon specific? I have seen
> in a post for the Voodoo5 the Option "DisableSLI" "1" or something
> similar. It would be nice to have ready access to these goodies. I
> scoured the forums for the ones I found. But where did other people
> find them? I know RTFM, but which one?
I don't know, that's why I started this thread ;)
Nick

Nicholas,
I have followed the advice that others gave you in their replies and
have been able to find the info on environmental variables but I have
not looked in the right place for XF86Config Options. Do you know
where to find them?
I use for my Radeon VIVO (QD)
Option "AGPMode" "4"
Option "AGPSize" "64"
Option "RingSize" "8"
Option "BufferSize" "2"
Option "EnableDepthMoves" "true"
Option "EnablePageFlip" "true"
Option "AGPFastWrite" "1"
Are there others I've missed? Are these Radeon specific? I have seen
in a post for the Voodoo5 the Option "DisableSLI" "1" or something
similar. It would be nice to have ready access to these goodies. I
scoured the forums for the ones I found. But where did other people
find them? I know RTFM, but which one?