Ah no worries, thanks for checking it out. Not a priority; mainly just
reporting it as a bug in case it was a very easy fix.
Owen.
Brian Paul wrote:
> OK, the problem is we're not doing sub-pixel adjustment of texcoords
> in the sprite rasterization code. Look in sprite_point() in
> s_points.c if interested. This can be fixed but it will take some
> work. I'll see if I can fix it when I get some spare time.
>
> -Brian
>
>
> Owen Kaluza wrote:
>
>> Hi Brian,
>>
>> Sure, here is the best illustration of the issue I could produce: 1000
>> points, aligned as you suggested.
>> If I bring the point size up from 1.0 the issue isn't as obvious,
>> although it is still noticeable.
>> The attenuation seems to drop the point size and then gradually increase it;
>> you can see the points towards the back disappear and then reappear.
>> Attached are the modified program and two screenshots, one using OSMesa and
>> the other using GLUT with the video card's GL drivers.
>>
>> Thanks,
>> Owen.
>>
>> Brian Paul wrote:
>>
>>> Owen Kaluza wrote:
>>>
>>>> Hello,
>>>> I'm having trouble with point distance attenuation using OSMesa.
>>>> I'm rendering a lot of depth sorted, alpha blended, textured points and
>>>> dark bands are appearing that are not there when I render with the
>>>> system GL.
>>>>
>>>> I found the problem only occurs with point distance attenuation
>>>> turned on.
>>>> If you look at the attached image you can see there are clearly defined
>>>> bands; possibly the point size calculation is incorrect at certain
>>>> distances, resulting in size jumps.
>>>>
>>>> I've attached a sample program that reproduces the problem. I also tried
>>>> the latest MesaLib code (Mesa-7.7-devel-20091105) and the problem is
>>>> still occurring.
>>>>
>>> Could you prune down your test program a bit? Perhaps you could draw
>>> a series of points between the min/max Z positions and see how they look.
>>>
>>> -Brian
>>>
>>>
>
>
> _______________________________________________
> Mesa3d-users mailing list
> Mesa3d-users@...
> https://lists.sourceforge.net/lists/listinfo/mesa3d-users
>

Hi John, sorry this took so long... kind of fell off my radar.
John Wythe <bitspace@...> writes:
> On Sat, Nov 7, 2009 at 2:46 PM, tom fogal <tfogal@...> wrote:
> > Hi John,
> >
> > John Wythe <bitspace@...> writes:
> >> I am encountering different rendering behavior between two
> >> seemingly compatible Linux environments. [. . .] Below are links to
> >> screen-shots and troubleshooting information:
> >>
> >> Screenshots of the issue:
> >> http://lh6.ggpht.com/_mTZwuLfG_iE/SvTnUfC0eWI/AAAAAAAAAB0/SUeL9K7CPcU/s800/screenshots.jpeg
> >
> > These look (to me) like they might be Z-fighting issues.
> >
> > Is there any chance of requesting more resolution from the depth
> > buffer? You would normally do this when choosing your glX visual.
>
> I've never heard of Z-fighting, but I can guess what it is. Probably
> the only way I can get more depth from the buffer is to hack at the
> wine opengl.dll implementation, since all the GL code is in the
> legacy app.
Right.
> However, I would think that this would not be necessary, as it was
> not needed in my desktop environment. I suppose it is possible something
> else is increasing the depth buffer resolution on my desktop.
The spec is worded in such a way as to allow different implementations
to return any among a set of `compatible' buffers. As an example,
you might request a 16-bit depth buffer and get a 32-bit depth buffer.
Another implementation might actually give you the 16-bit depth buffer.
This can mask subtle bugs; an application might require a 24-bit depth
buffer, request a 16-bit buffer, and through `luck', only be tested on
systems that give 32-bit depth buffers.
See the man page for `glXChooseVisual' for more information. This
information should be in the glX spec too, of course.
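The man page's wording can be sketched with a toy chooser: GLX_DEPTH_SIZE is a
minimum, and glXChooseVisual prefers the largest available depth buffer of at
least that size, so two conforming implementations can legitimately return
different buffers for the same request. The function below is an illustrative
stand-in, not the GLX implementation:

```c
#include <assert.h>

/* Illustrative stand-in for glXChooseVisual's depth-size matching: given the
 * depth sizes of the available visuals, return the largest one that meets the
 * requested minimum, or -1 if no visual is compatible.  (Per the man page,
 * GLX_DEPTH_SIZE asks for "at least" the given size.) */
int choose_depth_bits(const int *available, int n, int requested)
{
    int best = -1;                       /* -1: no compatible visual */
    for (int i = 0; i < n; i++) {
        if (available[i] >= requested && available[i] > best)
            best = available[i];         /* prefer the largest match */
    }
    return best;
}
```

An app that requests 16 bits may thus receive 32 on one machine and exactly 16
on another, which is the masking effect described above.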
> >> Server environment information:
> >> http://docs.google.com/View?id=ddkkm9rx_2fvwmsdpt
> >>
> >> Desktop environment information:
> >> http://docs.google.com/View?id=ddkkm9rx_3dgj28nf4
> >
> > Unsurprisingly, your desktop X configuration is using XCB, probably
> > with its libX11 `emulation' of sorts, while your server
> > configuration does not have XCB.
>
> I did some reading about XCB before my initial message and figured it
> was a non-issue, since it seemed to me to be just a binding interface
> that an app would have to be written against in order to use it; wine
> must not be, since it does not require it.
This is not true; XCB has an emulation layer of sorts that translates
libX11 APIs to libXCB APIs.
> >> Instead I compiled Mesa using the xlib software driver. When using
> >> this libGL version the application continues to work just fine on my
> >> desktop.
> >
> > Are you absolutely certain you're using Mesa?
>
> I did not change my xorg.conf, only the LD_LIBRARY_PATH. The output of
> ldd glxinfo shows that the linker is using the mesa build of libGL and
> glxinfo says the renderer is using the Mesa X11 OpenGL renderer. From
> what I understand so far, that means it's using Mesa.
>
> Without the LD_LIBRARY_PATH override, glxinfo instead says Nvidia is
> the renderer.
Sound logic, I think. To be absolutely certain, of course, it'd be
good to check how this works when you've got a `Driver' of "nv" in your
xorg.conf, instead of "nvidia". `rmmod nvidia' if you can too (it seems
to load itself automagically when needed anyway).
Cheers,
-tom

OK, the problem is we're not doing sub-pixel adjustment of texcoords
in the sprite rasterization code. Look in sprite_point() in
s_points.c if interested. This can be fixed but it will take some
work. I'll see if I can fix it when I get some spare time.
-Brian
Owen Kaluza wrote:
> Hi Brian,
>
> Sure, here is the best illustration of the issue I could produce: 1000
> points, aligned as you suggested.
> If I bring the point size up from 1.0 the issue isn't as obvious,
> although it is still noticeable.
> The attenuation seems to drop the point size and then gradually increase it;
> you can see the points towards the back disappear and then reappear.
> Attached are the modified program and two screenshots, one using OSMesa and
> the other using GLUT with the video card's GL drivers.
>
> Thanks,
> Owen.
>
> Brian Paul wrote:
>> Owen Kaluza wrote:
>>> Hello,
>>> I'm having trouble with point distance attenuation using OSMesa.
>>> I'm rendering a lot of depth sorted, alpha blended, textured points and
>>> dark bands are appearing that are not there when I render with the
>>> system GL.
>>>
>>> I found the problem only occurs with point distance attenuation
>>> turned on.
>>> If you look at the attached image you can see there are clearly defined
>>> bands; possibly the point size calculation is incorrect at certain
>>> distances, resulting in size jumps.
>>>
>>> I've attached a sample program that reproduces the problem. I also tried
>>> the latest MesaLib code (Mesa-7.7-devel-20091105) and the problem is
>>> still occurring.
>> Could you prune down your test program a bit? Perhaps you could draw
>> a series of points between the min/max Z positions and see how they look.
>>
>> -Brian
>>
>>

Hi Brian,
Sure, here is the best illustration of the issue I could produce: 1000
points, aligned as you suggested.
If I bring the point size up from 1.0 the issue isn't as obvious,
although it is still noticeable.
The attenuation seems to drop the point size and then gradually increase it;
you can see the points towards the back disappear and then reappear.
Attached are the modified program and two screenshots, one using OSMesa and
the other using GLUT with the video card's GL drivers.
Thanks,
Owen.
Brian Paul wrote:
> Owen Kaluza wrote:
>> Hello,
>> I'm having trouble with point distance attenuation using OSMesa.
>> I'm rendering a lot of depth sorted, alpha blended, textured points and
>> dark bands are appearing that are not there when I render with the
>> system GL.
>>
>> I found the problem only occurs with point distance attenuation
>> turned on.
>> If you look at the attached image you can see there are clearly defined
>> bands; possibly the point size calculation is incorrect at certain
>> distances, resulting in size jumps.
>>
>> I've attached a sample program that reproduces the problem. I also tried
>> the latest MesaLib code (Mesa-7.7-devel-20091105) and the problem is
>> still occurring.
>
> Could you prune down your test program a bit? Perhaps you could draw
> a series of points between the min/max Z positions and see how they look.
>
> -Brian
>

Owen Kaluza wrote:
> Hello,
> I'm having trouble with point distance attenuation using OSMesa.
> I'm rendering a lot of depth sorted, alpha blended, textured points and
> dark bands are appearing that are not there when I render with the
> system GL.
>
> I found the problem only occurs with point distance attenuation turned on.
> If you look at the attached image you can see there are clearly defined
> bands; possibly the point size calculation is incorrect at certain
> distances, resulting in size jumps.
>
> I've attached a sample program that reproduces the problem. I also tried
> the latest MesaLib code (Mesa-7.7-devel-20091105) and the problem is
> still occurring.
Could you prune down your test program a bit? Perhaps you could draw
a series of points between the min/max Z positions and see how they look.
-Brian

Please help.
I want to know how a 3D API is implemented, and how it is divided into
parts; for example (don't take this too seriously, as I don't know much):
3D math, the software driver approach, bug fixing, etc.
I even tried to learn from the Mesa 3.x sources, but I couldn't.
Any link or clue would be appreciated.
Q. How, at the lowest level, is the hardware interacted with?
Thanks in advance,
vivek

Hello,
I'm having trouble with point distance attenuation using OSMesa.
I'm rendering a lot of depth sorted, alpha blended, textured points and
dark bands are appearing that are not there when I render with the
system GL.
I found the problem only occurs with point distance attenuation turned on.
If you look at the attached image you can see there are clearly defined
bands; possibly the point size calculation is incorrect at certain
distances, resulting in size jumps.
I've attached a sample program that reproduces the problem. I also tried
the latest MesaLib code (Mesa-7.7-devel-20091105) and the problem is still occurring.
Thanks,
Owen.

On Sat, Nov 7, 2009 at 2:46 PM, tom fogal <tfogal@...> wrote:
> Hi John,
>
> John Wythe <bitspace@...> writes:
>> I am encountering different rendering behavior between two
>> seemingly compatible Linux environments. [. . .] Below are links to
>> screen-shots and troubleshooting information:
>>
>> Screenshots of the issue:
>> http://lh6.ggpht.com/_mTZwuLfG_iE/SvTnUfC0eWI/AAAAAAAAAB0/SUeL9K7CPcU/s800/screenshots.jpeg
>
> These look (to me) like they might be Z-fighting issues.
>
> Is there any chance of requesting more resolution from the depth
> buffer? You would normally do this when choosing your glX visual.
>
I've never heard of Z-fighting, but I can guess what it is. Probably
the only way I can get more depth from the buffer is to hack at the
wine opengl.dll implementation, since all the GL code is in the legacy
app. However, I would think that this should not be necessary, as it
was not needed in my desktop environment. I suppose it is possible
something else is increasing the depth buffer resolution on my desktop.
>> Server environment information:
>> http://docs.google.com/View?id=ddkkm9rx_2fvwmsdpt
>>
>> Desktop environment information:
>> http://docs.google.com/View?id=ddkkm9rx_3dgj28nf4
>
> Unsurprisingly, your desktop X configuration is using XCB, probably
> with its libX11 `emulation' of sorts, while your server
> configuration does not have XCB.
I did some reading about XCB before my initial message and figured it
was a non-issue, since it seemed to me to be just a binding interface
that an app would have to be written against in order to use it; wine
must not be, since it does not require it.
>> On my desktop I have a NVidia 8800GTS. To try and isolate the
>> problem, I wanted to force my desktop to use the software
>> renderer. For some unknown reason setting LIBGL_ALWAYS_SOFTWARE=1 has
>> no effect.
>
> You're probably using NVIDIA's driver. Actually, you almost definitely
> are, because the only other options are `nv' and `nouveau', and of
> course the Mesa `swrast' driver. `nv' can't do 3D, and `nouveau'
> will crash when used for 3D -- if you're lucky -- AFAICT (never tried
> it myself).
> If you're using NVIDIA's driver, none of Mesa's environment variables
> matter.
This makes complete sense now. It did not strike me initially that
using the nvidia driver removes Mesa from the rendering pipeline. But
that oversight is just me still learning about the X architecture.
>> Instead I compiled Mesa using the xlib software driver. When using
>> this libGL version the application continues to work just fine on my
>> desktop.
>
> Are you absolutely certain you're using Mesa?
I did not change my xorg.conf, only the LD_LIBRARY_PATH. The output of
ldd glxinfo shows that the linker is using the mesa build of libGL and
glxinfo says the renderer is using the Mesa X11 OpenGL renderer. From
what I understand so far, that means it's using Mesa.
Without the LD_LIBRARY_PATH override, glxinfo instead says Nvidia is
the renderer.
> I would recommend you remove any drivers your package manager supplies,
> as much as possible at least. This won't be fully possible on Ubuntu
> because the removal of all GL impls will make the package manager want
> to remove X, but at least remove all nvidia packages.
I'll try something like that if I get super desperate, but I don't
wish to mess up my development environment. I wanted to get a third
machine involved to test on. I was going to use an Ubuntu image on
Amazon EC2, but for some reason, as soon as I call winetricks, the
server locks up hard. I have to shut it down from EC2. When I get a
chance, I might pursue something like this again to test with.
>> The server, on the other hand, is a managed environment without root
>> access. The default version of libGL caused the application to crash,
>> which initially, I thought was due to an older version of Xvfb. After
>> learning much more about xorg, I came to realize that it was not the
>> version of Xvfb that made things marginally work, but rather the
>> libGL version that was built as a result of building Xvfb/Mesa. Now
>> I am only building Mesa and libXmu on the server and using the older
>> Xvfb.
>
> CentOS, IMHO, is trash. Everything's too damn old on it; for software
> I work on, we're always hitting things like old compilers not accepting
> valid templates or similar. If you can update the toolchain, I would
> recommend as much. Or better yet, put a Debian stable / Ubuntu LTS /
> hell even openSUSE on the machine and save yourself the pain.
Yeah, but unfortunately there is nothing I can do. CentOS it has to
be for now; it's a managed server. I'll try building my own private
toolchain on the server, but I did not notice any errors during the
build of Mesa. I might save the build log and look closer.
>> On the server experiencing the problem, I have set MESA_DEBUG=FP
>> to try and get some debug information. I also tried to set
>> LIBGL_DEBUG=verbose but that seems to have no effect on either
>> machine. Two messages were encountered at various times -only- on the
>> server:
>>
>> "Mesa warning: couldn't open libtxc_dxtn.so, software DXTn
>> compression/decompression unavailable"
>> and
>> "Mesa warning: XGetGeometry failed!"
>>
>>
>> I downloaded the libtxc_dxtn from
> [snip]
>
> I would not worry about it. Mesa will give that warning regardless
> of whether or not compressed textures are actually used. I encounter
> very few apps that actually use them (I suppose games would frequently,
> though?), and in any case the image you sent makes me think there's no
> texturing at all in your app, anyway.
I assumed as much, but just wanted to make sure.
>> Overall these two machines are
>> * Using the same version of Mesa
>> * Both using software rendering
>> * Both using the same version of wine
>>
>> Which leads me to believe this must be a subtle dependency problem,
>> either at runtime or build time. At this point though, I would have
>> no idea what could affect the rendering in such a way.
>
> My best guess is XCB/X11. Try configuring Mesa with --enable-xcb. If
> the app is threaded, --enable-glx-tls is probably a good idea as well.
>
> Beyond that my guess is issues with the ancient toolchain provided by
> CentOS.
>
> You might consider OSMesa for this use case as well. Though, I guess
> without source to the application, your only option would be to hack
> OSMesa into wine.
Thanks Tom. I'll take a look into these things. I guess I will have to
try to understand XCB more and how wine might use it implicitly, or
explicitly. These are definitely good ideas to try that I would not
have thought of.
Cheers,
John

Hi John,
John Wythe <bitspace@...> writes:
> I am encountering different rendering behavior between two
> seemingly compatible Linux environments. [. . .] Below are links to
> screen-shots and troubleshooting information:
>
> Screenshots of the issue:
> http://lh6.ggpht.com/_mTZwuLfG_iE/SvTnUfC0eWI/AAAAAAAAAB0/SUeL9K7CPcU/s800/screenshots.jpeg
These look (to me) like they might be Z-fighting issues.
Is there any chance of requesting more resolution from the depth
buffer? You would normally do this when choosing your glX visual.
> Server environment information:
> http://docs.google.com/View?id=ddkkm9rx_2fvwmsdpt
>
> Desktop environment information:
> http://docs.google.com/View?id=ddkkm9rx_3dgj28nf4
Unsurprisingly, your desktop X configuration is using XCB, probably
with its libX11 `emulation' of sorts, while your server
configuration does not have XCB.
> On my desktop I have a NVidia 8800GTS. To try and isolate the
> problem, I wanted to force my desktop to use the software
> renderer. For some unknown reason setting LIBGL_ALWAYS_SOFTWARE=1 has
> no effect.
You're probably using NVIDIA's driver. Actually, you almost definitely
are, because the only other options are `nv' and `nouveau', and of
course the Mesa `swrast' driver. `nv' can't do 3D, and `nouveau'
will crash when used for 3D -- if you're lucky -- AFAICT (never tried
it myself).
If you're using NVIDIA's driver, none of Mesa's environment variables
matter.
> Instead I compiled Mesa using the xlib software driver. When using
> this libGL version the application continues to work just fine on my
> desktop.
Are you absolutely certain you're using Mesa?
I would recommend you remove any drivers your package manager supplies,
as much as possible at least. This won't be fully possible on Ubuntu
because the removal of all GL impls will make the package manager want
to remove X, but at least remove all nvidia packages.
> The server, on the other hand, is a managed environment without root
> access. The default version of libGL caused the application to crash,
> which initially, I thought was due to an older version of Xvfb. After
> learning much more about xorg, I came to realize that it was not the
> version of Xvfb that made things marginally work, but rather the
> libGL version that was built as a result of building Xvfb/Mesa. Now
> I am only building Mesa and libXmu on the server and using the older
> Xvfb.
CentOS, IMHO, is trash. Everything's too damn old on it; for software
I work on, we're always hitting things like old compilers not accepting
valid templates or similar. If you can update the toolchain, I would
recommend as much. Or better yet, put a Debian stable / Ubuntu LTS /
hell even openSUSE on the machine and save yourself the pain.
> On the server experiencing the problem, I have set MESA_DEBUG=FP
> to try and get some debug information. I also tried to set
> LIBGL_DEBUG=verbose but that seems to have no effect on either
> machine. Two messages were encountered at various times -only- on the
> server:
>
> "Mesa warning: couldn't open libtxc_dxtn.so, software DXTn
> compression/decompression unavailable"
> and
> "Mesa warning: XGetGeometry failed!"
>
>
> I downloaded the libtxc_dxtn from
[snip]
I would not worry about it. Mesa will give that warning regardless
of whether or not compressed textures are actually used. I encounter
very few apps that actually use them (I suppose games would frequently,
though?), and in any case the image you sent makes me think there's no
texturing at all in your app, anyway.
> Overall these two machines are
> * Using the same version of Mesa
> * Both using software rendering
> * Both using the same version of wine
>
> Which leads me to believe this must be a subtle dependency problem,
> either at runtime or build time. At this point though, I would have
> no idea what could affect the rendering in such a way.
My best guess is XCB/X11. Try configuring Mesa with --enable-xcb. If
the app is threaded, --enable-glx-tls is probably a good idea as well.
Beyond that my guess is issues with the ancient toolchain provided by
CentOS.
You might consider OSMesa for this use case as well. Though, I guess
without source to the application, your only option would be to hack
OSMesa into wine.
HTH,
-tom

Hello Mesa3d-users,
I am encountering different rendering behavior between two seemingly
compatible Linux environments. After about a week of troubleshooting
this, researching Google, mailing list archives, and bug trackers, I
would be most grateful for any assistance from this list. Below are
links to screen-shots and troubleshooting information:
Screenshots of the issue:
http://lh6.ggpht.com/_mTZwuLfG_iE/SvTnUfC0eWI/AAAAAAAAAB0/SUeL9K7CPcU/s800/screenshots.jpeg
Server environment information:
http://docs.google.com/View?id=ddkkm9rx_2fvwmsdpt
Desktop environment information:
http://docs.google.com/View?id=ddkkm9rx_3dgj28nf4
We have a legacy Windows application (without source) that we are
adapting to run under wine on a headless CentOS server to perform
work in batches. We have a custom Windows wrapper around this legacy
application to make it scriptable and capture a couple of screenshots
using GDI+ during the batch jobs. This works fine on my Ubuntu
desktop, but on the server it appears that some of the GL polygons
have their normals swapped (I'm not an expert graphics programmer so I
hope I have these terms correct).
On my desktop I have a NVidia 8800GTS. To try and isolate the problem,
I wanted to force my desktop to use the software renderer. For some
unknown reason setting LIBGL_ALWAYS_SOFTWARE=1 has no effect. Instead
I compiled Mesa using the xlib software driver. When using this libGL
version the application continues to work just fine on my desktop.
The server, on the other hand, is a managed environment without root
access. The default version of libGL caused the application to crash,
which initially, I thought was due to an older version of Xvfb. After
learning much more about xorg, I came to realize that it was not the
version of Xvfb that made things marginally work, but rather the libGL
version that was built as a result of building Xvfb/Mesa. Now I am
only building Mesa and libXmu on the server and using the older Xvfb.
On the server experiencing the problem, I have set MESA_DEBUG=FP to
try and get some debug information. I also tried to set
LIBGL_DEBUG=verbose but that seems to have no effect on either
machine. Two messages were encountered at various times -only- on the
server:
"Mesa warning: couldn't open libtxc_dxtn.so, software DXTn
compression/decompression unavailable"
and
"Mesa warning: XGetGeometry failed!"
I downloaded the libtxc_dxtn from
http://www.t2-project.org/packages/libtxc-dxtn.html but that did not
seem to fix the problem. However the debug message changed to:
"Mesa warning: software DXTn compression/decompression available"
Overall these two machines are
* Using the same version of Mesa
* Both using software rendering
* Both using the same version of wine
Which leads me to believe this must be a subtle dependency problem,
either at runtime or build time. At this point though, I would have no
idea what could affect the rendering in such a way.
Any suggestions would be greatly appreciated.
Thank you,
John