On Mon, May 31, 2010 at 9:10 AM, Marco Avellino <
marco.avellino@...> wrote:
> Hi,
>
> when I use "print hits" I read the value 0.50000000016 for both min_depth and max_depth.
>
>
>
> For example, when I call "print min_depth, max_depth, names" the output is:
>
> 0.50000000016 0.50000000016 [object_X's name]
>
> 0.50000000016 0.50000000016 [object_Y's name]
>
> …
>
>
>
> If I use GLdouble(min_depth or max_depth), I read 0.5000000001641532
>
>
>
> I always read 0.50000000016 except when I draw objects very near. At that moment I read values like 0.49..., 0.48..., etc., and min_depth != max_depth
>
> It seems that Selection works only in the range [0.2, 0.5], where 0.2 is forced
> by gluPerspective(45, 1.0*width/height, 0.2, 2.0)
>
>
>
> If you want, I can send you my code, but I will do it only if you agree.
>
Yes, that might be helpful. I'll fuss with it, and see what I can do.
Ian
>
>
> PS.
>
> Sorry for not writing to you at sourceforge.net. It seems that I am not able to
> create a new account.
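For background on the clustered readings: window-space depth from a perspective projection is strongly nonlinear in eye distance, so most of the scene lands in a narrow band of depth values. A minimal sketch of the mapping and its inverse (pure Python; the helper names are mine, assuming the standard gluPerspective-style depth mapping):

```python
def eye_distance_to_window_depth(z, z_near, z_far):
    """Standard perspective depth mapping: eye-space distance z in
    [z_near, z_far] maps to window depth in [0, 1]."""
    return z_far * (z - z_near) / ((z_far - z_near) * z)

def window_depth_to_eye_distance(d, z_near, z_far):
    """Inverse mapping: window depth d in [0, 1] back to eye-space distance."""
    return (z_far * z_near) / (z_far - d * (z_far - z_near))
```

With zNear=0.2 and zFar=2.0 as in this thread, a window depth of 0.5 already corresponds to an eye distance of only about 0.36: most of the [0, 1] depth range is spent very close to zNear, which is consistent with the depths only changing for objects drawn very near.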
>

Hi,
when I use "print hits" I read the value 0.50000000016 for both min_depth
and max_depth.
For example, when I call "print min_depth, max_depth, names" the output is:
0.50000000016 0.50000000016 [object_X's name]
0.50000000016 0.50000000016 [object_Y's name]
...
If I use GLdouble(min_depth or max_depth), I read 0.5000000001641532
I always read 0.50000000016 except when I draw objects very near. At that
moment I read values like 0.49..., 0.48..., etc., and min_depth != max_depth
It seems that Selection works only in the range [0.2, 0.5], where 0.2 is forced by
gluPerspective(45, 1.0*width/height, 0.2, 2.0)
If you want, I can send you my code, but I will do it only if you agree.
PS.
Sorry for not writing to you at sourceforge.net. It seems that I am not able to
create a new account.
_____
From: Ian Mallett [mailto:geometrian@...]
Sent: Sunday, May 30, 2010 5:31 PM
To: Marco Avellino
Cc: pyopengl-users
Subject: Re: [PyOpenGL-Users] Need help about Picking
Hi,
Unfortunately, I can't immediately detect any problems with this code.
-The name stack is, by default, only 64 entries deep. There's no advantage to
having a 1024-length selection buffer.
-What happens if you use "print hits"? Is it all zeroes?
Ian


On Mon, May 24, 2010 at 7:48 AM, Marco Avellino <
marco.avellino@...> wrote:
> Hi, I followed a lot of tutorials about picking and I decided to use the
> Selection Buffer.
>
I've never used it, but it looks interesting.
> hits = glRenderMode(GL_RENDER)
>
> for record in hits:
>
> min_depth, max_depth, names = record
>
Seeing it in context would be more helpful. For example, did you use
glRenderMode(GL_SELECT) and do your drawing before this? Could we see your
actual source file?
> and I tried to obtain the lowest value of "min_depth" (= "the nearest
> object"), but the result is not correct: the reason is that all my returned
> values of min_depth are identical.
>
...telling me that nothing is being written to this buffer. Which could
mean many things--it's not configured correctly, you're not drawing
anything, you don't have a graphics card, etc.
Could we see the actual code?
Thanks,
Ian
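For completeness, once `hits` does contain distinct depths, selecting the nearest record from the (min_depth, max_depth, names) layout quoted above is a one-liner (a sketch using fabricated depth values):

```python
def nearest_hit(hits):
    """Return the hit record whose min_depth is smallest, i.e. the object
    nearest the camera. Each record is (min_depth, max_depth, names)."""
    return min(hits, key=lambda record: record[0])

# Example with made-up depths:
hits = [(0.48, 0.49, [3]), (0.41, 0.45, [7]), (0.50, 0.50, [1])]
nearest = nearest_hit(hits)  # the record with min_depth 0.41
```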

Hi,
I was under the impression that (in the absence of bilinear or trilinear
filtering) each pixel simply maps to a single texel, no matter how far away
you are. This is what causes small (in screenspace) polygons that use a
large range of texture coordinates to look static-y as they move.
Mipmaps address this by successively averaging the adjacent texels down
until you get a teensy texture image (when this is sampled, effectively, the
hardware is reading an average of all the texels instead of one texel
more-or-less at random from the original image). The (speed) advantage of
mipmapping is that the texture data that's being sampled can be smaller, so
the hardware can find the proper values more efficiently.
Of course, in theory, mipmaps ought to be slowest. Bilinear filtering
requires 4 samples (hardware-controlled samples, but four nonetheless).
Trilinear filtering of course uses 8, and unfortunately, texture samples are
one of the slowest processes on any graphics card. A quick benchmark on my
computer confirms all this to be true. 'Course, there may be something
about well-defined graphics paths on other computers that I don't know about
. . .
And I agree--mipmaps are great. Use 'em anyway.
Ian

Mike C. Fletcher wrote:
> it looks like on
> your driver the linear bitmap sampler is doing something non-optimal
> when it's sampling a (large) texture down across a large scale
> difference.
I would expect this to be slow using any driver. When there
is a large scale reduction, each pixel on the screen projects
onto a big block of texels in the texture. Doing linear
sampling on that requires scanning all of those texels and
averaging them together.
Using mipmaps, on the other hand, it's never necessary to
average more than four texels (or possibly eight, if you're
also interpolating between mipmap levels) for each screen
pixel.
Moral: Mipmaps are good. Use them!
--
Greg
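As a footnote on cost: each mipmap level halves the previous one, so the whole chain adds only about a third to the base texture's memory. A quick sketch of that arithmetic (plain Python, not a PyOpenGL call):

```python
def mipmap_chain(width, height):
    """List the (width, height) of every mipmap level, halving each
    dimension (never below 1) down to the 1x1 level."""
    levels = [(width, height)]
    w, h = width, height
    while w > 1 or h > 1:
        w, h = max(w // 2, 1), max(h // 2, 1)
        levels.append((w, h))
    return levels

levels = mipmap_chain(256, 256)  # 9 levels, 256x256 down to 1x1
extra = sum(w * h for w, h in levels[1:]) / (256 * 256)
# extra is ~0.333: the whole chain costs about 1/3 more memory.
```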

Hi, I followed a lot of tutorials about picking and I decided to use the
Selection Buffer.
Now, the code that I wrote works as expected, but I am having trouble discovering
"the nearest object".
In fact, I wrote in my code (as I successfully learned at
http://pyopengl.sourceforge.net/documentation/opengl_diffs.html):
...
hits = glRenderMode(GL_RENDER)
for record in hits:
min_depth, max_depth, names = record
...
and I tried to obtain the lowest value of "min_depth" (= "the nearest
object"), but the result is not correct: the reason is that all my returned
values of min_depth are identical.
It is perhaps caused by a truncated internal representation (I am not sure;
I am a novice in Python too), or by a bad PyOpenGL configuration.
For example, at page
http://www.dei.isep.ipp.pt/~matos/cg/docs/manual/gluPerspective.3G.html I
read:
"roughly log2(r) bits of depth buffer precision are lost. Because r approaches
infinity as zNear approaches 0, zNear must never be set to 0."
and I used "gluPerspective(45, 1.0*width/height, 0.05, 100.0)" with the
value "0.05" near 0. I tried to change "0.05" to "1.0", but I get the bad
effect that my model disappears as soon as I zoom in.
Can you suggest the correct way to calculate the lowest value of
"min_depth"? Or can you suggest an alternative method to simulate a pick?
I'd like to avoid color-based methods.
Thank you for your help.
Marco Avellino
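The man-page rule quoted above is easy to evaluate for the zNear values being compared (a sketch; with r = zFar/zNear, roughly log2(r) bits of depth buffer precision are lost):

```python
import math

def depth_bits_lost(z_near, z_far):
    """Per the gluPerspective man page: with r = zFar / zNear,
    roughly log2(r) bits of depth buffer precision are lost."""
    return math.log2(z_far / z_near)

depth_bits_lost(0.05, 100.0)  # ~11 bits lost with the setup above
depth_bits_lost(1.0, 100.0)   # ~6.6 bits lost with the attempted zNear=1.0
```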

On 10-05-22 04:00 PM, Derakon wrote:
> Responses inline.
>
> On Sat, May 22, 2010 at 12:04 PM, Mike C. Fletcher
> <mcfletch@...> wrote:
>
>> To start debugging:
>>
>> does disabling OpenGL_accelerate change your performance (on my machine
>> there is no difference, which suggests that OpenGL_accelerate isn't likely
>> to be your problem)
>>
>> import OpenGL
>> OpenGL.USE_ACCELERATE = False
>>
>>
> Tried this; no change.
>
Okay, so not likely an issue with OpenGL_accelerate (good).
...
> No idea how to do this on an OSX box, but given that I'm using a card
> that shipped with the box, and that games do work properly, I'd be
> extremely surprised if I were using software rendering.
>
Good surmise. I hadn't realized you were on OSX; that should always have
DRI available.
...
> They are. I've been playing Torchlight all last week; I assume it's
> OpenGL because what else would it be on a Mac? DirectX is out of the
> question and I'm not aware of any other graphics libraries that would
> work.
>
Yup, it would have to be OpenGL AFAIU.
>> confirm that you are not using an OpenGL compositing desktop (e.g. compiz on
>> Linux) which may cause indirect rendering of OpenGL windows
>>
> Again, not certain how to do this; however, I tested the script in
> OSX's built-in X11 system, which (I'm fairly certain) skips most of
> the pretty-ifying steps that the window manager normally does, and
> it's still slow.
>
OSX has compositing by default, but it should work properly (whereas
there are some situations on Compiz (Linux) that cause issues).
> No ATI control panel, but again, something like this would affect games.
Yup.
>> try generating mipmaps and using mipmap-nearest (just for kicks)
>>
> Okay, I replaced the glTexImage2D in the script with this:
>
> GLU.gluBuild2DMipmaps(GL.GL_TEXTURE_2D, GL.GL_RGBA, surface.get_width(),
> surface.get_height(), GL.GL_RGBA, GL.GL_UNSIGNED_BYTE, textureData)
>
> and replaced the GL_TEXTURE_MIN_FILTER line with this:
>
> GL.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER,
> GL.GL_LINEAR_MIPMAP_LINEAR)
>
> and now I get 333FPS! What the heck?
>
That's what I was half expecting. OpenGL drivers tend to be optimized
along certain (common) paths, use of MipMaps is pretty much universal,
so they will be very fast. You're scaling the view constantly (IIRC) so
you're going to have every pixel doing sampling, and it looks like on
your driver the linear bitmap sampler is doing something non-optimal
when it's sampling a (large) texture down across a large scale
difference. With the MipMap, the textures being sampled are much
smaller. You could ask on the OpenGL.org forums and likely get a
definitive answer as to why this particular operation is slow. I
normally chalk it up to the old "do what everyone else does and you'll
be fast" rule of thumb and move on in my code.
Still a surprisingly low MTri on the performance test. That is,
however, likely a different issue from the texture-fill one.
Enjoy,
Mike
--
________________________________________________
Mike C. Fletcher
Designer, VR Plumber, Coder
http://www.vrplumber.com
http://blog.vrplumber.com

Responses inline.
On Sat, May 22, 2010 at 12:04 PM, Mike C. Fletcher
<mcfletch@...> wrote:
>
> To start debugging:
>
> does disabling OpenGL_accelerate change your performance (on my machine
> there is no difference, which suggests that OpenGL_accelerate isn't likely
> to be your problem)
>
> import OpenGL
> OpenGL.USE_ACCELERATE = False
>
Tried this; no change.
> confirm that your machine is using direct rendering (i.e. actually using
> your hardware driver, not a software renderer)
>
> on Linux: glxinfo | grep direct
>
No idea how to do this on an OSX box, but given that I'm using a card
that shipped with the box, and that games do work properly, I'd be
extremely surprised if I were using software rendering.
> confirm that non-Python OpenGL programs are *currently* running reasonably
> well on this machine
They are. I've been playing Torchlight all last week; I assume it's
OpenGL because what else would it be on a Mac? DirectX is out of the
question and I'm not aware of any other graphics libraries that would
work.
> confirm that you are not using an OpenGL compositing desktop (e.g. compiz on
> Linux) which may cause indirect rendering of OpenGL windows
Again, not certain how to do this; however, I tested the script in
OSX's built-in X11 system, which (I'm fairly certain) skips most of
the pretty-ifying steps that the window manager normally does, and
it's still slow.
> confirm that you do not have system-level anti-aliasing settings enabled
> (i.e. a 4x or 8x antialiasing specified in ATIs control panel)
No ATI control panel, but again, something like this would affect games.
> try generating mipmaps and using mipmap-nearest (just for kicks)
Okay, I replaced the glTexImage2D in the script with this:
GLU.gluBuild2DMipmaps(GL.GL_TEXTURE_2D, GL.GL_RGBA, surface.get_width(),
surface.get_height(), GL.GL_RGBA, GL.GL_UNSIGNED_BYTE, textureData)
and replaced the GL_TEXTURE_MIN_FILTER line with this:
GL.glTexParameterf(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER,
GL.GL_LINEAR_MIPMAP_LINEAR)
and now I get 333FPS! What the heck?
>
> Realize that isn't all that much help, but this is looking like a
> system/config issue. Good luck,
> Mike
>
> --
> ________________________________________________
> Mike C. Fletcher
> Designer, VR Plumber, Coder
> http://www.vrplumber.com
> http://blog.vrplumber.com
>
> ------------------------------------------------------------------------------
>
>
> _______________________________________________
> PyOpenGL Homepage
> http://pyopengl.sourceforge.net
> _______________________________________________
> PyOpenGL-Users mailing list
> PyOpenGL-Users@...
> https://lists.sourceforge.net/lists/listinfo/pyopengl-users
>
>

On 10-05-17 11:16 AM, Alejandro Segovia wrote:
> Hello Hoy,
>
> On Tue, May 11, 2010 at 12:01 PM, Jackson Hoy Loper
> <nbspcorp@...> wrote:
>
> It seems like the functions are there --
>
> In [7]: OpenGL.platform.PLATFORM.GL.glGenBuffers
> Out[7]: <_FuncPtr object at 0x10580b600>
>
> but pyopengl doesn't see it?
>
> In [8]: bool(OpenGL.GL.glGenBuffers)
> Out[8]: False
>
> Is my install broken? Is this just not supported? Running OSX
> 10.6.3, same results on System Python or on macports python. Pyopengl
> installed via easy_install.
>
>
> Have you been able to get this to work? We've seen cases before where
> the functions can't be found because of trying to access them before
> creating an OpenGL Context.
>
> Alejandro.-
This would also be my first assumption. Unix-based engines are getting
far more picky these days about having the context before you check for
an extension's availability.
Good luck,
Mike
--
________________________________________________
Mike C. Fletcher
Designer, VR Plumber, Coder
http://www.vrplumber.com
http://blog.vrplumber.com

On 10-05-22 01:24 PM, Derakon wrote:
> I'm including the mailing list again, because at this point it looks
> pretty clear that there's something wrong with my PyOpenGL install,
> and I have no idea how to figure out what it could be.
I'd tend to agree that *something* is wrong with your installation,
either PyOpenGL or the OpenGL driver. My laptop gets 1000+fps on the
test3 script on a Radeon Mobile HD 3650 under Kubuntu Lucid (64-bit)
using bzr head of PyOpenGL on Python 2.6.5. The test2 script gets
990+fps on the same machine.
Your hardware is a generation older than mine, with roughly equal pixel-fill
bandwidth and about half the texture-fill bandwidth, so we'd expect to see around
500fps for the same texture-fill-rate-limited code. You're 30x slower
than that, so yeah, something isn't configured properly.
> My PyOpenGL install was created by downloading the PyOpenGL and
> PyOpenGL-accelerate packages from
> http://pyopengl.sourceforge.net/documentation/installation.html and
> doing "python setup.py install". The only problem I ran into there was
> that PyOpenGL-accelerate was trying to pass -Wno-long-doubles to gcc,
> which didn't recognize it as a valid commandline option. I told it to
> use gcc 4.0 instead of gcc 4.2 and it built without complaints.
>
Hmm, sounds like Cython's distutils extension might need to be updated
on that system.
> If I remove the GL_TEXTURE_MIN_FILTER line then I get an absurdly fast
> (830FPS) set of white rectangles. As I understand it, doing this
> causes OpenGL to assume that I'm going to provide mipmaps for the
> texture, and since I don't it defaults to white. Which is, apparently,
> much easier to draw than the textured quads.
>
It is certainly much easier, even software rendering could handle that
without blinking (which I'm guessing is what's happening with your system).
> If I switch to RGB instead of RGBA and turn off blending, then it's still slow.
>
> Switching from an 800x600 window to a 400x300 window gets me 104FPS;
> likewise, switching to a 1600x1200 window gets me 3FPS.
>
> Here's the output of running PyOpenGL's performance test (from
> tests/performance.py); I have no idea how to interpret it.
>
You expect results on the order of a handful of mega-triangles per
second on reasonable hardware with the middle array sizes (basically
there's a sweet-spot where you're maxing out your hardware's
capabilities per-call), on my machine the values for 16,000 and 32,000
are about 9 MTris.
mcfletch@...:~/OpenGL-dev/OpenGL-ctypes/tests$ python performance.py
Count: 256 Total Time for 100 iterations: 0.0823800563812 MTri/s:
0.310754824949
Count: 512 Total Time for 100 iterations: 0.0618591308594 MTri/s:
0.82768702516
Count: 1024 Total Time for 100 iterations: 0.0667719841003 MTri/s:
1.53357731359
Count: 2048 Total Time for 100 iterations: 0.0698039531708 MTri/s:
2.933931256
Count: 4096 Total Time for 100 iterations: 0.0796709060669 MTri/s:
5.14114901186
Count: 8192 Total Time for 100 iterations: 0.113540887833 MTri/s:
7.21502196819
Count: 16384 Total Time for 100 iterations: 0.1773250103 MTri/s:
9.23953139623
Count: 32768 Total Time for 100 iterations: 0.360249996185 MTri/s:
9.09590571741
Count: 65536 Total Time for 100 iterations: 0.921277046204 MTri/s:
7.1136039121
Count: 131072 Total Time for 100 iterations: 2.15656399727 MTri/s:
6.07781638597
Count: 262144 Total Time for 100 iterations: 5.70130205154 MTri/s:
4.59796722977
The values you are seeing are extremely small, and would indicate that
your hardware isn't being used properly. This may be a PyOpenGL issue,
given that you are seeing such slow performance in all PyOpenGL scripts
tested so far, but I don't intuitively see what it would be.
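For readers interpreting the table: the MTri/s column appears to be simply count * iterations / time, in millions (an inference from the numbers above, not a reading of tests/performance.py):

```python
def mtris_per_second(count, iterations, total_time):
    """Millions of triangles per second: `count` triangles drawn per
    iteration, over `iterations` iterations, in `total_time` seconds."""
    return count * iterations / total_time / 1e6

# Reproduces the first row of the output above:
mtris_per_second(256, 100, 0.0823800563812)  # ~0.3108 MTri/s
```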
To start debugging:
* does disabling OpenGL_accelerate change your performance (on my
machine there is no difference, which suggests that
OpenGL_accelerate isn't likely to be your problem)
o import OpenGL
OpenGL.USE_ACCELERATE = False
* confirm that your machine is using direct rendering (i.e. actually
using your hardware driver, not a software renderer)
o on Linux: glxinfo | grep direct
* confirm that non-Python OpenGL programs are *currently* running
reasonably well on this machine
* confirm that you are not using an OpenGL compositing desktop (e.g.
compiz on Linux) which may cause indirect rendering of OpenGL windows
* confirm that you do not have system-level anti-aliasing settings
enabled (i.e. a 4x or 8x antialiasing specified in ATIs control panel)
* try generating mipmaps and using mipmap-nearest (just for kicks)
Realize that isn't all that much help, but this is looking like a
system/config issue. Good luck,
Mike
--
________________________________________________
Mike C. Fletcher
Designer, VR Plumber, Coder
http://www.vrplumber.com
http://blog.vrplumber.com

I'm including the mailing list again, because at this point it looks
pretty clear that there's something wrong with my PyOpenGL install,
and I have no idea how to figure out what it could be. To recap, we've
tried display lists and VBOs now, and can verify that the same script
running on my machine is slow, while running on Ian's machine it is
fast. The script can be downloaded from here:
http://derakon.dyndns.org/~chriswei/temp/test3.py
The image being used as a texture is here:
https://jetblade.googlecode.com/hg/data/sprites/terrain/jungle/grass/blocks/allway/1.png
The task being performed (drawing 400 textured quads) does not require
significant computing power, so hardware should not be an issue (I
have a Radeon X1600 with 256MB of RAM, which is entirely capable of
playing modern games).
My PyOpenGL install was created by downloading the PyOpenGL and
PyOpenGL-accelerate packages from
http://pyopengl.sourceforge.net/documentation/installation.html and
doing "python setup.py install". The only problem I ran into there was
that PyOpenGL-accelerate was trying to pass -Wno-long-doubles to gcc,
which didn't recognize it as a valid commandline option. I told it to
use gcc 4.0 instead of gcc 4.2 and it built without complaints.
I thought perhaps the fact that I'm still using Python 2.5 could have
been the problem, so I installed numpy/PyOpenGL/PyOpenGL_accelerate
with Python 2.6 using those same downloads, and that's also slow. I
thought maybe the easy_install instructions could generate a different
install, so I tried those, and it's still slow.
If I remove the GL_TEXTURE_MIN_FILTER line then I get an absurdly fast
(830FPS) set of white rectangles. As I understand it, doing this
causes OpenGL to assume that I'm going to provide mipmaps for the
texture, and since I don't it defaults to white. Which is, apparently,
much easier to draw than the textured quads.
If I switch to RGB instead of RGBA and turn off blending, then it's still slow.
Switching from an 800x600 window to a 400x300 window gets me 104FPS;
likewise, switching to a 1600x1200 window gets me 3FPS.
Here's the output of running PyOpenGL's performance test (from
tests/performance.py); I have no idea how to interpret it.
Count: 256 Total Time for 100 iterations: 0.00929379463196 MTri/s:
2.75452611272
Count: 512 Total Time for 100 iterations: 0.00796604156494 MTri/s:
6.42728255717
Count: 1024 Total Time for 100 iterations: 0.00892996788025 MTri/s:
11.4670065305
Count: 2048 Total Time for 100 iterations: 0.0114290714264 MTri/s:
17.9192160377
Count: 4096 Total Time for 100 iterations: 0.0237309932709 MTri/s:
17.2601287828
Count: 8192 Total Time for 100 iterations: 0.0263659954071 MTri/s: 31.070323246
Count: 16384 Total Time for 100 iterations: 0.0477550029755 MTri/s:
34.3084472394
Count: 32768 Total Time for 100 iterations: 0.0931870937347 MTri/s:
35.1636677213
Count: 65536 Total Time for 100 iterations: 0.188548088074 MTri/s: 34.758241608
Count: 131072 Total Time for 100 iterations: 0.3538210392 MTri/s: 37.0447162488
Count: 262144 Total Time for 100 iterations: 0.695466995239 MTri/s:
37.6932337256
Any ideas? Any additional information I could provide? I'd love to get
this sorted out, as there are things I want to do in this project that
SDL really isn't capable of doing in a remotely timely manner.
Thanks in advance!
-Chris
On Sat, May 22, 2010 at 8:44 AM, Ian Mallett <geometrian@...> wrote:
> Hi,
>
> Well, modern games don't always use display lists. Although display lists
> are easy, they're not *technically* allowed. They're depreciated, but we
> wouldn't be expecting computers to be losing support for them for at least
> another 5 to 10 years. Maybe ATI is jumping ahead, just to be annoying.
>
> VBOs and vertex arrays are supposed to be supported everywhere. For your
> convenience, attached is my VBO version of your code (requires NumPy). If
> anything should make it fast, this should.
>
> Ian
>

Hi,
Well, your texture loading isn't going to work properly. You need to
generate texture IDs for each texture and then bind to that. Currently,
you're loading all the images sequentially into the same texture (so the last
image will be the one displayed, if it works at all).
I've modified the code:
-Without the texturing, the code runs 430 to 450 fps for me, which is about
what it should be.
-With texturing, the code runs at 150-190 fps, again, about what it should
be.
-After some deliberation, I've found what might be your problem: you're
rebuilding the list every frame. "makeNewList(...)" should not be called
inside your loop at all. Essentially, what display lists do is allocate
memory for the geometry, transfer the geometry, then store everything as
machine code for optimized delivery. This isn't exactly fast to do, and
doing it every frame is going to be slower than just drawing the thing
directly; it's designed to be fast later (when you call glCallList(...)).
Bottom lines: use texture IDs, and don't put display-list-building operations
in the main loop.
Ian

Thanks for the advice. I gave it a shot, and while it's an
improvement...it's one of 4FPS, from 15 to 19. So clearly something is
still wrong. I've uploaded the new script here:
http://derakon.dyndns.org/~chriswei/temp/gltest2.py
I turned off the texture cycling because it was just distracting from
the matter at hand, so the program now just creates the display list
and then loops, drawing it, as the camera moves about (I note that my
framerate is much better when the tiles are further away from the
camera). Any other ideas?
-Chris
On Wed, May 19, 2010 at 10:51 PM, Ian Mallett <geometrian@...> wrote:
> Hi,
>
> Two things are making your code slow that I notice immediately:
> 1) You're using a Python loop to do 400 operations. That's not going to be
> terribly fast.
> 2) More importantly, you're using fixed functionality to draw 400 polygons.
>
> You can fix both problems by using a display list, vertex array, or vertex
> buffer object. I do not recommend the latter two, as they are more
> difficult (although more flexible, and also not technically deprecated).
>
> To use display list rendering, simply bracket the drawing code (that's
> everything including the texture binding, the glBegin(...), the loops, and
> the glEnd()) as follows:
>
> display_list = glGenLists(1)
> glNewList(display_list,GL_COMPILE)
> ...
> #draw your stuff here
> ...
> glEndList()
>
> ...and drop the whole thing outside of your main loop (put it with
> initialization). Then, to render the display list:
> glCallList(display_list)
>
> ...and your polygons will magically be drawn. And much faster too!
>
> Another tip: disable vsync to get framerates faster than 60Hz. Simply do
> the following before creating the window:
> pygame.display.gl_set_attribute(GL_SWAP_CONTROL,0)
>
> Hope this helps, good luck, and welcome to PyOpenGL.
> Ian Mallett
>

Hi,
Two things are making your code slow that I notice immediately:
1) You're using a Python loop to do 400 operations. That's not going to be
terribly fast.
2) More importantly, you're using fixed functionality to draw 400 polygons.
You can fix both problems by using a display list, vertex array, or vertex
buffer object. I do not recommend the latter two, as they are more
difficult (although more flexible, and also not technically deprecated).
To use display list rendering, simply bracket the drawing code (that's
everything including the texture binding, the glBegin(...), the loops, and
the glEnd()) as follows:
display_list = glGenLists(1)
glNewList(display_list,GL_COMPILE)
...
#draw your stuff here
...
glEndList()
...and drop the whole thing *outside* of your main loop (put it with
initialization). Then, to render the display list:
glCallList(display_list)
...and your polygons will magically be drawn. And much faster too!
Another tip: disable vsync to get framerates faster than 60Hz. Simply do
the following before creating the window:
pygame.display.gl_set_attribute(GL_SWAP_CONTROL,0)
Hope this helps, good luck, and welcome to PyOpenGL.
Ian Mallett

I'm looking into replacing the SDL-based rendering in my game with
OpenGL-based rendering, so I downloaded the NeHe OpenGL port (
http://www.pygame.org/gamelets/games/nehe1-10.zip ) and started
tweaking the lessons to suit my own purposes. I modified one script to
draw a 20x20 array of textured quads, with the texture just being a
100x100 PNG with alpha. This script, which isn't doing anything
special that I can see, is giving me a whopping 15FPS, which seems
horribly slow to me. Unfortunately, cProfile isn't telling me anything
useful (at least as far as I can tell). I don't suppose anyone here
could take a look at the script and tell me if I've being boneheaded
somehow? I've put it online here:
http://derakon.dyndns.org/~chriswei/temp/gltest.py
and the textures I'm using are here:
http://derakon.dyndns.org/~chriswei/temp/allway
Any suggestions would be appreciated. I'm a relative newbie when it
comes to OpenGL.
-Chris

Hello Robert,
I do not claim to be a master either, however, I did have a hard time
setting up VBOs in PyOpenGL without using the provided VBO class.
In the end, what worked for me was to convert my arrays to numpy.array and
then use PyOpenGL's ArrayDatatype class to pass the data in as a
void*.
Maybe something similar could work for you:
from OpenGL.GL import *
from OpenGL.arrays import ArrayDatatype as ADT
...
vertices = numpy.array(generate_vertex_list(), numpy.float32)
glEnableClientState(GL_VERTEX_ARRAY)
glVertexPointer(3, GL_FLOAT, 0, ADT.voidDataPointer(vertices))
Hope this helps!
Alejandro.-
On Sat, May 1, 2010 at 5:57 AM, Leo Hourvitz <leovitch@...> wrote:
> I don't claim to be a PyOpenGL master so YMMV, but when I did something
> similar a few years ago, I had to add a call to tostring() to prepare the
> array for PyOpenGL. I was using Numeric at the time, so the relevant code
> fragments were:
>
> self.vertexPositions = Numeric.zeros((size*3,3),Numeric.Float32)
> self._vertexPositionStr = self.vertexPositions.tostring()
> glEnableClientState(GL_VERTEX_ARRAY)
> glVertexPointer(3,GL_FLOAT,0,self._vertexPositionStr)
>
> I haven't actually used this code much in several years though.
>
> Leo
>
>
>
> On Sat, May 1, 2010 at 4:08 PM, Wakefield, Robert <rjw03002@...
> > wrote:
>
>> Hello,
>>
>> I've been using PyOpenGL to try to get faster graphics in pygame, and from
>> what I've been able to find online VBOs are the best way to optimize in my
>> case (2D sprites and tiled backgrounds). However, for some reason they and
>> the vertex arrays they're based on won't work. In even the simplest
>> example, nothing appears, while the corresponding display list or
>> glBegin/End call works without a hitch. Am I missing something, or could
>> this be a technical issue? Any other ideas as to what's wrong? The code I
>> have in the draw test is listed below (I also tried to generate/bind
>> buffers, to no effect, but I think the problem is the array):
>>
>> # shows nothing; also didn't work with GL_INT and integer types or
>> typing in the '.0' for decimal.
>> vertices = numpy.array([0,0, 0,128, 128,128], dtype=numpy.float32)
>> glEnableClientState(GL_VERTEX_ARRAY)
>> glVertexPointer(2, GL_FLOAT, 0, vertices)
>> glDrawArrays(GL_TRIANGLES, 0, 3)
>> glDisableClientState(GL_VERTEX_ARRAY)
>>
>> # this, however, works fine. The data points and mode (GL_TRIANGLES)
>> are both identical
>> glBegin(GL_TRIANGLES)
>> glVertex2f(0.0, 0.0)
>> glVertex2f(0.0, 128.0)
>> glVertex2f(128.0, 128.0)
>> glEnd()
>>
>>
>>
>
>
>
--
Alejandro Segovia Azapian
Director, Algorithmia: Visualization & Acceleration
http://web.algorithmia.net

Hello Duong,
On Mon, May 17, 2010 at 8:07 AM, Duong Dang <dang.duong@...> wrote:
> Well, the problem turned out to be not related to OpenGL.
>
> It was a math library that produces different results on Fedora (I don't
> know how); I ended up doing something like
>
> glTranslatef(nan,nan,nan)
> glRotate(0.0,nan,nan,nan)
>
> If only PyOpenGL or OpenGL raised an error or at least a warning on that :(
>
>
This might be slightly off-topic for this list, but could this be related to
the GCC version differences? I find it very strange that some code that works
fine when compiled and run under Ubuntu produces NaNs when compiled and run
under Fedora...
Alternatively, are you recompiling for each platform, or producing a static
binary and moving it from one platform to the other?
Alejandro.-
>
> On Sat, May 15, 2010 at 5:45 PM, Duong Dang <dang.duong@...> wrote:
>
>> Hi again,
>>
>> I actually tried the same code on other distros (Ubuntu with Intel GPU and
>> Gentoo with ATI GPU), my scene was rendered fine there.
>>
>> Only on the first one (Fedora 12, Nvidia Quadro FX580, proprietary
>> drivers), that I had the problem. Is there any known distro/graphics card
>> specific issues? Thanks again!
>>
>> D
>>
>>
>> On Fri, May 14, 2010 at 10:36 PM, Ian Mallett <geometrian@...>wrote:
>>
>>> Hi,
>>>
>>> This code looks fine to me.
>>>
>>> Someone else had this exact same problem, and the issue turned out to be
>>> that glClear(...) was being called every time an object was drawn--just in
>>> case, have you done anything of that sort in your source?
>>>
>>> Ian
>>>
>>
>>
--
Alejandro Segovia Azapian
Director, Algorithmia: Visualization & Acceleration
http://web.algorithmia.net

Hello Hoy,
On Tue, May 11, 2010 at 12:01 PM, Jackson Hoy Loper <nbspcorp@...> wrote:
> It seems like the functions are there --
>
> In [7]: OpenGL.platform.PLATFORM.GL.glGenBuffers
> Out[7]: <_FuncPtr object at 0x10580b600>
>
> but pyopengl doesn't see it?
>
> In [8]: bool(OpenGL.GL.glGenBuffers)
> Out[8]: False
>
> Is my install broken? Is this just not supported? Running OSX
> 10.6.3, same results on System Python or on macports python. Pyopengl
> installed via easy_install.
>
>
Have you been able to get this to work? We've seen cases before where the
functions can't be found because they were accessed before an OpenGL context
was created.
Alejandro.-
--
Alejandro Segovia Azapian
Director, Algorithmia: Visualization & Acceleration
http://web.algorithmia.net
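Alejandro's point generalizes: truth-testing an entry point such as bool(glGenBuffers) is only meaningful once a context exists. A library-agnostic sketch of the "defer the lookup until first use" pattern follows — the class name and the resolver argument are made up for illustration; in a real program the resolver would be PyOpenGL's own platform lookup:

```python
class DeferredEntryPoint:
    """Wraps an entry-point lookup so it runs on first use,
    i.e. only after the GL context has been created."""

    def __init__(self, name, resolver):
        self._name = name
        self._resolver = resolver  # e.g. a platform-specific lookup
        self._fn = None

    def __bool__(self):
        # Truth-testing triggers resolution, mirroring bool(glGenBuffers).
        return self._resolve() is not None

    def __call__(self, *args):
        fn = self._resolve()
        if fn is None:
            raise RuntimeError(
                f"{self._name} unavailable -- no GL context yet?")
        return fn(*args)

    def _resolve(self):
        # Retry on every call until the lookup finally succeeds.
        if self._fn is None:
            self._fn = self._resolver(self._name)
        return self._fn
```

With this shape, checking the same wrapper before and after "context creation" (here simulated by populating a registry) gives False then True, which is the behavior Jackson was expecting from bool(OpenGL.GL.glGenBuffers).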

Well, the problem turned out to be unrelated to OpenGL.
It was a math library that produces different results on Fedora (I don't
know how). I ended up doing something like
glTranslatef(nan, nan, nan)
glRotate(0.0, nan, nan, nan)
If only PyOpenGL or OpenGL raised an error, or at least a warning, on that :(
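Since the GL silently accepts NaNs, a cheap guard in application code catches this class of bug before the matrix is corrupted. A minimal sketch — the wrapper name is hypothetical, and in a real program the callable passed in would be glTranslatef itself:

```python
import math

def checked_translate(translate_fn, x, y, z):
    """Reject non-finite components before they reach the GL, which
    would otherwise silently corrupt the modelview matrix."""
    for name, value in (("x", x), ("y", y), ("z", z)):
        if not math.isfinite(value):
            raise ValueError(
                f"non-finite {name} passed to translate: {value!r}")
    return translate_fn(x, y, z)
```

Calling it as checked_translate(glTranslatef, x, y, z) would have raised a ValueError at the exact frame the math library produced the NaN, instead of yielding a blank scene.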
On Sat, May 15, 2010 at 5:45 PM, Duong Dang <dang.duong@...> wrote:
> Hi again,
>
> I actually tried the same code on other distros (Ubuntu with Intel GPU and
> Gentoo with ATI GPU), my scene was rendered fine there.
>
> Only on the first one (Fedora 12, Nvidia Quadro FX580, proprietary
> drivers), that I had the problem. Is there any known distro/graphics card
> specific issues? Thanks again!
>
> D
>
>
> On Fri, May 14, 2010 at 10:36 PM, Ian Mallett <geometrian@...>wrote:
>
>> Hi,
>>
>> This code looks fine to me.
>>
>> Someone else had this exact same problem, and the issue turned out to be
>> that glClear(...) was being called every time an object was drawn--just in
>> case, have you done anything of that sort in your source?
>>
>> Ian
>>
>
>

On Sat, May 15, 2010 at 1:48 PM, Roland Everaert <r.everaert@...>wrote:
> Ian,
>
> After some experiments based on what you said about the winding, I figured
> out the problem, and now I have the correct behavior. But that still doesn't
> explain why GL_CULL_FACE is doing something when using GL_LINE.
>
> Roland.
>
GL_CULL_FACE ought not to apply to GL_LINE. Hence, you're probably
experiencing unexpected behavior. When drawing with lines, my advice would
just be to turn off culling entirely.
Ian
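Toggling culling off just for a line pass is easy to get wrong if the draw code raises partway through; a small context manager keeps the state change scoped. The sketch below is deliberately library-agnostic — in a real program you would pass glIsEnabled, glEnable, and glDisable with GL_CULL_FACE; the callables here are stand-ins that only exercise the restore logic:

```python
from contextlib import contextmanager

@contextmanager
def capability_disabled(cap, is_enabled, enable, disable):
    """Disable a GL capability (e.g. GL_CULL_FACE) for the duration
    of a block, restoring the previous state afterwards -- even if
    the block raises."""
    was_on = is_enabled(cap)
    if was_on:
        disable(cap)
    try:
        yield
    finally:
        if was_on:
            enable(cap)
```

Used as `with capability_disabled(GL_CULL_FACE, glIsEnabled, glEnable, glDisable): draw_lines()`, culling is off only while the lines are drawn, so the filled-geometry passes keep their culling untouched.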

Ian,
After some experiments based on what you said about the winding, I figured
out the problem, and now I have the correct behavior. But that still doesn't
explain why GL_CULL_FACE is doing something when using GL_LINE.
Roland.
On 05/15/10 21:52, Roland Everaert wrote:
> So how to explain that a spinning cube gives me what I want (the
> vertices not seen by the camera are hidden)?
>
> The code is the same, but with GL_TRIANGLES replaced by GL_QUADS,
> the vertices and indices lists filled accordingly, and the 2nd
> argument of glDrawElements replaced by the length of the indices list.
> And there is no test on the depth.
>
>
> Thanks,
>
>
> Roland.
>
> On 05/14/10 23:36, Ian Mallett wrote:
>> Well, for one thing, you're drawing the tetrahedron in line mode.
>> GL_CULL_FACE does nothing for GL_LINE.
>>
>> You might also try culling the front faces with glCullFace(GL_FRONT). If
>> the results are then as expected, reverse the winding order of the
>> polygons (i.e., specify polygon [v1,v2,v3] as [v1,v3,v2]) and cull
>> the back as before.
>>
>> Ian

So how to explain that a spinning cube gives me what I want (the
vertices not seen by the camera are hidden)?
The code is the same, but with GL_TRIANGLES replaced by GL_QUADS,
the vertices and indices lists filled accordingly, and the 2nd
argument of glDrawElements replaced by the length of the indices list. And
there is no test on the depth.
Thanks,
Roland.
On 05/14/10 23:36, Ian Mallett wrote:
> Well, for one thing, you're drawing the tetrahedron in line mode.
> GL_CULL_FACE does nothing for GL_LINE.
>
> You might also try culling the front faces with glCullFace(GL_FRONT). If
> the results are then as expected, reverse the winding order of the
> polygons (i.e., specify polygon [v1,v2,v3] as [v1,v3,v2]) and cull the
> back as before.
>
> Ian

Hi again,
I actually tried the same code on other distros (Ubuntu with Intel GPU and
Gentoo with ATI GPU), my scene was rendered fine there.
Only on the first one (Fedora 12, Nvidia Quadro FX580, proprietary drivers)
did I have the problem. Are there any known distro- or graphics-card-specific
issues? Thanks again!
D
On Fri, May 14, 2010 at 10:36 PM, Ian Mallett <geometrian@...> wrote:
> Hi,
>
> This code looks fine to me.
>
> Someone else had this exact same problem, and the issue turned out to be
> that glClear(...) was being called every time an object was drawn--just in
> case, have you done anything of that sort in your source?
>
> Ian
>

Well, for one thing, you're drawing the tetrahedron in line mode.
GL_CULL_FACE does nothing for GL_LINE.
You might also try culling the front faces with glCullFace(GL_FRONT). If the
results are then as expected, reverse the winding order of the polygons (i.e.,
specify polygon [v1,v2,v3] as [v1,v3,v2]) and cull the back as before.
Ian
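Ian's winding flip can be automated for an indexed mesh: swapping the last two indices of every triangle reverses its winding without touching the vertex array, so the same helper works whether the data feeds glDrawElements or anything else. A small sketch (pure index manipulation, no GL required; the function name is made up):

```python
def flip_winding(indices):
    """Reverse the winding of each triangle in a flat index list:
    every [v1, v2, v3] becomes [v1, v3, v2]."""
    if len(indices) % 3:
        raise ValueError("index count must be a multiple of 3")
    out = list(indices)
    for i in range(0, len(out), 3):
        # Swapping the second and third index flips CCW <-> CW.
        out[i + 1], out[i + 2] = out[i + 2], out[i + 1]
    return out
```

If culling the back faces of flip_winding(indices) gives the expected image while the original indices only look right with glCullFace(GL_FRONT), the mesh was wound clockwise and the flipped list is the one to keep.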