VBOs slow?

Okay, so I've been trying to find the cause of a reported slowdown in something I work on. I'm a hobby programmer writing a faster, better engine for a Half-Life mod, and I mostly do my programming on WON Half-Life, the ancient retail version of the engine, because unlike the Steam version it lets me debug the code. There, performance is usually good; even when I enter a heavy area, the lowest I get is 37 fps with 3 render passes.

However, the Steam version of Half-Life runs ungodly slow. After a long session of profiling and cutting out bits of the code, I've come to the conclusion that it's rendering from VBOs that's slow. I store all of my rendered geometry in a single large VBO as simple triangles: the world, the models, and the particles. I run everything through ARB shaders, but toggling these makes absolutely no difference in performance.

But when I render on Steam HL without VBOs, in plain immediate mode, performance is at the absolute top, whereas rendering the same geometry from VBOs brings it down to around 40 fps, and that's not even the worst of it. In one scene where the WON version gives me 75+ fps, the Steam version gives me 28 fps at best.

So yeah, I'm digging around trying to find the cause, but I figured I might as well ask for some help on the matter: what on earth could possibly cause such a grave performance difference between the two platforms? At the moment I'm totally clueless. What could this platform be doing that completely ruins VBO performance?

Thanks.

Edit:
I've found the problem. After a very long session of digging through code with gDEBugger, I guessed that Steam HL, unlike WON HL, somehow used classic vertex arrays, while I was using generic vertex attribs. That guess eventually turned out to be true: a client state left enabled was crippling performance. You can disregard this.
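For anyone hitting the same wall, a hedged sketch of what the fix amounts to: making sure the fixed-function client states are disabled before drawing through generic vertex attributes. The function names are standard OpenGL, but the attribute index, stride, and helper name are assumptions for illustration, and this requires a live OpenGL 2.0+ context.

```c
/* Sketch only: draw a VBO through generic vertex attributes, making sure
 * no fixed-function client state is left enabled alongside them (that mix
 * was what pushed the driver onto a slow path). Needs an active GL 2.0+
 * context; attribute index 0 and the tightly packed layout are assumed. */
#include <GL/gl.h>
#include <GL/glext.h> /* glBindBuffer etc. on platforms that need it */

static void draw_vbo(GLuint vbo, GLsizei vertex_count)
{
    /* The culprit in my case: one of these was still enabled. */
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_COLOR_ARRAY);

    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE,
                          3 * sizeof(GLfloat), (const void *)0);
    glDrawArrays(GL_TRIANGLES, 0, vertex_count);
    glDisableVertexAttribArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}
```

The general lesson seems to be: pick one vertex submission path, either classic vertex arrays or generic attributes, and never leave state from the other one enabled while drawing.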


My OpenGL display program consumes a lot of CPU. Since I have divided the screen into small rectangles, a great many vertices are created. To reduce the CPU usage, I was advised to use a VBO for the vertex and color arrays, so the CPU would not have to send all the vertices and colors to the GPU every frame. Consequently, I was told this would reduce CPU usage.
After reading your post, I am now dubious whether this will really help reduce CPU usage. What do you think?
I should also mention that I only render the screen every 100 ms; I cannot do it more often, as it slows down the system.
Besides, I also have problems implementing the vertex and color arrays with a VBO; if you are experienced, I would be thankful if you could solve the problem: "vertex buffer object for vertices&colors"
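On the vertices-and-colors question, a minimal sketch of one common approach: a single interleaved VBO uploaded once with glBufferData, then drawn each frame, so the CPU no longer resubmits every vertex. The struct layout and function names here are assumptions for illustration, and an active OpenGL 1.5+ context is required.

```c
/* Sketch only: one interleaved VBO holding position (3 floats) and color
 * (3 floats) per vertex, uploaded once and drawn via classic vertex
 * arrays. Assumes an active OpenGL 1.5+ context. */
#include <GL/gl.h>
#include <GL/glext.h> /* buffer-object entry points on platforms that need it */

typedef struct { GLfloat x, y, z, r, g, b; } Vertex;

static GLuint create_vbo(const Vertex *verts, GLsizei count)
{
    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    /* GL_STATIC_DRAW: upload once, draw many times */
    glBufferData(GL_ARRAY_BUFFER, count * sizeof(Vertex), verts, GL_STATIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    return vbo;
}

static void draw_vbo(GLuint vbo, GLsizei count)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    /* Offsets are byte offsets into the bound buffer, not pointers. */
    glVertexPointer(3, GL_FLOAT, sizeof(Vertex), (const void *)0);
    glColorPointer(3, GL_FLOAT, sizeof(Vertex),
                   (const void *)(3 * sizeof(GLfloat)));
    glDrawArrays(GL_TRIANGLES, 0, count);
    glDisableClientState(GL_COLOR_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}
```

Whether this reduces CPU usage depends on how the data was submitted before: if the program was sending every vertex in immediate mode each frame, moving static geometry into a VBO should cut the per-frame CPU work considerably; if the vertex data changes every frame, the upload cost remains.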

VBOs are meant to be the native, most efficient way to provide vertex data to draw calls. Anything that contradicts this is the result of either poor application behavior or poor driver behavior.

Disclaimer: This is my personal profile. Whatever I write here is my personal opinion; none of my statements or speculations are in any way related to my employer, they should not be treated as accurate or valid, and in no case should they be considered to represent the opinions of my employer.
Technical Blog: http://www.rastergrid.com/blog/