Welcome to my entire point: if the stride isn't going to change in any real-world system, why is the stride not part of the vertex format?

On the other hand, what harm does it do? I personally can't see any reason to change stride either, but the functionality is now there and maybe someone will find a use for it? I don't see it as being a "wrong thing", more of an odd but ultimately inconsequential decision. Doing what D3D does can make sense in many cases - makes it easier to port from D3D to GL, after all. That's gotta be a good thing. But in this case the D3D behaviour is also odd but ultimately inconsequential. It could be worse - just be thankful that it didn't take an array of each of buffers/strides/offsets like D3D does - that's painful to use.

Quote:

Originally Posted by mhagain

I don't see it as being a "wrong thing", more of an odd but ultimately inconsequential decision.

But it's not inconsequential. It's taking something that is by all rights part of the format and putting it elsewhere. It's not broken as specified, but it's not what it should be.

It's like not being able to specify attribute indices in shaders and many other API issues with OpenGL, past and present. Yes, you can live without it, but it would clearly be better to have it done right.
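For reference, the split being complained about looks like this in application code (a sketch only: it assumes a GL 4.3 context, a bound VAO, and an existing buffer object `vbo`; the layout is an interleaved position+normal, 24 bytes per vertex):

```c
/* Format state - what the data looks like. Note: no stride here. */
glVertexAttribFormat(0, 3, GL_FLOAT, GL_FALSE, 0);   /* position, relativeoffset 0  */
glVertexAttribFormat(1, 3, GL_FLOAT, GL_FALSE, 12);  /* normal,   relativeoffset 12 */
glVertexAttribBinding(0, 0);   /* both attributes read from binding point 0 */
glVertexAttribBinding(1, 0);

/* Binding state - where the data comes from. The stride lives here,
 * next to the buffer and offset, not in the format above. */
glBindVertexBuffer(0, vbo, 0, 24);
```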

08-07-2012, 01:34 AM

mhagain

Hypothetical reason why you may wish to change the stride - skipping over vertices for an LOD scheme.

To be honest I think you're wasting too much negative energy on this. Not having attribute indices in shaders was a colossal pain in the rear-end; this is nowhere even near the same magnitude. If it's a genuine API issue that is going to cause torment to those using it, then by all means yell about it from the rooftops (I'll be right there beside you). This isn't.

08-07-2012, 07:44 AM

aqnuep

Quote:

Originally Posted by mhagain

Hypothetical reason why you may wish to change the stride - skipping over vertices for an LOD scheme.

I don't understand what the use case is here. How does stride help you "skip over vertices for an LOD scheme"? Also, skipping over vertices should be done by giving a different base index to your DrawElements calls, as you probably use indices anyway, and if you use LOD I can hardly believe that you want to use the same set of indices. Why would you? That would mean that all of your LOD levels render the same number of vertices, which defeats the purpose.
Also, if you don't want to use indices, you're probably better off passing a different first-vertex parameter to your DrawArrays calls instead of always changing the offset and/or stride of your vertex buffers.

08-07-2012, 08:59 AM

xahir

From the spec, section 2.5.10:

Quote:

Vertex array objects are container objects including references to buffer objects, and are not shared

Even with vertex formats removing buffer object references, I still need to carry vertex format info from my loading thread to main thread in order to finalize my OpenGL objects.

Quote:

Originally Posted by Alfonse Reinheart

Of course, we won't see another API cleanup and function removal round, since the last one went so well.

so this just makes me sad...

08-07-2012, 11:50 AM

aqnuep

Quote:

Originally Posted by Alfonse Reinheart

But it's not inconsequential. It's taking something that is by all rights part of the format and putting it elsewhere. It's not broken as specified, but it's not what it should be.

It's like not being able to specify attribute indices in shaders and many other API issues with OpenGL, past and present. Yes, you can live without it, but it would clearly be better to have it done right.

Agreed; not to mention that the per-attribute relativeoffset parameter is still specified for the vertex attributes themselves, and in practice these relative offsets don't make any sense unless you are also aware of the stride. Thus, again, it defeats the purpose.

08-07-2012, 01:12 PM

kRogue

I think the intended use pattern was that the format of the attribute data is unchanged, but whether or not (and how) it is interleaved with other attribute data varies. The current interface does effectively have an offset in both glBindVertexBuffer and glVertexAttrib*Format, so the issue is which use case comes up more often:

Keeping the format the same, but varying buffer sources and interleaving

OR

Using the same buffer, but varying interleave and format

What is in the spec makes the first case possible by setting only the buffer sources, whereas what some are wanting is to do the second more often.

It looks to me like the interface was made for a GL implementation that works like this:

Attribute puller has only two things: the location from which to grab data and the stride at which to grab it

Attribute interpreter converts the raw bytes from the puller for the vertex shader to consume

If a GL implementation worked like that, then I can see how a GL implementer would strongly prefer how the interface came out. Though an offset within the format setter kind of invalidates the above idea without more hackery...

08-08-2012, 02:55 AM

Dean Calver

In recent (though I'm not sure about the latest) hardware, there is no such thing as a 'vertex puller'; feeding vertex data into a shader consists of two steps: a DMA unit that moves blocks of vertex data into registers or memory closer to the shader, and then the attrib converter/loader that feeds the actual shader. The DMA unit doesn't really care what's in the vertex itself, only the address and total size of each vertex. Hopefully you can see where the interface in D3D and GL 4.3 comes from: you're effectively programming the two processes separately.
On at least one platform, when programming at a lower-level API, it was possible to leave some vertex DMA streams on even if the data wasn't used, and this could be a serious performance loss. The DMA unit would pay the bandwidth cost of retrieving data, but then nothing would actually need or use it.
It was a simple pipeline optimisation: because vertex/index data is highly predictable (it's predefined), you can use simple DMA to ensure the data is in the best place beforehand.

However, I don't believe the latest hardware uses this optimisation (I suspect they use the general cache and thread switches to achieve a similar effect), so its usefulness going forward may be doubtful...

08-08-2012, 05:32 AM

mhagain

Quote:

Originally Posted by aqnuep

I don't understand what the use case is here. How does stride help you "skip over vertices for an LOD scheme"? Also, skipping over vertices should be done by giving a different base index to your DrawElements calls, as you probably use indices anyway, and if you use LOD I can hardly believe that you want to use the same set of indices. Why would you? That would mean that all of your LOD levels render the same number of vertices, which defeats the purpose.
Also, if you don't want to use indices, you're probably better off passing a different first-vertex parameter to your DrawArrays calls instead of always changing the offset and/or stride of your vertex buffers.

This assumes that all of your VBO streams are going to be using the same stride or offset, which is not always the case. You may have a different VBO for texcoords than you have for position and normals, and you may only need to change the stride or offset for the position/normals VBO. The old API wouldn't let you do that without respecifying the full set of vertex attrib pointers; the new one lets you do it with a single BindVertexBuffer which - because stride and offset are separate state - can be much more efficient.

I really get the feeling that this is very new territory for many of you. Because you've never had this capability you don't see the advantages and flexibility of it, and need to have explained in detail what others have been successfully using for well over a decade now. There's an element of "the Americans have need of the telephone, but we do not. We have plenty of messenger boys" in that, and that's why I mentioned actually sitting down and writing some code that used it earlier on.

The sentiment that "just because it's D3D functionality doesn't mean that GL has to do it" has a counterpart: just because it's D3D functionality doesn't mean that GL can't do it either. GL is not D3D and can evolve whatever functionality is deemed appropriate; whether or not it's similar to D3D is not relevant. Opposing functionality just because it works in a similar manner to D3D is quite preposterous, to be honest.

08-08-2012, 06:02 AM

Eosie

Quote:

Originally Posted by Alfonse Reinheart

Welcome to my entire point: if the stride isn't going to change in any real-world system, why is the stride not part of the vertex format?

Even though it might not make much sense to you from a theoretical standpoint, the reason the spec was written like that is that it maps perfectly onto current hardware. There's no other reason. The stride is just part of the vertex buffer binding.