I've read the ARB_sparse_texture spec, and I noticed that the AMD_sparse_texture spec has functions to fetch from a sparse texture and return information about whether any texture data is present. Is there something similar in ARB_sparse_texture?

How does Khronos determine that some independent, non-member entity is trustworthy enough? Plus, I assume you mean not only trustworthy but also promising, in the sense that it has to be a potentially successful endeavor?

Khronos will offer certification of drivers from version 3.3, and full certification is mandatory for OpenGL 4.4 and onwards. This will help reduce differences between multiple vendors’ OpenGL drivers, resulting in enhanced portability for developers.

This is fantastic news! I'm more excited about this than any of the new core 4.4 features or extensions. Dealing with broken features debuting in drivers, spec interpretation differences, and driver regressions are a particularly unpleasant part of cross-platform OpenGL development. Working around these issues drains resources from otherwise 'useful' development. While I don't expect the situation to magically improve overnight nor make drivers perfect, this is a good start.

Is there a process for submitting conformance tests to be reviewed by the ARB? Or is this limited to ARB members?

It's a pity, as this could have been the kick up the jacksie that GL's buffer object API really needed. The issue in question should have been resolved by just saying "this is client memory, full stop; using incompatible flags generates an error; here are the flags that are incompatible, and the vendors will just have to live with it". But it seems to be another case of shooting too high and missing the basic requirement as a result.

OK, so... what is the "basic requirement?"

That's what I don't understand about this whole issue. What exactly would you like "CLIENT_STORAGE_BIT" to mean that is in any way binding? You say that certain flags would be incompatible. OK... which ones? And why would they be incompatible?

If client/server memory is a significant issue for some hardware, then that would mean something more than just "incompatible bits". If client memory exists, then why would the driver be unable to "map" it? Why would it be unable to map it for reading or writing? Or to allow it to be used while mapped or to make it coherent?

The only limitations I could think of for such memory would be functional. Not merely accessing it, but uses of it. Like an implementation that couldn't use a client buffer for transform feedback storage or image load/stores. It's not the access pattern that is the problem in those cases; it's the inability to allow them to be used as buffers in certain cases.

So the ARB could have specified that client buffer objects couldn't be used for certain things. The list of forbidden uses would have to be the union of the restrictions of all the IHVs who implement it, which would exclude any new IHVs or new hardware that comes along. They could also provide some queries so that implementations could disallow certain uses of client buffers.

But is that really something we want to encourage?

BTW, if you want to trace the etymology of CLIENT_STORAGE_BIT, it was apparently not in the original draft from January. According to the revision history (seriously ARB, use Git or something so that we can really see the revisions, not just a log. That's what version control is for), the ancestor of CLIENT_STORAGE_BIT was BUFFER_STORAGE_SERVER_BIT (i.e. the reverse of the current meaning), which was added two months ago.

Also, from reading the issue it sounds very much like they didn't really want to add it, but had to. Granted, since "they" are in charge of the extension, I have no idea why they would be forced to add something they didn't want.

But as I said before, you can just ignore the bit and the extension is fine.

How does Khronos determine that some independent, non-member entity is trustworthy enough?

Generally speaking, a member company recommends them, the affected working group talks about it and makes a recommendation to the Board of Promoters, and the BoP discusses and votes on the recommendation. Which is pretty much the way most things are decided in Khronos.

use Git or something so that we can really see the revisions, not just a log. That's what version control is for

The extension specifications are in a public part of Khronos' Subversion tree, and you can see the history of public updates after a spec has been ratified. We're not going to publish the entire history of a spec through its internal development, though.

Also, from reading the issue it sounds very much like they didn't really want to add it, but had to. Granted, since "they" are in charge of the extension, I have no idea why they would be forced to add something they didn't want.

Well, that's precisely what the problem is. It's nothing specifically to do with CLIENT_STORAGE_BIT itself; it could have been about anything. It's the introduction of more vague, woolly behaviour and more driver shenanigans, via another "one of those silly hint things".

What's grim about issue #9 is the prediction that the extension will make no difference, irrespective of whether or not the bit is used:

In practice, applications will still get it wrong (like setting it all the time or never setting it at all, for example), implementations will still have to second guess applications and end up full of heuristics to figure out where to put data and gobs of code to move things around based on what applications do, and eventually it'll make no difference whether applications set it or not.

It seems to me that if behaviour can't be specified precisely, then it's better off not being specified at all. I've no particular desire for CLIENT_STORAGE_BIT to mean that the buffer storage is allocated in client memory; that's irrelevant. I have a desire for specified functionality to mean something specific, and to put an end to the merry-go-round of "well it doesn't matter what hints you set, the driver's just going to do its own thing anyway". If that's going to be the way things are then why even have usage bits at all? That's not specification, that's throwing chicken bones in the air.

What's grim about issue #9 is the prediction that the extension will make no difference, irrespective of whether or not the bit is used:

That section said "set it", referring to the bit. Not to all of the flags, just CLIENT_STORAGE_BIT.

I have a desire for specified functionality to mean something specific, and to put an end to the merry-go-round of "well it doesn't matter what hints you set, the driver's just going to do its own thing anyway".

Ultimately, drivers are going to have to pick where these buffers go. The point of this extension is to allow the user to provide sufficient information for drivers to know how the user is going to use that buffer. And, unlike the hints, these represent binding contracts that the user cannot violate.

Drivers are always "going to do its own thing anyway." A driver could stick them all in GPU memory, or all of them in client memory, or whatever, and still be functional. But by allowing the user to specify access patterns up front, and then enforcing those access patterns, the driver has sufficient information to decide up front where to put it.

The only way to get rid of any driver heuristics is to just name memory pools and tell the user to pick one. And that's just not going to happen. OpenGL is not D3D, and buffer objects will never work that way. OpenGL must be more flexible than that.
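To make the "flags as placement information" point concrete, here's a toy sketch of how the storage flags could feed an up-front placement decision. The pool names and the heuristic itself are invented for illustration (real placement logic is driver-internal); only the bit values are taken from the GL 4.4 / ARB_buffer_storage headers.

```c
#include <assert.h>

/* Storage-flag values as defined in ARB_buffer_storage / GL 4.4. */
#define GL_MAP_READ_BIT        0x0001
#define GL_MAP_WRITE_BIT       0x0002
#define GL_MAP_PERSISTENT_BIT  0x0040
#define GL_MAP_COHERENT_BIT    0x0080
#define GL_DYNAMIC_STORAGE_BIT 0x0100
#define GL_CLIENT_STORAGE_BIT  0x0200

/* Hypothetical memory pools a driver might choose between. */
typedef enum { POOL_DEVICE, POOL_HOST_VISIBLE, POOL_HOST_CACHED } pool_t;

/* Hypothetical placement heuristic: because the flags are a binding
   contract, the driver can act on them at allocation time instead of
   watching usage and shuffling the buffer around afterwards. */
static pool_t choose_pool(unsigned flags)
{
    if (flags & GL_MAP_READ_BIT)            /* CPU will read back */
        return POOL_HOST_CACHED;
    if (flags & (GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT))
        return POOL_HOST_VISIBLE;           /* CPU will write via mapping */
    return POOL_DEVICE;                     /* otherwise keep it on the GPU */
}
```

The point isn't that any driver does exactly this; it's that the decision can be made once, up front, from declared and enforced usage rather than guessed from observed behaviour.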

That section said "set it", referring to the bit. Not to all of the flags, just CLIENT_STORAGE_BIT.

It also said "or not". So take the situation where you don't set CLIENT_STORAGE_BIT and explain how that text doesn't apply.

Originally Posted by Alfonse Reinheart

Ultimately, drivers are going to have to pick where these buffers go. The point of this extension is to allow the user to provide sufficient information for drivers to know how the user is going to use that buffer. And, unlike the hints, these represent binding contracts that the user cannot violate.

Drivers are always "going to do its own thing anyway." A driver could stick them all in GPU memory, or all of them in client memory, or whatever, and still be functional. But by allowing the user to specify access patterns up front, and then enforcing those access patterns, the driver has sufficient information to decide up front where to put it.

The only way to get rid of any driver heuristics is to just name memory pools and tell the user to pick one. And that's just not going to happen. OpenGL is not D3D, and buffer objects will never work that way. OpenGL must be more flexible than that.

Again, I'm not talking about CLIENT_STORAGE_BIT specifically; I'm talking about specification vagueness and woolliness in general. You say that "OpenGL is not D3D", yet D3D 10+ (which, by the way, doesn't have memory pools; it has usage indicators just like ARB_buffer_storage) has no problem whatsoever specifying explicit behaviour while working on a wide range of hardware. This isn't theory, this is something that's already out there and proven to work, and "OpenGL must be more flexible than that" just doesn't cut it as an excuse.

Referring specifically to CLIENT_STORAGE_BIT now, go back and read the stated intention of this extension:

If an implementation is aware of a buffer's immutability, it may be able to make certain assumptions or apply particular optimizations in order to increase performance or reliability. Furthermore, this extension allows applications to pass additional information about a requested allocation to the implementation which it may use to select memory heaps, caching behavior or allocation strategies.

Now go back and read issue #9:

In practice, applications will still get it wrong (like setting it all the time or never setting it at all, for example), implementations will still have to second guess applications and end up full of heuristics to figure out where to put data and gobs of code to move things around based on what applications do, and eventually it'll make no difference whether applications set it or not.

Realise that it's being predicted to not make the blindest bit of difference even if applications don't set CLIENT_STORAGE_BIT.

This extension would have been great if CLIENT_STORAGE_BIT was more strictly specified.
This extension would have been great if CLIENT_STORAGE_BIT was not specified at all.

Right now the best case is that implementations will just ignore CLIENT_STORAGE_BIT and act as if it never even existed; MAP_READ_BIT | MAP_WRITE_BIT seem enough to clue the driver in on what you want to do with the buffer. The worst case is that we have an exciting new way of specifying buffers that does nothing to resolve a major problem with the old way.

Realise that it's being predicted to not make the blindest bit of difference even if applications don't set CLIENT_STORAGE_BIT.

You're really blowing this way out of proportion.

The mere existence of the bit changes nothing about how the implementation will handle the rest, because it changes nothing about any of the other behavior that is specified. If you say that you won't upload to the buffer, by not setting the DYNAMIC bit, then you cannot upload to it. If you don't say that you will map it for writing, you can't. If you don't say that you will map the buffer while it is in use, you can't.

All of that information still exists, is reliable, and is based on an API-enforced contract. Therefore, implementations can still make accurate decisions based on it.
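As an illustration of that contract, here's a toy model of the checks the extension requires. The helper names are hypothetical; the flag and error values, and the rules themselves (BufferSubData on an immutable store without the dynamic bit is an INVALID_OPERATION error, as is mapping for an access the storage flags didn't declare), come from ARB_buffer_storage.

```c
#include <assert.h>

#define GL_MAP_READ_BIT        0x0001
#define GL_MAP_WRITE_BIT       0x0002
#define GL_DYNAMIC_STORAGE_BIT 0x0100

#define GL_NO_ERROR            0
#define GL_INVALID_OPERATION   0x0502

/* Toy model of the glBufferSubData rule: an immutable store may only
   be updated from the CPU if DYNAMIC_STORAGE_BIT was declared. */
static int check_subdata(unsigned storage_flags)
{
    return (storage_flags & GL_DYNAMIC_STORAGE_BIT)
               ? GL_NO_ERROR : GL_INVALID_OPERATION;
}

/* Toy model of the glMapBufferRange rule: requesting read or write
   access that the storage flags did not declare is an error. */
static int check_map(unsigned storage_flags, unsigned access)
{
    unsigned undeclared = access & ~storage_flags
                        & (GL_MAP_READ_BIT | GL_MAP_WRITE_BIT);
    return undeclared ? GL_INVALID_OPERATION : GL_NO_ERROR;
}
```

This is what makes the flags different from the old hints: a violated declaration is an error at the API level, not a silently wrong guess.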

Worst case is that we've an exciting new way of specifying buffers that does nothing to resolve a major problem with the old way.

Um, how?

The fundamental problem with the current method is that the hints you provide are not guaranteed usage patterns. The API can't stop you from using them the wrong way, nor can the documentation explain the right access pattern for the hints. Therefore, those hints will frequently be misused. Since they are misused, driver developers cannot rely upon them to be accurate. So driver developers are forced to completely ignore them and simply watch how you use the buffer, shuffling it around until they figure out a place for it.

With the exception of CLIENT_STORAGE_BIT, all of the hints are enforced by the API. You cannot use them wrong. Therefore they represent real, actionable information about how you intend to use the buffer. Information that driver developers can use when wanting to allocate the storage for it.

The mere existence of CLIENT_STORAGE_BIT changes nothing at all about how useful the other bits are. The discussion in Issue 9 is specifically about those cases where the other usage bits alone cannot decide between different memory stores.

And, as far as the DX10 comparisons go, I checked the DX10 API. The only functional difference between these two is that CLIENT_STORAGE_BIT exists in GL (that, and the GL version gives you more options, such as using the GPU to update non-dynamic buffers). So why should I believe that the mere existence of an option suddenly turns an API that is functionally equivalent to DX10 into the wild west of current OpenGL buffer objects?

Or let me put it another way. If the DX10's usage and access flags are sufficient information to place buffer objects in memory, why are the current set of bits provided by this extension not equally sufficient for this task? And if those bits are not sufficient, then there must already exist "heuristics to figure out where to put data and gobs of code to move things around based on what applications do" in D3D applications, so why would that code not apply equally well to OpenGL implementations?
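For reference, a rough sketch of the correspondence being argued here. This is illustrative only: the usage enum values come from d3d10.h and the bit values from the GL 4.4 headers, but the mapping itself is my reading of the two APIs, not something either spec states.

```c
#include <assert.h>

#define GL_MAP_READ_BIT        0x0001
#define GL_MAP_WRITE_BIT       0x0002
#define GL_DYNAMIC_STORAGE_BIT 0x0100

/* D3D10_USAGE values, as in d3d10.h. */
enum { D3D10_USAGE_DEFAULT = 0, D3D10_USAGE_IMMUTABLE = 1,
       D3D10_USAGE_DYNAMIC = 2, D3D10_USAGE_STAGING = 3 };

/* Hypothetical translation: roughly which ARB_buffer_storage flags
   express the same access pattern as each D3D10 usage. */
static unsigned gl_flags_for_d3d10_usage(int usage)
{
    switch (usage) {
    case D3D10_USAGE_IMMUTABLE: return 0;                  /* written once at creation */
    case D3D10_USAGE_DYNAMIC:   return GL_MAP_WRITE_BIT;   /* CPU writes via mapping */
    case D3D10_USAGE_STAGING:   return GL_MAP_READ_BIT | GL_MAP_WRITE_BIT;
    default:                    return GL_DYNAMIC_STORAGE_BIT; /* DEFAULT: updatable copies */
    }
}
```

If a D3D10 driver can place buffers from this much information, the argument goes, a GL driver has at least as much to work with.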

I think you're really blowing that issue way out of proportion.

Is it possible for implementations to just ignore all of these bits (outside of enforcing the contract) and rely entirely on heuristics? Absolutely. But the information is there and it is reliable. So why would they? Just because there's one bit that may not be reliable?