Re: buffer_storage

Why does glTexStorage make the texture immutable? Preventing the programmer from making mistakes is pretty much the entire point of making glTexStorage textures immutable.

Good try, but not good enough.
Preventing the programmer from making mistakes is not the purpose of glTexStorage at all.
Its purpose is to relieve the driver of the burden I explained, which is unavoidable otherwise.

Because it makes it impossible to change the usage flags later. If you can only call glBufferStorage one time, then the implementation knows that this object name will be forever associated with a block of memory of known size and will always have a particular usage pattern. The driver doesn't have to be written to deal with changing this usage pattern in the future.

When you call glBufferData a second time on the same object, you effectively destroy the old one and create a new one.
Why shouldn't you be allowed to specify new parameters for the new object? It doesn't make any sense.

The only difference between calling glBufferData on an existing object and first destroying it and then creating a new one is that in the first case you keep the same object id. Do you really think this alone has any overhead or is in any way problematic for the driver?
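To make the claimed equivalence concrete, the two paths can be sketched like this (illustrative only; buf, newSize, and the usage hint are placeholders):

```c
/* Path 1: respecify the data store on the same object name */
glBindBuffer(GL_ARRAY_BUFFER, buf);
glBufferData(GL_ARRAY_BUFFER, newSize, NULL, GL_STREAM_DRAW); /* old store is gone */

/* Path 2: destroy the object and create a fresh one, semantically
   the same as path 1 except that the object id changes */
glDeleteBuffers(1, &buf);
glGenBuffers(1, &buf);
glBindBuffer(GL_ARRAY_BUFFER, buf);
glBufferData(GL_ARRAY_BUFFER, newSize, NULL, GL_STREAM_DRAW);
```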

The fundamental difference between glBufferData and glTexImage is that the first semantically destroys the old object and creates a new one, whereas the second is a much more complicated function that makes the lives of driver writers harder.
This difference is why the considerations leading to the need for glTexStorage do not apply in the case of buffers.

Re: buffer_storage

l_belev is right. buffer_storage would be useless, because the buffer API has none of the problems of the texture API, where not just mistakes but pretty common cases are painful for drivers. The real reason for texture_storage is to address the total awkwardness of OpenGL texture allocation. I'll give you an example:
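A sketch of the kind of call sequence being described (dimensions and data pointers are made up for illustration):

```c
/* Specify only mip level 0; the driver allocates storage for one level */
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 4, 4, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, data0);
/* Now specify level 1; the driver suddenly needs room for more levels */
glTexImage2D(GL_TEXTURE_2D, 1, GL_RGBA8, 2, 2, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, data1);
```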

Ouch! I have to reallocate my texture in VRAM now, because I only allocated space for level 0, and then copy the content of level 0 to its new location. Such a waste. Wait! The texture is incomplete now anyway, so I can defer the reallocation in case the user specifies the other mipmaps. In the meantime, I can hold a temporary copy of data1 in RAM.
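Suppose the user then supplies the final mip level (again a sketch; the dimensions and data pointer are placeholders):

```c
/* Level 2 arrives; a 4x4 texture with 3 levels is now mipmap-complete */
glTexImage2D(GL_TEXTURE_2D, 2, GL_RGBA8, 1, 1, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, data2);
```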

The texture is complete now, should we finally do the reallocation? Better not. We can wait until the texture is used for the first time and do it then.

Later in glDrawElements:
- Allocate a texture with 3 mipmaps.
- Copy level 0 from the first texture we allocated.
- Deallocate the first texture.
- Copy levels 1 and 2 from RAM.

Now you can see there is something very, very wrong with OpenGL. Why do things have to be so complicated for drivers? There is no way to know how much storage is needed for a texture before it's used for the first time. You can only guess until then.

ARB_texture_storage fixes this awkwardness by adding a way to allocate the storage in memory first. The immutability itself is not very useful, but the ARB somehow had to disallow the glTexImage calls which cause so much pain, and adding the immutability rule sounds like a good compromise. The extension pretty much ensures that texture specification is as fast as possible without unnecessary reallocations and temporary copies in RAM. So use it, love it.
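With ARB_texture_storage, the same texture can be specified without any guesswork on the driver's part (a sketch; the texture name, sizes, and data pointers are placeholders):

```c
/* Allocate immutable storage for all 3 mip levels in one call */
glBindTexture(GL_TEXTURE_2D, tex);
glTexStorage2D(GL_TEXTURE_2D, 3, GL_RGBA8, 4, 4);

/* Upload the data; no reallocation can ever be required, and
   glTexImage2D on this texture is now an error */
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 4, 4, GL_RGBA, GL_UNSIGNED_BYTE, data0);
glTexSubImage2D(GL_TEXTURE_2D, 1, 0, 0, 2, 2, GL_RGBA, GL_UNSIGNED_BYTE, data1);
glTexSubImage2D(GL_TEXTURE_2D, 2, 0, 0, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, data2);
```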

Re: buffer_storage

Now you can see there is something very, very wrong with OpenGL.

Something is very, very wrong with that implementation of the spec. There's nothing in the GL spec that says that a mipmapped texture must have all its texture levels reside in one contiguous memory chunk - that is completely up to the implementors. The levels could just as easily be separate chunks of memory, with a small table containing the starting addresses of each. Each texture must have some sort of header information where the dimensions, format and mip range must be stored; this table could easily reside there.

I certainly wouldn't mind a houseclean of the usage hints to be clearer and more useful. The current setup has a lot of entries for questionable usage cases that seem to have sprung out of the fact that the spec writers wanted a nice 3x3 matrix.

Re: buffer_storage

Preventing the programmer from making mistakes is not the purpose of glTexStorage at all.
Its purpose is to relieve the driver of the burden I explained, which is unavoidable otherwise.

And it does so by preventing the programmer from doing things that would burden the driver, i.e. making mistakes. Namely, calling glTexStorage multiple times. That would be a mistake, and the texture_storage extension rightfully stops you from doing it.

If texture_storage's purpose were solely to allocate all mipmaps in one go, then it wouldn't need to be immutable. It could just say that subsequent glTexStorage calls erase all previous mipmap levels and respecify the whole texture, and that you can't call glTexImage* on such textures. Doing that would make texture objects perfectly analogous to buffer objects, since their storage would be allocated up front with glTexStorage, just like buffer objects' storage is with glBufferData. You could change the object completely by calling glTexStorage again with different sizes, just as you can call glBufferData with a different size.

But the ARB didn't do that, for the reasons they explained.

When you call glBufferData a second time on the same object, you effectively destroy the old one and create a new one.
Why shouldn't you be allowed to specify new parameters for the new object? It doesn't make any sense.

Simple: it's not a new object. It's the same object with different storage on it. Drivers can't just pretend that it's a new object, because it isn't. They have to treat it differently, because the object could be in use somewhere. And I don't mean being pulled from or written to, but bound to the context or attached to a VAO.

If you create a new object with glGenBuffers, then the driver knows that it wasn't in use before.

And the reason you shouldn't be allowed to do so is because it confuses the API. It makes using the API harder, and nothing is gained by that. It makes writing drivers for the API harder, and nothing is gained by that either.

Your argument is that this is a good thing. That it is a good thing for the difference between the fast path and the slow path to be based on whether a number changes between frames. That it is a good thing that this is documented precisely nowhere. That it is a good thing that users can stumble onto the slow path through no fault of their own. That it is a good thing that finding the fast path for buffer objects is a nightmare of random and arbitrary possibilities.
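For illustration, the kind of comparison meant here might look like the following, where glBufferStorage stands for the proposed immutable-storage call and its flag is hypothetical:

```c
/* Restricted: size and usage are fixed once, at creation */
glBufferStorage(GL_ARRAY_BUFFER, 64 * 1024, NULL,
                BUFFER_IMMUTABLE_HINT /* hypothetical flag */);

/* Unrestricted: size and usage may silently change at any time */
glBufferData(GL_ARRAY_BUFFER, 64 * 1024, NULL, GL_STATIC_DRAW);
/* ... later ... */
glBufferData(GL_ARRAY_BUFFER, 128 * 1024, NULL, GL_STREAM_DRAW); /* allowed */
```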

Just by looking at the function calls, it is obvious that the top one is more restricted in its buffer object use, and therefore likely faster, than the bottom one. It's immediately apparent, and impossible to ignore. And impossible to screw up, since the driver will throw an error at you.

Yet you are arguing that the current obfuscated API is the way it ought to be.

You cannot fix the problems with buffer objects without establishing a binding contract between the user and the driver. And in establishing that contract, you will necessarily make the API easier to use.

And the very first step in doing that is to make the storage immutable.

ARB_texture_storage fixes this awkwardness by adding a way to allocate the storage in memory first.

I thought I was quite clear about one of the purposes of my proposed extension:

Originally Posted by me

The main problem this avoids is someone trying to use glBufferData every frame to invalidate a buffer, but maybe getting the hint or size parameters wrong. That's an improper invalidation of the buffer, and it effectively results in creating a new buffer object with theoretically new characteristics.
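The mistake described might look like this in code (a sketch; the sizes are placeholders):

```c
/* Intended idiom: orphan the buffer each frame before streaming new data */
glBufferData(GL_ARRAY_BUFFER, 64 * 1024, NULL, GL_STREAM_DRAW);

/* The mistake: a later frame passes a different size and hint.
   Instead of merely orphaning the store, this respecifies the
   buffer with new characteristics. */
glBufferData(GL_ARRAY_BUFFER, 60 * 1024, NULL, GL_DYNAMIC_DRAW);
```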

My suggested extension makes this usage pattern impossible. Just like texture_storage makes it impossible to screw up texture allocation. Both of these ideas fix different problems, yes, but they both use immutability to solve their respective problems. The immutability in both cases is important, because it allows the user and the driver to have a binding contract about how the buffer is going to be used.

You cannot make buffer object hints binding unless the buffer object storage is to some degree immutable. For example, the "Write only after invalidate" style does not allow respecifying the buffer size. If you can respecify it, then you've broken the entire point of having restrictive hints. Namely, having a binding contract between the user and the implementation as to how the user will be using it.

I certainly wouldn't mind a houseclean of the usage hints to be clearer and more useful.

Making new hints would be meaningless without the power to enforce them via the API. And that power starts by knowing that the user cannot make the buffer change size or change what those hints are. That is: immutability.

Re: buffer_storage

Originally Posted by malexander

Now you can see there is something very, very wrong with OpenGL.

Something is very, very wrong with that implementation of the spec. There's nothing in the GL spec that says that a mipmapped texture must have all its texture levels reside in one contiguous memory chunk - that is completely up to the implementors. The levels could just as easily be separate chunks of memory ...

Are you saying that all hardware implementations are wrong? I have never seen hardware which can do what you describe. Maybe you can point me to one.

Re: buffer_storage

Are you saying that all hardware implementations are wrong? I have never seen hardware which can do what you describe. Maybe you can point me to one.

I don't have to. The GPU hardware shouldn't be involved in the wrangling of a user's texture data into whatever format the hardware prefers, the client-side driver should do that for them. And so, it can store the data however it likes until it needs to send the texture to the GPU's memory. glTexStorage() makes a lot more sense now that unified CPU/GPU memory schemes are appearing in OpenGL (non-ES), but I don't believe the previous glTexImage() scheme was that inefficient.

The point I was attempting to make was that the situation you described with textures seemed overly dire, in an attempt to make immutable buffers seem less necessary.

Edit: Actually, let's say the texture situation was as awful as you describe and that immutable textures completely fixed it. How does that affect the merits of this particular proposal?

Re: buffer_storage

The ideal implementation for textures is that what you specify in glTex*Image should be directly copied into GPU-accessible memory. The driver can even be smart and allocate an RGBA8 texture for format=GL_RGBA and a BGRA8 texture for format=GL_BGRA, so that it can just memcpy your data. Doing anything else is a waste of CPU cycles.
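For example, when the application's pixel layout matches what the hardware prefers, the upload can be a straight memcpy (a sketch; which layout is preferred varies per implementation, and width, height, and pixels are placeholders):

```c
/* If the hardware stores texels as BGRA, passing GL_BGRA client data
   lets the driver memcpy instead of swizzling every texel */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_BGRA, GL_UNSIGNED_BYTE, pixels);
```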

Another reason for the immutability in texture_storage was to simplify texture completeness checking (it's even mentioned in the spec).

I am not convinced that immutability would be of any use for buffers. A complete reallocation of a buffer essentially just changes the pointer to the buffer (both in the driver and in hardware), which means vertex arrays, texture buffers, and whatnot must be re-validated so that the change of the pointer is propagated through the different layers of the driver into the hardware. That's not very fast, but immutability wouldn't help here anyway. If you want to change the size of a buffer, this step is inevitable, and it doesn't matter whether you set up a new vertex array state yourself or your driver does it for you in glBufferData (the former may be a little more efficient, depending on the implementation; the question is: would it even be measurable?).

Performance-wise, I don't see where immutability would help (at least not enough to be worth introducing a new extension). The implementation of glBufferData is already pretty straightforward. Someone might argue that immutability would make for an easier implementation, but that may not apply here, because driver developers would have to maintain the current implementation for an unspecified time anyway (usually 15-20 years in the OpenGL world, or until everybody abandons OpenGL).

Re: buffer_storage

Simple: it's not a new object. It's the same object with different storage on it. Drivers can't just pretend that it's a new object, because it isn't. They have to treat it differently, because the object could be in use somewhere. And I don't mean being pulled from or written to, but bound to the context or attached to a VAO.

Um. Err. If you call glBufferData with, say, MagicNumber bound in thread A, and MagicNumber is also bound in thread B (the contexts are in the same share group), then the original data of the buffer object is still used in thread B until MagicNumber is rebound in thread B [I think]. This is because calling glBufferData does create a new object according to the specification:

(my emphasis added). I'd say that l_belev is spot on here. Besides, for a fixed buffer object ID, how many times does one change the usage flags or size? That is the only reason to call glBufferData again on the same name.

Though I freely admit, I would not mind seeing a glResizeBufferObject API entry point; perhaps that is what you are after: a resize API, together with buffer object creation hints stating that you won't resize a buffer object.
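Such an entry point is entirely hypothetical (nothing like it exists in GL), but it might be declared along these lines:

```c
/* Hypothetical: change the size of an existing data store while keeping
   the object name; contents preserved up to min(old size, new size) */
void glResizeBufferObject(GLenum target, GLsizeiptr newSize);
```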

Re: buffer_storage

The ideal implementation for textures is that what you specify in glTex*Image should be directly copied into GPU-accessible memory.

So you want to map texture memory. Can't see why this can't be done - I'm sure the Khronos guys would be able to pull off a decent API for that.

I didn't mean exactly that, as I was referring to the current OpenGL, but good point. I can already map texture memory. But I agree that such a feature would be useful to have in OpenGL; however, OpenGL isn't ready for that. The OpenGL internal format doesn't specify any channel ordering, and it's allowed to fake one format with another (e.g. ALPHA8 using RGBA8). Even though there are lots of R/RG/RGB/RGBA/A/L/LA/I/D/DS/S internal formats, implementations may actually not support every one of them (e.g. most RGB formats are not). We would need a new set of internal formats that strictly describe how a pixel is laid out in memory, and we'd need an is-format-supported query too (pretty much what Direct3D has). I don't see this coming anytime soon. It would clean up the API, though.