mutable texture formats

In the newer D3Ds (10 and up) there is a mechanism whereby one can effectively change a texture's format - it is called "views" there.
This is hardly one of the more useful features. Actually I only know of one practical use, that is, to change from sRGB to non-sRGB and the other way around (which was just a sampler state in previous DirectX versions).

Anyway, since the various vendors now actually support the newer D3Ds, and with them this particular feature (mutable texture formats), why can't we access this hardware capability from OpenGL?
So I suggest that we make texture formats "mutable". That is, once a texture is created, its format can be changed with glTexParameteri.
The sampler objects will also possess a format state, which will override the texture's, just like the other parameters.
Of course we will have to divide the various formats into groups of mutual compatibility. A texture's format can then be changed only to one of the formats from the compatibility group of its original format.
If a sampler's format is not compatible with the bound texture's, we act as in other similar cases - e.g. generate INVALID_OPERATION on subsequent draw commands.

The format is actually a property of the texture levels and not of the texture object itself, but this is so only for historical reasons. (Different mip formats within the same texture are possible, but that only makes the texture unusable and is a burden for the drivers to deal with.)
Changing the format with glTexParameteri will apply to all mips at once.

This will give the full functionality of the DirectX versions, but with a much simpler and more intuitive API (the DirectX way is far too complex, bloated with the numerous pointless objects you have to create and manage).

The biggest flaw with the D3D10/11 approach is that creating a view is mandatory - there is no default view that you can just grab and use. It's a one-time-only op at creation time for sure, and anyone sensible will end up wrapping it with something more appropriate to their program's usage requirements, but mildly annoying all the same.

On the other hand, it does allow for unification of a whole bunch of previously different resource types, and pretty much everything is now some variation on buffer or texture (even the depth buffer is just a regular Texture2D). It's actually quite an elegant setup really - a resource is the raw object and a view defines how the pipeline interprets that object, so there's a nice and clear separation going on. The initial (which are the current) implementations are a mite clunky, but like everything in D3D land they can be expected to get better over time. So for standard texturing you have a ID3D11Texture2D with an ID3D11ShaderResourceView created on it, but you can also create an ID3D11ShaderResourceView on an ID3D11Buffer object, allowing for easy render-to-vertex-buffer or texture-from-vertex-buffer if that's what you wanted to do.

In other words, in many ways it behaves a lot like OpenGL's buffer object binding points, with a little bit of glTexParameter thrown in. So there's likely no requirement for OpenGL to specify any kind of full implementation of D3D views, with the specific use case you identified (mutable texture formats) being the only obvious one, and correctly expressed as a glTexParameter or glSamplerParameter. So long as the mandatory "you must create a view" requirement doesn't carry over, it sounds fine and reasonable.

As it is now, the texture format is not so well-defined, because it is specified separately for each mip. The texture is considered complete if all mips have the same format, but even after that the user can still change some mip's format and the texture becomes incomplete again.
It may then be difficult to define which is the "original" format with which any new one set via glTexParameteri must be compatible.
For this reason we may want to allow "mutable formats" only for the "immutable textures" created through the new extension GL_ARB_texture_storage.

Exactly how would this work? What would it mean to change the format of a texture from GL_RGBA8 to GL_RGBA16? I don't know much about how D3D10+ works in these cases.

My concern is that there won't be very many valid re-interpretations of data, and even fewer useful ones. For example, you could turn GL_RGBA8 into GL_RGB10_A2 or GL_R32F or something. But what exactly does that gain you in terms of real usefulness? What problem does this solve?

Also, there's the issue of specifying behavior, essentially forcing implementations to do things the D3D way. I don't care much for that idea.

Most importantly, you said, "I only know of one practical use, that is, to change from sRGB to non-sRGB and the other way around (which was just a sampler state in previous DirectX versions)." We already have that, though only as an extension (EXT_texture_srgb_decode) for the time being. This seems like a far less intrusive way of getting the useful functionality of this concept.

OpenGL doesn't have to expose every possible thing that hardware could do. It just needs to expose all of the useful things it can do.

Reviewing the DXSDK, 10/10/10/2 formats aren't in the same type family as 8/8/8/8, so that's either one of those nice arbitrary restrictions that D3D likes to hit you with every now and then, or there was a practical reason behind it. The only thing it would have given you is a psychedelic screen effect without any post-processing, which is probably not in very high demand.

In reality the purpose of views in D3D is something entirely different, and mutability of texture formats is a side effect rather than the main objective (which was separation of resource definition from how the resource is to be used, and which allowed for more generalization of resource types - think of it as being kind of like mallocing a void * buffer then casting it to a struct type). It's also a limited mutability rather than a fully general one, so you can't convert your example of GL_RGBA8 to GL_RGBA16 - the formats must be from the same type family, which tends to mean the same number of components and the same component size.

If GL were to get this a more general mutability would have one practical use I can think of, and that's where you might have a texture that sometimes you want to access as RGBA8 and sometimes as R32F. Say you want to interpret one portion of it as depth info and one portion as colour info, and - because the texture is quite large - resource constraints prohibit you from creating two of them. True, it's a mite far-fetched, and true, you could do some fragment shader packing/unpacking (at the cost of a few extra instructions), but it is sneaking into the realm of things that could happen.

It seems to me that the way to handle this would be with a more flexible type system, something that would allow you to build an image format from raw components. You could create an image format that would be the equivalent of GL_R16F_GB8. I don't know how you would handle shadow accesses from such a texture, as those explicitly return a single float value.

Even so, it's not of that much utility. If you need that level of flexibility, nobody's stopping you from using special shader logic to turn one texture's format into another. You could unpack the RG components of an RGBA8 texture into a float to emulate GL_R16F_GB8.