The statement "are gone" is somewhat misleading. OpenGL's learning/optimization curve is not going to be cut short any time soon. I prefer things to be quickly codeable first and quick to execute later on.

it's there to provide access to the hardware, with minimal overhead.

OpenGL is not a hardware driver in my reading. Windows has a great GUI despite the fact that it is an operating system. One can criticize the fact that one cannot get the OS without the GUI, but not that the GUI is shipped with the OS, if you get my meaning.

They are - at least for everyone doing modern OpenGL right and caring about performance. BTW, even though GL_ARB_compatibility permits using all the old nonsense, it's just the syntax that's still there. Under the hood, all that crap is emulated using current hardware facilities.

Originally Posted by hlewin

I prefer things to be quickly codeable first and quickly to execute later on.

Since when is something really elaborate quickly codeable in OpenGL? Correct OpenGL usage needs knowledge, effort and in most cases time. It doesn't matter if you save time coding when the result runs several times slower than the semantic equivalent you put more effort into.

Originally Posted by hlewin

OpenGL is not a hardware-driver in my reading.

No, OpenGL is a specification. Your OpenGL implementation, however, is part of the driver and it implements an interface to the graphics hardware - hopefully with minimal overhead, like Alfonse suggested.

Originally Posted by hlewin

Windows has a great GUI despite the fact it is an Operating System. One can criticize the fact that one cannot get the OS without the GUI but not that the GUI is shipped with the OS, if you get my reading.

They are - at least for everyone doing modern OpenGL right and caring about performance. BTW, even though GL_ARB_compatibility permits using all the old nonsense, it's just the syntax that's still there. Under the hood, all that crap is emulated using current hardware facilities.

Which is a good thing, as using the old crap makes learning OpenGL quite a lot easier. And as you say, the principles stay roughly the same. For my taste, the compatibility spec does not go far enough in providing a simple path from the beginner tutorials you can download everywhere to a state-of-the-art application.

Since when is something really elaborate quickly codeable in OpenGL? Correct OpenGL usage needs knowledge, effort and in most cases time. It doesn't matter if you save time coding when the result runs several times slower than the semantic equivalent you put more effort into.

It matters, for example, when using declaratory elements of the language binding. See the example above. When sketching things out I do not want to care about the alignment requirements of glBindBufferRange. That can be optimized once things have been implemented and a bottleneck shows up. I feel it's unnecessary to be forced to write hardware-friendly, optimized code in the first place. Who cares about the need for 100, let it be 1000, readbacks from the GPU per frame? That's something one needs to care about when writing bleeding-edge stuff - bleeding-edge for about six months, until the next GPU generation comes out. I have no problem wasting 10000 clock cycles per frame. I have a problem wasting some work-hours having to cope with offset alignment requirements.

Which is a good thing as using the old crap makes learning OpenGL quite a lot easier.

Is the dark side stronger?

No. Quicker. Easier, more seductive.

Just because something is easy doesn't make it good. I have never seen a fixed-function-based tutorial really explain how things actually work in the code, what all those parameters to various functions mean and so forth. Whereas you can't write shader-based code without knowing what you're doing.

Users learn to use gluPerspective without having the slightest clue what it means. They learn to use glTexEnv without knowing what it's doing. They memorize and regurgitate glBlendFunc parameters to achieve some effect without any idea what it is really doing. And all the while, they think they are "learning" computer graphics, when in reality, they're just copy-and-pasting bits of code that worked before into some other place.

And when they encounter a problem, because the Frankenstein code that they've assembled from 20 different tutorials doesn't integrate well, they ask here. Without the slightest clue what's broken or how to fix it.

It may take longer to learn via shaders, and you may not be able to see glamorous results quickly. But when you learn it, you learn it. You aren't just copying bits of code around; you're understanding what you are doing.

Nonsense. A lot of the stuff you needed to do with legacy OpenGL simply does not apply to modern OpenGL.

Originally Posted by hlewin

When sketching things out I do not want to care about the alignment requirements of glBindBufferRange.

When I registered on this forum almost 3 years ago it was because I stumbled over the buffer offset alignment for uniform buffers. OK, so it's not too intuitive. However, when you're doing OpenGL there's stuff that's implementation-dependent. Knowing that, and how to deal with it, is sometimes essential. In any case, there's the spec you can read. And don't tell me you don't have to read other specs or API docs or documentation in general during your workday. If you don't want to read the spec you can ask here or elsewhere and people will help you. Still, nobody's going to change the spec just because some parts of it are an inconvenience to you.

Originally Posted by hlewin

I feel it's unnecessary to be forced to write hardware-friendly, optimized code in the first place.

Who forces you? YOU need to force yourself if you want fast code. By your logic, writing code that uses cache lines well is wasted effort. Or making sure data is properly aligned so memory accesses work properly. Or utilizing SIMD instructions. Or inline assembly. Etc., etc. ... Oh well ... It's fine to first make code correct and then fast, but disregarding platform-specific quirks is simply unwise, to put it diplomatically.

Originally Posted by hlewin

Who cares about the need for 100, let it be 1000, readbacks from the GPU per frame?

Ehm, everyone who's not completely insane? Do you have any idea what that many readbacks will do to your program's performance?
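For reference, each synchronous readback forces a full pipeline flush. A sketch of the difference (not runnable as-is, since it assumes a live GL context and pre-created objects `pbo`, `cpuBuffer`, etc.):

```c
/* Synchronous readback: the driver must drain the entire GPU queue
 * before it can hand the pixels back. Do this 100x per frame and the
 * GPU idles while the CPU waits. */
glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, cpuBuffer);

/* Asynchronous alternative: copy into a pixel buffer object now and
 * map it one frame later, when the transfer has long since finished. */
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[frame & 1]);
glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, 0);

glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[(frame + 1) & 1]);
void *pixels = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
/* ... use last frame's pixels ... */
glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
```

Even with the PBO route, a thousand readbacks per frame is a thousand transfers over the bus; the point is that readbacks are never free, bleeding-edge renderer or not.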

Originally Posted by hlewin

That's something one Needs to care about when writing bleeding-edge stuff.

So your argument is, unless one writes a high-end renderer for use in next-gen AAA games, performance simply doesn't matter?

Since we're straying very far from your original proposal, let me finally urge you to consider the following: if you don't want to write high-performance code that's OK, and if you're happy with the result, good for you. Still, I'm pretty confident that most experienced or semi-experienced OpenGL devs like to make things fast, and they want and need an API that caters to that desire. At least that's the case for me. OpenGL is not designed to provide maximum convenience; it's supposed to provide a means to write high-performance rendering applications - and performance usually comes at a price. This includes decisions at the hardware level which may not be transparent to the application developer but are still necessary to keep performance up. If that means I have to sacrifice some convenience, then sign me up. Wishing for changes to be adopted that result in implementations being slower than their predecessors is simply unacceptable.

Bringing suggestions to improve OpenGL is always good if they're valid, but your suggestion has been dismissed by several very experienced people (myself not included) during a long discussion. It's time to let it go.

At any rate, what hlewin wants is already available as an NVIDIA-only extension. As stated before, that extension assumes point blank that all the GPU needs in the shader when accessing a texture is a 64-bit value... What he fails to grasp is that other hardware may or may not operate that way.

As a side note, the NVIDIA extension offers several distinct advantages over glBindTexture jazz:

Avoid glBindTexture and pass the 64-bit address directly. This is the same avoid-the-bind savings that NVIDIA's original bindless extension offers.

With NVIDIA's bindless texture, the need for texture atlases utterly disappears. You no longer need to make sure you are using no more than N textures; you can use them all (subject to VRAM room!). Whatever one uses to choose the texture can then be fed from anything: attributes, buffer objects (be they uniform or texture buffer objects - the former being what he wants so badly).

In theory one could imagine that an integer computed/determined in a shader could be used to specify what texture unit to use; but I do not really buy that either, since it forces an implementation to have a separate thing, orthogonal to the fragment shader, to do the sampling (which I guess is the case for NVIDIA).

I'd still like to see NVIDIA's bindless for buffer object data somehow come to core in some form, but I do not think I will; it assumes too much: that for the data behind a buffer object, all one needs is a 64-bit value.

At any rate, what hlewin wants is already available as an NVIDIA-only extension.

Not exactly. Bindless texture works because it introduced opaque handles, represented by a 64-bit integer, to accomplish getting samplers from buffers. What hlewin wants is for a non-opaque API concept, the texture unit index, to be enough for the shader to create samplers from it. Furthermore, bindless textures require one more important additional step: making the texture resident.

Also, what he wants is for the GL implementation to parse the buffer and automagically translate API values into opaque, implementation-dependent values. That's the nonsense part.
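For anyone following along, the handle-plus-residency flow looks roughly like this under GL_ARB_bindless_texture (the multi-vendor follow-up to the NVIDIA extension; a sketch assuming a live context and an existing texture object and UBO, not a complete program):

```c
/* C side: turn a texture object into a 64-bit handle, make it
 * resident, then store the handle in a uniform buffer like any
 * other value. */
GLuint64 handle = glGetTextureHandleARB(texture);
glMakeTextureHandleResidentARB(handle);
glBindBuffer(GL_UNIFORM_BUFFER, ubo);
glBufferSubData(GL_UNIFORM_BUFFER, 0, sizeof(handle), &handle);
```

```glsl
// GLSL side: the 64-bit handle sitting in the block *is* the sampler.
#extension GL_ARB_bindless_texture : require
layout(std140) uniform Material {
    sampler2D diffuse;   // backed by the handle written above
};
```

Note that the handle is opaque: the implementation hands it out and interprets it, which is exactly what distinguishes this from putting a raw texture unit index in the buffer.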


Not exactly. Bindless texture works because it introduced opaque handles, represented by a 64-bit integer, to accomplish getting samplers from buffers. What hlewin wants is for a non-opaque API concept, the texture unit index, to be enough for the shader to create samplers from it. Furthermore, bindless textures require one more important additional step: making the texture resident.

Also, what he wants is for the GL implementation to parse the buffer and automagically translate API values into opaque, implementation-dependent values. That's the nonsense part.

The need to make it resident I already noted; what he was originally after was samplers in a buffer object. NVIDIA bindless does give that. The rest of what he was going on about - a GL implementation needing to check the buffer object, etc. - I think was him just getting painted into a corner... You can definitely emulate storing which texture unit to use in a buffer object just by having an additional array (in a separate block), indexed by texture unit, whose values track which texture/sampler pair is bound where.
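That emulation fits in a few lines of plain GL 4.0 GLSL, no extension needed (a sketch; the block and variable names are made up, and the index read from the buffer must be dynamically uniform to legally index a sampler array):

```glsl
#version 400
// One block carries the per-draw index, updated from a buffer object;
// a plain sampler array indexed by it stands in for "a texture unit
// stored in a buffer".
uniform Which { int unit; };        // written via glBufferSubData etc.
uniform sampler2D textures[16];     // bound once: unit i -> textures[i]

in vec2 uv;
out vec4 color;

void main() {
    color = texture(textures[unit], uv);
}
```

So the effect hlewin is after is reachable today; it just goes through an indirection the application controls instead of the implementation rewriting buffer contents.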

But I confess, the idea of storing which texture unit (instead of which texture) to use in a buffer object sounds almost useless... As Alfonse originally stated, the vast majority of the time the texture unit to use for a sampler uniform is static for the lifetime of a GL program.