Re: OpenGL 3 Updates

I see there's no mention of binary blobs again. I think you're underestimating their importance. I'd rather wait another month if it means getting binary blobs. I'd also rather you'd spent the time spec'ing binary blobs than coming up with this text buffer nonsense to support #includes. I already support includes in my renderer; it's a trivial bit of search & replace - took 20 minutes to implement, and I'm sure that included making a cup of tea.
Not that I'm not grateful for your OpenGL charity work.
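For the curious, the whole trick is roughly this (a minimal sketch; the `sources` map stands in for whatever file loader you use, nested includes are handled by recursion, and it assumes no include cycles or guards):

```cpp
#include <cstddef>
#include <map>
#include <sstream>
#include <string>

// Expand lines of the form  #include "name"  by splicing in the named
// source. The `sources` map stands in for a real file loader; recursion
// handles nested includes (no include guards, so don't self-include).
std::string expandIncludes(const std::string& src,
                           const std::map<std::string, std::string>& sources)
{
    std::istringstream in(src);
    std::ostringstream out;
    std::string line;
    const std::string key = "#include \"";
    while (std::getline(in, line)) {
        std::size_t pos = line.find(key);
        std::size_t end = line.rfind('"');
        if (pos != std::string::npos && end > pos + key.size() - 1) {
            std::string name = line.substr(pos + key.size(),
                                           end - pos - key.size());
            auto it = sources.find(name);
            out << expandIncludes(it != sources.end() ? it->second : "",
                                  sources) << '\n';
        } else {
            out << line << '\n';
        }
    }
    return out.str();
}
```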

Re: OpenGL 3 Updates

ATI/nVidia HAVE advised for AGES not to use alpha-test, if you can avoid it. There are plenty of papers mentioning it.

This is the opposite of the advice from the days of software renderers (and the very early hardware implementations), when you were told to use alpha-test as often as possible, to kill as many fragments as early as possible. Though there were no shaders back then, of course.

Discard is also advised against, of course.

However, so far I have not seen a single paper that says to prefer discard over alpha-test if you actually need the functionality. I always got the impression that discard is by far the most evil per-fragment operation.

But if the ARB decides to include alpha-test in GL3, I believe they have good (enough) reasons. And, as knackered pointed out, there are actually much more important things.
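For anyone following along, the two mechanisms being compared: alpha-test is fixed-function API state, while discard is an instruction inside the fragment shader. A minimal illustration (the 0.5 threshold is just an example; the GL calls are shown as comments since they need a live context):

```cpp
#include <string>

// Fixed-function path (needs a GL context, so shown as comments):
//   glEnable(GL_ALPHA_TEST);
//   glAlphaFunc(GL_GREATER, 0.5f);   // keep fragments with alpha > 0.5

// The equivalent rejection done in the fragment shader via discard:
const std::string fragSrc =
    "uniform sampler2D tex;\n"
    "void main() {\n"
    "    vec4 c = texture2D(tex, gl_TexCoord[0].st);\n"
    "    if (c.a <= 0.5) discard;   // kill the fragment\n"
    "    gl_FragColor = c;\n"
    "}\n";
```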

Re: OpenGL 3 Updates

I see there's no mention of binary blobs again.

And... why would they?

They made it clear that such functionality was being considered for the post-3.0/pre-Mt. Evans refresh. It might be nice to hear some confirmation that they intend to go forward with such functionality, but they haven't provided anything more on any of the other post-3.0/pre-Mt. Evans features either.

Re: OpenGL 3 Updates

Originally Posted by Humus

Originally Posted by Eric Lengyel

Thank you! That's right, don't let the slackers at a certain company (that shall go unnamed, but we all know who I'm talking about) skimp on FBOs any more because depth/stencil was left out of the original extension spec. I still have to support pbuffers because of them. >-(

I assume you're talking about ATI, but GL_EXT_packed_depth_stencil is supported already. At least it's in my extension list. Haven't tried to use it myself though.

90% of my customers are still running on Windows XP. The GL_EXT_packed_depth_stencil extension is not supported in the ATI driver (7.10) under Windows XP, so I must fall back to pbuffers.

The extension does show up under Windows Vista, but it did not function correctly for a long time, and I again had to fall back to pbuffers. I just checked, and it does appear to be working correctly under Vista now, but at a 50% performance penalty compared to XP on the same machine.
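Side note for anyone checking their own extension list: a plain strstr on the GL_EXTENSIONS string can false-positive on substrings of longer extension names, so it's worth matching whole tokens. A minimal checker, assuming the classic space-separated string from glGetString(GL_EXTENSIONS):

```cpp
#include <cstring>

// True if `name` appears as a complete, space-delimited token in the
// extension string returned by glGetString(GL_EXTENSIONS).
bool hasExtension(const char* extensions, const char* name)
{
    const std::size_t len = std::strlen(name);
    const char* p = extensions;
    while ((p = std::strstr(p, name)) != nullptr) {
        bool startOk = (p == extensions) || (p[-1] == ' ');
        bool endOk   = (p[len] == ' ') || (p[len] == '\0');
        if (startOk && endOk)
            return true;   // exact token match
        p += len;          // substring of a longer name; keep looking
    }
    return false;
}
```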

Re: OpenGL 3 Updates

Originally Posted by Korval

Not only is a "linear search over a permuted range of options" inefficient, it is a broken algorithm. It may well create a format that you cannot use with those images.

If I understand correctly, you first create a format object and then use that format object to bind to an image object. There is no guarantee that the bind will succeed either (e.g., unsupported format, insufficient resources). Then I have to delete the format and try one with a lesser demand on the system. Same boat, different paddle.

Originally Posted by Korval

Maybe not, but it would only ever be so in a program like Maya, Max or some other tool where texture formats were controlled specifically by the user. And in that case, you would simply tell the user that the format is not available.

I don't program for commercial entertainment. Everything I do is pretty much controlled by the demands of my users and I don't have the luxury of telling them that something is not supported. I have to provide an alternative -- period.

Originally Posted by Korval

You misunderstood the question.

What you said was, "I would only hope that the return is more than just yes/no and actually tells the caller why the format can't be created." That is, if a format object fails creation, you can query the system for the specific reason why it failed.

Once again, an inefficient algorithm to determine what went wrong after the fact.

I don't like APIs that prevent me from knowing about specific limitations unless I try to create an object and wait for a failure. I don't create rendering contexts this way, so why should I be expected to do this for images? Even creating a format object doesn't guarantee that a particular size image will successfully bind. And please don't claim that a creation routine is a query routine -- it's not.
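The create-and-test pattern I'm objecting to, sketched generically. `tryCreate` here is a stand-in for the proposed (and at the time unspecified) format-object creation/bind call, which can only answer yes or no; the names are hypothetical, the shape of the loop is the point:

```cpp
#include <functional>
#include <string>
#include <vector>

// Walk an ordered list of candidate formats, most demanding first, and
// return the first one the implementation accepts. `tryCreate` stands in
// for the real creation/bind call, which can only answer yes or no.
std::string pickFormat(const std::vector<std::string>& candidates,
                       const std::function<bool(const std::string&)>& tryCreate)
{
    for (const auto& fmt : candidates) {
        if (tryCreate(fmt))   // create, test, delete on failure...
            return fmt;       // ...until something finally sticks
    }
    return "";                // nothing worked at all
}
```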

Re: OpenGL 3 Updates

Note that I don't claim this is the most efficient solution. I claim it's the only solution that's actually working.

Please, tell me a better alternative to query these things, *and* show me that it will never produce false positives or false negatives, on any hardware (current and future).

So we continue to wait for an inefficient solution.

FWIW, if I know that a particular filtering mode is not supported at all for a particular graphics card, why should I even bother attempting to create a format object with that filtering mode? THAT is a better alternative. A failure for one combination does not provide any insight that another combination will or will not work -- only that that combination has failed. I understand that there can be a huge number of possible combinations but only a relative few supported combinations. If you don't understand the difference between n! and (n-1)!, then you can't possibly understand the problem.

Future proofing has nothing to do with having or not having a decent query system.
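The query-first alternative I'm arguing for would look something like this. The capability record is entirely made up, since no such query existed in the proposal; the point is pruning candidates against known limits before any creation attempt, instead of discovering failures one create call at a time:

```cpp
#include <string>
#include <vector>

// Hypothetical capability record, as an up-front query API might report it.
struct Caps {
    bool floatFiltering;   // is linear filtering on float formats supported?
};

// Prune candidate formats against known capabilities *before* attempting
// any creation, so unsupportable combinations are never even tried.
std::vector<std::string> viableFormats(const std::vector<std::string>& candidates,
                                       const Caps& caps)
{
    std::vector<std::string> out;
    for (const auto& fmt : candidates) {
        bool isFloat = fmt.find('F') != std::string::npos;  // crude tag check
        if (isFloat && !caps.floatFiltering)
            continue;      // known unsupported: skip, never create
        out.push_back(fmt);
    }
    return out;
}
```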