For the next generation of OpenGL – which for the purposes of this article we’re going to shorten to OpenGL NG – Khronos is seeking nothing less than a complete ground-up redesign of the API. As we’ve seen with Mantle and Direct3D, outside of shading languages you cannot transition from a high-level, abstraction-based API to a low-level, direct-control-based API within the old API; these APIs must be built anew to support this new programming paradigm, and at 22 years old OpenGL is certainly no exception. The end result is that this is going to be the most significant OpenGL development effort since the creation of OpenGL all those years ago.

f**k. Yes.

Interesting OGL4.5 extensions:

ARB_clip_control: Aside from the pure convenience this provides when porting from DX, it can also be used to improve depth precision slightly. This could be achieved before as well, but the approach was a bit unclear and hacky.
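To see why switching to a [0, 1] depth range (which glClipControl allows) plus a reversed floating-point depth buffer helps precision: float32 has vastly more representable values near 0 than near 1, so mapping the far plane to 0 spreads precision where the standard mapping wastes it. A minimal sketch, just counting bit patterns (no GL involved):

```python
import struct

def f32_bits(x):
    # bit pattern of a non-negative float32; monotonically increasing in x
    return struct.unpack('<I', struct.pack('<f', x))[0]

def representable(a, b):
    # number of distinct float32 values in [a, b), for 0 <= a < b
    return f32_bits(b) - f32_bits(a)

near_zero = representable(0.0, 0.0001)   # depth values available near 0.0
near_one  = representable(0.9999, 1.0)   # depth values available near 1.0
print(near_zero, near_one)
```

The count near 0 comes out several orders of magnitude larger than the count near 1, which is exactly the asymmetry reversed-Z exploits.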

ARB_direct_state_access: FINALLY. This should eliminate almost all binding!

KHR_context_flush_control: I have no idea how this magically works. Will investigate. Multithreaded OpenGL for more than just texture streaming would be amazing. EDIT: Most likely this is not very revolutionary. The improved multithreading is expected in OpenGL NG.

a ground-up rewrite for OpenGL sounds great (and long overdue) and they seem to have ticked all the right boxes; however, judging by the description of how they intend to work with so many parties, and the fact that it's aimed at mobile, desktop and console platforms (and probably the web), I'm guessing it'll be a couple of years before they get anywhere (e.g. HTML5).


Finally, some good news. I don't even care if it takes 2-3 years; they can take their time, just let the API be nice and modern. If they're going to redesign the whole API, maybe they'll drop the state machine format?

After a good night's sleep, I've had the time to take a look at OpenGL NG more. It's clear that Khronos is going the same direction as DirectX 12 and Mantle, thank god.

A completely redesigned API made for modern GPUs, most likely targeting the same hardware as DX12/Mantle, which would be the Nvidia 600 series and the AMD 7000 series and up. I am unsure if Intel would be able to support it with their current generation of hardware; they may have the hardware but lack the drivers.

Which leads to the second point. As with DX12/Mantle, we're looking at a very thin and simple driver. All the old redundant features are thrown out. This should allow AMD, Nvidia and Intel to simply build a new, small driver for OpenGL NG, finally slowing down or halting further development of the old OpenGL. Newly released hardware would obviously still need an OpenGL 4.5 driver, but from now on we can expect OpenGL 4 to get fewer updates and new extensions, though some OpenGL NG features may still trickle down to OpenGL 4 through extensions... Well, hopefully we'll at least get much more stable OpenGL NG drivers, released faster!

With more low-level control of how the GPU and CPU work, we should be able to do some pretty cool optimizations by taking advantage of the GPU in better ways. For example, depending on what's exposed by OpenGL NG, it might be possible to render shadow maps in parallel with lighting the scene. Rendering shadow maps has a very low compute load, as the vertex shader is usually simple and there is no fragment shader. Filling pixels and computing the depth is handled by the hardware rasterizers on the GPU, leaving the thousands of shader cores on your GPU idle. Tile-based deferred lighting, on the other hand, is done by a compute shader which bypasses the vertex handling hardware and the rasterizer, and only uses the shader cores and a little memory bandwidth. We could essentially double buffer our shadow map and render a new shadow map while computing lighting with the previous one in parallel.
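The double-buffering idea above can be sketched as a simple ping-pong over two shadow-map slots: each frame writes into one buffer while the lighting pass reads the one produced the frame before. This is just the scheduling logic, assuming a hypothetical API where the shadow pass and the compute lighting pass could actually run on the GPU concurrently:

```python
# Toy sketch of the ping-pong: frame N's shadow pass writes buffer (N % 2)
# while frame N's lighting pass samples the buffer filled in frame N-1.
shadow_maps = [None, None]
lit_frames = []
for frame in range(4):
    write = frame % 2             # buffer the shadow pass renders into
    read = (frame + 1) % 2        # buffer the lighting pass samples
    shadow_maps[write] = "shadow%d" % frame
    if shadow_maps[read] is not None:
        lit_frames.append((frame, shadow_maps[read]))
print(lit_frames)
```

Note that lighting always sees a one-frame-old shadow map, which is usually an acceptable trade for keeping both the rasterizers and the shader cores busy.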

They're also promising massively improved CPU performance. The push for direct state access (promoted to core in OpenGL 4.5) implies that this is the way OpenGL NG will work, meaning simpler, shorter and clearer code. This also means that we won't be getting the same problem with state leaking as before, which would make it easier to avoid hard-to-find bugs. We're also promised proper multithreading. Multithreaded texture streaming is nice and all, but hardly a replacement for being able to actually build command queues from multiple threads as Mantle and DX12 will allow. Games that use OpenGL NG will at least have the potential for almost linear scaling on any number of cores. FINALLY.
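The multithreaded command-queue model described above boils down to: record command lists on any number of worker threads without touching shared driver state, then submit them once from one thread in a fixed order. A minimal sketch of that pattern (command names here are made up for illustration, not real API calls):

```python
import threading

# Each worker records its own command list completely independently; only
# the final submit is serialized, which is what lets recording scale with cores.
def record_commands(i, lists):
    lists[i] = ["draw_%d_%d" % (i, j) for j in range(3)]

lists = [None] * 4
threads = [threading.Thread(target=record_commands, args=(i, lists))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
queue = [cmd for cl in lists for cmd in cl]  # single, deterministic submit
print(len(queue))
```

Because each thread owns its own list, no locking is needed during recording, and the submit order stays deterministic regardless of which thread finishes first.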

Precompiled shaders! An intermediate shader format is something that has been wanted for a long time. GLSL basically just got a lot more like Java. Instead of letting each vendor develop their own GLSL compiler with their own sets of bugs and quirks, Khronos will (or I assume they will) develop their own compiler which compiles GLSL shaders to some intermediate format, just like Java bytecode. This should result in much more predictable performance as all GPUs and vendors will be able to take advantage of any optimizations the intermediate compiler does. This is especially good for mobile, which suffers from compilers that are bad at optimizing shaders. This also means that the GPU vendors will only have to be able to compile the GLSL "bytecode" to whatever their GPUs can run, which should be muuuuuuch less bug prone than compiling text source code. We'll only have to work with a single GLSL compiler from now on. As someone who's encountered so many broken compilers, this is a HUGE improvement. This will speed up development a lot on my end as well.
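The intermediate-format idea is essentially "compile the GLSL text once, at build time, into a stable blob; drivers only ever consume the blob." A toy stand-in (compile_to_ir is entirely hypothetical, just hashing the source to fake a deterministic blob):

```python
import hashlib

# Toy stand-in for an offline GLSL -> bytecode compiler: the same source
# always yields the same blob, and the blob is what would ship with the game.
def compile_to_ir(glsl_source):
    # a real compiler would parse and optimize; we just produce a stable blob
    return b"IR1:" + hashlib.sha256(glsl_source.encode()).digest()

a = compile_to_ir("void main() { gl_FragColor = vec4(1.0); }")
b = compile_to_ir("void main() { gl_FragColor = vec4(1.0); }")
```

The point of the exercise: with one shared frontend producing a deterministic blob, every vendor backend starts from identical input, instead of each re-parsing source text with its own quirks.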

OpenGL NG will run on phones as well! Although I believe OpenGL ES does not suffer from the same bloat as OpenGL, it's still extremely nice to be able to run the same code on both PC and mobile. This is almost gift-wrapped and addressed to LibGDX.

Aaah... This is so great. Khronos basically gave CAD the boot and went full gamer. So many great things here. Obviously, OpenGL NG isn't complete yet and may even result in a second Longs Peak disaster, but there are lots of reasons to be hopeful. The circumstances are completely different, with Mantle pushing development of DX12 and OGL NG.

Basically a useless driver update though. The whole OpenGL model is f**ked. Until I can distribute the latest drivers with my game I'm stuck with using the lowest common denominator I'm willing to support.


Cas

Just wondering... what is the lowest common denominator for OpenGL right now?


Yeah, as Cas said, there is no such thing. I usually make my games using OpenGL 3.3 functionality, because I found this to be the best in terms of functionality/availability. Usually Intel is the bottleneck, as they only support OpenGL 3.1 on their HD 3000 and HD 2000 GPUs, and OpenGL 2.1 on their low-end integrated GPUs. You can find a table of that here.

Nvidia is generally considered the best when it comes to OpenGL (their driver is the most stable of all the vendors', and they also have pretty good Linux drivers). They support up to OpenGL 4.5 from their GeForce 400 series and up, and OpenGL 3.x from the GeForce 8 series and up. You can find more details about that here.

I have a 2009 computer with an integrated Intel card that may or may not even support fragment shaders. (It's whatever version supports it using an extension. 1.3? 1.4?) It's depressing, but still makes me counterproductively determined to make the game run on it anyway.

One of the things I look forward to the most is explicit multi-GPU programming. Being able to control each GPU in any way you want has some extremely nice use cases, instead of relying on drivers and vendor specific interfaces to perhaps kind of maybe possibly get it to work with only some flickering artifacts.

I dunno about that... maybe I'm nuts, but it seems like things have been shifting away from custom rigs. Or maybe that's always been a niche thing, and the niche is swelling along with the overall number of computer users? But if people start moving away from desktops and laptops... hm.

The trend in software development, though, is that catering for lowest common denominators is where all the money is. There is no point in taking advantage of multiple GPUs unless the software cost of it is free, that is, it's all done magically under the covers and no one needs to work too hard for it. Consider multithreaded code... very rarely used in C++ land, as it's basically just too bloody difficult to do pervasively; give us the appropriate tools and suddenly everything can be multithreaded with hardly any effort (e.g. Java).

On multithreading, C++ and Java are pretty much the same. Either you do a simple design and it's easier than single-threaded, OR you don't and your life sucks. On multi-GPUs... that's sort of like the Apple dev choice between Metal and OpenGL.

Whaa!? No, Java < 8 really does make it an order of magnitude easier, and Java 8 improves on it further with that lambda stuff. It's no panacea but it's far, far easier than attempting to do it in a language with no built-in support for it at all. It could of course get even better (some of the directions Rust is going in are very interesting) but until we start seeing 8- and 16+ core desktops as the "standard" no-one's going to really bother.

To be fair, C++11 and C11 have added standard support for threads and atomics -- they even managed to do it reasonably well. Not perfect, but certainly not as painful as it was.

Prior to the 2011 standards, you needed to use a platform-specific threading library, which in the real world meant pthreads and/or the Windows thread APIs. If you don't need to support both POSIX and Windows, I personally don't find it *that* painful. Cross-platform, though, is a PITA. Simpler stuff works fine with one of the Windows ports of pthreads, but anything that gets clever means writing an abstraction layer.

You could use one of the cross-platform threading libraries, of course (e.g., SDL), but all that I've used have been less than ideal.

All that said, Java's concurrency support in 1.7 is ahead of even the current C++11 stuff, and, with Java 8, the gap has grown.
