Compute shaders on OS X

Hi all,

I know that OS X Mavericks has only GL 4.1 support, but is anyone aware of hidden support for compute shaders, or even a hack-ish way to use them?

What I'd like to know is whether a Mac running Mavericks with an NVIDIA 6xx GPU can do some compute magic. I assume that since Apple uses NVIDIA's driver, the functionality is probably already there but just not exposed in the OS X public headers (?!?).

Thanks in advance!

PS: When I write "compute shader support" I also mean support for shader storage buffer objects, shader storage blocks, etc.

Compute shaders are not supported on OS X right now. The drivers are not from NVIDIA - Apple develops its own driver front end (which results in a lot of pain). In the past there were sometimes ways to get access to unreleased features (e.g. some GL 4 features on late OS X 10.7), but those were buggy and unreliable. As mentioned already, OpenCL might be a better bet for you.

If you take a look at GL 4.1 support, OS X passes only 43% of the tests; for 4.0 it passes 64%. Even if g-truc is telling only half the truth (which I highly doubt), the pass rate on OS X is pretty low.

*: I am only investigating the possibility of porting a 3D engine to OS X.

> If you take a look at GL 4.1 support MacOS passes only 43% of the tests. For 4.0 passes 64% of the tests. Even if g-truc is telling half truth (which I highly doubt) the pass rate of MacOS is pretty low.

The other possibility - shocking, I realize - is that his tests are buggy.

Consider his failing primitive-instancing shaders for GL 4.1. They use this GLSL syntax:

"layout(location = COLOR) flat out vec4 VertColor;"

As it happens, it's against the GLSL 4.1 spec to use a flat interpolation qualifier in conjunction with a layout qualifier. This is why a number of the OS X tests are failing in his suite.

OS X adheres slavishly to the spec here, for better or worse, and as it happens the other implementations should be the ones failing. I mention this case in particular because I was bitten by it recently and was shocked to discover that OS X was behaving correctly. A far more observant person than I pointed this out to me recently, and noted that this is sorted out in GLSL 4.2.
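For what it's worth, a sketch of a 4.10-conformant workaround: if you can live with name-based interface matching instead of explicit locations, just drop the layout qualifier and keep the interpolation qualifier (the commented-out line is the combination a strict 4.10 compiler rejects):

```glsl
#version 410

// Rejected by a strict GLSL 4.10 compiler:
// layout(location = 0) flat out vec4 VertColor;

// Accepted: keep the flat interpolation, let the output match by name
flat out vec4 VertColor;
```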

Let me add, just for clarification, the part of the spec that specifies this.

First, grab a copy of the GLSL 4.10.6 clean pdf spec from opengl.org.

Now open the pdf and go to page 149. This is the middle of the BNF for the GLSL syntax in 4.1.

You'll note that, annoyingly, there is no valid production for "type_qualifier:" that combines both a layout_qualifier and an interpolation_qualifier.

If you want to see how this was fixed in GLSL 4.20, grab the GLSL 4.20.11 clean spec pdf and go to page 167. "type_qualifier:" can now be any combination of "single_type_qualifier:", which means the language works like you would expect in 4.20 but not 4.10. Bug fixed!
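In other words, the very line from the failing test becomes legal once you target 4.20; a minimal sketch (the location value here is illustrative, not taken from his suite):

```glsl
#version 420

// Valid in GLSL 4.20: single_type_qualifier productions can be combined,
// so a layout qualifier and an interpolation qualifier may appear together.
layout(location = 0) flat out vec4 VertColor;

void main()
{
    VertColor   = vec4(1.0);
    gl_Position = vec4(0.0, 0.0, 0.0, 1.0);
}
```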