I have been battling this for hours. I've scoured the internet and past forum posts, but nothing I have found or tried has helped so far. I am just trying to get a very minimal working example of a tessellation shader running, but am having no luck.

I am in OpenGL 4.5 on Windows, C++ in VS. I use SFML, GLEW and GLM libraries.

I have no GL errors, no shader compile errors, no shader linker errors in my log. If I swap the shader (game.GL.shaderTess) for just a standard one with only vertex and fragment and change GL_PATCHES to GL_TRIANGLES it works as expected.

I imagine there is probably some simple mistake in there somewhere, but I can't figure out what it is for the life of me.

Note that this will flip the ordering relative to GL_TRIANGLES. This may be an issue if GL_CULL_FACE is enabled. Try changing "cw" to "ccw".

Also: many tessellation techniques require the transformations (or at least the projection transformation) to be applied after tessellation.
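As a sketch of that second point, a minimal tessellation evaluation shader for triangles that applies the projection after tessellation could look like this (the uniform name projectionMatrix is an assumption here, not necessarily what your code uses):

```glsl
#version 450 core
layout(triangles, equal_spacing, ccw) in;

// Assumed uniform name; substitute whatever your code binds.
uniform mat4 projectionMatrix;

void main()
{
    // Interpolate the (model-view transformed) positions written
    // by the earlier stages using the barycentric gl_TessCoord...
    vec4 pos = gl_TessCoord.x * gl_in[0].gl_Position
             + gl_TessCoord.y * gl_in[1].gl_Position
             + gl_TessCoord.z * gl_in[2].gl_Position;

    // ...then apply the projection after tessellation.
    gl_Position = projectionMatrix * pos;
}
```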

Acumen

12-13-2017, 09:22 AM

I can verify that culling is not on with the other shaders that do render. I did try it anyway but no dice.

Wouldn't this shader be applying the tessellation to already transformed vertices, so it wouldn't need an additional transformation? I tried moving the transformation from the vertex shader to the tessellation evaluation shader, but nothing changed then either.

I am going to verify all the inputs going to the draw call. I probably should have done that earlier, but it worked fine with the standard shader so I didn't think to. Out of ideas though.

Just for clarification: I use the word quad in my variable names, but everything GL-related is a regular list of triangles; I only use the quad form internally.

Acumen

12-14-2017, 04:23 PM

OK I fixed it, not sure why the code above wasn't working since it seemed to work for the tutorials that wrote it. Can someone maybe shed light?

What I am pretty sure it ended up being is that the shader compiler was optimizing out
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0f); from the vertex shader, because
gl_Position = (gl_TessCoord.x * gl_in[0].gl_Position + gl_TessCoord.y * gl_in[1].gl_Position + gl_TessCoord.z * gl_in[2].gl_Position); was being set in the tessellation evaluation shader, even though that line relied on the value I had already written to gl_Position in the vertex shader. Does that sound right?

My first problem was that I was still binding the attributes to the original non-tessellated shader after I had bound them to the tessellated one; mentally I was still modifying the original, but I had switched to an A/B setup when I started having problems. That led me to read the errors from the old attributes, which were clean, instead of the new ones. Once I figured that out, I found an error in the bind: the attribute was coming back as not found, and since it was present in my code, that means it was optimized out. I instead passed the position variable down the pipeline so that gl_Position was only written once, and it started working no problem.

GClements

12-14-2017, 05:21 PM

What I am pretty sure it ended up being is that the shader compiler was optimizing out
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0f); from the vertex shader, because
gl_Position = (gl_TessCoord.x * gl_in[0].gl_Position + gl_TessCoord.y * gl_in[1].gl_Position + gl_TessCoord.z * gl_in[2].gl_Position); was being set in the tessellation evaluation shader, even though that line relied on the value I had already written to gl_Position in the vertex shader. Does that sound right?

If that's happening, it's a bug in the implementation. If a tessellation evaluation shader is present, it's supposed to set gl_Position, and there wouldn't be any point in having gl_Position in gl_in[] if it couldn't be used.

My first problem was that I was still binding the attributes to the original non-tessellated shader after I had bound them to the tessellated one; mentally I was still modifying the original, but I had switched to an A/B setup when I started having problems. That led me to read the errors from the old attributes, which were clean, instead of the new ones. Once I figured that out, I found an error in the bind: the attribute was coming back as not found, and since it was present in my code, that means it was optimized out. I instead passed the position variable down the pipeline so that gl_Position was only written once, and it started working no problem.
A common problem when using geometry or tessellation shaders is forgetting that variables don't automatically get "passed through" from the vertex shader to the fragment shader. The intermediate shaders have to explicitly copy inputs to outputs. Also, as you can't use the same name for both input and output variables, you need to either have the fragment shader use different names depending upon whether there's a geometry/tessellation shader present, or use interface blocks, or use location-based binding.
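As a sketch of that pattern (all user-defined variable names here are illustrative, not taken from the original code), a tessellation control shader that explicitly passes a varying through under a new name might look like:

```glsl
#version 450 core
// Tessellation control shader.
// Assumes the vertex shader declared:  out vec2 vTexCoord;
layout(vertices = 3) out;

in  vec2 vTexCoord[];     // must match the vertex shader's output name
out vec2 tcTexCoord[];    // different name (or use an interface block)

void main()
{
    // Built-ins are not passed through automatically either:
    gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;
    tcTexCoord[gl_InvocationID] = vTexCoord[gl_InvocationID];

    // Tessellation levels are per-patch, so write them once.
    if (gl_InvocationID == 0) {
        gl_TessLevelOuter[0] = 4.0;   // illustrative levels
        gl_TessLevelOuter[1] = 4.0;
        gl_TessLevelOuter[2] = 4.0;
        gl_TessLevelInner[0] = 4.0;
    }
}
```

The TES would then declare `in vec2 tcTexCoord[];` and interpolate it the same way it interpolates gl_in[].gl_Position.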

Acumen

12-15-2017, 01:24 AM

A common problem when using geometry or tessellation shaders is forgetting that variables don't automatically get "passed through"

Ah! OK thanks I found it. I was missing
gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position; in the tessellation control shader. I had assumed that once gl_Position was set it was a special case in memory, so I didn't need to pass it through. Seems like the kind of thing that could easily work that way. I guess maybe there are ways to take advantage of the fact that it doesn't?

GClements

12-15-2017, 03:59 AM

Ah! OK thanks I found it. I was missing
gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position; in the tessellation control shader. I had assumed that once gl_Position was set it was a special case in memory, so I didn't need to pass it through. Seems like the kind of thing that could easily work that way. I guess maybe there are ways to take advantage of the fact that it doesn't?

One of the reasons it doesn't work that way is that there's no requirement for the number of input vertices to equal the number of output vertices. In fact, "output vertices" is an abuse of terminology: it's just the number of TCS invocations, nothing more or less. Splitting the TCS's workload into multiple invocations lets the inherent parallelism of GPU architectures be exploited. It's not required (or even common) for the non-patch outputs of a TCS to correspond to vertices in any way. E.g. it's fairly typical for a TCS to convert control points to polynomial coefficients, as this simplifies the calculations the TES performs for each generated vertex.
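As a rough sketch of that last technique (untested, with illustrative tessellation levels): a TCS for a cubic Bézier curve can convert the four control points P0..P3 into power-basis coefficients, B(t) = c0 + c1*t + c2*t^2 + c3*t^3, once per patch, so the TES only needs one Horner evaluation per generated vertex.

```glsl
#version 450 core
// Tessellation control shader: 4 Bezier control points in,
// polynomial coefficients out as per-patch outputs.
layout(vertices = 4) out;

patch out vec4 coeff[4];   // c0 + c1*t + c2*t^2 + c3*t^3

void main()
{
    if (gl_InvocationID == 0) {
        vec4 p0 = gl_in[0].gl_Position, p1 = gl_in[1].gl_Position;
        vec4 p2 = gl_in[2].gl_Position, p3 = gl_in[3].gl_Position;
        // Expansion of the cubic Bernstein basis into the power basis.
        coeff[0] = p0;
        coeff[1] = 3.0 * (p1 - p0);
        coeff[2] = 3.0 * (p0 - 2.0 * p1 + p2);
        coeff[3] = p3 - 3.0 * p2 + 3.0 * p1 - p0;
        gl_TessLevelOuter[0] = 1.0;    // one isoline
        gl_TessLevelOuter[1] = 16.0;   // 16 segments along the curve
    }
}

/* Matching tessellation evaluation shader (separate file in practice):

layout(isolines) in;
patch in vec4 coeff[4];

void main()
{
    float t = gl_TessCoord.x;
    // Horner's scheme: cheaper than re-evaluating the Bernstein basis.
    gl_Position = coeff[0] + t * (coeff[1] + t * (coeff[2] + t * coeff[3]));
}
*/
```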