Projecting screen space texture onto UVs with GL 4.5

In a deferred rendering path, I have a plane with a default black texture, and a screen-space texture. I want to permanently "stamp" the screen-space texture into the plane's black texture from the current camera's viewpoint (eye at vec3(5.f), looking at vec3(0.f)). I've tried to use image load/store functions with image2D and regular texture() lookups, to no avail. I've also tried transforming the UV coordinates by the View, Projection, and ModelView matrices. The texture should appear mirrored horizontally since I'm looking through the screen-space texture, as seen here:

Should the object space UVs be transformed to a different space to match the camera? If so, which space and why?

It's not clear to me exactly what you're trying to do. It sounds like you're trying to render one texture into another, but it's not clear what the problem is.

You're talking a lot about UVs, but if you're trying to render one texture onto the other, then the only UVs that are relevant are the UVs for the triangle(s) that you're mapping the source texture onto. And those UVs should just represent the portion of the source texture you want to copy. So if you're rendering the whole texture as a quad, then the texture coordinates should be the four corners of texture-coordinate space: (0, 0), (0, 1), (1, 0) and (1, 1).

What matters for how it appears in the destination texture is not the source UVs, but the source positions. After all, you're just rendering. Whether to a texture or to the screen, where things get rendered depends on the positions of the vertices that make up the triangles.

So the result you get depends on what happens to the vertex positions.

Originally Posted by Alfonse Reinheart

It's not clear to me exactly what you're trying to do. It sounds like you're trying to render one texture into another, but it's not clear what the problem is.

A similar situation would be in a projection painting application like Mari where you paint or load an image into the "paint buffer", then press B to bake that data into the selected object's current texture.

Originally Posted by Alfonse Reinheart

What matters for how it appears in the destination texture is not the source UVs, but the source positions. After all, you're just rendering. Whether to a texture or to the screen, where things get rendered depends on the positions of the vertices that make up the triangles.

So the result you get depends on what happens to the vertex positions.

So I should reconstruct P in my g-buffer and multiply that by the object-space UVs? Right now the result of the projection seems to not have the MVP matrix applied to it... this is the source of my confusion.

A similar situation would be in a projection painting application like Mari where you paint or load an image into the "paint buffer", then press B to bake that data into the selected object's current texture.

By this analogy, you would appear to be attempting to take a texture and copy it to another texture, based on projecting the source texture onto an object that is mapped to the destination texture.

That's... complicated. Doable, but complicated.

The key here is to remember what you're rendering. You have the following things:

* A source texture.
* A destination texture.
* An object.
* A projection of the source texture onto that object. So you have some matrices and other settings to do that projection.
* A mapping from the object's vertices to locations in the destination texture (aka: your object has texture coordinates).

So, your goal is to render into the destination texture. The question is this: what triangles do you render?

You can't render the positions of the object, as they are meaningless. Your goal is to modify the destination texture based on a projection of a texture onto the vertex positions. Because you're rendering to the destination texture, the triangle you actually rasterize needs to represent where that triangle is in the texture's space.

So the gl_Position you write from your vertex shader is not based on the position of the object in the world. It's based on the "position" of the object in the destination texture. And therefore, gl_Position should be based on the texture coordinate, the mapping of vertex positions to locations in the destination texture.

You'll still need the model's positions, but they are not part of computing gl_Position. Your gl_Position values are the texture coordinates, remapped to fit the [-1, 1] range of the destination texture's viewport.
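As an untested sketch (attribute locations and names are my own assumptions, adjust to your vertex format), the bake-pass vertex position could be computed like this:

```glsl
#version 450 core
layout(location = 0) in vec3 aPosition;  // object-space position (used later for the projective lookup)
layout(location = 1) in vec2 aTexCoord;  // UVs into the destination texture

void main()
{
    // Rasterize the triangle where it lives in the destination texture:
    // remap the UVs from [0, 1] to the [-1, 1] NDC range.
    gl_Position = vec4(aTexCoord * 2.0 - 1.0, 0.0, 1.0);
}
```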

The next question is how to fetch the right data from the source texture. To do that, you need to perform the normal texture-projection steps, based on the positions of your vertices. So you do the usual per-vertex transforms, but using the projection camera's viewpoint and an appropriate projection matrix. Also, you don't write the result to gl_Position; instead, you pass this 4-component value to the fragment shader as an interpolated output. From there, you use the textureProj functions to do projective texturing.

Then just write the fetched value to the fragment shader output.
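Putting those two steps together, a sketch might look like the following (untested; uProjectorVP, uModel, and the other names are placeholders for your own uniforms):

```glsl
#version 450 core
// Vertex shader. uProjectorVP = the projector camera's projection * view
// matrix; uModel = the object's model matrix.
layout(location = 0) in vec3 aPosition;
layout(location = 1) in vec2 aTexCoord;

uniform mat4 uModel;
uniform mat4 uProjectorVP;

out vec4 vProjCoord;

void main()
{
    // Position in the destination texture, not in the world.
    gl_Position = vec4(aTexCoord * 2.0 - 1.0, 0.0, 1.0);

    // The bias matrix folds the [-1, 1] -> [0, 1] remap into the
    // coordinate, so textureProj samples the source texture directly.
    const mat4 bias = mat4(0.5, 0.0, 0.0, 0.0,
                           0.0, 0.5, 0.0, 0.0,
                           0.0, 0.0, 0.5, 0.0,
                           0.5, 0.5, 0.5, 1.0);
    vProjCoord = bias * uProjectorVP * uModel * vec4(aPosition, 1.0);
}
```

```glsl
#version 450 core
// Fragment shader: textureProj divides by vProjCoord.w before sampling.
in vec4 vProjCoord;
uniform sampler2D uSourceTex;
out vec4 fragColor;

void main()
{
    fragColor = textureProj(uSourceTex, vProjCoord);
}
```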

You also need to keep in mind backface culling. You probably don't want faces that face away from the projecting texture to get painted. And you can't rely on the vertex positions for that, since they're texture coordinates. So instead, you need to cull triangles based on their normals. You could use a geometry shader for that. Alternatively, you can do the dot product between the normal and the camera viewing direction in the fragment shader, discarding the fragment if the dot product is positive.
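The fragment-shader variant of that cull might look like this sketch (untested; vNormal and uProjectorDir are assumed names for a world-space normal passed from the vertex shader and the projector's world-space view direction):

```glsl
#version 450 core
in vec4 vProjCoord;
in vec3 vNormal;             // world-space normal from the vertex shader

uniform sampler2D uSourceTex;
uniform vec3 uProjectorDir;  // world-space direction the projector looks along

out vec4 fragColor;

void main()
{
    // Fixed-function backface culling can't help here, because the
    // rasterized positions are texture coordinates. Cull manually:
    // a positive dot product means the face points away from the projector.
    if (dot(normalize(vNormal), normalize(uProjectorDir)) > 0.0)
        discard;

    fragColor = textureProj(uSourceTex, vProjCoord);
}
```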

I don't really know what any of this has to do with deferred rendering, g-buffers, or anything of that nature. You're just trying to copy part of one texture into another, based on a projection onto a surface that maps into the destination texture.

Originally Posted by Alfonse Reinheart

You also need to keep in mind backface culling. You probably don't want faces that face away from the projecting texture to get painted.

And you may also want to use a depth pass so that the texture doesn't get applied to camera-facing surfaces which are occluded by nearer surfaces. The principle behind that is essentially the same as shadow mapping.

And you may also want to use a depth pass so that the texture doesn't get applied to camera-facing surfaces which are occluded by nearer surfaces. The principle behind that is essentially the same as shadow mapping.

That's going to be rather difficult when rendering into a texture. Remember: the positions are locations in the texture being rendered to. And the texture mapping probably doesn't overlap.

You could do some manual depth buffering with image load/store, but the automatic depth buffer won't work for this approach.
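A simpler manual alternative, borrowing directly from shadow mapping, is to render a depth map from the projector's viewpoint in a prior pass and compare against it in the bake fragment shader. A sketch (untested; uProjectorDepth is an assumed name for that depth map, and vProjCoord is the biased projector-space coordinate):

```glsl
// Inside the bake pass's fragment shader:
vec3 p = vProjCoord.xyz / vProjCoord.w;            // projector-space [0, 1] coords
float nearestDepth = texture(uProjectorDepth, p.xy).r;
if (p.z - 0.001 > nearestDepth)                    // small bias vs. self-occlusion
    discard;                                       // occluded: don't stamp this fragment
```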

I was successful in projecting a texture onto an object by setting the destination object's UVs as gl_Position and then doing textureProj if NdotV > 0.

However, I now have a problem with texture seams being extremely visible when sampling this texture on an object. This is the same ABJ tex with green letters and a white background, just projected onto a teapot, saved as a texture, and then applied. You can see all my UV seams. I'm wondering whether this is a problem with the shader or with incorrect sampling parameters when I initialized the texture?

Also, I found that changing the projection matrix settings when using glm::perspective - specifically the FOV - will change the projection and cause the projected texture to grow or shrink on the object. Have I forgotten to account for this somewhere?

Also, I found that changing the projection matrix settings when using glm::perspective - specifically the FOV - will change the projection and cause the projected texture to grow or shrink on the object. Have I forgotten to account for this somewhere?

It sounds like you're using different projections at different times.

To bake a rendered image into a texture, you need to generate texture coordinates by transforming object coordinates using the same transformations used for rendering the image. The model transformation can be used to position the object, but the transformation from world space to screen space (i.e. the view and projection transformations) needs to match.
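Concretely (an untested sketch; the uniform names are placeholders), the bake pass should build its projective texture coordinate from exactly the same view and projection matrices that were used when rendering the source image:

```glsl
// Rendering pass (produced the source image):
//   gl_Position = uProj * uView * uModel * vec4(aPosition, 1.0);
//
// Bake pass: the projective coordinate must use the SAME uProj and uView.
// uModel may differ if the object has moved, but view/projection must match.
vec4 clip  = uProj * uView * uModel * vec4(aPosition, 1.0);
vProjCoord = bias * clip;   // bias remaps [-1, 1] to [0, 1] for textureProj
```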