Recommended Posts

Vertex and geometry shaders, driven by uniforms, transform this static data into dynamic 3D data (projection onto a sphere plus scaling).

The user can select a node with a mouse click: we detect the selected node by rendering to a framebuffer with a fragment shader that encodes the vertexId corresponding to the node. Now we would like to rotate the camera to bring this node in front of us, at a given size.

So far we see only two ways:

- iteratively move the camera and use pixel tracking to check the result, which requires reading the framebuffer texture back to the CPU and doing a very slow CPU search to compute the new node center and size

- translate the whole vertex & geometry shader GLSL code back into C++ to compute the node's real coordinates, then do the math to find the camera position that achieves our goal.

This seems like basic functionality, but we can't find the right technique.


To know how much to rotate the camera (classical X and Y rotations plus an eye translation, looking at some fixed point), we need to determine in some way the distance from the camera to the object. But since the object's 3D coordinates are computed on the GPU (vertex & geometry shaders), we have no information at the CPU level...

From a mouse click, we are only able to identify the object (framebuffer rendering + readPixels + decoding of the vertexId encoded at the fragment level).

Thanks


I might be a little late on this, but if the problem still exists, you could also read the depth value of the picked pixel. Together with the pixel coordinates and the inverse projection and view matrices, you can then reconstruct the 3D world coordinates of that pixel.