
Some info about what is going on:
http://www.songho.ca/opengl/gl_projectionmatrix.html

In a nutshell, 3D geometry in world space is projected onto a 2D view plane, defined in eye coordinates. So imagine you have a 3D cube. The cube is decomposed into triangles, and the triangles are clipped against the view frustum*. The visible (and clipped) triangle corners are projected onto the 2D view plane. The Z component of each triangle vertex is retained as a measure of distance from the view origin (the camera), to be used later for hidden-surface elimination. The resulting points in eye coordinates are then transformed into normalised device coordinates (NDC). Generally the world -> eye -> NDC transform can be done in a single step via the projection matrix, by combining the projection and the normalisation math.
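As a worked summary of that chain (a sketch using column vectors; V and P stand for the view and projection matrices, which is the usual OpenGL convention rather than anything specific to this post):

\[ \mathbf{p}_{\text{clip}} = P\,V\,\mathbf{p}_{\text{world}}, \qquad (x_{\text{ndc}},\, y_{\text{ndc}},\, z_{\text{ndc}}) = \left( \frac{x_{\text{clip}}}{w_{\text{clip}}},\, \frac{y_{\text{clip}}}{w_{\text{clip}}},\, \frac{z_{\text{clip}}}{w_{\text{clip}}} \right) \]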

The purpose of using normalised device coordinates is to make the incoming values unit-less and proportional to one another. The graphics card can map those normalised coordinates to whatever internal metric the hardware uses, generally pixels, as defined by the viewport size. Depth (Z) values are usually normalised to the range [0, 1], which is mapped over a particular precision range (16, 24, or 32-bit, either float or integer), as dictated by the hardware's capabilities.
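For reference, the standard OpenGL viewport mapping from NDC to window coordinates, for a viewport of width w and height h at origin (x0, y0) with the default depth range of [0, 1], is:

\[ x_{\text{win}} = \frac{w}{2}\,x_{\text{ndc}} + x_0 + \frac{w}{2}, \qquad y_{\text{win}} = \frac{h}{2}\,y_{\text{ndc}} + y_0 + \frac{h}{2}, \qquad z_{\text{win}} = \frac{z_{\text{ndc}} + 1}{2} \]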

* Note: the clipping is usually done after transforming the triangles into eye coordinates, because clipping in 2D is more efficient in some cases; conceptually, though, the result is the same.



Thanks a lot. But how does the last row of the perspective projection matrix become (0, 0, -1, 0)? It is not clear to me.



The view frustum looks down the -Z direction in the eye coordinate system, so the cube model from my earlier example sits at a negative offset along the Z axis relative to the view origin. To end up with depth values sorted into the range [0, 1], we must negate the incoming Z values, and that is exactly what the last row does: it copies the negated eye-space Z into the clip-space W component, which the perspective divide then divides by.
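To make that concrete, here is the standard OpenGL perspective matrix as derived on the songho.ca page linked above (l, r, b, t are the near-plane extents, n and f the near/far distances):

\[ \begin{pmatrix} x_c \\ y_c \\ z_c \\ w_c \end{pmatrix} = \begin{pmatrix} \frac{2n}{r-l} & 0 & \frac{r+l}{r-l} & 0 \\ 0 & \frac{2n}{t-b} & \frac{t+b}{t-b} & 0 \\ 0 & 0 & -\frac{f+n}{f-n} & -\frac{2fn}{f-n} \\ 0 & 0 & -1 & 0 \end{pmatrix} \begin{pmatrix} x_e \\ y_e \\ z_e \\ 1 \end{pmatrix} \]

The last row produces w_c = -z_e: its whole job is to move the (negated, hence positive) eye-space depth into W, so that the subsequent divide by W performs the perspective foreshortening.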

Similar Content

I assumed that if a shader is computationally expensive, then execution is just slower. But running the following GLSL fragment shader instead just crashes
#version 330 core

out vec4 fragColor;

void main()
{
    int sum = 0;
    // Roughly 200,000 x 200,000 = 4e10 iterations per fragment: far
    // longer than the driver will let a single draw call run.
    for (float x = 0.0; x < 10.0; x += 0.00005)
    {
        for (float y = 0.0; y < 10.0; y += 0.00005)
        {
            sum++;
        }
    }
    fragColor = vec4(1.0, 1.0, 1.0, 1.0);
}
with an unhandled exception in nvoglv32.dll. Are there any hard limits on the number of steps or the amount of time a shader can take before it is shut down? I was thinking about implementing some time-intensive computation in shaders, where a single frame would take on the order of seconds to compute. Is that possible? Thanks.
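What usually causes this (an assumption about the poster's setup, but a very common one on Windows) is the OS GPU watchdog: if one submission keeps the GPU busy for more than about two seconds, Windows resets the driver, which surfaces as a crash inside nvoglv32.dll. The standard workaround is to split the work into several smaller submissions so that no single one exceeds the timeout. A minimal sketch, assuming a current OpenGL context and a full-screen-quad pass (the tile count and the width/height variables are illustrative):

// Split one expensive full-screen pass into vertical tiles, one draw
// call per tile, so no single submission runs long enough to trip the
// driver's watchdog timer.
const int tiles = 16;
glEnable(GL_SCISSOR_TEST);
for (int i = 0; i < tiles; ++i)
{
    glScissor((width / tiles) * i, 0, width / tiles, height);
    glDrawArrays(GL_TRIANGLES, 0, 6); // full-screen quad, clipped to this tile
    glFinish();                       // drain the GPU before queuing the next tile
}
glDisable(GL_SCISSOR_TEST);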

There are studios selling applications which just copy 3D graphics content and regenerate it in another, new window, especially for CAVE virtual-reality experiences. The user opens Revit, or CAD software, or any other 3D application and loads a model; then, when the user selects the rendered window, the VR application copies the 3D model information from the OpenGL window.
I got the clue that the VR application replaces the Windows opengl32.dll file. How is this possible? How can we copy the 3D content from the current OpenGL window?
Anyone, please help me with how to go further to create an application like a VR CAVE.
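The usual mechanism behind that trick is a proxy DLL: a stand-in opengl32.dll, placed next to the target application's executable, that exports the same entry points, records the arguments it sees, and forwards every call to the real driver DLL. A minimal sketch of one intercepted function (the recording logic is only hinted at; a real proxy must export every entry point the target uses, typically generated from a .def file):

// proxy_opengl32.cpp -- sketch of one forwarded OpenGL entry point.
#include <windows.h>

typedef unsigned int GLenum;
typedef int          GLint;
typedef int          GLsizei;

// Load the real driver by its full system path; loading "opengl32.dll"
// by name would just resolve back to this proxy.
static HMODULE realGL = LoadLibraryA("C:\\Windows\\System32\\opengl32.dll");

extern "C" __declspec(dllexport) void APIENTRY glDrawArrays(GLenum mode, GLint first, GLsizei count)
{
    typedef void (APIENTRY *Fn)(GLenum, GLint, GLsizei);
    static Fn real = (Fn)GetProcAddress(realGL, "glDrawArrays");
    // A CAVE-style tool would record the currently bound vertex data
    // here before letting the host application draw as normal.
    real(mode, first, count);
}

Note that modern applications fetch most entry points through wglGetProcAddress, so that function has to be intercepted as well and its returned pointers wrapped.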

I am trying to build an OpenGL 2D GUI system (yeah, I know I should not be reinventing the wheel, but this is for educational and some other purposes only).
I have built GUI systems before using 2D APIs such as the HTML/JS canvas, where I can directly match mouse coordinates to the actual graphics coordinates, with additional computation for screen size/ratio/scale of course.
Now I want to port it to OpenGL. I know that to render a 2D object in OpenGL we specify coordinates in clip space or use an orthographic projection. Here is what I need help with:
1. What is the right way of rendering the GUI? Is it by drawing in clip space or by switching to an orthographic projection?
2. From screen coordinates (top left is 0,0 and bottom right is width,height), how can I map the mouse coordinates to OpenGL 2D so that mouse events such as button clicks work, taking the current screen size into account of course? (See the sketch after this list.)
3. When the screen size/dimensions differ, how do I handle this? In my previous JavaScript 2D engine using canvas, I just kept my own working coordinates, copied (bitblt) my working canvas to the screen canvas, and scaled the mouse coordinates from there. How do I work with multiple screen sizes in OpenGL (more of an OpenGL ES question)?
Lastly, if you know of any books, resources, links, or tutorials that discuss this, please let me know; I found one on the marekknows OpenGL game engine website, but it is not free, and I have had no luck finding resources on Google about writing your own OpenGL GUI framework.
If there are none available online, just let me know what I need to look into for OpenGL and I will study the topics one by one to make it work.
Thank you, and I am looking forward to positive replies.
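For question 2, the mapping is a linear rescale, and it answers question 3 as well, since only the current window size enters the formula. A minimal sketch (the parameter names are illustrative, assuming mouse coordinates with a top-left origin as described):

// Convert a mouse position reported with a top-left origin (y down)
// into OpenGL normalised device coordinates (centre origin, y up).
struct NDC { float x, y; };

NDC mouseToNDC(float mouseX, float mouseY, float windowWidth, float windowHeight)
{
    NDC out;
    out.x = 2.0f * mouseX / windowWidth - 1.0f;   // [0, w] -> [-1, +1]
    out.y = 1.0f - 2.0f * mouseY / windowHeight;  // [0, h] -> [+1, -1] (flip y)
    return out;
}

For question 1, the two options amount to the same thing: an orthographic projection such as glOrtho(0, width, height, 0, -1, 1) simply bakes this rescale into a matrix, which lets the GUI code keep working directly in pixel coordinates so the mouse position needs no conversion at all.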

I have a few beginner questions about tessellation that I really have no clue about.
The OpenGL wiki doesn't seem to say anything about the details.

What is the relationship between the TCS layout(...) out and the TES layout(...) in?
How does the tessellator know how the control points are organized?
E.g. if the TES input requests triangles, but the TCS can output N vertices,
what happens in this case?
In this article,
http://www.informit.com/articles/article.aspx?p=2120983
the isoline example has TCS out = 4 but TES in = isolines,
and gl_TessCoord is only a single coordinate. So which ones are the control points?
How does the tessellator build primitives?
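A sketch of how the pieces fit together (assuming a current OpenGL 4.x context; the central point is that the control-point count and the tessellation domain are independent settings):

// Three independent settings: patch size in, TCS vertex count out,
// and the abstract domain the tessellator subdivides.
glPatchParameteri(GL_PATCH_VERTICES, 4);  // control points per incoming patch
glDrawArrays(GL_PATCHES, 0, 4);           // submit one 4-point patch

// In the shaders (GLSL, shown as comments):
//   TCS: layout(vertices = 4) out;  // control points per *output* patch
//   TES: layout(isolines) in;       // domain the tessellator subdivides
//
// The tessellator itself never reads the control points. It only
// subdivides the abstract domain (isolines here) and invokes the TES
// once per generated vertex with a gl_TessCoord inside that domain.
// The TES is then free to read all 4 control points (gl_in[0..3]) and
// combine them however it likes, e.g. evaluating a cubic curve at
// gl_TessCoord.x, which is why TCS out = 4 and TES in = isolines do
// not have to "match".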

I've been developing a 2D engine using SFML + ImGui.
Here you can see an image
The editor is rendered using ImGui, and the scene window is an sf::RenderTexture where I draw the GameObjects, which is then converted to an ImGui::Image to render it in the editor.
Now I need to create a 3D engine during this year in my bachelor's degree, but using SDL2 + ImGui, and I want to recreate what I did with the 2D engine.
I've managed to render the editor like I did in the 2D engine, using the example that comes with ImGui.
3D Editor preview
But I don't know how to create an equivalent of sf::RenderTexture in SDL2, so I can draw the 3D scene there and convert it to an ImGui::Image to show it in the editor.
If you can provide code, that would be even better. And if you want me to provide any specific code, tell me.
Thanks!
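Since the 3D engine will be driving SDL2 with an OpenGL context, the usual equivalent of sf::RenderTexture is an OpenGL framebuffer object: render the scene into a texture, then hand that texture's id to ImGui::Image. A minimal sketch (the 1280x720 size and variable names are illustrative, assuming an OpenGL 3.x+ context already created through SDL2):

// One-time setup: a colour texture plus a depth renderbuffer attached
// to a framebuffer object (roughly what sf::RenderTexture wraps).
GLuint sceneTex = 0, depthRBO = 0, sceneFBO = 0;

glGenTextures(1, &sceneTex);
glBindTexture(GL_TEXTURE_2D, sceneTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1280, 720, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glGenRenderbuffers(1, &depthRBO);
glBindRenderbuffer(GL_RENDERBUFFER, depthRBO);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, 1280, 720);

glGenFramebuffers(1, &sceneFBO);
glBindFramebuffer(GL_FRAMEBUFFER, sceneFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, sceneTex, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, depthRBO);
glBindFramebuffer(GL_FRAMEBUFFER, 0);

// Every frame: draw the 3D scene into the FBO instead of the window...
glBindFramebuffer(GL_FRAMEBUFFER, sceneFBO);
glViewport(0, 0, 1280, 720);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// ... render the GameObjects here ...
glBindFramebuffer(GL_FRAMEBUFFER, 0);

// ...then show the colour texture inside an ImGui window.
ImGui::Begin("Scene");
ImGui::Image((ImTextureID)(intptr_t)sceneTex, ImVec2(1280, 720),
             ImVec2(0, 1), ImVec2(1, 0)); // flipped V: GL textures are bottom-up
ImGui::End();

If you use SDL_Renderer instead of raw OpenGL, the matching feature is SDL_CreateTexture with SDL_TEXTUREACCESS_TARGET plus SDL_SetRenderTarget, but for a 3D engine the FBO route above is the standard one.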