So I've been learning OpenGL for the past two weeks and have finally begun to understand the first few chapters of the book Modern 3D Graphics Programming, thanks to a few very helpful members of this forum. I've begun to learn about shaders; however, I always find myself using fixed-function pipeline (FFP) functions which are either deprecated or removed, for example the calls I use when setting up a perspective camera.

I've done a little Googling and found that the alternative to this in modern OpenGL is handling your own matrices and passing them into your shaders. However, I wouldn't yet know how to do such a thing. What I'm worried about is getting into the habit of using these deprecated functions. Should I just continue reading the book and keep using these FFP functions until I learn the modern alternatives?

Another question I have regards the use of shaders and VBOs. I understand that you can use a fragment shader to set fragment colours, but you can also use VBOs. I have no doubt that using a shader would be the better choice, but would this be the case in every situation? I see a lot of comments describing the use of shaders and VBOs as modern OpenGL, but is there any other use for VBOs except storing vertex data? Are there any limitations to shaders that would force you to use VBOs instead?

I think you mixed something up. Vertex Buffer Objects are a way of passing vertex data into your graphics card's memory. You can pretty easily edit that data once it is in the graphics card's memory. You can pass in vertices, their colors, their texture coordinates, their normals and probably some other stuff I don't yet know about.

Shaders allow you to modify the OpenGL pipeline. What this means is that you can change what OpenGL does for every vertex. For example, if you want every vertex to have the same color, you can do that either by changing the color data in the graphics card's memory, or by writing a shader that sets the color of every vertex sent to OpenGL. In other words, you can do things with shaders that you can't do with anything else. Realistically, you can't do lighting and shading without shaders. Strictly speaking it is possible, but the performance doesn't compare, because you would have to do the lighting and shading on the CPU instead of on the GPU with shaders.
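For example, a minimal fragment shader that colors everything with a single color supplied from your program might look roughly like this (a sketch only; `u_color` is a name I made up, and you'd set it from your Java code with glUniform4f):

```glsl
#version 120
// One color for the whole draw call, supplied by the application.
uniform vec4 u_color;

void main() {
    // Every fragment gets the same color; no per-vertex color data needed.
    gl_FragColor = u_color;
}
```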

If you're just starting out, use the functions that are deprecated. You should get to the point where you don't need them and can replace them with your own code very easily. You will know when to drop the deprecated functions.
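When you do get there, the modern alternative to the fixed-function perspective setup is to build the projection matrix yourself and hand it to a vertex shader as a mat4 uniform (via glUniformMatrix4). A rough sketch in plain Java, with names of my own choosing; the formula matches what gluPerspective computed, in column-major order as OpenGL expects:

```java
public class Projection {
    // fovy is the vertical field of view in degrees, like gluPerspective took.
    public static float[] perspective(float fovy, float aspect, float zNear, float zFar) {
        float f = (float) (1.0 / Math.tan(Math.toRadians(fovy) / 2.0));
        float[] m = new float[16]; // column-major; unset entries stay 0
        m[0]  = f / aspect;                            // x scale
        m[5]  = f;                                     // y scale
        m[10] = (zFar + zNear) / (zNear - zFar);       // depth remap
        m[11] = -1f;                                   // perspective divide by -z
        m[14] = (2f * zFar * zNear) / (zNear - zFar);
        return m;
    }
}
```

The 16 floats would then be uploaded once per frame (or whenever the camera changes) and multiplied with each vertex position in the vertex shader.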

Let me rephrase the second question. What I meant to ask is: you can pass data to the GPU with VBOs, and I realise that it's possible to keep passing in new data with VBOs. However, since shaders can modify that previously passed-in data, why would anyone do that? Are there any limitations to modifying that data with a shader which would mean you'd have to pass new data in with a VBO? Also, what would be the point of passing in colour data in the first place when you can just change it directly on the GPU using a fragment shader?

Every vertex could have a different color. If all the vertices have the same color, then you can just use a shader to do the job. But what will you do if you have 10000 vertices and each one of them has a different color? You can't use a shader for that; you'll just have to pass the color data to the GPU. I still kinda don't understand your question, though.

Shaders are used to transform the incoming vertex data, not to modify it. For example, you can easily transform the vertices to wherever you want them to be.
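As a sketch of what "transform, not modify" means: a vertex shader reads each incoming vertex from the buffer and only changes where it ends up on screen; the buffer contents themselves are untouched (the uniform and attribute names here are my own):

```glsl
#version 120
uniform mat4 u_mvp;        // projection * view * model, computed on the CPU
attribute vec3 a_position; // per-vertex data coming from the VBO

void main() {
    // The VBO data is read-only here; we only transform each
    // vertex on its way through the pipeline.
    gl_Position = u_mvp * vec4(a_position, 1.0);
}
```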

The reason you don't understand is that you couldn't get this far without the deprecated functions. Just don't worry about it; you will understand eventually. Try making a game. You will see how stuff actually works in practice instead of just reading a book. Most of the tutorials for game development in LWJGL don't work in practice. I tried to make a game from one and it didn't work at all. People who make tutorials usually don't have any larger game finished other than a "small test game".

I see shaders as being important mainly for lighting. I know there are lots of other uses for them, but you can create your own lighting models with them instead of just using the standard OpenGL ones, which aren't that great. You can do crazy things with shaders because they are essentially little programs that change data before it is rendered, giving you an almost infinite range of possibilities.

You did understand my question, but having 10000 colours would be unrealistic, so I assume the general answer is to use shaders wherever possible.

I learn by doing, so I've actually been learning by drawing cubes, and I've started creating a game. So far I've created a little art, written a simple model loader, some nice maths/physics classes, etc., but it's the matrix maths I'm having trouble with, and the OpenGL code that I'm still learning. I could implement most of the transformations, rotations, etc. with the FFP, but I want to use shaders since I'll eventually want to add lighting, and I don't want to use deprecated functions. I guess I'll learn the basics of shaders first, then have a go at actually fitting the pieces of the game together. When I've got that sorted I'll probably start my first project thread on here.

I seriously recommend making a game in OpenGL before you move on. If you've really only been learning it for two weeks, then you still have a lot to learn. Learning modern OpenGL on top of that is just confusing.

i.e: Is there any reason to use GL_STREAM_DRAW over shaders?

VBO updating and shaders are two completely different things designed to accomplish completely different goals. VBOs are permanent or temporary memory buffers stored in the GPU's video RAM. Shaders are small programs that can do whatever you want to each vertex, pixel or piece of geometry. They cannot replace each other.

To answer your main question: GL_STREAM_DRAW is very useful for positioning things. A prime example is particle rendering. You have a few thousand particles that you move around each frame, meaning that each frame the updated particle data has to be reuploaded. In this case, GL_STREAM_DRAW is great since that's exactly what you want to do. Note that GL_STREAM_DRAW in itself is only a hint. From what I know most drivers use that value as a hint but the final decision concerning how the VBO should work is done using heuristics based on how you use it. Also, it is possible to run the particle simulation on the GPU, meaning that the data is already in VRAM so no uploading is necessary, but this can be very hard to do when more advanced stuff like collision detection comes into play.
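In code, the per-frame reupload looks roughly like this (a sketch only: the `Particle` class and method names are made up, and the actual GL calls are shown as comments since they need a live OpenGL context):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public class ParticleUpload {
    // A trivial, hypothetical CPU-side particle.
    public static class Particle {
        public float x, y, z;
        public Particle(float x, float y, float z) { this.x = x; this.y = y; this.z = z; }
    }

    // Re-pack every particle's position into a direct buffer, once per frame.
    public static FloatBuffer pack(Particle[] particles) {
        FloatBuffer buf = ByteBuffer.allocateDirect(particles.length * 3 * 4)
                                    .order(ByteOrder.nativeOrder())
                                    .asFloatBuffer();
        for (Particle p : particles) {
            buf.put(p.x).put(p.y).put(p.z);
        }
        buf.flip(); // make the buffer ready for reading
        return buf;
    }
}
// Each frame, after simulating the particles on the CPU, roughly:
//   glBindBuffer(GL_ARRAY_BUFFER, vboId);
//   glBufferData(GL_ARRAY_BUFFER, ParticleUpload.pack(particles), GL_STREAM_DRAW);
// GL_STREAM_DRAW hints to the driver that the data is rewritten every draw.
```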

Shaders on the other hand can be used to complement particle rendering. For instance, geometry shaders can be used to expand points into point sprites (quads facing the screen with a texture). Let's say your particles are simple: They have a position, a color and also need texture coordinates for texturing. Without any shaders, you'd need 4 vertices per particle (one for each corner) each containing a 3D position (3 floats), an RGBA color (4 bytes) and texture coordinates (only 1 or 2 bytes, but padded to 4 bytes for alignment). In total, we need 12+4+4 bytes = 20 bytes per corner, totaling 80 bytes per particle. That's a LOT! It's obvious that we're duplicating the color 4 times for each particle and the texture coordinates are the same for each particle. In this case geometry shaders can help a lot. By instead uploading a single vertex which contains all the information we need to construct a quad in our geometry shader we can both save a lot of memory and offload much work from the CPU.

To construct our quad we'd only need a 3D position (3 floats), a 2D size of the generated sprite/quad (2 floats) and a color (4 bytes) for a total of 12+8+4 = 24 bytes in total. The geometry shader can then output 4 corners, each with generated texture coordinates. You could even throw in a rotation variable and calculate a rotation matrix in the geometry shader.
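A quick sanity check of those numbers, using the per-attribute sizes assumed above (3 floats for position, 4 bytes for RGBA, 4 padded bytes for texture coordinates, 2 floats for the sprite size):

```java
public class ParticleSize {
    static final int POS      = 3 * 4; // 3 floats
    static final int COLOR    = 4;     // RGBA, one byte per channel
    static final int TEXCOORD = 4;     // small, but padded to 4 bytes
    static final int SIZE2D   = 2 * 4; // 2 floats for the quad's width/height

    // Plain quads: 4 full vertices per particle, each with its own
    // position, color and texture coordinates.
    public static int quadBytes() { return 4 * (POS + COLOR + TEXCOORD); }

    // Geometry-shader path: one vertex per particle; texture coordinates
    // are generated in the shader, so they aren't stored at all.
    public static int pointSpriteBytes() { return POS + SIZE2D + COLOR; }
}
```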

TL;DR: VBOs are useful for static data stored permanently on the GPU, and CPU generated data uploaded each frame. Shaders are useful for heavy work like lighting and other effects that need to be recalculated or generated from static data (or a relatively small amount of dynamic data like a matrix, a skeleton, a light's position) each frame.

Using X_DRAW tells the graphics driver how the VBO should be used, and it then decides how best to store it.

STATIC_DRAW is for 1:N drawing (the data is supplied once and drawn any number of times).
STREAM_DRAW is for 1:1 drawing (new data is supplied for each draw call).
DYNAMIC_DRAW is for N:N drawing (new data can be supplied whenever and drawn any number of times).

So if the data never changes, use static draw. If it changes every render call, use stream draw. Otherwise, use dynamic draw.
