I've recently started working with OpenGL (ES 2.0) on Android. While there is plenty of documentation about the basics, I can't seem to find much about the overall architecture of OpenGL. In particular, while I see how a program works, how shaders are linked, and how vertices are loaded, I can't find any indication of how these pieces should be used together to create entire scenes.

That is, does one create multiple programs, reuse programs, create multiple indices, try to share textures/shaders, and so on? What I'm looking for are some solid references that explain the trade-offs, and intended use, of these functions in the production of scenes. The various Android docs and OpenGL docs also hint at several limitations in using the API, but in many cases don't give concrete information.

I understand this may be a bit subjective, but I really can't find anything useful on this topic by searching. I'm unsure where else to ask and I believe that concrete useful answers can be given.

1 Answer

I would suggest you start doing some reading. The book Game Engine Architecture by Jason Gregory was very helpful in giving me an understanding of the relationship between a game engine and the rendering pipeline.

First, you need to understand the role of the rendering pipeline of a modern video card (which, for the most part, is what OpenGL is designed to provide access to). In this pipeline, the video card primarily assembles lists of vertices into triangles and turns these into pixels on the screen. While that is a very gross generalization, it serves to point out that the OpenGL API is designed to be very low level and provide flexible access to the functions of the video card.

The relationship between this API and your game / game engine depends a lot on what kind of abstractions you want to make. For example, if you're making a fairly simple 2D game, you may only need a very simple abstraction (something that draws a texture to a quad could be the only thing you need). If you have a more complex 3D game with a free moving camera, numerous lights, and lots of models with various shading techniques, you'll need a lot more abstraction.

I know firsthand how difficult it can be to work on both high-level game design tasks and low-level engine design / rendering routines simultaneously. It can be very difficult to separate your concerns in a useful way. In the end, you probably want high-level concepts like an entity (which has behavior, interaction in the physics engine, and a renderable model), and you want to be able to specify its rendering qualities in a data-driven way.

The book I mentioned talks a little bit about the idea of a "Render Packet," which really resonated with me. What I take that to mean is an abstraction layer in which you specify the bare essentials needed to tell the rendering pipeline how to draw something. In OpenGL ES 2, this probably comes down to:

- The shader program (linked vertex/fragment shaders)
- Inputs to the shader, such as lights and materials
- The VBOs (essentially, the "geometry" that you want to render)
- Any transformation matrices (namely, the modelview and projection matrices)
At this level, you've abstracted quite a few API-specific things like the geometry and shaders. Once you've created your shaders and compiled/linked them on the video card, created and defined your VBOs, and so on, you can then just specify references to them in order to use them in drawing calls. In my game engine, this abstraction layer basically amounts to my RenderingEngine, which operates on RenderPackets. A RenderPacket is basically these references as well as some other state like blend mode, depth testing, etc. Essentially, it's everything the video card would need to know to draw a given object correctly.
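To make that concrete, here is a minimal sketch of what such a RenderPacket might look like in Java. This is purely illustrative (the class and field names are my own assumptions, not from any particular engine): it holds only plain GL handles and state, and makes no GL calls itself.

```java
// Hypothetical sketch of a RenderPacket: plain references and state that a
// rendering engine needs to issue one draw call. All names are illustrative.
public class RenderPacket {
    // Handles produced earlier by glCreateProgram / glGenBuffers
    public final int shaderProgram;
    public final int vertexBufferId;
    public final int indexBufferId;

    // Per-draw shader inputs
    public final float[] modelViewMatrix;   // 4x4, column-major
    public final float[] projectionMatrix;  // 4x4, column-major

    // Fixed-function state the draw depends on
    public final boolean depthTest;
    public final boolean blending;

    public RenderPacket(int shaderProgram, int vertexBufferId, int indexBufferId,
                        float[] modelViewMatrix, float[] projectionMatrix,
                        boolean depthTest, boolean blending) {
        this.shaderProgram = shaderProgram;
        this.vertexBufferId = vertexBufferId;
        this.indexBufferId = indexBufferId;
        this.modelViewMatrix = modelViewMatrix;
        this.projectionMatrix = projectionMatrix;
        this.depthTest = depthTest;
        this.blending = blending;
    }
}
```

The point is that by the time a packet reaches the renderer, every expensive setup step (compiling shaders, uploading buffers) has already happened; the renderer just binds the referenced objects and draws.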

From here, you can start thinking in a more data-driven context. In my game engine, I have the notion of a RenderableObject. My RenderableObject can be a simple textured quad, an animated sprite, or a full 3-dimensional model. No matter what it is, it specifies enough for my RenderingEngine to know how to assemble a RenderPacket.

I have another abstraction layer that manages the scene. It's basically a scene graph where each node corresponds to a RenderableObject. It's responsible for managing the current modelview matrix, the projection matrix, determining which lights should affect a given object, the current camera, etc. It also does a lot of sorting so that objects can be rendered in the optimal order, as well as ensuring that transparent objects are drawn in the correct order.
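The sorting step can be sketched in plain Java. This is an assumption about one reasonable policy, not a canonical algorithm: group opaque draws by shader program to cut down on state changes, then draw transparent objects afterwards, farthest from the camera first, since blending is order-dependent.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Illustrative render-order sort (class and field names are assumptions).
public class RenderQueue {
    public static class Packet {
        public final int shaderProgram;
        public final boolean transparent;
        public final float distanceFromCamera;

        public Packet(int shaderProgram, boolean transparent, float distance) {
            this.shaderProgram = shaderProgram;
            this.transparent = transparent;
            this.distanceFromCamera = distance;
        }
    }

    public static List<Packet> sortForDrawing(List<Packet> packets) {
        List<Packet> opaque = new ArrayList<>();
        List<Packet> transparent = new ArrayList<>();
        for (Packet p : packets) {
            (p.transparent ? transparent : opaque).add(p);
        }
        // Group opaque draws by program so the shader is switched less often.
        opaque.sort(Comparator.comparingInt(p -> p.shaderProgram));
        // Blending is order-dependent: draw transparent objects back to front.
        transparent.sort(Comparator.comparingDouble(
                (Packet p) -> p.distanceFromCamera).reversed());
        List<Packet> ordered = new ArrayList<>(opaque);
        ordered.addAll(transparent);
        return ordered;
    }
}
```

A real scene graph would sort on more keys (texture, blend mode, depth for opaque objects too), but the opaque-first / transparent-back-to-front split is the essential part.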

Above this, I start getting into very abstract game objects. Each entity in the game has an associated SceneObject, whose renderable qualities are completely data driven. An enemy definition, for example, provides such things as the behavior (controller) of the enemy, its HP, its collision detection bounds, and its renderable definitions.
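As a sketch of what "data driven" can mean here, an enemy definition might be a config file along these lines (the format and every field name are purely illustrative):

```json
{
  "name": "goblin_grunt",
  "controller": "patrol_and_chase",
  "hp": 30,
  "collisionBounds": { "type": "aabb", "width": 0.8, "height": 1.6 },
  "renderable": {
    "model": "models/goblin.mesh",
    "shader": "shaders/skinned_diffuse",
    "textures": ["textures/goblin_diffuse.png"],
    "blendMode": "opaque",
    "depthTest": true
  }
}
```

The engine parses the "renderable" section into whatever the RenderingEngine needs, so artists and designers can change an enemy's appearance without touching engine code.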

--

This may not be the answer you're looking for, but I hope it's given you a good perspective on how you can use the OpenGL API to provide a foundation for higher level abstractions. Keep in mind that there are probably many other valid ways in which this is handled.

Thank you, very informative. I do understand the pipeline and game engines; what I'm looking for are specifics on how to best use the OpenGL ES 2.0 pipeline. If I code on my own I have no problem figuring out basic scenarios, but I can't tell if it's the best use of the API.
–
edA-qa mort-ora-y Aug 8 '12 at 6:07

Are you talking about performance or design?
–
stephelton Aug 8 '12 at 8:34

I suppose mainly performance -- since not using the API correctly most likely leads to bad performance.
–
edA-qa mort-ora-y Aug 8 '12 at 8:44