From a modelling app, to code...

I'm very much a novice programmer so please forgive me for asking what may be a pointless question.

If I design a model in an application like Silo, Maya, or Lightwave and convert the file to a format usable with OpenGL (I've seen command-line tools for that), can the model be manipulated fully? To use an example and hopefully be more specific: if I design a model of a human in a modelling program that does not allow me to animate, and then get that model into code, can the model be coded to move around? That is, can the arms, legs, torso, head, etc. be animated via OpenGL programming even though the original model could not be?

Again, I apologize if this is a dumb question (some may argue that there is no such thing), but I know little or nothing of OpenGL and do not feel comfortable yet attempting to learn it. Given time, though...

Some programs like BlitzMax use different versions of the model to animate, i.e., a stand pose morphs into a running pose.

You can also program your own bones/animation system and build a primitive animation tool to support your game. We did this with iGame3D when we used Meshwork models, because boning and animating in Meshwork meant crashing all the time.

But since you are new to OpenGL, I suggest not worrying about animation yet. Get your feet wet with general rendering and motion first, then move on to the heavy stuff; otherwise you might get discouraged by failure. When it's time to animate, look into exporting model formats that can be animated instead of trying to squeeze blood from a stone, or animation from a DXF file.

Yes, as igame3d said, it involves creating your own program to animate and attach a skeleton to whatever mesh you're trying to animate. However, if you're using something industrial like Maya, it's far easier to export the whole works, including the animations, from there. There are two popular types of character animation: vertex animation and skeletal animation.

Vertex animation must be done in an external editor like C4D, Blender, etc., and cannot generally be done programmatically after the fact, like what you're talking about. What you do is animate your character in the modeler/animator/editor program and then sample, or take a "snapshot" of, the position of all the vertices every few frames or so. This can take up a lot of memory, as you can imagine, which is one of several drawbacks to vertex animation. In your OpenGL program you recreate the missing frames of animation on the fly, as they are needed, by calculating where each vertex *would* be if there were a frame for it at any given time; this is called interpolation. The main advantage of vertex animation is that it is relatively simple to implement. The Quake series uses vertex animation; Quake 3 even allows separate meshes and animations for each limb. Doom 3 and Unreal, however, use skeletal animation.
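To make the interpolation idea concrete, here's a minimal sketch (in Python just to show the math; in a real OpenGL program you'd do the same calculation in C/C++ before uploading the vertices). The function name and data layout are made up for illustration: two sampled keyframes are blended by a factor t between 0 and 1.

```python
def interpolate_frames(frame_a, frame_b, t):
    """Linearly blend two sampled keyframes of vertex positions.

    frame_a, frame_b: lists of (x, y, z) tuples, one per vertex,
    in the same order. t = 0 gives frame_a, t = 1 gives frame_b.
    """
    return [
        (ax + (bx - ax) * t,
         ay + (by - ay) * t,
         az + (bz - az) * t)
        for (ax, ay, az), (bx, by, bz) in zip(frame_a, frame_b)
    ]

# A one-vertex "mesh" sampled in two poses:
stand = [(0.0, 0.0, 0.0)]
run = [(2.0, 4.0, 0.0)]

# Halfway between the two snapshots:
mid = interpolate_frames(stand, run, 0.5)
```

Every frame you'd pick the two snapshots that bracket the current animation time, compute t from how far between them you are, and render the blended positions.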

Skeletal animation is a little harder to understand, I think, but it offers the greatest amount of flexibility and can be added after the fact, like what you want. A "skeleton" is merely a collection of joints. There is one root joint, usually at the pelvis. The rest of the joints are attached to the root joint in a hierarchy, just like a real skeleton. Each joint has an offset from its parent joint and a rotation value, usually stored as a quaternion (you'll have to google quaternion if you don't know what that is).

After the skeleton is made, it is attached to the mesh. This is done by associating each vertex with one or more joints (usually no more than three or four joints at a time, but one or two is most common). If more than one joint attachment is allowed, the mesh is referred to as a weighted skin. Weighted skins take more processing time and are more complicated to implement, but they offer more realistic animations. Weighted-skin arrangements use a technique called matrix palette skinning to determine the location of the vertices at runtime (big words, simple concept). However, some (most?) game engines only allow one joint association, for simplicity and speed.

In either case, the vertices are located by their offsets from the joints. If a joint moves or rotates, the vertices naturally have to move accordingly. It sounds complicated, but it really isn't once you get your mind wrapped around it. Basically, what you wind up doing is animating the skeleton by changing the rotation of each joint, and the vertices of the mesh follow automatically. The big advantage of skeletal animation is that all you need to animate are the joints: you don't need a copy of the entire mesh for each frame of animation, just a copy of the skeleton, which might be fifty or eighty joints for a humanoid with facial skeletal animation included, instead of thousands of vertices for every frame of vertex animation.
Another great thing about skeletal animation is that animations can be done programmatically in the game for effects like inverse kinematics or rag-doll physics.