So, I am making a game with the help of Blender and Collada. Then I got to making the animation system and realized that for bone animations I should probably not use any interpolation within a single animation. When you animate in Blender you don't animate on every single frame, so there is interpolation going on, usually cubic/Bezier. However, if you are exporting the animation in Collada, should you bake the animation before sending it to your game? The latter option exports fewer matrices, creating a smaller file, but means more computation at run time: instead of just doing the matrix multiplications, you would also need to do an interpolation conversion. I only really see interpolation as useful between animations and for parameterizing animations. An animation itself should have a full set of matrices per bone. If we can calculate it before runtime and it will always be the same, why not?

Does what I am saying make sense, or is there a huge concept that is currently flying over my head? I want to start programming this thing, but I don't want to make a huge mistake that results in me redoing the whole project. Just wanted a few pointers from people who have been doing it longer.

There are actually three sides to this question which, when answered, will tend to suggest the best solution: what quality are you attempting to achieve, how much memory you are willing to use, and how much of a performance hit you will accept. Say you are working on a Tomb Raider game: typically you have Lara running around and maybe 1 or 2 mobs on screen at a time. You likely want very high animation quality, you can eat up a significant portion of CPU time to get that quality, and you can use a fair amount of memory in the process (though it should not be needed, as I'll explain in a bit). Now look at a game like WoW: potentially hundreds of toons running around, all with different animation data. You need only "fair" quality, high performance, and the least possible memory. So, in general, it is the style of game and your target which will drive your choices. You have three basic choices, each with upsides and downsides.

The most common choice is to pre-process the animation at some framerate, compute all the joint matrices, and store all of those in memory. This choice gives you among the best performance levels, but at the cost of more memory for better quality or less memory for less quality. With simple interpolation between two keys, you can get a bit of the quality back even if you sample down into the 10-15 fps range.
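A minimal sketch of this first approach, with made-up function names (nothing here comes from a real engine): the animation is baked into per-joint samples at a fixed rate, and playback blends the two nearest samples.

```python
def sample_animation(evaluate_pose, duration, sample_fps):
    """Bake poses by calling evaluate_pose(t) at a fixed rate.
    evaluate_pose(t) returns a list of per-joint component lists."""
    count = int(duration * sample_fps) + 1
    return [evaluate_pose(i / sample_fps) for i in range(count)]

def playback_pose(samples, sample_fps, t):
    """Fetch the pose at time t, blending the two nearest samples."""
    f = t * sample_fps
    i = min(int(f), len(samples) - 2)
    alpha = f - i
    a, b = samples[i], samples[i + 1]
    # Naive per-component lerp: fine for translations, only an
    # approximation for rotations stored inside matrices.
    return [[x0 + (x1 - x0) * alpha for x0, x1 in zip(ja, jb)]
            for ja, jb in zip(a, b)]
```

Note the trade-off stated above is visible directly: `sample_fps` controls both the memory footprint and the quality, and the lerp claws back some smoothness at low sample rates.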

The next choice is an in-between solution where you pre-process the curve data from the animation package into something like a piecewise cubic approximation of the original curve. The reason this is in-between is that the original curves from animation software are often fairly complicated mathematically; this pre-processing brings them down to something simpler. The performance is not "as" good as in the first case, but it is still well within reason for many characters on screen. Memory and quality again go hand in hand, but compared to the first version the memory is greatly reduced and, depending on your curve fitting, the quality can be extremely good. (NOTE: this is typically the solution I use for most things.)
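As a rough illustration of this second approach, here is how a baked piecewise cubic could be evaluated at run time, using a generic Hermite form; the offline fitting stage is assumed to have already produced the segments, and the names are illustrative.

```python
def hermite(p0, m0, p1, m1, u):
    """Evaluate one cubic Hermite segment at parameter u in [0, 1]."""
    u2, u3 = u * u, u * u * u
    return ((2 * u3 - 3 * u2 + 1) * p0 + (u3 - 2 * u2 + u) * m0
            + (-2 * u3 + 3 * u2) * p1 + (u3 - u2) * m1)

def evaluate_curve(segments, t):
    """segments: list of (t0, t1, p0, m0, p1, m1) tuples fitted offline;
    m0/m1 are tangents expressed per unit of time."""
    for t0, t1, p0, m0, p1, m1 in segments:
        if t0 <= t <= t1:
            span = t1 - t0
            # Tangents are rescaled by the segment length so the shape
            # is independent of how long the segment lasts.
            return hermite(p0, m0 * span, p1, m1 * span, (t - t0) / span)
    raise ValueError("t outside the curve's range")
```

Storing a handful of `(time, value, tangent)` segments per channel is what makes this so much smaller than a dense matrix dump, while staying close to the original authored curve.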

The final choice is to perform as little translation as possible and use the original DCC content as closely as possible. Performance-wise, this generally stinks, because DCC solutions organize the data for flexibility of editing with less focus on performance. But it is of course the best possible quality. Memory-wise, the second solution is generally a bit better if you use a good curve-fitting system, which can compress large gradual changes to channels very well; but if there are a lot of changes in the curves' derivatives, the two tend to be fairly close. If you are doing cinematic cut scenes, sometimes this solution is the way to go in order to get very high quality; the second solution has some annoying compression artifacts which are difficult to correct. (For instance, if you ever played Final Fantasy X, the cutscenes looked nice except that the hands tended to jitter a bit; that is a compression artifact on long bone chains.)

Hopefully this gives you a bit of an overview. I don't mean this to be comprehensive coverage so ask questions if you want further details.

I only really see interpolation as useful between animations and for parameterizing animations. An animation itself should have a full set of matrices per bone. If we can calculate it before runtime and it will always be the same, why not?

First of all, you're using the word "animations" for two different things: you're calling animations "animations", but you're also calling keyframes "animations".

Interpolation/blending between animations is what happens when I have a character performing two animations at the same time, e.g. walking and shooting. I want both to be active.

Interpolation between keyframes is when, inside the same animation (e.g. walking), the game is at a frame that lies in the middle of two keyframes; the result is then obtained mathematically.
Like you said, you can calculate all keyframes at preprocessing time for a given framerate. Say your game will run at 60fps: you would save the matrices/keyframes once every 16.666ms, and then play them back.
But more often than not (particularly on PC), your game won't be running at exactly 60 fps, so you may want to interpolate. If you're running at 120fps you'll have to interpolate: e.g. at frame 89 you're actually halfway between keyframes 44 & 45.

If you're running at 30fps you'll have to drop half of the keyframes. Now... if you're running at 32fps, you will have to drop 28 keyframes per second, but the problem is that 32 is not a whole divisor of 60. As such, when you're rendering frame 31, you would be somewhere between keyframes 58 & 59; you're actually at keyframe 58.125, which doesn't exist. You have to interpolate.

If you choose not to, and use keyframe #58 instead (which is the closest one to the theoretical 58.125) you could witness some jerkiness/stuttering in the animation. How visible this artifact is depends on the actual framerate at which you're rendering.
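The arithmetic from the 32 fps example can be sketched in a few lines (the function name is made up for illustration):

```python
def keyframe_indices(render_frame, render_fps, sample_fps):
    """Map a playback frame to the two keyframes to blend and the
    fractional blend factor between them."""
    kf = render_frame * sample_fps / render_fps  # fractional keyframe index
    i = int(kf)
    return i, i + 1, kf - i  # lower key, upper key, blend factor

# The example from the text: rendering frame 31 at 32 fps of a clip
# sampled at 60 fps lands at fractional keyframe 58.125.
i, j, alpha = keyframe_indices(31, 32, 60)
# i == 58, j == 59, alpha == 0.125
```

Snapping to keyframe 58 instead of blending with factor 0.125 is exactly what produces the stutter described above.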

So the thing is, interpolation is almost unavoidable with this technique, because even if your sampling frequency is very high, mismatches between the playback framerate and the framerate the animation was encoded at can cause artifacts.

Another issue worth mentioning is that high sampling frequencies (e.g. 60fps) cause memory consumption to skyrocket. A modestly detailed walk animation loop for a human rig with around 50 bones sampled @60fps can easily reach 20 MB. And you won't get much more detail/quality than sampling at 15-30fps (for a videogame).

Also, considering that computing power keeps improving (and animation is highly parallelizable) while memory bandwidth growth has stalled, if you're planning to have hundreds of animated objects in your scene, interpolating at relatively low sampling frequencies could give you much more performance than preprocessing the whole thing at high framerates and playing it back with no interpolation.

So the thing is, interpolation is almost unavoidable with this technique, because even if your sampling frequency is very high, mismatches between the playback framerate and the framerate the animation was encoded at can cause artifacts.

Another issue worth mentioning is that high sampling frequencies (e.g. 60fps) cause memory consumption to skyrocket. A modestly detailed walk animation loop for a human rig with around 50 bones sampled @60fps can easily reach 20 MB. And you won't get much more detail/quality than sampling at 15-30fps (for a videogame).

Also, considering that computing power keeps improving (and animation is highly parallelizable) while memory bandwidth growth has stalled, if you're planning to have hundreds of animated objects in your scene, interpolating at relatively low sampling frequencies could give you much more performance than preprocessing the whole thing at high framerates and playing it back with no interpolation.

This is handled by adding a redundancy-elimination pass after the sampling stage. First you sample between key frames at a fixed rate, as mentioned.

Then you go over every triplet of samples, for example frames 0, 1, and 2, and you interpolate between the end samples, checking for error on the middle sample. In this case that means: interpolate between 0 and 2, and if the result is close enough to 1, eliminate 1 and repeat for frames 0 and 3, again checking against 1. If the interpolation between 0 and 2 is not close enough, 1 is kept and the process is repeated for frames 1, 2, and 3.

This gives you the best trade-off between accuracy and memory, as well as run-time performance. You can often end up with fewer frames than were in the original data set while still being extremely accurate.

However, I would take issue with the suggestion that whole matrices should be stored on each keyframe. Everything should be track-based, and only the minimum number of tracks should be used. This increases performance greatly and avoids the main problem with storing whole matrices: you lose a lot of flexibility. It's easier to write the tool chain and the run-time, but you can't mix tracks, etc. In my previous engine I stored whole matrices at each keyframe and then later made a tool that allowed you to throw custom motions into the mix, such as rotating around Y for some seconds. I had to evaluate the whole transform matrix for the animation system and then overwrite the rotation based on the tool's parameters, and not only was it quirky, it had bugs that were simply impossible to fix due to limitations of the matrix system.

By going fully track-based, you are only updating the parts that are actually animated, and tracks can be individually turned on and off to allow compliance with other systems. Plus, it is true to the way model-authoring software works, and tracks can then be used to animate anything, not just position, scale, rotation, etc. The same track system can be used to animate lights turning on/off, changes in colors, changes in camera settings, etc. Tracks are really the way to go.

Here is code for loading a track from the Autodesk® FBX® SDK and extracting the minimum number of keyframes needed.
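A rough, language-agnostic sketch of the reduction step that code performs (illustrative only: the real version reads curves through the Autodesk® FBX® SDK rather than plain arrays, and the names here are made up). It drops every sample that can be recovered, within a tolerance, by linearly interpolating between its surviving neighbours:

```python
def reduce_keyframes(times, values, tolerance=1e-4):
    """Return the indices of the keyframes worth keeping: samples that
    cannot be reproduced by lerping between the kept neighbours."""
    if len(values) <= 2:
        return list(range(len(values)))
    keep = [0]
    for i in range(2, len(values)):
        t0, v0 = times[keep[-1]], values[keep[-1]]
        ok = True
        # Check every sample between the last kept key and candidate
        # endpoint i against the straight line joining them.
        for j in range(keep[-1] + 1, i):
            alpha = (times[j] - t0) / (times[i] - t0)
            predicted = v0 + (values[i] - v0) * alpha
            if abs(predicted - values[j]) > tolerance:
                ok = False
                break
        if not ok:
            keep.append(i - 1)  # previous sample becomes a keyframe
    keep.append(len(values) - 1)
    return keep
```

On a perfectly linear run of samples this collapses everything to the two endpoints; on a run with a sudden change it keeps a key right before the change, which is the behaviour described above.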

Because until now the clients have not complained about the performance. Most likely they have not profiled the engine for bottlenecks, or if they have they found other areas of a higher priority. That can happen when scenes are heavy on articulated dynamics or other types of physics.

@AllEightUP I don't know if this is naive, but essentially the game I am making is a fast-paced action-fighting game where every button press counts. So what I would like is for the game to be fast, with as little delay as possible after a button press. But what I would really hate to suffer are the animations. I am an animator myself and put a lot of time into my animation work, and I'd like it to be preserved as well as performance allows. One thing where I am willing to take the hit is memory, or even graphics. So I would like quality and performance. Essentially it seems like what I might do is store the whole matrices for the animations. Method one would be the right choice for maximum performance and quality, right?

The most common choice is to pre-process the animation at some framerate, compute all the joint matrices, and store all of those in memory. This choice gives you among the best performance levels, but at the cost of more memory for better quality or less memory for less quality. With simple interpolation between two keys, you can get a bit of the quality back even if you sample down into the 10-15 fps range.

The third option sounds tempting, but I don't want the performance hit.

The final choice is to perform as little translation as possible and use the original DCC content as closely as possible. Performance-wise, this generally stinks, because DCC solutions organize the data for flexibility of editing with less focus on performance. But it is of course the best possible quality.

Unless you combine it maybe with what L Spiro said.

@L Spiro What you are saying is that there are methods to reduce memory and still have relatively high-quality/preserved animations. I understand that part, but then you go on to argue against storing whole animations in the form of matrices. That seems understandable, but then what you offer is something I have never heard of before: tracks. What are those exactly? And what would you be storing with tracks? I know if I use dual quaternions, I would be storing quaternions — but what would I be storing with tracks?

I was planning on making the animation system by storing all the matrices per bone per animation and then upgrading to dual quaternions to get better deformations and rotations. But are tracks better than even dual quaternions, or do they encapsulate them, like animation controls? I.e. one control controls the top half of the character, one controls the bottom half (animation blending!)... what I am trying to say is, what on earth are tracks lol

@Matis I think for performance and quality's sake I would be better off doing method one and chopping it down using the redundancy check, and maybe tracks... if I could understand them lol

but then what you offer is something I have never heard of before: tracks. What are those exactly? And what would you be storing with tracks? I know if I use dual quaternions, I would be storing quaternions — but what would I be storing with tracks?

If we think in programmer’s terms, which might be more comfortable for you, an animation track is nothing more than a class whose only responsibility is to track the change of a given value over time.

At the end of the day, this is indeed the bare necessity behind all animation, and it’s how all 3D modeling tools work.

A track modifies a single floating-point value over time.
An animation is a collection of tracks.
Thus if an animation requires only the position (XYZ) to be changed, only that is calculated and updated. Not only is this a closer relationship with the tools the artists are using, it is also faster at run-time.
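A minimal sketch of such a track (illustrative only; it assumes linear interpolation between keys, where a real engine might use the baked curve forms discussed earlier):

```python
import bisect

class Track:
    """A track: the change of one floating-point value over time.
    Keyframes are (time, value) pairs; evaluation lerps between them."""
    def __init__(self, keys):
        self.times = [t for t, _ in keys]
        self.values = [v for _, v in keys]

    def evaluate(self, t):
        if t <= self.times[0]:
            return self.values[0]
        if t >= self.times[-1]:
            return self.values[-1]
        i = bisect.bisect_right(self.times, t) - 1
        t0, t1 = self.times[i], self.times[i + 1]
        alpha = (t - t0) / (t1 - t0)
        return self.values[i] + (self.values[i + 1] - self.values[i]) * alpha

# An animation is then just a named collection of tracks; channels
# that never change simply have no track and cost nothing to evaluate.
animation = {
    "position.x": Track([(0.0, 0.0), (1.0, 2.0)]),
    # no scale or rotation tracks -> nothing is computed for them
}
```

The channel name "position.x" here is purely illustrative; the point is that each track owns exactly one scalar.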

When you define a track as nothing more than “something that modifies a floating-point value over time,” it becomes easy to see how it can be useful in all forms of animation.
When the value being modified is a boolean, a floating-point value is still modified internally but it is cast to “bool” afterwards. And this exactly mimics how Autodesk® Maya® works, as well as all other 3D authoring software.
When a value is float, nothing changes. A float cast to a float results in no output code.

In other words you can easily make a template class that keeps a float internally but casts the interpolated result to any type. That is how all 3D authoring tools work.

Once you’ve made tracks this low-level, they can easily be applied to other things, as I mentioned: changing only the R component of an RGB color overlay, for example.

Basically, everything you want to animate is just a number. A track works on a single number and it tells how that number changes over time.

If your position, scale, and rotation are all animating, you would need 3 tracks for position, 3 for scale, and 3 for rotation.

The overhead would be similar to interpolating between full matrices every frame.

If scale is not changing, you only need 6 tracks, and you save time. If only rotation is changing, you only need 3 tracks.

Essentially a track is a class that modifies a floating-point value over time? So when you store the animations, per frame you are storing the change in pos3f and rotation3f. I am using Collada and Blender, which export animations in matrix form. So what I would need to do is decompose each matrix into translation and rotation matrices, then record the change those two transformations apply to the basis rotation and translation vectors. Then whenever I am playing the animation, instead of multiplying it all through, I just read the track information, which is simple addition of pos3f tracks and rot3f tracks?

This sounds ingenious. If I understand you correctly, essentially this allows you to sidestep all the problems of interpolation. So you don't need to use quaternion interpolation or matrix interpolation; all you have to do is linearly interpolate the tracks, which are simple floats.

Essentially a track is a class that modifies a floating-point value over time? So when you store the animations, per frame you are storing the change in pos3f and rotation3f.

There would be one track for position.x, one for position.y, etc.

I am using Collada and Blender; they export animations in matrix form.

That is something of a problem, as information is lost that way. This caused the incurable bugs I described above. As one specific example, if you take a linear rotation animation from 0 degrees (let's say around the Y axis) to 720 degrees, once you convert that to radians and normalize, you have a rotation from 0 radians to 0 radians.
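A tiny sketch of that information loss, and of why sampling intermediate frames recovers it:

```python
import math

# A "spin" authored as 0 -> 720 degrees around Y, reduced to its
# endpoint orientations: after normalizing, both endpoints collapse to
# (essentially) the same angle, and the two full revolutions are gone.
start = math.radians(0.0) % (2 * math.pi)
end = math.radians(720.0) % (2 * math.pi)

# Sampling intermediate frames BEFORE converting keeps the motion:
# the samples differ from each other, so interpolating through them
# still runs through the whole rotation.
samples = [math.radians(720.0 * i / 8) % (2 * math.pi) for i in range(9)]
# samples[1] corresponds to 90 degrees, samples[2] to 180 degrees, ...
```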

Am I understanding what you are saying properly?

Yes. Just get out of matrix form as early as you can, trying to retain as much of the original information as possible. The bug I mentioned above might not happen if you can sample the animation at regular intervals from within Blender. The begin and end radians might both be 0, but you would interpolate through enough intermediate frames to run through the animation correctly.

I don't feel like I am going to grasp it entirely tonight, but at least I now know of tracks and desperately want to learn more. Thank you.
Is there a site, a PDF, or a book I could read that goes into more detail on the topic of animation tracks? Thank you so much for your help.