Points and Lines: Computationally fast

I am working on a basic physics engine for a video game. In it, forces are applied to models, which respond to those forces through physics calculations.

Whenever a force is applied to a model, the force takes the form of a vector originating from a point in space. The vector must be decomposed into two components: the component pointing from the vector's origin toward the model's center of mass is added to the model's translational force vector, and the component perpendicular to the first goes into rotational force.

I need a way to decompose these vectors into two basis vectors, one pointing at the center of mass of the object and one perpendicular to that. The catch is that, because I am doing this many times per second, the calculation needs to be very fast.

This problem is easily solved, but I would like to know whether there is any way to solve it *without* using square root, sine, or cosine.

Keep in mind that a formula calling for sqrt(a^2 + a^2) isn't really forcing me to use the square root function: this is simply sqrt(2) * |a|, and sqrt(2) is a constant that can be precomputed. So it is fine to use sqrt, sin, and cos, as long as their arguments are constant rather than depending on the vector involved.
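To illustrate the constant-folding idea above, here is a minimal Python sketch (the function name is my own, for illustration): the square root is evaluated once at load time, so the per-call cost is one multiply and one absolute value.

```python
import math

# Precomputed once at load time, never per force application.
SQRT2 = math.sqrt(2.0)

def diagonal_length(a: float) -> float:
    # sqrt(a^2 + a^2) folds to sqrt(2) * |a|: no sqrt call at runtime.
    return SQRT2 * abs(a)
```

The absolute value matters for negative inputs, since sqrt(a^2 + a^2) is always non-negative.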

Is this possible?

Here are several other versions of the problem that, if solved, can be used to solve the problem as a whole:

You have a point c and a line. Find the two points on the line whose distance from c equals a given value r.
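One way to attack this sub-problem: parametrize the line as p(t) = a + t*d and solve |p(t) - c|^2 = r^2, a quadratic in t. A minimal Python sketch, assuming 2D points as tuples (the function name is my own); note that in the general case this still needs one square root, because the discriminant depends on the inputs:

```python
import math

def points_at_distance(a, d, c, r):
    """Points on the line p(t) = a + t*d whose distance from c equals r.

    a, d, c are 2D tuples; d is the line direction (need not be unit length).
    Returns zero, one, or two points. One sqrt per call, on the discriminant.
    """
    mx, my = a[0] - c[0], a[1] - c[1]
    A = d[0] * d[0] + d[1] * d[1]
    B = 2.0 * (mx * d[0] + my * d[1])
    C = mx * mx + my * my - r * r
    disc = B * B - 4.0 * A * C
    if disc < 0.0:
        return []                      # the line stays farther than r from c
    root = math.sqrt(disc)
    ts = {(-B - root) / (2.0 * A), (-B + root) / (2.0 * A)}
    return [(a[0] + t * d[0], a[1] + t * d[1]) for t in sorted(ts)]
```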

You have a circle centered at c with radius r. A ray starts at a point on the circle. Find where the ray intersects the circle again.
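This second formulation, interestingly, *can* be solved with no square root at all: substituting the ray p + t*d into |p + t*d - c|^2 = r^2 gives a quadratic whose t = 0 root is already known (the ray starts on the circle), so the other root falls out of the linear term. A minimal Python sketch, assuming 2D tuples (the function name is my own):

```python
def second_intersection(p, d, c):
    """Where a ray from point p on a circle centered at c meets the circle again.

    p is assumed to lie on the circle; d is the ray direction. Since t = 0 is
    a known root of the quadratic, the second root needs no square root:
        t = -2 * (m . d) / (d . d),  where m = p - c
    If t <= 0, the ray points outward or is tangent and never re-enters.
    """
    mx, my = p[0] - c[0], p[1] - c[1]
    t = -2.0 * (mx * d[0] + my * d[1]) / (d[0] * d[0] + d[1] * d[1])
    return (p[0] + t * d[0], p[1] + t * d[1])
```

Only multiplications, additions, and one division per call.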

First question: have you actually implemented the engine and profiled it? Do you actually know that your calculations are too slow? If so, are you sure that the problem is with calling these numeric functions, rather than other facets of your algorithm?

Second question: how accurate do the calculations need to be? There are ways to quickly compute good approximations to commonly used functions; these might be sufficient for your purposes.

1. No -- I thought it would be safe to assume that calculations happening at this frequency would be likely to create a bottleneck, and thus wanted to evaluate my possibilities before getting too deep in the project.

2. This will be the tradeoff -- I'll probably implement both and allow the user to choose the degree of precision (at the expense of speed)


Some famous quotes from computer science:

“More computing sins are committed in the name of efficiency (without necessarily achieving it) than for any other single reason - including blind stupidity.” (W. A. Wulf)

“Premature optimization is the root of all evil.” (Donald Knuth)

“Bottlenecks occur in surprising places, so don't try to second guess and put in a speed hack until you have proven that's where the bottleneck is.” (Rob Pike)

Thinking about efficiency is a good thing, but optimization really belongs near the end of the development process. IMHO, the most important reason is that optimizations make your code more complex: it takes longer to design and develop, it is harder to understand, and it contains more bugs. It is better in almost every way to first focus on getting your code working, and only later focus on getting it to work fast. As a side benefit, deferring optimization also helps you avoid wasting time on optimizations that turn out to be unnecessary.

If you want to insist on thinking about speed now, then I suggest trying toy programs -- write short little test programs that allow you to estimate how long it takes to perform various calculations, and thus how many operations per second you can perform. Then, analyze your engine design to see how the numbers compare.
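A toy program of the kind suggested above might look like the following Python sketch. The workload and the choice of functions to compare are my own; the absolute numbers depend entirely on your machine and language, so treat them as ballpark figures only.

```python
import timeit

# Rough per-call cost of a few candidate operations, measured over a
# million calls each. Printed values are nanoseconds per call.
results = {}
for name, stmt in [("sqrt", "math.sqrt(x)"),
                   ("sin", "math.sin(x)"),
                   ("multiply", "x * x")]:
    total = timeit.timeit(stmt, setup="import math; x = 1.2345",
                          number=1_000_000)
    results[name] = total * 1e3  # seconds per 1e6 calls -> ns per call
    print(f"{name}: {results[name]:.1f} ns/call")
```

Comparing these numbers against how many force decompositions per frame your engine design actually requires tells you whether the math is worth worrying about at all.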

3. Yes, quite.

I would expect that working with dot and cross products would be more efficient than working with sines and cosines, except in certain special cases. I doubt you can avoid square roots entirely, though consider working through the algebra to see if you can delay them -- it might be possible to cancel them out, or to gather several together and do them at once. Either way, I expect square roots to be faster than trig functions.
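For the original decomposition problem, the dot-product approach turns out to need no square root at all: projecting the force F onto d = (center of mass - application point) only involves the *squared* length d . d. A minimal Python sketch, assuming 2D vectors as tuples (the function name is my own):

```python
def decompose_force(force, origin, center_of_mass):
    """Split a force into components along and perpendicular to the line from
    its point of application to the center of mass.

    Only multiplications, additions, and one division: no sqrt, sin, or cos,
    because the projection uses the squared length d . d directly.
    """
    dx = center_of_mass[0] - origin[0]
    dy = center_of_mass[1] - origin[1]
    dd = dx * dx + dy * dy              # squared length -- no sqrt needed
    if dd == 0.0:
        return force, (0.0, 0.0)        # force applied at the center of mass
    k = (force[0] * dx + force[1] * dy) / dd
    translational = (k * dx, k * dy)
    rotational = (force[0] - translational[0], force[1] - translational[1])
    return translational, rotational
```

The perpendicular (rotational) part comes out by subtraction, so it costs nothing extra beyond the projection itself.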