So I was reading through my copy of GPU Pro 2 and came across a chapter on building a rendering pipeline for real-time crowds. The implementation uses render targets on the GPU to update agents and their finite state machines while also performing frustum culling and level-of-detail sorting. With this approach, the test case in the chapter easily renders about 8,000 characters walking in a crowd.

My question is: have any games started taking advantage of the GPU for tasks other than rendering and physics, such as game object updates and AI? And if so, are there any conceivable disadvantages to those implementations?

I ask because I'm thinking about going this route in my game, using Direct3D 11 DirectCompute to create and simulate large-scale space battles. Off the top of my head, I think the GPU would at least be a good tool for calculating and blending steering behaviors.
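To make that concrete, this is roughly the per-agent update I have in mind, written as a CUDA sketch rather than actual DirectCompute HLSL (the Agent layout, the seek behavior, and the blending are placeholders of my own, not taken from the book):

    #include <cuda_runtime.h>

    // Hypothetical per-agent state; one GPU thread updates one agent.
    struct Agent {
        float2 pos;
        float2 vel;
        float2 target;   // current seek target
    };

    // Unit-strength steering toward the agent's target.
    __device__ float2 seek(const Agent& a) {
        float2 d = make_float2(a.target.x - a.pos.x, a.target.y - a.pos.y);
        float len = sqrtf(d.x * d.x + d.y * d.y) + 1e-6f;
        return make_float2(d.x / len, d.y / len);
    }

    __global__ void blendSteering(Agent* agents, int n, float dt) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;

        Agent a = agents[i];

        // Blend behaviors; separation/alignment/etc. would be weighted in here.
        float2 steer = seek(a);

        a.vel.x += steer.x * dt;
        a.vel.y += steer.y * dt;
        a.pos.x += a.vel.x * dt;
        a.pos.y += a.vel.y * dt;

        agents[i] = a;
    }

    // Launch with one thread per agent, e.g.:
    // blendSteering<<<(n + 255) / 256, 256>>>(d_agents, n, dt);

Each agent reads only its own state here; real separation and flocking terms would need neighbor queries, which I assume is where it gets harder.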

3 Answers

On top of that, it's worth considering that we don't party like it's 1999 any more: 20,000 polygons on screen definitely looks better than 10,000. But if it's 500,000 polygons vs. 200,000, or fancy dynamic shadows vs. blob shadows, or 2,000 filler NPCs vs. 500, no one really cares. If you set it up side by side and show people the difference, they will of course say that one is better than the other. But if they have to judge a game on its own, the impact of such details is negligible.

Is figuring out how to do a specific task on the GPU worth your time? If the gain is simply more filler NPCs, the answer is most likely no. Combine that with the points about being able to run on lesser PCs, and the doubt is blown away.

I'm not saying that game programmers should never consider running something unconventional on the GPU, but there are so many other things worth spending time on that a well-made list of priorities will almost always have something more important.

Consider that programming things to run on the GPU is generally a lot harder than programming them for the CPU, and that there is a fairly small set of tasks that would benefit from it anyway.
– stephelton, Jan 15 '12 at 16:29


+1: "Is figuring out how to do a specific task on the GPU worth your time?" This is the #1 most important question you should ask yourself when dealing with GPGPU stuff: does it matter enough to be worth the effort?
– Nicol Bolas, Jan 15 '12 at 17:56

There are reasons to avoid using the GPU. For many "AAA" games, even high-end GPUs are already so completely utilized by actual rendering work that there isn't any GPU compute time or memory to spare for other tasks.

For games with less demanding graphics, it may be desirable to have less demanding hardware requirements. If you don't need a powerful GPU for graphics, wouldn't it be nice if people with low-end or integrated GPUs could run your game? If you're selling your game as an indie dev, the segment of the market that has nothing but an Intel GPU (and not the very latest Ivy Bridge) can make up a sizable chunk of your sales.

Lastly, keep in mind that the GPU has some latency to it: it can take upwards of several frames to get the results of a computation back. If you're not careful, it's easy to stall both the CPU and the GPU during data transfer to and from the GPU, and even if you are careful, this can be a problem. Even for many graphics-related tasks, it's not at all uncommon to offload onto the CPU processing that the GPU might be better at. For example, occlusion queries can often be computed faster on the CPU than the round trip to the GPU and back would take, even though the GPU could do the actual computation much faster. Modern CPUs are often underutilized by games, so there's a lot of spare processing power to play with if you're comfortable writing threaded code.
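To sketch what "being careful" can look like in practice, here's a double-buffered asynchronous readback in CUDA (chosen for brevity; the same pattern applies to DirectCompute staging resources, and all the names here are mine). The CPU consumes results that are a frame old instead of blocking on this frame's copy:

    #include <cuda_runtime.h>

    cudaStream_t stream;
    cudaEvent_t  done[2];
    float*       d_results;      // device buffer some kernel writes into
    float*       h_results[2];   // pinned host buffers, one per frame in flight

    void init(size_t bytes) {
        cudaStreamCreate(&stream);
        cudaMalloc((void**)&d_results, bytes);
        for (int i = 0; i < 2; ++i) {
            cudaEventCreateWithFlags(&done[i], cudaEventDisableTiming);
            cudaMallocHost((void**)&h_results[i], bytes);  // pinned, required for async copies
        }
    }

    void frame(int frameIndex, size_t bytes) {
        int cur  = frameIndex & 1;
        int prev = cur ^ 1;

        // Queue this frame's readback; returns immediately, no CPU stall.
        cudaMemcpyAsync(h_results[cur], d_results, bytes,
                        cudaMemcpyDeviceToHost, stream);
        cudaEventRecord(done[cur], stream);

        // Consume last frame's results only if that copy actually finished
        // (skip on the very first frame, when nothing has been recorded yet).
        if (frameIndex > 0 && cudaEventQuery(done[prev]) == cudaSuccess) {
            // ... read h_results[prev] on the CPU ...
        }
    }

The price is that gameplay code acts on data that's a frame stale, which is usually fine for steering or crowd AI but not for anything that must be exact this frame.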

Generally, the GPU should be considered for very large batches of highly parallel data processing.

Specifically answering your question: yes, the GPU can be and has been used for AI processing, particularly for things like terrain analysis, occlusion queries and ray tests, and some navigation. However, its applicability depends heavily on the specific algorithm, the data set, and the desired hardware requirements.
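For example, a batch of AI line-of-sight ray tests maps naturally onto one GPU thread per query. A CUDA sketch against a flat sphere list (a stand-in of my own for whatever spatial structure a real game would use; assumes ray directions are normalized):

    #include <cuda_runtime.h>
    #include <math.h>

    struct Ray    { float3 origin, dir; float maxT; };  // dir must be normalized
    struct Sphere { float3 center; float radius; };

    // One thread per line-of-sight query, brute force over all obstacles.
    __global__ void rayTests(const Ray* rays, int numRays,
                             const Sphere* spheres, int numSpheres,
                             int* blocked) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= numRays) return;

        Ray r = rays[i];
        int hit = 0;
        for (int s = 0; s < numSpheres && !hit; ++s) {
            // Standard ray-sphere test: solve t^2 + 2bt + c = 0.
            float ox = r.origin.x - spheres[s].center.x;
            float oy = r.origin.y - spheres[s].center.y;
            float oz = r.origin.z - spheres[s].center.z;
            float b = ox * r.dir.x + oy * r.dir.y + oz * r.dir.z;
            float c = ox * ox + oy * oy + oz * oz
                    - spheres[s].radius * spheres[s].radius;
            float disc = b * b - c;
            if (disc >= 0.0f) {
                float t = -b - sqrtf(disc);        // nearest intersection
                hit = (t > 0.0f && t < r.maxT);
            }
        }
        blocked[i] = hit;
    }

The catch, as above, is getting `blocked` back without stalling, so a batch like this tends to pay off only when there are thousands of queries per frame.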

You are best off writing your code CPU-side (the tools are more mature, debugging is far easier, and more people can run your game) and moving it to the GPU only if you truly can't get adequate performance on the CPU.
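In that spirit, a plain C++ harness like this (the agent struct and the 20,000 count are arbitrary, and it compiles without any GPU toolkit) tells you whether the CPU is even the bottleneck before you write a single compute shader:

    #include <chrono>
    #include <cstdio>
    #include <vector>

    struct Agent { float px, py, vx, vy; };

    // Trivially parallel CPU update; easy to thread later if needed.
    void updateAgents(std::vector<Agent>& agents, float dt) {
        for (auto& a : agents) {
            a.px += a.vx * dt;
            a.py += a.vy * dt;
        }
    }

    int main() {
        std::vector<Agent> agents(20000, Agent{0.f, 0.f, 1.f, 1.f});
        auto t0 = std::chrono::steady_clock::now();
        updateAgents(agents, 1.0f / 60.0f);
        auto t1 = std::chrono::steady_clock::now();
        long long us = std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count();
        std::printf("update took %lld us for %zu agents\n", us, agents.size());
        return 0;
    }

If a number like that fits comfortably in your frame budget, the debugging convenience of staying CPU-side is worth far more than the theoretical throughput of the GPU.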

If you can do more with your GPU than with your CPU, then it's a good idea. If you can't, then it isn't.

That's not to say you might not learn a lot by trying to write GPU-friendly code, but personally I wouldn't start there: build a prototype with the number of enemies you think you'll want, and only move work to the GPU if you actually start running into performance issues.