Recommended Posts

The main AI I'm knowledgeable about is board game playing programs (chess, checkers, etc.). I'm curious how a strategy similar to those used in board game AI would work for a real-time game, such as a fighting game or a sports game.
I have often wondered how a football team (American football) using a search-and-evaluate approach would perform against one using a finite state machine, which is the classic approach to such games (AFAIK).
While the finite state machine team (team FSM) would have premade plays, the search-and-evaluate team (team S&E) would have maybe 10 seconds before each play to look at the defensive formation and, based on how many yards it needed, search and evaluate, then run its play, continuing to search and evaluate as the play unfolds. Perhaps it could run a search once every 1/10th of a second (or longer) and use some kind of prediction, similar to what is done in multiplayer games where the client of a player on a slow connection doesn't have 100% real data and predicts where other players will be.
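The "continuously searching while the play runs" idea could be sketched as a loop that re-plans on a fixed interval rather than every frame. This is only an illustration of the timing structure; `run_play`, `choose_action`, and the world interface are all invented names, not anything from an actual game engine.

```python
import time

def run_play(world, choose_action, replan_interval=0.1):
    """Execute a play, re-running the search every `replan_interval` seconds.

    `world` is assumed to expose `play_over()` and `step(plan)`;
    `choose_action(world)` is the search-and-evaluate routine.
    """
    plan = choose_action(world)          # initial search before the snap
    next_replan = time.monotonic() + replan_interval
    while not world.play_over():
        world.step(plan)                 # execute the current best action
        if time.monotonic() >= next_replan:
            plan = choose_action(world)  # fresh search against the new state
            next_replan = time.monotonic() + replan_interval
```

With a 0.1-second interval this matches the "search once every 1/10th of a second" idea; if the search itself overruns the interval, you would act on the stale plan until the new one is ready, which is where the prediction idea comes in.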
I have thought about implementing some simple forms of this, such as a simple game where two teams of dots on the screen attack one another. The finite state machine team could have simple states: if there is an enemy in front of me, attack; otherwise, find the closest enemy and move toward him. The S&E team would (I think) form a plan and formulate a group attack to take out the other team player by player, one at a time.
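The FSM side of that dots game is only a few lines. Here is a minimal sketch, assuming agents live at (x, y) grid positions; the function name and the attack range are illustrative choices, not part of any existing codebase.

```python
import math

def fsm_action(agent, enemies, attack_range=1.5):
    """Classic FSM logic: attack a nearby enemy, otherwise chase the closest one."""
    if not enemies:
        return ("idle", None)
    closest = min(enemies, key=lambda e: math.dist(agent, e))
    if math.dist(agent, closest) <= attack_range:
        return ("attack", closest)
    # Step one unit toward the closest enemy (sign of the delta on each axis).
    dx = (closest[0] > agent[0]) - (closest[0] < agent[0])
    dy = (closest[1] > agent[1]) - (closest[1] < agent[1])
    return ("move", (agent[0] + dx, agent[1] + dy))
```

The S&E team is the interesting part, since it would need an actual state evaluation rather than hand-written rules like these.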
The real problem I foresee is that, since it's real time, the search and evaluation might not happen fast enough to generate a real plan.
Any thoughts?
Russell


Guest Anonymous Poster

I think you would also have a lot of difficulty expanding the set of possible states. I mean, there is a LOT of ground to cover; you would be expanding thousands of states. I might have the wrong idea of what you mean, though.


You could limit the options (or the "legal moves" if you'd prefer) to whatever you'd like. You could give each player a choice of 8 directions, and have each direction move not just 1 unit but several units at once to cut down on the depth you'd have to search. For a simple program, you could have two or three players on each team, and their "legal moves" could be:

Move up
Move down
Move left
Move right
Attack

That is only a branching factor of 10 or 15. That would allow a search to a small depth (and you wouldn't really have time for a deeper search anyway, since it's real time). In chess an efficient program can look at millions of positions each second, and I imagine that in a simpler game such as the one I'm describing, where there are only a handful of legal moves, you could easily look at thousands of states many times per second.
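The depth-limited search over those legal moves could look something like this sketch for a single agent chasing a single target. The move set and the distance-based evaluation function are assumptions chosen to keep the example tiny; a real version would expand joint moves for all players.

```python
import math

# Four movement moves; "attack" is omitted here to keep the example minimal.
MOVES = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def evaluate(pos, target):
    # Closer to the target is better, so negated distance works as a score.
    return -math.dist(pos, target)

def search(pos, target, depth):
    """Return (score, first_move) for the best move sequence of the given depth."""
    if depth == 0:
        return evaluate(pos, target), None
    best_score, best_move = float("-inf"), None
    for name, (dx, dy) in MOVES.items():
        score, _ = search((pos[0] + dx, pos[1] + dy), target, depth - 1)
        if score > best_score:
            best_score, best_move = score, name
    return best_score, best_move
```

With a branching factor this small, even a plain exhaustive search reaches several plies well within a 1/10th-second budget on modern hardware.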

So it makes sense in theory, but I'm not sure whether, when I actually try it, it will be too inefficient to work. I suppose we will see.

Russell


You'd be better served by taking a hierarchical planning approach to the problem.

Compute strategic decisions only once per play, and perform a hierarchical search on tactical decisions. At the highest level of this search are the strategic goals, or possibly a set of known sub-goals that can achieve the strategic goal (like break the offensive line, then sack the quarterback). At each lower level of detail, actions are more concrete but are constrained by the needs of the higher-level goals. You'll probably only need 3 levels of abstraction. By taking this approach you cut the search down to only local regions of the state space, where these regions are refined by your higher-level planning goals.
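The three levels of abstraction can be sketched as nested lookups: a strategy expands into ordered sub-goals, and each sub-goal constrains which concrete actions are even considered. Every goal and action name below is an invented example; a real planner would search within each sub-goal's constrained region instead of taking the first listed action.

```python
# Level 1 (strategic): chosen once per play.
STRATEGY_SUBGOALS = {
    "sack_quarterback": ["break_offensive_line", "reach_quarterback"],
    "defend_pass": ["cover_receivers", "pressure_quarterback"],
}

# Level 2 -> 3: each sub-goal limits the low-level actions worth searching.
SUBGOAL_ACTIONS = {
    "break_offensive_line": ["bull_rush", "swim_move"],
    "reach_quarterback": ["pursue", "dive_tackle"],
    "cover_receivers": ["shadow", "intercept"],
    "pressure_quarterback": ["blitz", "contain"],
}

def plan(strategy):
    """Expand a strategic goal into (sub-goal, action) pairs.

    Here we naively pick the first action per sub-goal; the point is that
    the search at each level only sees the options its parent goal allows.
    """
    return [(g, SUBGOAL_ACTIONS[g][0]) for g in STRATEGY_SUBGOALS[strategy]]
```

The payoff is that the low-level search never looks at actions irrelevant to the current sub-goal, which is exactly the "local regions of the state space" idea.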

I hope this makes sense! I'm not in a very eloquent writing mode this morning!