Has anyone tried to develop, or know of, an algorithm such as would be used in a typical turn-based game like Advance Wars, where the number of objects and the number of moves per object may be too large to search to a reasonable depth, as one would do in a game with a smaller search space like chess?

There is some pathfinding needed to engage in combat, harvest, or move to an object, so that such actions are possible on the next move.

With this you can build a search tree for each item, resulting in a large tree for all items. With a cost function one can determine the best moves.

Then the board flips over to the player's role (min/max) and the computer searches for the best player move, flips back, and so on, up to a number of cycles deep.

Finally it has found the best move and now it's the player's turn. But he may be asleep by now...

So how is this done in practice?

I have found several good sources on A*, DFS, BFS, evaluation / cost functions etc. But as of yet I do not see how I can put it all together.

Min-max with pruning, like you mentioned, along with static opening sequences and heuristics about what the AI is trying to achieve at each point. All of this is just like chess.
–
BlueRaja - Danny Pflughoeft Jun 9 '11 at 19:30

@BlueRaja You should really make that an answer
–
Ray Dey Jun 9 '11 at 22:32

3 Answers

Pathfinding (A*) and high-level systems (state machines) are not an either/or choice. You'll want to use both. The high-level systems tell each unit how to act, and each unit performs that action the best way it knows how (pathfinding).

I doubt that Advance Wars forgoes pathfinding for unit movement. The maps are very small and the unit cap is low; given the Nintendo DS's specs, A* on such a manageable scenario would hardly be time-consuming.

I think you're looking at this like chess, where the AI plans so many moves ahead for each unit. It doesn't work like that. Ray Dey does a good job of explaining how strategy game AI works, so I'll link to his explanation: http://gamedev.stackexchange.com/a/21617/22314

Here are a few things I'll add:

There are many methods to make the high level A.I. system.

Finite-state machines, as mentioned, are the most basic systems. You define states (idle, attack, defend, etc.), how the AI behaves in each state, and how and when the AI should move between the various states. Do that, and you can watch the AI move on its own.
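A minimal FSM sketch in Python; the states, events, and transition table here are made up for illustration, not from any particular game:

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    ATTACK = auto()
    DEFEND = auto()

class UnitFSM:
    """Minimal finite-state machine: transitions fire on named events."""
    # (event, current state) -> next state; unknown pairs keep the state.
    TRANSITIONS = {
        ("enemy_spotted", State.IDLE): State.ATTACK,
        ("low_health", State.ATTACK): State.DEFEND,
        ("area_clear", State.ATTACK): State.IDLE,
        ("healed", State.DEFEND): State.ATTACK,
    }

    def __init__(self):
        self.state = State.IDLE

    def handle(self, event):
        self.state = self.TRANSITIONS.get((event, self.state), self.state)
        return self.state

fsm = UnitFSM()
fsm.handle("enemy_spotted")   # IDLE -> ATTACK
fsm.handle("low_health")      # ATTACK -> DEFEND
```

The whole behaviour lives in one lookup table, which is why FSMs are easy to start with and hard to scale: every new state multiplies the transitions you have to define.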

There are behaviour trees, which eschew states and work like a decision tree, similar to a programming flowchart.
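A toy behaviour-tree sketch; the node types and the example tree are my own illustration, not from any particular engine:

```python
# Each node's tick() returns True (success) or False (failure).
class Selector:
    """Tries children in order; succeeds on the first child that succeeds."""
    def __init__(self, *children): self.children = children
    def tick(self, ctx):
        return any(child.tick(ctx) for child in self.children)

class Sequence:
    """Runs children in order; fails on the first child that fails."""
    def __init__(self, *children): self.children = children
    def tick(self, ctx):
        return all(child.tick(ctx) for child in self.children)

class Condition:
    def __init__(self, pred): self.pred = pred
    def tick(self, ctx): return self.pred(ctx)

class Action:
    def __init__(self, fn): self.fn = fn
    def tick(self, ctx): self.fn(ctx); return True

# "If an enemy is in range, attack; otherwise patrol."
tree = Selector(
    Sequence(Condition(lambda c: c["enemy_in_range"]),
             Action(lambda c: c.setdefault("log", []).append("attack"))),
    Action(lambda c: c.setdefault("log", []).append("patrol")),
)

ctx = {"enemy_in_range": False}
tree.tick(ctx)   # no enemy -> falls through to "patrol"
```

The flowchart feel comes from composing Selectors and Sequences; unlike an FSM there is no transition table to maintain, just the tree shape.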

There's goal-oriented action planning (GOAP). This is where you define actions that units can do, the goals for a unit, and which action satisfies which goal. It's then up to the AI to decide which action to take to achieve some goal.
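A very small GOAP-style planner sketch, assuming a boolean world state and two made-up actions; a real planner would search over action costs (often with A*) rather than brute-force orderings:

```python
from itertools import permutations

# Each action: (preconditions, effects) over a simple boolean world state.
ACTIONS = {
    "get_axe":   ({"has_axe": False}, {"has_axe": True}),
    "chop_wood": ({"has_axe": True},  {"has_wood": True}),
}

def applicable(state, pre):
    return all(state.get(k, False) == v for k, v in pre.items())

def plan(state, goal):
    """Tiny brute-force planner: try action orderings until the goal holds."""
    for order in permutations(ACTIONS):
        s = dict(state)
        steps = []
        for name in order:
            pre, eff = ACTIONS[name]
            if applicable(s, pre):
                s.update(eff)
                steps.append(name)
            if all(s.get(k, False) == v for k, v in goal.items()):
                return steps
    return None

plan({"has_axe": False}, {"has_wood": True})
# -> ['get_axe', 'chop_wood']
```

The point is the division of labour: you only declare what each action needs and produces, and the planner chains them toward the goal on its own.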

Working with @Gajet's and @BlueRaja's input (the optimum is somewhere in the middle?):

Let's assume a finite state machine for the item/character that decides, at the item level, how it behaves (e.g. idle, create and follow path, attack, capture building, heal, defend other item), with some cost function. That would give 6 possible scores for this item.
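A sketch of how such a per-item cost function could rank the six states; the weights are placeholders of my own (a real score would depend on terrain, hit points, threats, and so on):

```python
# The six candidate states from the text; scores are illustrative only.
STATES = ["idle", "move", "attack", "capture", "heal", "defend"]

def score_state(unit, state, world):
    # Hypothetical flat weights; a real game would evaluate the situation.
    weights = {"idle": 0, "move": 1, "attack": 5, "capture": 8,
               "heal": 3, "defend": 2}
    return weights[state]

def ranked_states(unit, world):
    """Return the unit's candidate states ordered best-first by score."""
    return sorted(STATES, key=lambda s: score_state(unit, s, world),
                  reverse=True)

ranked_states("infantry", None)[:2]   # -> ['capture', 'attack']
```

Ranking best-first matters later, when the search tree only expands each unit's top one or two states.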

Note: behaviour may not be the right term, because it's the overall AI that decides in the end what each item is to do (see below).

In the screenshot below, red has 5 items and blue has 7 (neglecting the factories for the time being). Let's assume blue is the AI and is to move next.

With 7 units and 6 states/scores per unit, there are 7*6=42 possible scores in total, but they are not independent, as the scores of items may interfere: if the enemy is killed by item A, it cannot be killed again by item B.

So this is where the search tree starts: out of the 42 states/scores, select one, e.g. starting with the highest scores first. With this state fixed, there are 6*6=36 states remaining for the second level of this node. We now need to check whether the situation has changed because of the action of item 1, or whether the remaining 36 states are still valid. With or without an update, again pick one out of the 36 and proceed with the remaining 5 items. When the lowest level is reached, move back up to the top, pick the 2nd-best move of the 1st item, and build the next branch. This creates a tree of 42*36*30*24*18*12*6=1.4 billion nodes.

This is too much, but it is also clear that there are many useless nodes in this tree: for one, the order of actions may have an effect, but in most cases it has none. Also, branches with low scores may turn out to be beneficial after the opponent has played (like a piece sacrifice in chess), but most of them are not. (How to decide?)

To reduce the search tree, you could decide to limit the number of states per item to be considered, e.g. only pick the best and 2nd-best (this could be a parameter in the program settings). This helps: 7*2=14 possible states to start with, and 14*12*10*8*6*4*2=645 thousand nodes in total. Probably still many duplicates.
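The branching-factor arithmetic above can be checked with a few lines of Python (the `tree_size` helper is just for this calculation):

```python
from math import prod

def tree_size(units, states_per_unit):
    """Nodes in the tree: each level removes one unit but keeps all states.
    Level i has (units - i) * states_per_unit choices."""
    return prod((units - i) * states_per_unit for i in range(units))

tree_size(7, 6)   # 42*36*30*24*18*12*6 = 1,410,877,440 (~1.4 billion)
tree_size(7, 2)   # 14*12*10*8*6*4*2   = 645,120 (~645 thousand)
```

So capping each unit at its 2 best states cuts the tree by more than three orders of magnitude, at the risk of never considering a low-scoring move that would have paid off later.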

This can presumably still be improved a great deal, but a search tree has been created. Add up the scores per branch to the top and select a branch (there will be many) with the highest score. This is the first AI move.

Now we get to the minimax.

Flip sides and repeat the above for the red items, minimizing the score. Pick a move, execute it, and flip back to the blue side. Depth = 1.

This blows up, because for every blue move (645 thousand) red can respond with a similar number of moves (actually 5*2=10 to start with and 10*8*6*4*2=3,840 in this case), and if red also had 7 units the total number of combinations would be about 0.4 trillion. And for each of these you would like to search several levels deep...
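For the minimax part itself, here is a generic alpha-beta sketch; the `moves`, `apply_move`, and `evaluate` callbacks are placeholders you'd supply from the game model (for the scheme above, `moves` would enumerate the pruned per-item state combinations and `evaluate` would be the cost function):

```python
def minimax(state, depth, alpha, beta, maximizing, moves, apply_move, evaluate):
    """Generic minimax with alpha-beta pruning over an abstract game state."""
    ms = moves(state)
    if depth == 0 or not ms:
        return evaluate(state)
    if maximizing:
        best = float("-inf")
        for m in ms:
            best = max(best, minimax(apply_move(state, m), depth - 1,
                                     alpha, beta, False,
                                     moves, apply_move, evaluate))
            alpha = max(alpha, best)
            if beta <= alpha:
                break  # prune: the minimizer will never allow this line
        return best
    else:
        best = float("inf")
        for m in ms:
            best = min(best, minimax(apply_move(state, m), depth - 1,
                                     alpha, beta, True,
                                     moves, apply_move, evaluate))
            beta = min(beta, best)
            if beta <= alpha:
                break  # prune: the maximizer already has a better option
        return best

# Toy check: both sides add a number 1-3 to the state, leaves score the sum.
# The maximizer picks 3, the minimizer then picks 1, so depth 2 gives 4.
result = minimax(0, 2, float("-inf"), float("inf"), True,
                 lambda s: [1, 2, 3], lambda s, m: s + m, lambda s: s)
```

Pruning helps most when `moves` returns the best candidates first, which is exactly what the best-and-2nd-best ordering above provides.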

If you are only asking about the pathfinding problem, you can use the answer to this question; but for the whole idea of how to implement an AI, read the rest of my answer.

It's not always a pathfinding algorithm like A* that decides what the AI should do. Usually, for games with too many possibilities, a state machine controls the actions, and search algorithms like A* are used both to describe how the chosen action should be carried out and to extract usable information from all the game data.

For example, I've been on a RoboCup 3D soccer simulation team. The game is just like real soccer, with humanoid robots moving around the field, but you only have control over the joints of the robot's body, so at first you have to try not to lose your balance while moving! I think this problem is just as hard as implementing AI for a strategy game.

In our team (and the other teams in the league) there were three stages of thinking. First, understand what the server tells us about the game field: the server describes the field using positions relative to the player, so we have to transform all the input to work out the world positions of the objects seen. Second, choose which action the player should take: should it run toward the ball? Is the robot already in possession of the ball and should it kick? Should it pass to another player? And so on. This was decided using a state machine. The last stage was to describe how to perform the action the state machine chose: for example, if it's running, how should it run to maintain balance, or if it falls, what should it do to stand up?

To apply this approach to your AI problem, first gather all the input you can; in a strategy game that can be the number of units, their positions, and their types. Remember, if you want your AI to act naturally, you can't give it full information on the map: if the player can't see what the AI is doing, the AI shouldn't be able to know what's happening in the player's base. Then you have to extract useful information from that data. For example, it's not really important whether your opponent has many units far away from you; the only important ones are those nearby.

The next step is to decide, at a high level, what your AI should do in the next turn. Think like a general: you have some information, and for any action you order, your soldiers know how to obey. Based on the information you give an order; it can be moving units to some specific point, constructing a building, or whatever you think needs to be done at that moment. So there is a list of data and a list of possible high-level actions, and it's usual to use a state machine to determine which action is best for each dataset. At this stage the state machine can also use algorithms to generate new data to help itself decide. For example, based on the positions of the units, you can use A* to determine the minimum time needed to gather your soldiers around the opponent's base.
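The A* step mentioned here could look like this on a simple grid; 4-connected movement with unit step cost is assumed, and all names are illustrative:

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 4-connected grid; grid[y][x] == 1 marks an obstacle.
    Returns the list of (x, y) cells from start to goal, or None."""
    def h(p):  # Manhattan-distance heuristic, admissible on this grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_set = [(h(start), 0, start, [start])]  # (f, g, position, path)
    best_g = {start: 0}
    while open_set:
        f, g, pos, path = heapq.heappop(open_set)
        if pos == goal:
            return path
        x, y = pos
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= ny < len(grid) and 0 <= nx < len(grid[0])
                    and grid[ny][nx] == 0):
                ng = g + 1
                if ng < best_g.get((nx, ny), float("inf")):
                    best_g[(nx, ny)] = ng
                    heapq.heappush(open_set,
                                   (ng + h((nx, ny)), ng,
                                    (nx, ny), path + [(nx, ny)]))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = a_star(grid, (0, 0), (0, 2))  # routes around the wall in row 1
```

The same routine can answer the "minimum time to reach the opponent's base" question: the path length divided by unit speed is the estimate the state machine needs.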

For the last step you have to translate those high-level orders into game-level commands. For example, you've already defined how tanks move to the point a player assigns them with a click, so you can easily translate what your state machine decides into the available user commands, like mouse clicks or keyboard shortcuts. You can also implement AI-only versions of the controls, like setting the exact position of AI units directly, but remember that giving your AI many more abilities than the player has may lead to an unrealistic opponent.

@Gajet: thanks for this. I haven't studied state machines yet; will do. This seems to be a different (one level deep?), deterministic approach. It should significantly reduce CPU requirements. Rather than have the AI search for the best solution (brute force, as in chess), I may have to pre-code its behaviour. I think this may indeed be how Advance Wars is set up.
–
Jan de Lange Jun 10 '11 at 22:15