Pathfinding Antipatterns

So, while waiting for Thrive to compile, I was reading through the old AI threads floating around, and I noticed a worrying trend.

Whenever the question of pathfinding comes up, the answer is usually something like A* over navmeshes. Now, there's nothing wrong with an agent doing A* over a navmesh. But what about when you have thousands of agents, each doing its own A*? Then you have an issue: loads of work gets recalculated per-agent. I know, I know, you'll tell me we can just cache A* results, but then we've basically reinvented Floyd-Warshall, with extra complications.

So let's say we generate a navmesh and run Floyd-Warshall over it every frame. Now agents get the shortest path to their chosen target for free. Of course, this solution isn't optimal either: first off, if you don't have a high enough agent density, your complexity just shot up. More important, however, is another massive inefficiency -- we may no longer be recalculating paths per-agent, but we're still recalculating all paths every frame. That's almost as bad -- in general, the environment changes incrementally per-frame, and a pathfinder that can't take advantage of that repetition is in trouble.
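
To put rough numbers on that tradeoff, here's a back-of-envelope sketch. Every number is an assumption for illustration (V nodes, grid-like connectivity, textbook big-O costs), not a measurement from Thrive:

```python
import math

# Back-of-envelope comparison, not a benchmark. Assumed graph shape:
# V nav-graph nodes with grid-like connectivity, so E is about 4V.
V = 10_000
E = 4 * V

def per_agent_astar_ops(k):
    """k independent A*/Dijkstra runs per frame, each roughly O(E log V)."""
    return k * E * math.log2(V)

def floyd_warshall_ops():
    """All-pairs shortest paths recomputed every frame: O(V^3)."""
    return V ** 3

# Agent count at which recomputing *everything* starts to beat per-agent A*:
break_even = floyd_warshall_ops() / (E * math.log2(V))
print(round(break_even))  # on the order of millions of agents for this V
```

So unless agent density is absurd, paying V^3 per frame loses badly to per-agent searches, which is exactly the "complexity just shot up" problem above.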

"But moopli!" says my strawman, "I call bullgium! There can't possibly be an incremental, non-agent-based pathfinding technique that gives us the power we need!", to which I say Pah!

Assuming you have read the paper, here are some more reasons I have for supporting this method:

It works well with the ECS -- a system can manage all the diffusion fields, and entities that need to follow certain gradients can register appropriate components.

It is scientifically accurate, at least for the Microbe stage and for organisms relying heavily on scent -- it uses a diffusion model to set up the concentration gradients that agents then climb. For example, adjusting the diffusion-particle decay rate can produce scent trails.

It models time lag in information gathering, decision-making, and more -- useful for strategy mode, where it can lead squads to make understandable mistakes in navigation (not knowing that their target has moved, perhaps). One hallmark of realistic-seeming AI is that it makes reasonable mistakes -- the sort of mistake you'd expect from something intelligent.

It easily affords simple tactics with no overhead -- the paper demonstrates flanking maneuvers, for one.
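
For anyone who hasn't gotten to the paper yet, the core loop really is tiny. Here's a minimal diffuse-then-climb sketch in Python -- my own toy grid size and parameter values, nothing taken from the paper or from Thrive:

```python
def diffuse(field, sources, rate=0.25, decay=0.9):
    """One diffusion step: every cell blurs toward its neighbour average and
    decays; source (goal) cells are clamped back to full strength."""
    h, w = len(field), len(field[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            nb = [field[ny][nx]
                  for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                  if 0 <= ny < h and 0 <= nx < w]
            avg = sum(nb) / len(nb)
            out[y][x] = decay * (field[y][x] + rate * (avg - field[y][x]))
    for (sy, sx), strength in sources.items():
        out[sy][sx] = strength  # goals constantly re-emit scent
    return out

def climb(field, y, x):
    """Agent step: move to the strongest-scented neighbour (hill climbing)."""
    best = (y, x)
    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
        if (0 <= ny < len(field) and 0 <= nx < len(field[0])
                and field[ny][nx] > field[best[0]][best[1]]):
            best = (ny, nx)
    return best
```

An agent dropped anywhere on the grid just calls `climb` once per frame and ends up at the nearest source -- all the path knowledge lives in the field, not the agent.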

One important implementation detail I'm still thinking about is the topology of the underlying graph that all the diffusion and pathing is done over. In the paper, all these graphs are regular grids, which really simplifies the implementation but could have troublesome overhead on larger spaces. The first solution I could think of is to diffuse over navmeshes, but that runs into issues when you want scent information at a higher resolution than navmesh polygons give you -- improving the performance of one hurts the other -- and of course we'd need to dynamically generate and modify the navmeshes without introducing ugly artifacts (but that borders on OT). The other option I came up with is to store the gradient data in a quadtree (or an octree where absolutely necessary). That way, we can adjust resolution as we need it.
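
To show the quadtree option isn't much machinery, here's a sketch -- the structure and names are hypothetical, just enough to demonstrate storing scent at adaptive resolution:

```python
class ScentNode:
    """Quadtree cell holding one scent value; subdivide only where the
    pathfinder needs finer resolution."""

    def __init__(self, x, y, size, value=0.0):
        self.x, self.y, self.size = x, y, size
        self.value = value
        self.children = None  # None means this node is a leaf

    def split(self):
        """Subdivide into four quadrants, each inheriting the parent value."""
        if self.children is None:
            half = self.size / 2
            self.children = [ScentNode(self.x + dx * half, self.y + dy * half,
                                       half, self.value)
                             for dy in (0, 1) for dx in (0, 1)]

    def sample(self, px, py):
        """Scent value at a point, descending to the finest cell covering it."""
        if self.children is None:
            return self.value
        half = self.size / 2
        ix = 1 if px >= self.x + half else 0
        iy = 1 if py >= self.y + half else 0
        return self.children[iy * 2 + ix].sample(px, py)
```

Coarse leaves cover open water cheaply, while quadrants near food or obstacles can be split a few levels deeper without touching the rest of the tree.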

Longpost is long, thanks for reading!

Edit: Is this the right place for this? I used a greedy approach to find the right subforum, but this forum is all over the place so I'm not sure if greedy works here...

Edit2: Elsewhere,

NichderNich wrote:

The ultimate goal will be to have compounds exist in particle clouds that float around and spread in your environment, like droplets of ink in water.

As a bonus this could also be implemented as part of the diffusion system, and it would feed seamlessly into pathfinding. Yay science!

Last edited by moopli on Sun May 18, 2014 12:42 pm; edited 2 times in total (Reason for editing : inspiration strikes again)

Looks interesting, but you mention the microbe stage, for which I don't see any need for pathfinding, as there are no obstacles other than other microbes, which can reasonably brute-force past each other or be handled by some reactionary system.

Naturally it could become useful in the later stages. However, a quick read through the Pac-Man example seems to suggest that it is best used for multiple entities working towards a shared goal, while I suspect our future needs will involve many entities with many distinct goals. Of course, A* (or any other solution I can think of) doesn't solve the optimization issues there either, but it seems to me that the concept deflates in the situation with multiple goals. Correct me if I misunderstand.

For microbes, this is more of a case of turning a simple representation of nutrient levels into a convenient pathfinder, by copying nature directly.

In many cases, we may have many interchangeable goals. For example, there will be many bits of fruit within reach of our monkey' (read: monkey-prime), and any piece will do, so monkey' goes for the closest. Then, if we have quite a few monkey' (our first one was quite prolific), they'll all want fruit, but any fruit will do. Since fruit are interchangeable, we can put every piece of fruit into a single shared diffusion field. As a side bonus, several pieces of fruit far away could be more appealing than one piece slightly closer but on the other side of monkey' -- making monkey' appear to plan ahead even though all it does is follow its nose.
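
One way to see why interchangeable fruit can share a single field: each blur-decay-deposit step is linear in the deposits, so the field produced by many fruit is exactly the elementwise sum of the per-fruit fields. A toy Python sketch (my own parameters; deposits are additive here rather than clamped, which is what makes the linearity exact):

```python
def step(field, deposits, rate=0.25, decay=0.9):
    """One frame: blur toward the neighbour average, decay, add deposits."""
    h, w = len(field), len(field[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            nb = [field[ny][nx]
                  for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                  if 0 <= ny < h and 0 <= nx < w]
            avg = sum(nb) / len(nb)
            out[y][x] = decay * (field[y][x] + rate * (avg - field[y][x]))
    for (y, x), amount in deposits.items():
        out[y][x] += amount  # each piece of fruit just deposits scent
    return out

def settle(deposits, size=6, frames=25):
    """Apply the same deposits for several frames, from an empty field."""
    field = [[0.0] * size for _ in range(size)]
    for _ in range(frames):
        field = step(field, deposits)
    return field
```

So adding or removing a fruit never forces a recompute of anything agent-side: the field absorbs the change, and every monkey' keeps reading the same grid.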

We can even merge some non-interchangeable goals: for example, if the value of goal type A is a constant multiple of that of goal types B, C, and D, then it would simplify matters greatly to just use one field with sources of differing strengths.

As for dealing with distinct, possibly-conflicting goals (that is, non-constant multipliers), this approach does deflate. Then the best we can do (I think?) is to reduce the set of necessary distinct fields (for example, instead of having separate fields for pathing towards the enemy and fleeing from battle, we could use one "danger" field, where fleeing units simply go down the gradient instead of up), and to reduce overhead by storing several diffusion types in one graph (among other optimizations).
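
The one-field chase/flee trick might look like this -- a hand-built distance-based field stands in for the diffused one, and all names are mine:

```python
def danger_field(w, h, sy, sx):
    """Stand-in for a diffused 'danger' field: stronger nearer the source."""
    return [[1.0 / (1 + abs(y - sy) + abs(x - sx)) for x in range(w)]
            for y in range(h)]

def step_agent(field, y, x, attack=True):
    """Attackers climb the gradient; fleeing units descend the *same* field,
    so one map serves both behaviours."""
    sign = 1 if attack else -1
    best, best_v = (y, x), sign * field[y][x]
    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
        if 0 <= ny < len(field) and 0 <= nx < len(field[0]):
            v = sign * field[ny][nx]
            if v > best_v:
                best, best_v = (ny, nx), v
    return best
```

The flip costs one multiplication per agent, which is the appeal: no second field, no second diffusion pass.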

We may be able to treat special cases where A* or something else is necessary as modified forms of the diffusion-gradient system, though I'd consider that more of an academic exercise.

On the contrary, the microbe stage would most likely have the simplest and most useful diffusion system. We could simply use the compounds spawned in the environment to increment the grid cells nearby -- so we get the particles, which are the most intensive part of the simulation, for free. Then just do a pass -- cheap when we'd only be simulating about 500 grid cells at a time, enough for more than double the screen space when each grid cell is about a cell big (nope, not ambiguous at all, nope) -- where we apply a blur and a decay. And to pathfind, we'd just let each cell (not grid, but swimmy) decrement the compounds under it and hill-climb.
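
The decrement-and-climb part of that pass, sketched with made-up numbers (the grid here is assumed to be already post-blur):

```python
def graze_and_move(field, positions, bite=0.5):
    """Each swimming cell eats from the grid cell under it, then hill-climbs.

    Eating happens before moving, so crowded food smells weaker to everyone
    and microbes drift toward fresher pastures on their own.
    """
    h, w = len(field), len(field[0])
    for y, x in positions:
        field[y][x] = max(0.0, field[y][x] - bite)  # consume compounds
    new_positions = []
    for y, x in positions:
        best = (y, x)
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w
                    and field[ny][nx] > field[best[0]][best[1]]):
                best = (ny, nx)
        new_positions.append(best)
    return new_positions
```

Note there's no per-microbe path state at all: the consumption writes straight into the compound grid, which is the same data the renderer and chemistry would already be using.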

We could even go all the way and store free-floating compounds in the grid, but I won't belabor the point, as I'd rather see the game released.

For later stages we'll definitely wait -- actually, I had the urge (now suppressed) to ask a mod to lock this thread until its time arrives.

Last edited by moopli on Sun May 18, 2014 6:21 pm; edited 1 time in total (Reason for editing : bortle)

Some very nice suggestions in there, moopli. I agree that we won't need anything particularly advanced until at least the aware stages, as the environments will be very simple (with few, if any, immovable obstacles).

I think A* came up as a suggestion mostly because it's the one most people are familiar with. As we're a long way from actually needing pathfinding, the discussion never got much further. I've always thought we'd go with a mixture of boids-style movement, possibly with diffusion maps for finding the direction to the nearest food/shelter, etc.

What you're suggesting sounds like a more advanced way of calculating diffusion maps, which may be useful, so I'll try to have a proper read of the paper when I get the chance.

One consideration is that we probably won't have very many agents simulated at the same time. Due to the computational expense of AI, pathfinding, the compound system, and everything else we hope to simulate, only those agents relatively close to the player will be fully simulated (something like 1-200 agents at most). Those agents outside the player's area will be vastly simplified, and probably won't need any pathfinding at all (they will either not have a tracked position, or be allowed to ignore obstacles).

One of the more interesting things this could model -- which, methinks, would work best in Microbe stage -- is agents pathing away from resources if there's enough competition. That is, since microbes decrease the desirability of cells they've inhabited, a microbe sitting between J. Random AI Microbe and some food might cause J. Random AI Microbe to go looking for fresher pastures. It isn't so much that we can solve the problem of getting from A to B; the trick is that by letting the environment keep track of some info, we can give simpler agents more complicated, smarter-seeming behavior. It is, more or less, offloading some processing to the environment itself so that the agents within it can operate at a higher level. So it isn't just about pathfinding :3

Seregon wrote:

Those agents outside the player's area will be vastly simplified, and probably won't need any pathfinding at all (they will either not have a tracked position, or be allowed to ignore obstacles).

Very true -- and that would mean a smaller diffusion map to track. Though again, I may have misrepresented the map, since it isn't all about pathfinding.

Edit: It just occurred to me that we could parallelize the blending step using some OpenCL, which would cut the time cost down a lot. For that, of course, we'd need to add OpenCL as a dependency, but that would (almost definitely) pay dividends later. (Conveniently enough, I'm learning OpenCL at work right now :P)

Last edited by moopli on Sat May 24, 2014 4:11 pm; edited 1 time in total (Reason for editing : yay OpenCL)