Using PathEngine with Multithreaded Applications

PathEngine is fully thread safe, as long as certain basic constraints are adhered to (see below).
Calls are synchronised inside the interface where necessary to ensure the
consistency of internal data structures.
Synchronisation is performed at a high level,
so the overhead for this synchronisation is minimal.

Object destruction

The client application is responsible for ensuring that objects
being used by other threads are not destroyed.
Calling methods on an object that has been destroyed or that is in the course
of being destroyed by another thread has undefined results.
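One straightforward way for the client application to meet this constraint is to give shared ownership of an object to every thread that uses it, so that destruction cannot begin while another thread still holds a reference. The following is a minimal sketch of this idea in portable C++; FakeAgent and its query() method are hypothetical stand-ins, not part of the PathEngine API:

```cpp
#include <atomic>
#include <cassert>
#include <memory>
#include <thread>
#include <vector>

// Hypothetical stand-in for a PathEngine object such as an agent.
struct FakeAgent
{
    std::atomic<int> queries{0};
    void query() { ++queries; }  // stands in for e.g. a collision test
};

// Share ownership via shared_ptr so the object cannot be destroyed
// while any worker thread still holds a reference to it.
int runWorkers(const std::shared_ptr<FakeAgent>& agent, int workers)
{
    std::vector<std::thread> threads;
    for (int i = 0; i < workers; ++i)
        threads.emplace_back([agent] { agent->query(); });
    for (auto& t : threads)
        t.join();
    return agent->queries.load();
}
```

With the real SDK the same principle applies: whatever ownership scheme is used, the client must guarantee that no thread can call into an object once destruction has begun.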

Agent and obstacle set / collision context destruction

On agent or obstacle set destruction PathEngine will automatically perform the necessary removal operations internally,
so the following additional constraint essentially follows on from the above constraint on object destruction:

It is not permitted to destroy an agent and a containing obstacle set or collision context simultaneously
from separate threads.

Doing this results in undefined behaviour.

Note that modifications to agent state during containing obstacle set or collision context
destruction (in another thread) are permitted.
PathEngine ensures the necessary synchronisation for this internally.

Duration of validity in cases where pointers to internal memory are returned

In cases where pointers to internal memory are returned
(for example, in iMesh::retrieveNamedObstacleByIndex()),
the duration of validity of the pointed-to memory is documented in the API reference for
the relevant method.
As long as this duration of validity is respected,
it is safe for multiple threads to call these methods and read from the pointed-to memory.
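A simple way to respect such a validity window is to copy the pointed-to data into client-owned storage immediately after the call. A minimal sketch, where retrieveName() is a hypothetical stand-in for a method returning a pointer into internal memory:

```cpp
#include <cassert>
#include <string>

// Hypothetical stand-in for a method that returns a pointer into
// internal memory, such as iMesh::retrieveNamedObstacleByIndex().
static const char* retrieveName()
{
    return "gate_03";
}

// Copy the pointed-to data while it is still valid, so later code
// does not depend on the internal buffer outliving the call.
std::string copyName()
{
    return std::string(retrieveName());
}
```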

Changes to shared contexts

It is safe to make changes to shared contexts,
and there should be minimal blocking between threads operating on these shared contexts.

Multithreaded use case: Loading and preprocessing in a background thread

Operations on different meshes are guaranteed not to block one another.
It is possible, then,
to load meshes and perform preprocess generation in a background
thread without stalling pathfinding or collision operations on other meshes
in a foreground thread.
An example application
is provided to demonstrate this functionality.
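The pattern can be sketched with a plain std::thread; loadAndPreprocessMesh() below is a hypothetical stub standing in for the actual mesh loading and preprocess generation calls:

```cpp
#include <atomic>
#include <cassert>
#include <thread>

std::atomic<bool> loaded{false};

// Stub standing in for loading a mesh and generating its preprocess
// (hypothetical; the real work would be PathEngine calls).
void loadAndPreprocessMesh()
{
    loaded = true;
}

int foregroundQueries()
{
    std::thread background(loadAndPreprocessMesh);
    int queries = 0;
    // Foreground pathfinding continues here; operations on a
    // *different* mesh are guaranteed not to block on the
    // background loading work.
    for (int i = 0; i < 100; ++i)
        ++queries;
    background.join();
    return queries;
}
```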

Multithreaded use case: Parallel preprocess generation

Within the context of a single mesh, base mesh collision and pathfind preprocess generation operations are non-blocking
(starting from release 5.20); specifically, generating preprocess for a given shape does not wait for any pending preprocess generation
operations for other shapes to complete.

When working with multiple pathfinding agent shapes the required preprocess can now be generated in parallel
to take advantage of multiple CPU cores where these are present.
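This can be sketched as one worker thread per agent shape; generatePreprocessForShape() below is a hypothetical stub standing in for the relevant PathEngine preprocess generation calls:

```cpp
#include <atomic>
#include <cassert>
#include <thread>
#include <vector>

std::atomic<int> shapesDone{0};

// Stub standing in for generating collision and pathfind preprocess
// for one agent shape (hypothetical name).
void generatePreprocessForShape(int /*shapeIndex*/)
{
    ++shapesDone;
}

// Generate preprocess for all shapes in parallel, one thread per
// shape; these operations do not block each other.
int generateAllPreprocess(int shapeCount)
{
    std::vector<std::thread> threads;
    for (int i = 0; i < shapeCount; ++i)
        threads.emplace_back(generatePreprocessForShape, i);
    for (auto& t : threads)
        t.join();
    return shapesDone.load();
}
```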

Multithreaded use case: Thread pools

In server applications it is common to dispatch operations to pools of worker threads.
For operations that block on hardware access (for example accessing files on a hard drive)
or on external events (for example on network messages)
dispatching to a thread pool can yield greatly improved performance over single threaded operation.
This is because threads can continue to run (and utilise the CPU) whilst other threads are blocked.

PathEngine does not perform any file access or other naturally blocking operations
internally.
On a single processor, therefore,
there is no performance gain from dispatching pathfinding requests to multiple threads.
On the contrary, using a pool of threads can significantly reduce performance, because
the contents of the processor caches will generally be invalidated by each context switch.
If you are using a single-processor machine as a pathfinding server,
it is therefore recommended to queue pathfinding requests as they come in and then dispatch them
in sequence.

On a multiprocessor machine, however, or on a CPU with multiple cores,
it can make sense to maintain a small thread pool with one thread per processing unit.
An example application
is provided to measure throughput for thread pools of varying sizes.
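A minimal worker pool along these lines can be sketched in portable C++. The queued "requests" here are plain callables, where a real pathfinder server would queue PathEngine queries; the pool is sized by the caller, e.g. from std::thread::hardware_concurrency():

```cpp
#include <atomic>
#include <cassert>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Minimal worker pool: requests are queued as they come in and
// dispatched to a fixed number of workers (one per processing unit
// is a sensible default on multi-core hardware).
class WorkerPool
{
public:
    explicit WorkerPool(unsigned workers)
    {
        for (unsigned i = 0; i < workers; ++i)
            threads_.emplace_back([this] { run(); });
    }
    ~WorkerPool()  // drains the queue, then joins all workers
    {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            stopping_ = true;
        }
        condition_.notify_all();
        for (auto& t : threads_)
            t.join();
    }
    void post(std::function<void()> request)
    {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            requests_.push(std::move(request));
        }
        condition_.notify_one();
    }

private:
    void run()
    {
        for (;;)
        {
            std::function<void()> request;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                condition_.wait(lock, [this] {
                    return stopping_ || !requests_.empty();
                });
                if (requests_.empty())
                    return;  // stopping, and the queue is drained
                request = std::move(requests_.front());
                requests_.pop();
            }
            request();  // run outside the lock
        }
    }
    std::mutex mutex_;
    std::condition_variable condition_;
    std::queue<std::function<void()>> requests_;
    std::vector<std::thread> threads_;
    bool stopping_ = false;
};
```

Note that with one worker (the single-processor case) this degenerates to exactly the queue-and-dispatch-in-sequence approach recommended above.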

The
'ThreadPerMesh' example application
shows one straightforward way to split execution across multiple threads,
where multiple iMesh instances need to be updated.
(But note that this is not necessarily the best way to set things up!
Setting up optimal multithreaded architectures is tricky and there are a lot of different ways in which
PathEngine can be applied across multiple threads.)

Multithreaded use case: Pathfinding across frames

In complex situations where there are many agents active and running movement-based
behaviours at the same time,
the frequency of pathfinding requests from those agents can easily become unpredictable.
If a lot of pathfinding requests come in from the behaviour code on a single frame
then this can lead to dropped frames and jerkiness.

In many cases pathfinding queries will be fast enough so that multiple queries
can be dispatched per frame without any noticeable variation in frame rate.
In this case an 'external scheduler' approach can be used,
where pathfinding requests are managed by external scheduling code that
treats pathfinding requests as atomic,
and simply rations the number of pathfinding requests permitted on a given frame.
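An external scheduler of this kind can be sketched independently of the SDK; the queued requests below are plain callables standing in for atomic pathfinding queries:

```cpp
#include <cassert>
#include <functional>
#include <queue>

// 'External scheduler' sketch: pathfinding requests are queued and
// at most maxPerFrame of them are executed on any given frame.
class PathfindScheduler
{
public:
    explicit PathfindScheduler(int maxPerFrame)
        : maxPerFrame_(maxPerFrame)
    {
    }
    void request(std::function<void()> query)
    {
        pending_.push(std::move(query));
    }
    // Call once per frame; returns the number of queries run.
    int update()
    {
        int run = 0;
        while (run < maxPerFrame_ && !pending_.empty())
        {
            pending_.front()();  // each query is treated as atomic
            pending_.pop();
            ++run;
        }
        return run;
    }

private:
    int maxPerFrame_;
    std::queue<std::function<void()>> pending_;
};
```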

In some cases, however,
(depending on the hardware used, mesh complexity, and so on)
it is possible for query times to be too long for an external scheduler approach.
If query times are long enough for a single query to cause noticeable jerkiness
then a background thread can be used to run pathfinding queries across frames.

Running long queries in a background thread does not block
context management or other pathfinding operations in a foreground thread.
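A minimal sketch of this pattern with std::async follows; longQuery() is a hypothetical stub for an actual long-running pathfinding call:

```cpp
#include <cassert>
#include <future>

// Stub standing in for a long pathfinding query (hypothetical).
int longQuery()
{
    return 42;
}

// Run the query on a background thread so a single slow query never
// stalls the frame loop; the frame loop can poll the future each
// frame (e.g. with wait_for) and collect the result once ready.
int runAcrossFrames()
{
    std::future<int> result = std::async(std::launch::async, longQuery);
    // ... render frames, polling for completion, in the meantime ...
    return result.get();
}
```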

When multithreaded operation is not required

It's possible to build a single threaded version of the SDK from the source code packages.
See this page for the relevant preprocessor define.
In this case, for maximum benefit, project configuration should also be modified,
e.g. to switch to single-threaded versions of the C run-time library.