2D vision system with Ashley and Box2D

In Sloppynauts the player had to remain undetected, avoiding CCTV cameras and alien baddies. We constantly had to determine who could see whom and whether the player was hidden behind something. We wrote a nice reusable system using Ashley and Box2D and I think it’d be a shame if it went to waste. So here it is in case you’d like to use it.

This 2D vision system is generic enough to work with both side-scrolling and top-down games. Surely it can be further optimised and tailored to your needs… Hey, it was done for a game jam! Nevertheless, it could be a decent starting point!

Vision System concepts

We want to give some of the entities in our game world the ability to see other entities. Both observers and observables will necessarily have a location in our game world. However, we need some extra information about our observers, specifically the area they can cover at any given point in time, i.e. their field of view. In the diagram below you can see a couple of observers and three observables. One of the observables can be seen, the second one is completely outside of both FoVs, whilst the third one is hidden behind a box. We simply want to ask our system: “can this entity see this other entity?”

Observable and Observer components

Our Observable component is pretty trivial: it simply has a position.

import com.badlogic.ashley.core.Component;
import com.badlogic.gdx.math.Vector2;

public class ObservableComponent implements Component {
    public Vector2 position = new Vector2();
}
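The Observer component carries the extra FoV information mentioned above. The exact fields below (facing direction, FoV aperture and vision distance) are an assumption about how you might model it; adapt them to your game.

import com.badlogic.ashley.core.Component;
import com.badlogic.gdx.math.Vector2;

public class ObserverComponent implements Component {
    public Vector2 position = new Vector2();
    public float facingAngle = 0.0f;  // direction the observer looks at, in degrees
    public float fovAngle = 90.0f;    // aperture of the field of view, in degrees
    public float fovDistance = 10.0f; // how far the observer can see, in world units
}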

We will also need the collection of entities with an ObservableComponent; they are the candidates to make it into the vision map as targets.

private ImmutableArray<Entity> observables;

The addedToEngine() and removedFromEngine() methods are invoked whenever we register the system with the engine or remove it. We can hook into them to grab the immutable list of observables as well as to register our vision system as a listener for observers. That way, we can pre-populate and clear up our vision map as observers come and go.
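As a sketch, assuming VisionSystem implements Ashley’s EntityListener and keeps its vision map as an ObjectMap from each observer to the set of entities it currently sees (both assumptions on my part), it could look like this:

private final ObjectMap<Entity, ObjectSet<Entity>> visionMap = new ObjectMap<Entity, ObjectSet<Entity>>();

@Override
public void addedToEngine(Engine engine) {
    super.addedToEngine(engine); // lets IteratingSystem grab the observers
    observables = engine.getEntitiesFor(Family.all(ObservableComponent.class).get());
    engine.addEntityListener(Family.all(ObserverComponent.class).get(), this);
}

@Override
public void removedFromEngine(Engine engine) {
    super.removedFromEngine(engine);
    engine.removeEntityListener(this);
    observables = null;
    visionMap.clear();
}

// EntityListener callbacks keep the vision map in sync with observers
@Override
public void entityAdded(Entity entity) {
    visionMap.put(entity, new ObjectSet<Entity>());
}

@Override
public void entityRemoved(Entity entity) {
    visionMap.remove(entity);
}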

VisionSystem is an IteratingSystem, so we need to implement the processEntity() method, which will be invoked once a frame for every observer registered with the engine. Here is where the vision map entry for the observer gets updated.
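A minimal sketch of that method, delegating the actual visibility test to a canSee() helper we’ll flesh out below:

@Override
protected void processEntity(Entity observer, float deltaTime) {
    ObjectSet<Entity> visibleEntities = visionMap.get(observer);
    visibleEntities.clear();

    for (Entity observable : observables) {
        if (canSee(observer, observable)) {
            visibleEntities.add(observable);
        }
    }
}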

To know whether an observer can see an observable two conditions need to be met: the observable has to be within the observer’s FoV and there must be an unobstructed LoS between the two. Querying the Box2D world can be costly, that is why we use the cheap FoV check to short-circuit the raycast.
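In code, the short-circuiting && operator gives us exactly that behaviour; isInFov() and hasLineOfSight() are hypothetical helper names:

private boolean canSee(Entity observer, Entity observable) {
    // The cheap FoV check runs first; the Box2D raycast only
    // happens when the observable is actually within the FoV
    return isInFov(observer, observable) && hasLineOfSight(observer, observable);
}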

First, we check whether the observable is within the vision distance of the observer and, if it is, whether the angle between the two falls within the observer’s vision angle. The math is pretty simple here.
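Here is one way it could look, assuming the ObserverComponent fields sketched earlier; len2() avoids a square root and the modulo arithmetic takes care of the angle wrap-around:

private final ComponentMapper<ObserverComponent> observerMapper =
    ComponentMapper.getFor(ObserverComponent.class);
private final ComponentMapper<ObservableComponent> observableMapper =
    ComponentMapper.getFor(ObservableComponent.class);
private final Vector2 toObservable = new Vector2();

private boolean isInFov(Entity observer, Entity observable) {
    ObserverComponent obs = observerMapper.get(observer);
    Vector2 targetPosition = observableMapper.get(observable).position;

    toObservable.set(targetPosition).sub(obs.position);

    // Distance check, using squared lengths to avoid a square root
    if (toObservable.len2() > obs.fovDistance * obs.fovDistance) {
        return false;
    }

    // Angle check: the bearing to the observable must fall within
    // half the FoV aperture on either side of the facing direction
    float bearing = toObservable.angleDeg();
    float delta = Math.abs(((bearing - obs.facingAngle) % 360.0f + 540.0f) % 360.0f - 180.0f);
    return delta <= obs.fovAngle * 0.5f;
}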

It’s time to perform our raycast, which will go from the observer to the observable. Box2D raycasts take a reference to a RayCastCallback to handle geometry hits. The handler is notified on every fixture hit. Box2D will pass the fixture it encountered as well as the fraction along the segment at which the hit happened. The VisionSystem has an inner VisionCallback implementation, which gets reused for every raycast, that way we don’t need to constantly allocate memory.
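The call site could look like the following sketch, where world is the Box2D World the system holds and prepare()/targetIsVisible() are made-up names on the reusable callback:

private final VisionCallback visionCallback = new VisionCallback();

private boolean hasLineOfSight(Entity observer, Entity observable) {
    Vector2 from = observerMapper.get(observer).position;
    Vector2 to = observableMapper.get(observable).position;

    visionCallback.prepare(observer, observable);
    world.rayCast(visionCallback, from, to);
    return visionCallback.targetIsVisible();
}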

Whenever the ray hits a fixture, the reportRayFixture() method gets called. Box2D bodies can hold arbitrary user data, i.e. a reference to any Object. We conveniently set this to be a reference to the Entity the body belongs to. That way we can check if the fixture we hit is part of the observer itself. Whenever we encounter the observable we record how far along the ray segment it is.

Thanks to the information recorded during the raycast, we can then ask VisionCallback whether the object is visible. This question is easy to answer: it will be visible if and only if the observable was the closest object the ray bumped into.
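Putting the last few points together, here is a sketch of what the inner callback could look like; again, prepare() and targetIsVisible() are names I made up for illustration:

private static class VisionCallback implements RayCastCallback {
    private Entity observer;
    private Entity observable;
    private float observableFraction;
    private float closestFraction;

    public void prepare(Entity observer, Entity observable) {
        this.observer = observer;
        this.observable = observable;
        observableFraction = -1.0f;
        closestFraction = 1.0f;
    }

    @Override
    public float reportRayFixture(Fixture fixture, Vector2 point, Vector2 normal, float fraction) {
        Object entity = fixture.getBody().getUserData();

        if (entity == observer) {
            return -1.0f; // ignore the observer's own fixtures, keep casting
        }

        if (entity == observable) {
            observableFraction = fraction; // remember how far along the ray it was
        } else {
            closestFraction = Math.min(closestFraction, fraction);
        }
        return 1.0f; // continue to the end of the ray
    }

    public boolean targetIsVisible() {
        // Visible if and only if the observable was hit and nothing was hit closer
        return observableFraction >= 0.0f && observableFraction <= closestFraction;
    }
}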

Room for improvement

Like I said, this is game jam code, you have been warned! Here are a few things I could think of to make the system more efficient and nicer in general.

Collision filtering: Box2D allows us to set bit masks on bodies to filter collisions. We can leverage that to control which bodies observables can hide behind.
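For instance, inside reportRayFixture() we could skip any fixture whose category bits say it doesn’t block vision; VISION_BLOCKER is a hypothetical category bit:

private static final short VISION_BLOCKER = 0x0004; // hypothetical category bit

// Inside reportRayFixture(), before any other checks:
Filter filter = fixture.getFilterData();
if ((filter.categoryBits & VISION_BLOCKER) == 0) {
    return -1.0f; // this fixture is transparent to vision rays, keep casting
}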

Space partitioning: we can use a quadtree to avoid processing every observable for each observer.

Deferred raycasting: we probably don’t need one-frame accuracy, so we can update the vision maps for a subset of observers each frame. The player won’t ever notice if that guard spotted him a couple of frames later.
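One way to sketch this is to override update() and process a fixed-size slice of observers per frame instead of all of them; the slice size here is an arbitrary assumption:

private static final int OBSERVERS_PER_FRAME = 8; // tune to your entity counts
private int observerCursor = 0;

@Override
public void update(float deltaTime) {
    // Replaces IteratingSystem's default "process everyone" behaviour
    ImmutableArray<Entity> observers = getEntities();
    if (observers.size() == 0) return;

    for (int i = 0; i < Math.min(OBSERVERS_PER_FRAME, observers.size()); i++) {
        observerCursor = (observerCursor + 1) % observers.size();
        processEntity(observers.get(observerCursor), deltaTime);
    }
}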

Prioritisation: if you ever find yourself in a situation where there are just too many observables and observers you can add some sort of prioritisation to your deferred raycast queue, so the important ones get processed first. You may also have to keep track of the time spent in the queue to avoid starvation.

Some games may need slightly more complex vision models. For instance, you may add a small detection circle around observers to represent some kind of sixth sense. A guard would notice a presence right behind him after a short while. That would be quite easy to add to our VisionSystem.
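As a rough sketch, it boils down to an extra radius check at the top of isInFov() that ignores the facing direction; sixthSenseRadius would be a new hypothetical field on ObserverComponent, and the short detection delay is left as an exercise:

// At the top of isInFov(), after computing toObservable: anything
// within the sixth-sense radius counts as in view, regardless of
// which way the observer is facing
if (toObservable.len2() <= obs.sixthSenseRadius * obs.sixthSenseRadius) {
    return true;
}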