def monte_carlo_policy(self, gridmap, evaders, pursuers):
"""
Method to calculate the monte carlo tree search policy action
Parameters
----------
gridmap: GridMap
Map of the environment
evaders: list((int,int))
list of coordinates of evaders in the game (except the player's robots, if he is evader)
pursuers: list((int,int))
list of coordinates of pursuers in the game (except the player's robots, if he is pursuer)
"""

The purpose of the function is to update the self.next_robots variable, a list of (int, int) robot coordinates, based on the current state of the game, the grid map of the environment gridmap, and the player's role self.role. The player receives the list evaders of all evading robots in the game other than its own robots, and the list pursuers of all pursuing robots in the game other than its own robots. I.e., the complete set of robots in the game is given as the union of evaders, pursuers, and self.robots.
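As a small illustration of the state assembly described above, the full set of robots can be formed as the union of the three lists (the helper name is a hypothetical example, not part of the assignment API):

```python
def all_robots(own_robots, evaders, pursuers):
    """Complete set of robots in the game: the player's own robots
    (self.robots) plus all opposing evaders and pursuers.
    Each robot is an (int, int) grid coordinate."""
    return list(own_robots) + list(evaders) + list(pursuers)
```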

During gameplay, each player is asked to update its intention for the next move, encoded in the self.next_robots variable, by calling the calculate_step function. Afterward, the step is performed by calling the take_step function, and the game checks each step for compliance with the rules.
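The turn structure above can be sketched as the following loop; play_game, game.is_terminal, and the players list are assumptions for illustration, while calculate_step and take_step come from the assignment text:

```python
def play_game(game, players, max_steps):
    """Hypothetical outer game loop: each step, every player first
    plans its move, then all moves are committed and checked."""
    for _ in range(max_steps):
        for player in players:
            player.calculate_step()  # fills player.next_robots with the intended move
        for player in players:
            player.take_step()       # commits the intended move
        if game.is_terminal():       # e.g., all evaders captured
            break
```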

The game ends after a predefined number of steps or when all the evaders are captured.

In MCTS, each player has a predefined time budget for making the decision for a single robot, given in the self.timeout variable.
The timeout for the decision can be implemented as follows:
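A minimal sketch of such a timeout loop, assuming self.timeout holds the budget in seconds; timed_mcts_iterations and iteration_fn are hypothetical names, where iteration_fn stands in for one MCTS select/expand/simulate/backpropagate pass:

```python
import time

def timed_mcts_iterations(timeout, iteration_fn):
    """Run iteration_fn repeatedly until the timeout (in seconds)
    expires; return the number of completed iterations."""
    start = time.perf_counter()
    iterations = 0
    while time.perf_counter() - start < timeout:
        iteration_fn()  # one MCTS iteration on the search tree
        iterations += 1
    return iterations
```

After the loop ends, the action with the best statistics at the root of the search tree would be written into self.next_robots.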

In the grid environment, the MONTE_CARLO pursuers are expected to catch the GREEDY evader.

Note that you can easily generate new game setups by modifying the .game files accordingly.
In the upload system, the students' solutions are tested against the teacher's RANDOM and GREEDY policy players.