The software comes with a trained driving agent; all developers need is a copy of the game to get cracking. After that, programmers can swap the demo AI model for their own agents to test their code and neural networks. Universe and Quiter's integration code take care of the fiddly business of interfacing with the game.

Video games new and old provide great training grounds for developing reinforcement learning agents, which learn through trial and error – or rather, trial and reward when things go right. OpenAI's Universe, released in December, is a wedge of open-source middleware that connects game controls and video displays to machine-learning agents so they can be trained in these virtual arenas.
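That trial-and-reward loop can be sketched in a few lines. The snippet below is purely illustrative – the `ToyRoad` environment and its one-step "stay on the road" game are invented here, not part of Universe – but the `reset()`/`step()` interface and the epsilon-greedy learning rule mirror the Gym-style pattern such agents are trained with:

```python
import random

class ToyRoad:
    """Invented one-step game: action 1 (stay on the road) earns
    reward 1.0, action 0 (veer off) earns nothing."""
    def reset(self):
        return 0  # a single dummy observation

    def step(self, action):
        reward = 1.0 if action == 1 else 0.0
        return 0, reward, True, {}  # observation, reward, done, info

def train(episodes=500, epsilon=0.1, lr=0.5, seed=0):
    random.seed(seed)
    env = ToyRoad()
    value = [0.0, 0.0]  # estimated reward for each action
    for _ in range(episodes):
        env.reset()
        if random.random() < epsilon:   # explore: try a random action
            action = random.randrange(2)
        else:                           # exploit: pick the best so far
            action = max(range(2), key=lambda a: value[a])
        _, reward, _, _ = env.step(action)
        # Nudge the estimate towards the reward actually received
        value[action] += lr * (reward - value[action])
    return value

values = train()
print(values)  # action 1 should end up valued near 1.0
```

The agent starts out knowing nothing, stumbles onto the rewarding action while exploring, and thereafter exploits it – the same shape of loop, scaled up enormously, that drives a car around San Andreas.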

GTA V is preferred by various research groups because it provides a realistic, detailed world in which to test their algorithms and models.

San Andreas – a fictional state in the game – is almost one-fifth the size of Los Angeles, and includes 257 different vehicles, seven types of bicycles, and 14 weather patterns, allowing researchers to test self-driving cars in urban areas without having to hit the actual road.

“[Integrating with Universe] was a perfect fit because GTA V runs in Windows, and Universe allows running the AI separate from the game; in Linux, for example, where most AI work is done,” Quiter told The Register.
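The split Quiter describes – game on one box, agent on another – is reflected in Universe's vectorized, remote-driven interface: the agent receives a list of observations (one per remote game instance) and sends back lists of keyboard events. The sketch below uses a stub class standing in for the real remote, since the actual game isn't available here; `StubRemote` and its behaviour are invented for illustration, while the list-of-observations, list-of-key-events shape follows Universe's gym-style API:

```python
class StubRemote:
    """Invented stand-in for a remote game instance reached over the
    network (the real thing would be GTA V running on Windows)."""
    def reset(self):
        return [{'vision': [[0]]}]  # one dummy observation per remote

    def step(self, action_n):
        # A real remote would apply the key events and send back fresh
        # frames; here we fake a frame and a reward per remote.
        obs_n = [{'vision': [[0]]} for _ in action_n]
        reward_n = [1.0 for _ in action_n]
        done_n = [False for _ in action_n]
        return obs_n, reward_n, done_n, {}

env = StubRemote()
observation_n = env.reset()
total = 0.0
for _ in range(10):
    # Hold the accelerator in every remote by sending key-press events,
    # the way a Universe agent issues ('KeyEvent', 'ArrowUp', True)
    action_n = [[('KeyEvent', 'ArrowUp', True)] for _ in observation_n]
    observation_n, reward_n, done_n, _ = env.step(action_n)
    total += sum(reward_n)
print(total)
```

Because all the agent ever sees is observations and rewards coming over the wire, it makes no difference whether the game on the far end runs under Windows while the neural network trains on Linux.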

The video above demonstrates an AI running around in GTA V via Universe. The screen is split into separate windows so developers can inspect which frames are fed to the agent, view any diagnostics from the bot, and so on.

The simulated environment provides heaps of labelled data, allowing the agent to learn to recognize cars, bicycles, road signs, and other objects. Plus, the code can be modified to import a Tesla Model S into the world or explore a custom-designed town, Quiter added.

Just in case anyone is worried about AI agents becoming murderous thugs by hanging around the mean streets of GTA V, all the violence has been stripped from the system, we're told. ®