
"Let’s say you want to teach an aerial robot to tell the difference between a wall and a shadow," Microsoft says in a blog post. "Chances are, you’d like to test your theories without crashing hundreds of drones into walls."

The system is built on the Unreal game engine and can render shadows, reflections, water on a surface and other real-world details that are difficult to simulate. The code for AirSim has been made available on GitHub, and the firm says it is still under "heavy development".


Shital Shah, a Microsoft researcher behind the project, says the simulator has been designed for drones but can be used for other types of vehicles.


Both autonomous cars and drones rely on machine vision, a branch of artificial intelligence, among other techniques, to sense the world around them. Cameras capture images of the surroundings, and AI systems interpret and identify what those images show in real time.
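That capture-and-interpret loop can be sketched in a few lines. The example below is purely illustrative, not AirSim's actual API: it stands in for a camera frame with a NumPy array and uses a toy heuristic for the wall-versus-shadow distinction mentioned above, with thresholds chosen only for demonstration.

```python
import numpy as np

def classify_patch(patch: np.ndarray) -> str:
    """Toy heuristic: a shadow darkens a surface fairly uniformly, while a
    lit wall tends to be brighter and more textured (higher variance).
    The thresholds here are illustrative, not from AirSim."""
    if patch.std() < 10 and patch.mean() < 80:
        return "shadow"
    return "wall"

# Simulated grayscale camera frames (pixel values roughly 0-255)
rng = np.random.default_rng(0)
shadow = rng.normal(60, 3, (32, 32))    # uniformly dark region
wall = rng.normal(140, 25, (32, 32))    # brighter, textured region

print(classify_patch(shadow))  # shadow
print(classify_patch(wall))    # wall
```

A real perception stack would replace the heuristic with a trained model, but the loop is the same: acquire a frame, run inference, act on the label.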

"Our goal is to develop AirSim as a platform for AI research where we can experiment with deep learning, computer vision and reinforcement learning algorithms for autonomous vehicles," Shah says.

Simulating the movements and actions of vehicles controlled by artificial intelligence trains the algorithms on more data than can be produced by testing in the real world. Before Google's self-driving cars were spun out into the Waymo company, for example, it was simulating millions of miles of driving per day.


In a monthly performance report from January 2016, Google said it was simulating three million driving miles every day. "If the simulator shows better driving is called for, our engineers can make refinements to the software, and run those changes in simulation in order to test the fixes," Google said at the time.

Microsoft's simulator is similarly designed to gather data for those creating AI navigation systems. "The platform enables seamless training and testing of such perception systems as cameras via realistic renderings of the environment," the company says.

"These synthetically generated graphic images can generate orders of magnitude more perception and control data than is possible with real-world robot data alone."
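One reason synthetic rendering scales so well is that the simulator already knows the scene, so every generated frame arrives with a perfectly accurate label for free. The sketch below is a hypothetical illustration of that idea, not AirSim code: a stand-in `render_frame` function fakes the engine's output, and the ground truth comes from the same flag that drove the rendering.

```python
import numpy as np

rng = np.random.default_rng(1)

def render_frame(has_obstacle: bool) -> np.ndarray:
    """Stand-in for the engine's renderer: returns a fake 64x64 camera frame,
    darkening a block of pixels where a hypothetical obstacle sits."""
    frame = rng.normal(120, 20, (64, 64))
    if has_obstacle:
        frame[20:44, 20:44] -= 60
    return frame

# Because the simulator controls the scene, each frame is paired with a
# perfect label at no annotation cost -- the key to "orders of magnitude"
# more training data than real-world collection alone.
dataset = [(render_frame(flag), int(flag)) for flag in rng.random(1000) < 0.5]
frames, labels = zip(*dataset)
print(len(frames))  # 1000
```

Scaling the loop count is all it takes to grow the dataset, which is what makes simulated collection so much cheaper than flying or driving real vehicles.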