The visualization program is based on the MIT-developed "Director" code, which has also been used as an interface for Boston Dynamics' Atlas robot. Director has been adapted by Drive.ai to operate within its vehicles, working concurrently with the car's GPS, cameras, and other sensors to show a passenger exactly what the system sees. As shown in this onboard video taken during a self-driving test, the Drive.ai car constantly identifies objects like signs, trees, pedestrians, and other vehicles in its path. This information is then displayed in real time to the engineer riding along.
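As a rough illustration of that display pipeline, the sketch below annotates a frame's detections as on-screen captions, nearest object first. The class names and fields are hypothetical assumptions for illustration, not Drive.ai's or Director's actual API.

```python
# Hypothetical sketch of a visualization layer annotating detections
# for a real-time display; labels and fields are illustrative only.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "pedestrian", "sign", "tree", "vehicle"
    distance_m: float  # estimated range from the car

def render_overlay(detections):
    """Return one overlay caption per detection, nearest first."""
    ordered = sorted(detections, key=lambda d: d.distance_m)
    return [f"{d.label} @ {d.distance_m:.1f} m" for d in ordered]

frame = [Detection("vehicle", 22.5), Detection("pedestrian", 8.0)]
print(render_overlay(frame))
```

In a real system the captions would be drawn over camera or lidar imagery every frame; sorting by range simply mimics the way the display foregrounds the most immediately relevant objects.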

While that video only shows footage from a frontal view, the vehicle is actually mapping the entire 3-D space around itself, and all of that data is recorded for playback and study after the fact. Another video from Drive.ai shows off the area of Frisco, Texas, that it has charted.
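The record-then-replay idea described above can be sketched as a simple append-only log of timestamped sensor frames. This structure is an assumption for illustration; Drive.ai's actual recording format is not described in the article.

```python
# Illustrative sketch of logging timestamped sensor frames so that a
# drive can be replayed and studied later; not Drive.ai's real format.
class DriveLog:
    def __init__(self):
        self._frames = []  # (timestamp, frame_data) pairs, append-only

    def record(self, timestamp, frame_data):
        """Append one sensor frame as it arrives during the drive."""
        self._frames.append((timestamp, frame_data))

    def replay(self, start=0.0, end=float("inf")):
        """Yield frames whose timestamps fall within [start, end]."""
        for t, frame in self._frames:
            if start <= t <= end:
                yield t, frame

log = DriveLog()
log.record(0.0, {"objects_detected": 3})
log.record(0.1, {"objects_detected": 4})
print(list(log.replay(end=0.05)))
```

Keeping the log append-only and time-indexed is what makes after-the-fact study cheap: engineers can scrub to any moment of a drive without re-running the perception stack.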

Drive.ai then feeds the data obtained from both human-driven and A.I.-driven tests into its simulator. In the sim, a virtual car drives through a copy of a map that the real car has already plotted out. In this computer-generated space, Drive.ai can alter variables like traffic density or change traffic lights to its liking. The sim allows the engineers to put the car into situations that would be too complicated or dangerous in the real world, then see how the A.I. reacts. That experience can then go back into the real car, making the system less likely to be tripped up by an uncommon traffic situation it has already faced inside the simulator.
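A scenario whose variables can be altered between runs might look something like the following. The parameter names, default values, and `variant` helper are all hypothetical; the article only says that factors such as traffic density and light states can be changed.

```python
# Minimal sketch of an adjustable simulation scenario; names and
# defaults are hypothetical, not from Drive.ai's actual simulator.
import random

class Scenario:
    def __init__(self, traffic_density=0.3, lights=None, seed=0):
        self.traffic_density = traffic_density        # cars per road cell
        self.lights = lights or {"main_st": "green"}  # intersection states
        self.rng = random.Random(seed)                # reproducible runs

    def variant(self, **overrides):
        """Clone the scenario with some variables changed."""
        params = {"traffic_density": self.traffic_density,
                  "lights": dict(self.lights)}
        params.update(overrides)
        return Scenario(seed=self.rng.randint(0, 2**31), **params)

base = Scenario()
rush_hour = base.variant(traffic_density=0.9,
                         lights={"main_st": "red"})
print(rush_hour.traffic_density, rush_hour.lights["main_st"])
```

Deriving dangerous variants from a base scenario recorded on real roads is what lets engineers probe edge cases (dense traffic, adversarial light timing) without ever putting the physical car at risk.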

In the Medium article, Drive.ai says that it is considering making its visualization and sim programs open-source, meaning that anyone could use and modify the code. If the company does go down that route, the software could potentially be used by other developers of semi-autonomous systems to create safer cars for everyone.