
The ability of a vehicle to navigate safely through any environment relies on its driver having an accurate sense of the future positions and goals of other vehicles on the road. A driver does not navigate around where an agent is, but where it is going to be. To avoid collisions, autonomous vehicles should be equipped with the ability to derive appropriate controls from future estimates for other vehicles, pedestrians, and other intentionally moving agents, in a manner similar to or better than human drivers. Differential game theory provides one approach to generating a control strategy, by modeling two players with opposing goals. Environments faced by autonomous vehicles, such as merging onto a freeway, are complex, but they can be modeled and solved as differential games using discrete approximations; these games yield an optimal control policy for both players and can model adversarial driving scenarios rather than average ones, so that autonomous vehicles will be safer on the road in more situations. Further, discrete approximations of solutions to complex games have been developed that are computationally tractable and provably asymptotically optimal, but they may not produce usable results in an online fashion. To obtain an efficient, continuous control policy, we use deep imitation learning to model the discrete approximation of a differential game solution. We successfully learn the policies generated for two games of differing complexity, a fence-escape game and a merging game, and show that the imitated policy generates control inputs faster than the differential-game-generated policy.
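The core pipeline described above, learning a fast surrogate for a slow game solver by supervised imitation, can be sketched as behavioral cloning. Everything below is illustrative, not the thesis's actual implementation: the "expert" is a stand-in linear control law playing the role of the offline differential-game solver, and the surrogate is a least-squares fit rather than a deep network.

```python
import numpy as np

# Stand-in for the (slow, offline) differential-game solver: maps a
# game state to the expert's control input. Hypothetical linear law.
def expert_control(states):
    return states @ np.array([0.5, -1.2])

rng = np.random.default_rng(0)
states = rng.normal(size=(500, 2))    # sampled game states (demonstrations)
controls = expert_control(states)     # expert control labels for each state

# Behavioral cloning: fit a fast surrogate policy to the expert's
# state-control pairs (a deep network would replace lstsq here).
w, *_ = np.linalg.lstsq(states, controls, rcond=None)

def imitated_control(states):
    # The learned policy is a cheap function evaluation, usable online.
    return states @ w

# Check the surrogate against the expert on held-out states.
test_states = rng.normal(size=(10, 2))
err = np.max(np.abs(imitated_control(test_states) - expert_control(test_states)))
```

The design point the abstract makes carries over: once the expert's decisions are distilled into a direct state-to-control mapping, producing a control input costs one forward evaluation instead of re-solving the game.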