Autonomous vehicles generally rely on multiple sensors with different sensing modalities in order to achieve reliable performance in navigation and obstacle avoidance tasks. In this research, we are studying the combination of laser-based 3D sensors (LIDARs) with image-based sensors, including stereo and monocular cameras. One key idea we are exploring is tightly integrating these different sensing modalities throughout an algorithm's various steps, rather than fusing their outputs only at the final stage. Another central aspect of the research is the effect of temporal error caused by imperfect synchronization between the different sensing modalities. This project covers a number of specific research topics:
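As a concrete illustration of the synchronization issue, a minimal sketch of one common mitigation is shown below: when a LIDAR scan is stamped between two camera-rate pose estimates, the vehicle pose at the scan time can be linearly interpolated. The function name `interpolate_pose` and all numbers are illustrative assumptions, not part of this project's implementation.

```python
# Hypothetical sketch: compensating for timestamp offsets between sensors
# by interpolating pose estimates to a LIDAR scan's timestamp.

def interpolate_pose(t, t0, pose0, t1, pose1):
    """Linearly interpolate an (x, y, heading) pose to time t in [t0, t1]."""
    if not (t0 <= t <= t1):
        raise ValueError("t must lie between t0 and t1")
    alpha = (t - t0) / (t1 - t0)
    return tuple(p0 + alpha * (p1 - p0) for p0, p1 in zip(pose0, pose1))

# Illustrative numbers: pose estimates at ~30 Hz bracketing a LIDAR scan.
pose_a = (0.0, 0.0, 0.00)   # pose at t = 0.000 s
pose_b = (0.6, 0.0, 0.02)   # pose at t = 0.033 s
scan_time = 0.020           # LIDAR scan timestamp (s)

pose_at_scan = interpolate_pose(scan_time, 0.000, pose_a, 0.033, pose_b)
```

Linear interpolation is only a first-order correction; at higher vehicle dynamics, the residual error from ignoring rotation rates between samples becomes the dominant term, which is one motivation for studying synchronization error explicitly.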

Optimal LIDAR Sensor Configuration – When designing a system with multiple LIDARs, how should they be mounted and configured to optimize the system’s sensing capabilities?
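One simple way to compare candidate mounting configurations is to compute where each LIDAR beam intersects flat ground for a given sensor height and mount tilt. The sketch below is a hypothetical illustration of that idea (the function `ground_ranges` and the beam angles are assumptions for the example, not a configuration from this project).

```python
import math

# Hypothetical sketch: for a LIDAR at height h with fixed beam elevation
# angles, a mount tilt shifts every beam, changing where the downward
# beams strike flat ground. Comparing the resulting ground ranges is one
# crude way to evaluate a mounting configuration's terrain coverage.

def ground_ranges(height_m, mount_tilt_deg, beam_elevations_deg):
    """Horizontal distance at which each below-horizontal beam hits flat ground."""
    ranges = []
    for elev in beam_elevations_deg:
        angle = math.radians(elev + mount_tilt_deg)  # tilt shifts all beams
        if angle < 0:  # only beams pointing below horizontal reach the ground
            ranges.append(height_m / math.tan(-angle))
    return ranges

# Compare a level mount against a 10-degree forward (downward) tilt.
beams = [-15, -10, -5, 0, 5]               # beam elevation angles (degrees)
level = ground_ranges(1.8, 0.0, beams)     # only the 3 downward beams hit ground
tilted = ground_ranges(1.8, -10.0, beams)  # tilt brings all 5 beams to the ground
```

In this toy model the forward tilt trades far-field coverage for denser ground sampling near the vehicle, which is the kind of trade-off an optimal-configuration study would quantify with more realistic sensor and terrain models.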