Kinect sensor may find a home in self-driving cars

Self-driving cars are the wave of the future, but the robotic vehicles are currently too expensive and not reliable enough to replace human drivers. Much of the blame falls on LIDAR sensors, which are both costly and unreliable, easily thrown off by strong sunlight and reflective surfaces.

Now researchers at the University of La Laguna have found that a cheap Kinect sensor may be as accurate as LIDAR at detecting objects close to the ground, and more effective than stereoscopic cameras.

Using a self-driving golf cart, the researchers tested the Kinect depth camera against a laser rangefinder and stereo cameras. By operating the vehicle on a test road that contained stairs, ramps, and curbs, they discovered that the Kinect sensor outperformed the other two devices in detecting and correctly discerning nearby, close-to-the-ground objects. For instance, when confronted by a ramp, the laser rangefinder mistakenly determined that it was too steep to drive up. The laser device also failed to spot lower stairs, while the stereo camera had trouble identifying very close (and very far) objects and gave frequent false detections.

The Kinect sensor correctly identified the ramp as navigable and consistently outperformed the stereo cameras in detecting obstacles close to the ground. Lead researcher Javier Hernandez-Aceituno praised the Kinect sensor’s superior abilities in detecting close-range obstacles, telling reporters that the Kinect sensor allows an autonomous vehicle to navigate safely in areas where the other systems fail.

Their paper’s abstract notes:

An accurate method to detect obstacles and dangerous areas is the key to the safe performance of autonomous robots. Time of flight sensors can report their existence through the emission, reflection, and measurement of wave patterns, but large wavelength light projection is often unreliable in outdoors environments, due to solar radiation contamination. In this paper, a specific Microsoft Kinect arrangement on a robotic vehicle is proposed, such that outdoors detection is possible. The main contribution of this paper is the description of a sequence of filtering techniques, which translate the depth image provided by the sensor into definite obstacle projections in the navigability map used by the vehicle. A series of experiments proves that the Kinect device is more accurate at detecting obstacles using this procedure than a camera pair using two different stereovision techniques.
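The paper's actual filtering sequence isn't reproduced in the article, but the general idea it describes, projecting a depth image into a 2D obstacle map the vehicle can navigate by, can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the camera intrinsics (`fx`, `fy`, `cx`, `cy`), the camera mounting height, the grid resolution, and the height threshold are all hypothetical placeholder values, not figures from the paper.

```python
import numpy as np

def depth_to_obstacle_map(depth, cam_height=0.6, fx=525.0, fy=525.0,
                          cx=319.5, cy=239.5, cell=0.05, grid_size=100,
                          obstacle_thresh=0.05):
    """Project a Kinect-style depth image (metres) into a 2D obstacle grid.

    All parameters are hypothetical: cam_height is the assumed sensor
    height above the ground, fx/fy/cx/cy are nominal pinhole intrinsics,
    cell is the grid resolution in metres, and obstacle_thresh is the
    height above the ground plane at which a point counts as an obstacle.
    """
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))

    # Back-project each pixel into camera coordinates.
    z = depth                       # forward distance
    x = (us - cx) * z / fx          # lateral offset
    y = (vs - cy) * z / fy          # downward offset from the optical axis
    height = cam_height - y         # height of each point above the ground

    # Keep only readings inside a plausible Kinect depth range,
    # then flag points that rise above the ground plane.
    valid = (z > 0) & (z < 5.0)
    obstacle = valid & (height > obstacle_thresh)

    # Rasterise obstacle points into a top-down occupancy grid.
    grid = np.zeros((grid_size, grid_size), dtype=bool)
    gx = (x[obstacle] / cell + grid_size // 2).astype(int)
    gz = (z[obstacle] / cell).astype(int)
    keep = (gx >= 0) & (gx < grid_size) & (gz >= 0) & (gz < grid_size)
    grid[gz[keep], gx[keep]] = True
    return grid
```

A height threshold above an assumed flat ground plane is only the crudest version of the idea; the paper's contribution is the sequence of filters that makes this projection reliable outdoors, which the sketch does not attempt to reproduce.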