Tesla CEO Elon Musk is interested in self-driving technology for his future fleets -- and he's going to Google for advice.

Musk has been talking to Google about its work with self-driving cars and how to implement such a system for future Tesla vehicles. However, Musk doesn't want to call it "self-driving," but rather "autopilot" technology.

“I like the word autopilot more than I like the word self-driving,” Musk said. “Self-driving sounds like it’s going to do something you don’t want it to do. Autopilot is a good thing to have in planes, and we should have it in cars.”

The idea behind such technology is not only to make driving more convenient, but also to make it safer. Cars equipped with self-driving systems can, for example, react to hazards and prevent a crash.

Google is the place to go for insight on the new technology, considering it has been running self-driving projects for the last couple of years. Its test fleet consists of Toyota Priuses, Audi TTs and Lexus RX450hs equipped with the self-driving technology.

Google's self-driving cars use LIDAR, a rotating sensor on the roof that scans more than 200 feet in all directions to build a map of the car's surroundings; a position estimator that helps determine the car's location on a map; four radar sensors that identify the position of distant objects; and a video camera that detects traffic lights as well as moving objects like pedestrians.

“The problem with Google’s current approach is that the sensor system is too expensive,” Musk said. “It’s better to have an optical system, basically cameras with software that is able to figure out what’s going on just by looking at things.”

I don't think switching from radar/lidar to optical measurements will necessarily increase the computational load. I'm pretty sure the outputs of all the sensors in the current Google cars are 'images' in a loose sense of the word. You can make a picture from a radar scan, for example, and the same processing would be used on that scan as on an optical image.

If you're thinking that switching to optical imaging will increase the amount of data through higher pixel counts, then I'd say there's no obvious reason to use super-high-res cameras. They're not interested in avoiding mosquitoes.

Pictures tend to be 2D representations -- you can scan them all day, but that has limits. The human brain is hardwired to analyze images (depth perception, for instance). Computers are not even remotely close to our genius at that task.

The point of a LIDAR system is to build up a partial 3D map of an area by firing lasers at objects (hence the rotation of the sensor) and analyzing the back-scatter pattern. It's a far simpler system computationally, well developed, and therefore extremely reliable. It's used all over the planet for topographic mapping (and in orbit too).

So a car can either a) get lots of 2D photos and try to figure out what they mean using a supercomputer (the one in the meatbag driver's skull), or b) get a set of LIDAR point data that can be converted, with well-understood statistical analyses, into a 3D map in real time.
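To make the LIDAR half of that comparison concrete, here is a minimal sketch of the geometry involved: a rotating sensor reports ranges at known angles, and each (angle, range) pair converts directly to a Cartesian point with basic trigonometry -- no image interpretation needed. This is an illustrative simplification (a 2D slice of one rotation, with made-up function and parameter names), not Google's actual pipeline.

```python
import math

def lidar_scan_to_points(ranges, angle_min, angle_step):
    """Convert one rotation of 2D LIDAR range readings (polar form)
    into Cartesian (x, y) points in the sensor's frame.

    ranges:     list of measured distances, one per beam
    angle_min:  angle of the first beam, in radians
    angle_step: angular spacing between consecutive beams, in radians
    """
    points = []
    for i, r in enumerate(ranges):
        theta = angle_min + i * angle_step
        # Polar-to-Cartesian: each laser return becomes a map point.
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# Four beams at 90-degree steps, each hitting an object 10 m away:
pts = lidar_scan_to_points([10.0] * 4, 0.0, math.pi / 2)
# Every point lies 10 m from the sensor, one per cardinal direction.
```

A real system stacks many such scans (plus the sensor's tilt and the car's motion) into a full 3D point cloud, but the per-point math stays this cheap, which is the commenter's point about computational simplicity.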

Wrong. iOnRoad does not even come close to the level of image processing that would be required for a driving computer. The fact that it uses only one camera should make that abundantly obvious.

iOnRoad only monitors what is happening directly in front of the car. Mapping a 3D environment based on 360 degree snapshots is orders of magnitude more complex than measuring linear distance on one axis.
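For context on why one forward camera is the easy case: a single camera can estimate range to an object of roughly known size using the pinhole similar-triangles relation. The sketch below is a generic illustration of that principle (the function name and numbers are invented for the example), not a description of how iOnRoad actually works.

```python
def monocular_distance(focal_px, real_height_m, pixel_height):
    """Estimate distance to an object of known real-world height
    from a single camera image, via the pinhole-camera relation:

        distance = focal_length * real_height / apparent_height

    focal_px:      camera focal length, in pixels
    real_height_m: assumed true height of the object, in meters
    pixel_height:  height of the object in the image, in pixels
    """
    return focal_px * real_height_m / pixel_height

# A car assumed to be 1.5 m tall, appearing 100 px tall,
# seen through a lens with a 700 px focal length:
d = monocular_distance(700, 1.5, 100)  # 10.5 m
```

One multiplication and one division per object -- which is why tracking distance along one forward axis is tractable on a phone, while fusing a 360-degree view into a consistent 3D model is a much harder problem.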

iOnRoad also does not make decisions; it only does analysis. Taking action requires a vast increase in computing complexity. Accelerating, braking, and steering are not binary choices -- each has a magnitude associated with it, and many driving scenarios require manipulating two at once.

:) iOnRoad was only an example of how a 2.5-guy startup could pull off the image processing on a generic CPU and a generic camera.

Still, decision making seems rather irrelevant here, since it would have to be done for LIDAR and radar just as well. And cameras are not one-dimensional -- they can cover well over 90 degrees (horizontally alone), so covering 360 degrees should not be "orders of magnitude more complex." Mobileye, which I also mentioned, has as I recall been able to process a full 360 degrees even on their old systems, and their systems are integrated into many luxury cars. Besides, cameras would probably be installed anyway - for recognizing objects, reading road signs…