Can a Car Find a Parking Spot by Itself?

Junko Yoshida is stubbornly dubious of autonomous driving, but she's strangely fascinated with the idea of autonomous parking.

I do know that the notion of cars with eyes is a huge deal for the electronics industry serving automotive companies.

The market research firm IHS said in April that automotive applications such as lane departure warnings and self-parking will be among the major growth drivers this year for the embedded vision market -- technology that helps machines see and interpret visual data using computer vision software.

Revenue from "special-purpose computer vision processors used in under-the-hood automotive applications" rose from $126 million in 2011 to $137 million last year and should reach $151 million this year, IHS said. That revenue will keep expanding and will hit $187 million by 2016, "confirming the solid prospects in store for embedded vision, one of the fastest-growing trends in technology."

But exactly what are the basic technology building blocks involved in making a blind car see? And what challenges does a self-parking car still face?

I popped a few questions on the topic to Kevin Tanaka, senior manager of worldwide automotive marketing and product planning at Xilinx. "While there is the start of some V2I out there, it's still very limited worldwide at the moment, so no automaker is really relying on that for autonomous parking in its current state," he told me. There are some trials in Germany right now in which the vehicle communicates with an electronic parking lot, which tells it what spaces are open. "But it's, again, very, very early."
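To make that idea a bit more concrete, here is a minimal sketch of the kind of exchange those German trials describe: the parking facility advertises which bays are open, and the vehicle picks one. The class names, message format, and nearest-bay logic below are my own assumptions for illustration only; they do not represent any real V2I protocol or any automaker's implementation.

```python
# Hypothetical sketch of a V2I parking-lot exchange (not a real protocol).
# The lot reports its open bays; the car simply picks the nearest one.

from dataclasses import dataclass
from math import hypot

@dataclass
class ParkingBay:
    bay_id: str
    x_m: float      # position within the lot, in metres
    y_m: float
    occupied: bool

class ParkingLotService:
    """Stand-in for the 'electronic parking lot' infrastructure side."""
    def __init__(self, bays):
        self.bays = bays

    def open_bays(self):
        # In a real deployment this would arrive as a wireless V2I message.
        return [b for b in self.bays if not b.occupied]

def choose_bay(car_x_m, car_y_m, lot):
    """Vehicle-side logic: ask the lot what is open, take the closest bay."""
    candidates = lot.open_bays()
    if not candidates:
        return None
    return min(candidates, key=lambda b: hypot(b.x_m - car_x_m, b.y_m - car_y_m))

if __name__ == "__main__":
    lot = ParkingLotService([
        ParkingBay("A1", 5.0, 2.0, occupied=True),
        ParkingBay("A2", 7.5, 2.0, occupied=False),
        ParkingBay("B1", 5.0, 6.0, occupied=False),
    ])
    target = choose_bay(0.0, 0.0, lot)
    print("Head to bay:", target.bay_id if target else "none available")
```

The hard part in practice, of course, is not choosing a bay but everything that happens afterward -- which is where the sensors come in.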

There's also the Google self-driving car, "but that also does not utilize V2I, but rather a huge range of sensors." They include "cameras, radar, lidar (a remote sensing technology) and some ultrasonics and mapping programs."

In production setups today, OEMs are using combinations of radar, ultrasonic devices, and cameras. "There is an incredible amount of parallel processing power that needs to be done to process the sensor data, run algorithms, and ultimately coordinate gearbox, steering, acceleration, and braking controls."
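As a rough illustration of what that "parallel processing" means, the toy sketch below -- my own example, not anything an OEM or Xilinx ships -- reads three simulated sensors concurrently, fuses their distance estimates with assumed weights, and maps the result to a crude brake request. Real systems run hard-real-time algorithms on dedicated silicon; this only shows the shape of the data flow.

```python
# Toy sensor-fusion loop: radar, ultrasonic, and camera readings are gathered
# in parallel, fused into one distance estimate, and mapped to a brake command.
# Purely illustrative; sensor models, weights, and thresholds are assumptions.

import random
import time
from concurrent.futures import ThreadPoolExecutor

def read_radar():
    time.sleep(0.02)                        # pretend I/O latency
    return 2.0 + random.uniform(-0.1, 0.1)  # metres to nearest obstacle

def read_ultrasonic():
    time.sleep(0.01)
    return 2.0 + random.uniform(-0.3, 0.3)

def read_camera():
    time.sleep(0.05)                        # vision is usually the slow path
    return 2.0 + random.uniform(-0.2, 0.2)

def fuse(readings, weights=(0.5, 0.2, 0.3)):
    """Weighted average of the three distance estimates."""
    return sum(r * w for r, w in zip(readings, weights))

def brake_command(distance_m, stop_at_m=0.5, full_brake_at_m=0.2):
    """Map the fused distance to a brake request between 0 and 1."""
    if distance_m <= full_brake_at_m:
        return 1.0
    if distance_m >= stop_at_m:
        return 0.0
    return (stop_at_m - distance_m) / (stop_at_m - full_brake_at_m)

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=3) as pool:
        futures = [pool.submit(f) for f in (read_radar, read_ultrasonic, read_camera)]
        readings = [f.result() for f in futures]
    distance = fuse(readings)
    print(f"fused distance: {distance:.2f} m, brake: {brake_command(distance):.2f}")
```

Multiply that loop by dozens of sensors, much higher update rates, and steering and gearbox coordination on top of braking, and the appetite for parallel hardware becomes easier to understand.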

This obviously is part of the reason why programmable SoCs are in huge demand these days for cars with eyes. Tanaka said Xilinx Automotive FPGAs and Zynq-7000 All Programmable SoCs are being used in many of the radar and camera programs right now, and they will continue to be used into the future.

Where I live, overnight on-street parking is never allowed, so you either have a space, rent a space, or use public transportation. We don't need to save street parking spots, because they don't exist.

When I first moved to Boston, I found it unnerving how many drivers don't make eye contact when, say, you're both trying to inch into the same lane of traffic. So I can only imagine what a leap of faith it would take to trust a driverless car to do the right thing. And as for the parking situation, my solution? Ditch the car!

Autonomous driving seems too alien (and risky) to most consumers, I assume, and it is a hard sell.

But autonomous parking is something being pitched by the auto industry as "convenience." Debating "safety" takes time, but selling "convenience"? It's easy. There, I see their marketing plot. Am I alone in thinking that way?