Ford kicks off new automated driving research projects with MIT and Stanford University

22 January 2014

Ford Fusion Hybrid automated research vehicle with four LiDAR sensors. Click to enlarge.

Building on the capabilities of the automated Ford Fusion Hybrid research vehicle unveiled last month (earlier post), Ford is working with the Massachusetts Institute of Technology (MIT) and Stanford University to research and to develop solutions to some of the technical challenges surrounding automated driving.

The MIT research focuses on scenario planning to predict actions of other vehicles and pedestrians, while Stanford is exploring how a vehicle might maneuver to allow its sensors to peek around obstructions. Put another way, the purpose of the MIT project is to enhance the use of the line-of-sight data already acquired by the Fusion’s sensors to provide augmented predictive capability, especially for pedestrians. The purpose of the Stanford work is to enhance the acquisition of non-line-of-sight data.

Ford’s automated Fusion Hybrid research vehicle uses the same technology already in Ford vehicles in dealer showrooms, then adds four LiDAR sensors to generate a real-time 3D map of the vehicle’s surrounding environment.

Visualization of Fusion Hybrid research vehicle LiDAR data, using a color gradient for height. Note the shadows where signals are blocked. Click to enlarge.

While the vehicle can sense objects around it using the LiDAR sensors, Ford’s research with MIT uses advanced algorithms to help the vehicle learn to predict where moving vehicles and pedestrians could be in the future. This scenario planning provides the vehicle with a better sense of the surrounding risks, enabling it to plan a path that will safely avoid pedestrians, vehicles and other moving objects.

The MIT work has been underway for some time and has worked well for vehicles; the partners are now moving on to see how it might be applied to pedestrians, said Greg Stevens, global manager for driver assistance and active safety, Ford research and innovation.

The predictive capability is based on the analysis of three aspects of the target, which are used to build a model of where it might be going, Stevens said.

First is the physical capability of the vehicle—how fast can it accelerate, brake, or change direction laterally. That creates constraints on future positions. Next, we look at cues that the vehicle might be giving us, such as ‘it looks like the vehicle is starting to edge into my lane’. We pick up a bunch of different clues as to how the vehicle is moving. The third thing is we look at what its potential goals or destinations might be. For example, if an exit ramp is coming up, and the guy is starting to inch into the lane, it might suggest that he wants to get to the exit ramp. It lends urgency to cutting us off. By looking at those three factors, we found we can do a reasonably good job of predicting where he will likely end up in the near future, and can plan a path to avoid hitting him.

—Greg Stevens
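The three factors Stevens describes can be sketched in code. The following is a minimal illustration, not Ford's or MIT's implementation: the function names, numbers, and weightings are all invented for the example. Factor 1 (physical capability) bounds where the target can be; factors 2 and 3 (cues and goals) shift probability within those bounds.

```python
# Hypothetical sketch of the three-factor prediction described above.
# All names, numbers, and weightings are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class TargetState:
    x: float    # longitudinal position (m)
    y: float    # lateral position (m)
    vx: float   # longitudinal speed (m/s)

def reachable_lateral_band(state: TargetState, horizon_s: float,
                           max_lat_accel: float = 3.0) -> tuple:
    """Factor 1: physical capability constrains the lateral positions the
    target can reach within the horizon (constant-acceleration bound)."""
    max_shift = 0.5 * max_lat_accel * horizon_s ** 2
    return (state.y - max_shift, state.y + max_shift)

def cut_in_likelihood(edging_into_lane: bool, exit_ramp_ahead: bool) -> float:
    """Factors 2 and 3: behavioral cues and plausible goals shift the
    probability mass within the physically reachable band."""
    p = 0.05                 # baseline prior for an unprompted lane change
    if edging_into_lane:     # cue: drifting toward our lane
        p += 0.45
    if exit_ramp_ahead:      # goal: needs our lane to reach the exit
        p += 0.30
    return min(p, 1.0)
```

A path planner would then treat high-likelihood regions of the reachable band as space to avoid, which matches the article's description of scenario planning.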

Pedestrians are more challenging, Stevens suggested. Although they can’t move as fast as a vehicle, they are dramatically more agile; the potential for sudden lateral movement is a significant additional challenge.

If you are driving along a street and you see a pedestrian walking along it, you have to be constantly aware that they can dart into the street. But we can use the other two factors. If the pedestrian starts looking back over the shoulder, that’s an extra clue that they’re looking to cross. The third factor: if the pedestrian is coming up on a crosswalk, there is more probability that [he or she] may step into the street.

Are they texting, or talking on the phone? There are a lot of clues that we would like to use that are probably difficult to detect. That’s the essence of the research [with MIT], which is to figure out what’s possible.

—Greg Stevens
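The pedestrian cues Stevens mentions (a glance over the shoulder, proximity to a crosswalk) could be combined into a single crossing estimate. A toy sketch under stated assumptions—the base rate and cue multipliers here are invented, not research values:

```python
# Illustrative only: combining the pedestrian cues from the text into a
# crossing probability. The base rate and multipliers are assumptions.

def crossing_probability(looked_over_shoulder: bool,
                         near_crosswalk: bool,
                         base_rate: float = 0.02) -> float:
    """Scale a small base rate by a factor per observed cue, capped at 1."""
    p = base_rate
    if looked_over_shoulder:
        p *= 8      # strong cue that the pedestrian intends to cross
    if near_crosswalk:
        p *= 4      # crossings are far more likely at a crosswalk
    return min(p, 1.0)
```

The research question the article describes—whether cues like texting or phone use can even be detected—determines which such factors could ever feed a model like this.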

At the North American International Auto Show in Detroit, Ford brought in a working research vehicle, and used the sensor system to display a 3D visualization of the acquired positional data of elements—including moving humans—in the ballroom. A color gradient designated height. As interesting as the visualized data were the “shadows”—the areas blocked from the signals.
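The height-based coloring in the demo visualization can be sketched simply: each LiDAR point's height is mapped onto a color gradient. The gradient endpoints and height range below are assumptions for illustration, not the demo's actual settings.

```python
# Minimal sketch of a height-to-color gradient for LiDAR points:
# blue at z_min, red at z_max. Range limits are assumed values.

def height_to_rgb(z: float, z_min: float = 0.0, z_max: float = 3.0) -> tuple:
    """Return an (r, g, b) tuple in [0, 1] for a point's height z."""
    t = (z - z_min) / (z_max - z_min)
    t = max(0.0, min(1.0, t))          # clamp points outside the range
    return (t, 0.0, 1.0 - t)
```

Points in the sensor "shadows" simply never appear in the cloud, which is why the blocked regions show up as empty areas rather than as colored points.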

Working with Stanford, Ford is exploring how the sensors could see around obstacles into those shadows. Typically, when a driver’s view is blocked by an obstacle like a big truck, the driver will maneuver within the lane to take a peek around it and see what is ahead. Similarly, this research would enable the sensors to “take a peek ahead” and make evasive maneuvers if needed. For example, if the truck ahead slammed on its brakes, the vehicle would know whether the area around it is clear to safely change lanes. This work with Stanford is just beginning.
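The underlying geometry of the "shadow" problem can be illustrated with a simple 2-D occlusion check: does an obstacle (modelled here as a disc) block the straight line from the sensor to a point of interest, and does a small lateral offset within the lane restore the view? This is a hedged sketch of the general idea, not the Stanford approach.

```python
# 2-D line-of-sight check against a disc-shaped obstacle. A sketch of
# the occlusion geometry only; not any specific research algorithm.
import math

def line_of_sight_clear(sensor, target, obstacle_center, obstacle_radius):
    """True if the segment sensor->target misses the obstacle disc."""
    sx, sy = sensor
    tx, ty = target
    ox, oy = obstacle_center
    dx, dy = tx - sx, ty - sy
    seg_len2 = dx * dx + dy * dy
    # Parameter of the closest point on the segment to the disc center.
    t = max(0.0, min(1.0, ((ox - sx) * dx + (oy - sy) * dy) / seg_len2))
    cx, cy = sx + t * dx, sy + t * dy
    return math.hypot(ox - cx, oy - cy) > obstacle_radius
```

With the sensor directly behind a truck, the segment to the road ahead passes through the disc and the view is blocked; shifting the sensor a couple of meters laterally within the lane can move the segment outside the disc, which is the "peek" the article describes.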

Automated driving is a key component of Ford’s Blueprint for Mobility, which outlines what transportation will look like in 2025 and beyond, along with the technologies, business models and partnerships needed to get there. With its automated Fusion Hybrid research vehicle, Ford is exploring potential solutions for the longer-term societal, legislative and technological issues posed by a future of fully automated driving.

Our goal is to provide the vehicle with common sense. Drivers are good at using the cues around them to predict what will happen next, and they know that what you can’t see is often as important as what you can see. Our goal in working with MIT and Stanford is to bring a similar type of intuition to the vehicle.

The general feeling is that since V2V is cooperative—where everybody has to be effectively speaking the same language and there have to be rules about how to manage congestion on the channel—there are a lot of challenges there. We think [that area] is being led effectively by NHTSA right now, and that is a good way forward.

We see NHTSA leadership as important in the adoption, and the timeline will probably be heavily tied to NHTSA’s plans going forward. Whether V2V is a required technology for automated driving or not is still an open question. Drivers drive vehicles today; in theory an automated driver could do the same without V2V. Certainly if there was the ability to have that extra information available that V2V gives us, we would make use of it.

Comments

V2V should mean that any car has access to what every other car sees of the road, and a dynamic map of all the vehicles, pedestrians, cyclists, roadworks, obstacles etc becomes possible.
Given the vast information processing capabilities needed, this is inherently far safer and more aware than any individual car or driver could possibly be.

The potential of the technology is fantastic. But...knowing what we do about how much the government (and others) abuses information, who will really want their cars advertising that much information about where they are and what they're doing? People are already up in arms at the prospect of mileage-based taxation for just this reason.

Me, I will be sticking with my 35 year old, non-automated non-electronic truck until they completely prohibit it.

Okay, the good thing about this is information overload... even if the government did monitor your drive to Walmart or to the gas station or to wherever, the odds of you being picked out of the potentially tens of millions of cars, and trillions (or more) of bits, are absurdly low.

I'd be more worried about the government being able to read packets mid-transit on your web browsing, or snooping in your personal networks (phone/Facebook) (which it does).

V2V means just that... sure it's broadcast widely, but it's just cars talking to other cars.

Once you get past the paranoia, you'll see that it's a hopeless cause, and you should just remain passive while they know every intimate detail you'd rather not share with society.

I welcome anything that will take idiots from behind the wheel, or just prevent them from smashing into me.