
New research could let vehicles, robots collaborate with humans

Everything around us is getting smarter

May 5, 2013

(Credit: Zipcar)

Vehicles, robots and other autonomous devices could soon collaborate with humans, thanks to researchers at MIT who are developing systems capable of negotiating with people to determine the best way to achieve their goals.

Ultimately such systems could be used to control autonomous vehicles, such as personal aircraft and driverless cars. But in the short term, Brian Williams, a professor of aeronautics and astronautics at MIT, and graduate student Peng Yu are developing systems to allow conventional vehicles to work with their drivers to plan routes and schedules.

Diagnosis through collaboration

In a paper to be presented at the International Joint Conference on Artificial Intelligence in Beijing in August, Williams and Yu describe the use of their algorithm in car-sharing networks such as Zipcar. “The dilemma for Zipcar users is that they don’t want to pay a lot of money, so they only want to reserve the car for as long as they need it,” Williams says. “But they then run the risk of not reserving it for long enough and so having to pay a penalty.”

Users must therefore decide how best to fit everything they need to do into the time they have available. And this is where the algorithm comes in. “We want to design a car that’s smart and really works with the user,” Yu says.

The system, which is equipped with speech-recognition technology, first asks the user what they want to achieve in the given amount of time. It then uses digital maps to come up with the most time- and energy-efficient plan of action.

However, if it determines that the user simply cannot achieve all goals within the time available, it analyzes the plan to detect which items on the schedule are problematic, such as a restaurant or grocery store that is too far from the Zipcar pickup point.

“Our technology views the process of collaboration as a diagnostic problem,” Williams says. “So the algorithm figures out why the travel plan failed, what were the important things that caused it to fail, and explains this back to the user.”
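The diagnosis step described above can be sketched in miniature: check whether a set of errands fits in the reservation window, and if not, find which errands are the culprits. This is a hypothetical toy, not the researchers' actual algorithm (which reasons over temporal constraint networks and real road maps); all errand names and times are made up.

```python
# Toy sketch of "collaboration as diagnosis": if the errands don't fit in the
# Zipcar reservation window, report the smallest set of errands whose removal
# makes the plan feasible. Illustrative only; numbers and names are invented.
from itertools import combinations

def total_time(errands):
    """Total minutes needed: travel to each stop plus time spent there."""
    return sum(travel + duration for _, travel, duration in errands)

def diagnose(errands, window):
    """Return [] if the plan is feasible, else the smallest set of
    errand names that must be dropped to make it feasible."""
    if total_time(errands) <= window:
        return []
    # Try dropping progressively larger subsets until the rest fits.
    for k in range(1, len(errands) + 1):
        for dropped in combinations(errands, k):
            kept = [e for e in errands if e not in dropped]
            if total_time(kept) <= window:
                return [name for name, _, _ in dropped]
    return [name for name, _, _ in errands]

errands = [("grocery store", 15, 30),   # (name, travel min, task min)
           ("pharmacy", 10, 10),
           ("restaurant", 25, 60)]
print(diagnose(errands, 90))  # → ['restaurant']: too far to fit in 90 min
```

Having identified the conflicting errand, a real system would then propose relaxations (a closer restaurant, a longer reservation) rather than simply dropping it.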

The system suggests a set of possible options to eliminate the problem, and the user can either choose one of these or give the algorithm more information about their preferences. “Then there is a back-and-forth dialogue until the algorithm finds something that meets the customer’s needs and that the car knows it can actually do,” Williams says.

Allaying ‘range anxiety’

The researchers are also investigating the use of their algorithm in plug-in hybrid electric vehicles. Despite the greater energy efficiency of plug-in hybrids, some drivers are deterred from buying the cars by concerns about running out of electricity miles from home or the nearest charging point — a fear known as “range anxiety.”

Installing the algorithm on these vehicles would allow people to plan their route, and even determine how fast to drive in order to use the batteries as efficiently as possible, while arriving at their destination safely and on time, Williams says.

Then, if the driver were to get stuck in traffic on the journey, the algorithm could suggest alternative plans, such as driving faster and using up more energy if time is of the essence, or diverting to a nearby fast charging point if the batteries are running too low.
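The replanning choice just described can be illustrated with a toy decision rule: after a delay, compute the speed needed to arrive on time, and check whether the battery can support it; if not, divert to a charger. This is an assumption-laden sketch, not the actual vehicle algorithm, and the quadratic energy model and all numbers are invented.

```python
# Illustrative sketch of replanning after a traffic delay: speed up and spend
# more battery, or divert to a charging point. Toy model: consumption per km
# grows with the square of speed. All constants are made up for illustration.

def energy_needed(distance_km, speed_kmh, base=0.12, drag=0.00002):
    """kWh to cover the distance at a constant speed (toy quadratic model)."""
    return distance_km * (base + drag * speed_kmh ** 2)

def replan(distance_km, minutes_left, battery_kwh, charger_detour_km=5):
    """Pick the cheapest feasible option after a delay."""
    required_speed = distance_km / (minutes_left / 60)  # km/h to arrive on time
    if energy_needed(distance_km, required_speed) <= battery_kwh:
        return f"drive at {required_speed:.0f} km/h; arrives on time"
    # Not enough charge to make the deadline: divert to the charger instead.
    return f"divert {charger_detour_km} km to fast charger; arrives late"

print(replan(distance_km=40, minutes_left=30, battery_kwh=12))
# → drive at 80 km/h; arrives on time
print(replan(distance_km=40, minutes_left=30, battery_kwh=5))
# → divert 5 km to fast charger; arrives late
```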

The algorithm could also be used in robots, to allow them to collaborate with people more effectively. To this end, the researchers are working on a project with aircraft manufacturer Boeing to develop systems to improve how industrial robots and human workers cooperate with each other.

Richard Camilli, an associate scientist in the Deep Submergence Laboratory at Woods Hole Oceanographic Institution, is interested in applying the technology to the organization’s fleet of autonomous underwater vehicles (AUVs). The algorithm could allow operators to communicate with the robotic vehicles and instantly alter mission plans if the AUVs encounter interesting science opportunities or difficult weather conditions along the way.

“There are a lot of analogies between the Zipcar example and autonomous vehicles,” Camilli says. “For example, when there is a lot of science to be done, and a lot of people counting on the quality of the data, and the AUVs can’t quite make it to a rendezvous point in time, you need to come up with the optimum solution for all those things simultaneously.”

Comments (6)

This is a classic example of the brittle AI that I think will lead to a more self-aware AI. It also shows where AI will derive its initial motivations and thoughts from. The program needs a lot of situational awareness in order to decide a course of action, similar to how our brains recreate a reflection of events perceived through the senses. This motivational task is in relation to a human navigating a topography. The same underlying decision analysis will be applied to other human endeavours. We will use them to monitor and govern a wide range of activities that were done solely by humans. Most of these will be regulatory at first, but as time goes by they will increasingly be governmental. As that increases, the AI will have more need to understand itself: to be self-reflective and self-aware, to actually formulate objectives in relation to humanity's needs. We will input our needs and desires into a group mind of interlinked, immensely powerful computers that will be everywhere. Instead of a traffic light you'd have sensors relaying information, processing the vehicular trajectories, ensuring that everything is functioning properly, communicating with every vehicle and with macro-systems oversight. Each of these nodes would be like a sensory organ in a body, or an interconnected nerve system regulating the function of a bodily organ. Each another piece in VIKI.

Except that voice synthesis has nothing to do with artificial intelligence. The reason voice synthesis on phones and automotive navigation systems sounds robotic is mostly a lack of the computing power required to synthesize natural speech.

That computerized voice is enough to turn anyone off to those technologies. I like the idea of these new technologies, but wonder: if they can't even create an AI voice that's more human-like, how are they going to create a smart system that works? That voice is irritating.

Creating a voice that is more appealing to the user would be a last step, because it would have the least to do with how the machine functioned… they want to make sure they can create what they intend before adding all the bells and whistles, ya hurd?

I just noticed the default voice used in Win7 was switched in Win8 (from a female to a male voice). They seem to be making small improvements all the time, but still have a ways to go. Maybe part of the issue is that we as humans pick up on tons of tiny details to check for emotion or feeling, not just in what you say but in how you say it. I notice this all the time when someone gets the wrong idea from a text I send, or when others put in LOL way more often than needed, just to ensure the message is sent without any negativity, since all that feeling can be lost in text. Just as humans are hypersensitive to facial expressions and what they mean, and a human-like robot often comes off as creepy when it uses a fake smile, we are tuned into voice inflection, pitch, pauses, volume, etc. So it could be a while before these simple navigation systems can pass anything close to a Turing test, where it sounds like your friend reading directions to you in a human voice.