Session: "Interactivity in Autonomous Vehicles"

What Makes an Automated Vehicle a Good Driver?

Paper abstract:
An automated vehicle needs to learn how human road users experience the intentions of other drivers, and understand how they communicate with each other, in order to avoid misunderstandings and prevent projecting a negative external image during interactions. The aim of the present study is to identify a cooperative lane change indication which other drivers understand unambiguously and prefer when it comes to lane change announcements in a dense traffic situation on the highway. A fixed-base driving simulator study is conducted with N = 66 participants in Germany in a car-following scenario. Participants rated, from the lag driver's perspective, different lane change announcements of another driver which varied in lateral movements (i.e., duration, lateral offset). Main findings indicate that a medium offset and moderate duration of lateral movement are experienced as most cooperative. The results are crucial for the development of lane change strategies for automated vehicles.

Paper abstract:
Take-over requests (TORs) in highly automated vehicles are cues that prompt users to resume control. TORs, however, are often evaluated in non-moving driving simulators. This ignores the role of motion, an important source of information for users who have their eyes off the road while engaged in non-driving related tasks. We ran a user study in a moving-base driving simulator to investigate the effect of motion on TOR responses. We found that with motion, user responses to TORs vary depending on the road context where TORs are issued. While previous work showed that participants are fast to respond to urgent cues, we show that this is true only when TORs are presented on straight roads. Urgent cues issued on curved roads elicit slower responses than non-urgent cues on curved roads. Our findings indicate that TORs should be designed to be aware of road context to accommodate natural user responses.

Paper abstract:
Drivers use nonverbal cues such as vehicle speed, eye gaze, and hand gestures to communicate awareness and intent to pedestrians. Conversely, in autonomous vehicles, drivers can be distracted or absent, leaving pedestrians to infer awareness and intent from the vehicle alone. In this paper, we investigate the usefulness of interfaces (beyond vehicle movement) that explicitly communicate awareness and intent of autonomous vehicles to pedestrians, focusing on crosswalk scenarios. We conducted a preliminary study to gain insight on designing interfaces that communicate autonomous vehicle awareness and intent to pedestrians. Based on study outcomes, we developed four prototype interfaces and deployed them in studies involving a Segway and a car. We found that interfaces communicating vehicle awareness and intent: (1) can help pedestrians attempting to cross; (2) are not limited to the vehicle and can exist in the environment; and (3) should use a combination of modalities such as visual, auditory, and physical.