In-Vehicle Technologies: Experience & Research

This page is devoted to discussions of specific in-vehicle technologies: cell phones, navigation systems, night vision systems, wireless Internet, and information and entertainment systems, among others. Its purpose is to give drivers an avenue to share their experiences with, and impressions of, these technologies so that the benefits of these systems can be realized without causing unsafe driver distraction. Although specific in-vehicle devices are emphasized here, comments and discussion relevant to other non-technological or conventional sources of distraction are also welcome. Be sure to take our informal polls, or view their results.

Talking while driving encompasses many different kinds of interactions. Consider three distinct scenarios:

1. Conversing with a front-seat passenger while driving
2. Conversing with a friend by phone while driving
3. Interacting with an automated voice system while driving (e.g., managing a list of email messages, or listening to text-to-speech email)

Scenario 1 is the least distracting for the driver. The driver and passenger share the same visual and physical context: they can both see and react to the driving environment. The conversation is responsive to the driving task: when the driver is in the hot seat (heavy traffic, changing lanes, preparing for a maneuver, etc.), both driver and passenger are sensitive to this priority and adapt or suspend the conversation accordingly.

Scenario 2 is more demanding for the driver, because the remote interlocutor is not aware of the demands of the driving situation at any point in time. The person on the other end of the call cannot adapt the conversation to accommodate the driving task, so the driver is solely responsible for managing the conversation vis-à-vis driving. That is harder to do. The driver also lacks the visual cues that support conversational interaction with someone who is physically present: think of the difference between speaking with someone on the phone and with someone who is in the room with you. Do you find that it requires more concentration to remain focused on a conversation with someone you cannot see?

Scenario 3 is like Scenario 2 with an additional dimension of difficulty: managing speech interaction with a remote interlocutor who does not adapt the interaction to the driving situation, and who has just landed here from another planet (an automated voice recognition system). He doesn't know much English yet, and he doesn't know much about the protocols for conversational interaction with humans -- he expects us to figure out how to interact with him. He also has a hearing impediment -- he finds it hard to distinguish human speech from the rest of the noise signal, so he frequently misunderstands us.

Scenario 3 is not at all comparable to Scenario 1 (or even Scenario 2) in terms of cognitive workload. Scenarios 1 and 2 are normal human interactions. When we speak with a human, we can assume that we are understood -- we do not have to listen closely to verify it. We do not have to think about what the interlocutor expects to hear. We do not have to use strict discourse protocols to change the subject, or to resume a conversation that has been interrupted by another event. Our natural human methods of interaction impose a lighter workload than the artificial, imperfect methods of interacting with voice recognition systems.