The contribution of gestures to spatial memory and navigation

Awardee/Collaborator: Alexia Galati, University of Cyprus

Collaborator: Marios Avraamides, University of Cyprus

In July 2013, funded by the SAVI grant, Alexia Galati from the University of Cyprus visited Nora Newcombe (SILC PI) and Steven Weisberg at Temple University to develop a project examining the contribution of gestures to people's spatial representations and navigation performance. In the project, participants study route directions while either gesturing or keeping their hands still, and then navigate that route from memory in a virtual environment (Virtual SILCton); finally, their memory of the environment is assessed. At Temple, Alexia and her hosts fleshed out the details of the proposed experiment, creating and piloting route descriptions, adapting features of the virtual environment, and finalizing the tests used to assess navigators' memory representations. In addition, during the lab-based exchange, Alexia participated in meetings of the Research in Spatial Cognition group at Temple, giving a talk and meeting with the group's members and affiliates.

Figure 1. Aerial view map of the layout of buildings with two routes (solid lines) in the virtual environment.

Upon Alexia's return to Cyprus, the project was implemented at the Experimental Psychology Lab in collaboration with Marios Avraamides, and data collection was completed by the end of the fall semester. In this first experiment, after completing self-report and psychometric measures intended to capture individual differences in spatial ability, 36 participants studied a route description and navigated that route in the virtual environment (see Figure 1). Their memory of the environment was then assessed through a pointing task, in which they located landmark buildings within the virtual environment, and a model-building task, in which they arranged the route's landmark buildings on a map. This series of tasks was completed twice: for one route, participants were instructed to perform gestures congruent with the described path while studying (Gesture condition), whereas for the other route they were instructed to keep their hands still (No Gesture condition; see Figure 2). The order of conditions and of the particular routes was counterbalanced across participants.
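The counterbalancing described above, two condition orders crossed with two route orders, can be sketched as follows. This is a minimal illustration with hypothetical names (`assign`, `CELLS`, the route labels); the report does not describe the lab's actual assignment procedure.

```python
from itertools import product

# Hypothetical sketch -- not the lab's actual assignment script.
# Two blocks per participant; each block pairs a study condition with a route.
COND_ORDERS = [("Gesture", "No Gesture"), ("No Gesture", "Gesture")]
ROUTE_ORDERS = [("Route A", "Route B"), ("Route B", "Route A")]

# Four counterbalancing cells: condition order crossed with route order.
CELLS = list(product(COND_ORDERS, ROUTE_ORDERS))

def assign(participant_id: int):
    """Cycle participants through the four cells so each cell is used equally often."""
    cond_order, route_order = CELLS[participant_id % len(CELLS)]
    return list(zip(cond_order, route_order))  # [(condition, route)] for blocks 1 and 2

# With 36 participants, each of the four cells is used exactly nine times.
schedule = [assign(p) for p in range(36)]
```

Cycling through all four cells in turn is one simple way to keep the design balanced at any multiple of four participants.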

Figure 2. A participant studying a set of route directions in the Gesture condition.

So far, findings suggest that gesturing while processing route descriptions influences navigators’ resulting spatial representations selectively, depending on the difficulty of the route, the navigators’ spatial abilities, and their prior learning strategies. Overall, gesturing at study did not influence participants' accuracy in the pointing and model building tasks. However, for the “easier” route (on which navigators were more accurate), preventing gestures after a gesture experience (i.e., in the second block) worsened performance in both tasks. This pattern was driven by participants with low spatial abilities on the Philadelphia Spatial Abilities Scale (PSAS) and the Santa Barbara Sense of Direction scale (SBSOD-CY): participants who could not gesture while studying the easier route made larger pointing errors. The lower participants' PSAS and SBSOD-CY scores were, the more pronounced this difference was.

Currently, the videos of participants’ navigation in the virtual environment are being coded (see Figure 3). By determining the duration of the navigators’ route, the frequency of their errors (including choice point errors, such as taking the wrong turn, and missed choice points, such as bypassing a turn), and the frequency of their pauses, we hope to address directly how gesturing at study influences navigation performance and how navigation performance may have mediated the accuracy of participants’ memory representations.
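As an illustration, the measures listed above could be tallied from a per-video event log along the following lines. The `Event` structure and event labels here are assumptions for the sketch, not the lab's actual coding scheme.

```python
from dataclasses import dataclass

# Hypothetical event-log format; the actual coding scheme is not specified in the report.
@dataclass
class Event:
    time: float  # seconds into the navigation video
    kind: str    # "start", "end", "wrong_turn", "missed_turn", or "pause"

def summarize(events):
    """Tally route duration, choice-point errors, missed choice points, and pauses."""
    times = {e.kind: e.time for e in events if e.kind in ("start", "end")}
    return {
        "duration": times["end"] - times["start"],
        "wrong_turns": sum(e.kind == "wrong_turn" for e in events),    # choice-point errors
        "missed_turns": sum(e.kind == "missed_turn" for e in events),  # bypassed choice points
        "pauses": sum(e.kind == "pause" for e in events),
    }

# Example log for one video: one pause, one wrong turn, route finished at 58.2 s.
log = [Event(0.0, "start"), Event(12.5, "pause"),
       Event(30.0, "wrong_turn"), Event(58.2, "end")]
summary = summarize(log)
```

Per-video summaries like this could then be related to the pointing and model-building scores to test the mediation question raised above.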

Follow-up experiments in the coming months will further elucidate the circumstances under which gestures might confer an advantage in navigation performance. We are specifically interested in examining whether the performance of navigators who spontaneously gesture at study differs from that of navigators who do not. Moreover, by accompanying descriptions with arrows rather than gestures, we will examine whether such perceptual reinforcement of relevant information (the described turns) operates similarly to the sensorimotor information afforded by gestures.

Spatial and Social Cognitive Processes in Language and Action

Awardee/Collaborator: Amy Pace, San Diego State University

Figure 1. View of the Hungarian Parliament Building (located in Pest) taken from across the Danube (in the hills of Buda).

In May 2013, Amy Pace participated in a collaborative exchange at the Cognitive Development Center, directed by Dr. Gergely Csibra, at the Central European University in Budapest, Hungary. The SAVI supplement supported scholarly interaction and mentorship in topics relating to spatial learning across diverse areas of scientific inquiry including language acquisition, social communication, and cognitive neuroscience. The specific aims of this scholarly exchange were to: (1) open a dialogue to identify common mechanisms in the development of spatial- and social-cognition; (2) engage in training activities to build cross-disciplinary understanding of methodology; and (3) provide a platform for future collaboration.

Figure 2. Dr. Gergely Csibra at lunch with members of the Cognitive Development Center.

During the visit, Amy had the opportunity to present findings from ERP research on how toddlers process unfamiliar actions (Pace, Carver, & Friend, 2013) and to receive feedback on an ongoing project, in collaboration with Leslie Carver (UCSD), Kathy Hirsh-Pasek (Temple University), and Dani Levine (Temple University), investigating the neural correlates of dynamic event segmentation at 10 to 11 months.

Figure 3. Amy presents her lecture, "Spatial and Social Cognitive Processes in Language and Action," at the Central European University.

In addition, the team participated in instructional workshops to share information across methodologies. The Csibra lab, for example, is beginning to conduct infant research using optical imaging (NIRS) as a non-invasive measure of the brain's hemodynamic response to activation triggered by different perceptual modalities (Gervain et al., 2011). Time-frequency analyses currently in development by lab members at the CDC were also demonstrated on infant EEG and ERP data.

As an additional part of the exchange, we organized an informal discussion panel on the role of perceptual and conceptual learning mechanisms that may support cognitive development in spatial and social domains. This debate is central to understanding how spatial information (e.g., motion kinematics, velocity, and trajectory) and intentional cues (e.g., gaze following, referential gesture, joint-attention) work together to shape children’s perception, interpretation and representation of the world. This exchange served as a foundation for ongoing international collaboration.


SILC Showcase, April 2014: An update on two of the Lab-based exchange projects supported through our Thematic Network in Spatial Cognition (TNSC) via an NSF SAVI award supplement