This file was created by the Typo3 extension sevenpack version 0.7.14
--- Timezone: CEST
Creation date: 2018-05-24
Creation time: 19:46:02
--- Number of references: 83
book2088How far can we get with just visual information? Path integration and spatial updating studies in Virtual Reality
[Wie weit kommt man mit visueller Information allein? Pfadintegrations- und spatial updating Studien in Virtueller Realität]2003213How do we find our way in everyday life? In real world situations, it typically takes a considerable amount of time to get completely lost. In most Virtual Reality (VR) applications, however, users are quickly lost after only a few simulated turns. This happens even though many recent VR applications are already quite compelling and look convincing at first glance. So what is missing in those simulated spaces? Why is spatial orientation there not as easy as in the real world? In other words, what sensory information is essential for accurate, effortless and robust spatial orientation? How are the different information sources combined and processed?
In this thesis, these and related questions were approached by performing a series of spatial orientation experiments in various VR setups as well as in the real world. Modeling of the underlying spatial orientation processes finally led to a comprehensive framework based on logical propositions, which was applied to both our experiments and selected experiments from the literature.Tübingen, Univ., Diss., 2003http://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://www.logos-verlag.de/cgi-bin/engbuchmid?isbn=0440&lng=deu&id=Logos VerlagBerlin, GermanyMPI Series in Biological Cybernetics ; 8Biologische KybernetikMax-Planck-GesellschaftPhD978-3-8325-0440-3bernieBERieckearticleMeilingerRB2013Local and global reference frames for environmental spacesQuarterly Journal of Experimental Psychology20143673542-569Two experiments examined how locations in environmental spaces, which cannot be overseen from one location, are represented in memory: by global reference frames, multiple local reference frames, or orientation-free representations. After learning an immersive virtual environment by repeatedly walking a closed multisegment route, participants pointed to seven previously learned targets from different locations. Contrary to many conceptions of survey knowledge, local reference frames played an important role: Participants performed better when their body or pointing targets were aligned with the local reference frame (corridor). Moreover, most participants turned their head to align it with local reference frames. However, indications for global reference frames were also found: Participants performed better when their body or current corridor was parallel/orthogonal to a global reference frame instead of oblique. Participants showing this pattern performed comparatively better. We conclude that survey tasks can be solved based on interconnected local reference frames. 
Participants who pointed more accurately or quickly additionally used global reference frames.http://www.kyb.tuebingen.mpg.defileadmin/user_upload/files/publications/2014/QJEP-2014.pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://www.tandfonline.com/doi/abs/10.1080/17470218.2013.821145#.UwdVTs6P6QB10.1080/17470218.2013.821145meilingerTMeilingerbernieBERieckehhbHHBülthoffarticleTeramotoR2011Dynamic visual information facilitates object recognition from novel viewpointsJournal of Vision2011111013:111-13Normally, people have difficulties recognizing objects from novel as compared to learned views, resulting in increased reaction times and errors. Recent studies showed, however, that this “view-dependency” can be reduced or even completely eliminated when novel views result from observer's movements instead of object movements. This observer movement benefit was previously attributed to extra-retinal (physical motion) cues. In two experiments, we demonstrate that dynamic visual information (that would normally accompany observer's movements) can provide a similar benefit and thus a potential alternative explanation. Participants performed sequential matching tasks for Shepard–Metzler-like objects presented via head-mounted display. As predicted by the literature, object recognition performance improved when view changes (45° or 90°) resulted from active observer movements around the object instead of object movements. Unexpectedly, however, merely providing dynamic visual information depicting the viewpoint change showed an equal benefit, despite the lack of any extra-retinal/physical self-motion cues. Moreover, visually simulated rotations of the table and hidden target object (table movement condition) yielded similar performance benefits as simulated viewpoint changes (scene movement condition). 
These findings challenge the prevailing notion that extra-retinal (physical motion) cues are required for facilitating object recognition from novel viewpoints, and highlight the importance of dynamic visual cues, which have previously received little attention.http://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://www.journalofvision.org/content/10/13/11.abstract10.1167/10.13.11terawWTeramotobernieBERieckearticle6104Auditory self-motion simulation is facilitated by haptic and vibrational cues suggesting the possibility of actual motionACM Transactions on Applied Perception2009863:201-20Sound fields rotating around stationary blindfolded listeners sometimes elicit auditory circular vection, the illusion that the listener is physically rotating. Experiment 1 investigated whether auditory circular vection depends on participants&lsquo; situational awareness of movability, that is, whether they sense/know that actual motion is possible or not. While previous studies often seated participants on movable chairs to suspend the disbelief of self-motion, it has never been investigated whether this does, in fact, facilitate auditory vection. To this end, 23 blindfolded participants were seated on a hammock chair with their feet either on solid ground (movement impossible) or suspended (movement possible) while listening to individualized binaural recordings of two sound sources rotating synchronously at 60/s. Although participants never physically moved, situational awareness of movability facilitated auditory vection. Moreover, adding slight vibrations like the ones result
ing from actual chair rotation increased the frequency and intensity of vection. Experiment 2 extended these findings and showed that nonindividualized binaural recordings were as effective in inducing auditory circular vection as individualized recordings. These results have important implications both for our theoretical understanding of self-motion perception and for the applied field of self-motion simulations, where vibrations, nonindividualized binaural sound, and the cognitive/perceptual framework of movability can typically be provided at minimal cost and effort.http://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.dehttp://portal.acm.org/citation.cfm?id=1577755.1577763&amp;coll=portal&amp;dl=ACM&amp;idx=J932&amp;part=transaction&amp;WantType=Transactions&amp;title=ACM%20Transactions%20on%20Applied%20Perception%20(TAP)&amp;CFID=56761798&amp;CFTOKEN=27688131Biologische KybernetikMax-Planck-Gesellschaften10.1145/1577755.1577763bernieBERieckeDFeuereissenJJRieserarticle6103Moving sounds enhance the visually-induced self-motion illusion (circular vection) in virtual realityACM Transactions on Applied Perception2009262:71-27While rotating visual and auditory stimuli have long been known to elicit self-motion illusions (circular vection), audiovisual interactions have hardly been investigated. Here, two experiments investigated whether visually induced circular vection can be enhanced by concurrently rotating auditory cues that match visual landmarks (e.g., a fountain sound). Participants sat behind a curved projection screen displaying rotating panoramic renderings of a market place. Apart from a no-sound condition, headphone-based auditory stimuli consisted of mono sound, ambient sound, or low-/high-spatial resolution auralizations using generic head-related transfer functions (HRTFs). 
While merely adding nonrotating (mono or ambient) sound showed no effects, moving sound stimuli facilitated both vection and presence in the virtual environment. This spatialization benefit was maximal for a medium (20 × 15) FOV, reduced for a larger (54 × 45) FOV and unexpectedly absent for the smallest (10 × 7.5) FOV. Increasing auraliza
tion spatial fidelity (from low, comparable to five-channel home theatre systems, to high, 5 resolution) provided no further benefit, suggesting a ceiling effect. In conclusion, both self-motion perception and presence can benefit from adding moving auditory stimuli. This has important implications both for multimodal cue integration theories and the applied challenge of building affordable yet effective motion simulators.http://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://portal.acm.org/citation.cfm?id=1498700.1498701&amp;coll=portal&amp;dl=ACM&amp;idx=J932&amp;part=transaction&amp;WantType=Transactions&amp;title=ACM%20Transactions%20on%20Applied%20Perception%20(TAP)&amp;CFID=56761798&amp;CFTOKEN=27688131Biologische KybernetikMax-Planck-Gesellschaften10.1145/1498700.1498701bernieBERieckeAVäljamäejspJSchulte-Pelkumarticle4781Consistent Left-Right Reversals for Visual Path Integration in Virtual Reality: More Than a Failure to Update One&lsquo;s Heading?Presence20084172143-175http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/Riecke_Presence2008_Figures_color_4781[0].pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://www.kyb.mpg.de/bu/people/bernie/Riecke_08_Presence/Riecke_08_Presence_Figures_color.pdfBiologische KybernetikMax-Planck-Gesellschaftendoi:10.1162/pres.17.2.143bernieBERieckearticle4628Visual control of posture in real and virtual environmentsPerception and Psychophysics20081701158-165Two experiments investigated the stabilizing influence of vision on human upright
posture in real and virtual environments. Visual stabilization was assessed by comparing
eyes-open to eyes-closed conditions while participants attempted to maintain balance in
the presence of a stable visual scene. Visual stabilization in the virtual display was
reduced compared to real world viewing. This difference was partially accounted for by
the reduced field of view in the virtual display. When the retinal flow in the virtual
display was removed by using dynamic random dot stereograms with single frame
lifetimes (cyclopean stimuli), vision did not stabilize posture. There was also an overall
larger stabilizing influence of vision when adopting more unstable stances (e.g., one-foot
compared to side-by-side stance). Reducing the graphics latency of the virtual display by
63% did not increase visual stabilization in the virtual display. Other visual and
psychological differences between real and virtual environments are discussed.http://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://www.ingentaconnect.com/content/psocpubs/prp/2008/00000070/00000001/art00014Biologische KybernetikMax-Planck-Gesellschaften10.3758/PP.70.1.158JWKellybernieBERieckeloomisJMLoomisACBeallarticle3768Spatial updating in virtual reality: the sufficiency of visual informationPsychological Research20069713298-313Robust and effortless spatial orientation critically relies on automatic and obligatory spatial
updating, a largely automatized and reflex-like process that transforms our mental egocentric
representation of the immediate surroundings during ego-motions. A rapid pointing paradigm
was used to assess automatic/obligatory spatial updating after visually displayed upright rotations
with or without concomitant physical rotations using a motion platform. Visual stimuli
displaying a natural, subject-known scene proved sufficient for enabling automatic and obligatory
spatial updating, irrespective of concurrent physical motions. This challenges the prevailing
notion that visual cues alone are insufficient for enabling such spatial updating of rotations,
and that vestibular/proprioceptive cues are both required and sufficient. Displaying optic flow
devoid of landmarks during the motion and pointing phase was insufficient for enabling automatic
spatial updating, but could not be entirely ignored either. Interestingly, additional physical
motion cues hardly improved performance, and were insufficient for affording automatic
spatial updating. The results are discussed in the context of the mental transformation hypothesis
and the sensorimotor interference hypothesis, which associates difficulties in imagined
perspective switches to interference between the sensorimotor and cognitive (to-be-imagined)
perspective.http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/Riecke__06_PsychologicalResearch_onlinePublication__Spatial_Updating_in_Virtual_Reality_-_The_Sufficiency_of_Visual_Information_3768[0].pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://www.springerlink.com/content/321247260x5446j0/fulltext.pdfBiologische KybernetikMax-Planck-Gesellschaften10.1007/s00426-006-0085-zbernieBERieckedwcDWCunninghamhhbHHBülthoffarticle3767Cognitive Factors can Influence Self-Motion Perception (Vection) in Virtual RealityACM Transactions on Applied Perception2006733194-216Research on self-motion perception and simulation has traditionally focussed on the contribution of physical stimulus properties (bottom-up factors) using abstract stimuli. Here, we demonstrate that cognitive (top-down) mechanisms like ecological relevance and presence evoked by a virtual environment can also enhance visually induced self-motion illusions (vection). In two experiments, naive observers were asked to rate presence and the onset, intensity, and convincingness of circular vection induced by different rotating visual stimuli presented on a curved projection screen (FOV: 54°×45°). Globally consistent stimuli depicting a natural 3D scene proved more effective in inducing vection and presence than inconsistent (scrambled) or unnatural (upside-down) stimuli with similar physical stimulus properties. Correlation analyses suggest a direct relationship between spatial presence and vection. We propose that the spatial reference frame evoked by the naturalistic environment increased the believability of
the visual stimulus, such that it was more easily accepted as a stable scene with respect to which visual motion is more likely to be judged as self-motion than object-motion. This work extends our understanding of mechanisms underlying self-motion perception, and might thus help to improve the effectiveness and believability of Virtual Reality applications.http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/Riecke__06_TAP__Cognitive_Factors_can_Influence_Self_Motion_Perception_-Vection-_in_Virtual_Reality__asPrinted_3767[0].pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://doi.acm.org/10.1145/1166087.1166091Biologische KybernetikMax-Planck-Gesellschaftenhttp://doi.acm.org/10.1145/1166087.1166091bernieBERieckejspJSchulte-PelkummariosMNAvraamidesmvdhMvon der HeydehhbHHBülthoffarticle3029Visual cues can be sufficient for triggering automatic, reflexlike spatial updatingACM Transactions on Applied Perception2005723183-215"Spatial updating" refers to the process that automatically updates our egocentric mental representation of our
immediate surround during self-motions, which is essential for quick and robust spatial orientation. To investigate
the relative contribution of visual and vestibular cues to spatial updating, two experiments were performed in a
high-end Virtual Reality system. Participants were seated on a motion platform and saw either the surrounding
room or a photorealistic virtual model presented via head-mounted display or projection screen. After upright
rotations, participants had to point "as accurately and quickly as possible" to previously-learned targets that were
outside of the current field of view (FOV). Spatial updating performance, quantified as response time, config-uration
error, and pointing error, was comparable in the real and virtual reality conditions when the FOV was
matched. Two further results challenge the prevailing basic assumptions about spatial updating: First, automatic,
reflex-like spatial updating occurred without any physical motion, i.e., visual motion information from a known
scene alone can indeed be sufficient, especially for large FOVs. Second, continuous motion information is not,
in fact, mandatory for spatial updating - merely presenting static images of new orientations proved sufficient,
motivating our distinction between continuous and instant-based spatial updating.http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/riecke-heyde-buelthoff-acm-tap-2005_3029[0].pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://doi.acm.org/10.1145/1077399.1077401Biologische KybernetikMax-Planck-Gesellschaften10.1145/1077399.1077401bernieBRieckemvdhMvon der HeydehhbHHBülthoffarticle1202Visual Homing is possible without Landmarks: A Path Integration Study in Virtual RealityPresence: Teleoperators and Virtual Environments200210115443-473The literature often suggests that proprioceptive and especially vestibular cues are required for navigation and spatial orientation tasks involving rotations of the observer. To test this notion, we conducted a set of experiments in virtual environments where only visual cues were provided. Participants had to execute turns, reproduce distances or perform triangle completion tasks. Most experiments were performed in a simulated 3D field of blobs, thus restricting navigation strategies to path integration based on optic flow. For our experimental setup (half-cylindrical 180° projection screen), optic flow information alone proved to be sufficient for untrained participants to perform turns and reproduce distances with negligible systematic errors, irrespective of movement velocity. Path integration by optic flow was sufficient for homing by triangle completion, but homing distances were biased towards the mean response. Additional landmarks that were only temporarily available did not improve homing performance. However, navigation by stable, reliable landmarks led to almost perfect homing performance. Mental spatial ability test scores correlated positively with homing performance especially for the more complex triangle completion tasks, suggesting that mental spatial abilities might be a determining factor for navigation performance. 
In summary, visual path integration without any vestibular or kinesthetic cues can be sufficient for elementary navigation tasks like rotations, translations, and triangle completion.http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf1202.pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://mitpress.mit.edu/catalog/item/default.asp?sid=7FFDB92B-EF42-408A-9F41-504F210447B3&ttype=6&tid=9224Biologische KybernetikMax-Planck-Gesellschaft10.1162/105474602320935810bernieBERieckeveenHAHCvan VeenhhbHHBülthoffinproceedingsSoykaLSFRM2016Enhancing stress management techniques using virtual reality2016785-88Chronic stress is one of the major problems in our current fast paced society. The body reacts to environmental stress with physiological changes (e.g. accelerated heart rate), increasing the activity of the sympathetic nervous system. Normally the parasympathetic nervous system should bring us back to a more balanced state after the stressful event is over. However, nowadays we are often under constant pressure, with a multitude of stressful events per day, which can result in us constantly being out of balance. This highlights the importance of effective stress management techniques that are readily accessible to a wide audience. In this paper we present an exploratory study investigating the potential use of immersive virtual reality for relaxation with the purpose of guiding further design decisions, especially about the visual content as well as the interactivity of virtual content. Specifically, we developed an underwater world for head-mounted display virtual reality. We performed an experiment to evaluate the effectiveness of the underwater world environment for relaxation, as well as to evaluate if the underwater world in combination with breathing techniques for relaxation was preferred to standard breathing techniques for stress management. 
The underwater world was rated as more fun and more likely to be used at home than a traditional breathing technique, while providing a similar degree of relaxation.http://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deResearch Group MohlerDepartment Bülthoffhttp://dl.acm.org/citation.cfm?id=2931017Jain, E. , S. JoergACM PressNew York, NY, USAAnaheim, CA, USAACM Symposium on Applied Perception (SAP '16)978-1-4503-4383-110.1145/2931002.2931017fsoykaFSoykaleyrerMLeyrerjsmallwoodJSmallwoodcfergusonCFergusonbernieBERieckemohlerBJMohlerinproceedings5097Auditory self-motion illusions ("circular vection") can be facilitated by vibrations and the potential for actual motion20088147-154It has long been known that sound fields rotating around a stationary, blindfolded observer can elicit self-motion illusions ("circular vection") in 20--60% of participants. Here, we investigated whether auditory circular vection might depend on whether participants sense and know that actual motion is possible or impossible. Although participants in auditory vection studies are often seated on moveable seats to suspend the disbelief of self-motion, it has never been investigated whether this does, in fact, facilitate vection. To this end, participants were seated on a hammock chair with their feet either on solid ground ("movement impossible" condition) or suspended ("movement possible" condition) while listening to individualized binaural recordings of two sound sources rotating synchronously at 60°/s. In addition, hardly noticeable vibrations were applied in half of the trials. Auditory circular vection was elicited in 8/16 participants. For those, adding vibrations enhanced vection in all dependent measures. Not touching solid ground increased the intensity of self-motion and the feeling of actually rotating in the physical room. 
Vection onset latency and the percentage of trials where vection was elicited were only marginally significantly (p<.10) affected, though. Together, this suggests that auditory self-motion illusions can be stronger when one senses and knows that physical motion might, in fact, be possible (even though participants always remained stationary). Furthermore, there was a benefit both of adding vibrations and having one's feet suspended. These results have important implications both for our theoretical understanding of self-motion perception and for the applied field of self-motion simulations, where both vibrations and the cognitive/perceptual framework that actual motion is possible can typically be provided at minimal cost and effort.http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/RieckeFeuereissenRieser_08_APGV_submitted-Preprint__Auditory%20self-motion%20illusions%20can%20be%20facilitated%20by%20vibrations%20and%20the%20potential%20for%20actual%20motion_5097[0].pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://apgv.local/archive/apgv08/Creem-Regehr, S. H., K. MyszkowskiACM PressNew York, NY, USABiologische KybernetikMax-Planck-GesellschaftLos Angeles, CA, USA5th Symposium on Applied Perception in Graphics and Visualization (APGV 2008)en978-1-59593-981-410.1145/1394281.1394309bernieBERieckeDFeuereissenJJRieserinproceedings5534Navigation modes in virtual environments: walking vs. joystick20088192There is considerable evidence that people have difficulty maintaining orientation in virtual environments. This difficulty is usually attributed to poor idiothetic cues, such as the absence of proprioception and other sources of information provided by self locomotion. 
The lack of proprioceptive cues presents a strong argument against the use of a joystick interface, and the importance of full physical movement for navigation tasks has also recently been confirmed by Ruddle and Lessels [2006], who showed that subjects performing a navigational task were superior when they were allowed to walk freely rather than when they could only physically rotate themselves or only move virtually. Our study seeks to confirm the results of Ruddle and Lessels.http://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.dehttp://apgv.local/archive/apgv08/Creem-Regehr, S. H., K. MyszkowskiACM PressNew York, NY, USABiologische KybernetikMax-Planck-GesellschaftLos Angeles, CA, USA5th Symposium on Applied Perception in Graphics and Visualization (APGV 2008)en978-1-59593-981-410.1145/1394281.1394321PPengbernieBERieckeBWilliamsTPMcNamaraBBodenheimerinproceedings4437An Integrative Theory of Spatial Orientation in the Immediate Environment200781822http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/RieckeMcNamara_07_CogSci_1page_poster__An_Integrative_Theory_of_Spatial_Orientation_in_the_Immediate_Environment_4437[1].pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://www.cogsci.rpi.edu/csjarchive/proceedings/2007/forms/contents5.htmCurranRed Hook, NY, USABiologische KybernetikMax-Planck-GesellschaftNashville, TN, USA29th Annual Conference of the Cognitive Science Society (CogSci 2007)enbernieBERieckeTPMcNamarainproceedings4439Orientation Specificity in Long-Term-Memory for Environmental Spaces20078473-478This study examined orientation specificity in long-term
human memory for environmental spaces. Twenty
participants learned an immersive virtual environment by
walking a multi-segment route in one direction. The
environment consisted of seven corridors within which target
objects were located. In the testing phase, participants were teleported to different locations in the environment and were asked to identify their location and heading and then point towards previously learned targets. As predicted by viewdependent theory, participants pointed more accurately when oriented in the direction in which they originally learned each corridor. No support was found for a global reference direction underlying the memory of the whole layout or for an exclusive orientation-independent memory. We propose a "network of reference frames" theory to integrate elements of the different theoretical positions.http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/orientation%20specificity%20in%20environmental%20spaces%20final_4439[0].pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://csjarchive.cogsci.rpi.edu/proceedings/2007/CurranRed Hook, NY, USABiologische KybernetikMax-Planck-GesellschaftNashville, TN, USA29th Annual Conference of the Cognitive Science Society (CogSci 2007)en978-1-605-60507-4meilingerTMeilingerbernieBERieckehhbHHBülthoffinproceedings4430Consistent left-right errors for visual path integration in virtual reality: more than a failure to update one's heading?20077139Optic flow is known to enable humans to estimate heading, translations, and rotations.
Here, we investigated whether optic flow simulating self-motions in virtual reality might also enable natural and intuitive spatial orientation, without the need for error-corrective feedback or training.
After visually displayed passive excursions along 1- or 2-segment paths, participants had to point toward the starting point "as accurately and quickly as possible".
Turning angles were announced in advance to obviate encoding errors due to misperceived turning angles. Nevertheless, many participants still produced surprisingly large systematic and random errors, and perceived task difficulty and response times were unexpectedly high.
Moreover, 11 of the 24 participants showed consistent qualitative errors, namely left-right reversals -- despite not misinterpreting the visually simulated motion direction.
Careful analysis suggests that some, but not all, of the left-right inversions can be explained by a failure to update visually displayed heading changes.
Left-right inversion was correlated with reduced mental spatial ability (corroborating earlier results), but not gender.
In conclusion, optic flow was clearly insufficient for enabling natural and intuitive spatial orientation or automatic spatial updating, even when advance information about turning angles was provided.
We posit that investigating qualitative errors for basic spatial orientation tasks using, e.g., point-to-origin paradigms can be a powerful tool for benchmarking VR setups from a human-centered perspective.http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/apgv07-139_4430[0].pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://www.apgv.org/archive/apgv07/Wallraven, C. , V. SundstedtACM PressNew York, NY, USABiologische KybernetikMax-Planck-GesellschaftTübingen, Germany4th Symposium on Applied Perception in Graphics and Visualization (APGV 2007)en978-1-59593-670-710.1145/1272582.1272616bernieBERieckeinproceedings4791Do HDR displays support LDR content?: a psychophysical evaluationACM Transactions on Graphics20077263:381-7The development of high dynamic range (HDR) imagery has brought us to the verge of arguably the largest change in image display technologies since the transition from black-and-white to color television. Novel capture and display hardware will soon enable consumers to enjoy the HDR experience in their own homes. The question remains, however, of what to do with existing images and movies, which are intrinsically low dynamic range (LDR). Can this enormous volume of legacy content also be displayed effectively on HDR displays? We have carried out a series of rigorous psychophysical investigations to determine how LDR images are best displayed on a state-of-the-art HDR monitor, and to identify which stages of the HDR imaging pipeline are perceptually most critical. Our main findings are: (1) As expected, HDR displays outperform LDR ones. (2) Surprisingly, HDR images that are tonemapped for display on standard monitors are often no better than the best single LDR exposure from a bracketed sequence. (3) Most impor
tantly of all, LDR data does not necessarily require sophisticated treatment to produce a compelling HDR experience. Simply boosting the range of an LDR image linearly to fit the HDR display can equal or even surpass the appearance of a true HDR image. Thus the potentially tricky process of inverse tone mapping can be largely circumvented.http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/SIGGRAPH07_camera_ready_[0].pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://portal.acm.org/toc.cfm?id=1276377&amp;amp;amp;coll=portal&amp;amp;amp;dl=ACM&amp;amp;amp;type=issue&amp;amp;amp;idx=J778&amp;amp;amp;part=transaction&amp;amp;amp;WantType=Transactions&amp;amp;amp;title=ACM%20Transactions%20on%20Graphics%20%28TOG%29ACM PressNew York, NY, USABiologische KybernetikMax-Planck-GesellschaftSan Diego, CA, USA34th International Conference and Exhibition on Computer Graphics and Interactive Techniques (SIGGRAPH 2007)en10.1145/1275808.1276425akyuzAOAkyuzrolandRWFlemingbernieBERieckeEReinhardhhbHHBülthoffinproceedings4654Physical self-motion facilitates object recognition, but does not enable view-independence20077142It is well known that people have difficulties in recognizing an object from novel views as compared to learned views, resulting in increased response times and/or errors. This so-called view-dependency has been confirmed by many studies. In the natural environment, however, there are two ways of changing views of an object: one is to rotate an object in front of a stationary observer (object-movement), the other is for the observer to move around a stationary object (observer-movement). Note that almost all previous studies are based on the former procedure. Simons et al. [2002] criticized previous studies in this regard and examined the difference between object- and observer-movement directly. As a result, Simons et al. 
[2002] reported the elimination of this view-dependency when novel views resulted from observer-movement, instead of object-movement. They suggested a contribution of extra-retinal (vestibular and proprioceptive) information to object recognition. Recently, however, Zhao et al. [2007]
reported that the observer's movement from one view to another only decreased view-dependency without fully eliminating it. Furthermore, even this effect vanished for rotations of 90° instead of 50°. Larger rotations were not tested. The aim of the present study was to clarify the underlying mechanism of this phenomenon and to investigate larger angles of view change (45-180°, in 45° steps).http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/apgv07-142_[0].pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://www.apgv.org/archive/apgv07/Wallraven, C. , V. SundstedtACM PressNew York, NY, USABiologische KybernetikMax-Planck-GesellschaftTübingen, Germany4th Symposium on Applied Perception in Graphics and Visualization (APGV 2007)en978-1-59593-670-710.1145/1272582.1272619terawWTeramotobernieBERieckeinproceedings4318Can People Not Tell Left from Right in VR? Point-to-origin Studies Revealed Qualitative Errors in Visual Path Integration200733-10Even in state-of-the-art virtual reality (VR) setups, participants often feel lost when navigating through virtual environments. In psychological experiments, such disorientation is often compensated for by extensive training. The current study investigated participants' sense of direction by means of a rapid point-to-origin task without any training or performance feedback. This allowed us to study participants' intuitive spatial orientation in VR while minimizing the influence of higher cognitive abilities and compensatory strategies. After visually displayed passive excursions along one- or two-segment trajectories, participants were asked to point back to the origin of locomotion "as accurately and quickly as possible". Despite using a high-quality video projection with an 84°×63° field of view, participants' overall performance was rather poor.
Moreover, six of the 16 participants exhibited striking qualitative errors, i.e., consistent left-right confusions that have not been observed in comparable real world experiments. Taken together, this study suggests that even an immersive high-quality video projection system is not necessarily sufficient for enabling natural spatial orientation in VR. We propose that a rapid point-to-origin paradigm can be a useful tool for evaluating and improving the effectiveness of VR setups in terms of enabling natural and unencumbered spatial orientation and performance.http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/RieckeWiener_07_VR2007_resubmitted__Can%20people%20not%20tell%20left%20from%20right%20in%20VR%20-%20Point-to-origin%20studies%20revealed%20qualitative%20errors%20in%20visual%20path%20integration_[0].pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://conferences.computer.org/vr/2007/Sherman, W. A., M. Lin, A. SteedIEEEPiscataway, NJ, USABiologische KybernetikMax-Planck-GesellschaftCharlotte, NC, USAIEEE Virtual Reality (VR 2007)en1-4244-0906-310.1109/VR.2007.352457bernieBERieckemalteJMWienerinproceedings4063Simple User-Generated Motion Cueing can Enhance Self-Motion Perception (Vection) in Virtual Reality200611104-107Despite amazing advances in the visual quality of virtual environments, affordable-yet-effective self-motion simulation still poses a major challenge. Using a standard psychophysical paradigm, the effectiveness of different self-motion simulations was quantified in terms of the onset latency, intensity, and convincingness of the perceived illusory self-motion (vection). Participants were asked to actively follow different pre-defined trajectories through a naturalistic virtual scene presented on a panoramic projection screen using three different input devices: a computer mouse, a joystick, or a modified manual wheelchair.
For the wheelchair, participants exerted their own minimal motion cueing using a simple force-feedback and a velocity control paradigm: small translational or rotational motions of the wheelchair (limited to 8cm and 10°, respectively) initiated a corresponding visual motion with the visual velocity being proportional to the wheelchair deflection (similar to a joystick). All dependent measures showed a clear enhancement of the perceived self-motion when the wheelchair was used instead of the mouse or joystick. Compared to more traditional approaches of enhancing self-motion perception (e.g., motion platforms, free walking areas, or treadmills), the current approach of using simple user-generated motion cueing has only minimal requirements in terms of overall costs, required space, safety features, and technical effort and expertise. Thus, the current approach might be promising for a wide range of low-cost applications.http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/VRST2006_paper_-_Simple_User-Generated_Motion_Cueing_can_Enhance_Self-Motion_Perception_in_Virtual_Reality_asResubmitted_4063[0].pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffwww.vrst.ploegos.comSlater, M. , Y. Kitamura, A. Tal, A. Amditis, Y. ChrysanthouACM PressNew York, NY, USABiologische KybernetikMax-Planck-GesellschaftACMLimassol, CyprusACM Symposium on Virtual Reality Software and Technology (VRST 2006)en1-59593-321-210.1145/1180495.1180517bernieBERieckeinproceedings3937Point-to-origin experiments in VR revealed novel qualitative errors in visual path integration20067156Even in state-of-the-art virtual reality (VR) setups, participants often feel lost when navigating through virtual environments. In psychological experiments, such disorientation is often compensated for by extensive training and performance feedback.
The current study investigated participants' sense of direction by means of a rapid point-to-origin task without any training or performance feedback. This allowed us to study participants' intuitive spatial orientation processes in VR while minimizing the influence of higher cognitive abilities and compensatory strategies. From an applied perspective, such a paradigm could be employed for evaluating the effectiveness and usability of a given VR setup for enabling natural and unencumbered spatial orientation even for first-time users, which is important for tasks such as architecture walk-throughs, evacuation scenario training, or driving/flight simulators.http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/RieckeWiener_06APGV_resubmitted1page__Point-to-origin%20experiments%20in%20VR%20revealed%20novel%20qualitative%20errors%20in%20visual%20path%20integration_3937[1].pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://www.apgv.org/archive/apgv06/Fleming, R.W. , S. KimACM PressNew York, USABiologische KybernetikMax-Planck-GesellschaftBoston, MA, USA3rd Symposium on Applied Perception in Graphics and Visualization (APGV 2006)en1-59593-429-410.1145/1140491.1140533bernieBERieckemalteJMWienerinproceedings3466Influence of Auditory Cues on the visually-induced Self-Motion Illusion (Circular Vection) in Virtual Reality2005949-57This study investigated whether the visually induced self-motion illusion (“circular vection”) can be enhanced by
adding a matching auditory cue (the sound of a fountain
that is also visible in the visual stimulus). Twenty observers viewed rotating photorealistic pictures of a market place projected onto a curved projection screen (FOV: 54°x45°).
Three conditions were randomized in a repeated measures
within-subject design: No sound, mono sound, and
spatialized sound using a generic head-related transfer
function (HRTF). Adding mono sound increased
convincingness ratings marginally, but did not affect any of
the other measures of vection or presence. Spatializing the
fountain sound, however, improved vection (convincingness
and vection buildup time) and presence ratings
significantly. Note that facilitation was found even though
the visual stimulus was of high quality and realism, and
known to be a powerful vection-inducing stimulus. Thus,
HRTF-based auralization using headphones can be
employed to improve visual VR simulations both in terms of
self-motion perception and overall presence.http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/Riecke__05_paper4Presence2005_Influence_of_Auditory_Cues_on_the_visually-induced_Self-Motion_Illusion_-Circular_Vection-_in_Virtual_Reality_3466[0].pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttps://ispr.info/presence-conferences/previous-conferences/presence-2005/Slater, M.University College LondonLondon, UKBiologische KybernetikMax-Planck-GesellschaftLondon, UK8th Annual International Workshop on Presence (PRESENCE 2005)en0-9551232-0-8bernieBERieckejspJSchulte-PelkumfranckFCaniardhhbHHBülthoffinproceedings3489Measuring Vection in a Large Screen Virtual Environment20058103-109This paper describes the use of a large screen virtual environment to induce the perception of translational and rotational self-motion. We explore two aspects of this problem. Our first study investigates how the level of visual immersion (seeing a reference frame) affects subjective measures of vection. For visual patterns consistent with translation, self-reported subjective measures of self-motion were increased when the floor and ceiling were visible outside of the projection area. When the visual patterns indicated rotation, the strength of the subjective experience of circular vection was unaffected by whether or not the floor and ceiling were visible. We also found that circular vection induced by the large screen display was reported subjectively more compelling than translational vection. The second study we present describes a novel way in which to measure the effects of displays intended to produce a sense of vection. It is known that people unintentionally drift forward if asked to run in place while blindfolded and that adaptations involving perceived linear self-motion can change the rate of drift. 
We showed for the first time that there is a lateral drift following perceived rotational self-motion and we added to the empirical data associated with the drift effect for translational self-motion by exploring the condition in which the only self-motion cues are visual.http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/mohler-etal-apgv-2005_3489[0].pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://portal.acm.org/citation.cfm?id=1080421Bülthoff, H.H., T. TrosciankoACM PressNew York, NY, USABiologische KybernetikMax-Planck-GesellschaftSIGGRAPHLa Coruña, Spain2nd Symposium on Applied Perception in Graphics and Visualization (APGV 2005)en1-59593-139-210.1145/1080402.1080421mohlerBJMohlerbernieBERieckeWBThompsonhhbHHBülthoffinproceedings3467Scene Consistency and Spatial Presence Increase the Sensation of Self-Motion in Virtual Reality20058111-118The illusion of self-motion induced by moving visual stimuli ("vection") has typically been attributed to low-level, bottom-up perceptual processes. Therefore, past research has focused primarily on examining how physical parameters of the visual stimulus (contrast, number of vertical edges etc.) affect vection. Here, we investigated whether higher-level cognitive and top-down processes - namely global scene consistency and spatial presence - also contribute to the illusion. These factors were indirectly manipulated by presenting either a natural scene (the Tübingen market place) or various scrambled and thus globally inconsistent versions of the same stimulus. Due to the scene scrambling, the stimulus could no longer be perceived as a consistent 3D scene, which was expected to decrease spatial presence and thus impair vection. Twelve naive observers were asked to indicate the onset, intensity, and convincingness of circular vection induced by rotating visual stimuli presented on a curved projection screen (FOV: 54°x45°). 
Spatial presence was assessed using presence questionnaires. As predicted, scene scrambling impaired both vection and presence ratings for all dependent measures. Neither type nor severity of scrambling, however, showed any clear effect. The data suggest that higher-level information (the interpretation of the globally consistent stimulus as a 3D scene and stable reference frame) dominated over the low-level (bottom-up) information (more contrast edges in the scrambled stimuli, which are known to facilitate vection). Results suggest a direct relation between spatial presence and self-motion perception. We posit that stimuli depicting globally consistent, naturalistic scenes provide observers with a convincing spatial reference frame for the simulated environment, which allows them to feel "spatially present" therein. We propose that this, in turn, increases the believability of the visual stimuli as a stable "scene" with respect to which visual motion is more likely to be judged as self-motion. We propose that not only low-level, bottom-up factors, but also higher-level factors such as the meaning of the stimulus are relevant for self-motion perception and should thus receive more attention. This work has important implications for both our understanding of self-motion perception and motion simulator design and applications.http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf3467.pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://portal.acm.org/citation.cfm?id=1080422Bülthoff, H.H., T.
TrosciankoACM PressNew York, NY, USABiologische KybernetikMax-Planck-GesellschaftACM Special Interest Group on Computer Graphics and Interactive TechniquesLa Coruña, Spain2nd Symposium on Applied Perception in Graphics and Visualization (APGV 2005)en1-59593-139-210.1145/1080402.1080422bernieBERieckejspJSchulte-PelkummariosMAvraamidesmvdhMvon der HeydehhbHHBülthoffinproceedings2765Top-Down and Multi-Modal Influences on Self-Motion Perception in Virtual Reality200571-10INTRODUCTION: Much of the work on self-motion perception and simulation has investigated the contribution of
physical stimulus properties (so-called “bottom-up” factors). This paper provides an overview of recent experiments demonstrating that illusory self-motion perception can also benefit from “top-down” mechanisms, e.g. expectations, the interpretation and meaning associated with the stimulus, and the resulting spatial presence in the simulated environment.
METHODS: Several VR setups were used as a means to independently control different sensory modalities,
thus allowing for well-controlled and reproducible psychophysical experiments. Illusory self-motion perception
(vection) was induced using rotating visual or binaural auditory stimuli, presented via a curved projection screen
(FOV: 54°x40.5°) or headphones, respectively. Additional vibrations, subsonic sound, or cognitive frameworks were
applied in some trials. Vection was quantified in terms of onset time, intensity, and convincingness ratings.
RESULTS & DISCUSSION: Auditory vection studies showed that sound sources participants associated with stationary
“acoustic landmarks” (e.g., a fountain) can significantly increase the effectiveness of the self-motion illusion,
as compared to sound sources that are typically associated with moving objects (like the sound of footsteps). A
similar top-down effect was observed in a visual vection experiment: Showing a rotating naturalistic scene in VR
improved vection considerably compared to scrambled versions of the same scene. Hence, the possibility of interpreting the stimulus as a stationary reference frame seems to enhance self-motion perception, which challenges the prevailing opinion that self-motion perception is primarily bottom-up driven. Even the mere knowledge that one might potentially be moved physically increased the convincingness of the self-motion illusion significantly, especially when additional vibrations supported the interpretation that one was really moving. CONCLUSIONS: Various top-down mechanisms were shown to increase the effectiveness of self-motion simulations in VR, even though they have received little attention in the literature up to now. Thus, we posit that a perceptually-oriented approach that combines both bottom-up and top-down factors will ultimately enable us to optimize self-motion simulations in terms of both effectiveness and costs.http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/Riecke__05__VE_HCI2005__Top-Down_and_Multi-Modal_Influences_on_Self-Motion_Perception_in_Virtual_Reality__asOnConferenceCD_2252_2765[0].pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://www.hci-international.org/Salvendy, G.ErlbaumMahwah, NJ, USABiologische KybernetikMax-Planck-GesellschaftLas Vegas, NV, USA11th International Conference on Human-Computer Interaction (HCI International 2005)en0-8058-5807-5bernieBERieckeLVästfjälljspJSchulte-Pelkuminproceedings2904Towards Lean and Elegant Self-Motion Simulation in Virtual Reality20053131-138Despite recent technological advances, convincing self-motion simulation in virtual reality (VR) is difficult to achieve, and users often suffer from motion sickness and/or disorientation in the simulated world.
Instead of trying to simulate self-motions with physical realism (as is often done for, e.g., driving or flight simulators), we propose in this paper a perceptually oriented approach towards self-motion simulation. Following this paradigm, we performed a series of psychophysical experiments to determine essential visual, auditory, and vestibular/tactile parameters for an effective and perceptually convincing self-motion simulation. These studies are a first step towards our overall goal of achieving lean and elegant self-motion simulation in virtual reality (VR) without physically moving the observer. In a series of psychophysical experiments about the self-motion illusion (circular vection), we found that (i) vection as well as presence in the simulated environment is increased by a consistent, naturalistic visual scene when compared to a sliced, inconsistent version of the identical scene, (ii) barely noticeable marks on the projection screen can increase vection as well as presence in an unobtrusive manner, (iii) physical vibrations of the observer's seat can enhance the vection illusion, and (iv) spatialized 3D audio cues embedded in the simulated environment increase the sensation of self-motion and presence. We conclude that providing consistent cues about self-motion to multiple sensory modalities can enhance vection, even if physical motion cues are absent. 
These results yield important implications for the design of lean and elegant self-motion simulators.http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf2904.pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://www.vr2005.orgFröhlich, B.IEEE Computer SocietyPiscataway, NJ, USABiologische KybernetikMax-Planck-GesellschaftBonn, GermanyIEEE Conference on Virtual Reality (VR '05)0-7803-8929-810.1109/VR.2005.83bernieBERieckejspJSchulte-PelkumfranckFCaniardhhbHHBülthoffinproceedings3233Perceiving simulated ego-motions in virtual reality: comparing large screen displays with HMDs20051344-355In Virtual Reality, considerable systematic spatial orientation problems frequently occur that do not happen in comparable real-world situations. This study investigated possible origins of these problems by examining the influence of visual field of view (FOV) and type of display device (head-mounted display (HMD) vs. projection screens) on basic human spatial orientation behavior. In Experiment 1, participants had to reproduce traveled distances and to turn specified target angles in a simple virtual environment without any landmarks that was projected onto a 180° half-cylindrical projection screen. As expected, distance reproduction performance showed only small systematic errors. Turning performance, however, was unexpectedly almost perfect (gain=0.97), with negligible systematic errors and minimal variability, which is unprecedented in the literature. In Experiment 2, turning performance was compared between a projection screen (FOV 84°×63°), an HMD (40°×30°), and blinders (40°×30°) that restricted the FOV on the screen. Performance was best with the screen (gain 0.77) and worst with the HMD (gain 0.57). We found a significant difference between blinders (gain 0.73) and HMD, which indicates that different display devices can influence ego-motion perception differentially, even if the physical FOVs are equal. 
We conclude that the type of display device (HMD vs. curved projection screen) seems to be more critical than the FOV for the perception of ego-rotations. Furthermore, large, curved projection screens yielded better performance than HMDs.http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf3233.pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://spiedigitallibrary.org/proceedings/resource/2/psisdg/5666/1/344_1Rogowitz, B. E., T. N. Pappas and S. J. DalySPIEBellingham, WA, USAProceedings of the SPIE ; 5666Human Vision and Electronic Imaging XBiologische KybernetikMax-Planck-GesellschaftSan Jose, CA, USAElectronic Imaging: Science and Technology0-8194-5639-X10.1117/12.610846bernieBERieckejspJSchulte-PelkumhhbHHBülthoffinproceedings2864Enhancing the Visually Induced Self-Motion Illusion (Vection) under Natural Viewing Conditions in Virtual Reality200410125-132The visually induced illusion of ego-motion (vection) is known to be facilitated by both static fixation points [1] and foreground stimuli that are perceived to be stationary in front of a moving background stimulus [2]. In this study, we found that hardly noticeable marks in the periphery of a projection screen can have similar vection-enhancing effects, even without fixating or suppressing the optokinetic reflex (OKR). Furthermore, vection was facilitated even though the marks had no physical depth separation from the screen. Presence ratings correlated positively with vection, and seemed to be mediated by the ego-motion illusion. Interestingly, the involvement/attention aspect of overall presence was more closely related to vection onset times, whereas spatial presence-related aspects were more tightly related to convincingness ratings. 
This study yields important implications for both presence theory and motion simulator design and applications, where one often wants to achieve convincing ego-motion simulation without restricting eye movements artificially.
SUPPORT: EU grant POEMS-IST-2001-39223 (see www.poems-project.info) and Max Planck Society.http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf2864.pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttps://ispr.info/presence-conferences/previous-conferences/presence-2004/Alcañiz Raya, M.UPVValencia, SpainBiologische KybernetikMax-Planck-GesellschaftValencia, Spain7th Annual International Workshop Presence (PRESENCE 2004)84-97056-49-3bernieBERieckejspJSchulte-PelkummariosMNAvraamideshhbHHBülthoffinproceedings2764Spatial updating in real and virtual environments: contribution and interaction of visual and vestibular cues200489-17INTRODUCTION: When we move through the environment, the self-to-surround relations constantly change. Nevertheless, we perceive the world as stable. A process that is critical to this perceived stability is "spatial updating", which automatically updates our egocentric mental spatial representation of the surround according to our current self-motion. According to the prevailing opinion, vestibular and proprioceptive cues are absolutely required for spatial updating. Here, we challenge this notion by varying visual and vestibular contributions independently in a high-fidelity VR setup. METHODS: In a learning phase, participants learned the positions of twelve targets attached to the walls of a 5x5m room. In the testing phase, participants saw either the real room or a photo-realistic copy presented via a head-mounted display (HMD). Vestibular cues were applied using a motion platform. Participants' task was to point "as accurately and quickly as possible" to four targets announced consecutively via headphones after rotations around the vertical axis into different positions. RESULTS: Automatic spatial updating was observed whenever useful visual information was available: Participants had no problem mentally updating their orientation in space, irrespective of turning angle.
Performance, quantified as response time, configuration error, and pointing error, was best in the real world condition. However, when the field of view was limited via cardboard blinders to match that of the HMD (40 × 30°), performance decreased and was comparable to the HMD condition. Presenting turning information only visually (through the HMD) hardly altered those results. In both the real world and HMD conditions, spatial updating was obligatory in the sense that it was significantly more difficult to ignore ego-turns (i.e., "point as if not having turned") than to update them as usual. CONCLUSION: The rapid pointing paradigm proved to be a useful tool for quantifying spatial updating. We conclude that, at least for the limited turning angles used (<60°), the Virtual Reality simulation of ego-rotation was as effective and convincing (i.e., hard to ignore) as its real world counterpart, even when only visual information was presented. This has relevant implications for the design of motion simulators for, e.g., architecture walkthroughs.http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf2764.pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://portal.acm.org/citation.cfm?id=1012553Interrante, V. , A. McNamara, H.H. Bülthoff, H.E. RushmeierACM PressNew York, NY, USABiologische KybernetikMax-Planck-GesellschaftLos Angeles, CA, USA1st Symposium on Applied Perception in Graphics and Visualization (APGV 2004)en1-58113-914-410.1145/1012551.1012553bernieBERieckehhbHHBülthoffinproceedings1949Embedding presence-related terminology in a logical and
functional model20021037-52In this paper, we introduce first steps towards a logically
consistent framework describing and relating items
concerning the phenomena of spatial presence, spatial
orientation, and spatial updating. Spatial presence can be
regarded as the consistent feeling of being in a specific
spatial context, and intuitively knowing where one is with
respect to the immediate surround. The core idea is to try
to understand presence-related issues by analyzing their
logical and functional relations. This is done by
determining necessary and/or sufficient conditions between
related items. This eventually leads to a set of necessary
prerequisites and sufficient conditions for spatial
presence, spatial orientation, and spatial updating. More
specifically, the logical structure of our framework allows
for novel ways of quantifying spatial presence and spatial
updating.http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf1949.pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttps://ispr.info/presence-conferences/previous-conferences/presence-2002/Gouveia, F. R.Universidade Fernando PessoaPorto, PortugalBiologische KybernetikMax-Planck-GesellschaftMax Planck Institute for Biological Cybernetics, Tübingen, GermanyPorto, Portugal5th Annual International Workshop on Presence (PRESENCE 2002)972-8184-88-3mvdhMvon der HeydebernieBERieckeinbook631Visual Homing is possible without Landmarks: A Path Integration Study in Virtual Reality200097-134http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf631.pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffvon der Heyde, M. , H.H. BülthoffMax-Planck-Institute for Biological CyberneticsTübingen, GermanyPerception and action in virtual environments: selected papers from the Cognitive and Computational Psychophysics Department, 1997 - 2000Biologische KybernetikMax-Planck-GesellschaftbernieBERieckeveenHAHCvan VeenhhbHHBülthofftechreport4490A novel immersive virtual environment setup for behavioural experiments in humans, tested on spatial memory for environmental spaces20073158We present a summary of the development of a new virtual reality setup for behavioural experiments in the area of spatial cognition. Most previous virtual reality setups can either not provide accurate body motion cues when participants are moving in a virtual environment, or participants are hindered by cables while walking in virtual environments with a head-mounted display (HMD). Our new setup solves these issues by providing a large, fully trackable walking space, in which a participant with a HMD can walk freely, without being tethered by cables. Two experiments on spatial memory are described, which tested this setup. 
The results suggest that environmental spaces traversed during wayfinding are memorised in a view-dependent way, i.e., in the local orientation they were experienced, and not with respect to a global reference direction.http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/mpik-tr-158_[0].pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment BülthoffBiologische KybernetikMax-Planck-GesellschaftMax-Planck-Institute for Biological Cybernetics, Tübingen, GermanyenmeilingerTMeilingerbernieBERieckebergerDBergerhhbHHBülthofftechreport4188Selected Technical and Perceptual Aspects of
Virtual Reality Displays200610154There is an increasing number of different presentation techniques available for producing visual Virtual Reality
(VR) scenes. The purpose of this chapter is to give a brief and introductory overview of existing VR
presentation techniques and to highlight advantages and disadvantages of each technique, depending on the
specific applications. This should enable the reader to design and/or improve their VR visualization setup in
terms of both the perceptual aspects and the effectiveness for a given task or goal.
In this overview, we relate the different types of presentation techniques to aspects of human physiology of
visual perception that have important implications for VR setups. This will by no means be a complete
overview of all physiological aspects. For a detailed overview and introduction, see, e.g., Goldstein (2002).
The aim of a visual simulation is to achieve a convincing and perceptually realistic presentation of the simulated
environment. Ideally, the user should feel present in the virtual environment and not be able to tell whether it is
real or simulated. The human visual system uses several cues to form a percept of the surrounding environment.
We will have a closer look at some of these cues in the first section, as they are of crucial importance when
looking at simulated scenes. The remaining sections are concerned with possible technical implementations and
how these relate to the perceptual aspects and effectiveness for a given task.http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/RieckeNusseckSchulte-Pelkum_06_MPIK-TR_154__Selected%20Technical%20and%20Perceptual%20Aspects%20of%20Virtual%20Reality%20Displays_[0].pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment BülthoffBiologische KybernetikMax-Planck-GesellschaftMax Planck Institute for Biological Cybernetics, TübingenenbernieBERieckenusseckH-GNusseckjspJSchulte-Pelkumtechreport4186Using the perceptually oriented approach to optimize spatial presence &amp; ego-motion simulation200610153This chapter is concerned with the perception and simulation of ego-motion in virtual environments, and how spatial
presence and other higher cognitive and top-down factors can contribute to improving the illusion of ego-motion in
virtual reality (VR). In the real world, we are used to being able to move around freely and interact with our
environment in a natural and effortless manner. However, current VR technology does not yet allow for natural,
real-life-like interaction between the user and the virtual environment. One crucial shortcoming in current VR is the
insufficient and often unconvincing simulation of ego-motion, which frequently causes disorientation, unease, and
motion sickness. We posit that a realistic perception of ego-motion in VR is a fundamental constituent of spatial
presence and vice versa. Thus, by improving both spatial presence and ego-motion perception in VR, we aim to
eventually enable performance levels in VR similar to those in the real world for basic tasks, e.g., spatial orientation and
distance perception, which are currently problematic: users easily get lost in VR while
navigating, and simulated distances appear compressed and underestimated compared to the real world
(Witmer & Sadowski, 1998; Chance, Gaunet, Beall, & Loomis, 1998; Creem-Regehr, Willemsen, Gooch, &
Thompson, 2003; Knapp, 1999; Thompson, Willemsen, Gooch, Creem-Regehr, Loomis, & Beall, 2004; Stanney,
2002).http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/RieckeSchulte-Pelkum_06_MPIK-TR-153_4186[0].pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment BülthoffBiologische KybernetikMax-Planck-GesellschaftMax Planck Institute for Biological Cybernetics, TübingenenbernieBERieckejspJSchulte-Pelkumtechreport4187Spatialized auditory cues enhance the visually-induced self-motion illusion (circular vection) in Virtual Reality200510138http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/Riecke_05_TR-138__Spatialized%20auditory%20cues%20enhance%20the%20visually-induced%20self-motion%20illusion%20(circular%20vection)%20in%20Virtual%20Reality_[0].pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment BülthoffBiologische KybernetikMax-Planck-GesellschaftMax Planck Institute for Biological Cybernetics, Tübingen, Germanyen1bernieBERieckejspJSchulte-PelkumfranckFCaniardhhbHHBülthofftechreport2574Influence of display device and screen curvature on perceiving and controlling simulated ego-rotations from optic flow20042122This study investigated how display parameters influence humans’ ability to control simulated egorotations
from optic flow. The literature on visual turn perception reports contradictory data, which might be partly
due to the different display devices used in these studies. In this study, we aimed at disentangling the influence of
display devices, screen curvature, and field of view (FOV) on the ability to control simulated ego-rotations solely
from visual information. In Experiment 1, FOV and display device (projection screen vs. head-mounted display
(HMD)) were manipulated. In Experiment 2, screen curvature and FOV were varied. Subjects’ task was to perform
visually simulated self-rotations with target angles between 45° and 270°. Stimuli consisted of limited-lifetime dots on a dark background, and subjects used a joystick to control the turning angle of the visual stimulus. In Experiment 1, performance was tested in a within-subject design, using a curved projection screen (FOV 84° × 63°), an HMD (40° × 30°), and blinders (40° × 30°) that restricted the FOV on the screen. Performance was best with the screen
(gain factor 0.77) and worst with the HMD (gain 0.57). We found a significant difference between blinders (gain
0.73) and HMD, which indicates that different display devices can influence ego-motion perception differentially,
even if the physical FOVs are equal. In Experiment 2, screen curvature was found to influence the perception of
ego-rotations: At identical FOVs of 84 degree, participants undershot target angles on the curved screen (gain 0.84),
while they overshot target angles on the flat screen (gain 1.08). Perceptual mechanisms that may underlie these
results will be discussed. We conclude the following: First, differences between display devices (HMD vs. curved
projection screen) are more critical than the FOV for the perception of ego-rotations, with projection screens being
better than HMDs. Second, screen curvature significantly influences performance for visually simulated egorotations:
Compared to the flat screen, the curved screen enhanced the perception of ego-rotations. These findings have relevant implications for the design of motion simulators.http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf2574.pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment BülthoffBiologische KybernetikMax-Planck-GesellschaftMax Planck Institute for Biological Cybernetics, Tübingen, GermanyjspJSchulte-PelkumbernieBERieckemvdhMvon der HeydehhbHHBülthofftechreport2021Qualitative Modeling of Spatial Orientation Processes using
Logical Propositions: Interconnecting Spatial Presence, Spatial Updating, Piloting, and Spatial Cognition200212100In this paper, we introduce first steps towards a logically consistent
framework describing and relating the phenomena of
spatial orientation, namely spatial presence, spatial
updating, piloting, and spatial cognition. Spatial
presence can, for this purpose, be seen as the consistent feeling of being in a specific
spatial context and intuitively knowing where one is with respect
to the immediate surroundings.
The core idea of the framework is to model spatial orientation-related
issues by analyzing their logical and functional relations. This is
done by determining necessary and/or sufficient conditions between
related items like spatial presence, spatial orientation,
and spatial updating. This eventually leads to a set of necessary prerequisites
and sufficient conditions for those items. More specifically, the logical structure of
our framework suggests novel ways of quantifying spatial presence
and spatial updating.
Furthermore, it allows one to distinguish between two complementary types of automatic spatial
updating: on the one hand, the well-known continuous spatial updating induced by continuous movement
information; on the other hand, a novel type of discontinuous, teleport-like ``instantaneous spatial
updating'' that allows participants to quickly adopt the reference frame of a new location without
any explicit motion cues.
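Purely as an illustration (not part of the framework itself), such a network of logical propositions can be modeled as a directed graph in which an edge A → B reads "A is sufficient for B", so that B is a necessary consequence of A; all item names below are hypothetical placeholders:

```python
# Hypothetical sketch of a propositional network (item names assumed,
# not taken from the framework). An edge (a, b) encodes
# "a is sufficient for b"; b is then a necessary consequence of a.

IMPLICATIONS = {
    ("continuous_movement_info", "continuous_spatial_updating"),
    ("consistent_reference_frames", "spatial_presence"),
    ("spatial_presence", "automatic_spatial_updating"),
}

def necessary_consequences(item, edges):
    """Return every item reachable from `item` via implication edges,
    i.e. everything that must hold whenever `item` holds."""
    found, frontier = set(), {item}
    while frontier:
        frontier = {b for (a, b) in edges if a in frontier} - found
        found |= frontier
    return found
```

With the placeholder edges above, `necessary_consequences("consistent_reference_frames", IMPLICATIONS)` contains both `spatial_presence` and `automatic_spatial_updating`, mirroring how chained necessary/sufficient conditions propagate through such a framework.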
ACKNOWLEDGEMENTS: This research was funded by the Max-Planck Society and the Deutsche
Forschungsgemeinschaft (SFB 550 Erkennen, Lokalisieren, Handeln: neurokognitive Mechanismen und ihre
Flexibilität)http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf2021.pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment BülthoffBiologische KybernetikMax-Planck-GesellschaftMax Planck Institute for Biological Cybernetics, Tübingen, GermanybernieBERieckemvdhMvon der Heydetechreport635How to cheat in motion simulation: comparing the engineering and fun ride approach to motion cueing20011289The goal of this working paper is to discuss different motion cueing approaches. They stem either from the engineering field of building flight and driving
simulators, or from the modern Virtual Reality fun rides presented in amusement parks all over the world. The principles of motion simulation are summarized
together with the technical implementations of vestibular stimulation with limited degrees of freedom. A psychophysical experiment in Virtual Reality is
proposed to compare different motion simulation approaches and quantify the results using high-level psychophysical methods as well as traditional evaluation
methods.
ACKNOWLEDGEMENTS: This research was funded by the Max-Planck Society and the Deutsche Forschungsgemeinschaft (SFB 550 Erkennen,
Lokalisieren, Handeln: neurokognitive Mechanismen und ihre Flexibilität).http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf635.pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment BülthoffBiologische KybernetikMax-Planck-GesellschaftMax Planck Institute for Biological Cybernetics, Tübingen, GermanymvdhMvon der HeydebernieBERiecketechreport1203Visual Homing is possible without Landmarks: A Path Integration Study in Virtual Reality2000982The literature often suggests that proprioceptive and especially vestibular cues are required for navigation
and spatial orientation tasks involving rotations of the observer. To test this notion, we conducted a set of
experiments in virtual reality where only visual cues were provided. Subjects had to execute turns, reproduce distances or perform triangle completion tasks: After following two prescribed segments of a triangle, subjects had to return directly to the unmarked starting point. Subjects were seated in the center of a half-cylindrical 180 degree projection screen and controlled the visually simulated ego-motion with mouse buttons. Most experiments were performed in a simulated 3D field of blobs providing a convincing feeling of self-motion (vection) but no landmarks, thus restricting navigation strategies to path integration based on optic flow. Other experimental conditions included salient landmarks or landmarks that were only temporarily available. Optic flow information alone proved to be sufficient for untrained subjects to perform turns and reproduce distances with negligible systematic errors, irrespective of movement velocity. Path integration by optic flow was sufficient for homing by triangle completion, but homing distances were biased towards mean responses. Additional landmarks that were only temporarily available did not improve homing performance. However, navigation by stable, reliable landmarks led to almost perfect homing performance. Mental spatial ability test scores correlated positively with homing performance especially for the more complex triangle completion tasks, suggesting that mental spatial abilities might be a determining factor for navigation performance. Compared to similar experiments using virtual environments (Péruch et al., 1997; Bud, 2000) or blind locomotion (Loomis et al., 1993), we did not find the typically observed distance undershoot and strong regression towards mean turn responses. 
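As an illustrative aside (the function and numbers below are our own, not taken from the experiment), the correct response in such a triangle-completion trial follows from simple vector arithmetic: walk the first leg, turn, walk the second leg, then compute the turn and distance back to the unmarked starting point:

```python
import math

# Illustrative geometry of a triangle-completion trial (values assumed).
# Start at the origin heading along +y; positive turn_deg = rightward turn.
def homing_response(leg1, turn_deg, leg2):
    """Return (distance home, homing turn in degrees; negative = turn right)."""
    heading = math.radians(90.0)          # initial heading: straight ahead
    x, y = 0.0, leg1                      # position after the first leg
    heading -= math.radians(turn_deg)     # execute the prescribed turn
    x += leg2 * math.cos(heading)         # walk the second leg
    y += leg2 * math.sin(heading)
    distance = math.hypot(x, y)           # straight-line distance home
    bearing = math.atan2(-y, -x)          # direction of the origin
    turn_home = math.degrees((bearing - heading + math.pi) % (2 * math.pi) - math.pi)
    return distance, turn_home
```

For example, with equal legs and a 90° rightward turn, the correct homing response is a further 135° right turn followed by a return leg √2 times the leg length.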
Using a virtual reality setup with a half-cylindrical 180 degree projection screen allowed us to demonstrate that visual path integration without any vestibular or kinesthetic cues is sufficient for elementary navigation tasks like rotations, translations, and homing via triangle completion.http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf1203.pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment BülthoffBiologische KybernetikMax-Planck-GesellschaftMax Planck Institute for Biological Cybernetics, Tübingen, GermanybernieBERieckeveenHAHCvan VeenhhbHHBülthoffposter5043Contribution and interaction of auditory and biomechanical cues for self-motion illusions ("circular vection")20084http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/RieckeFeuereissenRieser_08_Poster4CyberwalkWorkshop_4web_5043[0].pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://www.cyberwalk-project.org/Biologische KybernetikMax-Planck-GesellschaftTübingen, GermanyCyberwalk Workshop 2008enbernieBERieckeDFeuereissenJJRieserposter4786Similarity Between Room Layouts Causes Orientation-Specific Sensorimotor Interference In To-Be-Imagined Perspective SwitchesAbstracts of the Psychonomic Society2007111263May (2004) suggested that the difficulty of imagined perspective switches is partially caused by interference between the sensorimotor (actual) and to-be-imagined orientation.
Here, we demonstrate a similar interference, even if participants are in a remote room and don't know their physical orientation with respect to the to-be-imagined orientation.
Participants learned 15 target objects located in an office from one orientation (0°, 120°, or 240°).
Participants were blindfolded and disoriented before being wheeled to an empty test room of similar geometry. Participants were seated facing 0°, 120°, or 240°, and asked to perform judgments of relative direction (e.g., imagine facing pen, point to phone).
Performance was facilitated when participants' to-be-imagined orientation in the learning room was aligned with the corresponding orientation in the test room.
This suggests that merely being in an empty room of similar geometry can be sufficient to automatically re-anchor one's representation and thus produce orientation-specific interference.http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/RieckeMcNamara_07_Poster4Psychonomics_final_4786[0].pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.dehttp://c.ymcdn.com/sites/www.psychonomic.org/resource/resmgr/Annual_Meeting/Past_and_Future_Meetings/2007/Abstracts07_%281%29.pdfBiologische KybernetikMax-Planck-GesellschaftLong Beach, CA, USA48th Annual Meeting of The Psychonomic SocietyenbernieBERieckeTPMcNamaraposterTeramotoR2007Physical self-motion facilitates object recognition, but does not enable view-independencePerception2007836ECVP Abstract Supplement210It is well known that people have difficulties recognizing an object from novel views as compared to learned views, resulting in increased response times and errors. Simons et al. (2002 Perception & Psychophysics 64 521 - 530) reported, however, the elimination of this viewpoint dependence when novel views resulted from viewer movement instead of object movement. They suggest the contribution of extra-retinal information to object recognition. The aim of the present study was to clarify the underlying mechanism of this phenomenon and to investigate larger turning angles (45° - 180°, in 45° steps). Observers performed sequential-matching tasks with 5 original versus mirror-reversed objects (experiment 1) and with 10 different objects (experiment 2). Test views of the objects were manipulated either by viewer or object movement. Both experiments showed a significant overall advantage for viewer movements. Note, however, that performance was still viewpoint-dependent. Object recognition performance was also highly correlated with general mental spatial abilities assessed by a paper-and-pencil test.
These results suggest an involvement of advantageous and cost-effective transformation mechanisms, but not a complete automatic spatial-updating mechanism, when observers move.http://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://pec.sagepub.com/content/36/1_suppl.tocArezzo, Italy30th European Conference on Visual Perception10.1177/03010066070360S101terawWTeramotobernieBERieckeposter4656Long-Term Memory for Environmental Spaces: the Case of Orientation Specificity2007710124This study examined orientation specificity in human long-term memory for environmental
spaces, and was designed to disambiguate between three theories concerning the organisation
of memory: reference direction theory [e.g., 1], view-dependent theory [e.g., 2], and a theory
assuming orientation-independence [e.g., 3]. Participants learned an immersive virtual environment
by walking in one direction. The environment consisted of seven corridors within
which target objects were located. In the testing phase, participants were teleported to different
locations in the environment and were asked to identify their location and heading and then to
point towards previously learned targets. In Experiment 1, eighteen participants could see the
whole corridor and were able to turn their head during the testing phase, whereas in Experiment
2 visibility was limited and the twenty participants were asked not to turn their heads
during pointing. Reference direction theory assumes a global reference direction underlying
the memory of the whole layout and would predict better performance when oriented in the
global reference direction. However, no support was found for the reference direction theory.
Instead, as predicted by view-dependent theories, participants pointed more accurately when
oriented in the direction in which they originally learned each corridor, even when visibility
was limited to one meter for all orientations (all results p<.05). When the whole corridor
was visible, participants also self-localised faster when oriented in the learned direction. In
direct comparison, participants pointed more accurately when facing the learned direction instead
of the global reference direction. With the corridors visible, they also self-localised faster.
No support was found for an exclusive orientation-independent memory as performance was
orientation-dependent with respect to the learned orientation. We propose a ‘network of reference
frames’ theory which extends the view-dependent theory by stating how locations learned
from different views are connected within a spatial network. This theory is able to integrate
elements of the different theoretical positions.http://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://www.twk.tuebingen.mpg.de/twk07/abstract.php?_load_id=meilinger01Biologische KybernetikMax-Planck-GesellschaftTübingen, Germany10th Tübinger Wahrnehmungskonferenz (TWK 2007)enmeilingerTMeilingerbernieBERieckenaimaNLaharnarhhbHHBülthoffposter4887Physical Self-Motion Facilitates Object Recognition, but Does Not Enable View-Independence2007710118It is well known that people have difficulties in recognizing an object from novel views as
compared to learned views, resulting in increased response times and/or errors. This so-called
view-dependency has been confirmed by many studies. In the natural environment, however,
there are two ways of changing views of an object: one is to rotate an object in front of a
stationary observer (object-movement); the other is for the observer to move around a stationary
object (observer-movement). Simons et al. [1] criticized previous studies in this regard
and examined the difference between object- and observer-movement directly. As a result,
Simons et al. reported the elimination of this view-dependency when novel views resulted
from observer-movement instead of object-movement. They suggest the contribution of extra-retinal
(vestibular and proprioceptive) information to object recognition. Recently, however,
Zhao et al. [2] reported that the observer’s movement from one view to another only decreased
view-dependency without fully eliminating it. Furthermore, even this effect vanished for rotations
of 90° instead of 50°. The aim of the present study was to confirm the phenomenon
in our virtual reality environment and to clarify the underlying mechanism further by using
larger angles of view change (45°–180°, in 45° steps). Two experiments were conducted using
an eMagin Z800 3D Visor head-mounted display that was tracked by 16 Vicon MX 13 motion
capture cameras. Observers performed sequential-matching tasks. Five novel objects and
five mirror-reversed versions of these objects were created by smoothing the edges of Shepard-
Metzler’s objects. A mirror-reflected version of the learned object was used as a distractor in
Experiment 1 (N=13), whereas one of the other (i.e., not mirror-reversed) objects was randomly
selected on each trial as a distractor in Experiment 2 (N=15). Test views of the objects were
manipulated either by viewer or object movement. Both experiments showed a significant overall
advantage of viewer movements over object movements. Note, however, that performance
was still viewpoint-dependent. These results suggest an involvement of partially advantageous
and cost-effective transformation mechanisms, but not a complete automatic spatial-updating
mechanism as proposed by Simons et al. [1], when observers move.http://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://www.twk.tuebingen.mpg.de/twk07/abstract.php?_load_id=teramoto01Biologische KybernetikMax-Planck-GesellschaftTübingen, Germany10th Tübinger Wahrnehmungskonferenz (TWK 2007)enterawWTeramotobernieBERieckeposter4629Spatial Orientation in the Immediate Environment: How Can the Different Theories be Reconciled?2007710127Recently, there has been an increasing interest in theories about human spatial memory and
orientation (see, e.g., [1] for a recent review). There is, however, an apparent conflict between
many of those theories that has yet to be resolved. Here, we outline a theoretical framework
that aims at integrating two current theories of spatial orientation: May [2] proposed
that the difficulty of imagined perspective switches is caused, at least in part, by an interference
between the sensorimotor and the to-be-imagined perspectives. Riecke & von der Heyde
[3] developed a theoretical framework that is based on a network of logical propositions (i.e.,
necessary and sufficient conditions). They proposed that automatic spatial updating can only
occur if there is a consistency between the observer’s concurrent egocentric reference frames
(e.g., mediated by real world perception, virtual reality [VR], or imagined perspectives). We
propose that the underlying processes are the same, in the sense that a consistency between
egocentric representations [3] is equivalent to an absence of interference [2]. Whenever the
current egocentric representations of the immediate surroundings are consistent, there should
be no interference. According to [3], this state enables automatic spatial updating. We propose
that this lack of interference might also be able to explain other important phenomena, such as
the relative ease of adopting a new perspective after being disoriented. Conversely, interference
(inconsistency) between the primary, embodied egocentric representation and a to-be-imagined
(e.g., experimentally instructed) egocentric representation implies the difficulty of adopting a
new perspective. We posit that such interference or inconsistency also explains the difficulty
people have in ignoring bodily rotations. To avoid the vagueness that purely verbally defined
theories sometimes suffer from, we offer a well-defined graphical and structural representation
of our framework. Integrating logical and information flow representations in one coherent
framework not only provides a unified representation of previously seemingly isolated findings
and theories, but also fosters a deeper understanding of the underlying processes and enables
clear, testable predictions.http://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://twk.tuebingen.mpg.de/twk07/abstract.php?_load_id=riecke01Biologische KybernetikMax-Planck-GesellschaftTübingen, Germany10th Tübinger Wahrnehmungskonferenz (TWK 2007)enbernieBERieckeTPMcNamaraposterRieckeW2006Point-to-origin experiments in VR revealed novel qualitative errors in visual path integration2006833190Even in state-of-the-art virtual reality (VR) setups, participants often feel lost when navigating through virtual environments. In psychological experiments, such disorientation is often compensated for by extensive training. The current study investigated participants sense of direction by means of a rapid point-to-origin task without any training or performance feedback. This allowed us to study participants intuitive spatial orientation in VR while minimizing the influence of higher cognitive abilities and compensatory strategies. After visually displayed passive excursions along oneor two-segment trajectories, participants were asked to point back to the origin of locomotion "as accurately and quickly as possible". Despite using a high-quality video projection with a 84°×63° field of view, participants overall performance was rather poor. Moreover, six of the 16 participants exhibited striking qualitative errors, i.e., consistent left-right confusions that have not been observed in comparable real world experiments. Taken together, this study suggests that even an immersive high-quality video projection system is not necessarily sufficient for enabling natural spatial orientation in VR. 
We propose that a rapid point-to-origin paradigm can be a useful tool for evaluating and improving the effectiveness of VR setups in terms of enabling natural and unencumbered spatial orientation and performance.http://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://www.siggraph.org/s2006/Boston, MA, USA33rd International Conference and Exhibition on Computer Graphics and Interactive Techniques (SIGGRAPH 2006)10.1145/1179622.1179840bernieBERieckemalteJMWienerposter3790Bone conducted sound for mixed and virtual reality20059http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/Vaeljamae__05_Presence05_BCSoundPoster_[0].pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deBiologische KybernetikMax-Planck-GesellschaftLondon, UK8th Annual Workshop PresenceenbernieAVäljamäePLarssonTGoodSStenfeltATajaduraposter3535Can auditory cues influence the visually induced self-motion illusion?Perception2005834ECVP Abstract Supplement82It is well known that a moving visual stimulus covering a large part of the visual field can induce compelling illusions of self-motion ('vection'). Lackner (1977 Aviation Space and Environmental Medicine 48 129 - 131) showed that sound sources rotating around a blindfolded person can also induce vection. In the current study, we investigated visuo-auditory interactions for circular vection by testing whether adding an acoustic landmark that moves together with the visual stimulus enhances vection. Twenty observers viewed a photorealistic scene of a market place that was projected onto a curved projection screen (FOV 54 deg × 40 deg). In each trial, the visual scene rotated at 30° s-1 around the Earth's vertical axis. Three conditions were randomised in a within-subjects design: no-sound, mono-sound, and spatialised-sound (moving together with the visual scene) played through headphones using a generic head-related transfer function (HRTF). 
We used sounds of flowing water, which matched the visual depiction of a fountain that was visible in the market scene. Participants indicated vection onset by deflecting the joystick in the direction of perceived self-motion. The convincingness of the illusion was rated on an 11-point scale (0 - 100%). Only the spatialised-sound that moved according to the visual stimulus increased vection significantly: convincingness ratings increased from 60.2% for mono-sound to 69.6% for spatialised-sound (t19 = -2.84, p = 0.01), and the latency from vection onset until saturated vection decreased from 12.5 s for mono-sound to 11.1 s for spatialised-sound (t19 = 2.69, p = 0.015). In addition, presence ratings assessed by the IPQ presence questionnaire were slightly but significantly increased. Average vection onset times, however, were not affected by the auditory stimuli. We conclude that spatialised-sound that moves concordantly with a matching visual stimulus can enhance vection. The effect size was, however, rather small (15%). In a control experiment, we will investigate whether this might be explained by a ceiling effect, since visually induced vection was already quite strong. These results have important implications for our understanding of multi-modal cue integration during self-motion.http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/poster_jsp_ECVP2005_4web_[0].pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://pec.sagepub.com/content/34/1_suppl.tocBiologische KybernetikMax-Planck-GesellschaftA Coruña, Spain28th European Conference on Visual Perceptionen10.1177/03010066050340S101jspJSchulte-PelkumbernieBERieckefranckFCaniardhhbHHBülthoffposter3232Auditory cues can facilitate the visually-induced self-motion illusion (circular vection) in Virtual Reality20052874There is a long tradition of investigating the self-motion illusion induced by rotating visual stimuli ("circular vection"). 
Recently, Larsson et al. (2004)[1] showed that up to 50% of participants could also experience vection from rotating sound sources while blindfolded, replicating findings from Lackner (1977)[2]. Compared to the compelling visual illusion, though, auditory vection is rather weak and much less convincing.
Here, we tested whether adding an acoustic landmark to a rotating visual photorealistic stimulus of a natural scene can improve vection. Twenty observers viewed rotating stimuli that were projected onto a curved projection screen (FOV: 54°x40.5°). The visual scene rotated around the earth-vertical axis at 30°/s. Three conditions were randomized in a repeated measures within-subject design: No-sound, mono-sound, and 3D-sound using a generic head-related transfer function (HRTF).
Adding mono-sound showed only minimal tendencies towards increased vection and did not affect presence-ratings at all, as assessed using the Schubert et al. (2001) presence questionnaire [3]. Vection was, however, slightly but significantly improved by adding a rotating 3D-sound source that moved in accordance with the visual scene: Convincingness ratings increased from 60.2% (mono-sound) to 69.6% (3D-sound) (t(19)=-2.84, p=.01), and vection buildup-times decreased from 12.5s (mono-sound) to 11.1s (3D-sound) (t(19)=2.69, p=.015). Furthermore, overall presence ratings were increased slightly but significantly. Note that vection onset times were not significantly affected (9.6s vs. 9.9s, p>.05).
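The paired comparisons reported here (e.g., t(19) = 2.69 for the build-up times) rest on the standard paired t statistic; a generic sketch follows, where the sample data in the usage note are invented for illustration and are not the study's measurements:

```python
import math

# Generic paired-samples t statistic (illustration only).
def paired_t(cond_a, cond_b):
    """t statistic for paired samples, with n - 1 degrees of freedom."""
    diffs = [a - b for a, b in zip(cond_a, cond_b)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)                     # t = mean diff / SE
```

For instance, `paired_t([1.0, 2.0, 3.0, 4.0], [2.0, 2.0, 4.0, 5.0])` (four made-up paired observations) evaluates to -3.0; in the study, each participant's mono-sound and 3D-sound measures would form one such pair.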
We conclude that adding spatialized 3D-sound that moves concordantly with a visual self-motion simulation not only increases overall presence but also improves the self-motion sensation itself. The effect size for the vection measures was, however, rather small (about 15%), which might be explained by a ceiling effect, as visually induced vection was already quite strong without the 3D-sound (9.9s vection onset time). Merely adding non-spatialized (mono) sound did not show any clear effects. These results have important implications for the understanding of multi-modal cue integration in general and self-motion simulations in Virtual Reality in particular.http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf3232.pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://www.twk.tuebingen.mpg.de/twk05/abstract.php?_load_id=riecke01Biologische KybernetikMax-Planck-GesellschaftTübingen, Germany8th Tübingen Perception Conference (TWK 2005)enbernieBERieckejspJSchulte-PelkumfranckFCaniardhhbHHBülthoffposter2538The effect of cognition on the visually-induced illusion of self-motion (vection)Journal of Vision2004848891INTRODUCTION: The illusion of self-motion induced by moving visual stimuli has typically been attributed to bottom-up perceptual processes. Here, we investigated whether a cognitive factor such as spatial presence can contribute to the illusion. Spatial presence was indirectly manipulated by presenting either a photorealistic image of a natural scene or modified versions of the same stimulus. Those were created by either scrambling image parts in a mosaic-like manner or by slicing the original image horizontally and randomly reassembling it. We expected scene modifications to decrease spatial presence and thus impair vection. METHODS: Twelve observers viewed stimuli projected onto a curved projection screen (FOV: 54° × 40.5°).
Dependent measures included vection onset time, vection intensity, and convincingness of the illusion (0–100% ratings). Spatial presence was assessed with presence questionnaires. RESULTS: Scene modification led to both reduced presence scores and impaired vection: Modified stimuli yielded significantly longer vection onset times, lower perceived intensity, and lower convincingness ratings than the intact market scene. No clear difference was found between the sliced and scrambled stimuli or among the number of slices or mosaics (2, 8, or 32). Results suggest that high level information (consistent reference frame for the intact market scene) dominated over the low-level information (more contrast edges in the scrambled stimulus, which are known to facilitate vection). CONCLUSIONS: Results suggest a direct relation between spatial presence and self-motion perception. We posit that stimuli depicting naturalistic scenes provide observers with a convincing reference frame for the simulated environment which enables them to feel “spatially present”. This, in turn, facilitates the self-motion illusion. This work has important implications for both self-motion perception and motion simulator design and applications.http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf2538.pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://www.journalofvision.org/content/4/8/891.abstractBiologische KybernetikMax-Planck-GesellschaftSarasota, FL, USAFourth Annual Meeting of the Vision Sciences Society (VSS 2004)10.1167/4.8.891bernieBERieckejspJSchulte-PelkummariosMNAvraamidesmvdhMvon der HeydehhbHHBülthoffposter2766Vibrational cues enhance believability of ego-motion simulation200465104We investigated whether the visually induced perception of illusory self-motion (vection) can be influenced by vibrational cues. Circular vection was induced in
22 observers who viewed a naturalistic scene displayed on a projection screen (FOV 54°x40.5°). Two factors were varied: The velocity profile of the visual
stimulus (3 or 12 sec to reach 30°/s), and the presence or absence of vibrations. Vibrations were generated by 4 subwoofers mounted below the seat and floor panel. Participants used a joystick to indicate vection onset, and the convincingness of the illusion was rated by magnitude estimation. Data analysis showed that fast accelerations resulted in shorter vection-onset times. Convincingness ratings were affected significantly by the vibrations: With the vibrations, vection was rated to be more convincing. Vection-onset latency, however, was not influenced by vibrations. Interestingly, 3 participants stated that vibrations reduced vection because the vibration amplitudes were not matched to the visual velocity profiles and thus became unrealistic. We conclude that vibrations can influence the convincingness of vection, but that cognition has a moderating effect: If conflicts between visual and vibrational cues are registered, vection seems to be reduced because of the cognitive conflict. These results are relevant for the design of ego-motion simulators.http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf2766.pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://imrf.mcmaster.ca/IMRF/2004/Biologische KybernetikMax-Planck-GesellschaftMax-Planck-Institut für biologische Kybernetik, Tübingen, GermanyBarcelona, Spain5th International Multisensory Research Forum (IMRF 2004)jspJSchulte-PelkumbernieBERieckehhbHHBülthoffposterSchultePelkumRvB2004Kognitiver Einfluss auf die Wahrnehmung von simulierten Eigenbewegungen (Zirkularvektion)Experimentelle Psychologie2004446239http://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://www.allpsych.uni-giessen.de/teap/index.phpGiessen, Germany46. 
Tagung Experimentell Arbeitender Psychologen (TeaP 2004),jspJSchulte-PelkumbernieBERieckemvdhMvon der HeydehhbHHBülthoffposter2504Top-Down Influence on Visually Induced Self-Motion Perception (Vection)200427154INTRODUCTION: The prevailing notion of visually induced illusory self-motion perception
(vection) is that the illusion arises from bottom-up perceptual processes. Therefore, past research has focused primarily on examining how physical parameters of the visual stimulus (contrast, number of vertical edges etc.) affect vection. In this study, we examined the influence of a top-down process: Spatial presence in the simulated scene. Spatial presence was manipulated by presenting either a photorealistic image of the Tübingen market place or modified versions of the same stimulus. Modified stimuli were created by either slicing the original image horizontally and randomly reassembling it or by scrambling image parts in a mosaic-like manner. We expected scene modification to decrease spatial presence and thus impair vection.
METHODS: Ten naive observers viewed stimuli projected onto a curved projection screen subtending a field of view (FOV) of 54°×40.5°. We measured vection onset times and had participants rate the convincingness of the self-motion illusion for each trial using a 0–100% scale. In addition, we assessed spatial presence using standard presence questionnaires.
RESULTS: As expected, scene modification led to both reduced presence scores and impaired vection: Modified stimuli yielded longer vection onset times and lower convincingness ratings than the intact market scene (t(9)=-2.36, p=.043 and t(9)=3.39, p=.008, resp.). It should be pointed out that the scrambled conditions had additional high-contrast edges (compared to the sliced or intact stimulus). Previous research has shown that adding vertical high-contrast edges facilitates vection. Therefore, one would predict that the scrambled stimuli should improve vection. The results show, however, a tendency towards reduced vection for the scrambled vs. sliced or intact stimuli. This suggests that the low-level information (more contrast edges in the scrambled stimulus) was dominated by high-level information (consistent reference frame for the intact market scene). Interestingly, the number of slices or mosaics (2, 8, or 32 per 45° FOV) had no clear influence on either perceived vection or presence; two slices were already enough to impair scene presence.
CONCLUSIONS: These results suggest that there might be a direct relation between spatial presence and self-motion perception. We posit that stimuli depicting naturalistic scenes provide observers with a convincing reference frame for the simulated environment which enables them to feel spatially present in that scene. This, in turn, facilitates the self-motion illusion. This work not only can shed some light on ego-motion perception, but also has important
implications for motion simulator design and application.http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf2504.pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://www.twk.tuebingen.mpg.de/twk04/index.php, H. A. Mallot, R. Ulrich, F. A. WichmannBiologische KybernetikMax-Planck-GesellschaftTübingen, Germany7th Tübingen Perception Conference (TWK 2004)bernieBERieckejspJSchulte-PelkummariosMAvraamidesmvdhMvon der HeydehhbHHBülthoffposter2323Qualitative modeling of spatial orientation processes using a logical network of necessary and sufficient conditions200311118http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf2323.pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://www.opam.net/opam2003/index.htmBiologische KybernetikMax-Planck-GesellschaftVancouver, Canada11th Annual Workshop on Object Perception, Attention, and Memory (OPAM 2003)bernieBERieckemvdhMvon der Heydeposter2024Screen curvature does influence the perception of visually simulated ego-rotationsJournal of Vision20031039411In general, the literature suggests that visual information alone is insufficient to control rotational self-motion accurately. Typically, subjects misperceive simulated self-rotations when no vestibular or proprioceptive feedback is available (see Bakker et al., 1999; 2001 — these studies were done with HMDs). On the other hand, Riecke et al. (2002) found nearly perfect turning performance when a curved, half-cylindrical projection screen with a large FOV of 180° was used. So far, no study has systematically looked at the effect of screen curvature on ego-motion perception.
To investigate whether screen curvature influences turning performance, we had 14 participants perform visually simulated ego-rotations either using a flat projection screen (FOV 86°×64°) or a curved projection screen (radius 2m) with the same FOV in a within-subject repeated-measures design. Subjects saw a “star field” of limited lifetime dots without any landmarks, and they used a joystick to control instructed turn angles between 45° and 270° (steps of 45°). No feedback about accuracy was provided. A repeated-measures ANOVA revealed a significant effect of screen curvature, and also an interaction between curvature and turn angle: While target angles were undershot on the curved screen (gain factor 0.84), a surprising overshoot was observed for the flat screen (gain factor 1.12). Subjects' verbal reports indicate that on the curved screen, the simulated self-rotations looked more realistic than on the flat screen. This may have led them to overestimate turns on the curved screen (thus undershoot turn angles) and to underestimate turns on the flat screen (thus overshoot turn angles). A possible explanation is that rotational lamellar flow on the flat screen was misperceived as translational flow rather than as rotational flow.
Results indicate that screen curvature is a critical parameter to be considered for ego-motion simulation and vection studies.http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf2024.pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://www.journalofvision.org/content/3/9/411.abstractBiologische KybernetikMax-Planck-GesellschaftMax Planck Institut für biologische KybernetikSarasota, FL, USAThird Annual Meeting of the Vision Sciences Society (VSS 2003)10.1167/3.9.411jspJSchulte-PelkumbernieBERieckemvdhMvon der HeydehhbHHBülthoffposter2127Reflex-like spatial updating can be adapted without any sensory conflictPerception2003932ECVP Abstract Supplement99Reflex-like processes are normally recalibrated with a concurrent sensory conflict. Here, we investigated reflex-like (obligatory) spatial updating (online updating of our egocentric spatial reference frame during self-motion, which is largely beyond conscious control). Our object was to adapt vestibularly induced reflex-like spatial updating with the use of a purely cognitive interpretation of the angle turned--that is, without any concurrent sensory conflict, just by presenting an image with a different orientation, after physical turns in complete darkness. The experiments consisted of an identical pre-test and post-test, and an adaptation phase in between. In all three phases, spatial updating was quantified by behavioural measurements of the new post-rotation orientations (rapid pointing to invisible landmarks in a previously learned scene). In the adaptation phase, visual feedback was additionally provided after the turn and pointing task (display of an orientation that differed from the actual turning angle by a factor of 2). The results show that the natural, unadapted gain of perceived versus real turn angle in the pre-test was increased by nearly a factor of 2 in the adaptation phase and remained at this level during the post-test. 
We emphasise that at no point was simultaneous visual and vestibular stimulation provided. We conclude that vestibularly driven reflex-like spatial updating can be adapted without any concurrent sensory conflict, just by a pure cognitive conflict. That is, the cognitive discrepancy between the vestibularly updated reference frame (which served for the pointing) and the subsequently received static visual feedback was able to recalibrate the interpretation of self-motion.[Supported by Max Planck Society and Deutsche Forschungsgemeinschaft (SFB 550).]http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf2127.pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://pec.sagepub.com/content/32/1_suppl.tocBiologische KybernetikMax-Planck-GesellschaftParis, France26th European Conference on Visual Perception10.1068/ecvp03abernieBERieckeKBeykirchmvdhMvon der HeydehhbHHBülthoffposter2025Influence of display parameters on perceiving visually simulated ego-rotations: a systematic investigation200326173In Virtual Reality, subjects typically misperceive visually simulated turning angles. The
literature on this topic reports inconclusive data. This may be partly due to the different display devices and fields of view (FOV) used in these studies. Our study aims to disentangle the specific influence of display devices, FOV, and screen curvature on the perceived turning angle for visually simulated ego-rotations. In Experiment 1, display devices (HMD vs. curved projection screen) and FOV were manipulated. Subjects were seated in front of the screen and saw a star field of limited lifetime dots on a dark background. They were instructed to perform simulated ego-rotations between 45° and 225° using a joystick to control the rotation of the image. In a within-subject design, performance was compared between a projection screen (FOV 86°×64°), an HMD (40°×30°), and blinders that reduced the FOV on the screen to 40°×30°. Generally, all target angles were undershot. We found gain factors of 0.74 for the projection screen, 0.71 for the blinders, and 0.56 for the HMD. The reduction of the FOV on the screen had no significant effect (p=0.407), whereas the difference between the HMD and blinders with identical FOV was significant (p<0.01). In Experiment 2, screen curvature was manipulated. Subjects performed the same task as in Experiment 1, either on a flat projection screen or on a curved screen (radius 2m, FOV 86°×64° for both). Screen curvature had a significant effect (p<0.001): While subjects turned too far on the flat screen (gain 1.12), they did not turn far enough on the curved screen (gain 0.84). Subjects' verbal reports indicate that rotational optic flow on the flat screen was misperceived as translational flow. We conclude the following: First, display devices seem to be more critical than FOV for simulated ego-rotations, the projection screen being superior to the HMD. Second, screen curvature is an important parameter to be considered for
simulation of ego-motion in virtual reality.http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf2025.pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://www.twk.tuebingen.mpg.de/twk03/Biologische KybernetikMax-Planck-GesellschaftMax Planck Institut für biologische KybernetikTübingen, Germany6. Tübinger Wahrnehmungskonferenz (TWK 2003)jspJSchulte-PelkumbernieBERieckemvdhMvon der Heydeposter1960Perceiving and controlling simulated ego-rotations by optic flow: Influence of field of view (FOV) and display devices on ego-motion perception20021121This study investigated humans ability to control simulated ego-rotations from optic flow. The stimuli consisted of limited lifetime dots on a dark background. In a within-subject design, performance was tested using a curved projection screen (FOV 86°×63°), a HMD (40°×30°), and blinders (40°×30°) that restricted the FOV on the screen. Participants typically undershot intended turn angles. Performance was best with the screen (gain factor 0.77) and worst with the HMD (gain 0.57). A significant difference between blinders (gain 0.73) and HMD indicates that different display devices can influence ego-motion perception differentially, even if the physical FOVs are equal.http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf1960.pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://www.opam.net/archive/opam2002/OPAM2002Abstracts.pdfBiologische KybernetikMax-Planck-GesellschaftKansas City, KS, USA10th Annual Workshop on Object Perception and Memory (OPAM 2002)jspJSchulte-PelkumbernieBERieckemvdhMvon der HeydehhbHHBülthoffposter628Spatial updating in virtual environments: What are vestibular cues good for?Journal of Vision20021127421When we turn ourselves, our sensory inputs somehow turn the “world inside our head” accordingly so as to stay in alignment with the outside world. 
This “spatial updating” occurs automatically, without conscious effort, and is normally “obligatory” (i.e., cognitively impenetrable and hard to suppress). We pursued two main questions here: 1) Which cues are sufficient to initiate obligatory spatial updating? 2) Under what circumstances do vestibular cues become important?
STIMULI: A photo-realistic virtual replica of the Tübingen market place was presented via a curved projection screen (84×63° FOV). For vestibular stimulation, subjects were seated on a Stewart motion platform. TASK: Subjects were rotated consecutively to random orientations and asked to point “as accurately and quickly as possible” to 4 out of 22 previously-learned targets. Targets were announced consecutively via headphones and chosen to be outside of the current FOV.
Photo-realistic visual stimuli from a well-known environment including an abundance of salient landmarks allowed accurate spatial updating (mean absolute pointing error, pointing variability, and response time were 16.5°, 17.0°, and 1.19s, respectively). Moreover, those stimuli triggered spatial updating even when participants were asked to ignore turn cues and “point as if not having turned” (32.9°, 27.5°, 1.67s, respectively). Removing vestibular turn cues did not alter performance significantly. This result conflicts with the prevailing opinion that vestibular cues are required for proper updating of ego-turns. We did find that spatial updating benefitted from vestibular cues when visual turn information was degraded to a mere optic flow pattern. Under all optic flow conditions, however, spatial updating was impaired and no longer obligatory. We conclude that “good” visual landmarks can initiate obligatory spatial updating and overcome the visuo-vestibular cue conflict.http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf628.pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://www.journalofvision.org/content/2/7/421Biologische KybernetikMax-Planck-GesellschaftSarasota, FL, USASecond Annual Meeting of the Vision Sciences Society (VSS 2002)10.1167/2.7.421bernieBERieckemvdhMvon der HeydehhbHHBülthoffposter1898Contribution and interaction of visual and vestibular cues
for spatial updating in real and virtual environments20029431158In a series of experiments, we established a speeded pointing paradigm to investigate the influence and interaction of visual and vestibular stimulus parameters for spatial updating in real and virtual environments.
STIMULI: Participants saw either the real surround or a photorealistic virtual replica presented via HMD or projection screen. A Stewart motion platform was used for vestibular stimulation. TASK: After simulated or real ego-turns, participants were asked to quickly point towards different previously-learned target objects. Targets were announced consecutively via headphones and chosen to be outside of the current field of view.
Performance in real and virtual environments was comparable.
Photorealistic visual stimuli from well-known environments
including an abundance of salient landmarks proved sufficient to initiate obligatory spatial updating and hence turn the world inside our head, even against our conscious will and without corresponding vestibular cues. Spatial updating benefitted from vestibular cues only when visual turn information was reduced to mere optic flow. There, however, spatial updating was impaired and no longer obligatory. Apart from the well-known smooth spatial updating induced by continuous movement information, we also found a discontinuous,
jump-like spatial updating that allowed participants to
quickly adopt a new orientation without any explicit motion
cues.http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf1898.pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffvan Meer, E.; et al.Biologische KybernetikMax-Planck-GesellschaftBerlin, Germany43. Kongress der Deutschen Gesellschaft für PsychologiebernieBERieckemvdhMvon der HeydehhbHHBülthoffposter632Spatial updating experiments in Virtual Reality: What makes the world turn around in our head?200225162During ego-turns, our mental spatial representation of the surround is automatically
rotated to stay in alignment with the physical surround. We know that this “spatial updating”
process is effortless, automatic, and typically obligatory (i.e., cognitively impenetrable
and hard-to-suppress). We were interested in two main questions here: 1) Can visual
cues be sufficient to initiate obligatory spatial updating, in contrast to the prevailing opinion
that vestibular cues are required? 2) How do vestibular cues, field of view (FOV),
display method, turn amplitude and velocity influence spatial updating performance?
STIMULI: A photo-realistic virtual replica of the Tübingen market place was presented
via a curved projection screen (84x63° FOV or restricted to 40x30°) or a head-mounted
display (HMD, 40x30°). A Stewart motion platform was used for vestibular stimulation.
TASK: Participants were rotated successively to different orientations and asked to point
“as quickly and accurately as possible” to four targets randomly selected from a set of 22
salient landmarks previously learned. Targets were announced consecutively via headphones
and selected to be outside of the visible range (i.e., between 42° and 105° left or
right from straight ahead). Performance was quantified as absolute pointing error, pointing
variability, and response time.
In general, participants had no problem mentally updating their orientation in space
(UPDATE condition) and spatial updating performance was the same as for rotations
where they were immediately returned to the previous orientation (CONTROL condition).
Spatial updating was always “obligatory” in the sense that it was significantly more
difficult to IGNORE ego-turns (i.e., “point as if not having turned”). We observed this
data pattern irrespective of turning velocity, head mounted display (HMD) or projection
screen usage, and amount of vestibular cues accompanying the visual turn. Increasing the
visual field of view (from 40x30° FOV to 84x63°) increased UPDATE performance
especially for larger turns, i.e., potentially more difficult tasks. IGNORE performance,
however, was unaltered. Large turns (>80°) were almost as easy to UPDATE as small
turns, but much harder to IGNORE (p<0.05). This suggests that larger turns result in a
more obligatory (hard-to-suppress) spatial updating of the world inside our head.
We conclude that photo-realistic visual stimuli from well-known environments including
an abundance of salient landmarks are sufficient to trigger spatial updating and hence
turn the world inside our head, irrespective of vestibular cues. This result conflicts with
the prevailing opinion that vestibular cues are required for proper updating of ego-turns.
Several factors might explain this difference, primarily the immersiveness of our visualization
setup and the abundance of natural landmarks in a well-known environment.http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf632.pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://www.twk.tuebingen.mpg.de/twk02/Bülthoff, H. H.; Gegenfurtner, K. R.; Mallot, H. A.; Ulrich, R.Biologische KybernetikMax-Planck-GesellschaftTübingen, Germany5. Tübinger Wahrnehmungskonferenz (TWK 2002)bernieBERieckemvdhMvon der HeydehhbHHBülthoffposter629How real is virtual reality really? comparing spatial updating using pointing tasks in real and virtual environmentsJournal of Vision20011213321When moving through space, we continuously update our egocentric mental spatial representation of our surroundings. We call this seemingly effortless, automatic, and obligatory (i.e., hard-to-suppress) process “spatial updating”. Our goal here is twofold: 1) To quantify spatial updating; 2) Investigate the importance and interaction of visual and vestibular cues for spatial updating. In a learning phase (20 min) subjects learned the positions of twelve targets attached to the walls, 2.5m away. Subjects saw either the real environment or a photo-realistic copy presented via a head-mounted display (HMD). A motion platform was used for vestibular stimulation. In the test phase subjects were rotated to different orientations and asked to point “as quickly and accurately as possible” to four targets announced consecutively via headphones. In general, subjects had no problem mentally updating their orientation in space and were as good as for rotations where they were immediately returned to the original orientation. Performance, quantified as response time, absolute pointing error and pointing variability, was best in the real world condition. However, when the field of view was limited via cardboard blinders to match that of the HMD (40×30 deg), performance decreased and was comparable to the HMD condition. 
Presenting turning information only visually (through the HMD) hardly altered those results. In both the real world and HMD conditions, spatial updating was obligatory in the sense that it was significantly more difficult to IGNORE ego-turns (i.e., “point as if not having turned”) than to UPDATE them as usual. Speeded pointing tasks proved to be a viable method for quantifying “spatial updating”. We conclude that, at least for the limited turning angles used (<60°), the Virtual Reality simulation of ego-rotation was as effective and convincing (i.e., hard to ignore) as its real world counterpart, even when only visual information was presented.http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf629.pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://www.journalofvision.org/1/3/321/Biologische KybernetikMax-Planck-GesellschaftSarasota, FL, USAFirst Annual Meeting of the Vision Sciences Society (VSS 2001)10.1167/1.3.321bernieBERieckemvdhMvon der HeydehhbHHBülthoffposter630No visual dominance for remembered turns - Psychophysical experiments on the integration of visual and vestibular cues in Virtual RealityJournal of Vision20011213188In most virtual reality (VR) applications turns are misperceived, which leads to disorientation. Here we focus on two cues providing no absolute spatial reference: optic flow and vestibular cues. We asked whether: (a) both visual and vestibular information are stored and can be reproduced later; and (b) if those modalities are integrated into one coherent percept or if the memory is modality specific. We used a VR setup including a motion simulator (Stewart platform) and a head-mounted display for presenting vestibular and visual stimuli, respectively. Subjects followed an invisible randomly generated path including heading changes between 8.5 and 17 degrees. Heading deviations from this path were presented as vestibular roll rotation.
Hence the path was solely defined by vestibular (and proprioceptive) information. The subjects' task was to continuously adjust the roll axis of the platform to level position. They controlled their heading with a joystick and thereby maintained an upright position. After successfully following a vestibularly defined path twice, subjects were asked to reproduce it from memory. During the reproduction phase, the gains between the joystick control and the resulting visual and vestibular turns were independently varied. Subjects learned and memorized curves of the vestibularly defined virtual path and were able to reproduce the amplitudes of the turns. This demonstrates that vestibular signals can be used for spatial orientation in virtual reality. Since the modality with the bigger gain factor had a dominant effect on the reproduced turns, the integration of visual and vestibular information seems to follow a “max rule”, in which the larger signal is responsible for the perceived and memorized heading change.http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf630.pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://www.journalofvision.org/1/3/188/Biologische KybernetikMax-Planck-GesellschaftSarasota, FL, USAFirst Annual Meeting of the Vision Sciences Society (VSS 2001)10.1167/1.3.188mvdhMvon der HeydebernieBERieckedwcDWCunninghamhhbHHBülthoffposter68How do we know where we are? Contribution and interaction of visual and vestibular cues for spatial updating in real and virtual environments20013146In order to know where we are when moving through space, we constantly update our mental egocentric representation of the environment, matching it to our motion. This process, termed "spatial updating", is mostly automatic, effortless, and obligatory (i.e., hard-to-suppress). 
Our goal here is twofold: 1) To quantify spatial updating; 2) To investigate the importance and interaction of visual and vestibular cues for spatial updating.
The stimuli consisted of twelve targets (the numbers from 1 to 12, arranged in a clockface manner) attached to the walls of a 5x5m room. Subjects saw either the real room or a photo-realistic 3D model of it presented via a head-mounted display (HMD). For vestibular stimulation, subjects were seated on a Stewart motion platform. After each rotation, the subjects' task was to point "as quickly and accurately as possible" to four targets announced consecutively via headphones. Spatial updating performance was quantified in terms of response time and pointing error (absolute error and variance) in three different spatial updating conditions: Subjects were (a) rotated to a different orientation (UPDATE condition); (b) rotated as in (a), but asked to ignore that rotation and "point as if not having turned" (IGNORE); (c) rotated to a new orientation and immediately back to the original orientation before being asked to point (CONTROL); Each of the twelve subjects was presented with six stimulus conditions (blocks A-F, 15 min. each) in balanced order, with different amount of visual and vestibular information available.
Performance, especially response times, varied considerably between subjects, but showed the same overall pattern: 1) Performance was best in the real world condition (block A). When the field of view was limited via cardboard blinders (block B) to match that of the HMD (40x30°), performance decreased and was comparable to the HMD condition (block C). Presenting only visual information for the turns (through the HMD, block D) decreased the performance slightly further. 2) In those four blocks where there was visual information available about the rotation, subjects performed equally well in the UPDATE and CONTROL conditions. Performance in the IGNORE condition, however, was significantly impaired, indicating that spatial updating was indeed obligatory in the sense of being hard-to-suppress. 3) When subjects were blindfolded (block E) or saw a constant image of the scene (block F), IGNORE performance increased and was comparable to the UPDATE performance. This suggests that spatial updating was no longer obligatory when visual cues about the motion were removed.
Speeded pointing tasks proved to be a viable method for quantifying "spatial updating". We conclude that, at least for the regular target arrangement and limited turning angles used (<60°), the Virtual Reality simulation of ego-rotation was as effective and convincing (i.e., hard to ignore) as its real world counterpart, even when only visual information was available.http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf68.pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://www.twk.tuebingen.mpg.de/twk01/Psenso.htmH.H. Bülthoff, K.R. Gegenfurtner, H.A. Mallot, R. UlrichBiologische KybernetikMax-Planck-GesellschaftTübingen, Germany4. Tübinger Wahrnehmungskonferenz (TWK 2001)bernieBERieckemvdhMvon der HeydehhbHHBülthoffposter63Visual-vestibular sensor integration follows a max-rule: results from psychophysical experiments in virtual reality20013142Perception of ego turns is crucial for navigation and self-localization. Yet in most virtual reality (VR) applications turns are misperceived, which leads to disorientation. Here we focus on two cues providing no absolute spatial reference: optic flow and vestibular cues. We asked whether: (a) both visual and vestibular information are stored and can be reproduced later; and (b) if those modalities are integrated into one coherent percept or if the memory is modality specific. In the following experiment, subjects learned and memorized turns and were able to reproduce them even with different gain factors for the vestibular and visual feedback.
We used a VR setup including a motion simulator (Stewart platform) and a head-mounted display for presenting vestibular and visual stimuli, respectively. Subjects followed an invisible randomly generated path including heading changes between 8.5 and 17 degrees. Heading deviations from this path were presented as vestibular roll rotation. Hence the path was solely defined by vestibular (and proprioceptive) information. One group of subjects continuously adjusted the roll axis of the platform to level position. They controlled their heading with a joystick and thereby maintained an upright position. The other group was passively guided through the sequence of heading turns without any roll signal. After successfully following a vestibularly defined path twice, subjects were asked to reproduce it from memory. During the reproduction phase, the gains between the joystick control and the resulting visual and vestibular turns were independently varied by a factor of 1/sqrt(2), 1 or sqrt(2).
Subjects from both groups learned and memorized curves of the vestibularly defined virtual path and were able to reproduce the amplitudes of the turns. This demonstrates that vestibular signals can be used for spatial orientation in virtual reality. Since, in both groups, the modality with the larger gain factor had a dominant effect on the reproduced turns, the integration of visual and vestibular information seems to follow a "max rule", in which the larger signal is responsible for the perceived and memorized heading change.http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf63.pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://www.twk.tuebingen.mpg.de/twk01/Psenso.htmH.H. Bülthoff, K.R. Gegenfurtner, H.A. Mallot, R. UlrichBiologische KybernetikMax-Planck-GesellschaftTübingen, Germany4. Tübinger Wahrnehmungskonferenz (TWK 2001)mvdhMvon der HeydebernieBERieckedwcDWCunninghamhhbHHBülthoffposter111Do we really need vestibular and proprioceptive cues for homing?Investigative Ophthalmology & Visual Science20005414S225http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf111.pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment BülthoffBiologische KybernetikMax-Planck-GesellschaftFort Lauderdale, FL, USAAnnual Meeting of the Association for Research in Vision and Ophthalmology (ARVO 2000)hhbHHBülthoffbernieBERieckeveenHAHCvan Veenposter164Humans can extract distance and velocity from vestibular perceived accelerationJournal of Cognitive Neuroscience2000412Supplement77Purpose: The vestibular system is known to measure accelerations for linear forward movements. Can humans integrate these vestibular signals to reliably derive distance and velocity estimates? Methods: Blindfolded naive volunteers participated in a psychophysical experiment using a Stewart-Platform motion simulator.
The vestibular stimuli consisted of Gaussian-shaped translatory velocity profiles with a duration of less than 4 seconds. The full two-factorial design covered 6 peak accelerations above threshold and 5 distances up to 25cm with 4 repetitions. In three separate blocks, the subjects were asked to verbally judge traveled distance, maximum velocity and maximum acceleration on a scale from 1 to 100. Results: Subjects perceived distance, velocity and acceleration quite consistently, but with systematic errors. The distance estimates showed a linear scaling towards the mean and were independent of accelerations. The correlation of perceived and real velocity was linear and showed no systematic influence of distances or accelerations. High accelerations were drastically underestimated and accelerations close to threshold were overestimated, showing a logarithmic dependency. Conclusions: Despite the fact that the vestibular system measures acceleration only, one can derive peak velocity and traveled distance from it. Interestingly, even though maximum acceleration was perceived non-linearly, velocity and distance were judged consistently linearly.http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf164.pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://cognet.mit.edu/library/conferences/paper?paper_id=47341Biologische KybernetikMax-Planck-GesellschaftSan Francisco, CA, USA7th Annual Meeting of the Cognitive Neuroscience SocietymvdhMvon der HeydebernieBERieckedwcDWCunninghamhhbHHBülthoffposter165Humans can separately perceive distance, velocity and acceleration from vestibular stimulation20002148The vestibular system is known to measure linear and angular position
changes in terms of acceleration. Can humans judge these vestibular signals as acceleration
and integrate them to reliably derive distance and velocity estimates?
Twelve blindfolded naive volunteers participated in a psychophysical experiment using a
Stewart-Platform motion simulator. The vestibular stimuli consisted of Gaussian-shaped
translatory or rotatory velocity profiles with a duration of less than 4 seconds. The full
two-factorial design covered 6 peak accelerations above threshold and 5 distances with 4
repetitions. In three separate blocks, the subjects were asked to verbally judge on a scale
from 1 to 100 the distance traveled or the angle turned, maximum velocity and maximum
acceleration.
Subjects judged the distance, velocity and acceleration quite consistently, but with systematic
errors. The distance estimates showed a linear scaling towards the mean response
and were independent of accelerations. The correlation of perceived and real velocity was
linear and showed no systematic influence of distances or accelerations. High accelerations
were drastically underestimated and accelerations close to threshold were overestimated,
showing a logarithmic dependency. Therefore, the judged acceleration was close
to the velocity judgment. There was no significant difference between translational and
angular movements.
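The claim that peak velocity and traveled distance can be derived from acceleration alone amounts to numerical integration. A minimal sketch, with a made-up Gaussian-shaped velocity profile standing in for the stimuli (the parameters are illustrative, not those of the experiment):

```python
import math

def integrate_acceleration(acc, dt):
    """Cumulatively integrate an acceleration profile (trapezoidal rule):
    once to recover velocity, twice to recover traveled distance."""
    vel, dist = [0.0], [0.0]
    for a0, a1 in zip(acc, acc[1:]):
        vel.append(vel[-1] + 0.5 * (a0 + a1) * dt)
    for v0, v1 in zip(vel, vel[1:]):
        dist.append(dist[-1] + 0.5 * (v0 + v1) * dt)
    return max(vel), dist[-1]

# Gaussian-shaped velocity profile (peak 1.0 at t = 2 s, sigma = 0.5 s);
# the acceleration the vestibular system senses is its time derivative.
dt = 0.01
t = [i * dt for i in range(400)]                     # 4 s stimulus
vel_profile = [math.exp(-((x - 2.0) ** 2) / 0.5) for x in t]
acc = [(v1 - v0) / dt for v0, v1 in zip(vel_profile, vel_profile[1:])]
peak_v, total_d = integrate_acceleration(acc, dt)
```

Integrating the sensed acceleration recovers both the peak velocity (about 1.0 here) and the traveled distance (the area under the velocity profile), which is the computation the conclusion attributes to the observers.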
Despite the fact that the vestibular system measures acceleration only, one can derive
peak velocity and traveled distance from it. Interestingly, even though maximum acceleration
was perceived non-linearly, velocity and distance judgments were linear.http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf165.pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://www.twk.tuebingen.mpg.de/twk00/H.H. Bülthoff, M. Fahle, K.R. Gegenfurtner, H.A. MallotBiologische KybernetikMax-Planck-GesellschaftTübingen, Germany3. Tübinger Wahrnehmungskonferenz (TWK 2000)mvdhMvon der HeydebernieBERieckedwcDWCunninghamhhbHHBülthoffposter133Reicht optischer Fluß wirklich nicht zum Heimfinden?20002139The literature suggests that, even for simple orientation and homing tasks, the information provided by optic flow is insufficient and that vestibular and kinesthetic cues are required. To test this claim, we conducted triangle completion experiments in a virtual environment that offered optic flow as the only source of information.
The simulated ego-motion was presented visually on a half-cylindrical 180° projection screen (7m diameter) and controlled via mouse buttons. So that participants could use only path integration and no landmark information for navigation, the simulated world consisted merely of a 3D cloud of dots. It contained no helpful reference points (landmarks) but conveyed a convincing sensation of self-motion (vection). In Exp. 1, participants were asked to execute turns of specified angles and to reproduce distances, with movement velocities randomized. Exps. 2 & 3 were triangle completion experiments: participants followed two legs of a triangle and then had to find their own way back to the unmarked starting point. Exp. 2 used five different isosceles triangles for left and right turns, whereas Exp. 3 used 60 different triangles with randomized leg lengths and angles.
Independent of movement velocity, untrained participants in Exp. 1 were able to execute turns and reproduce distances with only minor systematic errors. In Exps. 2 & 3 we generally found a linear correlation between executed and correct values for both measures, turning angle and distance traveled. For further analysis we therefore used, for both measures, the slopes of the regression lines ("compression rate") and the deviations from the correct value (signed error). Exp. 2 showed no significant errors (i.e., no general over- or underestimation) for turns or distances. Distance responses were strongly compressed towards the mean (compression rate 0.58), whereas angle responses were hardly compressed at all (0.91). For the randomized triangle geometries of Exp. 3, this tendency towards mean responses decreased for distances (0.86) but increased for turns (0.77).
Similar triangle completion experiments restricted to visual information (Virtual Reality: Péruch et al., Perc. '97; Duchon et al., Psychonomics '99) or to proprioceptive cues (blind walking: Loomis et al., JEP '93) showed a strong tendency towards mean turning angles (compression rate < 0.5), which we did not find. Nor did we observe the tendency to undershoot in pure turning tasks in visual virtual environments (Péruch '97; Bakker, Presence '99) (Exp. 1). In our experiments, path integration based on optic flow proved sufficient and reliable for orientation and homing tasks. Vestibular and kinesthetic information was not required for this.http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf133.pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://www.twk.tuebingen.mpg.de/twk00/Biologische KybernetikMax-Planck-GesellschaftTübingen, Germany3. Tübinger Wahrnehmungskonferenz (TWK 2000)bernieBERieckeveenHAHCvan VeenhhbHHBülthoffposter326Visual homing to a virtual homeInvestigative Ophthalmology & Visual Science19995404798Purpose: Results from previous studies (e.g. Loomis et al., JEP, 1993) suggest that proprioceptive
cues play a major role in human homing behaviour. We conducted triangle completion experiments in
virtual environments to measure homing performance based solely on visual cues.
Methods: Subjects were seated in the centre of a large half-cylindrical 180° projection screen and
steered smoothly through the simulated scene using mouse buttons. Experiments were conducted in two
environments: an extended volume filled with random blobs (inducing strong vection), and a
photorealistic town containing distinct landmarks. On each trial, subjects had to return to their
starting point after moving outwards along two prescribed segments (40m long, subtending a 30°-150°
horizontal angle) of an imaginary triangle. To exclude scene-matching as a homing strategy, the
simulated environment was modified to a different but similar one just before the subject started
the return movement.
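The correct homing response in such a triangle completion trial follows from elementary path integration. A hedged sketch of the geometry only (the function name is hypothetical, and turns are assumed to be measured as heading changes, positive to one side):

```python
import math

def homing_response(leg1, leg2, turn_deg):
    """Correct homeward turn (deg) and distance for a triangle completion
    trial: walk leg1, turn by turn_deg, walk leg2, then head home."""
    heading = math.radians(turn_deg)            # heading after the turn
    x = leg2 * math.sin(heading)                # end position; leg1 is
    y = leg1 + leg2 * math.cos(heading)         # walked along +y
    home_dist = math.hypot(x, y)                # straight-line way home
    bearing_home = math.atan2(-x, -y)           # bearing of origin from end
    home_turn = math.degrees(bearing_home) - turn_deg
    return ((home_turn + 180.0) % 360.0) - 180.0, home_dist
```

For two 40m legs with a 120° heading change (an equilateral outbound triangle), the correct response is a further 120° turn and a 40m return leg, which is the kind of ground truth the reported turning and distance errors are measured against.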
Results: We found strong systematic errors in distance travelled but only small deviations in
turning angles. After some practice the variability (standard deviation) of the responses typically
dropped to roughly 10m for distance and 10 degrees for turns (lower variability for town than for
blobs-scene). Omitting the scene modification before the return movement resulted in nearly perfect
performance, stressing the dominant role of piloting under natural conditions. Exchanging the mouse
interface for a more realistic bicycle interface, thus introducing proprioceptive cues for sideways tilt and pedal resistance, reduced the systematic error in rotation but also increased the overall
variability.
Conclusion: Path integration using optical information alone is sufficient for accurate homing.http://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment BülthoffBiologische KybernetikMax-Planck-GesellschaftFort Lauderdale, FL, USAAnnual Meeting of the Association for Research in Vision and Ophthalmology (ARVO 1999)veenHAHCvan VeenbernieBERieckehhbHHBülthoffposter309Is homing by optic flow possible?Journal of Cognitive Neuroscience1999411Supplement76We conducted triangle completion experiments in virtual environments to investigate the role of optic flow in human homing performance. Ego-motion was visually simulated on an half-cylindrical 180-degree-projection screen of 7m diameter using the mouse buttons as input device. Subjects had to return to the origin after moving outwards along two prescribed segments of the triangle. To exclude scene-matching as a homing strategy, subjects were "teleported" to a different, however similar environment for the return path ("scene-swap condition"). Experiments were performed in two simulated environments: A cloud-of-dot-like environment and a photorealistic town. Only the latter contained landmarks and explicit scaling information. We found systematic errors in distances traveled, but not in turns performed. Homing based on optic flow alone in the cloud-of-dot environment was possible and led to similar performance as navigation in the town. Omitting scene-swap in a control experiment led to almost perfect homing performance in the town, suggesting that scene-matching (whenever possible) plays the dominant role in homing accuracy. A comparison with results from Loomis et al. (JEP, 1993), who studied triangle completion based on path integration using proprioceptive cues showed that optic flow information in our experiments led to considerably smaller systematic errors. 
Using scene-swap and virtual environments proved to be a successful paradigm to disentangle the role of two major information sources in spatial orientation: optic flow (path integration) versus landmarks (piloting).http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf309.pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment BülthoffBiologische KybernetikMax-Planck-GesellschaftHanover, NH, USA6th Annual Meeting of the Cognitive Neuroscience SocietybernieBERieckeveenHAHCvan VeenhhbHHBülthoffposter308Heimfinden in virtuellen Umgebungen1999284Results from previous studies suggest that proprioceptive cues play an essential role in human homing behaviour (e.g., Loomis et al., JEP, 1993). We investigated the influence of visual information, and of optic flow in particular, on homing performance using triangle completion experiments in virtual environments.
Participants had to return to their starting point after moving away from it along two prescribed legs of a triangle. The experimental environment was presented on a half-cylindrical 180° projection screen, and the simulated ego-motions were controlled via mouse buttons. The experiments were conducted in two different scenarios: a cloud of dots, which induces a high degree of vection (sensation of self-motion), and a photorealistic small town with numerous salient landmarks. To exclude landmark navigation, all landmarks were exchanged for the return path ("scene-swap" condition).
We found strong systematic errors in the distance traveled, but not in the turning angles. In a control experiment, omitting the scene swap resulted in almost perfect homing performance. This suggests that image matching (whenever possible) has a dominant influence on homing accuracy.
Optic flow in the cloud of dots proved sufficient to solve the homing task and led to homing performance similar to that in the town environment.
Using scene swaps in virtual environments made it possible to separate the influence of two essential components of visual spatial orientation: optic flow (path integration) versus landmarks (piloting).http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf308.pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment Bülthoffhttp://www.twk.tuebingen.mpg.de/twk99/Biologische KybernetikMax-Planck-GesellschaftTübingen, Germany2. Tübinger Wahrnehmungskonferenz (TWK 99)bernieBERieckeveenHAHCvan Veenthesis3788Self-motion and Presence in the Perceptual Optimization of a
Multisensory Virtual Reality Environment200512http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/Aleksander_Valjamae_05_LicentiateThesis_[0].pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.dehttp://www.s2.chalmers.se/~em1swan/publications.htmBiologische KybernetikMax-Planck-GesellschaftChalmers University of Technology, Department of Signals and System Communication Systems Group, SE-412 96 Gothenburg, SWEDENPhDenbernieAVäjamäethesisRiecke2003How far can we get with just visual information? Path integration and spatial updating studies in Virtual Reality2003714http://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.defileadmin/user_upload/files/publications/Riecke-Thesis.pdfhttp://www.kyb.tuebingen.mpg.deDepartment BülthoffEberhard-Karls-Universität TübingenPhDbernieBERieckethesis466Untersuchung des menschlichen Navigationsverhaltens anhand von Heimfindeexperimenten in virtuellen Umgebungen19981031http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf466.pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment BülthoffBiologische KybernetikMax-Planck-GesellschaftEberhard-Karls-Universität TübingenDiplombernieBERieckeconferenceRiecke2008Auditory and multi-modal contributions to self-motion perception: Why we might want to listen200824http://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment BülthoffInvited Lecturehttp://www.cirmmt.org/activities/workshops/research/multimodal-influences?set_language=fr&-C=Montreal, CanadaWorkshop on Multimodal Influences on Perceived Self MotionbernieBRieckeconferenceMeilingerRB2007Orientation Specificity in Long-Term Memory for
Environmental Spaces200783158This study examined orientation specificity in human long-term memory for environmental spaces. Thirty-eight participants learned an immersive virtual environment by walking in one direction. The environment consisted of seven corridors within which target objects were located. In the testing phase, participants were teleported to different locations in the environment and were asked to identify their location and heading and then to point towards previously learned targets. As predicted by view-dependent theories, participants pointed more
accurately when oriented in the direction in which they
originally learned each corridor, even when visibility was
limited to one meter. When the whole corridor was visible,
participants also self-localised better when oriented in the
learned orientation. No support was found for a global reference direction underlying the memory of the whole layout or for an exclusive orientation-independent memory. We propose a "network of reference frames" theory to integrate elements of the different theoretical positions.http://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment BülthoffAbstract Talkhttp://sites.univ-provence.fr/wlpc/escop07_2/proceedings_ESCOP2007.pdfMarseille, France15th Meeting of the European Society for Cognitive Psychology (ESCOP 2007)meilingerTMeilingerbernieBERieckehhbHHBülthoffconferenceRiecke2006Self-motion perception and spatial orientation in virtual environments200684Despite recent technological advances, convincing self-motion simulation in Virtual Reality (VR) is difficult to achieve, and users often suffer from motion sickness and/or disorientation in the simulated world. Instead of trying to simulate self-motions with physical realism (as is often done for, e.g., driving or flight simulators), we propose in this paper a perceptually oriented approach towards self-motion simulation. Following this paradigm, we performed a series of psychophysical experiments to determine essential visual, auditory, and vestibular/tactile parameters for an effective and perceptually convincing self-motion simulation. These studies are a first step towards our overall goal of achieving lean and elegant self-motion simulation in Virtual Reality (VR) without physically moving the observer.
In a series of psychophysical experiments about the self-motion illusion (circular/linear vection), we found that (i) vection as well as presence in the simulated environment is increased by a consistent, naturalistic visual scene when compared to a sliced, inconsistent version of the identical scene, (ii) barely noticeable marks on the projection screen can increase vection as well as presence in an unobtrusive manner, (iii) physical vibrations of the observer's seat as well as inaudible subsonic cues can enhance the vection illusion, (iv) for the first time, it was shown that HRTF-based spatial audio cues can be used to reliably induce vection in up to 80% of blindfolded observers, (v) spatialized 3D audio cues embedded in the simulated environment increase the sensation of self-motion and presence, (vi) small physical motions (jerks of just a few cm or degrees) that accompany the onset of the visually simulated motion enhance vection, (vii) even the mere knowledge that one might potentially be moved physically increased the convincingness of the self-motion illusion significantly, especially when additional vibrations supported the interpretation that one was really moving. We conclude that providing consistent cues about self-motion to multiple sensory modalities can enhance vection, even if physical motion cues are absent. We propose that the spatial reference frame evoked by a naturalistic and cross-modally consistent virtual environment increases the believability of the stimulus, such that it is more easily accepted as a stable reference frame with respect to which visual or auditory motion is more likely to be judged as self-motion than object-motion. Compared to more traditional approaches of enhancing self-motion perception (e.g., motion platforms, free walking areas, or treadmills), the current, perceptually-oriented approach has only minimal requirements in terms of overall costs, required space, safety features, and technical effort and expertise.
Thus, our approach might be promising for a wide range of low-cost applications.http://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment BülthoffInvited Lecturehttp://www.cvr.yorku.ca/content/self-motion-perception-and-spatial-orientation-virtual-environmentsToronto, ON, CanadaYork University: Centre for Vision ResearchbernieBRieckeconference3901Visually induced linear vection is enhanced by small physical accelerations200661796Wong & Frost (1981) showed that the onset latency of visually induced self-rotation illusions (circular vection) can be reduced by concomitant small physical motions (jerks).
Here, we tested (a) whether such facilitation also applies to translations, and (b) whether the strength of the jerk (i.e., the degree of visuo-vestibular cue conflict) matters.
14 naïve observers rated onset, intensity, and convincingness of forward linear vection induced by photorealistic visual stimuli of a street of houses presented on a projection screen (FOV: 75°×58°). For 2/3 of the trials, brief physical forward accelerations (jerks applied using a Stewart motion platform) accompanied the visual motion onset.
Adding jerks enhanced vection significantly: onset latency was reduced by 50%, and convincingness and intensity ratings increased by more than 60%.
Effect size was independent of visual acceleration (1.2 and 12m/s^2) and jerk size (about 0.8 and 1.6m/s^2 at the participants' head for 1 and 3cm displacement, respectively), and showed no interactions.
Thus, quantitative matching between the visual and physical acceleration profiles might not be as critical as often believed as long as they match qualitatively and are temporally synchronized.
These findings could be employed for improving the convincingness and effectiveness of low-cost simulators without the need for expensive, large motion platforms.http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/Riecke_poster4IMRF_2006_Visually%20Induced%20Linear%20Vection%20is%20Enhanced%20by%20Small%20Physical%20Accelerations_4web_3901[0].pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment BülthoffAbstract Talkhttp://imrf.mcmaster.ca/2006/viewabstract.php%3Fid=96&symposium=0.htmlBiologische KybernetikMax-Planck-GesellschaftDublin, Ireland7th International Multisensory Research Forum (IMRF 2006)enbernieBERieckefranckFCaniardjspJSchulte-Pelkumconference3935Wahrnehmung von Eigenbewegung in Virtual Reality: kognitive und multi-sensorische Aspekte200634872The self-motion illusion (vection) has classically been studied using abstract visual stimuli (e.g., striped patterns). Using Virtual Reality, we investigated cognitive and multisensory effects on self-motion perception, aspects that had previously received little attention. In a series of vection experiments we found the following results: a photorealistic scene of a room enhances vection compared to abstract visual stimuli that do not allow a spatial interpretation. In four multisensory vection experiments (auditory-somatosensory, visual-somatosensory, visual-auditory, visual-vestibular) we found in each case an enhancement of vection through multisensory stimulation. A moderating cognitive effect appears to be at work here: for example, sounds from stationary sound sources (fountain) produced more vection than sounds that move through the environment (footsteps). In general, the multisensory enhancement occurred only in those cases in which there was an ecologically valid correspondence between the stimuli. Thus, in multisensory self-motion perception, a cognitive evaluation appears to influence the integration of sensory information, something that previous explanatory models have not taken into account.http://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment BülthoffAbstract Talkhttps://www.teap.de/memory/Abstractband_48_2006_mainz.pdfBiologische KybernetikMax-Planck-GesellschaftMainz, Germany48. Tagung Experimentell Arbeitender Psychologen (TeaP 2006)dejspJSchulte-PelkumbernieBERieckePLarssonAVäljamäeDVästfjällhhbHHBülthoffconference2505Circular vection is facilitated by a consistent photorealistic scene200310637It is well known that large visual stimuli that move in a uniform manner can induce illusory sensations of self-motion in stationary observers. This perceptual phenomenon is commonly referred to as vection. The prevailing notion of vection is that the illusion arises from bottom-up perceptual processes and that it mainly depends on physical parameters of the visual stimulus (e.g., contrast, spatial frequency etc.). In our study, we investigated whether vection can also be influenced by top-down processes: We tested whether a photorealistic image of a real scene that contains consistent spatial information about pictorial depth and scene layout (e.g., linear perspective, relative size, texture gradients etc.) can induce vection more easily than a comparable stimulus with the same image statistics where information about relative depth and scene layout has been removed. This was done by randomly shuffling image parts in a mosaic-like manner. The underlying idea is that the consistent photorealistic scene might facilitate vection by providing the observers with a convincing mental reference frame for the simulated environment so that they can feel "spatially present" in that scene.
That is, the better observers accept this virtual scene instead of their physical surrounding (i.e., the simulation setup) as the primary reference frame, the less conflict between the two competing reference frames should arise, and therefore spatial presence and ego-motion perception in the virtual scene should be enhanced. In a psychophysical experiment with 18 observers, we measured vection onset times and convincingness ratings of sensed ego-rotations for both visual stimuli. Our results confirm the hypothesis that cognitive top-down processes can influence vection: On average, we found 50% shorter vection onset times and 30% higher convincingness ratings of vection for the consistent scene. This finding suggests that spatial presence and ego-motion perception are closely related to one another. The results are relevant both for the theory of ego-motion perception and for ego-motion simulation applications in Virtual Reality.http://www.kyb.tuebingen.mpg.defileadmin/user_upload/files/publications/Presence-2003-SchultePelkum.pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment BülthoffAbstract Talkhttps://ispr.info/presence-conferences/previous-conferences/presence-2003/Biologische KybernetikMax-Planck-GesellschaftAalborg, Denmark6th Annual International Workshop on Presence (PRESENCE 2003)jspJSchulte-PelkumbernieBERieckemvdhMvon der HeydehhbHHBülthoffconference1952Teleporting works: Spatial updating experiments in Virtual Tübingen20021121http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf1952.pdfhttp://www.kyb.tuebingen.mpg.dehttp://www.kyb.tuebingen.mpg.deDepartment BülthoffAbstract Talkhttp://www.opam.net/archive/opam2002/OPAM2002Abstracts.pdfBiologische KybernetikMax-Planck-GesellschaftKansas City, KS, USA10th Annual Workshop on Object Perception and Memory (OPAM 2002)bernieBERieckemvdhMvon der HeydehhbHHBülthoff