NYU-X Holodeck Research Areas

NSF MRI #1626098

Visual

The recent, rapid, and disruptive emergence of promising consumer-grade VR gaming platforms is a prime example of the need to accommodate rapidly evolving technologies. Technical parameters optimized by the diverse visual equipment throughout the Holodeck and its research include:

The goal is to create seamlessly realistic and truly immersive virtual experiences.

Audio

The audio capabilities of the instrument will use loudspeakers and HRTF-processed headphones capable of reproducing high-quality spatial audio. High-quality headphones (e.g., Sennheiser HD650) will be equipped with Head-Related Transfer Function (HRTF) processing implemented in the Max/MSP software. HRTFs will be individually measured using ScanIR, the impulse response measurement software developed at NYU, or selected by the user.
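At its core, the HRTF processing described above amounts to convolving a source signal with a measured pair of left/right head-related impulse responses (HRIRs). A minimal sketch in Python, assuming synthetic placeholder HRIRs rather than actual ScanIR measurements (the real pipeline runs in Max/MSP, as noted above):

```python
import numpy as np
from scipy.signal import fftconvolve

def binaural_render(mono, hrir_left, hrir_right):
    """Convolve a mono signal with left/right HRIRs to get a stereo binaural signal."""
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)

# Illustrative data only: a 100 ms noise burst and synthetic HRIRs in which
# the right ear hears the source slightly later and quieter (source to the left).
fs = 44100
rng = np.random.default_rng(0)
mono = rng.standard_normal(fs // 10)
hrir_left = np.zeros(128)
hrir_left[0] = 1.0        # direct path to the near (left) ear
hrir_right = np.zeros(128)
hrir_right[30] = 0.6      # ~0.7 ms interaural delay, attenuated

stereo = binaural_render(mono, hrir_left, hrir_right)
```

The interaural time and level differences encoded in the two HRIRs are what the listener's auditory system interprets as direction; measured HRIRs (e.g., from ScanIR) additionally capture spectral cues from the pinna and torso.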

The instrument will be the first integrated auditory system of its kind: an assembly of the most advanced audio reproduction systems, tightly coupled with multimodal capacities across all equipment categories.

The goal is to enhance experience and to provide and integrate 3D rapid prototyping and modeling with stereoscopic, interactive, physical, and immersive 3D visualizations, both on site and in distributed and virtual environments.

Human Dynamics

A high-quality, camera-array-based motion capture system to acquire, analyze, classify, and stream spatio-temporal data from a wide variety of kinematic sources

Large-scale Tactonic Technologies (pressure sensing & imaging floor)

Brain Computer Interfaces

Eye-tracking (wearable/mobile, at a distance, and multi-person)

Affective and sociometric sensors and algorithms

The rich data sets that result are used to develop animations and other displayable forms; to extrapolate data from nonverbal expressions; and for real-time control within the context of performance and HCI/UX research around body-centric manipulation of rich media. These distributed facilities can support physical isolation of human subjects, performers, and social actors, and can control and analyze both social and nonverbal cues. Seamless and near-instantaneous transmission of data is supported between these facilities.

Collaborative Research Areas & Applications

Research foci within the NYU Experiential SuperComputing Collaboration include enhancing and investigating social collaboration, and co-located and distributed human-human, human-agent, and/or human-robot interaction. Human-agent and human-robot interaction research involves a repertoire of social and embodied agents and personal and service robotic capabilities. Physical and character interaction with natural spoken dialogue is supported to create and study perceptive and expressive robotic characters that closely model dynamic face-to-face communication. Customizable, ethnically-, age-, gender-, and culturally-diverse embodied virtual agents and social robots are created and controlled by a mature suite of technologies that support natural spoken dialogue and mirror rich emotive facial gestures and social characteristics, synchronized with the agents' and robots' visual, physical, and prosodic speech and gesture production. These equipment capacities integrate to realize rich multimodal environments, real-time data streams, and analysis from sophisticated analytical models of individual and team behaviors, interactions, and creativity.

Scientific Simulation (Modeling, Visualization, and Verisimilitude):

The NYU Experiential SuperComputing Collaboration team integrates novel technologies and immersive environments to advance high-fidelity simulation, applicable to engineering, urban planning, bioengineering, creative expression and education. Capabilities include: assessing individual and team dynamics; skill development using gaming and simulation; and leveraging the expertise of co-located and distributed human and hybrid human agent/robotic teams through telepresence, tele-robotics, and human robot interaction.

The team nurtures collaborative transdisciplinary research including: mixed-reality environments and experiences; scientific and artistic visualizations; haptic input and feedback; and head-up augmented and virtual reality. Additional research capacity supports novel simulation robots with autonomy and capacity for social and emotional expression. A unique aspect involves the capacity for collaborative design of physical artifacts and related prototyping of advanced physical simulations; and the iterative assessment and improvement of compelling simulation scenarios involving data, outcomes and performance measures salient to multiple stakeholders (e.g., in urban planning: planners, designers, and residents; in education: students, parents, teachers, and local, regional and national administrators).

Studies of Teamwork, Design, and Outcomes:

The instrument supports individual and small-group immersive simulation research and assessment of the relative benefits of individual and team training in virtual, hybrid, distributed, and physically co-located simulation environments. Wearable computing and sensing devices (Brain-Computer Interfaces (BCI), heads-up displays, eye-tracking, skin conductance sensors, and sociometric badges) are used to assess and interact with the attitudes, behaviors, and emotional intelligence of individuals and teams, and with distributed collective intelligence in Flow (optimal experience) and STUCK! (non-optimal experience) states. The collaboration also integrates physical fabrication communities, fostering design-thinking strategies and placing sophisticated design tools in the hands of distributed communities and end users.

Scientific Modeling:

The team and instrument will also improve understanding of our recent experiments and simulations on cooperative fluid dynamical effects in flocks of flyers. The NYU Experiential SuperComputing Collaboration supports rapid prototyping of diverse modeling levels, using symbolic modeling tools that lie above basic computational modules.

A further goal is theoretical understanding of the emergence of cooperatively created structures in biology and physics.
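As a purely illustrative sketch of how cooperative structure can emerge from local rules, consider a standard Boids-style flocking update (assumed here for exposition; it is not the team's fluid-dynamical model of flocking flyers):

```python
import numpy as np

def boids_step(pos, vel, dt=0.1, radius=1.0,
               w_align=0.05, w_cohere=0.01, w_separate=0.05):
    """One update of a minimal Boids-style model: each agent steers toward
    its neighbors' mean velocity (alignment), toward their mean position
    (cohesion), and away from very close neighbors (separation)."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        dist = np.linalg.norm(pos - pos[i], axis=1)
        nbr = (dist < radius) & (dist > 0)
        if nbr.any():
            new_vel[i] += w_align * (vel[nbr].mean(axis=0) - vel[i])   # alignment
            new_vel[i] += w_cohere * (pos[nbr].mean(axis=0) - pos[i])  # cohesion
        close = (dist < 0.3 * radius) & (dist > 0)
        if close.any():
            new_vel[i] += w_separate * (pos[i] - pos[close].mean(axis=0))  # separation
    return pos + dt * new_vel, new_vel

# 30 agents with random initial positions and velocities
rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 2.0, size=(30, 2))
vel = rng.uniform(-1.0, 1.0, size=(30, 2))
for _ in range(200):
    pos, vel = boids_step(pos, vel)
```

No agent has any global information, yet coherent group motion emerges from the three local interaction terms; the research described above studies analogous emergence in physically grounded (fluid-dynamical and biological) settings.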

Physical Acoustics and Collaboration:

The NYU Experiential SuperComputing Collaboration enables research on spatial and 3D sound to create accurate simulations of acoustic spaces and reproductions of sounds as they would be heard in a natural environment, in ways that enhance learning and collaboration.
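One basic building block of such acoustic-space simulation is convolution with a room impulse response (RIR). A toy Python sketch, in which the RIR is a handful of invented delayed, attenuated reflections rather than a measurement or physical model of any real space:

```python
import numpy as np
from scipy.signal import fftconvolve

def simple_rir(fs, delays_s, gains, length_s=0.5):
    """Build a toy room impulse response as a sparse set of delayed,
    attenuated reflections (a drastic simplification of real room acoustics)."""
    rir = np.zeros(int(fs * length_s))
    for t, g in zip(delays_s, gains):
        rir[int(t * fs)] += g
    return rir

fs = 16000
# Direct sound plus three early reflections (hypothetical delays and gains).
rir = simple_rir(fs, delays_s=[0.0, 0.012, 0.023, 0.041],
                 gains=[1.0, 0.5, 0.35, 0.2])
dry = np.zeros(fs // 4)
dry[0] = 1.0                  # a unit click as the dry source
wet = fftconvolve(dry, rir)   # the click as "heard" in the toy room
```

Realistic simulations replace the hand-placed taps with measured or physically modeled RIRs (e.g., image-source or ray-tracing methods) and pair them with the HRTF processing described earlier for full spatial reproduction.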

Projects include:

Evaluation of immersive sound reproduction technologies

Environments in which several sound reproduction technologies can be combined to examine application synergies and tradeoffs