Using case studies from Rick and Morty: Virtual Rick-ality and Vacation Simulator, Wade will teach you about Owlchemy's design philosophy and give you a greater understanding of VR interaction affordances and the technical scope of emergent gameplay systems, with an eye towards inspiring you to embrace chaos in your own games!

And in "Engaging VR Storytelling: A Moss Postmortem" Polyarc artist Corinne Scrivens will walk through the process and methodology the studio used while working on its cute mouse puzzler Moss to figure out how to tell compelling stories in VR, in ways that play to the strengths of the medium. She'll go over the lessons Polyarc learned creating within the medium's limitations, and how such constraints actually led to more creativity.

Come to this talk if you want to learn how to deliver narrative in a way that takes best advantage of the VR medium, as well as proven strategies on how to best do this within the realities of production.

You also won't want to miss "Shaping Culture in Social VR: Lessons from the Wild", especially if you're interested in (or invested in) a VR game with a social component. Presented by UCSC Social & Emotional Technology Lab researcher Joshua McVeigh-Schultz, this talk gathers insights from industry experts to propose a design framework for supporting the kinds of positive social interactions you want to have in this new medium.

Drawing from games like Rec Room and interviews with industry experts, this talk identifies specific design lessons related to onboarding, aesthetics of place, context cues, embodied affordances, media bridging, social mechanics, and moderation.

There's also a fascinating session on "Enhanced Immersivity: Using Speech Recognition for More Natural Player AI Interactions" that you should make time to see if you're at all curious about how complementary tech can help your games feel more real and alive. In this talk, Square Enix AI engineer Gautier Boeda will show you how to build a speech recognition pipeline to promote further interactivity between the user and the agents.

You'll learn to make a voice recognition system that supports multiple languages while allowing a wide variety of phrasings. You'll also learn some methods to improve spatial-representation-based interactions. While the original application is for VR, the approach can be applied to any game that needs more immersive and interactive agents!
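One building block of a pipeline like the one described is mapping the many ways a player might phrase a command onto a single intent. The sketch below is purely illustrative and is not from the session itself: the intent names, phrase lists, and `match_intent` helper are all hypothetical, showing only the general shape of multi-language phrase matching.

```python
# Hypothetical sketch: mapping varied player phrasings to intents.
# Intent names and phrases are illustrative, not from the talk.

# Each intent lists the phrasings a player might use, per language code.
INTENTS = {
    "fetch": {
        "en": ["bring me", "fetch", "go get", "pick up"],
        "ja": ["持ってきて", "取ってきて"],
    },
    "follow": {
        "en": ["follow me", "come with me", "stay close"],
        "ja": ["ついてきて"],
    },
}

def match_intent(transcript: str, language: str = "en"):
    """Return the first intent whose phrasing appears in the transcript."""
    text = transcript.lower()
    for intent, phrases in INTENTS.items():
        for phrase in phrases.get(language, []):
            if phrase in text:
                return intent
    return None
```

In a real system the transcript would come from a speech recognizer, and the matching would likely be fuzzier than substring search, but the intent-table structure is what lets one agent behavior serve a large variety of phrasings across languages.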

It's a promising session that will give you specific examples of spatial audio technology and how it enhances gameplay and immersion, including: HRTF-based binaural rendering for locating enemies; occlusion and transmission for modeling how sound is affected by vents or other openings; and real-time physics-based reverb for modeling smooth variations and subtle details in environmental reverb.
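To get a feel for one of the cues HRTF-based rendering models, consider interaural time difference (ITD): the tiny delay between a sound reaching your near ear and your far ear, which the brain uses to localize sources. The sketch below uses Woodworth's classic spherical-head approximation; it is a simplified illustration, not the renderer discussed in the talk.

```python
import math

# Illustrative sketch: interaural time difference (ITD) via Woodworth's
# spherical-head approximation, one cue an HRTF-based renderer models.
#   ITD ~= (r / c) * (sin(theta) + theta),  azimuth theta in [0, pi/2]

HEAD_RADIUS_M = 0.0875   # average adult head radius, meters
SPEED_OF_SOUND = 343.0   # meters per second at room temperature

def itd_seconds(azimuth_rad: float) -> float:
    """ITD for a distant source at the given azimuth (0 = straight ahead)."""
    theta = abs(azimuth_rad)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (math.sin(theta) + theta)
```

A source directly ahead produces zero ITD, while one directly to the side yields roughly 0.65 milliseconds; a full HRTF also models frequency-dependent level differences and the filtering of the outer ear, which is why binaural rendering makes enemies so much easier to locate.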

The speakers will also discuss how the needs of Budget Cuts informed improvements in how Steam Audio integrates with Unity. Finally, they'll dig into the limitations of audio engine architecture when it comes to advanced spatial audio technology, and offer some suggestions for addressing them. Don't miss it!

Further details on these talks and many more are available now on the GDC 2019 Session Scheduler. There you can begin to lay out your GDC 2019, which takes place March 18th through the 22nd at the (newly renovated!) Moscone Center in San Francisco.

Bring your team to GDC! Register a group of 10 or more and save 10 percent on conference passes. Learn more here.