REDEFINING REALITY.

VR AND SIMULATION CONFERENCE TRACK

See how virtual reality is revolutionizing professional workflows in every industry. Holding virtual models in your hands, walking through entire buildings, rehearsing combat missions, and practicing complex surgical procedures in virtual environments are changing the way work gets done.

THE PREMIER AI EVENT COMES TO D.C.

David Weinstein

NVIDIA, Director of Professional Virtual Reality

Introducing Project Holodeck

NVIDIA is committed to advancing next-generation virtual reality, complete with stunning high-fidelity visuals, dynamic physical behaviors, and real-time social interactions. Within Holodeck, friends will be able to create and share games, families will be able to explore vacation plans and experiences, designers will be able to evaluate new models, and robots will be able to learn complex new tasks. We'll discuss the Holodeck architecture and use cases.

ABOUT THE SPEAKER: David Weinstein is the Director of Professional Virtual Reality at NVIDIA, responsible for NVIDIA's professional VR products, projects, and SDKs. Prior to joining NVIDIA, Dave founded and ran three tech start-up companies.

Tom Kaye

NVIDIA, Senior Solutions Architect

Multi-User VR Solutions for Enterprise Deployment

Deploying PC-based virtual reality solutions throughout the enterprise poses challenges beyond the typical consumer model of one PC driving one headset for one user at one location. For consumer VR inside the home, the primary user typically owns, maintains, and controls access to both the PC and the physical location. Business needs are different. As the number of locations or simultaneous users at each location increases, manageability becomes unwieldy. Enterprise requirements come into play, such as deployment to temporary locations with limited setup/pack-up time, limited physical space, robustness, scaling to many concurrent users, multi-user collaboration, remote IT management, configuration control, and system image replication. We'll introduce an experimental approach to multi-user VR deployment based on virtualization techniques that aims to address these enterprise use-case requirements.

ABOUT THE SPEAKER: Tom Kaye is a senior solutions architect at NVIDIA, where he focuses on professional uses of virtual and augmented reality. Tom's 30-plus years of professional visualization systems experience includes broad industry experience at Silicon Graphics as a sales engineer, principal systems engineer, solution architect, and product manager. He co-chairs the aerospace engineering industrial advisory board for the University of Alabama and serves on the Board of Trustees for the Naval Aviation Museum Foundation. Tom received his Bachelor of Science and Master of Science in aerospace engineering from the University of Alabama, where he developed 3D graphics for U.S. Army helicopter flight simulators.

Tim Woodard

NVIDIA, Senior Solutions Architect

NVIDIA Tools and SDKs for Training and Simulation

In addition to making the world's most powerful GPUs, NVIDIA also develops a wide range of software tools and SDKs that can be used to make better training and simulation systems. These include support for multi-GPU configurations, VR, advanced rendering, high-performance computing, virtualization, and deep learning.

ABOUT THE SPEAKER: Tim Woodard is a senior solutions architect in the professional visualization group at NVIDIA. Tim has over 20 years of experience designing and developing software architectures for real-time flight simulation visual systems using modern OpenGL techniques, advanced C++, and Agile development processes. He has received patents and has published and presented papers at GTC, I/ITSEC, IMAGE, ASQ, and ITEC.

Heidi Buck

USN - SPAWAR, Director of BEMR Lab

A Look Inside the US Navy's Mixed Reality Lab

A look inside the US Navy's mixed reality lab, including an overview of the Unity-based AR/VR projects being developed to support the warfighter. The talk will highlight 1) a recent live-fire exercise aboard the USS BUNKER HILL that tested an augmented reality heads-up display as part of a topside ship gunnery system, and 2) the team's recent SECNAV Innovation Award-winning work creating 3D LIDAR scans of the entire USN fleet and a collaboration environment that supports ship installations.

ABOUT THE SPEAKER: Ms. Heidi Buck joined the Space and Naval Warfare Systems Center, Pacific (SSC Pacific) in 2002 after receiving her master's degree in electrical engineering, with a focus in signal and image processing, from the University of California, San Diego (UCSD). She spent 13 years of her career at SSC Pacific working on computer vision systems to support the warfighter, focused primarily on automatic target recognition. In January 2015 she founded the Battlespace Exploitation of Mixed Reality (BEMR) Lab, a space for innovative thinkers to leverage low-cost commercial technology in the mixed reality (virtual and augmented reality) space and to foster collaboration among warfighters, researchers, government, industry, and academia. The lab has received national attention, including visits from the Secretary of Defense, members of Congress, Fortune 500 CEOs, military leaders, and numerous other VIPs.

Rich Rabbitz

Lockheed Martin, Principal Member of Engineering Staff

Applying Virtual Reality and Augmented Reality to the Lifecycle Phases of Complex Products

We'll demonstrate how best to utilize GPU technology for virtual reality (VR) and augmented reality (AR) applications. The lifecycle phases of a product such as a naval ship include design, production, deployment, operations, maintenance, and upgrades. Early in the design phase, it is typical to have the product modeled in a 3D CAD system. By adding physical properties to the CAD model, a virtual model can be created and used to perform physics-based simulations. These simulations are used to verify and validate the design. This virtual product model can be presented and manipulated in VR. The goal is to catch problems during the conceptual design phase, before construction begins. VR and AR can both be applied during construction, operations, maintenance, and product upgrades. The Lockheed Martin Surface Navy Innovation Center (LM-SNIC) is investigating best practices for applying VR and AR to the lifecycle phases of complex products such as naval ships, aircraft, and ground radars.

ABOUT THE SPEAKER: Richard (Rich) Rabbitz is a Principal Member of the Engineering Staff in the Ship Integration and Test organization at Lockheed Martin Rotary and Mission Systems (RMS). He has been leading the graphics group in Lockheed Martin's Surface Navy Innovation Center (SNIC) since its opening in 2014. Rich applies 3D graphics technologies such as OpenGL, CUDA, OptiX, Iray, augmented reality, and virtual reality to many different engineering disciplines throughout Lockheed Martin. He holds a Master of Science in Engineering (MSE) degree in computer science from the University of Pennsylvania. Rich is also a professor in the Computer Science department at Rowan University.

Paul Kruszewski

wrnch, CEO

Using AI to Read Human Body Language in Real-Time from Standard Video

Inexpensive connected cameras in vehicles, drones, and buildings produce immense volumes of raw video imagery. It is practically impossible for humans to monitor and understand all of this footage to determine actionable events. We'll present a computer vision AI technology that uses deep learning to understand and read human body language from standard 2D RGB video cameras. We'll describe in detail the stages of our NVIDIA® CUDA®-based pipeline, from training on DGX-1s to Titan X cloud solutions to edge-based deployment on Jetson TX2s. We'll describe how this system can be integrated into a variety of industrial applications, including human behavior monitoring for analytics and security, fall/accident detection in the home, and full-body VR for collaboration and simulation. We'll also present live demos using a standard webcam and a Jetson TX2 dev kit.

ABOUT THE SPEAKER: Paul Kruszewski has been at the sharp point of the intersection of AI and real-time computer graphics since 2000, when he founded AI.implant to use AI to create and simulate huge crowds of interacting autonomous characters. Customers included Disney and Lucasfilm for visual effects; BioWare and EA for game development; and L3 and Lockheed Martin for military simulation. AI.implant was acquired in 2005 by Presagis, the world's leading developer of software tools for military simulation and training. In 2007, he founded GRIP to use AI to create high-fidelity autonomous characters capable of rich and complex behaviors. Customers included BioWare, Disney, EA, and Eidos. GRIP was acquired in 2011 by Autodesk, the world's leading developer of software tools for digital entertainment. In 2014, he founded wrnch to use AI (deep learning and computer vision) to enable computers to read human body language.