Key components of VRWorks are being adopted by industry leading graphics engines such as Unreal Engine 4 and Unity 5 to provide developers an easy path to taking advantage of the SDK in their games and applications.

In addition, VRWorks includes SDKs that accelerate 360 video capture and processing, and that enhance professional VR environments such as CAVEs, immersive displays, and cluster solutions.

Variable Rate Shading (VRS) is an easy-to-implement rendering technique enabled by Turing GPUs. It increases rendering performance and quality by applying a varying amount of processing power to different areas of the image. With VRS, a single pixel-shading operation can be applied to a whole block of pixels, allowing applications to effectively vary the shading rate in different areas of the screen. VRS can be used to render more efficiently in VR by rendering to a surface that closely approximates the lens-corrected image that is output to the headset display. VRS can also be coupled with eye tracking to maximize quality in the foveated region.
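The performance benefit can be illustrated with back-of-the-envelope arithmetic. The resolution, foveal-region size, and coarse-shading block size below are hypothetical choices for illustration, not values from the VRWorks SDK; the sketch simply counts pixel-shader invocations when the periphery is shaded at one invocation per 2x2 block.

```python
def shading_invocations(width, height, foveal_fraction, coarse_block):
    """Foveal region shades one invocation per pixel; the periphery
    shades one invocation per coarse_block x coarse_block pixel block."""
    total = width * height
    foveal = int(total * foveal_fraction)
    periphery = total - foveal
    return foveal + periphery // (coarse_block * coarse_block)

full = 2560 * 1440                              # naive 1x1 shading everywhere
vrs = shading_invocations(2560, 1440, 0.25, 2)  # 2x2 blocks outside the fovea
savings = 1 - vrs / full
```

Even with a generous foveal region covering a quarter of the screen, coarse-shading the rest cuts pixel-shader work by more than half in this model.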

Multi-Res Shading is an innovative rendering technique for VR in which each part of the image is rendered at a resolution that better matches the pixel density of the lens-corrected image. Multi-Res Shading uses Maxwell and later architecture features to render multiple scaled viewports in a single pass, delivering substantial performance improvements in pixel shading.
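As a rough sketch of the savings (all numbers hypothetical, not from the SDK): divide the render target into a 3x3 viewport grid, keep the center viewport at full resolution, and scale the eight outer viewports down in both axes, then compare pixel counts.

```python
def multires_pixels(width, height, center_frac, peripheral_scale):
    """Estimate pixels shaded by a 3x3 multi-res viewport grid: the
    center viewport keeps full resolution, while the eight outer
    viewports are rendered scaled down in both axes."""
    center = (width * center_frac) * (height * center_frac)
    periphery_full = width * height - center
    return center + periphery_full * peripheral_scale ** 2

full = 2160 * 1200                                # combined full-resolution target
multires = multires_pixels(2160, 1200, 0.5, 0.5)  # outer viewports at half scale
```

With the outer viewports at half scale in each axis, this model shades well under half the pixels of the naive approach.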

Lens Matched Shading uses the Simultaneous Multi-Projection architecture of NVIDIA Pascal-based GPUs to provide substantial performance improvements in pixel shading. The feature improves upon Multi-Res Shading by rendering to a surface that closely approximates the lens-corrected image that is output to the headset display. This avoids rendering many pixels that would otherwise be discarded before the image is output to the VR headset.
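The wasted work being avoided is easy to quantify with an idealized model: if the lens makes only a circular region of a square render target visible, everything in the corners is shaded and then discarded by lens correction. The geometry below is a deliberate simplification for illustration, not the actual lens-matched warp.

```python
import math

# Idealized model: a circular lens-visible region inscribed in a
# square render target. Pixels outside the circle are shaded and
# then thrown away during lens correction.
visible_fraction = math.pi / 4          # circle area / square area
wasted_fraction = 1 - visible_fraction  # roughly a fifth of the target
```

In this simplified geometry, about 21% of the shaded pixels never reach the display, which is the class of waste a lens-matched render surface is designed to avoid.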

Traditionally, VR applications have to draw geometry twice -- once for the left eye, and once for the right eye. Single Pass Stereo uses the new Simultaneous Multi-Projection architecture of NVIDIA Pascal-based GPUs to draw geometry only once, then simultaneously project both right-eye and left-eye views of the geometry. This allows developers to almost double the geometric complexity of VR applications, increasing the richness and detail of their virtual world.

Multi-View Rendering (MVR) is a feature in Turing GPUs that expands upon Single Pass Stereo, increasing the number of projection views for a single rendering pass from two to four. All four of the views available in a single pass are now position-independent and can shift along any axis in the projective space. By rendering four projection centers, Multi-View Rendering can power canted HMDs (non-coplanar displays), enabling extremely wide fields of view and novel display configurations.
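A sketch of what "four position-independent views" means in practice: each view gets its own view matrix, here built from a per-eye offset plus an outward cant angle for the side displays of a hypothetical canted HMD. The helper, the IPD value, and the cant angles are all illustrative assumptions, not part of the VRWorks API.

```python
import math

def eye_view(eye_offset_m, cant_deg):
    """Hypothetical per-view parameters for a canted four-display HMD:
    each view is the head pose translated along x and yawed outward.
    Returns a 4x4 row-major view matrix as nested lists."""
    c = math.cos(math.radians(cant_deg))
    s = math.sin(math.radians(cant_deg))
    return [[  c, 0.0,   s, -eye_offset_m],
            [0.0, 1.0, 0.0,           0.0],
            [ -s, 0.0,   c,           0.0],
            [0.0, 0.0, 0.0,           1.0]]

ipd = 0.064  # hypothetical 64 mm interpupillary distance
views = [eye_view(-ipd / 2, -10.0),  # left outer display, canted outward
         eye_view(-ipd / 2,   0.0),  # left inner display
         eye_view(+ipd / 2,   0.0),  # right inner display
         eye_view(+ipd / 2, +10.0)]  # right outer display, canted outward
```

With MVR, all four of these projections can be fed by a single geometry pass instead of four separate passes.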

Additional Features

Late Latch (DX11): Late-Latch is a generic mechanism that lets VR application developers and VR headset developers provide latency-sensitive data to the GPU as late as possible. In traditional frame pipelining, such latency-sensitive data is sampled by the CPU before it executes its portion of the work, so the data is already out of date by the time the GPU begins rendering. Late-Latching cuts down this latency, reducing discomfort and motion sickness.
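A toy timeline makes the benefit concrete. The millisecond values below are made up; the point is only that pose data latched right before the GPU consumes it is fresher at scanout than pose data sampled when CPU work begins.

```python
# Hypothetical frame timeline, in milliseconds.
cpu_start = 0.0   # CPU begins building the frame; traditional pose sample here
gpu_start = 11.0  # GPU begins rendering; a late-latched pose is read here
scanout = 22.0    # finished frame is scanned out to the headset

traditional_latency = scanout - cpu_start   # age of the pose at display time
late_latched_latency = scanout - gpu_start  # age of a late-latched pose
```

In this model, late-latching halves the motion-to-photon contribution of stale pose data.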

VRWorks Graphics for Headset Developers

Context Priority provides headset developers with control over GPU scheduling to support advanced virtual reality features such as asynchronous time warp, which cuts latency and quickly adjusts images as gamers move their heads, without the need to re-render a new frame. Combined with the Pre-emption support of NVIDIA’s Pascal-based GPUs, Context Priority delivers faster, more responsive refreshes of the VR display.

With Direct Mode, the NVIDIA driver treats a VR headset as a head-mounted display accessible only to VR applications, rather than as a normal Windows monitor on which the desktop appears, providing better plug-and-play support and compatibility for the headset.

Front Buffer Rendering enables the GPU to render directly to the front buffer to reduce latency.

GPUDirect for Video technology enables low latency video transfers to and from the GPU enabling developers to seamlessly overlay video and graphics into VR environments. GPUDirect for Video can be used in conjunction with the 360 Video SDK for fast ingest of multiple camera streams. GPUDirect for Video is available for Quadro GPUs.

VRWorks Audio

VRWorks Audio is aimed at game and application developers and brings a new level of audio fidelity and immersion by ray tracing sound in 3D space, creating a realistic acoustic image of geometrically complex environments. On Turing, VRWorks Audio is accelerated by up to 6x compared to Pascal-based GPUs.

The demand for realism increases dramatically the instant a player puts on a head-mounted display (HMD) - images, sounds, and interactions make or break the immersiveness of the experience. For example, audio in a small room will sound different than the same audio outdoors because of the reflections caused by sound bouncing off the walls of the room.
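The room-versus-outdoors difference comes down to reflected paths arriving later than the direct path. A minimal first-order image-source sketch shows the idea (the positions, the single wall, and the helper function are hypothetical; VRWorks Audio itself traces full 3D scene geometry):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def first_reflection_delay(source, listener, wall_x):
    """Image-source method: mirror the source across a wall at
    x = wall_x; the echo's extra travel distance over the direct
    path, divided by the speed of sound, is its extra delay."""
    direct = math.dist(source, listener)
    image = (2 * wall_x - source[0], source[1], source[2])
    reflected = math.dist(image, listener)
    return (reflected - direct) / SPEED_OF_SOUND

# Source 1 m from a wall at x = 0, listener 3 m past the source:
delay_s = first_reflection_delay((1.0, 0.0, 0.0), (4.0, 0.0, 0.0), 0.0)
```

Outdoors there is no wall, so this echo never arrives; in the room it trails the direct sound by a few milliseconds, which is exactly what makes the space audible.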

Realistically modelling touch interactions and environment behavior is critical for delivering full presence in VR. Today's VR experiences deliver touch interactivity through a combination of positional tracking, hand controllers, and haptics. NVIDIA's PhysX Constraint Solver detects when a hand controller interacts with a virtual object and enables the game engine to provide a physically accurate visual and haptic response. PhysX also models the physical behavior of the virtual world around you so that all interactions, whether an explosion or a hand splashing through water, are accurate and behave as they would in the real world.

NVIDIA provides various synchronization techniques to prevent tearing and image misalignment when one large desktop is driven from multiple Quadro GPUs or clusters. Technologies such as Frame Lock, Stereo Lock, Swap Groups, and Swap Barriers are available to help developers design seamless and expansive VR CAVE and cluster environments.
