AMD Breaks New Ground With the LiquidVR SDK

Solving the performance issues holding virtual reality back…

Today at GDC, AMD’s Raja Koduri introduced the LiquidVR SDK, aimed at enabling new virtual reality (VR) solutions built on AMD’s technologies. The SDK gives developers the ability to use AMD’s GPUs to accelerate head-tracking computations thanks to a feature AMD calls Latest Data Latch. It also minimizes latency and stuttering with asynchronous shaders, which take advantage of the asynchronous compute engines in AMD’s GCN GPUs to execute image processing and image rendering tasks in parallel. The SDK also moves multi-GPU options for VR forward by assigning a GPU to each eye to avoid latency issues, and it makes it easier for programmers to integrate multi-GPU support into game engines. The final feature of AMD’s LiquidVR SDK is Direct-to-Display, which enables direct front-buffer rendering, sending images to the VR display without passing them through the operating system.

Starting from the top with AMD’s Latest Data Latch feature, it’s clear that AMD understands how important latency is to VR applications. The purpose of Latest Data Latch is to ensure that the most up-to-date head-tracking inputs are used for rendering, and it does this by changing how head-tracking data is sent to the GPU. Traditionally, the CPU packages the input data and saves it so that the GPU can access it when it’s ready. This can lead to issues when the GPU doesn’t grab the latest packet or when the CPU can’t package and submit the input fast enough. With Latest Data Latch, instead of the CPU sending a packet of input data for the GPU to use at its leisure, the CPU sends a reference that the GPU can follow to read the absolute most recent input data at the moment it needs it. The result is reduced latency between when the user moves and when the VR headset can compensate for that movement.
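The difference between the two approaches can be illustrated with a minimal sketch. Note this is purely a conceptual simulation, not the LiquidVR API; the `HeadTracker` class and pose values are hypothetical.

```python
class HeadTracker:
    """Illustrative stand-in for a head-tracking sensor (not the LiquidVR API)."""
    def __init__(self):
        self.latest_pose = 0.0  # head yaw in degrees, updated continuously

    def update(self, yaw):
        self.latest_pose = yaw

tracker = HeadTracker()

# Traditional path: the CPU snapshots the pose when it submits the frame.
tracker.update(10.0)
packet_pose = tracker.latest_pose   # snapshot packaged at submit time

# The head keeps moving while the submitted work waits in the GPU's queue...
tracker.update(14.0)

# Latest Data Latch: the GPU follows the shared reference only when the
# shader that needs the pose actually runs, so it sees the newer value.
latched_pose = tracker.latest_pose

print(packet_pose, latched_pose)  # 10.0 14.0
```

The packet approach renders with the pose as it was at submission; the latch approach renders with the pose as it is at execution, which is what keeps perceived latency down.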

Asynchronous shaders for VR are another big feature of AMD’s LiquidVR SDK. Thanks to the Asynchronous Compute Engines present in AMD’s Graphics Core Next GPU architecture, AMD is enabling VR game engine developers to minimize latency and eliminate stuttering and judder by executing image processing and image rendering tasks in parallel. This is an improvement over a synchronous model, where each task has to wait for all of the tasks submitted to the GPU before it to finish before it can execute. This capability enables Hardware-Accelerated Time Warp, which samples the head-tracking data again after a frame has been rendered and warps that frame to reflect the difference between the head-tracking data the frame was rendered with and the current head-tracking data. This trick goes a long way towards minimizing the latency between the user’s inputs and the VR solution’s response.
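The core arithmetic of a timewarp-style correction can be sketched as a simple reprojection: shift the finished frame by however far the head has turned since the frame was rendered. This is a hypothetical first-order approximation for illustration only; the actual SDK performs the warp on the GPU, and the field-of-view and resolution values here are assumptions.

```python
def timewarp_shift(render_yaw_deg, display_yaw_deg, fov_deg=90.0, width_px=1080):
    """Approximate horizontal pixel shift needed to re-project a finished
    frame from the pose it was rendered with to the latest pose.
    Hypothetical helper, assuming a 90-degree FOV across 1080 pixels."""
    delta = display_yaw_deg - render_yaw_deg
    px_per_degree = width_px / fov_deg
    return delta * px_per_degree

# Frame rendered with the head at 10 degrees of yaw; by scan-out time the
# head has turned to 12 degrees, so the frame is shifted to compensate:
shift = timewarp_shift(10.0, 12.0)
print(shift)  # 24.0 pixels
```

Because the warp is far cheaper than re-rendering the scene, running it asynchronously lets the headset track head motion even when a full frame isn’t ready in time.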

AMD’s Affinity Multi-GPU feature lowers latency and improves performance for VR solutions using multi-GPU setups. It does so in a novel way: by assigning a single GPU to each eye in a dual-GPU configuration, it avoids the latency inherent to a traditional alternate frame rendering (AFR) scheme. AFR solutions require both GPUs to know what the other is doing and to hold a duplicate copy of the frame buffer in local memory. Because each GPU in AMD’s new scheme renders images for a specific eye, neither has to keep track of what the other is doing; there is no overlap in what they render. Instead, the coherency of the rendering is managed by the CPU. Tasking the CPU with keeping the GPUs rendering correctly may seem like a big ask, but it’s actually less work than in AFR mode, where the CPU would send duplicate commands to both GPUs so that each could render the entire scene rather than just its half.
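The per-eye split can be sketched as follows. The two worker threads here are hypothetical stand-ins for two GPUs; the point is that each device receives one command stream for one eye, rather than both devices receiving duplicated commands for the whole scene as under AFR.

```python
from concurrent.futures import ThreadPoolExecutor

def render_eye(gpu_id, eye, scene):
    """Stand-in for submitting one eye's command stream to one GPU."""
    return f"gpu{gpu_id}:{eye}:{scene}"

scene = "frame42"
with ThreadPoolExecutor(max_workers=2) as pool:
    # The CPU coordinates the two devices: one submission per eye,
    # no duplicated command stream and no shared frame-buffer copies.
    left = pool.submit(render_eye, 0, "left", scene)
    right = pool.submit(render_eye, 1, "right", scene)
    stereo_pair = (left.result(), right.result())

print(stereo_pair)  # ('gpu0:left:frame42', 'gpu1:right:frame42')
```

Since the two eyes’ views are rendered for the same moment in time, both GPUs can work on the same frame in parallel instead of alternating frames, which is what removes the frame of latency AFR carries.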

The final piece of AMD’s LiquidVR SDK is Direct-to-Display, which is aimed at getting the operating system out of the way when rendering to VR headsets. It allows applications to directly control what is displayed on a VR headset without any involvement from the OS. The major benefits to the user are the ability to use VR headsets with operating systems that don’t natively support them and the avoidance of any additional latency that passing through the OS layer might add.
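The latency at stake is easy to put a rough number on. A desktop compositor typically queues at least one extra frame before scan-out, so bypassing it can save on the order of a full refresh interval. The 90 Hz figure below is an assumption for illustration, not a number from AMD:

```python
refresh_hz = 90.0  # assumed VR panel refresh rate
frame_ms = 1000.0 / refresh_hz

# One queued compositor frame costs roughly one refresh interval:
print(round(frame_ms, 1))  # 11.1 ms
```

Shaving roughly 11 ms matters in VR, where total motion-to-photon latency budgets are commonly discussed in the low tens of milliseconds.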

AMD believes that the performance enhancements its LiquidVR SDK brings to the table solve the performance issues that have made VR headsets unpalatable to most users. It has done this by embracing the low-latency rendering efforts that VR companies have been championing and then solving the stuttering and juddering issues that arrived with these new low-latency display techniques.

As with any software effort from AMD, the company has lined up a few partners, namely Oculus, nDreams, and Crytek, to announce their support for the LiquidVR SDK. The SDK launches today as an alpha available only to developers registered with AMD. Considering how many developers signed up for AMD’s Gaming Evolved and Mantle programs, though, there’s no reason to believe that major developers won’t have access to the LiquidVR SDK after today.

With the LiquidVR SDK, AMD is planting its flag in the ground. AMD is serious about VR and even more serious about enabling developers to build VR solutions using its technologies. If you’re a fan of VR then today will be one for the history books, because hardware manufacturers like AMD are finally taking VR seriously and working to solve the performance problems that have kept virtual reality from going mainstream.S|A

Thomas Ryan is a freelance technology writer and photographer from Seattle, living in Austin. You can also find his work on SemiAccurate and PCWorld. He has a BA in Geography from the University of Washington with a minor in Urban Design and Planning and specializes in geospatial data science. If you have a hardware performance question or an interesting data set Thomas has you covered.
