GDC 2014

Many of our sessions from this year's GDC are available as videos in the GDC Vault and can be watched for free.

Avoiding Catastrophic Performance Loss: Detecting CPU-GPU Sync Points

CPU-GPU sync points continue to be one of the largest performance sinks on the PC while simultaneously proving maddeningly difficult to locate.
John will begin with a brief review of the Direct3D and OpenGL driver models, as well as a description of the two types of sync points
applications face. With that foundation in place, he will demonstrate a new, vendor-agnostic technique to detect CPU-GPU sync points.
Afterwards, he'll cover how to fix CPU-GPU sync points in the context of a real-world, complex application. Additionally, a complete set of
problematic entry points will be provided for each of Direct3D 11 and OpenGL.
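One portable way to hunt for sync points, in the spirit of the vendor-agnostic approach described above, is simply to time suspect API calls on the CPU: a call that blocks waiting on the GPU shows an anomalously long wall-clock time. The sketch below is illustrative only (the class and threshold are my own, not the talk's tool), assuming you wrap the API calls you want to audit:

```cpp
#include <cassert>
#include <chrono>
#include <functional>
#include <string>
#include <vector>

// Hypothetical sketch: wrap suspect API calls and flag any that stall
// the CPU far longer than a normal command submission should take.
struct SyncPointReport {
    std::string call_site;
    double elapsed_ms;
};

class SyncPointDetector {
public:
    explicit SyncPointDetector(double threshold_ms) : threshold_ms_(threshold_ms) {}

    // Time a wrapped call; record it if it blocked the CPU past the threshold.
    void Profile(const std::string& call_site, const std::function<void()>& api_call) {
        auto start = std::chrono::steady_clock::now();
        api_call();
        auto end = std::chrono::steady_clock::now();
        double ms = std::chrono::duration<double, std::milli>(end - start).count();
        if (ms >= threshold_ms_) {
            reports_.push_back({call_site, ms});
        }
    }

    const std::vector<SyncPointReport>& reports() const { return reports_; }

private:
    double threshold_ms_;
    std::vector<SyncPointReport> reports_;
};
```

In practice the wrapped calls would be the "problematic entry points" the talk enumerates, such as resource maps and query readbacks.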

One of the most exciting features of DirectX 11.2 is tiled resources, which allow for full-featured, hardware-accelerated virtual texturing that overcomes many of the limitations of traditional software-based virtual texturing algorithms, all while working on a large variety of already shipping graphics hardware. This talk will cover the basics of how to use the tiled resources feature, and cover a set of rendering techniques to allow your game to take full advantage of the feature.
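To give a flavor of the bookkeeping involved: tiled resources carve a texture into fixed 64 KB tiles, which for a 32-bit-per-texel format cover 128x128 texels each. Before sampling a region, the application must know which tile that region lives in so the tile can be mapped. A minimal sketch of that texel-to-tile math (function and struct names are my own, not part of the D3D API):

```cpp
#include <cassert>
#include <cstdint>

struct TileCoord {
    uint32_t x, y;    // tile coordinates within the mip level
    uint32_t index;   // linear tile index (row-major) within the mip
};

// 128x128 texels per 64 KB tile, assuming a 32-bit-per-texel format.
constexpr uint32_t kTileTexels = 128;

TileCoord TexelToTile(uint32_t texel_x, uint32_t texel_y, uint32_t mip_width_texels) {
    const uint32_t tiles_per_row = (mip_width_texels + kTileTexels - 1) / kTileTexels;
    TileCoord t;
    t.x = texel_x / kTileTexels;
    t.y = texel_y / kTileTexels;
    t.index = t.y * tiles_per_row + t.x;
    return t;
}
```

In a real renderer, indices like these would feed the API's tile-mapping calls (e.g. UpdateTileMappings) to bind physical pool memory behind the virtual texture.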

Getting the most out of D3D on the software side can be at least as important as making the GPU run efficiently. API inefficiencies are easily as important as GPU inefficiencies when delivering the best experience to your customers. This talk will highlight some typical problems, and it will delve into possible solutions. If you've ever asked, "Why can I only draw X items per frame?", this may be the talk for you.
Attendees will gain greater insight into advanced use of the Direct3D 11 graphics API, as employed for high-end techniques in popular shipping AAA titles.
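One classic answer to the "why can I only draw X items" question is to reduce redundant state changes by sorting the frame's draw list. The sketch below (structure and field names are assumptions, not from the talk) packs state into a sort key so consecutive draws share bindings and redundant binds can be skipped:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

struct DrawItem {
    uint32_t shader_id;
    uint32_t texture_id;
    uint32_t mesh_id;
};

// Pack state into one key; the most expensive switch goes in the highest
// bits so it changes least often after sorting.
uint64_t StateKey(const DrawItem& d) {
    return (uint64_t(d.shader_id) << 40) | (uint64_t(d.texture_id) << 20) | d.mesh_id;
}

// Count how many shader/texture binds a sorted submission would issue,
// binding only when the state actually differs from the previous draw.
int CountBindsAfterSort(std::vector<DrawItem> items) {
    std::sort(items.begin(), items.end(), [](const DrawItem& a, const DrawItem& b) {
        return StateKey(a) < StateKey(b);
    });
    int binds = 0;
    uint32_t last_shader = ~0u, last_texture = ~0u;
    for (const auto& d : items) {
        if (d.shader_id != last_shader)   { ++binds; last_shader = d.shader_id; }
        if (d.texture_id != last_texture) { ++binds; last_texture = d.texture_id; }
    }
    return binds;
}
```

The same idea generalizes to constant buffers, render targets, and anything else the driver validates per draw.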

DirectX 11 is now becoming widely adopted by game developers, having made its way to next-generation consoles with broad support on PC. The aim of this talk is to cover interesting visual effects used in some recently released game titles that leverage the advanced features of DirectX 11, and introduce new ways of looking at the tessellation part of the DirectX 11 rendering pipeline. The second part of the talk will cover tips and tricks to increase the visual fidelity of tessellated geometry, and things to be cautious of if you are planning to add tessellation support into your engine.
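A representative trick from the adaptive-tessellation family mentioned above is to pick per-edge tessellation factors from projected screen-space edge length, so distant patches generate fewer triangles. The heuristic below is a common approach, not code from the talk; the 16-pixels-per-segment target is an assumed tuning value:

```cpp
#include <algorithm>
#include <cassert>

// Choose a tessellation factor so each tessellated segment spans roughly
// target_px_per_segment pixels on screen. D3D11 hull shaders accept
// edge factors in [1, 64], so clamp to that range.
float EdgeTessFactor(float screen_edge_length_px, float target_px_per_segment = 16.0f) {
    float factor = screen_edge_length_px / target_px_per_segment;
    return std::min(64.0f, std::max(1.0f, factor));
}
```

Heuristics like this are also where the cautionary notes apply: factors that swing frame to frame cause visible popping, so engines often add hysteresis or snap factors to powers of two.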

NVIDIA GameWorks allows the creation of a more immersive game environment than was possible in previous-generation games. In the first part of the presentation, we'll show how CD Projekt RED integrated NVIDIA HairWorks into The Witcher 3 to provide more realistic fur and hair simulation. We'll discuss how to avoid common problems when using dynamic fur and hair, and we'll also go over the artist workflow. In the second part of the presentation, we'll provide an overview of the different GameWorks components: simulating realistic waves with NVIDIA WaveWorks, cinematic smoke and fire through NVIDIA FlameWorks, realistic skin rendering with NVIDIA FaceWorks, improved shadows with NVIDIA ShadowWorks, and more.

This session presents the technologies behind NVIDIA GRID and game streaming in the cloud. In the first part, we'll cover the key components of GRID, showing how the GRID APIs deliver optimal capture, efficient compression, fast streaming, and low latency for a first-class cloud gaming experience. In the second part, we'll go over optimization guidelines and available libraries for game engines running in the cloud: gamepad controller support, improved sign-on for games, adding global illumination, and PhysX. In the third part, we'll demonstrate how to use Amazon G2 servers with the NVIDIA GRID APIs to stream games from the cloud.

DirectX Advancements in the Many-Core Era: Getting the Most out of the PC Platform

See how upcoming improvements in Windows graphics APIs can be practically used by next-generation game engines to get dramatic gains in work submission efficiency and scalability across multiple CPU cores.
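The submission model these API improvements encourage can be sketched portably (all names here are mine, not the API's): worker threads record commands into their own buffers in parallel, and a single thread later submits the buffers in a fixed order, so recording scales with core count while submission order stays deterministic:

```cpp
#include <cassert>
#include <cstdint>
#include <thread>
#include <vector>

struct Command { uint32_t draw_id; };
using CommandBuffer = std::vector<Command>;

std::vector<Command> RecordAndSubmit(uint32_t num_threads, uint32_t draws_per_thread) {
    std::vector<CommandBuffer> buffers(num_threads);
    std::vector<std::thread> workers;
    for (uint32_t t = 0; t < num_threads; ++t) {
        workers.emplace_back([t, draws_per_thread, &buffers] {
            // Each worker records its slice of the frame independently,
            // touching only its own buffer (no shared mutable state).
            for (uint32_t i = 0; i < draws_per_thread; ++i)
                buffers[t].push_back({t * draws_per_thread + i});
        });
    }
    for (auto& w : workers) w.join();

    // Single-threaded "submit": concatenate the buffers in thread order.
    std::vector<Command> submitted;
    for (const auto& buf : buffers)
        submitted.insert(submitted.end(), buf.begin(), buf.end());
    return submitted;
}
```

Direct3D 11 exposes a version of this today through deferred contexts and command lists; the improvements previewed here aim to make that path scale much better.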

This talk will focus on the path toward bringing a fully capable OpenGL renderer to Unreal Engine 4. It will cover the challenges of mapping to the API, cross-platform shader management, porting to Android, and the steps needed to get great performance. Additionally, we'll dive into the Tegra K1 processor and demonstrate how having a fully functional OpenGL path enables high-quality rendering on a handheld device powered by Tegra K1. Attendees will leave this session with an understanding of the work required to get a fully functional, high-performance OpenGL backend on a modern engine that can span mobile to high-end PCs.