First disclosed this evening in teaser videos tied to a GDC presentation on Unity, today AMD is announcing two developer-oriented features: real-time ray tracing support for the company's ProRender rendering engine, and Radeon GPU Profiler 1.2.

Though Microsoft’s DirectX Raytracing (DXR) API and NVIDIA’s “RTX Technology” DXR backend were announced today as well, the new ProRender functionality appears to be largely focused on game and graphics development, as opposed to an initiative aimed at real-time ray tracing in shipping games. Similarly, while Radeon GPU Profiler (RGP) has not received a major update since December 2017, as AMD’s low-level hardware-based debugging/tracing tool for Radeon GPUs it is likewise purely for developers.

In any case, for Radeon ProRender AMD is bringing support for mixing real-time ray tracing with traditional rasterization for greater computational speed. As with today's other real-time ray tracing announcements, AMD's focus is on capturing many of the photorealism benefits of ray tracing without the high computational costs. At a basic level this is achieved by limiting ray tracing to where it's necessary, enough so that it can run in real time alongside a rasterizer. Unfortunately, beyond this high-level overview, that is all AMD has revealed at this time. We're told a proper press release with further details will be coming out tomorrow morning.
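AMD has not published implementation details yet, so purely as an illustration of the hybrid idea: rasterize every pixel cheaply, then spend ray-tracing work only on the pixels that actually need it. Everything below (the scene, the function names, the mirror-pixel set) is a hypothetical sketch, not AMD's API.

```python
# Toy sketch of hybrid rendering: a cheap raster pass over the whole frame,
# with expensive ray tracing reserved for flagged pixels. All names are
# illustrative -- this is not ProRender's actual interface.

MIRROR_PIXELS = {3, 7}  # pixels covered by a mirror-like surface (made up)

def rasterize(pixel):
    # Cheap pass: every pixel gets a base color, plus a flag marking
    # whether it needs a ray-traced effect on top.
    return {"color": "diffuse", "needs_rays": pixel in MIRROR_PIXELS}

def trace_reflection(pixel):
    # Expensive pass: cast secondary rays for an accurate reflection.
    return "reflected"

def render(width):
    frame = []
    for p in range(width):
        frag = rasterize(p)
        frame.append(trace_reflection(p) if frag["needs_rays"] else frag["color"])
    return frame

frame = render(10)  # only 2 of 10 pixels pay the ray-tracing cost
```

The point of the sketch is the cost structure: the expensive path runs for a small, bounded subset of the frame, which is what makes a real-time hybrid plausible at all.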

As for the new version of RGP, 1.2 introduces interoperability with RenderDoc, a popular frame-capture-based graphics debugging tool, as well as an improved frame overview. The update also brings detailed barrier codes, relating to the fine-grained synchronization of graphical work between stages of the DirectX 12 pipeline.
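For context on what a "barrier" means here: in DirectX 12 the application itself must insert resource barriers to transition a resource (say, a render target) into the state the next pipeline stage expects, via `ID3D12GraphicsCommandList::ResourceBarrier`; mismatched states are a common class of bug that profiling tools help surface. A minimal toy model of the idea, with simplified, hypothetical class and state names:

```python
# Toy model of a D3D12-style resource transition barrier. In real DirectX 12
# this is ID3D12GraphicsCommandList::ResourceBarrier; everything below is a
# simplified illustration, not the actual API.

class Resource:
    def __init__(self, name, state):
        self.name = name
        self.state = state

def transition_barrier(resource, state_before, state_after):
    # The declared 'before' state must match the resource's actual state;
    # a mismatch is exactly the kind of hazard a profiler can flag.
    if resource.state != state_before:
        raise RuntimeError(
            f"{resource.name}: expected {state_before}, found {resource.state}")
    resource.state = state_after

shadow_map = Resource("shadow_map", "RENDER_TARGET")
# Done rendering into it; transition before sampling it in a pixel shader.
transition_barrier(shadow_map, "RENDER_TARGET", "PIXEL_SHADER_RESOURCE")
```

Because these transitions are explicit and easy to get wrong (or to over-issue, which costs performance), per-barrier detail in a profiler like RGP is genuinely useful to developers.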

Regardless, AMD has yet more to say on the ray-tracing topic. Along with tomorrow's press release, AMD has a GDC talk scheduled for Wednesday on “Real-time ray-tracing techniques for integration into existing renderers,” presumably discussing ProRender in greater detail.


16 Comments

My understanding is that ray tracing hasn't been the best way to render things for some time, but it takes far less "special casing" than the best methods.

In other words, it will make Indie/Unity games look "almost AAA" level, but don't count on them being quite there. If it works (and works reasonably well on NVIDIA hardware or consoles), we will hopefully see more interesting indie work.

That or AAA games won't look as good but will suddenly be wildly more profitable for publishers. Actually, I'd count on that second bit a lot more (if it works well enough on consoles, which I'm not really expecting).

Pure naive ray tracing is crap, lacking both quality and speed. Existing rasterization hacks are faster yet look better than such ray tracing ever could. A few additions get you modern ray tracing that can at least handle soft shadows (say, by using beams instead of individual rays) ... and now we are talking. Potential for near-perfect quality in 99% of cases, but it is even slower, and I guess it will never make sense for games - except in addition to another method.

But you still have issues with oil on water, for which you need TMM or such. Then you have butterfly wings, requiring FEM/FDTD/, maybe you want to add some physical scattering, and again you need other methods to calculate scattering from that ... Fortunately, at least nonlinear effects can be safely ignored - if you are looking at such light, you won't be for long.
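The soft-shadow point above can be made concrete with the common many-shadow-rays approach (rather than beams): a single ray to a point light gives a binary blocked/unblocked answer, while sampling many rays toward an area light yields fractional occlusion, i.e. a penumbra. The 2D geometry below (an occluder at height 5, a light at height 10) is entirely made up for illustration.

```python
# Illustrative 2D sketch: hard vs. soft shadows from ray counts.

def shadow_fraction(px, light_xs, occ_half_width=0.5, occ_y=5.0, light_y=10.0):
    """Fraction of rays from shading point (px, 0) to light samples
    (lx, light_y) that are blocked by an occluder spanning
    |x| <= occ_half_width at height occ_y."""
    blocked = 0
    for lx in light_xs:
        # x-coordinate where the shadow ray crosses the occluder's height
        x_hit = px + (lx - px) * (occ_y / light_y)
        if abs(x_hit) <= occ_half_width:
            blocked += 1
    return blocked / len(light_xs)

# One ray to a point light: all-or-nothing, so shadow edges are hard.
hard = shadow_fraction(1.0, [0.0])
# 201 rays spread across an area light from x = -1 to 1: the same shading
# point is only partially occluded, giving a penumbra.
soft = shadow_fraction(1.0, [-1 + 2 * i / 200 for i in range(201)])
```

The extra quality comes directly from the extra rays, which is the commenter's tradeoff: better shadows, but even slower.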

You make the mistake of assuming that ray tracing lacks quality. True ray tracing will always provide the best quality, but due to its speed (or lack of it), it has not been useful for real-time animation. Anything that reduces quality will, of course, improve performance, but at the expense of making ray tracing not all that useful.

As GPU technology improves, the potential grows for making ray tracing useful for real-time tasks. We are still decades away from that, but it is there. The key is still to provide an API so that different hardware methods can be used/attempted without needing software written specifically for one vendor's implementation.

It's only good if you're aiming for physical rendering. There's no special casing as there is with rasterization, because it works like the real world. But there is an irony: you end up special-casing ray tracing and setting up contrived modeling when you do NOT want physical or photorealistic rendering, and that is the case with most games and most 3D FX and 3D CG usage now.

It's not either/or, is it? It sounds like you can pick what techniques to use for different parts of the render. And just because you want a non-photorealistic style for a game doesn't mean that you wouldn't like immersive reflections and shadows, for instance. I can definitely imagine some raytracing techniques adding positively to the experience of games that aren't aiming for purely physical rendering.

From the sparse details NVIDIA has mentioned about their implementation, it sounds like it will compete for resources because it is implemented as software running on their shader cores (very similar to how GPU rendering plugins work for 3D modeling programs). So assuming it is the same here, you have to sacrifice your compute cores for rendering work.

The issue is that GPUs already have their hardware rasterizers (the ROPs), and if that's good enough, then the extra couple percent of more realistic fidelity for whatever part of the scene you want to render may not be worth the tradeoff in games and other realtime entertainment applications.

However, I can see the usage for this in professional applications that need physical simulation.

I don't understand what you are saying about the shader cores and the ROPs. ROPs don't just automatically render a scene. They work in conjunction with the shaders to compute the output.

And everything has to compete for resources. Even if it's specialized hardware, it's competing for resources because it's competing for die area. The hardware rasterizers themselves are taking up resources, then.

The way to look at it is the cost/benefit you get from putting more resources into algorithms. Rasterization involves a bunch of hacks that allow various levels of quality by making approximations and reformulations of the problem. Eventually there is no way to efficiently apply increased computation power to a method, and a new method must be found. Is there a guarantee that such a method can be found for every useful effect, one that is both consistent with the rasterization scheme and more efficient than raytracing? Additionally, applying some wanted effects excludes the computation-saving hacks used in others. I can imagine a case where mixing method A with raytracing method Z is both visually preferable and less computationally expensive than the equivalent combination of methods B and C: although C has decent quality and uses less computation than Z, it is incompatible with the good and efficient method A, so the more expensive B must be used instead.

To me what you are saying is "rasterization quality is good enough", which could be extended to "current visual quality is good enough", since Microsoft seems to be saying that the best way to increase visual quality in the future is through increasing use of raytracing techniques.

My point was that small Unity developers might well decide that "physical or photorealistic" rendering is good enough and that they don't have time to special case everything.

Getting all the light and shadows right (to a degree; special casing should avoid cases where it takes a completely unrealistic number of rays) is worth plenty. And it won't take extra developer time. Whether gamers demand that "special level of fakeness" in AAA games remains to be seen.