Another take on ray tracing

The debate over real-time ray tracing for games is not new by any stretch, but interest in it has been strongly renewed thanks to some articles from Daniel Pohl that we published recently. The first, published in December of 2006, was titled "Ray Tracing and Gaming - Quake 4: Ray Traced Project". To quote:

Ray tracing has the potential to become the widely used rendering
technology on desktop computers. The number of CPU cores is increasing
and special purpose ray tracing-hardware-prototypes (http://www.saarcor.de/)
show impressive results in speed improvement. It is still a long way to
playing computer games in graphics like the Lord of the
Ring-movies, but we are getting closer to it.

I wrote an article in September of 2007 called "Rendering Games with Raytracing Will Revolutionize Graphics" that looked at ray tracing performance improvements since Daniel's first article, the role Intel and their Larrabee project are playing in ray-traced graphics, and other benefits ray tracing might offer, such as scaling down to other devices.

Daniel's second article, "Ray Tracing and Gaming - One Year Later", was released in January of this year and offers examples from Intel of how the benefits of ray tracing can directly affect gamers and game engines. If you haven't read these incredibly interesting pieces, I suggest you do so.

There are many people who don't see ray tracing as the holy grail of gaming graphics, however. A corporation like NVIDIA, which has a vested interest in graphics beyond the scale of any other organization today, has to take a more pragmatic look at rendering technologies, including both rasterization and ray tracing; unlike Intel, it has decades of high-end rasterization research behind it and sees the future of 3D graphics remaining with that technology rather than switching to something new like ray tracing.

I recently was able to spend some time with Dr. David Kirk, Chief Scientist of NVIDIA, and ask him some questions regarding the rasterization versus ray tracing debate. I think you'll find his answers quite interesting in light of all the hype and excitement about Intel's developments.

PC Perspective: Ray tracing obviously has some advantages when it comes to high levels of geometry in a scene, but what are you doing to offset that advantage in traditional raster renderers?

David Kirk, NVIDIA: I'm not sure which specific advantages you are referring to, but I can cover some common misconceptions that are promulgated by the CPU ray tracing community. Some folks make the argument that rasterization is inherently slower because you must process and attempt to draw every triangle (even invisible ones)—thus, at best the execution time scales linearly with the number of triangles. Ray tracing advocates boast that a ray tracer with some sort of hierarchical acceleration data structure can run faster, because not every triangle must be drawn, and that ray tracing will therefore always be faster for complex scenes with lots of triangles. This is provably false.

There are several fallacies in this line of thinking, but I will cover only two. First, the argument that the hierarchy allows the ray tracer to not visit all of the triangles ignores the fact that all triangles must be visited to build the hierarchy in the first place. Second, most rendering engines in games and professional applications that use rasterization also use hierarchy and culling to avoid visiting and drawing invisible triangles. Backface culling has long been used to avoid drawing triangles that are facing away from the viewer (the backsides of objects, hidden behind the front sides), and hierarchical culling can be used to avoid drawing entire chunks of the scene. Thus there is no inherent advantage in ray tracing vs. rasterization with respect to hierarchy and culling.
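The backface culling test Dr. Kirk refers to can be sketched in a few lines: a triangle facing away from the viewer has a geometric normal pointing away from the eye, so a rasterizer can reject it with one dot product before any per-pixel work. This is an illustrative sketch, not NVIDIA's implementation; the counter-clockwise winding convention and all names here are assumptions.

```python
# Sketch of backface culling: a triangle whose normal points away from
# the eye (dot(normal, view_dir) >= 0) can be skipped entirely.
# Assumes counter-clockwise winding when viewed from the front.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def is_backfacing(v0, v1, v2, eye):
    """True if the triangle (v0, v1, v2) faces away from the eye point."""
    normal = cross(sub(v1, v0), sub(v2, v0))   # geometric normal
    view_dir = sub(v0, eye)                    # from the eye toward the triangle
    return dot(normal, view_dir) >= 0.0

# A triangle in the z = 0 plane, normal pointing toward +z:
tri = ((0, 0, 0), (1, 0, 0), (0, 1, 0))
print(is_backfacing(*tri, eye=(0, 0, 1)))   # eye in front -> False
print(is_backfacing(*tri, eye=(0, 0, -1)))  # eye behind   -> True
```

Hierarchical culling generalizes the same idea from single triangles to whole subtrees of the scene, which is why neither technique is unique to ray tracing.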

PC Perspective: Antialiasing is somewhat problematic for ray tracing, since the "rays" being cast either hit something, or they don’t. Hence post-processing effects might be problematic. Are there other limitations that ray tracing has that you are aware of?

David Kirk, NVIDIA: Speed. Notwithstanding some contrived demos, ray tracing is currently significantly slower than rasterization. Given a complex scene with lots of triangles, lots of changes from frame to frame, lots of lights, and complex shaders, modern ray tracers running on multicore CPUs are not fast enough for high-quality, real-time rendering. Another benefit of ray tracing that is often cited is that you can render everything with ray tracing. But, if you choose to do everything with ray tracing, that's a lot of rays!

First, you must trace rays for visibility (what objects the eye sees directly) and antialiasing. Then, for each object that is hit, you must trace shadow rays, to determine if the point on the surface can "see" the light or if it is in shadow. More modern film-rendering software goes a step beyond this and looks not only at light sources, but considers that every other surface in the environment can reflect light. So effectively everything is a light source.
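The shadow-ray step Dr. Kirk describes can be sketched as follows: after a primary ray finds a surface point, a second ray is shot toward the light, and the point is in shadow if any occluder lies between them. The sphere occluders and all names here are illustrative assumptions, not code from any shipping ray tracer.

```python
import math

# Sketch of the shadow-ray test: from a shaded surface point, shoot a
# ray toward the light; if any occluder intersects it closer than the
# light, the point is in shadow.

def hit_sphere(origin, direction, center, radius):
    """Nearest positive ray parameter t for a ray-sphere hit, or None."""
    oc = tuple(o - c for o, c in zip(origin, center))
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 1e-6 else None        # small epsilon avoids self-hits

def in_shadow(point, light, occluders):
    """True if any occluding sphere sits between `point` and `light`."""
    to_light = tuple(l - p for l, p in zip(light, point))
    dist = math.sqrt(sum(d * d for d in to_light))
    direction = tuple(d / dist for d in to_light)
    for center, radius in occluders:
        t = hit_sphere(point, direction, center, radius)
        if t is not None and t < dist:    # hit before reaching the light
            return True
    return False

light = (0.0, 10.0, 0.0)
blocker = ((0.0, 5.0, 0.0), 1.0)          # sphere between origin and light
print(in_shadow((0.0, 0.0, 0.0), light, [blocker]))   # True: blocked
print(in_shadow((5.0, 0.0, 0.0), light, [blocker]))   # False: clear path
```

With every surface treated as a light source, as Kirk notes, each shaded point needs many such secondary rays rather than one per light, which is where the ray counts explode.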

A related effect is called ambient occlusion. An example of ambient occlusion is in the corner of a room, where the points on the surface of the wall near the corner can't "see" very much of the room, so those points are not as well-lit as points in the center of the wall. In order to do a good job of rendering these effects, you would have to shoot tens or hundreds of rays per pixel. This is far from real time. As a side note, these effects are "soft" and very well-approximated through rasterization and texturing techniques in real-time.
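The cost Kirk cites (tens or hundreds of rays per pixel) comes from Monte Carlo estimation: ambient occlusion is approximated by shooting many random hemisphere rays from a surface point and counting the fraction that hit nearby geometry. A minimal sketch, using an assumed toy scene of a floor at y = 0 meeting a wall at x = 0 (the corner-of-a-room case from the text); all names are illustrative.

```python
import math
import random

# Monte Carlo ambient occlusion sketch: from a point on the floor (y = 0),
# shoot random rays into the upper hemisphere and count how many hit the
# wall at x = 0 within `max_dist` (distant geometry contributes nothing).

def sample_hemisphere(rng):
    """Uniform random unit direction in the upper (y > 0) hemisphere,
    via rejection sampling in the unit cube."""
    while True:
        d = (rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(-1, 1))
        n = math.sqrt(sum(c * c for c in d))
        if 1e-6 < n <= 1.0:
            d = tuple(c / n for c in d)
            return d if d[1] > 0 else (d[0], -d[1], d[2])

def ambient_occlusion(point, rays=1000, max_dist=2.0, seed=1):
    """Fraction of hemisphere rays from `point` blocked by the wall x = 0."""
    rng = random.Random(seed)
    blocked = 0
    for _ in range(rays):
        d = sample_hemisphere(rng)
        if d[0] < 0:                      # heading toward the wall
            t = -point[0] / d[0]          # ray parameter where x reaches 0
            if t < max_dist:
                blocked += 1
    return blocked / rays

print(ambient_occlusion((0.1, 0.0, 0.0)))  # near the corner: roughly half blocked
print(ambient_occlusion((5.0, 0.0, 0.0)))  # far from the wall: 0.0, fully lit
```

Even this single soft effect needs hundreds of samples per point to be noise-free, which is why Kirk calls it far from real time and notes that rasterization-based approximations are attractive instead.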

PC Perspective: While the benefits of ray tracing do look compelling, why is it that NVIDIA and AMD/ATI have concentrated on the traditional rasterization architectures rather than going ray tracing?

David Kirk, NVIDIA: Reality intrudes into the most fantastic ideas and plans. Virtually all games and professional applications make use of the modern APIs for graphics: OpenGL(tm) and DirectX(tm). These APIs use rasterization, not ray tracing. So, the present environment is almost entirely rasterization-based. We would be foolish not to build hardware that runs current applications well.

Ray tracing may be the future of rendering—it’s definitely part of the future at least. There is an old joke that goes "Ray tracing is the technology of the future and it always will be!". I don't think that's completely true, but I do believe that ray tracing is not the answer—it's part of an answer. Furthermore, I believe that the C/C++ language programming interfaces for GPU computing are useful for programming ray tracing to run on GPUs. Over time, I expect the graphics APIs will evolve to embrace ray tracing as part of the 3D graphics "bag of tricks" for making images.

PC Perspective: Is there an advantage in typical pixel shader effects with ray tracing or rasterization? Or do many of these effects work identically regardless?

David Kirk, NVIDIA: Whether rendering with rasterization or ray tracing, every visible surface needs to be shaded and lit or shadowed. Pixel shaders run very effectively on rasterization hardware, and the coherence, or similarity, of nearby pixels is exploited by the processor architecture and special graphics hardware, such as texture caches. Ray tracers don't exploit that coherence in the same way. This is partly because a "shader" in a ray tracer often shoots more rays, for shadows, reflections, or other effects. There are other opportunities to exploit coherence in ray tracing, such as shooting bundles or packets of rays. These techniques introduce complexity into the ray tracing software, though.
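The ray-packet idea Dr. Kirk mentions can be sketched simply: nearby rays are traced together, so one bounding-box test can reject an entire packet against a subtree of the acceleration structure instead of testing each ray separately. A minimal sketch assuming axis-aligned bounding boxes (AABBs) and the standard slab test; all names are illustrative.

```python
# Packet-tracing sketch: if every ray in a coherent packet misses a
# node's bounding box, the whole packet skips that subtree at once.

def ray_hits_aabb(origin, direction, box_min, box_max):
    """Standard slab test: does the ray intersect the box at some t >= 0?"""
    t_near, t_far = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:                 # ray parallel to this slab
            if not (lo <= o <= hi):
                return False
        else:
            t0, t1 = (lo - o) / d, (hi - o) / d
            if t0 > t1:
                t0, t1 = t1, t0
            t_near, t_far = max(t_near, t0), min(t_far, t1)
            if t_near > t_far:
                return False
    return True

def packet_may_hit(origins, directions, box_min, box_max):
    """Conservative packet test: False means every ray misses the box,
    so the whole packet can skip this subtree with one decision."""
    return any(ray_hits_aabb(o, d, box_min, box_max)
               for o, d in zip(origins, directions))

# A 2x2 packet of parallel rays pointing down +z:
origins = [(x, y, 0.0) for x in (0.0, 0.1) for y in (0.0, 0.1)]
dirs = [(0.0, 0.0, 1.0)] * 4
print(packet_may_hit(origins, dirs, (-1, -1, 5), (1, 1, 6)))    # True: box ahead
print(packet_may_hit(origins, dirs, (10, 10, 5), (11, 11, 6)))  # False: all culled
```

The payoff depends on the rays staying similar: once secondary rays scatter in different directions, packets diverge and the shared test stops saving work, which is the complexity Kirk alludes to.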

PC Perspective: Do you see a convergence between ray tracing and rasterization? Or do the disadvantages of both render types make it unpalatable?

David Kirk, NVIDIA: I don't exactly see a convergence, but I do believe that hybrid rendering is the future. Ray tracing is excellent at producing some effects, but slow at others. So, if you try to use ray tracing for everything, there is a very good chance that it won't be very fast. Rasterization is blisteringly fast, but not well-suited to all visual effects. A hybrid renderer can parsimoniously choose the best of multiple techniques, to produce the best quality images quickly. By the way, that is how the "gold standard" of picture making—film rendering—works. Films are produced using many techniques, not exclusively ray tracing. Those pictures are not real-time, and still they don't choose ray tracing for everything. Why is that? It's because ray tracing is not good for everything, and too slow for many things.

PC Perspective: In terms of die size, which approach is the more efficient use of silicon?

David Kirk, NVIDIA: I don't think that ray tracing vs. rasterization has anything to do with die size. Rasterization hardware is very small and very high-performance, so it is an efficient use of silicon die size. Rasterization and ray tracing both require a lot of other processing, for geometry processing, shading, hierarchy traversal, and intersection calculations. GPU processor cores, whether accessed through graphics APIs or a C/C++ programming interface such as CUDA, are a very efficient use of silicon for processing.

PC Perspective: Because GPUs are becoming more general processing devices, do you think that next generation (or gen +2) would be able to handle some ray tracing routines? Would there be a need for them to handle those routines?

David Kirk, NVIDIA: There are GPU ray tracing programs now. Several have been published in research conferences such as SIGGRAPH. Currently, those programs are roughly as fast as any CPU-based ray tracing program. I suspect that as people learn more about programming in CUDA and become more proficient at GPU computing, these programs will become significantly faster.

PC Perspective: What are your thoughts on the ability for ray tracing to scale across multiple platforms easily by decreasing the number of rays (and thus the resolution) and adapting the application to different hardware such as mobile gaming and/or cell phones? How does this compare to how rasterization engines can scale?

David Kirk, NVIDIA: This is really not an important difference between ray tracing, rasterization, or any other rendering technique. You can always make pictures faster by making them smaller and thus lower resolution. One important benefit of rasterization is that the power consumption is much lower for fixed function hardware than programmable hardware. So, rasterization has a significant advantage for mobile platforms where power is a concern. I still hope that we'll be able to use ray tracing for some "eye candy" effects on low-power platforms, though :-)

Dr. Kirk obviously has a different opinion on the future of ray tracing than Intel's researchers do - and that is to be expected. NVIDIA (and AMD/ATI for that matter) has a financial stake in computer graphics research, and thus its decisions are going to be based less on "pie-in-the-sky" theory and more on what real-world applications of any such technology can accomplish.

NVIDIA, then, obviously disagrees with the statements Intel has made that combining rasterization and ray tracing renderers is not feasible. Dr. Kirk suggests that the two will be able to work together to provide better image quality without dramatically hindering performance, though the details of such implementations aren't available.

Also interesting is Dr. Kirk's mention of the CUDA platform, which allows programmers to write applications for GPUs in much the same way they write them for CPUs. I am very curious to see how a completely independent party would rate the performance of each option: Intel's CPU-based ray tracers of today or NVIDIA's CUDA-based ray tracing approach. Which would be faster and easier to work with? These are questions we can hope to see answered as ray tracing evolves and more interested parties, from universities to game developers, get involved.

It is also apparent that Intel has a LOT of work ahead of it if it is actually going to try to convince game developers to adopt ray tracing as their primary rendering option. (We should clarify that we don't actually know this is happening, but all signs point in that direction.) I recently wrote a news piece about Intel's purchase of Project Offset, a game engine developer, and Havok, a physics API, and theorized that Intel might be building its own game engine to give away or sell to developers in order to push ray tracing adoption.

If that's the case, NVIDIA and AMD will have a battle on their hands, but luckily it is one they are used to fighting. They just aren't used to fighting an 800 lb gorilla entering the discrete GPU market, and that might require a very different strategy. NVIDIA's stance is that rasterization is not inherently worse for gaming than ray tracing, if only because of all the years of work and research that have gone into it up to today. The company seems willing to support ray tracing on its cards in terms of programmability with CUDA and let developers decide which option is right for the industry.

My thanks go out to Dr. David Kirk for taking the time to answer our questions and amuse us with theories on the future of gaming and graphics. Also, thanks goes to our own Josh Walrath for stepping up with detailed questions for our interview. Stay tuned to PC Perspective for more information and research in the world of ray tracing and rasterization!
