Real Time Ray-Tracing in your Pocket

We have written a lot about real-time Ray-Tracing on this Intel blog, but so far it may have seemed as though this technology is out of reach for most consumers. That’s because until recently, we have demonstrated Ray-Tracing at high resolutions, using the most powerful consumer platforms available. Those systems featured eight cores of the most advanced PC architecture available, running at extreme clock speeds and carrying some extreme power budgets.

However, since Intel aims to meet the requirements of many diverse markets – from Extreme Performance all the way down to Extreme Mobility – the Intel research labs are now ready to show how Ray-Tracing can scale to the complete opposite end of the spectrum.

Ultra-mobile devices have become very popular in the last several years, and some of them, such as the Nintendo DS and PlayStation Portable, have grown hugely popular in the gaming segment. Gaming on Ultra Mobile PCs (or UMPCs) is a newer concept, and Intel has been investing in technology that will allow the productivity and gaming capabilities of a PC to fit in the palm of your hand – or the pocket of your shirt. The Sony VAIO UX Micro PC is one such example, and at the Game Developers Conference in San Francisco, Daniel Pohl is showing how Ray-Tracing can scale to even the smallest of personal computers.

How is this possible, you might ask?

It’s because Ray-Tracing draws a scene in 3D by tracing rays of light from the pixels on the screen to the surfaces of objects in view. In the case of a UMPC, when one is viewing 3D space through a 4.5” LCD screen, fewer rays are required, and hence the CPU requirements are substantially lower. For example, you might prefer a high definition (1280×720 resolution) display on your PC, but with the much smaller viewable area of a Sony VAIO UX Micro PC, lower resolutions may be quite acceptable (480×272, for example). At this lower resolution, rendering requires only about 14% of the computation that had been needed for high definition. To put this into perspective, a 480×272 screen is still more than two and a half times the resolution of the Nintendo DS (per display, at 256×192).
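Since one primary ray is cast per pixel, the savings can be checked with simple pixel arithmetic (a rough sketch; real workloads also depend on scene complexity):

```python
# One primary ray per pixel, so compute cost scales with pixel count.
hd = 1280 * 720    # 921,600 pixels at high definition
umpc = 480 * 272   # 130,560 pixels on the UMPC screen
ds = 256 * 192     # 49,152 pixels per Nintendo DS display

print(f"UMPC vs HD: {umpc / hd:.1%}")   # about 14% of the HD ray count
print(f"UMPC vs DS: {umpc / ds:.2f}x")  # about 2.66x the DS resolution
```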

But is scaling the CPU requirements down to about 14% enough to fit within the computational capabilities of an ultra-mobile CPU? It just so happens that it is. At this year’s GDC, Daniel Pohl has shown that the Sony VAIO* UX Micro PC is capable of 25–45 frames per second when rendering the same Quake IV demo that he showed at last year’s Intel Developer Forum. Keep in mind that last year’s Quake IV demo ran on an 8-core PC at 3.0GHz, delivering almost 100 frames per second. Now, this same technology can be delivered on a single-core ultra-mobile CPU running at 1.2GHz, while still allowing playable frame rates.
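A back-of-envelope check makes these numbers plausible. This sketch assumes performance scales linearly with aggregate clock rate and with pixel count, which ignores architectural differences between the two CPUs, yet it lands inside the reported range:

```python
# Scale the 8-core, 3.0GHz, ~100fps, 1280x720 result down to
# a single 1.2GHz core rendering 480x272.
compute_ratio = (1 * 1.2e9) / (8 * 3.0e9)  # 1/20 the aggregate clock
pixel_ratio = (480 * 272) / (1280 * 720)   # ~14% of the rays per frame
predicted_fps = 100 * compute_ratio / pixel_ratio
print(f"{predicted_fps:.0f} fps")          # ~35 fps, within the 25-45 range
```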

Now, keep in mind that this is just the graphics engine running. The technology still needs to develop to the point where we can run multiple rays per pixel within a sensible compute budget, because that is what will allow us to add the kinds of lighting effects, per-pixel correct shadows and reflections, and complex geometry that gamers expect in leading-edge games. But don’t worry: thanks to Moore’s Law, as well as breakthroughs enabled by the Intel Research Labs, growing computational capabilities and software optimizations will make both UMPCs and Extreme Gaming platforms better suited to real-time Ray-Tracing over time.
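To illustrate why those extra effects are expensive, consider a hypothetical shading model with one shadow ray per light and one reflection bounce. These are illustrative numbers, not the demo’s actual configuration:

```python
width, height = 480, 272
lights, bounces = 2, 1                 # hypothetical scene parameters
rays_per_pixel = 1 + lights + bounces  # primary + shadow + reflection rays
total_rays = width * height * rays_per_pixel
print(total_rays)  # 522240 rays per frame, vs 130560 with primary rays only
```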

Moore’s Law working in our favor

Moore’s Law works in favor of Ray-Tracing because it assures us that computers will get faster – much faster – while monitor resolutions will grow at a much slower pace. As computational capabilities outgrow computational requirements, the quality of real-time Ray-Tracing will improve, and developers will have an opportunity to do more than ever before. We believe that Ray-Tracing will let developers deliver more content in less time, because when you render things in a physically correct environment, you can achieve high levels of quality very quickly. And with an engine that scales from the Ultra-Mobile to the Ultra-Powerful, Ray-Tracing may become a very popular technology in the coming years.

A larger issue: scanline rendering can be done on multicore chips too. I’m surprised that Intel hasn’t produced a scanline renderer for its quad-core (or higher) CPUs, allowing computer manufacturers to ditch the 3D accelerator on the video card (or at least use a cheaper one).
I worked on speech recognition in the early 90’s and saw DSP cards (for speech recognition) vanish as Intel put floating point into 486s. It’s the same thing.
PS – I’m doing my own rendering in my own game, but it certainly isn’t real-time (for various reasons).

Gabor,
Ray Casting is something quite different. It correlates the y-axis of the screen to the up-down axis of your world data in order to vastly reduce the amount of calculations necessary, which is why it was possible back in the early 90’s.
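To make the difference concrete: a classic ray caster fires one ray per screen column and scales each wall slice by its hit distance, while a ray tracer needs at least one ray per pixel. A minimal sketch at an illustrative early-’90s resolution:

```python
width, height = 320, 200         # illustrative early-90s resolution
raycast_rays = width             # one ray per column, walls scaled by distance
raytrace_rays = width * height   # at least one ray per pixel
print(raytrace_rays // raycast_rays)  # 200x fewer rays for ray casting
```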

“…keep in mind that this is just the graphics engine running, and the technology still needs to develop to the point where we can run multiple rays per pixel within a sensible compute budget, because that will allow us to add the kinds of lighting effects, per-pixel correct shadows and reflections, and complex geometry that gamers expect in leading edge games.”

Today I may have had an idea to do exactly that: reduce the cost by a really big percentage. But it is crazy and complicated, so I first need to calculate the cost of the technique before I can tell you more. What I can tell you is that with current hardware it could be a nice fit for the PC game market, but it would be useless for UMPCs. I will post again if I have positive results from the calculations, and I would like to make it a reality in combination with your Ray-Tracing techniques.
Regards, Dominik

Ray tracing is known to offer better quality for crisp reflections and refractions, particularly with many inter-reflections. However, many of these classic ray tracing effects can already be faked by rasterization, i.e. what GPUs already do. Rasterization gives better quality for blurry reflections and refractions, because ray tracing would need to cast many rays to achieve them.
I can’t think of many significant benefits of ray tracing for graphical quality, other than just for simplifying code for developers. To get cinematic quality, one either needs to use full path tracing, which is far more resource intensive than ray tracing, or do as many cinema companies do, and just write lots of very complex shaders.
It should be noted that many cinematic companies use RenderMan, which divides up models into polygons, and rasterizes the polygons, so production quality movies are often created with rasterization, the exact same technology as current GPUs use. In fact, many companies do pre-production renderings on graphics cards, as they share the same rasterization technology.
See this article for a clarification on the quality differences between ray tracing and rasterization: http://www.beyond3d.com/content/articles/94/4
Intel’s research into real time raytracing may be because they’re looking for applications for Larrabee. This is definitely worth researching, but I thought the caveat should be put out that ray tracing currently doesn’t offer quality much above rasterization.

Perhaps I was overly pessimistic about the possibilities of real-time ray tracing in my last post. Other advantages of ray tracing are that it may scale better for large models, and that hybrid rendering methods, partly rasterized and partly raytraced, may also be of use.
John Carmack also gave a comment on ray-tracing; he likes the idea of having octrees built into hardware: http://www.pcper.com/article.php?aid=532&type=overview