Keep in mind that the RTX showcases were handcrafted, small toy scenes with a few objects. RTX has nothing to do with reflection quality, it just speeds up any ray casts. This could also bring better reflection quality and detail for Enscape, but it's not the holy grail feature that automatically makes all reflections beautiful, fast and realistic.

Right now my best card is the Titan V, but even overclocked it can't quite perform well enough to safely run my models in VR. My customers just won't accept less detailed or segmented models. I have thought about trying the RTX 6000, but it is expensive. Unless the RTX Quadro can provide a 40% gain over the Titan V, it wouldn't make sense.


I wouldn't call this a toy scene though (granted it's still a part of a highly optimized game).

It sounds like, to take advantage of AI denoising (which is still necessary with real-time ray tracing, despite the 10x speedup in ray casting over the previous non-RTX generation of cards), Enscape may need to submit a library of images for the neural nets to learn from. Though general examples from other render engines may suffice, since I know neural nets need huge content libraries to train effectively, i.e. thousands or millions of renderings.

"keeping in mind that rasterization is a hack, it’s good to periodically look at what that hack is trying to achieve and whether that hack is worth the trade-offs.

Or to put this another way: if you’re going to put in this much effort just to cheat, maybe it would be better to put that effort into accurately rendering a scene to begin with?

Now in 2018, the computing industry as a whole is starting to ask just that question. Ray tracing is still expensive, but then so are highly accurate rasterization methods. So at some point it may make more sense to just do ray tracing at certain points rather than to hack it. And it’s this train of thought that NVIDIA is pursuing with great gusto for Turing."

In this article I found it interesting that rendering can be divided into two parts: ray casting and shading. RTX can be used for both, but also for only one of them. My hope is that the ray casting alone could help a lot to solve important visual problems.
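To make that split concrete, here's a toy sketch (purely my own illustration, nothing to do with Enscape's actual engine) that separates the ray-casting step, which RTX hardware can accelerate, from the shading step, which stays a regular shader-core computation:

```python
import math

def cast_ray(origin, direction, sphere_center, sphere_radius):
    """Ray casting step: find the nearest hit of a ray with a sphere.
    Returns the hit distance t, or None on a miss. This is the part
    that hardware ray tracing accelerates."""
    # Solve |origin + t*direction - center|^2 = r^2 for t
    # (a quadratic in t; direction is assumed normalized, so a == 1).
    oc = [o - c for o, c in zip(origin, sphere_center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - sphere_radius ** 2
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

def shade(normal, light_dir):
    """Shading step: a simple Lambertian term. It doesn't care how
    the intersection was found, which is why the two steps can be
    accelerated independently."""
    return max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
```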

Just a query about these new cards. Is Enscape going to make use of the ray tracing & AI capability offered with these cards? Especially on the VR side of things. I realize the AI has to be built pretty deeply into software but it would be good to know if Enscape could utilize this technology and how long it would take to develop.

In VR environments, running Enscape on Ultra settings creates a much more realistic view. If I was able to run Ultra for my clients, that would be great. At this stage the 1080 cards are still a bit laggy on Ultra settings, so I avoid exposing new VR users to it.

Also might pay to have a think about what comes after Ultra settings as the tech is moving so fast.

RTX support is currently in active, but still early, development, which means we can't announce a specific release date yet - but RTX ray tracing support will come this year. So if you're considering buying new hardware to use with Enscape, it always makes sense to go with the latest generation to get the best performance.

It would be interesting to hear your experience in developing for RTX for those of us who know nothing about how that side of the business works. We just enjoy the pretty pictures...

What challenges are you faced with?

How is it different from the technology you currently implement?

Does the RTX tech provide you with variable levels of raytracing to balance the performance / quality or is it all or nothing? Meaning can you control the amount of time that is raytraced to maintain the real-time aspect of Enscape but also use more time when saving out the image for a final render?
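As a toy illustration of what such a variable budget could look like (my own sketch, not how Enscape actually works): a path tracer averages many noisy per-pixel samples, so a real-time frame can stop after a small sample budget while a saved final render spends a much larger one, with noise shrinking roughly as 1/sqrt(samples):

```python
import random

def render_pixel(sample_budget, true_value=0.5, noise=0.2, seed=0):
    """Monte Carlo pixel estimate: average `sample_budget` noisy
    samples. A small budget keeps the frame real-time but noisy;
    a large budget (as for an exported still) converges toward the
    true value. Hypothetical stand-in for a path tracer's per-pixel
    sample loop."""
    rng = random.Random(seed)
    total = sum(true_value + rng.uniform(-noise, noise)
                for _ in range(sample_budget))
    return total / sample_budget
```

With 4 samples the estimate can be visibly off; with tens of thousands it sits very close to the converged value, which is the trade-off a "time budget" knob would expose.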

It conveys just how complex Enscape already is, and the advanced rendering methods they've been using to achieve real time path tracing in gpus before RTX even existed. I consider myself relatively well versed in rendering lingo, and I didn't understand a good chunk of that article...

One can infer a few things though - the fact that Enscape already utilizes path tracing in some form would appear to set it up well to take advantage of the dedicated ray tracing cores in RTX, which are said to provide a 6-10x speedup over the last generation cards in ray tracing tasks. I'm sure it's a lot more complicated than that though.

"the limiting factor of path tracing is not primarily raytracing or geometric complexity... they mainly depend on the number of (indirect) light scattering computations and the number of light sources... the number of light scattering events does not depend on scene complexity... It is therefore thinkable that the techniques we use could well scale up to more recent games."

They're using some cheats as well though:

"However, while elegant and very powerful, naive path tracing is very costly and takes a long time to produce stable images. This project uses a smart adaptive filter that re-uses as much information as possible across many frames and pixels in order to produce robust and stable images."


Anything else you want to share?


TowerPower already gave quite a good overview on the topic in his answer.

Here are some of the details I can share on what you've asked:

One of the main challenges up to now was that our existing rendering engine is implemented entirely on the graphics API OpenGL. However, Nvidia's RTX technology is only available for the recently introduced APIs DirectX 12 and Vulkan, which differ greatly in how they work compared to DirectX 11 or OpenGL. To my knowledge we're one of the first/few developers that now have Vulkan RTX working in interoperability with an OpenGL engine.

The main difference is "just" how the ray intersection with your scene geometry is calculated, which is required for the lighting calculations in a path tracing algorithm. But since this operation is the most expensive one in path tracing, hardware-accelerated ray intersection allows quite a speedup compared to previous software implementations. It's basically the same step as when real-time rasterization of 3D graphics was still done in software back in the 90s, until GPUs with dedicated hardware for it were introduced.
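For reference, the kind of intersection test being moved into hardware looks something like the classic Möller-Trumbore ray/triangle routine below (a plain software sketch for illustration; real hardware also accelerates traversal of the BVH acceleration structure over millions of triangles):

```python
def ray_triangle(orig, dirn, v0, v1, v2, eps=1e-8):
    """Moller-Trumbore ray/triangle intersection: returns the hit
    distance t along the ray, or None on a miss. Performed per
    triangle, many millions of times per frame, this is the kind of
    operation RTX dedicates hardware to."""
    def sub(a, b): return [x - y for x, y in zip(a, b)]
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def cross(a, b):
        return [a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0]]

    e1, e2 = sub(v1, v0), sub(v2, v0)
    h = cross(dirn, e2)
    a = dot(e1, h)
    if abs(a) < eps:           # ray parallel to the triangle plane
        return None
    f = 1.0 / a
    s = sub(orig, v0)
    u = f * dot(s, h)
    if u < 0.0 or u > 1.0:     # outside first barycentric bound
        return None
    q = cross(s, e1)
    v = f * dot(dirn, q)
    if v < 0.0 or u + v > 1.0:  # outside the triangle
        return None
    t = f * dot(e2, q)
    return t if t > eps else None
```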

RTX is just a tool to do ray-geometry intersections. What performance or quality you get is completely up to how the developer implements the rendering algorithm. As TowerPower mentioned: we've already implemented a real-time capable path tracing algorithm, so RTX won't change much about the way an image is rendered with Enscape. But it will allow users with RTX hardware to benefit from a significant speed-up of indirect light computations. The result being that you'll get better frames per second in a scene with the same geometric complexity, or similar performance to what you currently have, but in much larger or more detailed scenes.

So it's unlikely that the feature set of Enscape will differ much (if at all) between RTX and non-RTX users - you'll definitely still be able to render the same image quality at very competitive speed using Enscape without RTX hardware in the future.


I am in general quite far from understanding the topic from the technical side, and I only evaluate tools by the abilities they give. So I am sorry if my question sounds dull, but will Enscape be able (with the help of RTX) to fix old problems like mirror reflections? I've seen the RTX on/off videos, and there I could clearly see that problem being solved.


As stated above, RTX is a mere tool which doesn't magically make "beautiful reflections" by itself. What it does, thanks to its hardware acceleration, is speed up the calculations so much that we can spend more time computing more complex geometry for reflections, or use better lighting and texturing for reflections, which then makes those reflections look more realistic.

Better-looking reflections are something that'd be entirely possible right now without RTX technology; however, it'd slow things down so much and eat so much memory that it'd be unusable for most users, or crash for any slightly more complex scene. By the way, you might like to try our latest Preview versions - we've already enhanced the reflection quality there, which now includes fully textured reflections on Ultra quality (entirely without RTX).

And please beware: What you're currently seeing in RTX on/off videos is usually handpicked marketing material (or from the Battlefield V game), which you really can't compare with the scenarios Enscape is dealing with.


Is this factoring in usage of the dedicated AI Tensor cores? AI denoising and upsampling (DLSS) have been advertised by Nvidia as the critical factors allowing ray tracing to be possible today rather than in 10 years. I'm pretty sure Enscape already uses some sort of denoising filter, but it's not AI-accelerated, correct? Image training is the time-consuming part, but it sounds like some offline render engines have had success just using Nvidia's algorithm out of the box, without training it on their own images.

I'm curious about the AI denoiser too, but the way it's been marketed for games, it seems like it's trained just for that game. In my feeble mind that means the AI learns what images look like in just that game, since in a closed sandbox like Battlefield there are only so many variations. For ArchViz the content is never the same, so it seems like the AI just couldn't work the way it's intended to for games.

I also base this assumption on a recent article about Nvidia's AI removing text graphics from images. The AI can both understand what text is and separate it from the image, but the trick is understanding what lies under the text to fill it back in with the correct color pixels. I don't know if this was learned from seeing the same images without text, or just from similar images.
