Hi, over the last couple of days I've been working hard on a custom ray tracing benchmark targeting RTX-class GPUs. I managed to set up NVIDIA's Fermat rendering engine, which is based on CUDA and uses OptiX Prime for ray-triangle intersection. Since OptiX also supports Pascal and Maxwell, we can compare those "prev-gen" GPUs with the Turing-based RTX 20-series in ray tracing scenarios.

With a bit of CMake magic I got it running on Maxwell and Pascal as well. For now, I have only tested on Windows. I'm eager to see what kind of performance the new 2080 (Ti) will deliver; supposedly, OptiX 5.2 and CUDA SDK 10 will add Turing support.
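The CMake tweak essentially amounts to asking nvcc to generate code for the older architectures too. A minimal sketch of what that looks like with the classic FindCUDA module (the target and file names are illustrative assumptions, not Fermat's actual build files):

```cmake
# Sketch only -- target/file names are hypothetical, not from Fermat's CMakeLists.
find_package(CUDA 10.0 REQUIRED)

# Emit SASS for Maxwell and Pascal, plus PTX so newer GPUs
# (e.g. Turing) can JIT-compile at load time.
set(CUDA_NVCC_FLAGS ${CUDA_NVCC_FLAGS}
    -gencode arch=compute_52,code=sm_52        # Maxwell (GTX 900 series)
    -gencode arch=compute_61,code=sm_61        # Pascal  (GTX 10 series)
    -gencode arch=compute_70,code=compute_70   # PTX fallback for newer architectures
)

cuda_add_executable(rt_benchmark main.cu)
```

Without the extra `-gencode` entries, a binary built only for one architecture simply fails to launch kernels on the others, which is why the older cards need to be compiled in explicitly.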