It's still quite uninteresting from a light transport standpoint, but what makes it unique is that all surfaces are decomposed into microgeometry during rendering, much as in the Reyes rendering architecture. This allows "perfect" rendering of curved surfaces, as well as displacements, both in a very memory-efficient way.

The technique I'm using is essentially the same as in the paper "Two-Level Ray Tracing with Reordering for Highly Complex Scenes" by Hanika et al., but with a couple of minor exceptions.

Psychopath is still in its early stages, as I'm sure you can tell. There's still a lot left to do, and a lot of crappy temporary code in there just to get things working. Eventually, once the core is stable and (relatively) complete, I'm hoping to experiment with which of the more advanced light transport techniques can be implemented in this architecture. One of the tricky things is that since the dicing rates are based on ray differentials starting from the camera, that would seem to preclude effective use of bidirectional path tracing. Also, since most of the architecture's efficiency comes from tracing and reordering hundreds of thousands of rays at once, that would seem to preclude MLT. But I'm looking forward to seeing if I can find (possibly ad-hoc) ways around those limitations.

Still a ways off from that, though. Progress is slow due to limited free time.

The input for this image was the raw patches (no pre-subdivision or tessellation). All geometry is dynamically diced during rendering down to a sub-pixel level, so this is a pixel-perfect rendering of the patches.
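To make the split-and-dice idea concrete, here is a toy sketch (my own illustration, not Psychopath's actual code) of the Reyes-style decision: a patch is recursively split until its projected size is small enough to dice into a single grid of sub-pixel micropolygons. The `Patch` class and the `MAX_GRID_DIM` cap are assumptions for the example.

```python
MAX_GRID_DIM = 16  # assumed cap on dice resolution per grid

class Patch:
    """Stand-in for a bicubic patch: we track only its projected size in pixels."""
    def __init__(self, pixels_across):
        self.pixels_across = pixels_across

    def split(self):
        # Splitting a patch roughly halves its screen-space extent.
        half = self.pixels_across / 2.0
        return Patch(half), Patch(half)

def split_or_dice(patch):
    """Return dice resolutions (one per leaf patch) chosen so that every
    micropolygon ends up at most one pixel across."""
    res = int(patch.pixels_across) + 1  # resolution for <= 1 pixel quads
    if res <= MAX_GRID_DIM:
        return [res]  # small enough: dice into one res x res grid
    # Too big for one grid: split in half and recurse on both halves.
    a, b = patch.split()
    return split_or_dice(a) + split_or_dice(b)

# A patch spanning 100 pixels ends up as eight pieces, each dicing into
# a grid of at most 16x16 micropolygons.
print(split_or_dice(Patch(100.0)))
```

Because dicing happens on demand per patch, only the grids currently being traced need to live in memory, which is where the memory efficiency comes from.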

I'm very interested in micropolygon rendering myself and have been working on a GPU-based REYES renderer for a while now. My renderer uses rasterization, but micropolygon path tracing is also a topic I'm very interested in.

How's the performance of psychopath? Does the subdivision and dicing of the surfaces take most of the time or are you primarily limited by ray intersections?

In general, it tends to render equivalent scenes about 3-5x slower than Cycles (Blender's pathtracer) right now. But I've really only put significant effort into optimizing the top-level BVH traversal (and even that can be optimized more, I'm sure).

Does the subdivision and dicing of the surfaces take most of the time or are you primarily limited by ray intersections?

That depends largely on the number of samples per pixel. If you're just taking 1 sample per pixel, then surface splitting/dicing definitely dominates. But even with as few as 16 samples per pixel, you're often spending more time ray tracing. Once you get up to 256 samples per pixel, ray tracing time typically dominates by a large margin.

It also depends a lot on the scene, though. I haven't done enough testing with enough variety of scenes to really know what exactly makes it tick, yet.

I'm very interested in micropolygon rendering myself and have been working on a GPU-based REYES renderer for a while now.

Ah! Cool! Reyes is my favorite rendering architecture. My original motivation behind Psychopath was basically to try to bring the benefits of Reyes rendering into a global illumination ray tracer.

The plan is for dicing rates to be determined by ray differentials, so it will work properly through distorted reflections/refractions/etc. Right now I'm using a much less robust "ray width" concept that basically treats individual rays as thin cones, but that's temporary while I'm focusing on other aspects of the architecture.
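To illustrate the "ray width" idea as I've described it (this is my own sketch, not Psychopath's code): each ray is treated as a thin cone whose width grows linearly with distance, and the dice rate is chosen so micropolygons come out about as large as the cone's width at the hit point. The parameter names are assumptions.

```python
def ray_width(width0, spread, t):
    """Width of the thin cone at distance t along the ray."""
    return width0 + spread * t

def dice_rate(patch_world_size, width0, spread, t_hit):
    """Grid resolution so each micropolygon spans roughly one ray width."""
    w = ray_width(width0, spread, t_hit)
    return max(1, int(patch_world_size / w + 0.5))

# The same unit-sized patch dices finely when hit up close and coarsely
# when hit far away, since the cone has widened by then.
near = dice_rate(patch_world_size=1.0, width0=0.001, spread=0.001, t_hit=1.0)
far = dice_rate(patch_world_size=1.0, width0=0.001, spread=0.001, t_hit=100.0)
```

Full ray differentials would replace the single linear `spread` term with per-ray derivatives that track how the footprint distorts through reflections and refractions.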

And how about the shading? If you have textures, for example, do you evaluate/filter textures during the dicing?

The plan is for shading to be handled on a per-ray basis, as in most path tracers. The exception, of course, being displacement shading, which is done immediately after dicing.

The rationale behind that is basically correctness. If you have a shader that varies over time, for example, I'd like it to render with proper motion blur. The same goes for shaders that vary based on, e.g., incoming ray direction, ray depth, or any number of other non-bakeable parameters.
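A hypothetical example of why per-ray evaluation matters (the shader and its parameters are made up for illustration): this albedo depends on the shutter time each ray carries, so two rays hitting the same micropolygon at different times shade differently, which baked-at-dice-time shading would lose.

```python
def shader(hit, ray):
    """Evaluate a (made-up) time-varying lambert shader at a hit point."""
    # Albedo ramps over the shutter interval; baking would freeze it
    # at a single instant and destroy the motion blur.
    albedo = 0.5 + 0.5 * ray["time"]
    return albedo * max(0.0, hit["n_dot_l"])

hit = {"n_dot_l": 1.0}
early = shader(hit, {"time": 0.0})  # shutter open
late = shader(hit, {"time": 1.0})   # shutter close
```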

Most shaders don't vary that way, of course, so perhaps eventually I'll make it an option to bake the shading into the microgeometry.

In any case, I don't actually have a shading system in place yet. It's all hard-coded gray lambert right now. I want to get the ray tracing kernel to a state I'm happy with first. The shading system should be comparatively straightforward, as I'm planning to use OSL, so a lot of the hard work has already been done by people much smarter than I am.