Thank you for noticing and reporting that! The cylinders in the scene are instances, and I've not yet completed the new instancing code, which probably explains why this is incorrect. Once I complete that code I will re-check this scene. Thanks!

Very interesting that a mesh light can be mapped with a texture to produce light emission, David. It is indeed a much-needed feature with several applications, for instance simulating all kinds of displays and light signs.

First of all, my "normal" job has been very intense since Christmas, so I have had little time and energy at the end of the day for YafaRay, which has unfortunately delayed things.

On the other hand, I have been focusing on two main areas of work:

* Stopped using the native Embree subdivision functionality; I'm now exploring the possibilities of OpenSubdiv instead. The main reason for this big change is that the Embree subdivision API is... well... not nice. For example, it makes it very difficult to interpolate per-face-varying data such as UV data, and even per-vertex data is not easy to interpolate. Moreover, I could not find a decent way of generating samples from a subdivision surface in Embree. So, doing all the subdivision with OpenSubdiv is now taking me a lot of time to investigate, etc. On the bright side, I hope that if I can make OpenSubdiv work properly in YafaRay, we can use all its advantages. However, there is still a long way to go...

* Refactoring all my ugly v4 "testing code", taking into account the changes above and also making it more "modular" so that it allows different intersector kernels. For example, I will use Embree and (in case Embree is not available or not desired for some reason) a much slower and more limited intersector kernel called NanoRT. The main reason I used NanoRT as a "fallback" was as a proof of concept, to make sure the new YafaRay Intersector interface could handle other kernels in addition to Embree.

So, I hope my final v4 code is easier to understand and maintain, makes more sense, and allows for more extensibility in the future.

I know that it's quite frustrating to see v4 being delayed so much, but I'm really working on it. To give you an idea of how much work I've done so far: I've made 421 commits over the last months on top of the last available v3 version. Of course, many of those commits were tests going back and forth, etc., and eventually the actual differences between v3 and v4 will not be that many, but to be honest I've not stopped much lately...

As you know this is also a learning process for me (both for coding and for YafaRay internals and render algorithms, etc), which also takes me time and effort.

Thanks David, very interesting. I wonder whether we can still take advantage of Embree for intersecting a mesh generated with OpenSubdiv, and also what would happen with native displacement mapping? One nice thing about OpenSubdiv is that it is a really active project, with releases almost every month or so. I also believe that implementing an interface supporting several intersection kernels, instead of a monolithic design, is a good choice; you never know when corporations will axe these non-core projects.

The more I look into this the more I'm convinced that, in order to develop for GPU, you need to have lots of human and material resources to be able to develop and test for all OSs, graphic cards, drivers, etc.

I don't think I can waste any more time investigating this, so for now I will abandon this GPU idea (again) and will focus on the intersector kernels I'm using now (Embree and NanoRT), plus OpenSubdiv for mesh subdivision.

Hi David. GPU was a strong trend some time ago, and a lot of renderers use this method (V-Ray GPU, Octane, Redshift, Cycles...), but it needs a powerful and expensive GPU (Nvidia in most cases). I follow and use Corona Renderer, and thanks to Embree you can get fast results with CPU and viewport rendering like Cycles (mostly with Corona for 3ds Max). CPU can do any kind of rendering (photon mapping, SPPM, path tracing, ...), whereas GPU can only do path tracing (except Redshift, which can calculate photon mapping on GPU). So I think good CPU optimization is the better way for Yafa. I can't wait for the next-gen Yafa. Keep up the good work!

Thank you for your comments. I feel encouraged now to forget about GPU and keep focusing on Embree and CPU rendering.

I have to admit that I feel frustrated myself that the v4 development is taking so long, but the changes are really fundamental. However, today I found out that Embree has launched their new v3-beta0 release with *huge* API changes that are not compatible with previous Embree versions.

Fortunately, as YafaRay is still in a pre-alpha state, this is a good time for me to test the new Embree and try to adapt my current YafaRay Embree code to it. Hopefully, when YafaRay v4 is released, it will be using the latest Embree v3 API, to be more "future-proof".

I think I've managed to port most of the YafaRay v4 testing code to the new Embree v3.0.0-beta0 API. I have to redo the transparent shadows part of the code, though, as it's very different :-/

I've also done some tests with OpenSubDiv and a custom displacement algorithm I made myself. Just for your info, I have attached the results.

The scene is made of just two simple cubes on top of a simple plane. I applied an 8-level subdivision to both cubes, with sharp creases on the right and smooth creases on the left, and a displacement map made of two textures: a ring texture and a color Voronoi. All illuminated with IBL for effect.

Still too early (yes, I know, this is taking a lot of time!), but I start to like the results. I need to work on more optimizations before releasing it...

So I have a very similar opinion about it. It seems that GPU rendering is slowly going out of the picture, unless different/hybrid/OpenCL or more complex techniques are used (Lux, RPR/Baikal & Indigo). And GPUs have got even more expensive than comparable CPUs (a 1080 Ti vs. a Threadripper: in Maxwell they are almost on par speed-wise, but not feature-wise). Edit: GPUs also consume more power.

Compile times are slaughtering the fundamental idea of fast animation renderings. Not to mention that complex, advanced shading on GPU is still 'not a place to go' for the vast majority of users.

But on the other hand, RT/game engines have got faster, better, and more feature-rich... so IMHO it's futile to compete with these. For tackling and animating simple scenes the path is clear: GPU all the way.