So I'm wondering what the future plans for this renderer are regarding ease of use...

Currently, if you decide to use Blender with more or less default settings, you get a nice, simple workflow and a mostly working solution for most cases.

However, it is very easy for a new user like me to start experimenting with different integrators and quickly end up with a messy setup that sometimes works and sometimes doesn't.

You want bidir, you lose some AOVs and image filtering blurs the results way too much; you try Metropolis, then the denoiser can cause artifacts, etc. etc... Discrepancies, bugs, and features that work in one setup but not in another make me, as a user, frustrated and questioning what this renderer wants to be, what development will follow, and what I can count on in the future.

With such a big range of features and their possible combinations, I feel like some solutions might logically end up as dead ends.

Did you guys consider simply dropping some things that might not be usable in the future and instead focusing on the most versatile, feature-supporting solutions?

I'm not suggesting anything specific, but for example:
Drop tiled sampling, random sampling, lighting strategies (with DLSC as an on/off switch in an advanced tab), Metropolis, etc., and instead focus on the remaining things.

I understand that there are cases where some setups might be more efficient, but the user then has to figure out whether everything that worked with one setup will work with another, and keep track of that.

And for a new user coming from a renderer that might not be as good in all possible scenarios but was extremely streamlined, it is just frustrating. The fact that bidir and Metropolis handle well the extreme cases where Corona, for example, fails won't make anyone switch if they then have to figure out which settings work and which don't whenever they want to use bidir and Metropolis.

I'm thinking about this because I see certain problems keep recurring with setups that might not be worth it. Like the endless issues with tilepath: is it worth keeping it even as an option? The denoiser is another thing that is nothing but a time-consuming problem, constantly making every new user question how, and whether, it even works...
And then you develop a new and awesome feature, and now you need to make it work with all the possible render setups, even those that might rarely be used, and a shitload of problems need to be solved, slowing everything down.

I know from the Corona and V-Ray forums in the past that when the devs wanted to drop support for some features it caused some people to freak out, but it was always worth it for the devs and users to get a much more focused renderer that is not a jack of all trades but a master at what it can do.

This is not a feature request, nor another complaint about the denoiser, just food for thought and a question about how you imagine LuxCore will look and work in the future, because to me it sometimes looks like a set of experimental dead-end features all over the place.

The first and foremost project problem is finding more developers; any other discussion is both premature and academic. We can argue the best way to find new developers (better developer documentation, better project visibility, a larger user base, money, or whatever), but it is the first step before even thinking of doing anything else.

Any large architectural endeavor is not viable with the current amount of resources.

I understand. I think what I'm missing here is a clear sense of direction for the project.
You mentioned to me in another thread that the two most powerful "setups" are brute force OpenCL and, for CPU, bidir + Metropolis.
However, they are also useful in different areas, and for example bidir + Metropolis has lots of limitations, with AOVs and the denoiser not working properly, etc. I know you can say here: more developers to fix bugs, add compatibility, etc.

But what developers are you looking for? Are you building a scientific renderer? A VFX renderer? An architectural renderer? At a later stage one can turn into another, or expand.

But especially now that development is limited, I think it is important to stick with a direction and then expand. Like I mentioned, as a fresh user I don't know whether this renderer is a combination of experimental features that needs a complex mind map to see which feature works in tandem with another, or where the focus is.

I do think that a larger user base can also bring more people into development and make the project more attractive, but what user base will that be? Will it be animation/VFX oriented, archviz, science and simulations? Everybody will have different expectations and priorities: one might need advanced "fake" features and all kinds of AOVs, another precise unbiased computation, another ease of use and popular features.

I see a lot of features that are attractive to me, both present and planned, but I also see others that I'm not interested in. So what takes precedence? Can I recommend this renderer to someone else in product/archviz, or is that an area that is not currently prioritized? More developers don't answer these questions, I think. Well, I might be wrong though.

Anyone with coding ability who can help, even by adding ten lines of code, is welcome; it is open source.

LuxCore is built as a general-purpose renderer, with long-term effort put into more efficiency in the most common rendering scenarios.

Yes, the denoiser has to be improved, but you can fix it by using your current solution (mine is darktable). It is not a real roadblock; the engine is only a beta with mainly one core dev and one plugin dev (so to my eye it is a pure miracle).

There is also another thing to do to bring in more users while dev resources are hugely limited: teaching, and doing the best you can with the current version.

About features and development priority, I think the current strategy is very smart and promising. After the solidification of LC1, the next step should be LC2, and in my opinion GI caching will catch more devs' and artists' eyes when it happens. It will be a revolution, as you say yourself.

It is not fixed if the user has to look for another solution, and it cannot be fixed, because it is actually working as intended.
It is fundamentally different from how denoisers in other popular renderers work, and that will forever be a problem.

LuxCore uses OpenCL, and as Dade has said, there is no plan to implement the AI denoiser from Nvidia any time soon. Just my thoughts, though:

When Dade first started SLG (which turned into LuxCore), his idea was a brute force GPU Monte Carlo renderer. It didn't include a denoiser because the idea was to throw as many samples as you can at it until the noise cleared up. We don't even use ray differentials to decrease texture noise, for that same reason.
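
The "throw samples at it" approach rests on the basic Monte Carlo property that the noise (standard error) falls off as 1/sqrt(N), so halving the noise costs four times the samples. A minimal illustration in plain Python, nothing LuxCore-specific:

```python
import random

def estimate_pi(n_samples, seed=42):
    """Brute-force Monte Carlo estimate of pi by sampling the unit square."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n_samples

def standard_error(sigma, n_samples):
    """Monte Carlo standard error: sigma / sqrt(N)."""
    return sigma / n_samples ** 0.5

# Quadrupling the sample count only halves the expected noise:
# standard_error(s, 400) == standard_error(s, 100) / 2
```

That 1/sqrt(N) wall is exactly why denoisers became attractive: the last bit of noise is the most expensive to render away.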

From what I understand, AI denoising would require some amount of GPU memory to compute, as the trained network would need to be resident for the algorithm to be useful. This would limit the scene memory that could be loaded onto the GPU.

An aside: Dade is basically the only developer currently coding features on the core. That's one man writing the majority of the code that goes into this wonderful piece of software. Yet every day he has to write several paragraphs of a response to what I can't help but read as constant complaints by you about the denoiser, or the lack of Nvidia denoising in Lux.

Imagine the features that could be put into the next release if he didn't have to constantly explain his reasoning and defend his choices.

I'm not trying to troll or anything, but every day I come back to the forum to find more complaints about the same thing from the same person. So maybe thank the developer (not developers, as Blender, Nvidia etc. have) for what we have now and politely move on. The denoiser may see some work in the future, but don't let your wanting it now ruin the fun of this project for the other users and those freely giving their time to try to improve it.

While I don't know how a proper direct implementation would affect performance, the standalone solution is almost instant and very lightweight even at very high resolutions; other renderers that use this denoiser also don't show many limitations, and it even runs in real-time rendering. On the other hand, the resources required just to gather data for BCD limit the maximum resolution on GPU a lot (you can't render 4K on a 6 GB VRAM GPU), and for what results..?
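
To put rough numbers on the memory side (my own back-of-the-envelope estimate, not measured against either denoiser): a single full-resolution float32 RGB buffer at 4K is about 95 MiB, so a typical AI-denoiser input set of beauty + albedo + normal comes to roughly 285 MiB:

```python
def aov_bytes(width, height, channels=3, bytes_per_float=4):
    """Size of one full-resolution float32 AOV buffer in bytes."""
    return width * height * channels * bytes_per_float

w, h = 3840, 2160                       # 4K UHD
one_aov_mib = aov_bytes(w, h) / 2**20   # ~94.9 MiB per RGB AOV
three_aovs_mib = 3 * one_aov_mib        # ~285 MiB for beauty + albedo + normal
```

A few hundred MiB of buffers is noticeable but is not what fills a 6 GB card by itself; as I understand it, BCD also accumulates per-pixel sample statistics during rendering, which is a much larger overhead than the buffers alone.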

Here is what I can do: every time a release is made or a feature is implemented, I can say thanks and then quietly go use LuxCore where it works; where it doesn't, I simply use another solution, not giving a damn what other people do, or whether a new user comes and looks at a complete mess of settings and bugs that need to be worked around because everybody wants to pretend it's all nice... But how am I useful then?

Like the always-mentioned denoiser. I do have the standalone version of the Nvidia denoiser, I use it constantly, and I wouldn't do anything in LuxCore if that standalone denoiser didn't exist. Sharly says himself that he uses darktable for that purpose. The first thing new users complain about is how wrong the denoiser output looks, yet everyone here is willing to argue that BCD is a good and working solution...

What always triggers me to complain about the denoiser is the fact that every now and then someone brings out a ridiculous argument for why it is good, even though nobody uses it in production (and this is not about bugs or settings).

And the reason I tend to bring up the Nvidia denoiser is not me being a fanboy or something like that; it is because it is a solution I did test as a standalone tool on LuxCore output itself, and it is ready to be implemented (well, I think it is).

But of course, I'm not an idiot; I see that the current user base is happy with how things work now, so I'll stay away from the topic of the denoiser. I am too tired of hearing the same ridiculous arguments over and over again.

One problem with the Nvidia denoiser is that it requires CUDA and doesn't even have a CPU fallback.
By the way, from what I've seen, it looks to me like the Nvidia denoiser could be integrated in the addon without having to touch LuxCore itself. So the hurdle is a bit lower for a new developer to come along and add it.

Well, that depends on how the image pipeline works in LuxCore. For example, a chromatic aberration effect needs to be applied after denoising, otherwise the changes to the image confuse the denoiser. And to maximize its potential, some form of albedo pass is missing. I'd rather not have it in there at all than have a half-working solution; the standalone tool handles that.
I always look at it as a temporary solution until someone actually creates an open-source universal AI denoiser, or one with feature-recognition capabilities, etc.
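
The ordering constraint itself is simple to state: the denoiser wants the cleanest, least-stylized input, so any "look" effects (chromatic aberration, bloom, vignetting) must run after it. A toy sketch of enforcing that order; the stage names are illustrative labels, not LuxCore imagepipeline plugin names:

```python
def order_pipeline(stages):
    """Reorder image-pipeline stages so "denoise" runs before stylizing
    effects (chromatic aberration, bloom, ...), which would otherwise
    feed the denoiser an image it wasn't designed to handle."""
    pre = [s for s in stages if s == "denoise"]
    post = [s for s in stages if s != "denoise"]
    return pre + post

# A naive setup that applies aberration first gets rewritten:
# order_pipeline(["chromatic_aberration", "denoise", "bloom"])
# -> ["denoise", "chromatic_aberration", "bloom"]
```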