
I hope that the problems with BPT on systems where a single GPU has to display and render at the same time are taken into account. In my case, some scenes with BPT become very heavy during viewport or image renders, making my system lag badly or hang completely.

Russian Roulette got committed after 2.79 branched. At this point in the release process, only bug fixes make it from master to the 2.79 branch, but people are again allowed to check in riskier, possibly compatibility-breaking changes to master.
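For readers unfamiliar with the technique being discussed: Russian roulette probabilistically terminates light paths after a few bounces and reweights the survivors so that the estimate stays unbiased. A minimal toy sketch of the idea (the shading model and thresholds here are made up for illustration; this is not Cycles' actual code):

```python
import random

def trace_path(max_depth=64, rr_start_depth=4):
    """Toy path loop with Russian-roulette termination (illustrative).

    Each bounce pretends to add some radiance and halve the path's
    throughput; after rr_start_depth bounces, paths are killed with
    probability 1 - p, and survivors are divided by p, which keeps
    the expected value of the estimate unchanged.
    """
    throughput = 1.0
    radiance = 0.0
    for depth in range(max_depth):
        radiance += 0.1 * throughput   # stand-in for this bounce's shading
        throughput *= 0.5              # pretend each bounce halves the energy
        if depth >= rr_start_depth:
            p = max(0.05, min(1.0, throughput))  # survival probability
            if random.random() >= p:
                break                  # terminate the path here
            throughput /= p            # reweight survivors: unbiased estimator
    return radiance

# Averaged over many paths, the estimate matches the un-terminated sum
# (0.1 * sum of 0.5**d for d in 0..63, i.e. about 0.2):
random.seed(1)
mean = sum(trace_path() for _ in range(20000)) / 20000
```

The point of the reweighting step is that deep, low-energy paths stop being traced (saving time) without biasing the average, at the cost of somewhat higher variance per path.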

If the improvement in noise vs. time is important... why not finish adaptive sampling? Are there technical problems?

The new RR sampling hasn't given me any problems (in a number of diverse scenes), so it's not as if there are issues with it.

The likely reason is that Cycles in the official 2.7x releases is not supposed to have major changes to the point where it breaks UI settings, tutorials, user expectations, etc. The breaking changes are being done now because 2.7x has seen its last release and will ultimately be absorbed into the 2.8 branch.

Sweet Dragon dreams, lovely Dragon kisses, gorgeous Dragon hugs. How sweet would life be to romp with Dragons, teasing you with their fire and you being in their games, perhaps they can even turn you into one as well.
Adventures in Cycles; My official sketchbook

This alone would save us many, many render hours for our animations. If our BVH build time is 30 seconds out of a 2-minute render, and our renders usually go for 12 hours, it would shave 3 hours off each job (20 machines = 60 render hours).
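As a quick sanity check of the arithmetic above (numbers taken straight from the post, purely illustrative):

```python
bvh_build_s = 30           # BVH build time per frame (from the post)
frame_render_s = 120       # total render time per frame: 2 minutes
job_hours = 12             # wall-clock length of one render job
machines = 20              # size of the render farm

bvh_fraction = bvh_build_s / frame_render_s            # 0.25 of every frame
hours_saved_per_machine = job_hours * bvh_fraction     # 3.0 hours per job
farm_hours_saved = hours_saved_per_machine * machines  # 60.0 render hours

print(hours_saved_per_machine, farm_hours_saved)
```

So the claim is internally consistent: a quarter of every frame spent rebuilding the BVH scales to 60 machine-hours per 12-hour job across a 20-machine farm.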

While I agree this is a useful feature, I'm quite skeptical about the changes to ID and the depsgraph. That kind of update detection is something we are changing now in the 2.8 branch, together with ownership of data and the depsgraph. There is unlikely to be a need for update_id, since the depsgraph will be more per-render-engine. If we expose anything in the API now, it'll either be extra legacy cruft to support in the new design, or yet another point of failure for scripts. I'm not looking forward to either of those two things.

Taking into account the fact that update detection is rather weak (it opens a whole can of worms of intermediate render errors that users will start encountering), perhaps it's not a bad idea to hold this off until we get the depsgraph, engines, and tagging figured out first.

Implementing a feature that works well in some circumstances, when used by people who know exactly what they're doing, is easy; implementing a feature that works well when used for anything by any of the millions of Blender users isn't.

In this case, there actually are several bugs that I found (and fixed) since I last updated that diff, and I'm pretty sure some still remain.

Because writing code that does something and writing code that is production-ready are two very distinct things. Code review and cleanup exist for good reason. Blender's developers are stretched thin with the 2.8 project, so it's likely that patch review will remain slow for the foreseeable future. They can't just accept any useful-sounding patch that someone tosses onto the tracker (even Lukas' excellent Cycles work). That's how Blender's code got to be the mess that it is today.

Long-time 3D artist and member of the official Cycles Artists Module
https://www.youtube.com/user/m9105826 - Training, other stuff. Like and subscribe for more!
Follow me on Twitter: @mattheimlich or on my blog

If the BF were able to hire several more developers, then we could perhaps see a situation where they had devs dedicated to cleaning up and fixing patches after their inclusion in master (in order to sharply reduce wait times and encourage more volunteer work).

The caveat, though, is that this could only apply to patches in areas of Blender that these extra devs have deep knowledge of (and there would still be the expectation that the patch submitter at least do a decent job of making it work). That means patches with tons of bugs, crashes, and very messy code still wouldn't get in without additional work (but the lack of inclusion would then be more a product of the volunteer's effort than of a lacking review process).

It's not all about human resources... some things need time, skill, and evolution (linear processes). It doesn't matter whether a hundred people are watching the stove waiting for the water to boil, or one. And in the case of cooking, one works best.

I would like to just step in here and say that wait times for commits are not an issue for bug fixes / necessities. I submitted a patch last night (to fix adaptive compilation of CUDA with volumetrics), and it was committed within 30 minutes of my submitting it. What takes time is patches that touch a lot of functionality within Blender and are iffy in their implementation, or in whether they fit the direction of Blender. The devs, in my dealings with them, have been great at encouraging volunteer patches and helping people get code into master.

There are thousands of more secure and better-paid jobs if you know how to code at an expert level. No one would choose a job where all he has to do is code reviews.

I have never really understood whether Clamp values (even "safe" values) impose many limitations, or whether clamping can be harmful (to dynamic range and such). I think I have read somewhere that clamping is not the ideal solution to the problems it tries to solve.
The thing is, I have noticed that in master, new scenes are now created with Clamp Indirect = 10 by default. Is there a commit message or something that explains that decision? Maybe to help the denoiser?
I've also seen that clamping direct light can fix artifacts in very bright areas that appear when using the denoiser. Is there a safe value for Clamp Direct that doesn't impose many limitations on the final render?
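Conceptually, the Clamp settings cap how bright any single sample's contribution is allowed to be, which is why they trade dynamic range for firefly removal. A rough sketch of the idea (my own simplification, not Cycles' source code):

```python
def clamp_sample(rgb, clamp_value):
    """Scale a per-sample light contribution down so that its brightest
    channel does not exceed clamp_value; clamp_value <= 0 disables
    clamping. Scaling all channels together preserves the sample's hue.
    """
    if clamp_value <= 0:
        return rgb
    peak = max(rgb)
    if peak <= clamp_value:
        return rgb
    scale = clamp_value / peak
    return tuple(c * scale for c in rgb)

# A firefly-like sample gets scaled down; an ordinary one passes through:
clamp_sample((40.0, 8.0, 2.0), 10.0)   # -> (10.0, 2.0, 0.5)
clamp_sample((0.8, 0.6, 0.4), 10.0)    # -> (0.8, 0.6, 0.4)
```

The cost is bias: legitimately bright highlights get dimmed along with the fireflies, which is presumably why an aggressive Clamp Direct flattens the image visibly, while a moderate Clamp Indirect mostly just tames noisy bounce light.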

All of these improvements are primarily for the regular Path Tracing Kernel (following through on the plan to start reducing the need for the Branched Path Kernel as part of unification).

Last edited by Ace Dragon; 20-Sep-17 at 14:46.
