Topic: The most wanted feature? (Read 198187 times)

I'm so happy with Corona, but the following three features would make it unbeatable:

1) GPU/Hybrid - I'm starting to get worried that Corona isn't announcing any plans for the future here. I'm due to build a new machine next year or so, and spending my hard-earned money on GPUs seems like a better bet than multi-core CPUs.

2) Completely automatic import of Vray models and scenes. There's still a way to go.

3) Fstorm LUTs - not sure what they do with their color mapping, but I want it! ;)

I wouldn't expect the devs to jump on the GPU wagon now, especially since they never wanted to, and now Vray is focusing more and more on it.

They did at least promise off-loading the framebuffer & post-processing onto it in some way, but that hasn't materialized yet either. Optix is kick-ass though ;-).

It would be nice though; let's see how fast Vray is going to be once the RTX embargo is lifted and Vlado can disclose some benchmarks.

It's true that they never wanted it. But let's be honest, it's becoming more and more apparent that GPUs are the cheaper way of operating and scaling your operations. If I wanted a new render node, I'd have to shell out a few thousand for a dual-Xeon node; or I can pay around a thousand for a new GPU and pay less in power draw too. With Fstorm becoming so popular since they won the lawsuit, large chunks of free models being released exclusively for Fstorm, and Vray showing off more GPU rendering, is it not a worry that CPU renderers will eventually get left behind just because the cost of setting up a GPU farm can be a lot less?

Nvidia are really keeping their cards close to their chest with the embargo on RTX, but I kind of agree with what some people are saying: this could be like PhysX all over again, where it never really became an integral part of the gaming experience. However, it would be interesting to know what impact tensor cores could have on rendering applications, if any.

Yup, GPUs definitely have a better future in rendering, and it's much easier to scale them.

But then again... look at what nVidia introduced: massive overpricing. To get the most out of GPU rendering, you would need four 2080 Tis (two NVLink pairs to get memory pooling) = 5200 euros, plus the price of the workstation. So the majority of people will not be able to afford it either; it's just that at a certain high price point, GPU overtakes CPU a lot in price/performance.

Truth be told, I thought CPU rendering would be dead by now... but IMHO the 32-core 2990WX is much better progress than the 2080 Ti itself. In the next two years, I fully expect 48-64-core 3rd/4th-gen 3990WX/4990WX chips to deliver another massive ~100% speed boost, while I only expect 7nm nVidias to cost even more for even less speed-up. So the competition is strangely equal right now, and people should stick with what they have and see how it unfolds.

But since most of us have at least one GPU in each PC, it's a shame we can't use it for anything.

If you read our announcements carefully, you can see we are getting more and more open about the possibility. If you look back at the GPUs and technology available 8 years ago, you will surely agree that doing GPU rendering then would have been unfeasible. Yet even then, people claimed CPU rendering was dead and the GPU future was right around the corner.

Since then, GPU rendering has gotten better and better, but in smaller, incremental steps; there is no clear "switch now" moment. I am a lot less nervous about this now, since we have a large GPU team inside Chaosgroup that I can consult on the feasibility and how-to of GPU rendering. This might be the biggest benefit of the RL-CG merger for the future.

Interesting. Do you guys feel there will ever be that switch moment? Perhaps when GPU prices settle back down to where they actually should be, rather than a good 30% above RRP?

30% is not enough to switch; it would have to be, let's say, 10x. Or solving some outstanding issues of GPU programming, such as the lack of function pointers, bad scaling with complex/divergent code, etc. Or just going step by step. We are in contact with the VRay GPU team and discuss this from time to time.

When/if Corona adds GPU rendering, it would be ideal if it were a seamless experience. I mean, just add another CPU and another GPU and watch your rendering get faster and faster. No splitting Corona in two, like Corona CPU and Corona GPU.

And if the GPU is hard or unfeasible to program the way a CPU is, then the GPU should at least be used for the operations at which GPUs excel. So some things would be done by the CPU while the GPU simultaneously handles the things that come easily to a GPU. And I don't mean just the post effects, but maybe some parts of the shading or lighting, or I don't know, because I have no idea :)

That is another topic. There is not a single renderer on the market that has successfully switched from CPU to GPU rendering. There are probably reasons behind this. One might be that you cannot get the full feature set of a CPU renderer on the GPU. At that point, I think it would make more sense to remove those features from the CPU version than to keep two renderers with two separate feature lists. And I think most people would still not like it...

That was also never successfully done. The overhead and synchronization just make it not worth it. You would have to transfer tens of gigabytes of data per second between the CPU and GPU to make this work, and you would be limited by the slower of the two components.