3d Rendering - To clock tweak or not - that's just one of the questions.

I generally use benchmarks to help me tweak my systems for their best performance. Since I'd rather produce content than merely enjoy what others have created, even my gaming experiences have other motives at play: I try to learn what the game's author did best and what I should avoid doing in my own game production with Unity 3d. I favor the Cinebench series of benchmarks (10, 11.5 and 15) because they have real-world application for me. My favorite 3d software is Cinema 4d, and it has held true that if I tweak my system for the best Cinebench score, I've also gone a long way toward the best performance in Cinema 4d. In the past, that tweaking has been almost entirely at the CPU level.

Over time I've observed that, as far as game play is concerned, OpenGL performance in Cinebench is greatest with the top-of-the-line ATI/AMD cards. However, with the advent of GPU rendering and the paucity of current OpenCL rendering engines, I've had to place many of my eggs in the GTX CUDA nest as far as content creation is concerned. Since this thread sits under the hierarchy of video card overclocking, and with the ascendency of the GPU over the CPU for parallel compute tasks, I hope I'm not straying too far out of line to point out to those who actually use Cinema 4d that GPU rendering engines such as Octanerender (GPU only - mostly unbiased renderer), Thea render (CPU and now GPU - biased renderer) and RedShift (GPU and CPU - biased renderer that, of these three, seeks to overcome the limitation of vram size on the video card) can all improve actual Cinema 4d rendering performance more than reliance on CPU rendering and tweaking alone.

In my experience, tweaking a GPU for what the Cinema 4d application is ultimately designed to produce, i.e., an animation, is helped most by finding the sweet spot for memory overclocking. This is particularly so if you have SuperClocked (SC) GPUs. A little GPU core overclocking might also help, but it's been my experience that performance gains peter out more quickly with core speed increases, and even more quickly if you have SC GPUs.
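To make the "sweet spot" idea concrete, here's a minimal Python sketch of how I'd pick a memory-clock offset from benchmark runs. All the timings are made-up placeholders (you'd substitute your own measured seconds-per-frame at each offset), and None marks an offset where the run crashed or showed artifacts:

```python
# Hypothetical sweep results: MHz memory offset -> seconds per frame on a
# fixed benchmark scene. None = unstable run (crash or visual artifacts).
sample_times = {
    0: 60.0, 100: 57.1, 200: 54.9, 300: 53.8, 400: 53.6, 500: None,
}

def sweet_spot(times, min_gain=0.01):
    """Highest offset whose marginal gain over the previous step is still
    at least min_gain (1% by default), stopping at the first unstable run."""
    best = 0
    prev = times[0]
    for offset in sorted(times):
        t = times[offset]
        if t is None:                      # unstable: stop before this offset
            break
        if offset and (prev - t) / prev < min_gain:
            break                          # gains have petered out
        best, prev = offset, t
    return best

print(sweet_spot(sample_times))  # with these numbers: 300
```

With those numbers the sweep stops at +300 MHz, because the step to +400 buys well under 1% and +500 isn't stable at all - exactly the peter-out pattern I described above.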

If you're into 3d rendering but haven't tweaked either your CPU(s) or GPU(s), why not? If you're into 3d rendering and have tweaked your CPU(s) and/or GPU(s), why did you tweak them, what did you use to tweak them, how did you tweak them, and by what degree? What were the outcomes?

Re: 3d Rendering - To clock tweak or not - that's just one of the questions.

First of all... dat hardware list is SO strong x-).

While I don't personally do anything in terms of 3D rendering, I would think that if your software supports GPU acceleration for the architecture you use, you would need to do several different rounds of performance tuning on the GPU(s) as well as the CPU(s) to determine where the best happy medium is located: no sense in overclocking just for the sake of overclocking if the performance benefits you gain don't match the effort put into the overclock as well as the increased operating costs (minimal as they may be). Using benchmarks that pertain directly to your application certainly helps, and from those benchmark scores you can see patterns starting to develop as you push your silicon further and further.

Eventually you can and will get to a point where the performance gains you net aren't worth the effort it took to get you there or the uncertainty of system instability occurring in the middle of a heavy rendering session. If it were me doing the tweaking for a machine like that I would honestly only tweak it minimally, that way I can keep the system running strong but quiet as well. Performance for the sake of performance starts to sound not entirely unlike a top fuel dragster after a while, and that's just not enjoyable to work around even if you DO have headphones on and are listening to music (which will probably run like balls once the CPU cores get fully loaded with render tasks).

To relate what I have observed in overclocking my own system (in my case for gaming and for hardware reviews), I have definitely noticed that as I tune using PCMark, 3DMark, Cinebench, WinZip and others, there comes a point where, while I might have gotten more out of the hardware, the scores predictably start to fall off and eventually hit a sort of asymptotic point where you just aren't going to get anything more out of it without moving to extreme measures, and that's just not worth it. For a machine of the nature you speak of, or even a simpler machine like mine, it should be functional in many ways, not just for its intended purpose. Personally, I feel like if you can get anywhere around a 10% performance increase out of any of your hardware then you're solid for a workstation: focus on keeping it cool and quiet from there.
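Here's a small Python sketch of that pattern-watching, using illustrative clock/score pairs (not real runs): compute the percent score gain per percent clock gain for each step, and watch the scaling efficiency collapse as you approach the wall.

```python
# Hypothetical tuning log: (core clock MHz, benchmark score) per run.
runs = [
    (3500, 1000), (3700, 1052), (3900, 1098), (4100, 1120), (4300, 1126),
]

def scaling_efficiency(runs):
    """Score gain (%) divided by clock gain (%) for each consecutive step;
    1.0 means perfect scaling, near 0 means you've hit the asymptote."""
    return [((s1 - s0) / s0) / ((c1 - c0) / c0)
            for (c0, s0), (c1, s1) in zip(runs, runs[1:])]

for (clock, _), eff in zip(runs[1:], scaling_efficiency(runs)):
    print(f"{clock} MHz: {eff:.2f}")
```

With these made-up numbers, scaling is near-perfect (~0.9) on the first step and collapses to ~0.1 by the last one - that last step is the "extreme measures" territory where I'd stop.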

Re: 3d Rendering - To clock tweak or not - that's just one of the questions.

Originally Posted by Mgutierrez33

First of all... dat hardware list is SO strong x-).

While I don't personally do anything in terms of 3D rendering, I would think that if your software supports GPU acceleration for the architecture you use, you would need to do several different rounds of performance tuning on the GPU(s) as well as the CPU(s) to determine where the best happy medium is located: no sense in overclocking just for the sake of overclocking if the performance benefits you gain don't match the effort put into the overclock as well as the increased operating costs (minimal as they may be). Using benchmarks that pertain directly to your application certainly helps, and from those benchmark scores you can see patterns starting to develop as you push your silicon further and further.

Eventually you can and will get to a point where the performance gains you net aren't worth the effort it took to get you there or the uncertainty of system instability occurring in the middle of a heavy rendering session. If it were me doing the tweaking for a machine like that I would honestly only tweak it minimally, that way I can keep the system running strong but quiet as well. Performance for the sake of performance starts to sound not entirely unlike a top fuel dragster after a while, and that's just not enjoyable to work around even if you DO have headphones on and are listening to music (which will probably run like balls once the CPU cores get fully loaded with render tasks).

To relate what I have observed in overclocking my own system (in my case for gaming and for hardware reviews), I have definitely noticed that as I tune using PCMark, 3DMark, Cinebench, WinZip and others, there comes a point where, while I might have gotten more out of the hardware, the scores predictably start to fall off and eventually hit a sort of asymptotic point where you just aren't going to get anything more out of it without moving to extreme measures, and that's just not worth it. For a machine of the nature you speak of, or even a simpler machine like mine, it should be functional in many ways, not just for its intended purpose. Personally, I feel like if you can get anywhere around a 10% performance increase out of any of your hardware then you're solid for a workstation: focus on keeping it cool and quiet from there.

There's not one of your observations with which I can reasonably disagree. Using Macs has also made me mindful of the beauty of silence. And getting 10% more frames rendered per unit of time would be very satisfying. The reason I'm into GPU rendering is that the gain is not just 10%, but more in the neighborhood of 1,000% before the tweaking even begins. A top-of-the-line Tesla card or a top-of-the-line GTX card (which is even faster at 3d rendering than a Tesla card) can render 10x to 50x faster than a CPU-only system. That's where I want the 10%, or 1.1x, tweaking factor applied. A 1.1x multiplier on top of 1,000% is 1,100%, which is nothing to sneeze at. I'd like to get the rendering capability of the computer systems used for movies such as Toy Story or The Lion King nestled into fewer than 25 systems.
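That compounding can be written out in a couple of lines of Python (the frame time is hypothetical; the speedup is the low end of the 10x-50x range mentioned above):

```python
cpu_frame_time = 600.0   # seconds per frame, CPU-only render (hypothetical)
gpu_speedup    = 10.0    # GPU renderer vs. CPU renderer (low end of 10x-50x)
tweak_factor   = 1.1     # ~10% more from finding the memory-clock sweet spot

# The two gains multiply: the 10% tweak applies to the already-accelerated
# GPU baseline, not to the slow CPU baseline.
effective_speedup = gpu_speedup * tweak_factor        # ~11x, i.e. ~1,100%
gpu_frame_time = cpu_frame_time / effective_speedup   # ~54.5 s per frame
print(effective_speedup, gpu_frame_time)
```

That's the whole point: a 10% tweak on a 10x baseline is worth a full 1x of CPU-level speed, not a mere 0.1x.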