Hello, a fellow cruncher sent me a message with a very important question that I want everyone to know the answer to:

Hi, can I ask how it is possible that, on the same GPU (a 1080 Ti, as I have), calculation times differ so much between users? How much difference does a faster or slower CPU make? How much of a WU is calculated by the CPU and how much by the GPU? I have a dual Xeon E5-2696 v4 and an overclocked 1080 Ti, but my times are much longer than yours or those of the guys in the top ten positions. Is your GPU on water or on air, and at what GPU and RAM frequencies is it stable? Can you send me your setup?
Thank you for explaining it.

Jirka Chvatal

Hello Jirka,

The way this project's application is unfortunately written, it benefits greatly from very high single-threaded CPU speed. Every calculation the GPU does is communicated over the PCIe bus so that the double-precision work can be done on a single CPU thread. This means that with a slower CPU or limited PCIe bandwidth, you will not be able to saturate a fast GPU like the 1080 Ti. Certain WUs (work units) require more CPU than others, which is why some WUs will run the GPU at near 90% usage and some at 70%. My 1080 Ti is an air-cooled ROG Strix 1080 Ti that I've overclocked to 1987 MHz core and 1425 MHz memory (11400 MHz effective), and my CPU is a 6800K at 4.4 GHz @ 1.35 V.

There are some software tricks you can use to increase GPU usage that I'd try before moving to a new motherboard and CPU. SWAN_SYNC is an environment variable that makes the CPU thread busy-wait for the GPU's commands with no latency; this will increase your GPU usage, leading to shorter WU times. Windows also inherently has something called WDDM (the Windows Display Driver Model), which allows the system to keep running if the display driver crashes; that safety net adds overhead that lowers GPU usage. Operating systems like Linux do not have this overhead and will show higher GPU usage.
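As a minimal sketch of the SWAN_SYNC trick on Linux: the variable just has to be present in the environment the BOINC client inherits before it starts (the exact value checked, and the right place to set it, depend on the app version and on how your client is launched; for a system service you would instead add `SWAN_SYNC=1` to the service's environment file and restart it):

```shell
# Make SWAN_SYNC visible to the BOINC client before launching it.
# Assumed convention: SWAN_SYNC=1; some app versions only check that
# the variable exists at all.
export SWAN_SYNC=1

# Verify the variable is in the environment the client will inherit:
env | grep '^SWAN_SYNC='
```

On Windows the equivalent is adding SWAN_SYNC under System Properties > Environment Variables and then restarting BOINC. Either way, the change only takes effect for tasks started after the client restarts.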

Also, if there are other CPU WUs running at the same time, they may interfere with the thread dedicated to the GPU, increasing latency and lowering GPU usage. If anyone has any questions, feel free to speak up.
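One way to reduce that interference is to make the BOINC scheduler budget a whole CPU core for each GPU task, so it doesn't fill every thread with CPU WUs. A sketch of an app_config.xml for this, dropped into the project's folder under your BOINC data directory (the app name acemdlong here is only an example; check client_state.xml or the event log for the real app names on your machine):

```xml
<!-- app_config.xml sketch: reserve one full CPU core per GPU task.
     "acemdlong" is an assumed app name; substitute your own. -->
<app_config>
  <app>
    <name>acemdlong</name>
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```

A blunter alternative is lowering the "% of CPUs" setting in BOINC's computing preferences, which leaves a core free for the GPU's feeder thread without any config file.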

I was crunching with a GTX 1050 Ti Mobile GPU on an Nvidia Optimus laptop, where I used the Intel GPU as the primary GPU and the Nvidia GPU for crunching only. When I crunched only GPUGrid, GPU usage was about 90-92%. But if I crunched World Community Grid CPU jobs too, it rose to 98%, even though WCG jobs don't use the GPU. Alternatively, enabling CPU turbo mode for a higher frequency also gave somewhat higher GPU utilization.

But now, in order to run my second monitor, my notebook's HDMI output seems to be hardwired to the Nvidia GPU, so I removed Bumblebee from my system and the desktop etc. all runs on the Nvidia GPU. This way GPU utilization during crunching is 98% even without CPU jobs. But now I can't watch videos with Nvidia GPU-accelerated video applications: videos freeze and show blocky squares like from the VCD/DVD era. :D