ABP1 CUDA applications

Well, it seems that my piddling little 256MB Nvidia card (though big enough to run 2 big fat monitors) isn't enough to run the new app. Meanwhile, I'm not getting any CPU units either. Are all the new units CUDA only?

Firstly, if you have a utility that displays the GPU temperature, that's a very good indication of how much the GPU is doing. My GPU is running MUCH cooler running E@H than when it's running any other application.

Secondly, you can compare the run-times of the CPU and GPU versions of the same workunit (Einstein, SETI, and Milkyway all offer the same app in both CPU and GPU versions). Depending on your hardware, a GPU is usually 10 to 20 times faster than a CPU, so you should expect a GPU WU to finish in only 5% to 10% of the time the same WU takes on the CPU (roughly a 1000% to 2000% speed increase).

On Einstein, everyone is seeing cold GPU temps and only about a 33% increase in speed -- while using hardware that is, on paper, around 2000% faster.

Or, let me put it this way: if it normally takes your machine about 10 hours to complete an E@H WU, then a version that made good use of the GPU would finish WUs in around 30 to 60 minutes, give or take.

To put things in perspective: if I run Milkyway@home on my CPU, a WU takes a bit under four hours. M@H pushes my GPU harder than any other app, as evidenced by higher temperatures. When I run M@H WUs on my GPU, they complete in slightly under four minutes -- roughly a 6000% increase, compared to the 33% increase I'm seeing on E@H.
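For reference, the arithmetic behind these percentages can be sketched in a few lines. The run times are the rough figures quoted in this thread, and `speedup` is just an illustrative helper; note that the "33%" figure elsewhere in the thread describes the reduction in run time (6 h down to 4 h), which corresponds to a 1.5x, i.e. 50%, speed increase.

```python
def speedup(cpu_minutes, gpu_minutes):
    """Return (speed-up factor, percent speed increase) of a GPU run vs. a CPU run."""
    factor = cpu_minutes / gpu_minutes
    percent_increase = (factor - 1) * 100
    return factor, percent_increase

# Milkyway@home figures from this thread: ~4 hours on CPU, ~4 minutes on GPU.
print(speedup(240, 4))    # -> (60.0, 5900.0): ~60x, the "6000% increase"
# Einstein figures from a later post: ~6 hours on CPU, ~4 hours on GPU.
print(speedup(360, 240))  # -> (1.5, 50.0): a 33% shorter run, 50% higher speed
```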

Want to find one of the largest known primes? Try PrimeGrid. Or help cure disease at WCG.

First let me say I'm crunching CUDA numbers just fine for both SETI@Home and Einstein@Home.

I only have one comment. SETI@Home (which also feeds CUDA tasks) breaks its CUDA tasks down into much smaller "chunks" -- up to about 30 minutes each. Because the tasks coming from Einstein@Home are much longer, they will monopolize the GPU, which essentially means Einstein doesn't play well with others.

Right now SETI is having problems keeping tasks fed, but once those issues are sorted out, I suspect the longer tasks from Einstein will mean that everyone who participates in both projects will either have to do some very fine tweaking of the percentages allocated to each project, or stop accepting CUDA tasks from one of them. Keep in mind that you cannot tweak CPU project usage separately from GPU project usage.


I agree that something needs to be done to optimize the CUDA tasks. Given the long processing time of the GPU-targeted tasks, it is critical that something be done about the size of the GPU tasks coming from Einstein@Home. If using the GPU is not much more efficient at processing tasks, then one might as well not offer CUDA for Einstein@Home at all.

One of my computers is a laptop with an NVIDIA 9600M GS.
E@H refuses to send CUDA WUs to this laptop because display driver 190.38 is required.
After checking the NVIDIA web site, it appears that driver 190.38 is not available for laptops or the mobile (M-series) graphics cards. The latest display driver available for the M series is 186.81.

So if you're using a laptop with an NVIDIA GPU, don't waste your time trying to run E@H CUDA WUs. It is not possible right now.


The problem isn't the length of the WU. S@H's WUs are actually longer (i.e., more computation is done) than Einstein's. On my computer, SETI's WUs take about 10 hours vs. 6 hours for Einstein. The difference is that those same work units, run on my GPU, drop to 10 minutes for SETI but only 4 hours for Einstein.

For both SETI and Einstein, the same WUs are sent to both CPU and GPU computers. If you look at the results for both projects, you'll see this -- for any of your GPU WUs, chances are the other computer ran it on the CPU.

It's not the size of the WU that's the problem -- it's that the GPU is barely being used.

As for BOINC scheduling short vs. long tasks on the GPU -- that *should* be fine. *Should* is the important word there. I don't actually know how well the BOINC client schedules GPU tasks, but how it's supposed to work is clear: if the resource shares (the work percentages) are the same, two projects should each get 50% of the GPU time, regardless of the size of their WUs. More of the shorter WUs should run, as compared to the longer WUs.
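To make the "50% of the GPU time" idea concrete, here is a toy sketch -- not the actual BOINC client algorithm -- of share-proportional scheduling: always hand the GPU to the project that is furthest behind its share of time. The project names, shares, and WU lengths are just illustrative values based on the figures in this thread.

```python
def schedule(projects, total_time):
    """Toy one-GPU scheduler. projects maps name -> (share, wu_length_minutes).
    Returns the number of WUs completed per project within total_time."""
    used = {name: 0.0 for name in projects}   # GPU minutes consumed so far
    done = {name: 0 for name in projects}     # WUs completed so far
    clock = 0.0
    while clock < total_time:
        # Run the project whose consumed-time-per-unit-share is smallest.
        name = min(projects, key=lambda n: used[n] / projects[n][0])
        length = projects[name][1]
        used[name] += length
        done[name] += 1
        clock += length
    return done

# Equal shares, but SETI-style 30-minute WUs vs. Einstein-style 4-hour WUs.
tally = schedule({"seti": (1, 30), "einstein": (1, 240)}, total_time=48 * 60)
print(tally)  # -> {'seti': 48, 'einstein': 6}: equal GPU time, 8x more short WUs
```

Both projects end up with 1440 GPU minutes each (48 x 30 = 6 x 240), which is exactly the behavior described above: equal time, with many more of the shorter WUs completed.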

Oh, I want to make something clear. While it's painfully obvious that the E@H application isn't utilizing the GPU very well, I am in no way implying that the people who wrote the application did a poor job. Some problems simply do not lend themselves to being solved on a massively parallel computer (i.e., a GPU). It may be that Einstein is one of those problems, and it just can't be run efficiently on a GPU.


OK, but if you come to a point where it's obvious that it makes no sense to continue a project like this disastrous CUDA application, I think someone should have the courage to stop it, instead of firing it out to the public with might and main, just to be able to say "look here, we have a CUDA app, too!".

I always say that BOINC is not a one-man show. If you create a new application, you should always keep in mind that there is a world outside your lab, and that you have to share the resources your volunteers are donating with other projects.

And hey, don't tell me that I can leave the project if I don't like it. This is the most antisocial attitude I've heard of. So if you don't like to share the resources of this planet with others, maybe it's time for you to leave it!


I agree with that!!

No one told anyone to leave the project. XJR-Maniac was just quite rude without reason.

If you don't want Einstein CUDA tasks, deselect them in your preferences. Nothing easier than that.