I run 15 BOINC projects. I am building another box and I have decided to run 2 projects on it: the GPU project that I am "ranked" the lowest in for credits (this one), and the CPU project that I am "ranked" the lowest in for credits (World Community Grid). Anyway - because I am trying to contribute more to these projects - I want the GPU working *only* on SETI GPU WUs, and then I will have another computer that I'd prefer only work on SETI CPU WUs (but that is flexible). The first computer I purely want working on CPU credits for the project that does not have GPU enabled. Anyway - I see there are three projects SETI is running. Maybe someone could enlighten me as to how I could achieve what I am trying to do. I know it's going to involve something like a "work" and "home" setup; what I need to know is which of these crunches which types of WUs:

Yes, create a special venue (e.g. Work or Home) to put this dedicated computer into. Then go into your project preferences for that venue and make sure to uncheck "Use CPU".
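If you want to enforce that split locally as well, the client's cc_config.xml has an exclude_gpu option that keeps a named project off a GPU. A minimal sketch, assuming the CPU project on that box is World Community Grid and the card is NVIDIA (adjust the URL to whatever the project's master URL actually is):

<cc_config>
  <options>
    <!-- Keep World Community Grid off the NVIDIA GPU so SETI has it to itself. -->
    <exclude_gpu>
      <url>http://www.worldcommunitygrid.org/</url>
      <type>NVIDIA</type>
    </exclude_gpu>
  </options>
</cc_config>

Save it in the BOINC data directory and restart the client (or use "Read config file" in the Manager).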

All three types of workunits (they're not projects) have GPU-equivalent executables, so there's no need to specify one over the other. However, it is worth pointing out that SETI@home Enhanced is going to be deprecated very soon, so the only two left will be SETI@home v7 and AstroPulse v6. Again, both can be crunched on the GPU, so you can leave them both enabled.

Is it even worth it for me to crunch CPU WUs if I can crunch GPU ones? Are there some WUs that *need* to be processed by a CPU, or are those WUs essentially only made for people without GPUs? Basically, are these resources better spent just going towards a CPU-only project (and might they take away from what SETI gets on the GPU)? Just - is there honestly a reason for me to crunch CPU WUs for SETI if I can crunch GPU ones as well?


Yes, it is.

You have an NVIDIA GeForce GTX 660 GPU. There is a class of WUs which NVIDIA GPUs find very, very hard to deal with - so much so that the project will never send them to an NVIDIA card: there were too many complaints that it became almost impossible to use the computer for normal day-to-day activities while these tasks were processing.

These tasks are the so-called VLARs (marked as such in the task name): it stands for 'Very Low Angle Range'. That comes about because they are recorded when the Arecibo telescope is looking at a single point in the sky for an extended period. Arguably, if that point source is a star with planets, the extended recordings might be the best bet for achieving success in the project's search for ET. So yes, please run VLARs, and for that, you need to enable crunching on your CPUs.

Not all NVIDIA machines suffer that badly. But enough do, and enough complaints were received (including from people saying they would stop running the project if their computers kept running so sluggishly) that the decision was taken to cut off the supply.

Also, given the value placed on credits round here, the low rate of return (same credits, longer processing time, equals lower RAC) leads some people to feel it's not worth it.

and enough complaints were received (including from people saying they would stop running the project if their computers kept running so sluggishly) that the decision was taken to cut off the supply.

People threaten to leave unless they get their way all the time. Personally I doubt enough people would have left to notice. Even so, it was enough of a problem that the project admins made the correct call.

It's just not accurate to say, or even imply, that everybody had problems.

Also, given the value placed on credits round here, the low rate of return (same credits, longer processing time, equals lower RAC) leads some people to feel it's not worth it.

Considering that CUDA is much closer to the hardware than OpenCL is, I would imagine it should be possible to configure the CUDA apps so that VLARs can be processed on NVIDIA cards as well.
It would be like how we only process zero-blanked AstroPulse units on GPUs.
Even so, time and credits are not valid arguments in this case, because this project is about science, nothing else.

Just my point of view, though.

I feel some of the commentary above may have missed the point regarding your first post at the top of this thread.

If you wish to increase your credits for SETI (as you implied above), process SETI on GPU only. Look at processing MB (MultiBeam, i.e. SETI@home v7) and AP (AstroPulse) WUs. MB WUs give around 60%-70% of the credit that AP WUs give, so it's best to process AP WUs in preference to MB WUs. Availability of AP WUs is periodic, in that there are spells when they are available and spells when they are not. Also, because AP credit is higher, they are in greater demand when available.

Processing SETI MB on CPU is not worth the effort due to the low level of credit received. You are better off assigning the CPU to a project that does not have a GPU application enabled (as you thought above).
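And if you do dedicate the GPU to SETI, it's worth knowing about the client-side app_config.xml for tuning. A minimal sketch only; the app name and the numbers here are assumptions to experiment with, not settings from this thread:

<app_config>
  <app>
    <name>setiathome_v7</name>
    <gpu_versions>
      <!-- Run two GPU tasks per card at once. -->
      <gpu_usage>0.5</gpu_usage>
      <!-- Budget a fifth of a CPU core to feed each GPU task. -->
      <cpu_usage>0.2</cpu_usage>
    </gpu_versions>
  </app>
</app_config>

This file lives in the project's folder under the BOINC data directory; whether running two at once actually helps depends on the card and the app build.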

"You have a NVIDIA GeForce GTX 660 GPU. There is a class of WUs which NVIDIA GPUs find very, very hard to deal with - so much so, that the project will never send them to an NVIDIA card: there were too many complaints that it became almost impossible to use the computer for normal day-to-day activities while these tasks were processing."

I see no differentiation between "some" and "all" in referring to NVIDIA cards there.

"You have a NVIDIA GeForce GTX 660 GPU. There is a class of WUs which NVIDIA GPUs find very, very hard to deal with - so much so, that the project will never send them to an NVIDIA card: there were too many complaints that it became almost impossible to use the computer for normal day-to-day activities while these tasks were processing."

I see no differentiation between "some" or "all" in referring to Nvidia cards there.

All cards struggle - the crunching time is disproportionately high.
But only some users notice a difference in the day-to-day running. I've known rigs that became virtually unusable when the GPU got a VLAR, and I've known rigs where you'd only notice a slightly increased display response time, if at all.

So, just because YOU don't notice anything doesn't mean it's not there. You're in an absolute minority there. And mind you, it's not even card-specific; it depends on the whole system makeup.

Edit: and by the way, the reason Richard is advocating putting a bit of CPU on the project is that somebody needs to mop up the VLARs that don't go to NVIDIA.

IMHO the current optimal solution would be a small checkbox in project preferences: "Process VLAR on GPU". Some projects, like the prime-number-finding one, have a dozen such checkboxes for the many algorithm flavours. We already distinguish VLAR from the "usual" AR, so such a checkbox should be technically possible. It should be "opt in" instead of the usual "opt out" approach, so anyone who wants to try VLAR on their own GPU has the ability to do that. Quite simple and user-friendly.
Why not do this?...
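For what it's worth, project-specific preferences already travel between the website and the scheduler as a small XML blob, so the plumbing for such a flag is easy to picture. A purely hypothetical sketch (this tag does not exist today, and the wrapper structure here is illustrative, not SETI's actual schema):

<project_preferences>
    <!-- Hypothetical opt-in flag, as proposed above. Absent or 0 would mean
         VLARs stay off NVIDIA GPUs, exactly as now. -->
    <process_vlar_on_gpu>1</process_vlar_on_gpu>
</project_preferences>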


I have said a few times that I wish our project preferences were more like those of PrimeGrid: able to select CPU, NVIDIA, or OpenCL for each type of data separately. They have also added "CPU SSE3 (normal), CPU SSE2 (slower), & CPU AVX (faster)" under CPU for some types now, so advanced users can simply select the correct type for their system. Something along those lines for the CUDA and OpenCL versions would be a good idea here for the default apps.
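As a rough illustration of the server side of that idea: the BOINC scheduler can define plan classes declaratively in plan_class_spec.xml, keyed on things like CPU feature flags, which is how separate SSE2/SSE3/AVX (or CUDA/OpenCL) builds can coexist and be matched to capable hosts. A sketch with assumed names, not SETI's actual configuration:

<plan_classes>
    <plan_class>
        <name>avx</name>
        <!-- Only sent to hosts whose CPU advertises the avx feature flag. -->
        <cpu_feature>avx</cpu_feature>
    </plan_class>
    <plan_class>
        <name>sse2</name>
        <cpu_feature>sse2</cpu_feature>
    </plan_class>
</plan_classes>

Letting users then pick among those versions per resource is the part that still needs the web and scheduler work discussed a few posts further down.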

"You have a NVIDIA GeForce GTX 660 GPU. There is a class of WUs which NVIDIA GPUs find very, very hard to deal with - so much so, that the project will never send them to an NVIDIA card: there were too many complaints that it became almost impossible to use the computer for normal day-to-day activities while these tasks were processing."

I see no differentiation between "some" or "all" in referring to Nvidia cards there.

All cards struggle - the crunching time is disproportionately high.
But only some users notice a difference in the day-to-day running. I've known rigs that became virtually unusable when the GPU got a VLAR, and I've known rigs where you'd only notice a slightly increased display response time, if at all.

So, just because YOU don't notice anything doesn't mean it's not there. You're in an absolute minority there. And mind you, it's not even card-specific; it depends on the whole system makeup.

If I notice no lag, then there's no lag to worry about. While my cards may be in a minority, they can still crunch VLARs with no noticeable lag and no errors. That means any statement that says or implies that all NVIDIA cards can't process VLARs is incorrect.

Edit: and by the way, the reason Richard is advocating putting a bit of CPU on the project is that somebody needs to mop up the VLARs that don't go to NVIDIA.

Yes, I got that. It has nothing to do with what I said. And just a thought: doing VLARs on my GPU would free up CPU compute capacity for other projects that have no ability to run on a GPU. And I'm not the only one who can run VLARs on their GPU with no problems, although we are in the minority.


I'd agree with that. And I'd also agree with Raistmer's suggestion a few posts back that users should (in an ideal world) be given an opt-in preference allowing them to run VLARs on NVIDIA (and Intel) GPUs if their particular circumstances make it a viable option.

But until we reach that happy nirvana, there are two flies in the ointment.

1) Any extra options require two things to happen. Some human being has to write web code to add the extra option controls to the preferences page. And some human being has to write scheduler code to read and act upon those preferences. PrimeGrid may have spare human beings they could lend us, but the last I heard they were in short supply here.

2) For the time being, VLAR tasks are being issued by the scheduler to CPUs and ATI GPUs. The servers keep track of that information, and assign credit appropriately when the task is returned. You've stated that you, personally, don't crunch for credit (bravo! nor do I), but I do feel that everyone - including the 90% 'silent majority' who never post here - deserves a fair and accurate credit calculation if they want it. It's been asserted that 'rescheduled' tasks - tasks not computed by the compute resource they were allocated to - get awarded distorted credits: not just for the person doing the rescheduling, but for their wingmates too.

Until our CreditNew hounds (hint, hint) achieve the implementation phase of their quest, it sounds as if (I haven't tested the assertion for myself) people who have their own idea of how the project should be run - and run it that way, unilaterally - may be interfering with the job satisfaction of a second group of users. If you say you are not the only one doing this, then I suggest you invite the others in your group to come here and discuss the pros and cons of their actions with the rest of us. I'm not saying there's a 'right' and a 'wrong' here: just constraints and consequences.

The CreditNew hounds are currently in special training, so as not to become well-bruised dinner. One of us is hiding under the sofa periodically, and the other is busy weeing on every shrub it can find. Which is which, I often wonder.