Then you should define what "most" means,
because my Nvidias from the GTX/GTS 460/450
on up have always been able to run them.

Not as fast as some might like, but work is work.

Unless one is into cherry-picking. I don't know what
else it can be called. Aborting work units that could
be crunched just because they run slower than other
work units is cherry-picking in my book.

If they cause errors, or cause so much lag
that it interferes with what the PC is normally
used for, then one has a valid reason to abort them.

Just because they run "slow" is not one of them, IMHO.

To elaborate on what Mike is saying about "most" nVidia GPU hosts not receiving VLARs: I believe that the few nVidia GPU hosts out there that do occasionally (or regularly) receive VLARs are receiving them in error. You see, the server is coded to take angle range into account and prevent VLARs from going out to nVidia GPU hosts. I don't know whether the error lies within the server or within the handful of nVidia GPU hosts that accidentally receive VLARs.
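
For anyone curious what that server-side check amounts to, here is a minimal sketch in C++. This is not the actual SETI@home scheduler source; the struct and field names below are hypothetical stand-ins, and the 0.12 cutoff is the VLAR threshold mentioned later in this thread.

#include <string>

// Hypothetical stand-ins for the scheduler's workunit and app-version
// records; the real types and field names in the BOINC/SETI@home
// scheduler differ.
struct Workunit {
    std::string name;
    double angle_range;    // angle range (AR) of the recording this WU came from
};

struct AppVersion {
    bool uses_nvidia_gpu;  // true for CUDA/NVIDIA plan classes
};

const double VLAR_THRESHOLD = 0.12;  // AR below this counts as a VLAR

// Return false for pairings the scheduler should refuse,
// i.e. a VLAR headed to an NVIDIA GPU app version.
bool wu_ok_for_version(const Workunit& wu, const AppVersion& av) {
    bool is_vlar = wu.angle_range < VLAR_THRESHOLD;
    if (is_vlar && av.uses_nvidia_gpu) {
        return false;  // don't dispatch VLARs to NVIDIA GPU hosts
    }
    return true;
}

The hosts that still get VLARs would be the ones slipping past a check like this, whether due to a server bug or something odd on the host side.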

*EDIT* - Bill, out of curiosity, when was the last time you saw a VLAR running on one of your nVidia GPUs?

My GTX560 Ti & dual GTX550 Ti rigs don't mind the VLARs at all, but overnight my triple 9800 rig started on its lot with very bad results, eventually locking up while I slept, so they all got aborted this morning on that setup.

Why? And yes, I did receive some for the Nvidia GPU during the last server hiccup, and crunched those on the GPU too. Run times were posted in an earlier post.

Because a number of us haven't seen any VLARs crunch on our GPUs since the "fix" was put in place. In the Bug in server affecting older BOINC clients with NVIDIA GPUs thread, someone was asking troubleshooting questions a few days ago. After solving his problem, he made mention of some VLARs, which briefly led Eric Korpela to think that the VLARs were broken again. But it was quickly pointed out that his VLARs occurred on August 11th, before the fixes were put into place. So it seemed like a false alarm, but then you mentioned having VLARs on your nVidia GPU, so now I'm wondering again...

For anyone whom I might have confused, I have received
zero VLARs for my Nvidia GPU since Eric's last fix for the
server bug. I will, and do, transfer some to my GPU to keep
it busy if I run out of regular work units and cannot get
any from the servers.
My apologies if I gave any other impression.

OK, I'm stumped. What's with the nvidia AUTHORS/COPYING/COPYRIGHT, etc. stuff? Also: what the heck is the last line there? (the .dll)

Looks like the new AP-for-Nvidia stock application was released today. You are getting all the files necessary to run it. That's a first-time-only event for a new application. You are running stock, so you get it automatically if your prefs are set for "use Nvidia" and AP6.

Another Fred
Support SETI@home when you search the Web with GoodSearch or shop online with GoodShop.

I'm not sure I understand what you're saying... the r1316 build didn't appear until 7/6/12, long after the most current Lunatics installer v0.40 was released (which included no OpenCL apps, let alone any apps on the r1316 build). If Keith is running the latest Lunatics installer, then he has an app_info.xml and is running anonymous platform. Won't that prevent his host from automatically DLing OpenCL AP tasks for his nVidia GPU, even if APv6 is checked in his web preferences? Doesn't this new OpenCL AP app have to be manually placed in the SETI@Home data directory, and isn't an entry required in the app_info.xml file in order to reference this new app and receive work for it?

Keith, see Raistmer's first post in THIS thread for a link to the Lunatics webpage that has the new OpenCL app available for download. That thread I linked you to also has a sample app_info.xml entry for nVidia GPUs in the 2nd post.
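
For anyone following along, an anonymous-platform entry for an OpenCL AP app generally looks something like the skeleton below. This is only a rough sketch: the executable filename and plan class here are placeholders I made up for illustration, so use the exact names from the sample in that thread instead.

<app_info>
  <app>
    <name>astropulse_v6</name>
  </app>
  <file_info>
    <!-- placeholder filename: substitute the actual r1316 executable -->
    <name>AP6_OpenCL_NV_r1316.exe</name>
    <executable/>
  </file_info>
  <app_version>
    <app_name>astropulse_v6</app_name>
    <version_num>604</version_num>
    <avg_ncpus>0.05</avg_ncpus>
    <max_ncpus>0.2</max_ncpus>
    <plan_class>opencl_nvidia_100</plan_class>
    <coproc>
      <type>CUDA</type>
      <count>1</count>
    </coproc>
    <file_ref>
      <file_name>AP6_OpenCL_NV_r1316.exe</file_name>
      <main_program/>
    </file_ref>
  </app_version>
</app_info>

Without an <app_version> block like this referencing the new executable, an anonymous-platform host won't request or receive work for it, which is exactly the situation described above.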

I did not know that. I only saw the News announcement that new Nvidia OpenCL AstroPulse apps were to be made available shortly after the ATI ones were sent out as stock apps. It would have been less confusing if someone, somewhere, had mentioned that the stock 6.04 app was r1316.

Maybe in the future the project admins will decide to send out .vlar WUs to Fermi and later GPUs - and still not to pre-Fermi GPUs (if this is possible with the server software) -
if the upcoming stock S@h v7 CUDA application calculates this kind of WU well on Fermi+ GPUs.

Even though Fermi and later don't seem to have too many problems with them, their increased computation times (3-5 times longer than average) really throw the DCF out a lot, so I can't see the point in allowing them back ATM.
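
For anyone unfamiliar with DCF: the client keeps one duration correction factor per project and scales every runtime estimate by it, and the update is asymmetric - a task that runs long pushes the DCF up immediately, while tasks that run short only pull it back down gradually. A simplified model in C++ (not the client's verbatim code; the 0.1 weight on the downward step is my assumption for illustration):

// Simplified, assumed model of BOINC's per-project duration
// correction factor update - not the actual client source.
double update_dcf(double dcf, double actual_sec, double estimated_sec) {
    double ratio = actual_sec / estimated_sec;
    if (ratio > dcf) {
        return ratio;                // task ran long: jump up at once
    }
    return 0.9 * dcf + 0.1 * ratio;  // task ran short: drift down slowly
}

So one VLAR finishing at 4x the usual runtime yanks the DCF up to about 4, every estimate on the host quadruples, the cache refills short, and it then takes a long string of normal tasks to drift back down to where it was.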

I have let CUDA WUs run ever since the project admins decided to send out a CUDA application.
I immediately bought NVIDIA GPUs solely because of SETI@home. ;-D

The project admins decided not to send .vlar WUs to NVIDIA GPUs (these are SETI@home Enhanced (MultiBeam) WUs with an angle range below 0.12),
because they slow down the whole machine (and the other daily work done on it).

bill - If this is a problem, then yes, the offending work units should not be crunched on the GPU.

I think maybe it's because a member could think, "Hey, what's up?" - and never crunch SETI@home WUs again.

bill - If those people are so easily dissuaded from crunching for SETI,
without even investigating why this is happening and how to fix the problem, then it's probably for the best that they don't do any BOINC project work at all, because they will eventually run into problems with any project they choose.

Most SETI@home members (maybe 95%) run the stock applications.

The stock applications from 6.08_cuda up to 6.10_cuda_fermi handle these .vlar WUs very badly/slowly.

bill - Yes, the work units take longer. That is not bad unless PC sluggishness
or errors are a problem. Back in the beginning of the project, all work units used to take a long time. The work still needs to be done, though.

bill - Can't agree. A VLAR will take even longer to work on a CPU than on a GPU.
To me, crunching a work unit faster is better. If your GPU is not busy doing any other work units, why not let it do VLARs if they cause no problems?

Maybe in the future the project admins will decide to send out .vlar WUs to Fermi and later GPUs - and still not to pre-Fermi GPUs (if this is possible with the server software).

bill - That would be excellent.

If the upcoming stock S@h v7 CUDA application calculates this kind of WU well on Fermi+ GPUs.

bill - Hopefully. I have no objection to aborting VLARs, or any other type of work unit, if they cause problems
on the client computer. Taking longer to run is not a problem in and of itself,
though.