I guess something went wrong during the weekly server maintenance.

I wrote an email to the admins so that they're informed about this.

I am getting lots of .vlar's, but none are going to my NVIDIA GPU; they are all for the CPU (when I am able to connect to the server).

[edit] Well, scratch that: my number two computer just got sent a batch of .vlar's, some of which are scheduled for the NVIDIA card. As before, they just do not download... [/edit]
____________
SETI@home classic workunits 4,019
SETI@home classic CPU time 34,348 hours

Checking it out, two of the .vlar tasks I had in my task list completed successfully using CUDA and were uploaded.

There may be some interesting things found at times in these types of tasks. If they could be processed in a faster way, results could possibly be obtained sooner.

But, of course, the error rate could be high, and success is definitely not guaranteed doing it this way.

I got some VLARs sent to my GPU too today (7 Aug 2012) after the weekly scheduled outage. Two of them completed successfully but took over 8,100 elapsed seconds; two of them terminated with Time Limit Exceeded at 10,161 seconds. Four more were in progress, with over 1h:40m elapsed and about 1h:05m estimated remaining, but since the estimated remaining time was continually increasing (so elapsed + estimated would eventually exceed the 10,161 s / ~2h:48m limit), I manually aborted them. I'm running a GTX 580 with v301.42 NVIDIA drivers, with count = 0.25 (so the GPU can process 4 concurrent workunits). Using BOINC v7.0.28 and Lunatics optimized MB applications.
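The abort decision described above can be sketched as a simple projection check. This is a hypothetical helper, not anything BOINC itself runs; the 10,161 s limit and the sample timings are taken from the post, and treating a still-growing estimate as a reason to bail out early is my assumption:

```python
def will_exceed_limit(elapsed_s, remaining_s, limit_s, remaining_growing=False):
    """Return True if a task is projected to hit the client's time limit.

    If the 'estimated remaining' figure is still growing, the current
    elapsed + remaining sum is only a lower bound on the final runtime,
    so abort once the projection gets anywhere near the limit.
    """
    projected = elapsed_s + remaining_s
    if remaining_growing:
        # A climbing estimate means the real total will be at least this.
        return projected > 0.95 * limit_s
    return projected > limit_s

# Figures from the post: ~1h40m elapsed, ~1h05m remaining and climbing,
# against a 10,161 s limit -> worth aborting manually.
print(will_exceed_limit(6000, 3900, 10161, remaining_growing=True))  # True
```

With a stable estimate the same task would still look safe (9,900 s projected against a 10,161 s limit), which is why the growth of the estimate, not the current sum, drove the abort.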

Checking it out, two of the .vlar tasks I had in my task list completed successfully using CUDA and were uploaded.

There may be some interesting things found at times in these types of tasks. If they could be processed in a faster way, results could possibly be obtained sooner.

But, of course, the error rate could be high, and success is definitely not guaranteed doing it this way.

This one time you are getting the polite version of my reply, because other people, especially those fairly new to the project, may not know why the 'no VLAR to NVIDIA GPU' policy and code were established:

VLARs run incredibly slowly with the stock (6.08/6.09/6.10) apps. They can run so badly that the whole system freezes or outright crashes.
That was true with the drivers of two years ago - I don't think anybody ever established whether newer drivers cope better, and I'm pretty sure nobody really wants to try.

So at some point code was introduced into the scheduler to mark tasks below a certain AR as VLAR and not send them to NVIDIA GPUs.
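That scheduler rule amounts to a simple angle-range (AR) cutoff. A minimal sketch of the idea follows; the 0.12 threshold is the commonly cited VLAR boundary, used here purely as an illustrative assumption (the thread never states the actual value the scheduler uses):

```python
VLAR_THRESHOLD = 0.12  # assumed illustrative cutoff; the real scheduler value may differ

def is_vlar(angle_range):
    """A task below the AR cutoff gets tagged VLAR."""
    return angle_range < VLAR_THRESHOLD

def eligible_for_nvidia(angle_range):
    """The 'no VLAR to NVIDIA GPU' policy: withhold VLAR tasks from CUDA hosts."""
    return not is_vlar(angle_range)

# A mid-range task can go to the GPU; a very low AR task stays CPU-only.
print(eligible_for_nvidia(0.42))   # True
print(eligible_for_nvidia(0.008))  # False
```

The point of putting the check in the scheduler rather than the client is that the tasks are simply never assigned to an NVIDIA app version, so hosts with the problem drivers never see them.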

BTW, for the past few releases and RC x41z, optimised apps have not had that problem (system freeze) - but as has been shown in this thread, VLARs still run slower (a lot slower/too slow) - IIRC at the other end of the processing spectrum from VHAR/shorties there just might be more to process.
Since we hope v7 MB is still on the agenda, and thus the GPU app will eventually become x41z, getting rid of a whole host of first-generation app problems including -12, we hadn't made up our collective mind yet whether we had a case to lift the policy - you'd need production hosts testing real-life performance on VLAR for a good statistical data basis.

ATM I'd rather have the restriction back - better to err on the side of caution.

NB: If you have received VLARs and you start getting -177/-197, it's best to run Fred's rescheduler to extend the time limit, provided you are happy to let them process at such a slow pace.
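For context, the elapsed-time limit behind those errors comes from each workunit's `rsc_fpops_bound` divided by the host's estimated speed, so "extending the time limit" means raising that bound. A rough sketch under that assumption (the FLOPS and bound figures below are illustrative, not from this thread):

```python
def time_limit_seconds(rsc_fpops_bound, est_device_flops):
    """BOINC-style runtime ceiling: the client aborts a task once its
    elapsed time exceeds rsc_fpops_bound / (estimated device FLOPS)."""
    return rsc_fpops_bound / est_device_flops

# Illustrative figures: a 2e15 fpops bound on a device estimated at 2e11 FLOPS
# gives a 10,000 s ceiling; raising the bound tenfold stretches the limit tenfold.
print(time_limit_seconds(2e15, 2e11))       # 10000.0
print(time_limit_seconds(2e15 * 10, 2e11))  # 100000.0
```

This is also why VLARs on a fast GPU hit the limit: the speed estimate is calibrated on normal tasks, so a task running many times slower than estimated exhausts its fpops budget long before it finishes.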
____________
I'm not the Pope. I don't speak Ex Cathedra!

IIRC the 'no VLAR' rule is a switch in the scheduler, which might have become lost when Eric upgraded - we've had that happen before.

Richard is working on 'how to get the server to resend the VLAR to the CPU' instructions, which he will post once he's confirmed the procedure works reliably.

Richard is working on 'how to get the server to resend the VLAR to the CPU' instructions, which he will post once he's confirmed the procedure works reliably.

OK - as the Lady says...

First, these instructions are a first draft, and pretty telegraphic. They assume you're already familiar with the terminology, you know where to find the various BOINC files, and you know the rules for making changes to them. That's what we used to call ADVANCED USERS ONLY.

That's the only warning you're going to get. Read the instructions through carefully: check that you understand every point, and how to do it. If you're at all uncomfortable, don't even start. You're on your own from here.

Ensure you have a CPU application active for MB tasks

Unset 'Use NV GPU' (web preferences)

Set 'Use CPU' (web preferences)

Set 'No new tasks' (BOINC Manager)

Update project (BOINC Manager - if needed, some versions will report work immediately when NNT is set)

For now I'd suggest getting the BOINC rescheduler and having those VLARs rescheduled to your CPU. You'll still have the WUs on board, and they won't have to be resent by S@H when they fail on your NVIDIA card.
____________
In a rich man's house there is no place to spit but his face.
Diogenes Of Sinope

For now I'd suggest getting the BOINC rescheduler and having those VLARs rescheduled to your CPU. You'll still have the WUs on board, and they won't have to be resent by S@H when they fail on your NVIDIA card.

But you are likely to mess up the server's averaging and credit-granting records. I can't be bothered to work out whether you're likely to request too much credit for yourself (only to be dragged back down by your wingmate), or to request too little and drag your wingmate down with you.

The purpose of my 'resend' recipe was to get the server records updated to show the tasks allocated to CPU - that way, runtime and credit should be accurate.

I noticed that Eric (or someone) added a "Use ATI GPU" preference to the project preferences page, probably for the new AP-for-ATI application. I don't have an ATI GPU, but it was set to on. I turned it off and haven't got a VLAR for NVIDIA on the last 5 successful GPU work requests.

Can't be certain there's cause and effect here, but if you don't have an ATI card, you might as well turn it off.
____________
Another Fred
Support SETI@home when you search the Web with GoodSearch or shop online with GoodShop.

It's in the standard BOINC web code, but it's hidden (because presumed useless) until there's an ATI application available for stock download. It will have appeared automatically when Eric added the ATI AP app last night (see news), but it's a good point about checking that the default values on your account are right for you.

This one time you are getting the polite version of my reply, because other people, especially those fairly new to the project, may not know why the 'no VLAR to NVIDIA GPU' policy and code were established:

VLARs run incredibly slowly with the stock (6.08/6.09/6.10) apps. They can run so badly that the whole system freezes or outright crashes.
That was true with the drivers of two years ago - I don't think anybody ever established whether newer drivers cope better, and I'm pretty sure nobody really wants to try.

I did. My GPUs (470's) will run VLARs at 1 WU per card without any problems, but it's slow; 2 WUs per card causes lag, and 3 WUs per card causes major problems.

Hmmm. My recipe worked very nicely while I only had a few VLARs, and they'd all arrived in a neat contiguous block. But now I've got a boatload more, and they're all dotted around individually in ones and twos.

Does anyone know of a nice automated way of finding/deleting a block like this?