Hmmm. My recipe worked very nicely while I only had a few VLARs, and they'd all arrived in a neat contiguous block. But now I've got a boatload more, and they're all dotted around individually in ones and twos.

Does anyone know of a nice automated way of finding/deleting a block like this?

Well, use either \s{4} to match those 4 spaces, or simply 4 "normal" (literal) spaces, which IMO is more readable in case you need to adjust something. According to my client_state there are two lines with 8 spaces in them; for those you would use \s{8}.

But that's just something to start with. I don't know whether it applies only to CPUs, but I have, for example, an fpops_cumulative tag (and of course no plan_class, and windows_x86_64 as platform). So others might have some other tags or values in there as well.
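A hedged sketch of one way to automate the find-and-delete step, in Python rather than an editor regex. The tag name, the ".vlar" name marker, and the helper name are all assumptions for illustration, not anything confirmed in this thread; your client_state.xml may carry extra tags (fpops_cumulative, plan_class, ...), which is exactly why this matches whole blocks rather than a fixed line count. Back up client_state.xml and stop the client before trying anything like it.

```python
import re

def drop_vlar_blocks(text, tag="workunit", marker=".vlar"):
    """Remove whole <tag>...</tag> blocks whose contents contain marker.

    Hypothetical helper: the default tag and the '.vlar' suffix are
    assumptions -- check what your own client_state.xml actually uses.
    """
    # \s* before the opening tag tolerates any indentation
    # (4 spaces, 8 spaces, tabs, ...), so extra or odd indenting
    # inside the file does not break the match.
    pattern = re.compile(
        r"\s*<" + tag + r">.*?</" + tag + r">",
        re.DOTALL,  # let .*? span multiple lines
    )

    def keep_or_drop(match):
        # Drop the block only if the marker appears anywhere inside it.
        return "" if marker in match.group(0) else match.group(0)

    return pattern.sub(keep_or_drop, text)
```

The non-greedy .*? matters: a greedy .* would swallow everything from the first opening tag to the last closing tag in the file, deleting non-VLAR blocks along the way.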

I noticed that Eric (or someone) added a "Use ATI GPU" preference to the project preferences page, probably for the new AP-for-ATI application. I don't have an ATI GPU, but it was set to on. I turned it off and haven't received a VLAR for Nvidia on the last 5 successful GPU work requests.

Can't be certain there's cause and effect here, but if you don't have an ATI card, you might as well turn it off.

I've changed the settings to disable ATI GPUs and I've not received any more VLARs for GPUs...
(Sadly, I've not received VLARs for CPU either, so I can't tell whether the change did the trick or there simply are no VLARs available to send...)


Well, the "four literal typed spaces" was the brute-force approach that confirmed it was the indenting which caused the problem.

Looking at Horacio's tutorial, example (20) suggests \s* might do the trick. I'll work on it some more in the morning - I think I have enough non-VLAR CUDA tasks to last me until then.
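For what it's worth, a quick way to see why \s* is more forgiving than a fixed count like \s{4} (the sample lines below are made up for illustration, not taken from anyone's client_state):

```python
import re

line4 = "    <name>foo.vlar</name>"               # 4-space indent
line8 = "        <rsc_fpops_est>1</rsc_fpops_est>"  # 8-space indent

# \s{4} pins the pattern to exactly one indent depth:
# it consumes 4 spaces and then expects the tag immediately.
assert re.match(r"\s{4}<name>", line4) is not None
assert re.match(r"\s{4}<rsc", line8) is None  # 4 spaces still remain

# \s* absorbs any amount of leading whitespace, so the same
# pattern works at either depth (or with tabs).
assert re.match(r"\s*<name>", line4) is not None
assert re.match(r"\s*<rsc", line8) is not None
```

So a single \s*-based pattern covers both the 4-space and 8-space lines, at the cost of also matching them anywhere else they appear, so anchor the rest of the pattern carefully.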

My GTX 5xx Fermi cards seem to handle the VLARs fairly well, at around 3x the time they need for a mid-range task.
My Linux PC with a GTX 260 ends all VLAR tasks after 1h50m with a -177 error (estimated processing time is now at 13 minutes). I've suspended all 19 GPU VLAR tasks on that PC for the time being.

I've just had it resent to CPU, but the wingmate is still showing the original sent time of 11:09 UTC today, and the WU demonstrates that they are (or were then) still being sent to both stock and anonymous platform hosts.