So, the estimated times to finish WUs, for CPU MB and AP as well as for the GPUs, are totally off, being far too high. At first I thought VLAR WUs on the 9500GT might be responsible, but I haven't spotted any. The DCF sat around 4.8, so I re-adjusted it to 1.0. At first that made all the estimates pretty sane and correct, but THEN the DCF converged to ~7.5 pretty fast. I don't know why.

BOINC assumes a normal CUDA WU will finish in like 4-6 hours, when in reality the 285 GTX does it in 10-15 minutes, and even the slow 9500GT does it in 1-1.5 hours. Same for the CPU, for both MB and AP: estimates way off. I heard you shouldn't set any FLOPS values in the configuration anymore for some reason, and re-adjusting the DCF doesn't help for more than a few hours at best.

Any idea what I should reconfigure to get the estimates correct, and to fetch enough CUDA WUs to make it through the downtime from Tuesday to about Friday evening (GMT+1)?

BTW, the computing prefs are set to fetch enough work for 10 days and to connect every 0.01 days.

I'm sorry if the information I'm looking for is already available on the forums; I just couldn't find anything but threads about DCF and FLOPS values... and some people were saying "don't use FLOPS settings anymore"... So I wonder what I can do.

Thank you for any help you might be able to provide!
____________3dfx Voodoo5 6000 AGP HiNT Rev.A 3700 prototype, dead HiNT bridge

Hmm, I would assume it would do that by the FLOPS estimation (708 GFLOPS for the 285 GTX and 88 GFLOPS for the 9500GT), no?

But even if it used only the 9500GT for the estimate, it's far too high even for that slower GPU... and even for the CPU. That's what I fail to understand.
____________3dfx Voodoo5 6000 AGP HiNT Rev.A 3700 prototype, dead HiNT bridge

By now the machine has no more CUDA WUs, and the DCF has gone down to 2.7. It seems to fluctuate a lot. But even with the DCF down to 2.7, estimates are roughly double the actual crunching time. I suspect it's the GPUs that mess it all up, for whatever reason. Now, with only the CPU crunching, it seems to slowly reach a more sane level.

Should I perhaps repost this in the Number Crunching forum? Not sure if it would be the right place, though. I'd love to fully utilize both GPUs in that machine, since that's their sole purpose now. :)

Edit: I searched the Number Crunching forums and found a few threads, but no real solution. Most people seem to have this kind of problem because they're using faster customized/optimized apps instead of stock ones, without the (correct) FLOPS settings applied...
____________3dfx Voodoo5 6000 AGP HiNT Rev.A 3700 prototype, dead HiNT bridge

I just wanted to add: I've now tried different things (everything short of switching away from stock apps), to no avail.

So, I had to disable the weaker GeForce 9500GT by setting <use_all_gpus>0</use_all_gpus> in cc_config.xml, and after just a few hours (!) everything went back to normal. The estimates for both CPU and GPU WUs are now very precise, and I'm getting a lot more CUDA WUs despite the lower total GPU performance.
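For reference, that flag is all my cc_config.xml contains (the file lives in the BOINC data directory; with <use_all_gpus> set to 0, the client only uses the most capable GPU):

```xml
<cc_config>
  <options>
    <!-- 0: use only the most capable GPU; 1: use all GPUs -->
    <use_all_gpus>0</use_all_gpus>
  </options>
</cc_config>
```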

So, I suppose BOINC simply can't handle multiple GPUs of different performance levels? At least not when they're working on the same project (I never tried multiple projects with multiple GPUs).

It sure would be nice if that were fixed somehow. Currently I do better with the 9500GT disabled, simply because I can then survive the weekly downtime, when actually performance "could" be higher if DCF/FLOPS were estimated correctly for both the GeForce 285 GTX and the GeForce 9500GT, so my queues would get filled up nicely. This works perfectly on systems with multiple identical GPUs, like my box with the two GTX 480s.

I hope, some future BOINC version will be able to do that for different GPUs in the same system. :)
____________3dfx Voodoo5 6000 AGP HiNT Rev.A 3700 prototype, dead HiNT bridge

OK, I simply could not let it rest! I installed optimized apps so that I could set <flops></flops> in app_info.xml. But the damn DCF still kept slipping out of control.
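For anyone wondering where the tag goes: the <flops> element sits inside the <app_version> block of app_info.xml. A sketch of the CUDA entry, where the app name, file name, and version number are just placeholders for whatever your optimized package actually ships, and 7.08e11 is the 708 GFLOPS figure for the 285 GTX from above:

```xml
<app_version>
  <app_name>setiathome_enhanced</app_name>
  <version_num>608</version_num>
  <flops>7.08e11</flops>  <!-- 708 GFLOPS, i.e. the 285 GTX -->
  <plan_class>cuda</plan_class>
  <coproc>
    <type>CUDA</type>
    <count>1</count>
  </coproc>
  <file_ref>
    <file_name>MB_CUDA_app.exe</file_name>  <!-- placeholder name -->
    <main_program/>
  </file_ref>
</app_version>
```

This fragment goes inside <app_info>, next to the matching <app> and <file_info> entries for the same executable.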

Sooo, I downloaded ActiveState Perl (a free Perl distribution for Windows x86/x64) and wrote a little in-place search&replace one-liner to automatically keep the DCF down at 1.000000 (I don't like having to edit stuff manually all the time). Like this:

I placed the one-liner in a batch script that the Windows Task Scheduler then calls every 10 minutes.

I am not sure how safe in-place editing of client_state.xml really is; I hope there won't be any inconsistencies when BOINC writes to the file at the same time. So far no problems, though I will definitely have to keep watching it. BOINC seems to pick up the new DCF whenever there is a workunit status change, with no need to manually reload the configuration.

This is very dirty, but I don't know how to do it any "cleaner"...
____________3dfx Voodoo5 6000 AGP HiNT Rev.A 3700 prototype, dead HiNT bridge

I set the DCF to 10.000000 while BOINC was running and waited for one CUDA WU to finish, just to test. After that, all the estimated times skyrocketed. But now that I have observed a bit more, I'm not fully sure that was the reason. I found that when I set the DCF to 1.000000 while the client is not running, then start BOINC and let it finish one WU, it writes a new DCF of around 4.5, and the estimates are in the same range as in my test with 10.000000.

So you might actually be right. BOINC might keep the actual DCF only in memory, writing it to client_state.xml only so that estimates are "correct" after a client restart.

Dammit. If that is true, I have no way to influence this behaviour while the client is running... There has to be some way to just FIX the DCF at 1. That would solve all my problems, since I could then fine-tune using the <flops> tags...
____________3dfx Voodoo5 6000 AGP HiNT Rev.A 3700 prototype, dead HiNT bridge

Perhaps you don't know what DCF stands for: (task) Duration Correction Factor. It can't be stuck at the same value, as no task at this project runs for exactly the same amount of time as the next. Even differences of mere seconds are differences already. To say nothing of the variation in angle range, which causes variation in task duration, even with the VLARs now banished to the CPU only.

And while it has to be around 1 for the remaining-time display to be accurate, you also have to give it a chance to reach the correct value for your computer. That's how BOINC learns how long work takes. By constantly changing the value yourself, you're making it impossible for BOINC to learn how long these nasty things take.

So here's a challenge for you: reset the DCF to 1, then sit back and just watch things happen for 2 weeks. And then post back here with the DCF value your system has come up with by then. Remember: no tinkering. Just let BOINC figure it out for once.
____________Jord

That is exactly what I did in the very beginning, with stock apps. And it went totally wrong. The estimates were like 5 to 10 times higher than they should have been, and I don't know why.

I only know that it pretty much stops when I deactivate the second GPU, the 9500GT. But even though that second GPU takes a bit over 1h to complete a task, the DCF pushes estimated completion times to several hours.

So, I have a dual-core CPU at 1.6GHz doing an MB WU in like 4-5h (with optimized apps), the 285 GTX doing it in like 15min, and the 9500GT in like 1-1.5h.

But after some time, the GPU estimates are like 4-5h. The DCF reaches values beyond 10. Estimates for CPU MB WUs reach values like 20 hours and, from what I've seen, stay there. This just can't be right.

So the machine was already starving for some time while all my other machines were supplied with plenty of work, especially my most powerful one (i7 + two GTX 480s).

So, what I want is to make sure the machine doesn't run out of work during the regular weekly downtimes, that's all. So far I have been unable to do that... With my current <flops> settings, all estimates are pretty much perfect at a DCF of 1.000000, so I would like it to stay there. But it just doesn't: it jumps to 4, then 7, then 10... I either have to make sure the DCF stays at 1, or I need to dynamically adjust the <flops> settings. Whatever it takes to keep the pipeline full.

On all my other boxes this works nicely; it's only the machine with two different GPUs that has this problem. I just don't want to keep the 9500GT idle, that would be a waste of resources. I want both GPUs to work. But if that results in the whole machine idling for like 4 days a week, it's not very good...

The funny thing: if the DCF were pushed up just far enough that the estimates matched the slowest processor of each type in the system, it would make sense. Say it pushed the CUDA estimates up to match the speed of the 9500GT: 1.5h per MB WU. Yeah, that would make "some" sense. But it pushes the estimates FAR beyond the time the SLOWEST processor in the system needs to complete a task. 4h? 5h? How come? The GTX 285 does it in 15min, the 9500GT in roughly 90min. Where do the 4-5h CUDA estimates come from?

<vlar>: override the default VLAR value.
<vhar>: override the default VHAR value.
<dcf_min>: minimum duration_correction_factor. If the value read from the state file is less than this, it is replaced by this value.
<dcf_max>: maximum duration_correction_factor. If the value read from the state file is more than this, it is replaced by this value.
You need to set both <dcf_min> and <dcf_max> values! A value of 0 may NOT be used.
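Given those options, pinning the DCF at 1, which is what I was trying to do with the Perl one-liner, should just be a matter of setting both bounds to 1 in the optimized app's configuration file (whichever file these options live in for your package), something like:

```xml
<!-- clamp the DCF to exactly 1: min == max == 1 (0 is not allowed) -->
<dcf_min>1.0</dcf_min>
<dcf_max>1.0</dcf_max>
```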