The server needs to try each compatible version (gather statistics), to determine which is best. This should converge on Cuda5 for those after many tasks. If it converges on the wrong one, there will be a reset statistics button (at some stage).

So a select version option in app_config.xml could be a good idea...

That will be a long night/day for you guys... hope you all have a good beer & coffee stock to help.

I am running stock now, and doing exactly what Jason suggests. It will all balance out in the long run, and with a project that has the potential for going long past my expected lifetime, I am happy. It will all balance out on its own. Then the tweaking begins......

Forcing an application version can already be done with app_info.xml, as the installers will do. From the server's perspective, it needs to acquire the knowledge about versions that you already have, but it starts as a blank slate. For the credits to dial in, and for your APRs to correctly confirm what you already know (or break horribly), it's best to let it run stock for a while and see if it works out sensible numbers, or David & Eric need to be locked in a small dark room together until they work it out :D

Jason"It is not the strongest of the species that survives, nor the most intelligent that survives. It is the one that is the most adaptable to change."
Charles Darwin
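For anyone who hasn't used the anonymous platform before, a minimal app_info.xml fragment for pinning one build looks roughly like this (a sketch only - the executable name and version number here are illustrative, not the exact ones the installers write):

    <app_info>
        <app>
            <name>setiathome_v7</name>
        </app>
        <file_info>
            <name>Lunatics_x41zc_win32_cuda50.exe</name>
            <executable/>
        </file_info>
        <app_version>
            <app_name>setiathome_v7</app_name>
            <version_num>700</version_num>
            <plan_class>cuda50</plan_class>
            <coproc>
                <type>CUDA</type>
                <count>1</count>
            </coproc>
            <file_ref>
                <file_name>Lunatics_x41zc_win32_cuda50.exe</file_name>
                <main_program/>
            </file_ref>
        </app_version>
    </app_info>

With that in the project directory the scheduler stops choosing versions for you - which also means the server gathers no statistics for that host.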

The server needs to try each compatible version (gather statistics), to determine which is best.

We could save it a lot of trouble if that were an option.

As per previous post, yeah you already have that option with app_info.xml.
In the short term, it's more about dialling in credits, which will probably be all over the place for some time.

In the previous thread, there were some comments about VLARs going to Nvidia GPUs now. I'm still on v6 for another 10 hours while draining my cache, and I notice that I have 3 VLARs and a non-VLAR now running on my 670 with x41zc, Cuda 5.00. I don't see any adverse effects except that run times will be longer than normal and there is a little lag in the system (responsible for any typos in this post!) :)

But the odd thing is that my GPU temperature is more than 10 degrees below normal. No downclock, and GPU utilization is at a constant 99%. CPU usage is below normal for 6.10 tasks. Why would these run cooler? I expected the opposite.

Another Fred

I see the exact same thing. My guess is that since VLARs don't parallelize well, you can't keep as many cores busy in the GPU. However, Precision X reports high CPU usage, especially with another task running on that same GPU. Fewer cores in use means less heat.

I can see this is going to be a problem, since not only does a VLAR run much longer than a normal work unit, it also degrades the other jobs running on that same GPU. I don't know if there is a workaround other than running only a single task at a time on all my GPUs, and that doesn't seem to be a very efficient use of GPU resources. I would much rather the VLARs stay on the CPUs, which handle them pretty well. I wouldn't care if all my CPU tasks were VLARs.
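If it comes to that, running one-at-a-time doesn't need app_info.xml; an app_config.xml in the project directory controls tasks per GPU (a minimal sketch, assuming the stock v7 app name setiathome_v7):

    <app_config>
        <app>
            <name>setiathome_v7</name>
            <gpu_versions>
                <gpu_usage>1.0</gpu_usage>
                <cpu_usage>0.2</cpu_usage>
            </gpu_versions>
        </app>
    </app_config>

gpu_usage is the fraction of a GPU one task claims: 1.0 gives one task per GPU, 0.5 gives two at once. It does need a recent BOINC 7 client that supports app_config.xml.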

I have something weird going on with my 2 machines: they were assigned VLARs, but they are showing up as suspended by user, and I did not suspend them.

Is there a need to change preferences for amount of work and additional work with V7? I seem to remember they are now backwards, or have I read too many posts.
____________
Dave

Think you're remembering the difference in work fetch settings between BOINC 6 (and earlier) and BOINC 7, where you have to switch the settings. Unless you upgrade BOINC, there's no need to change those settings. You do need to make sure SETI@home v7 is selected in your website project preferences.
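For reference, the two settings map to these tags in global_prefs_override.xml (the values here are just placeholders). Under BOINC 7 the first is the minimum buffer and the second is extra work fetched on top of it, which is roughly the reverse of how BOINC 6's "connect about every X days" behaved:

    <global_preferences>
        <work_buf_min_days>0.5</work_buf_min_days>
        <work_buf_additional_days>0.25</work_buf_additional_days>
    </global_preferences>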

Update on my earlier VLAR comment. Lag time got out of hand - had trouble making that post. Dropped down to 3 at a time (2 VLARs) and it is still bad. Took 10 seconds to open this thread. VLARs on Nvidia aren't going to work for me. They also have an adverse impact on other GPU tasks - their run times are abnormally long.

Another Fred

Don't send those things to AMDs either. I tried a few on my 6850 with MB7_win_x86_SSE_OpenCL_ATi_HD5_r1817.exe. They work, but the computer has 'spikes' of unresponsiveness. You can actually see it in the SIV CPU meter as a clear line every 30 seconds or so. That is with the period of iterations set at 32. Not to mention they took ~40 minutes to complete; the 6850 does an unblanked AP in less time. The GPU temp was lower, and so were the credits...
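For anyone wanting to experiment, that "period of iterations" knob is the -period_iterations_num switch in the app's command-line file (the exact file name varies by build - something like mb_cmdline_win_x86_SSE_OpenCL_ATi_HD5.txt next to the exe; treat that name as an assumption). Raising it splits the PulseFind kernels into more, shorter launches, trading a little throughput for responsiveness:

    -period_iterations_num 64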

It's surprising this class of GPU didn't show pressure here under Beta test with the expected new two-tasks-per-GPU optimum. If the reported experiences match the general consensus on these cards, I would request a review of either:

- Removing VLARs from being sent to these GPUs, OR
- a change in default settings, OR
- an Opt-in/Opt-out feature. [e.g. my own aging Core2Duo with GTX 680 happily crunches them while watching the Starship Troopers trilogy; I'd like to crunch them because they are longer & should hopefully earn more credit]

In general there are a few things to be aware of (VLAR or not):
- V7 does new processing (Autocorrelations) that changes the dynamics quite substantially, including making all task times longer, so they are not comparable to V6.
- If you were running 3, 4 or more tasks on the same GPU before, that is quite likely too many under V7. Autocorrelations are very memory intensive; reduce it to 2 at once per device. This is the 'main' reason for running cooler.

VLAR in particular:
- will be noticeable if you have too many running at once. If you experience any display lag with these, reduce the # of instances from 4 or 3 to 2.
- If problems persist, suspect 'system overcommit'. Try the following settings in the empty supplied cfg file for the app:
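The thread cuts off before the actual values. Purely as an illustration: the cfg file that ships with x41zc is mbcuda.cfg, and the knobs in it look like this - key names from memory of the stock file, values placeholders; this is an assumption, not the recommendation that followed:

    [mbcuda]
    ; run the app above normal priority so GPU feeding isn't starved (assumed key)
    processpriority = abovenormal
    ; smaller pulsefind batches per kernel launch reduce display lag (assumed keys/values)
    pfblockspersm = 4
    pfperiodsperlaunch = 100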