Hi, for some reason some of you are having problems with these WUs in the new application, so we've moved them to the beta queue to have a proper look. I've also just sent out 50 WUs under the name KLEBEbeta with a much simpler configuration file. These simulations are really important, and fixing this bug will also help future similar projects in drug discovery. Please report any problems you might have with the KLEBE and KLEBEbeta groups.

Thanks for moving to beta to try to fix the problems!
I'm noticing a "new" problem with the NOELIA_KLEBEbeta tasks, though.

Essentially, they get assigned to a GPU, and then when they try to "get going", BOINC shows that they run for about 15-40 seconds, and then the task resets back to the beginning (with Elapsed back to 0 seconds), and it retries.
It just keeps retrying until failure.
Additionally, if the user closes BOINC, the acemd.800-55.exe process for that task does not close properly (it still remains in the Task Manager's process list, even though all other related BOINC processes have exited normally).

Also, the stderr.txt for one of the tasks that I aborted (http://www.gpugrid.net/result.php?resultid=7221709) contained the following lines, which might give a hint as to what's happening:
swanMemset failed
Can't acquire lockfile - exiting
FILE_LOCK::unlock(): close failed.: No error

I have not seen this behavior before today, so I think there is at least 1 new bug here.

This happens both on my GTX 660 Ti, as well as my GTX 460, in Windows 8.1 Preview x64.

The current task exhibiting this behavior is:
109nx4-NOELIA_KLEBEbeta-0-3-RND0846_0

I hope this information helps you track down and correct the problem(s) quickly, as right now my GPU is spinning in circles and doing no work. Are you able to reproduce the problem in your testing?

One of my machines failed this WU, and then I got exactly the same WU on my next machine, where it got stuck O.o I only saw it now, after 2 hours. Phew, early enough before the weekend ^^
____________
DSKAG Austria Research Team: http://www.research.dskag.at

I tested suspending one of these KLEBEbeta tasks, and it caused a driver reset. So, the problem still persists.

Can you please look into it more closely? The issue has to do with how the KLEBE tasks are exiting - it seems they are not releasing the GPU in a timely fashion, compared to every other GPU task I run (across all my GPU projects).

Maybe compare the exit logic of a KLEBE task, versus the exit logic of other GPUGrid task types?
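To illustrate the kind of thing I mean, here is a generic sketch of a "release the GPU before exiting" path, using only the public BOINC API and CUDA runtime calls. This is purely an assumption on my part - I obviously don't know what ACEMD's real shutdown code looks like:

```cpp
// Hypothetical sketch only - NOT the actual ACEMD code.
// Assumes the app links against the BOINC API and the CUDA runtime.
#include <cstdlib>
#include <cuda_runtime.h>
#include "boinc_api.h"

static void release_gpu_and_exit(int exit_code) {
    // Drain any work still queued on the GPU before tearing down,
    // so the driver isn't left waiting on an orphaned context.
    cudaDeviceSynchronize();
    // Destroy the CUDA context so the GPU is freed immediately.
    cudaDeviceReset();
    boinc_finish(exit_code);   // reports to the client and exits
}

// Called periodically from the main simulation loop.
static void poll_boinc_status() {
    BOINC_STATUS status;
    boinc_get_status(&status);
    if (status.quit_request || status.abort_request) {
        // Exit promptly and cleanly when BOINC asks us to stop,
        // instead of leaving the acemd process holding the GPU.
        release_gpu_and_exit(status.abort_request ? EXIT_FAILURE : 0);
    }
}
```

If the real exit path skips the synchronize/reset steps, or never gets a chance to run them, that could explain both the lingering acemd process and the driver resets.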

It happens whenever a NOELIA task (especially KLEBE) is suspended for any reason, including:
- BOINC set to Snooze
- BOINC set to Snooze GPU
- BOINC set to Suspend
- BOINC set to Suspend GPU
- BOINC set to Suspend due to exclusive app running
- BOINC set to Suspend GPU due to exclusive GPU app running
- GPUGrid project set to Suspend
- NOELIA KLEBE task set to Suspend
- BOINC exited with "Stop running tasks" checked

Something in the KLEBE exit logic has been causing driver resets and watchdog timeouts, for several months, for many of your Windows users. I sure hope you guys can work together to get a handle on it!

Note: I do use the "Leave application in memory when suspended" setting, but so far as I know, that is irrelevant to GPU tasks. When a GPU task is suspended, BOINC has to remove it from memory, regardless of that user setting. It treats GPU tasks differently because there's no PageFile backing the GPU RAM.

Thanks for looking into this. It's my biggest problem across all of my 20 BOINC projects.

I have two NOELIA KLEBEbeta tasks on my 770. They start, then at 0.021% complete there is no more progress, but they keep running: 2h57m16s elapsed and 0h0m0s remaining. This was with app 8.00; I have now aborted these WUs and will try the new 8.01 app.

Would anyone with a cc 1.3 card - Geforce GTX 200 series - please try some of the current acemdbeta v801 Noelia-KLEBE WUs and report back here?

MJH

OK, I started one with 8.01. But this can take some time, even on my 670 MHz GTX 285... I normally don't run GPUGrid on this card anymore. It will need about 33 hours. The short-run 8.00 WU was OK on this card. I don't think anybody still uses a power-hungry 200-series card on long runs O.o
____________
DSKAG Austria Research Team: http://www.research.dskag.at

I noticed the following:
On the 770 I have this one: 063px79-NOELIA_KLEEBEbeta2-0-3-RND678_0
MEM use: 1003MB
Clock: 1097MHz (however I have set the clock to 1060MHz!)
GPU load: 87%
Temp: 65°C
7.5% done in 40 minutes

I know these are not the same WUs, and the GPUs are not the same either. But it is strange that the WU can manage to push the clock higher - or this must have been the result of the faulty WU that I aborted without rebooting afterwards.

Due to the difference in memory load, it could also be that cards with only 1GB can no longer handle these WUs as they did before. That may result in some comments ;-)
____________
Greetings from TJ

Oh, and if somebody else has started one on a 200-series too, please tell me. I got my energy bill today, so I would love to stop mine within the next few hours if it's not needed :p
____________
DSKAG Austria Research Team: http://www.research.dskag.at

Suspended the WU. When I resumed it, 5min later, I got the error message,
"Display driver nvlddmkm stopped responding and has successfully recovered".

When I checked my Windows logs I saw,
"A request to disable the Desktop Window Manager was made by process (4)" - listed 2sec before the driver crash/restart entry. The driver log entry was made after the driver restarted rather than when the failure was triggered.

The WU again continued 'running' without progressing. I aborted it, but this time the stderr contains nothing of any use.

I have noticed that, using the 8.01 app on a NOELIA_KLEBEbeta task, on my GTX 660 Ti, the process does not utilize a full CPU core (like other GPUGrid tasks normally do for that GPU). It's like SWAN_SYNC is not set correctly. Though I'm still getting good (85-91%) GPU utilization for the task.

Oh, and if somebody else has started one on a 200-series too, please tell me. I got my energy bill today, so I would love to stop mine within the next few hours if it's not needed :p

If it is still running then that's plenty long enough to demonstrate that all is well, thanks. You can kill it off.

Matt

OK, it ran for one hour, was at 3.3%, 95% GPU load, and used 515MB VRAM; the CPU was busy working on LHC and everything still computed normally. Thanks, I've aborted it. ^^
____________
DSKAG Austria Research Team: http://www.research.dskag.at

I have noticed that, using the 8.01 app on a NOELIA_KLEBEbeta task, on my GTX 660 Ti, the process does not utilize a full CPU core (like other GPUGrid tasks normally do for that GPU). It's like SWAN_SYNC is not set correctly. Though I'm still getting good (85-91%) GPU utilization for the task.

Is this behavior new? Also, is it expected?

I thought NOELIAs never used a full CPU core; that's the way it's always been. We've talked about it before in different threads.

:) Yeah, thanks. I've helped Einstein fix a bug, MindModeling fix a bug, GPUGrid fix a couple things, Test4Theory fix a bug, Rosetta fix their app, SETI fix a GPU estimate problem, got nVidia to fix a monitor-sleep issue, and more. And I also do alpha/beta testing of the actual BOINC software, and have worked directly with the BOINC devs.

Regarding this particular case, I believe I was aware of some tasks "not using a full CPU core on my Kepler card", but I did not know it was NOELIA ones. I'll try to keep that in mind.

I still consider this different CPU load a malfunction.
However, with this low CPU load the GPU load is still above 95%, so we can turn the question around: are we sure that the other tasks need a full CPU thread to feed a Kepler GPU?

On the Folding forum, there have been extended discussions of Nvidia CPU core usage under CUDA. It contrasts with the case of AMD cards running OpenCL, which typically require only a few percent of a CPU core.

As I recall, Nvidia gives developers the option to reserve a full CPU core when running under CUDA by using spin states, which I don't understand anyway. If the application developers want to ensure that they have enough CPU support, they can reserve it, even though typically not all of it is actually in use.

So maybe the other tasks don't really require a full core, except that it may be useful to reserve it for stability or performance or whatever.

EDIT: To further complicate matters, Nvidia cards running OpenCL always require a full CPU core; there is no option not to.

Watching two different third-party developers working on SETI (one specialising in CUDA, the other in OpenCL), we get the opposite outcome: OpenCL on ATI is inefficient unless a spare CPU core is available, but CUDA on Nvidia requires very little CPU.

I'm not a developer myself (at least, not at the level these guys program), but from the peanut gallery it looks as if CPU usage is very much down to the skill of the developer, and how well they know their platform and tools.

But I'm interested in the OpenCL on Nvidia point. That does seem to be a common observation - I wonder if it necessarily has to be so? Or maybe Nvidia didn't port some of their synch technology from CUDA to the OpenCL toolchain yet?
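For reference, the CUDA-side mechanism being discussed - spin versus blocking synchronisation - is exposed through the runtime's device scheduling flags. A minimal, purely illustrative sketch (generic CUDA runtime code, not anything from GPUGrid/ACEMD):

```cpp
// Illustrative only: shows the standard CUDA scheduling flags, not ACEMD's code.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    // cudaDeviceScheduleSpin: the host thread busy-waits while the GPU works,
    // which burns a full CPU core but gives the lowest sync latency.
    // cudaDeviceScheduleBlockingSync: the host thread sleeps on a primitive
    // instead, so CPU usage drops to a few percent at a small latency cost.
    cudaError_t err = cudaSetDeviceFlags(cudaDeviceScheduleBlockingSync);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaSetDeviceFlags failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    // ... create the context and launch kernels; cudaDeviceSynchronize() will
    // now block without spinning.
    return 0;
}
```

Whether an equivalent knob is exposed (or honoured) in Nvidia's OpenCL stack is exactly the open question here.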

8.02 app running the NOELIA beta WUs (one on the CUDA 4.2 app and the other on the 5.5 app).

When I use Snooze the driver still restarts. I have the driver timeout set to 20 sec, and it takes 20 seconds for the driver to crash/restart.

When I suspended the WUs individually they didn't cause a driver restart.

However, when I suspended both at the same time the driver restarted, again after 20 sec. (These driver restarts, or lack of restarts, are repeatable in each situation.)

I noted that the 5.5 WU kept running (progressing) for about 4 seconds after I suspended it.

But I'm interested in the OpenCL on Nvidia point. That does seem to be a common observation - I wonder if it necessarily has to be so? Or maybe Nvidia didn't port some of their synch technology from CUDA to the OpenCL toolchain yet?

The GK104 cards are supposed to be OpenCL 1.2, but the drivers are only OpenCL 1.1, which means the toolkit can't be 1.2.
AMD/ATI supports OpenCL 1.2, Intel supports OpenCL 1.2, and NVidia says its GPUs are OpenCL 1.2, but their drivers prevent the cards from being used for OpenCL 1.2.
____________
FAQ's

On my Linux systems I have the STABLE repository drivers (304.88), supposedly only CUDA 5.0.
However, I'm presently running a CUDA 5.5 NOELIA beta WU (12 h in, 3 to go).
I thought CUDA 5.5 would only be used if the system had the correct drivers?
____________
FAQ's

Watching two different third-party developers working on SETI (one specialising in CUDA, the other in OpenCL), we get the opposite outcome: OpenCL on ATI is inefficient unless a spare CPU core is available, but CUDA on Nvidia requires very little CPU.

I'm not a developer myself (at least, not at the level these guys program), but from the peanut gallery it looks as if CPU usage is very much down to the skill of the developer, and how well they know their platform and tools.

That is quite true from my own experience too (as a user only), but I think we are talking about two different things. Neither ATI on OpenCL nor Nvidia on CUDA requires a CPU core unless the project developer requires it. And usually CUDA can be made more efficient in its CPU usage. Certainly that is the case with Folding, with their separate OpenCL core_16 (for AMD cards only) and CUDA core_15 (obviously for Nvidia cards only); the CUDA one is much better (less than 1 percent versus maybe 20 percent or more).

But I'm interested in the OpenCL on Nvidia point. That does seem to be a common observation - I wonder if it necessarily has to be so? Or maybe Nvidia didn't port some of their synch technology from CUDA to the OpenCL toolchain yet?

All I know is that on Folding with their newest OpenCL core_17, which runs on both AMD and Nvidia, the situation is reversed. It requires only 1 or 2 percent on AMD cards (e.g., my HD 7870 on an i7-3770), whereas on an Nvidia card it reserves a full core (e.g., on my GTX 660 Ti). The question has been asked on the Folding forum as to whether that is necessary, and the answer is that Nvidia has not implemented the option in OpenCL to use less than a full core. Apparently they could if they wanted to, but maybe for performance reasons (so the speculation goes) they want their cards to perform the best they can, so they just grab the whole core. It helps solve the problem you mentioned above, where users don't always know to leave a core free I suppose.

That was my suspicion too. In trying to pass messages between the two developers - apparently the new CUDA way is to use 'callback' rather than 'spin' synch - I was invited to refer to the NVidia toolkit documentation to find examples for the OpenCL implementation. I couldn't find any.

If there are any unbiased developer observers of this thread, it would be useful to hear if there is any factual basis for our observations - and for the rumour I've heard that NVidia might pull away from OpenCL support entirely. That would be a shame, if true - both NVidia and ATI (as it was then) were founder members of the Khronos Group in January 2000. It would be a pity if competition drove out collaboration, and we returned to the days of two incompatible native-code development environments.

Or maybe Nvidia didn't port some of their synch technology from CUDA to the OpenCL toolchain yet?

That's what I suppose as well, without being a GPU developer. Over a year ago nVidia's performance at POEM's OpenCL app was horrible, but it only used ~50% of one core. A driver update doubled performance, but since then they've been using a full CPU core.

To me it seems like "just use a full core" was a quick fix. And now they don't want to push OpenCL any further than they have to and just stick with this solution.

It can be checked in the Windows Task Manager: look for the acemd.80x-55.exe (or acemd.80x-42.exe) process on the "Processes" tab. If its CPU usage is 1-2%, then it's not using a full core; otherwise its CPU usage is 100 divided by the number of your CPU's threads (12-13% on an 8-threaded CPU, 8% on a 12-threaded CPU). You can also check past workunits' CPU usage in your hosts' task list: if the "CPU time" (almost) equals the "run time", then the task used a full core; if the "CPU time" is significantly less than the "run time", then it didn't use a full core.

Just note that the NOELIA_KLEBE WUs don't use a full CPU core/thread - they never have.
My BOINC scheduler has them at 0.595 CPUs, but actual use is less than that (2 or 3% of the entire CPU, which means <= 0.25 CPU threads).
____________
FAQ's

I have overridden the CPU requirements, via app_config.xml. Because I have 2 GPUs that do GPUGrid (1 Fermi, 1 Kepler), I had set cpu_usage to 0.5 for all GPUGrid app types, so that when both cards are working on GPUGrid, BOINC reserves 1 total CPU core for them, keeping the CPU slightly above saturation. I've since changed my logic a bit so as to slightly undersaturate the CPU; I accomplished that by changing cpu_usage to 1.0 for all GPUGrid app types, so a logical CPU core is reserved for each, which I think is what you guys always recommended anyway.
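In case it helps anyone, a minimal app_config.xml along those lines would look roughly like this. The app name below is just a placeholder - use the actual app names from your client_state.xml (the beta queue mentioned in this thread is a separate app with its own entry):

```xml
<app_config>
  <app>
    <!-- placeholder app name; copy the real one from client_state.xml -->
    <name>acemdlong</name>
    <gpu_versions>
      <!-- one task per GPU -->
      <gpu_usage>1.0</gpu_usage>
      <!-- reserve one logical CPU core per GPU task -->
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```

Setting cpu_usage to 0.5 instead reserves half a core per task, which is how I had it before.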

Long story short, I used Process Explorer to confirm that NOELIA_KLEBE units strangely do not use a full CPU on my Kepler card, whereas it seems to me that every other GPUGrid task does use a full CPU on my Kepler card. It matters to me since they are "mixed in" with other tasks in the "long" app, and my cpu_usage setting now applies to some tasks that won't use a full core. In a perfect world, and if I were an admin, I might consider placing "strange types" like this in a separate app queue, maybe.

Thank you very much for confirming this is "normal" for NOELIA_KLEBE on Kepler.
Jacob.

It can be checked in the Windows Task Manager: look for the acemd.80x-55.exe (or acemd.80x-42.exe) process on the "Processes" tab. If its CPU usage is 1-2%, then it's not using a full core; otherwise its CPU usage is 100 divided by the number of your CPU's threads (12-13% on an 8-threaded CPU, 8% on a 12-threaded CPU). You can also check past workunits' CPU usage in your hosts' task list: if the "CPU time" (almost) equals the "run time", then the task used a full core; if the "CPU time" is significantly less than the "run time", then it didn't use a full core.

Thanks Zoltan,
This is what I thought, and that is how I look at Task Manager. The NOELIA WUs, both the one we have now and past ones, use 1-3%. Rosetta is using 13% per core.
I have also seen Nathans never using less than 13%, and Santis that don't use 13% all the time: it fluctuated from a steady 2% up to 11% for a few seconds and then back to 2% again. But I am not watching Task Manager a lot.
____________
Greetings from TJ

Just a note: there are also NOELIA_KLEBE WUs on the acemdbeta queue. Somewhat confusingly, those are test WUs for the beta app and aren't part of this batch. If you have problems, please check which application was used and, if appropriate, report it over on the thread about the beta application: