It has come to our attention that some users have been setting their BOINC clients to automatically abort work units from specific applications. Doing this sends error results back to our server, which can leave some work units unable to validate. Essentially, it prevents some of our hard-working crunchers from getting their due credits.

The proper way to stop receiving work units from a specific application, such as our beta applications N-Body or Modified Fit, is to go to your account page on our website (http://milkyway.cs.rpi.edu/milkyway/home.php). Under the Preferences section, select the link for your preferences for this project, then follow the link on that page to edit those preferences. Halfway down your preferences you will find check boxes in the "Run only the selected applications" section. You will only receive work units for the applications you have check marks next to.

For reference:
Milkyway@home is our flagship application and is considered stable and in its final released state.
Milkyway@home N-Body Simulation is our beta N-body simulation and orbit fit program.
Milkyway@home Separation is, as of now, an unused application.
Milkyway@home Separation (Modified Fit) is our beta separation code, testing new models for both streams and background in the Milky Way halo.

As usual, if you have any issues with this method or questions about it, please post them here. We appreciate your cooperation and understanding.

Thank you,

Jake W.

TL;DR: If you are auto-aborting work units, please stop and use the method above instead, to prevent other users from losing credits and to prevent problems in our algorithms.

So one GPU is running 4 WUs at a time, which means no down time. This particular machine has two CPU cores but 4 virtual cores, so again, no CPU down time: 4 CPU WUs and 4 GPU WUs, which is about 10% more work than letting them cycle down. I also get constant fan speeds and more stable temperatures.


Holler if anyone wants my 2-GPU xml files.
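For anyone who can't wait for the files, a minimal app_config.xml along these lines should give 4 concurrent tasks per GPU. This is only a sketch: the app name milkyway and the 0.25 CPU share are assumptions, so match them to the application names your own client reports.

```xml
<!-- app_config.xml: place in the MilkyWay@home project directory,
     then restart BOINC (or use Options -> Read config files).
     Sketch only: app name and usage values are assumptions. -->
<app_config>
  <app>
    <name>milkyway</name>
    <gpu_versions>
      <!-- 0.25 GPU per task means 4 tasks share one GPU -->
      <gpu_usage>0.25</gpu_usage>
      <!-- reserve a quarter of a CPU core per GPU task -->
      <cpu_usage>0.25</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```

If the client ignores the file, check the event log for an "unknown application" message, which usually means the name element doesn't match.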

Hey, this is exactly what I want, but it doesn't work. I have an i7-3930K CPU with 6 cores and 12 threads and two GTX Titans.
If I let MW@Home run without any app_config or similar, all 12 threads and both GPUs are working.
If I add the app_config, only the GPUs work correctly (2 WUs per GPU), but the CPU does nothing... BUT if I drag and drop the app_config.xml file out of my MW@home folder and restart BOINC, everything works fine (12 CPU WUs and 2 GPU WUs on each card) for around 10 minutes! After those 10 minutes, the GPUs automatically stop the additional WUs and keep processing just one WU on each card.
How can I make all, or even 8 to 10, threads work while using the multiple-WU app_config?

After changing <max_concurrent>4</max_concurrent> to <max_concurrent>14</max_concurrent> inside app_config.xml and <ncpus>4</ncpus> to <ncpus>14</ncpus> inside cc_config.xml, it works fine.
4 GPU WUs (2 WUs/GPU @ 0.25 CPU per GPU WU) and 10 CPU WUs are active. Hope it holds longer than 10 minutes :D

Edit: With an optimized app_config and cc_config, I can run 23 WUs at the same time: 12 on GPU (6 per card with double precision enabled) and 11 on CPU. Each GPU WU takes around 2 minutes to complete; CPU WUs run between 1 and 2 hours.
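For anyone wanting to try the same fix, the two changes described above might look like the sketches below. The numbers are the ones from this post; the app name is an assumption and should match whatever your client's event log reports.

```xml
<!-- app_config.xml (in the MilkyWay@home project directory):
     raise the cap on concurrent tasks for this project -->
<app_config>
  <app>
    <name>milkyway</name>  <!-- app name is an assumption -->
    <max_concurrent>14</max_concurrent>
  </app>
</app_config>

<!-- cc_config.xml (in the BOINC data directory): tell the client
     to schedule 14 CPUs so GPU tasks don't starve the CPU WUs -->
<cc_config>
  <options>
    <ncpus>14</ncpus>
  </options>
</cc_config>
```

Note these are two separate files in two separate directories; cc_config.xml applies client-wide, while app_config.xml only affects this project.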

It seems to me the project has a problem and we users are being blamed for it, and the project is NOT helping to solve it!! Label the choices according to the units you are sending out and I WILL uncheck them!!! Until then, deal with the problem, just like I am!!!

Any runs named _separation_ are coming from Milkyway@Home, and runs named _modfit_ are coming from Milkyway@home Separation (Modified Fit). This run may have a slightly more complicated data set, so it might actually just take longer to run them. Those are Jeff's runs, and I am meeting with him in 10 minutes. I will let him know about your problem and see what he thinks is going on.

Hey AMueller91,
Glad you figured it out. I have not seen any benefit beyond 4 tasks per GPU. The key is no down time, and the chance that 4 tasks finish at the same time is minimal.

If they are running in tandem, just pause one and then start it back up. All you need for CPUs is to set logical cores = physical cores plus 1.

Exactly :)

After it started working fine with 6 tasks per GPU, I tested the maximum number of WUs on my Titan cards. Without double precision, they can only handle a maximum of 3 WUs per card to reach a GPU load of 99%. But with double precision enabled, I get a maximum of 8 WUs per card (16 GPU tasks simultaneously) at 99% GPU load. I let it run for around 5 minutes and finished nearly 30 tasks, but the cards also heated up to 90°C.

So I'm fine with 6 WUs per card. It runs stable, without errors, with temps around 85°C.


Sorry,

Jake W

I really don't see the difference, they BOTH say Milkyway@home!! Are you trying to say you are getting units from a 3rd-party supplier, putting the MilkyWay@home name on them, and are still not responsible if they are bad or don't work?

Today I got a message from MW saying the driver I am using, the AMD 13.10 Beta, is not supported here. Okay, that's fine, but I can't find a list of which ones ARE supported. Is this a trial-and-error thing until I stop getting the message, or am I just not seeing the list of approved drivers somewhere?