From what I have seen, an OpenCL task takes about the same amount of video memory as a CUDA task. So from a purely technical angle:
a 560 Ti with 1.3 GB can hold 3 or 4 tasks
a GTX 660 with 2 GB will hold 8 tasks
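As a rough sketch of that capacity arithmetic (the ~400 MB per-task working set and the 100 MB driver reserve are assumptions for illustration, not measured values from the actual apps):

```python
# Rough estimate of how many GPU tasks fit in a card's video memory.
# MB_PER_TASK and RESERVED_MB are assumed example figures, not
# measurements from the real AP/MB applications.
MB_PER_TASK = 400   # assumed working set of one GPU task, in MB
RESERVED_MB = 100   # assumed VRAM kept back for the driver/display

def max_concurrent_tasks(vram_mb):
    """Whole tasks that fit in the remaining video memory."""
    return max(0, (vram_mb - RESERVED_MB) // MB_PER_TASK)

print(max_concurrent_tasks(1300))  # 560 Ti with 1.3 GB -> 3
print(max_concurrent_tasks(2048))  # GTX 660 with 2 GB  -> 4
```

Note that with these assumed figures the 2 GB card only fits 4 tasks; the figure of 8 quoted above implies a smaller per-task footprint (roughly 250 MB), so measure your own card's actual usage before trusting either number.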
____________

While the memory of a given card may be able to "hold" a number of WU, the processor (on the GPU) can only efficiently work on two or three at a time.

If you want to try it, start by processing one WU at a time and let the card run for several hours, noting the processing time for each; then repeat with two, three and four WUs at a time. Provided the random sample of WUs you have been fed are all much the same, you will see a slight increase in per-WU processing time between 1 and 2, and between 2 and 3, and a very large increase between 3 and 4.
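The comparison described above boils down to tasks-per-hour at each multiplicity. A minimal sketch, with made-up per-WU times standing in for the figures you would record yourself:

```python
# Throughput comparison for running N WUs at once on one GPU.
# The per-WU elapsed times below are invented example values; substitute
# the averages you record during your own test runs.
avg_secs_per_wu = {1: 3600, 2: 3900, 3: 4300, 4: 7000}

def tasks_per_hour(concurrent, secs_per_wu):
    # N WUs running together all finish every secs_per_wu seconds.
    return concurrent * 3600.0 / secs_per_wu

for n in sorted(avg_secs_per_wu):
    print(f"{n} at a time: {tasks_per_hour(n, avg_secs_per_wu[n]):.2f} tasks/hour")
```

With these example numbers, throughput peaks at 3 WUs at a time (about 2.51 tasks/hour) and falls back at 4 (about 2.06), matching the "very large increase" in per-WU time described above.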

You do of course need to have a feed of WUs, so don't try this while the servers are "having a holiday", which they are as I type (and this holiday may well continue until about 18:00 UTC, 19:00 your local time). I was about to start trying this on my new cruncher, but I think I'll leave it set at 1 per GPU (it's a 690, which has two GPUs on one card).
____________
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?

I did try running with one core feeding the GPU about 6 months ago (with the current hardware and software set-up), and the overall throughput (total tasks per hour) was lower with one core set aside for feeding the GPU than with the default settings, where there is a "free for all". I have no doubt that for other CPU+GPU configurations the results would be different.

(I have run GPU-Z on a few occasions, and the load time (the time when the GPU is essentially idle) is only a few seconds, between 5 and 10; not a lot when the run time is way over a thousand.)

That's just because 1 core is not enough on AMD machines.

I finish 2 APs in less than an hour.
That's ~3000 seconds.
You finish in about 13000-15000 seconds.
Do you see the difference?
I've been running APs for over 2 years now and am fully aware of the conditions.

Claggy also has a 460, and it's faster than my ATI.

My GTX460 (which is a factory-overclocked variant) does AP WUs in about 35 minutes (2100 secs), while my HD7770 (again a factory-overclocked model) does AP in about 1 hour (3600 secs), both one at a time.

For both the Nvidia and AMD/ATI AP apps, you've absolutely got to reserve a core.
For the Nvidia app it's because of a change in the 270.xx drivers: after those drivers, Raistmer's Nvidia OpenCL apps fully utilise a core to feed the app (aka the 100% usage bug/feature).
If you don't reserve a core, the app is not fed as fast and takes a lot longer to finish.
You can downgrade to 26x.xx drivers to get around the 100% usage bug/feature, but then the app isn't as fast as with a free core.

For the AMD/ATI app it is the same but opposite: if you don't free a core, sometimes the WUs take two or three times as long, with very low GPU usage.
Freeing a core guarantees the app will proceed at full speed, with low CPU usage.

Claggy, for those that don't know how to, could you post a "Janet and John" on reserving a core.
____________

You do of course need to have a feed of WUs, so don't try this while the servers are "having a holiday", which they are as I type (and this holiday may well continue until about 18:00 UTC, 19:00 your local time). I was about to start trying this on my new cruncher, but I think I'll leave it set at 1 per GPU (it's a 690, which has two GPUs on one card).

Just a clue, talking about the MB app only; I did not try the new AP CUDA (a DL of a single AP WU takes hours here). On the 690, 2 WUs at a time appears to be the best value (a total of 4 WUs on the 2 GPUs), but keep at least 1 core free per GPU to feed them (my systems are all Intel; on an AMD I believe 2 is better). But each system is unique, so you need to test.

You are running x41g (the normal Lunatics optimised app); the 690 runs perfectly on the new x41zc and gets a performance gain of 10-15% with CUDA 5, so you must try that. Download it from Jason's site: http://jgopt.org/download.html

And don't forget to keep an eye on the temps, especially if you use x41zc (more performance = a bit more heat). I use EVGA Precision to keep the fan running faster than normal, and that keeps the 690 in the low 70s.
____________

Claggy, for those that don't know how to, could you post a "Janet and John" on reserving a core.

There are two ways of doing it:

Either set 'On multiprocessors, use at most' in your computing preferences to the percentage of cores you want to use; i.e., for an 8-core CPU where you want to free just one core, set the percentage to 87.5%.
You can do this either in the local preferences, or by setting up a new location/venue with just that host at that location (you have four locations available: default, home, school and work).
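The percentage is just free-cores arithmetic; a small sketch of how the 87.5% figure above is derived:

```python
# Work out the 'On multiprocessors, use at most' percentage that leaves
# a given number of cores free for feeding the GPU.
def cpu_percent(total_cores, cores_to_free):
    return 100.0 * (total_cores - cores_to_free) / total_cores

print(cpu_percent(8, 1))   # 87.5 -> the 8-core example above
print(cpu_percent(8, 2))   # 75.0 -> two cores free on an 8-core CPU
print(cpu_percent(12, 6))  # 50.0 -> half of a 12-core machine
```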

Or you can do it automatically in your app_info, by changing the <avg_ncpus> and <max_ncpus> values for the OpenCL AP. If you're only going to run a single instance of AP at a time, set them to the following:

<avg_ncpus>1.0</avg_ncpus>
<max_ncpus>1.0</max_ncpus>

This means that every time an OpenCL AP WU starts it'll have a core reserved for it, and the core will be returned for CPU use once it completes.

If you're going to be running two OpenCL AP WUs at a time (or have two GPUs), the above will free two cores when enough OpenCL AP WUs run; if you only want to free one core, set them to 0.5 instead, or 0.25 for two WUs on each of two GPUs.
The problem with this method is you might find you're only running one AP WU, the rest of the counts being filled with Seti_enhanced work, so you haven't got any cores freed; so it's best to just go for the first method.
(I use a combination of the two: % of CPU usage set to 87.5%, with both the ATI & Nvidia OpenCL apps set to free a core.)

One thing I've noticed while looking at Cliff's, Rob's and spitfire_mk_2's AP results is that they are all running with the default parameters.
These parameters were set like that so the app can run on low-end GPUs like the 8400GS and HD5400s, and they won't utilise a GTX460 or GTX660 fully.

Before you start running multiple instances, tune the app first; this will improve GPU usage and increase memory usage. Then worry about running multiple instances.

For a mid-range GPU like a GTX460 or an HD7770, -unroll 10 -ffa_block 6144 -ffa_block_fetch 1536 is suitable.

Rather than putting it in your app_info, put it in the ap_cmdline_win_x86_SSE2_OpenCL_NV.txt or ap_cmdline_win_x86_SSE2_OpenCL_ATI.txt file instead;
this has the advantage that you can change the parameters without having to restart BOINC.
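For example, the NV command-line file would then contain just the switches on a single line (using the mid-range values suggested above; tune them for your own card):

```
-unroll 10 -ffa_block 6144 -ffa_block_fetch 1536
```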

Claggy, for those that don't know how to, could you post a "Janet and John" on reserving a core.

Easiest way is in BOINC Manager preferences.
On multiprocessors, use at most 76%.
This will reserve 2 cores on an 8-core CPU.

If we're doing "Janet and John", you should perhaps point out that there are two possible places to set this value:

1) Via the computing preferences page for your account on this website.
2) Via "Computing preferences..." on the Tools menu in BOINC Manager itself (look at the bottom of the 'processor usage' tab).

People should choose one of these locations for setting preferences, and stick with it. If you make even a single change directly in BOINC Manager, you will 'lock in' all the other settings on the first three tabs of the BOINC Manager preferences dialog, and any later changes on the website will be ignored.

That's because BOINC gives priority to the local settings. If you're not sure which preference set you're using at the moment, look for the line

Reading preferences override file

in your message/event log when BOINC starts up (just below the list of projects you're attached to). If you find you've inadvertently set up a local override file, but prefer to use website settings, you can use the 'clear' button in the BOINC Manager preferences dialog.

Thank you all for the information. I'll be waiting for a Linux OpenCL AP and an x41z version of MB for Linux.

I'll definitely reserve a core for the GPU AP too.

As of now I'm running 6 of 12 cores doing CPU MB/AP, and the other six feed the CUDA MB tasks. The 6 CPU tasks run at 99.6% according to 'top', and the 6 feeding the GPU use from 6% to 10% each, running on their own cores.

The reason to run with these settings is that I think the i7-3930K has only 6 FPU/MMX/AVX units, and when in HT the CPU processes would have to share them, causing context switching and register-file stores/loads, and would stress the cache and memory bus.
____________

Is it possible to stop SETI from doing Astropulse units without aborting them? I need to be able to use my computer, and Astropulse units make it impossible to use while it's doing them. PLUS I'M LOOKING FOR ET, NOT PULSARS; if I wish to find pulsars I'll do a different project. I have already burnt 1 video card out and stopped doing SETI altogether for a few years; I thought you fixed this problem.
____________

You can stop S@H from sending you Astropulse quite simply.
Go to your account web page, select "SETI@home preferences", then "edit default preferences".
Deselect the two Astropulse entries and the "allow other applications" entry, then hit the "update" button.

____________

You can stop S@H from sending you Astropulse quite simply.
Go to your account web page, select "SETI@home preferences", then "edit default preferences".
Deselect the two Astropulse entries and the "allow other applications" entry, then hit the "update" button.

It appears the nVidia preferences are broken. I'm having exactly the same results with preferences set to SETI@home Enhanced: yes; AstroPulse v6: no; "If no work for selected applications is available, accept work from other applications?": no. The scheduler keeps sending me nVidia AstroPulse instead of Multibeam. My old nVidia card doesn't do APs very well; it's much better with MBs. It's probably the same bug that is causing the NVIDIA GPU SETI@home Enhanced tasks to be vaporised instead of being resent as 'lost tasks'. This is the second time my nVidia 609 tasks have been timed out instead of being resent along with all the ATI & CPU 'lost' tasks. Someone needs to fix that.