While an increase in the number of WUs distributed to an individual cruncher might sound like a good idea, it wouldn't help anything apart from our egos.
With the way the weekly outages have gone of late there is actually little need to increase the per-processor limit. What would be good, however, is for BOINC to correctly identify multi-processor Nvidia cards as being more than one processor. Why the chuff chuff does BOINC decide that my GTX690 is only a single processor, when it is reported as [2] on the accounts page?
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?

But it isn't - it's two devices on one board, which may or may not be connected by an internal SLI link (mine are unlinked). One part of the system reports it as TWO devices - take a look at the details for yourself: http://setiathome.berkeley.edu/show_host_detail.php?hostid=6890059
It is interesting to note that GPUGRID treats the GTX690 as being TWO GPUs, as witnessed by the fact that a few minutes ago it was running one instance of a GPUGRID task plus three S@H tasks at the same time, and will run 6 S@H tasks with a setting of 0.33/GPU - if it were a single GPU it would not be capable of either of these sets of operations.
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
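The arithmetic behind that observation can be sketched as follows. This is purely illustrative, not BOINC's actual scheduler code; `concurrent_tasks` is a made-up helper name.

```python
# Hypothetical sketch (not BOINC's actual code): how a per-task GPU
# share setting translates into concurrent task slots.
import math

def concurrent_tasks(num_gpus: int, gpu_usage_per_task: float) -> int:
    """Number of tasks that fit when each task claims a fraction of a GPU."""
    return math.floor(num_gpus / gpu_usage_per_task)

# A GTX690 seen as TWO devices, with a 0.33/GPU setting:
print(concurrent_tasks(2, 0.33))  # 6 tasks, matching the observation above
# Seen as a single device, it could only run 3:
print(concurrent_tasks(1, 0.33))  # 3
```

This is why the two-device reading matters: the same 0.33/GPU setting yields twice the concurrent slots.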

While an increase in the number of WUs distributed to an individual cruncher might sound like a good idea, it wouldn't help anything apart from our egos.

I disagree. Every time I run out of work during the outages, I load work from a "B" project, and that crunch time is lost to SETI. I suspect the faster crunchers run out more than I do. That lost crunch time means that fewer results flow to the science databases, and that hurts the project. It is wasted capacity. Give us enough to withstand a 48-hour outage.
Another Fred
Support SETI@home when you search the Web with GoodSearch or shop online with GoodShop.

While an increase in the number of WUs distributed to an individual cruncher might sound like a good idea, it wouldn't help anything apart from our egos.

I disagree. Every time I run out of work during the outages, I load work from a "B" project, and that crunch time is lost to SETI. I suspect the faster crunchers run out more than I do. That lost crunch time means that fewer results flow to the science databases, and that hurts the project. It is wasted capacity. Give us enough to withstand a 48-hour outage.

No it doesn't, I'm afraid. At the moment, with the project running absolutely flat out, it means that somebody else grabs the tasks and runs them for you.

But it is.
Just as I pointed out in my first post - 8 or 16 or 32 cores in a CPU still counts as 1 CPU.
2 or 4 or 8 GPUs on a single board still counts as a single video card.

I'm not sure what happens in the case of 2 CPUs or 2 physical video cards. If the limitation is per device, then you'd get 100 WUs for each CPU & each video card. If the limitation is per system, then you'd still be limited to 100 WUs for all CPUs & 100 WUs for all video cards, no matter how many the system has.
Grant
Darwin NT

While an increase in the number of WUs distributed to an individual cruncher might sound like a good idea, it wouldn't help anything apart from our egos.

I disagree. Every time I run out of work during the outages, I load work from a "B" project, and that crunch time is lost to SETI. I suspect the faster crunchers run out more than I do. That lost crunch time means that fewer results flow to the science databases, and that hurts the project. It is wasted capacity. Give us enough to withstand a 48-hour outage.

No it doesn't, I'm afraid. At the moment, with the project running absolutely flat out, it means that somebody else grabs the tasks and runs them for you.

Just don't see it that way. If I can't get work during an outage, they can't either. And if I still have B project work left over when work flow resumes and someone runs tasks I would have run, one set gets run instead of two. Larger caches would allow the project to run flat out, which it is not doing when we're out of work.
Another Fred
Support SETI@home when you search the Web with GoodSearch or shop online with GoodShop.

That is the whole point that Richard sensibly makes. They are doing their best, and yes there doesn't seem to be enough work for everybody. You have to remember that the infrastructure of this project was scoped out 10 years ago when we didn't have quad and hex core processors, nor GPU cards all crunching away. The fact that they have kept pace with technology as well as they have done, on a limited budget, is a credit to them.

But it is.
Just as I pointed out in my first post - 8 or 16 or 32 cores in a CPU still counts as 1 CPU.
2 or 4 or 8 GPUs on a single board still counts as a single video card.

I'm not sure what happens in the case of 2 CPUs or 2 physical video cards. If the limitation is per device, then you'd get 100 WUs for each CPU & each video card. If the limitation is per system, then you'd still be limited to 100 WUs for all CPUs & 100 WUs for all video cards, no matter how many the system has.

It doesn't matter whether you have 1 CPU or 2, you still only get 100 WUs, and the same goes for a GPU-capable machine - that number also stays at 100 WUs no matter how many GPUs you have. Which means that unless you run some really early version of BOINC, the limit for any machine crunching on both CPU and GPU is 200 WUs.
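Put as a tiny sketch (hypothetical helper name; this just models the per-host limit described above, not actual BOINC server code):

```python
# Illustrative model of the per-host limit described above - device
# counts don't matter, only whether the host has each device class.
CPU_LIMIT = 100   # WUs for all CPUs combined
GPU_LIMIT = 100   # WUs for all GPUs combined

def host_cache_limit(num_cpus: int, num_gpus: int) -> int:
    """Per-host WU cap: one allotment per device class, not per device."""
    total = 0
    if num_cpus > 0:
        total += CPU_LIMIT
    if num_gpus > 0:
        total += GPU_LIMIT
    return total

print(host_cache_limit(2, 3))  # 200 - same as a 1-CPU, 1-GPU host
print(host_cache_limit(1, 0))  # 100 - CPU-only host
```

So on this model a dual-CPU, triple-GPU box gets exactly the same 200 WU cap as a modest single-CPU, single-GPU host.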

It doesn't matter whether you have 1 CPU or 2, you still only get 100 WUs, and the same goes for a GPU-capable machine - that number also stays at 100 WUs no matter how many GPUs you have. Which means that unless you run some really early version of BOINC, the limit for any machine crunching on both CPU and GPU is 200 WUs.

That's exactly why I suggest a "little increase" in the limit for GPU WUs. 100 CPU WUs is enough for half a day of work, even on the fastest CPUs (if I'm wrong, someone please show why), but 100 GPU WUs is not, even on a single-690 host: normally a WU (not a shortie, of course) will crunch in less than 7 minutes, which is about 34 WUs per hour, so a 100 WU cache lasts less than 3 hours - not enough for the 3-6 hour outages. A 200-per-GPU limit would give us enough work on dual- or triple-GPU hosts, even for a long normal outage - not when unscheduled things happen, of course, but it would be a good beginning and would not put too much new load on the databases.
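The cache-duration arithmetic above can be checked quickly. The per-hour rate and cache sizes are the rough figures from the post, not measured data:

```python
# Back-of-the-envelope check of the figures above - not measured data.
def cache_hours(cache_size: int, wus_per_hour: float) -> float:
    """How long a full cache lasts at a given crunch rate."""
    return cache_size / wus_per_hour

rate = 34  # ~WUs/hour claimed for a GTX690 host (<7 min per WU)
print(round(cache_hours(100, rate), 1))  # ~2.9 hours - short of a 3-6 hour outage
print(round(cache_hours(200, rate), 1))  # ~5.9 hours - covers most scheduled outages
```

So doubling the GPU allotment roughly doubles the ride-out time, which is the whole argument in one line.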

I noticed something else (outside this thread's focus): the Cricket graphs show almost 100% bandwidth utilisation, but all the AP splitters are out and MB splitting is in "slow" mode (only 3 splitters are working), and we have a lot of ready-to-send MB units (more than 300K), yet everything appears to work fine. Let's see what happens when the AP splitters return to duty.

While an increase in the number of WUs distributed to an individual cruncher might sound like a good idea, it wouldn't help anything apart from our egos.

I disagree. Every time I run out of work during the outages, I load work from a "B" project, and that crunch time is lost to SETI. I suspect the faster crunchers run out more than I do. That lost crunch time means that fewer results flow to the science databases, and that hurts the project. It is wasted capacity. Give us enough to withstand a 48-hour outage.

No it doesn't, I'm afraid. At the moment, with the project running absolutely flat out, it means that somebody else grabs the tasks and runs them for you.

While an increase in the number of WUs distributed to an individual cruncher might sound like a good idea, it wouldn't help anything apart from our egos.

I disagree. Every time I run out of work during the outages, I load work from a "B" project, and that crunch time is lost to SETI. I suspect the faster crunchers run out more than I do. That lost crunch time means that fewer results flow to the science databases, and that hurts the project. It is wasted capacity. Give us enough to withstand a 48-hour outage.

No it doesn't, I'm afraid. At the moment, with the project running absolutely flat out, it means that somebody else grabs the tasks and runs them for you.

Emphasis added by me.

That last part should be in a FAQ or a sticky or something.

The fact remains, that when I cannot cache enough work on my fastest crunchers to ride out even the weekly outage, less work gets done for the project.
Granted, this may not have a tremendous impact on the project overall, but it is a fact that when I run out of WUs, I am not crunching Seti on my best resources. The CPUs never run out of work due to their slower speed. But the multiple-GPU rigs burn through things pretty quickly.
Especially when the 100 task allotment consists of shorties.
"The secret o' life is enjoying the passage of time." 1977, James Taylor
"With cats." 2018, kittyman

While an increase in the number of WUs distributed to an individual cruncher might sound like a good idea, it wouldn't help anything apart from our egos.

I disagree. Every time I run out of work during the outages, I load work from a "B" project, and that crunch time is lost to SETI. I suspect the faster crunchers run out more than I do. That lost crunch time means that fewer results flow to the science databases, and that hurts the project. It is wasted capacity. Give us enough to withstand a 48-hour outage.

No it doesn't, I'm afraid. At the moment, with the project running absolutely flat out, it means that somebody else grabs the tasks and runs them for you.

Just don't see it that way. If I can't get work during an outage, they can't either. And if I still have B project work left over when work flow resumes and someone runs tasks I would have run, one set gets run instead of two. Larger caches would allow the project to run flat out, which it is not doing when we're out of work.

But the project is already running flat out. How does increasing the limits make the project run any more flat out than it already is?

At what point does the increased number of work units in the field cause the database to crash? That is why the limits were put in, isn't it?

At what point does the increased number of work units in the field cause the database to crash? That is why the limits were put in, isn't it?

That's the million dollar question.
I have been told that the actual DB usage is only at, I think, something like 60% of capacity. The limit may be on the server's capacity to process and maintain the DB.
"The secret o' life is enjoying the passage of time." 1977, James Taylor
"With cats." 2018, kittyman

Yes, but the project never asked anybody to build such super crunchers, and the project is not time critical. If it takes an extra year to find those little green men, that is not a problem for the project.

Yes, it would be nice to have all the WUs you want, but if it causes even bigger problems, why do it?