Slow steady progress since the last tech news item. The science database continues to be massaged into shape from the past month of nastiness. It's working, but some indexes are still missing, and some queries are taking longer than we'd like. Sometime, probably next week, I'll turn the science status page updates back on - until then the numbers are old and/or flat out wrong.

We're narrowing down the cause of our data recorder woes to either the SATA card or the system itself. We're trying the former first. A new one is on order and we'll have to get it configured remotely (which is a lot easier than configuring a whole new system remotely).

We're also finding that we don't have the processing power we'd like. It seems like we lost a lot of active users over the past few months. I blame the recession. You could also blame Astropulse, I guess. In any case, we need more people. We're hoping the 10th anniversary buzz will help. And speaking of that, Jeff and I are putting all focus on the NTPCkr, just so we have something fun/new/interesting to present in time for any p.r. blitz. That means very little effort in systems/upgrades/etc. for the next 5-6 weeks. Simply don't have the time/manpower.

Sorry about the lull in tech news items. I was on vacation visiting 23 relatives. Many are under 5 years old, which meant a lot of them had colds, which meant I got sick immediately upon my return earlier in the week.

- Matt-- BOINC/SETI@home network/web/science/development person
-- "Any idiot can have a good idea. What is hard is to do it." - Jeanne-Claude

You've lost a bit of computing power from me; the electric bill got too high, and the summertime heat is coming up quickly in the Windy City.

If you guys are speculating that the advent of AP is causing a loss of users, do you plan on digging further into this, or is there a plan to do something about it? Personally, I like AP, and I would have hoped that MB could be phased out eventually (perhaps by looking for broadband and narrowband in the same WU with a single application, perhaps at the cost of longer crunching time).

We're also finding that we don't have the processing power we'd like. It seems like we lost a lot of active users over the past few months. I blame the recession. You could also blame Astropulse, I guess. In any case, we need more people.
- Matt

You could consider issuing the optimized Astropulse r112 as stock. It increases the crunch speed considerably. Most Windows machines should have a minimum of SSE3, and those that don't probably should not be running AP anyway. Releasing a new stock AP would also fix all those clients that got a corrupted AP.exe in March and are still erroring out every midnight.


Sounds like a new BOINC scheduler/server feature for the BOINC development team: Scheduler can choose science applications not only based on architecture but also processor features (req'd list of features). Obviously, sticking with simple things such as MMX, SSE2, SSE3 would be appropriate for an initial implementation.

If you like the idea, I'll head over to the development site and create a Trac ticket.
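The feature-matching step described above is simple to sketch: an app version declares the CPU features it requires, and the scheduler only offers it to hosts reporting all of them. This is an illustrative sketch only, assuming a set-based feature list; the names (`HOST_FEATURES`, `plan_class`, `requires`) are hypothetical, not actual BOINC code.

```python
# Hypothetical sketch of the proposed scheduler check: an app version
# lists required CPU features; a host qualifies only if it reports
# every one of them. All names here are illustrative assumptions.

HOST_FEATURES = {"mmx", "sse", "sse2"}  # e.g. parsed from the host's reported features

APP_VERSIONS = [
    {"plan_class": "ap_sse3", "requires": {"sse2", "sse3"}},
    {"plan_class": "ap_generic", "requires": set()},  # no special requirements
]

def eligible_versions(host_features, app_versions):
    """Return the app versions whose required features the host supports."""
    return [v for v in app_versions if v["requires"] <= host_features]

print([v["plan_class"] for v in eligible_versions(HOST_FEATURES, APP_VERSIONS)])
# -> ['ap_generic']  (the host lacks sse3, so the SSE3 build is withheld)
```

Sticking to a whitelist of well-known features (MMX, SSE2, SSE3), as suggested above, keeps the subset check trivial.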

One of the reasons I dropped out is the way the CUDA application could trash the system ... and sending VLAR units to me. I spend 5-6 times as long on them as on normal ones ... sorry ... I get paid better at GPU Grid ...

The other reasons are legion, and as they are all well known if you have been paying attention ... well ... you do the math ... you get what you pay for and loyalty has to be earned ... and once you have abused people enough, well, loyalty is a cast-iron <female dog> to regain ...

Is there a place with statistics so we can see the amount of data that has been processed over the past day/week/month? If people could see the amount of data that has been processed, it might increase SETI's data throughput. I'm aware that there is a piece on WUs completed in the last hour. How many work units are there on each tape? Knowing this, I could work out how many tapes are completed in an hour, or what percentage of a tape has been completed in the hour.
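The arithmetic the poster describes is straightforward: divide workunits completed per hour by workunits per tape. A minimal sketch, using made-up figures rather than real project statistics:

```python
# Back-of-the-envelope conversion from WUs/hour to tapes/hour.
# The numbers below are illustrative assumptions, not real stats.

def tapes_per_hour(wu_per_hour, wu_per_tape):
    """Fraction of a tape (or number of tapes) completed per hour."""
    return wu_per_hour / wu_per_tape

# e.g. 60,000 WUs completed per hour, 250,000 WUs per tape:
frac = tapes_per_hour(60_000, 250_000)
print(f"{frac:.2%} of a tape per hour")  # -> 24.00% of a tape per hour
```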

Well, from here, other threads, and scarecrow's graphs, it looks like CUDA is a net negative for the project (decreasing computational capacity overall). Maybe those CUDA WUs should be backed off a bit, if that's possible. Same goes for the AP WUs, since that has been chaotic as well, frankly. I guess I'm suggesting limiting the number available per day until the systems can be stabilized, and then turning the other sub-projects back on slowly.

Badly behaving servers infuriate users; so in this perspective Paul Buck is correct.

Admin/Matt, are you aware that tape 18mr09aa is 0.00 GB in size, or is this 0.00 GB due to the BOINC MySQL database crash last night? Thank you to the person/people who repaired the MySQL database last night.

One of the reasons I dropped out is the way the CUDA application could trash the system ... and sending VLAR units to me. I spend 5-6 times as long on them as for normals ... sorry ... I get paid better at GPU Grid ...

That's why the VLAR autokill mod is so great.

I like this one much better: MB_6.08_mod_CUDA_V11_VLARKill_refined.rar

No more -6 errors and no more backoffs for VLAR kills. This V11 is good, or at least I haven't seen any in a while, although I did have one CUDA WU that clocked out at just above 5 hours.

PhonAcq said:
Well, from here, other threads, and scarecrow's graphs, it looks like CUDA is a net negative for the project (decreasing computational capacity overall). Maybe those CUDA WUs should be backed off a bit, if that's possible. Same goes for the AP WUs, since that has been chaotic as well, frankly. I guess I'm suggesting limiting the number available per day until the systems can be stabilized, and then turning the other sub-projects back on slowly.

Badly behaving servers infuriate users; so in this perspective Paul Buck is correct.

This cannot be done, because the same work units that run on the CUDA application also run on the CPU application, so you can't back it off unless you shut down the whole project.

And as for limiting work units: at the moment I think it's set at 100 per CPU and 400 per CUDA device per day, so yes, that could be reduced, but I don't think it will help.
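Using the per-day figures the poster cites (100 results per CPU, 400 per CUDA device), a host's daily quota works out as a simple product-and-sum. A minimal sketch, illustrative only and not actual BOINC scheduler code:

```python
# Daily quota arithmetic using the figures quoted in the thread.
# These constants are the poster's recollection, not verified values.

CPU_QUOTA = 100   # results per CPU per day
GPU_QUOTA = 400   # results per CUDA device per day

def daily_quota(n_cpus, n_gpus):
    """Maximum results a host could be sent per day under these limits."""
    return n_cpus * CPU_QUOTA + n_gpus * GPU_QUOTA

print(daily_quota(4, 1))  # quad-core host with one GPU -> 800
```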