Happy Monday everybody. I guess I should move on from the January thread title theme (odd little towns/places/features in southern Utah which I've been to during many nearly-annual backpacking/hiking adventures in the area - easily one of the best parts of the U.S.).

We did almost run out of data files to split (to generate workunits) over the weekend. This was due to (a) waiting for data drives to be shipped up from Arecibo, and (b) HPSS (the offsite archival storage) being down for several days last week for an upgrade - so we couldn't download any unanalysed data from there until the weekend. Jeff got that transfer started once HPSS was back up. We also got the data drives, and I'm reading some in now.

The Astropulse splitters have been deliberately off for several reasons, including to allow SETI@home to catch up. We also may increase the dispersion measure analysis range which will vastly increase the scientific output of Astropulse while having the beneficial side effect of taking longer to process (and thus helping to reduce our bandwidth constraint woes). However, word on the street is that some optimizations have been uncovered which may speed Astropulse back up again. We shall see how this all plays out. I'm all for optimized code, even if that means bandwidth headaches.

Speaking of bandwidth, we seem to be either maxed out or at zero lately. This is mostly due to massive indigestion - a couple weeks ago a bug in the scheduler sent out a ton of excess work, largely to CUDA clients. It took forever for these clients to download the workunits but they eventually did, and now the results are coming back en masse. This means the queries/sec rate on mysql went up about 50% on average for the past several days, which in turn caused the database to start paging to the point where queries backed up for hours, hence the traffic dips (and some web site slowness). We all agreed this morning that this would pass eventually and it'll just be slightly painful until it does. Maybe the worst is behind us.

- Matt-- BOINC/SETI@home network/web/science/development person
-- "Any idiot can have a good idea. What is hard is to do it." - Jeanne-Claude

I hope that Josh and Eric don't think that AP 5.01 is ready for release yet. It can't have been properly tested yet, because the processing time on a Windows Core 2 Quad has gone from <40 hrs to ~120 hrs.
As it is now less than 6 days since it was released (Josh's announcement, 28 Jan 2009 1:52:29 UTC), only those hosts that run Beta 24/7 and allow Beta at least one CPU core will have returned any results yet.
Also, judging by the limited posts on the subject at Beta, I assume there are, relatively speaking, very few people testing this app - just the usual hardcore testers.

> I hope that Josh and Eric don't think that AP 5.01 is ready for release yet. It cannot have been tested yet, because the processing time on a Windows core2 quad has gone from <40 hrs to ~120 hrs.
> As it is now less than 6 days since it was released, Josh's announcement 28 Jan 2009 1:52:29 UTC, then only those hosts that run Beta 24/7 and allow Beta at least one cpu core will have returned any results yet.
> Also due to the limited posts on the subject at Beta I assume there are, relatively speaking, very few people testing this app, just the normal hardcore testers.

Let's not have another CUDA disaster.

According to Joe Segur, posting at Lunatics (and he's good on this sort of thing), they only split 202 WUs at Beta for this test run. I don't think that's enough for a valid test. I'm crunching one on my fastest machine, and it's only at 65% after 3 days 16 hours.

Is it at all viable to test these 'new' algorithms using a dedicated supercomputer before releasing to beta and then to 'us'? From personal experience, supercomputer time is available for small projects at the national centers.

I'm not quite sure it's a good idea to make the APs even longer.
Many of them are already aborted because they take much longer than the 'usual' WUs. As a consequence, it takes ages to get two valid results together (and 'pay out'), so RAC falls and people opt out.
Not to mention the half-done WUs lingering on the server.

> Is it at all viable to test these 'new' algorithms using a dedicated supercomputer before releasing to beta and then to 'us'? From personal experience, supercomputer time is available for small projects at the national centers.

It's OK, don't panic.

Debug Beta builds are often compiled without optimisation - makes it easier for the developers to track what's happening (allegedly).

The next Astropulse release, once optimised, may run about 50% longer than the current version (whether stock or Lunatics) - and that is because of genuine additional searching ('negative DM', in the jargon). Not a problem.

> According to Joe Segur posting at Lunatics (and he's good on this sort of thing), they only split 202 WUs at Beta for this test run. I don't think that enough for a valid test. I'm crunching one on my fastest machine, and it's only at 65% after 3 days 16 hours.

That was just the first run after 5.01 install at Beta. They've split more as needed, total 1806 so far.

> The next Astropulse release, once optimised, will maybe run 50% longer than the current version (whether in stock or Lunatics versions) - and that is because of genuine additional searching ('negative DM', in the jargon). Not a problem.

Should I infer that the WUs already processed will need to be reprocessed to apply the 'genuine' additional searching?

>> According to Joe Segur posting at Lunatics (and he's good on this sort of thing), they only split 202 WUs at Beta for this test run. I don't think that enough for a valid test. I'm crunching one on my fastest machine, and it's only at 65% after 3 days 16 hours.
>
> That was just the first run after 5.01 install at Beta. They've split more as needed, total 1806 so far.
>
> Joe

They probably weren't needed - it was just a perception by BOINC based on DCF, est_flops, etc. Mine arrived with an estimate to completion of ~32 hrs, so it downloaded three tasks. But with Beta's resource share and the actual processing time, even one task was OTT. I had to adjust things so that the first task can be returned in a reasonable time frame. Set to NNT, and I'll await news before processing the other two tasks.
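For context on where those ~32 hr estimates come from: the client's predicted runtime is roughly the workunit's claimed FLOP count divided by the host's benchmarked speed, multiplied by the per-project duration correction factor (DCF) that it learns from completed tasks. A simplified sketch (the function and variable names here are illustrative, not BOINC's actual code):

```python
def estimated_runtime_s(rsc_fpops_est, host_flops, dcf):
    """BOINC-style runtime estimate: workunit FLOPs / host speed,
    scaled by the duration correction factor learned from past tasks."""
    return rsc_fpops_est / host_flops * dcf

# A task claiming ~1.15e14 FLOPs on a ~1 GFLOPS core with DCF = 1.0
# comes out near 32 hours; if the real runtime turns out to be ~120
# hours, DCF gradually climbs and future work fetch shrinks to match.
hours = estimated_runtime_s(1.15e14, 1e9, 1.0) / 3600
```

Until DCF catches up with the real processing time, the client over-fetches - which is consistent with three tasks arriving at once.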

> However, word on the street is that some optimizations have been uncovered which may speed Astropulse back up again. We shall see how this all plays out. I'm all for optimized code, even if that means bandwidth headaches.

Thanks for the endorsement of optimized code.

But ..... ahem, how to put this delicately? Matt, you really ought to get out more - specifically to Number Crunching.

The optimized code of which you speak has been on public release since 10 October 2008. That was Astropulse v4.35, of course: optimised v5.00 applications were made publicly available on 21 November 2008, just a couple of days after that version was launched by the project.

I think you'll find quite a lot of optimised work in your database already - all fully validated against the stock application, of course, otherwise it wouldn't get into the database.

I think what Matt is talking about is the upcoming changes to the AP application, which use revised radar blanking code and do FFAs on negative dispersion as well as positive (the current version only searches positive DMs, AFAIK).

Optimized code for that is well on its way - I think that is what he's referring to.
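For anyone puzzled by the jargon: a dispersion measure (DM) describes how much the interstellar medium delays lower radio frequencies relative to higher ones, and the search tries many trial DMs, undoing each candidate delay and looking for a pulse. A toy sketch of the idea (purely illustrative - the function name, parameters, and DM range are invented here, and this is nothing like Astropulse's actual implementation):

```python
def dedisperse(channels, freqs_ghz, dm, dt_s):
    """Toy incoherent dedispersion: undo the frequency-dependent arrival
    delay for one trial dispersion measure, then sum the channels.
    channels: list of equal-length time series, one per frequency."""
    f_ref = max(freqs_ghz)
    nsamp = len(channels[0])
    out = [0.0] * nsamp
    for chan, f in zip(channels, freqs_ghz):
        # Cold-plasma delay (s): ~4.15 ms * DM * (f^-2 - f_ref^-2), f in GHz
        delay = 4.15e-3 * dm * (f ** -2 - f_ref ** -2)
        shift = int(round(delay / dt_s))
        for t in range(nsamp):
            out[t] += chan[(t + shift) % nsamp]
    return out

# Searching 'negative DM' simply extends the list of trial values
# below zero - hence the extra processing time:
dm_trials = list(range(-50, 831, 5))  # hypothetical range
```

The extra runtime Richard mentions falls out naturally: each additional trial DM is another full pass like the loop above.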