Some news. Yesterday we had our usual weekly outage, and shortly after the floodgates opened again bruno (the upload server) crashed. Except we quickly found it didn't actually crash. It was turned off. By the web-enabled power strip. For no apparent reason. We turned it back on and everything was okay, but now it seems we have a flaky web-enabled power strip on our hands. It is interesting to note that this power strip was plugged into the same breaker as thinman - the previous webserver system that died during the last unexpected power outage. So maybe some funky voltage clobbered this strip as well. Well, we have a spare one which works, so no big shakes there. And yes, we ruled out foul play.

As for the crashy desktop machines, I may have fixed one. The theory, oddly enough, is that too much thermal grease was applied, reducing the effectiveness of the heat sink. Oops. Well, I'm not quite convinced that was the problem, and we're burning it in now. If it survives a week without crashing, great. The other system is not doing as well. I think we're aiming to get insurance money from the university to cover the cost of the systems killed or injured during these outages. Meanwhile, we're operational, so no real disaster.

In better news, georgem is now not only hosting all the workunits and running some backend BOINC services and scientific analysis processes, but it's also hosting all the data (~13TB) from a recent survey of the galactic center collected at the Green Bank Telescope. Several grad students will be processing this data on georgem itself.

Also, paddym has been cleared to finally reformat all its drives into a giant RAID10, and we can now start the process of duplicating the whole SETI@home Informix science database (currently on oscar) over there. It's also already serving a MySQL database containing Kepler data, also collected at the GBT, which we're soon going to analyze in-house using old SERENDIP code.

Oh yeah, we also found a bug that had been causing a lot of Astropulse splitters to fail, thus reducing the number of AP workunits being sent out. This has been fixed, so expect more AP work.

- Matt-- BOINC/SETI@home network/web/science/development person
-- "Any idiot can have a good idea. What is hard is to do it." - Jeanne-Claude

Speaking of AP, can those last one or two completed v505's be manually kicked and cleared so the status page can reflect v6 statistics, or is there more to it than that?

Linux laptop:
record uptime: 1511d 20h 19m (ended due to the power brick giving up)

As well it's already actually serving a mysql database containing Kepler data, also collected at the GBT, which we're soon to use old SERENDIP code to analyze in-house.

- Matt

I guess we, the volunteers, won't process Kepler candidates at all?

We will, just no rush. They have to rewrite the multi-beam and astropulse code to accept the different data format from GBT. That needs cash to pay the programmers. Then it rolls out on the Beta project.

Thank you for the updates and interesting news on the new servers.
Hope the insurance angle works out and the project gets reimbursed for some of the damage caused by the power problems!
Meow!

Always remember.....kitties are all Angels with fur.


To add onto this a bit, PaddyM's drives were shipped off to the GBT (their second trip to the telescope) and came back nearly filled with data. That's twenty-one 2TB drives of data.

We'll process GBT data on our PCs eventually, but I just wanted to give our users a sense of the scale of the data being processed in-house vs. archived for eventual S@H volunteer processing.


42TB, round about five and a half million* astropulses. At one and a half astropulses a day, that would keep me crunching for a bit :)
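For scale, the arithmetic behind that estimate can be sketched as below. The ~8 MB per Astropulse workunit figure is an assumption on my part (it is not stated in the thread), as is the decimal TB/MB convention:

```python
# Back-of-the-envelope: how long would 42TB of GBT data keep one host busy?
# Assumes ~8 MB per Astropulse workunit (NOT stated in the thread) and
# decimal units (1 TB = 10**12 bytes, 1 MB = 10**6 bytes).
TB = 10**12
MB = 10**6

total_bytes = 42 * TB
wu_size = 8 * MB                      # assumed Astropulse workunit size
workunits = total_bytes / wu_size     # ~5.25 million workunits

rate_per_day = 1.5                    # the poster's crunch rate
years = workunits / rate_per_day / 365.25

print(f"{workunits / 1e6:.2f} million workunits")
print(f"~{years:,.0f} years at {rate_per_day} workunits/day")
```

Under those assumptions the tally lands in the same ballpark as the quoted "five and a half million," and at 1.5 workunits a day it works out to roughly ten thousand years of crunching for a single host.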