Had our weekly maintenance outage today, including the usual chores. I took the opportunity to replace a failed drive on one of our administrative file servers. I also issued the long-overdue final "shutdown" command on another administrative server, kang, which we no longer use. Many years ago, during the early days of SETI@home, several Sun representatives came by to discuss our progress. We thought it was just an informal touching-base kind of meeting, but at the end they told us they were going to donate a whole rack full of six state-of-the-art Sun servers and two disk arrays. Sun has always been nice to us, but this was completely unexpected. We eventually dubbed this the "k-rack," as we named every server in it after a sci-fi character starting with "k" (kang, kodos, kosh, klaatu, kryten, koloth). Well, kang was the last one to go - the end of an era. We're still using the rack itself, though - very useful.

Network bandwidth woes continue, more so now that we're coming out of the weekly outage. Lots of discussion about this in the previous thread - let me see if I can wrap up all the major points quickly. There are three potential solutions to our bandwidth limitations that we are actively entertaining/researching with the related parties:

1. Get a full 1Gbit link up to our server closet (pros: zero migration; cons: time/cost - about $80K in parts/labor).
2. Colocation on campus (pros: minimal cost/migration; cons: the near-impossible nuisance of having to administer everything from a distance).
3. Have a third-party entity host/administer everything (pros: we can ditch sysadmin duty for once and get back to work; cons: major cost, major migration).

Each of these solutions requires a major amount of "getting our ducks in a row" (due to equipment policies, contract terms, general scheduling issues, etc.) - it's hardly just a money issue. Of course there are other options, too, like putting all our effort into final data analysis and shutting down SETI@home. One major issue is that our server closet (roughly 100 CPUs, 100 TB of disk, 200 GB of RAM) operates atomically - it's all or nothing. We can't just move one piece somewhere else. It's a long and complicated story - please don't make me explain why unless there's a free pitcher of beer involved.

- Matt-- BOINC/SETI@home network/web/science/development person
-- "Any idiot can have a good idea. What is hard is to do it." - Jeanne-Claude

Seeing the constant issues with bandwidth, I was wondering if the SETI data packets could be compressed (to reduce size for transfer), then have BOINC decompress them for processing.

I know it would take more CPU time to compress the packets, but it would reduce bandwidth use.

Just an idea, thought I would share.

- Wol

File compression works because the data in those files (word processing, databases, etc.) is not entirely random. A "flat file" database may compress 90% because one filler character appears over and over.

Common bytes get shorter codes, uncommon bytes longer, and the average number of bits/character goes down. (gross oversimplification)

Binary data consisting almost entirely of noise is going to be equally distributed across the whole range of byte values, so it isn't very compressible.
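You can see this for yourself with a minimal sketch in Python using zlib (DEFLATE, which is roughly the Huffman coding described above plus matching of repeated strings). The repetitive "flat file" shrinks to almost nothing, while the noise-like data doesn't shrink at all - it may even grow slightly from compression overhead:

```python
import os
import zlib

# A repetitive "flat file" style record: fixed width, padded with one filler char.
record = b"record 0001" + b" " * 89   # 100 bytes, mostly the same character
flat_file = record * 10_000           # ~1 MB of very redundant data

# Noise-like data: every byte value is about equally likely.
noise = os.urandom(len(flat_file))

for name, data in [("flat file", flat_file), ("noise", noise)]:
    out = zlib.compress(data, 9)      # level 9 = best compression
    saved = 100 * (1 - len(out) / len(data))
    print(f"{name}: {len(data)} -> {len(out)} bytes ({saved:.1f}% saved)")
```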

Free beer... tell me where and when and this old navy veteran will join you for whatever you want to give away. Darn, I'll even buy. GO... GO... US Navy.
http://boinc.mundayweb.com/teamStats.php?userID=14824

Hello from France.
I'm glad the outage went well - you did a great job.
Unfortunately I can't get SETI WUs; I can only get Astropulse WUs, and they are too big to crunch on my old PC (about 650 hours estimated!).
Do you think I'll be able to get some in a few hours?
Thanks for your patience.
Patrick from the "l'Alliance francophone" team.

seti1 was pretty good, seti2 will be better?

To put some rough numbers on it: Astropulse WUs compress less than 2% because the data is sent as true binary and the XML header information is relatively short. S@H Enhanced WUs compress around 27% because the data is packed 6 bits per byte, line feeds are inserted after every 64 characters, and the XML header is larger. That's gzip compression; IINM the download servers can be configured to apply it and libcurl can then ungzip the received data. If so, the effect might be similar to adding nearly 15 Mbits/sec of download bandwidth (assuming vader and bane wouldn't choke doing the extra calculations).
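To see where ratios like those could come from, here's a rough Python sketch. base64 (exactly 6 data bits per output byte) stands in for the actual WU packing, so the numbers are only approximate; the point is that re-compressing 6-bit-packed noise recovers the packing overhead, while true binary noise barely compresses at all:

```python
import base64
import gzip
import os

# Stand-in for noise-like telescope samples; real WUs also carry an XML header.
raw = os.urandom(256 * 1024)

# Astropulse-style payload: the samples shipped as true binary.
astro_like = raw

# Enhanced-style payload: 6 bits per byte, with a line feed every 64 characters.
b64 = base64.b64encode(raw)
enhanced_like = b"\n".join(b64[i:i + 64] for i in range(0, len(b64), 64))

for name, payload in [("binary, Astropulse-like", astro_like),
                      ("6-bit packed, Enhanced-like", enhanced_like)]:
    gz = gzip.compress(payload, compresslevel=9)
    saved = 100 * (1 - len(gz) / len(payload))
    print(f"{name}: {len(payload)} -> {len(gz)} bytes ({saved:.1f}% smaller)")
```

On the transport side this would just be standard HTTP compression: the server gzips the response body, and libcurl can be told to decompress automatically (via CURLOPT_ACCEPT_ENCODING), so no change to the WU format itself would be needed.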

I'd say the best out of the three solutions is the 1Gbit link up to the server closet.

The processing power of today's computers is one reason some WUs finish so fast. We need another array of radio telescopes to keep a constant WU flow to the super-fast computers (data from the Allen Telescope Array, for example).

What if Fiction was Fact and Fact was Fiction and vice versa?