There is no "prize" at the end of a fictitious line here, so if there is downtime, people should understand that, take a deep breath, and wait for it to clear. Being the #1 cruncher gets you what, bragging rights on a message board!? Matt is in an awful position here, having to obey orders even if he disagrees, or has better ideas that are falling on deaf ears. Another 80k would be great, but with today's financial rout, that will be just as hard as having a problem-free SETI@home day. BTW, I'll be in the top 50 of my class tomorrow, first milestone!
____________

This seems to cause the most anger among crunchers, because the uploads are blocked, and when that happens some hosts run out of work and can't download any more WUs. Users then increase their cache size to try to hold enough work to tide them over, which just makes the problem worse.

Why not restrict the bandwidth of just the download servers to 80-85 Mbit/s?

This would obviously lengthen the time spent at maximum download bandwidth, but it would leave enough headroom for the uploads to get through. This in turn would:
1) Reduce storage requirements for 'in progress' work.
2) Allow crunchers to get new work because they can upload completed work (reducing frustration).
3) Reduce the need for very large caches.

Theoretically everybody would get some (enough) work, and large caches would slowly fill over time, faster as demand reduced.
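The cap being proposed behaves like a token bucket: downloads draw tokens at the capped rate, and whatever is left over on the pipe stays free for uploads. A minimal sketch of the idea (class name, rates, and the injectable clock are illustrative, not anything actually running at Berkeley):

```python
import time

class TokenBucket:
    """Minimal token-bucket limiter: bursts up to `capacity` tokens,
    sustained throughput of `rate` tokens per second."""
    def __init__(self, rate, capacity, clock=time.monotonic):
        self.rate = rate          # refill rate (e.g. megabits per second)
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.clock = clock        # injectable for testing
        self.last = clock()

    def allow(self, amount=1):
        now = self.clock()
        # Refill in proportion to elapsed time, never above capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= amount:
            self.tokens -= amount
            return True
        return False  # caller should queue or retry the transfer
```

In practice this is what kernel traffic shapers (e.g. Linux `tc` with a tbf queue) implement; capping the download servers at, say, 85 Mbit/s on a 100 Mbit/s link would leave roughly 15 Mbit/s guaranteed for uploads.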

Is there a flaw in my logic?

I made more or less the same suggestion in another thread. However, I thought that if they could prioritize the uploads, without a static restriction, that would help solve the problem. Either nobody understood my suggestion, or it's a bad one.

I don't think reducing cache size matters too much, but reducing storage requirements by making sure the uploads succeed does make sense, even at the cost of less efficient bandwidth use.
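Prioritizing uploads without a static cap just means serving upload requests ahead of downloads whenever both are waiting. A toy sketch of that idea (request shapes and class names are made up for illustration, not SETI@home's actual scheduler):

```python
import heapq

# Lower number = served first, so uploads always drain before downloads
# and completed results get through even when the link is saturated.
UPLOAD, DOWNLOAD = 0, 1

class LinkScheduler:
    def __init__(self):
        self._queue = []
        self._seq = 0  # tie-breaker keeps FIFO order within a class

    def submit(self, kind, request):
        heapq.heappush(self._queue, (kind, self._seq, request))
        self._seq += 1

    def next_request(self):
        return heapq.heappop(self._queue)[2] if self._queue else None

s = LinkScheduler()
s.submit(DOWNLOAD, "wu-1")
s.submit(UPLOAD, "result-7")
s.submit(DOWNLOAD, "wu-2")
order = [s.next_request() for _ in range(3)]  # the upload jumps the queue
```

Unlike a fixed 85 Mbit/s cap, this wastes no bandwidth when there are no uploads pending, which may be what the earlier suggestion was getting at.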

A while ago, SETI was running on half the bandwidth and pegging it. Something happened and the bandwidth doubled, and overnight it was pegged again. Since the number of users didn't change that quickly, and this was before the baraCUDA, I think procedures in the back office must have changed to choke the bandwidth chicken.

From a different perspective, like any other 'free' resource, bandwidth will be used until it is used up. The fact that bandwidth use is pegged so frequently is no surprise. However, the BOINC system (including MySQL) just doesn't seem very solid, or at least doesn't seem optimized. I'd rather see some change in procedures, or evolution of BOINC, before investing in yet more hardware and the like. It's a matter of trust, I suppose.

Are there any "spare" servers, down the hill or elsewhere on/off campus, belonging to the Space Sciences Lab that could be used as a download mirror for data distribution to clients, to relieve pressure on the normal servers?
Nothing complicated - this would be only for downloads of a large reservoir of pre-split WUs at busy times. All processed data would be returned by clients to the normal servers up the hill. It could be one of the student's jobs to swap in a new drive of pre-split data every few days.
____________

I was just about to suggest using mirrors. It looks like Einstein@Home uses them.

The mirror server(s) could be anywhere; there's 900 Mbit/s of unused capacity in that gigabit line for uploading split WUs to the mirror server(s). Of course, the WUs could also be split at the mirror. The current SETI servers could handle the other jobs and maybe some of the download traffic.
____________

I'm not talking about WLAN; I guess the distance would be too great. (WLAN maxes out at 100 Mbit/s?)

How great is the distance?

But in Germany some cities and villages use wireless DSL to save the cost of digging up the streets and burying cables.
One transmitter on a tall building - maybe a church, town hall, or whatever - and receivers in the houses.
I guess it's like radio.

I don't know what the bandwidth would be; maybe more transmitters and receivers would be needed to reach 1 Gbit/s?
Or maybe 'only' double the current rate, to 200 Mbit/s?

I don't know if this would be cheaper and more feasible than burying a big cable in the ground.

Also, don't laugh.. ;-) ..what about satellite DSL? It would look very crazy if there were one or more satellite dishes on campus (depending on the bandwidth) that don't 'look' at the sky..
..but if it would help..

I have built the "heart" of our team's statistics. I realize my "problems" are of a much smaller magnitude than Berkeley's, but I do think there are similarities.

One MySQL table, the user history, has been trouble for a while now, probably due to its size of almost 8 GB (e.g. data corruption => complete restore from backup; SQL queries getting stuck => inactivity detection, then restarting history generation after working out "where was I").

To fix this I first tried another engine (InnoDB instead of MyISAM): no joy (the table got much larger and processing got much slower).

Next we'll try to upgrade to a newer MySQL engine, but I fear that will not (completely) fix the problem.

My next thought is to split the table by id range, and for the 'read queries' use a MRG_MyISAM structure: that way I could write to the right underlying table based on an id 'greater than .. AND less than ..' and keep the reads simple via the merge table.
This might help with:
- easier backup and restore (less massive backup files, partial restores if needed)
- less fragmentation due to smaller tables => more stability
- faster queries due to smaller index files

The potentially unhandy part is that, as I discovered, altered data in an underlying table isn't pushed to the merge table. In another table setup I used this to fix that:

ALTER TABLE `merge_table` UNION=(`table_1`,`table_2`....);

I haven't done any testing of what this query does on a huge table... (it worked like a charm on a couple of 50k-row tables, though :D)
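For what it's worth, the same pattern (write to a range-split table, read through one union) can be sketched outside MySQL too. A toy sqlite3 version, where a view stands in for the MRG_MyISAM merge table and the table/column names are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Two underlying tables, split by id range (id < 1000 vs id >= 1000),
# standing in for the per-range MyISAM tables a merge table would union.
cur.execute("CREATE TABLE history_1 (id INTEGER, credit REAL)")
cur.execute("CREATE TABLE history_2 (id INTEGER, credit REAL)")

def insert(id_, credit):
    # Writes are routed to the right partition by id range.
    table = "history_1" if id_ < 1000 else "history_2"
    cur.execute(f"INSERT INTO {table} VALUES (?, ?)", (id_, credit))

# Reads stay simple: one view unions the partitions, like the merge table.
cur.execute("""CREATE VIEW history AS
               SELECT * FROM history_1 UNION ALL SELECT * FROM history_2""")

insert(42, 10.5)
insert(5000, 99.0)
rows = cur.execute("SELECT id, credit FROM history ORDER BY id").fetchall()
```

A view tracks its underlying tables automatically, which sidesteps the stale-merge-table problem the ALTER TABLE above works around; MRG_MyISAM needs that UNION list refreshed explicitly.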

Is there merit to this idea and could "table splitting" be an idea for the Berkeley tables as well?
____________

How about sending more AP 5.05 and less SETI@home 6.03? More time out in the field processing, with fewer contacts to the server for more work. Days' worth of work at a time instead of hours.

Sure the initial change might be a little rough, but I think it will smooth out once you get enough "larger" workunits out in the field.

Once production of the larger workunits is up to par, re-examine the workunit compression idea and see if it would now be worth it, since the number of AP units will have increased.

Is it possible to produce workunits for an offsite mirror while the system is down for the weekly Tuesday maintenance? Stock up while things are cleaned up? This might reduce the startup stress on the servers when everything comes back online. Let the offsite mirror take the load and give the main server time to build up some workunits.

How about sending more AP 5.05 and less SETI@home 6.03? More time out in the field processing, with fewer contacts to the server for more work. Days' worth of work at a time instead of hours.

Sure, the initial change might be a little rough, but I think it will smooth out once you get enough "larger" workunits out in the field.

This is what many people (including myself) would like. The problem is that the AP and MB WUs are split from the same input datasets. To make a long story short, the AP splitters chew through the input files FASTER than MB WUs are being processed. Two or three weeks ago we had the situation where the AP splitters had processed every single one of about 100 input files while the MB splitters were busy working on the first 10 - 20 input files! The AP splitters were basically shut down for a couple of weeks to let MB catch up on the backlog.

Now I remember.. also.. TV channels have trucks when they do live reports from remote places.
IIRC, these small satellite-dish transmitters use (a kind of) microwaves.
Of course, video and audio need more bandwidth than radio alone, but maybe this technique would work for internet traffic.

Matt,
If this is really a science/research project, then there are only two solutions to the bandwidth problem.

1. Get more funding
2. Deploy the NTPCkr

I HIGHLY favor these two over introducing some non-productive or artificial latency. From these two goals, other ideas become possible.

I would really like to install a server-side instance of BOINC on my machine at home, but I need to make time and upgrade drive space. It would really help me understand how this all works, and it sounds like something cool to get into.

If SETI@home has members with big internet connections and big HDDs, maybe the SETI@home scheduler could tell clients, at every work request, where another member's machine is from which they can download new WUs or to which they can upload results.
If not for free, maybe for some money toward the electricity bill.

And these outsourced servers at SETI@home members' homes would report/upload/download to Berkeley at different times.

Or maybe the SETI@home scheduler could point to a different outsourced server each time, so those servers also wouldn't be under full load.

IIRC, Skype also uses the hardware and internet connections of many home users for better voice quality.

..this is the last idea for today.. I don't want to SPAM the forum.. ;-D

As a scientific project, the scientists in charge must keep extremely tight control of the data in order to preserve its validity. For that reason, it's not likely that they would move data to servers outside of their direct control.

Even the data we are given is checked by a wingman before it's loaded into the database. Constant checks and controls have to be maintained to preserve the validity of the science.
____________

Delegate the tasks to responsible people (of which there are thousands trying to help you)

Implement - Evaluate - Adjust - do it again.

I am sure you realize you are on the bleeding edge of technology. You will eventually write the book (if you are not writing it now) on distributed processing. I would think that several commercial organizations would be more than happy to review your work for a mere donation of money or equipment. You have ten-plus years of experience going where no one has gone before (pardon the pun).

You and the guys at Berkeley are making tremendous progress in computer science as well as astronomy.

Like any other not-for-profit (not non-profit) organization, you have grown to the point where you need additional infrastructure, in terms of people skills, to allow you to focus on your goals.

We are fully behind you! Take the step. Dare to be great, don't settle for good enough.