We still have the customary nagging problem and, of course, out here we can't really tell why.

You asked about "weirdness" and here is some (you have to read this whole paragraph): If we close and restart the project, *sometimes* we get faster downloads for a few seconds before they slow to a trickle again. When we get the faster server, okay, we get the faster server, BUT that's not what I'm reporting. I'm saying that if we get the slow server, it seems to "start fast" and then "trail off." If you interrupt that and get the slow server again, it will "restart faster" and then slow to a trickle again. I don't know what could account for that behavior unless that server is having trouble loading the files it serves, and what we are experiencing "out here" is really an I/O constraint "in there."

I don't know what I'm talking about and I admit it, but "a bandwidth problem" between the server and us doesn't make any sense as a reason for that behavior. If it were the connection to us, the download shouldn't start or restart faster, right?

Of course, I'm blind to what you may be doing to throttle one server's access to the outside world on purpose.

Thanks for what you are doing. I can only imagine that all of this is a constant thorn in your side. I hope you can get a new worry-free and bulletproof solution in place before you leave us for a month and can enjoy your time away.

Someone's probably already invented this "wheel", but I can't seem to find references to it. Also, my knowledge of the back-end algorithms is nil, so I'm making a few assumptions about the Download Servers. Those caveats aside, here goes:

Individual work units are sent out multiple times for processing, traversing the campus 100Mbit link each time. The outbound bandwidth utilization of the campus 100Mbit link could be lowered by telehousing the Download Servers in the rack across campus where the 1Gigabit link terminates (might be politically impossible on campus), or at the ISP (might be expensive).
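Just to make the multiplication effect concrete, here's a rough back-of-the-envelope sketch in Python; the work-unit size, replication factor, and daily volume below are made-up numbers purely for illustration, not actual project figures.

# Rough, illustrative estimate of how replication multiplies outbound traffic
# on the campus link. All numbers are hypothetical, not SETI@home figures.

WU_SIZE_MB = 0.35        # assumed size of one work unit, in megabytes
REPLICATION = 2          # assumed number of hosts each WU is sent to
WUS_PER_DAY = 500_000    # assumed number of distinct WUs issued per day

outbound_mb_per_day = WU_SIZE_MB * REPLICATION * WUS_PER_DAY
outbound_mbit_per_sec = outbound_mb_per_day * 8 / 86_400   # MB/day -> Mbit/s

print(f"{outbound_mb_per_day:,.0f} MB/day ~= {outbound_mbit_per_sec:.1f} Mbit/s sustained")

# Telehousing the Download Servers at the 1 Gbit termination would take that
# replicated traffic off the campus 100 Mbit link entirely.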

If the Download Servers have high bandwidth requirements to/from the other Project servers (or need to be backed up), Cache Servers could be telehoused instead. These Cache Servers would be asked by their respective Download Servers to pass Work Units to requesters (with a confirmation back to the DS). If a WU was not in cache, the DS would pass it through to the cache server. There would be a slight additional delay for uncached units, but cached units would not be delayed by having to use the campus 100Mbit link for subsequent transmits to requesters.
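To make that flow concrete, here is a minimal Python sketch; the class and method names (fetch_work_unit, confirm_sent, and so on) are entirely made up, just to illustrate that only cache misses would cross the campus link:

class DownloadServer:
    # Stand-in for the real DS on the campus side, purely for illustration.
    def fetch_work_unit(self, wu_id):
        return b"\x00" * 350_000              # pretend work-unit payload

    def confirm_sent(self, wu_id, requester):
        print(f"DS notified: {wu_id} sent to {requester}")

class CacheServer:
    def __init__(self, download_server):
        self.ds = download_server             # link back to the DS
        self.cache = {}                       # wu_id -> work-unit bytes

    def handle_request(self, wu_id, requester):
        wu = self.cache.get(wu_id)
        if wu is None:
            # Cache miss: the WU crosses the campus link once, then stays here.
            wu = self.ds.fetch_work_unit(wu_id)
            self.cache[wu_id] = wu
        self.send(requester, wu)
        self.ds.confirm_sent(wu_id, requester)   # confirmation back to the DS

    def send(self, requester, wu):
        print(f"sending {len(wu)} bytes to {requester}")   # placeholder network send

cs = CacheServer(DownloadServer())
cs.handle_request("wu-123", "host-A")         # miss: pulled from the DS, then cached
cs.handle_request("wu-123", "host-B")         # hit: served from cache only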

There would also need to be some sort of FIFO scavenging algorithm on a CS to maintain cache storage at sensible levels. As the data is always on the download servers, CS machines need not be expensive, nor would they need to be backed up.
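A similarly hand-wavy sketch of the FIFO scavenging, using a byte budget and evicting the oldest entries first (safe, since everything still lives on the Download Servers):

from collections import OrderedDict

class FifoCache:
    def __init__(self, max_bytes):
        self.max_bytes = max_bytes
        self.entries = OrderedDict()     # insertion order == arrival order
        self.used = 0

    def put(self, wu_id, data):
        self.entries[wu_id] = data
        self.used += len(data)
        self._scavenge()

    def get(self, wu_id):
        return self.entries.get(wu_id)   # no reordering: pure FIFO, not LRU

    def _scavenge(self):
        # Drop the oldest cached units until we're back under the budget.
        while self.used > self.max_bytes and self.entries:
            _, old = self.entries.popitem(last=False)
            self.used -= len(old)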

This may have been asked before, but why was the Hurricane Electric 1Gbit/sec line terminated down on the campus, instead of in the SSL, in the first place?

Hurricane Electric is the Internet Service Provider, not hardware. The plan allows data rates up to 1 Gbit/s at an annual cost of $12,000 IIRC. Getting the data to PAIX so it can connect to H.E. is a separate issue.

Joe

My question was: why is it down on campus, and not at the SSL? I know it's an ISP... another way of saying this is: why does SETI need to use the 100Mbit/s line down to the campus, when the ISP's termination should have been on the hill all along?

I think you'll find the answer to be "University politics". And there is nothing logical about them.
____________
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?

My question was: why is it down on campus, and not at the SSL? I know it's an ISP... another way of saying this is: why does SETI need to use the 100Mbit/s line down to the campus, when the ISP's termination should have been on the hill all along?

The end of the HE connection is actually in a building in Palo Alto, across San Francisco Bay from the campus. SETI uses the campus ISP to get the data from there into the campus IT building. If you check the router pages, you will see there is more than one link between the campus and Palo Alto.

As to the ISP link being at the SSL, well, the campus is the landlord, and you would need the landlord's permission to string any cable across its property. Also, the ISP might want to be paid to string that cable. Gigabit connectivity isn't going to run on any existing cable, so we could be talking about stringing an undersea cable from Palo Alto to Berkeley. I suspect that might run into some dollars. Or perhaps they might be able to get permission from CalTrans to string a cable on the Bay Bridge. In any case, this isn't something that can be jury-rigged. So this is out.

The campus may be charged by its ISP based on the total data on its link. Allowing SETI to go full bandwidth would change that charge. Obviously, in this era of government funding cutbacks, that would have to be run by the Board of Regents, and I doubt they would agree to it unless they were paid back by SETI. As you know, SETI is rather short of funds right now.

Frankly, with the limited resources available, I'd much rather see work on Ntpckr than on bigger bandwidth.

Probably the wireless link would be too slow for the high demand of SETI, but a couple of fast ADSL links would do the same work, would surely cost a fraction of that, and would require no additional wiring.
____________

You would still need permission from the Regents of Berkeley to put up the repeaters, and likely permission to use part of the wireless spectrum at such distances.

Then there's the problem of wireless latency and interference. Not sure that's a good idea.

As a licensed HAM radio operator, I transmit minor amounts of data quite often without any issue at all, over extraordinary distances, with the help of other HAM ops who act as repeaters.

The use of the wireless spectrum would be dictated by the FCC for mid-range data transmission.

Latency is a hardware/load problem more common to multi-user systems; consider that this system would be dedicated to SAH, and would logically also have a hard-wired parity check for the transmitted/received data.
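As a toy illustration (nothing SETI-specific), the kind of end-to-end check I mean could be as simple as comparing a digest computed on both ends; in this Python sketch a hash stands in for the parity check:

import hashlib

def digest(data: bytes) -> str:
    # The hash plays the role of the parity check: compute on both ends, compare.
    return hashlib.sha256(data).hexdigest()

# Sender side: the work unit goes over the wireless link, its digest over the
# (barely used) campus network.
payload = b"example work-unit bytes"
sent_digest = digest(payload)

# Receiver side: recompute and compare; a mismatch means "please re-send".
received = payload                    # stand-in for what arrived over the air
assert digest(received) == sent_digest, "transmission error, request re-send"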

As for permission to put such a system in place, I can understand the red tape must be overwhelming, but SAH's usage of the campus network would be reduced to a tiny fraction, as the only need for it would be the parity check I mentioned above.
Wouldn't that be enough justification alone?

The antennas would probably be objectionable for cosmetic reasons unless there were already towers in place with space for rent.

The hardware requirements might be cost-prohibitive as well. Though we've recently proven that, with the right people in charge of donations and motivating said donations, this project can really get stuff done. (/shameless plug for the GPU Users Group < you folks are awesome!)

I do believe the airspace is owned by Berkeley, and such radio transmissions may disrupt other buildings. Therefore this would still be an issue requiring the permission of the Regents of Berkeley, so that it can be determined whether those concerns are real.