Matt was able to get us 1.6 TB of data (enough for a day and a half) by first transferring it to Google storage, then on to Berkeley. The problem appears to be somewhere in the Internet2 route between Berkeley and Green Bank. We're pushing campus networking to treat it as a real problem (they keep asking if we've checked our cables, even though the problem exists on several machines).

More news tomorrow, we hope.

The problem might be upstream from campus, and since that isn't theirs to fix, they fall back on the standard "check your cables" answer.

I do remember one time when I was having issues reaching LHC@Home from my home machine, while my work machine a dozen miles away, on a different ISP, had no issues. A traceroute showed a failed router at some network interchange point on the East Coast, before the packets should even have crossed the Atlantic; the other ISP used a totally different route. I didn't have a clue who the router belonged to, so there was no way to pitch a bitch.

Well, experienced campus network people should know their network topology, and if there is an issue upstream they should be able to determine where the slowdown is happening. Then they have to raise a trouble ticket with the provider at that network tier. Who knows how quickly that link's tech support will respond?
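For what it's worth, the sort of first pass I'd expect from them is something like the minimal sketch below (Python, assuming a Unix-style traceroute binary on the PATH; the target hostname and the 50 ms threshold are placeholders I picked, not anything the project actually uses). It walks the route hop by hop and flags where the round-trip time suddenly jumps, which is usually a decent hint about which provider to open the ticket with.

import re
import subprocess

TARGET = "berkeley.edu"   # placeholder destination, not the real endpoint
JUMP_MS = 50.0            # flag any hop that adds this much round-trip time

def hop_latencies(target):
    """Run the system traceroute and return (hop, line, median RTT in ms)."""
    out = subprocess.run(["traceroute", "-n", target],
                         capture_output=True, text=True, timeout=300).stdout
    hops = []
    for line in out.splitlines():
        m = re.match(r"\s*(\d+)\s+(.*)", line)
        if not m:
            continue   # skip the header line
        rtts = sorted(float(x) for x in re.findall(r"([\d.]+)\s*ms", m.group(2)))
        if rtts:       # hops that only answered '* * *' are skipped
            hops.append((int(m.group(1)), m.group(2).strip(), rtts[len(rtts) // 2]))
    return hops

prev = 0.0
for hop, text, rtt in hop_latencies(TARGET):
    flag = "   <-- latency jumps here" if rtt - prev > JUMP_MS else ""
    print(f"{hop:2d}  {rtt:8.1f} ms  {text}{flag}")
    prev = rtt

If the jump shows up beyond the campus border router, that hop is the one worth quoting in the trouble ticket.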

So, anyone want to comment on my idea of just FedEx'ing some drives to Berkeley?

That assumes the tapes are removable (like Arecibo's), or that they have extras to take their place. They might just read the data, dump it down the pipeline, and then rewrite over it. Not to mention the cost of FedEx as opposed to the network link.

I would think the department has $50 in petty cash for the delivery costs. The only thing I can see that might be a hindrance, as Zalster comments, is if they don't have any spare drives to move the data onto for shipment.

I don't know what format the Green Bank data is stored in. If it is on physical drives, couldn't you just overnight them via FedEx? Assuming you have the correct hardware on your end to mount them and get the data into the splitters, wouldn't that keep us going for a few more days until campus networking gets past its Level One script-reader responses?
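Just to put rough numbers on "a few more days": the only hard figure in this thread is that 1.6 TB lasted about a day and a half, so here is a back-of-envelope Python sketch based on that. The shipment sizes are made-up examples, not anything the project has said it could send.

TB = 1e12                                   # decimal terabyte, in bytes
stockpile_tb, stockpile_days = 1.6, 1.5     # Matt's Google-storage transfer and how long it lasted
rate = stockpile_tb * TB / (stockpile_days * 86400)   # implied splitter consumption, bytes per second
print(f"implied consumption: {rate / 1e6:.1f} MB/s (~{rate * 8 / 1e6:.0f} Mbit/s)")
for shipment_tb in (2, 4, 8):               # hypothetical overnighted drive capacities
    print(f"a {shipment_tb} TB shipment would last ~{shipment_tb * TB / rate / 86400:.1f} days")

That works out to roughly 12 MB/s, around 100 Mbit/s sustained, so a single large drive only buys a couple of days; it would take quite a lot of shipped capacity to matter, which is worth knowing before anyone spends the FedEx money.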

Unfortunately Breakthrough needs the drives for daily observations, so we can't pull and ship the way we did with Arecibo. (That, and they are in a fairly large RAID configuration, so we'd probably need to ship all of them.) @SETIEric

Unfortunately Breakthrough needs the drives for daily observations, so we can't pull and ship the way we did with Arecibo. (That, and they are in a fairly large RAID configuration, so we'd probably need to ship all of them.)

Thanks for that. I thought I had read that somewhere but didn't want to misspeak. We'll muddle through the best we can.

I am temporarily breaking my self-imposed silence on the SETI boards, due to my concerns about the unfolding events regarding data from the GBT.

The other day we were told that the outage was longer than usual because they had to run down the hill to the CoLo to change drives and reboot kit. Wasn't the whole idea of moving the servers from the closet to the Data Centre to get better cooling, UPS facilities, faster network links, and on-site staff to hot-swap drives and reboot kit?

And it also seems that the campus networking staff are yet again not providing value for money, given how much per square foot it costs SETI to rent server space in the CoLo. But any chain is only as strong as its weakest link, and we seem to have found ours with the GBT data.

I can remember the heady days of our own dedicated fibre link up the hill to the lab, and the regular outages lasting three hours at most. What happened to all that, and was the move for the better? Yes, it was only a 100 Mbit link and not the gigabit link they have now, but they had better and faster control over the servers back then.

Eric said once that if another university offered to host SETI@home, he would seriously consider it. Time to kick Berkeley into touch, maybe?

The other day we were told that the outage was longer than usual because they had to run down the hill to the CoLo to change drives and reboot kit. Wasn't the whole idea of moving the servers from the closet to the Data Centre to get better cooling, UPS facilities, faster network links, and on-site staff to hot-swap drives and reboot kit?

I wondered about that post too. Why wasn't the staff at the colo able to just swap drives and reboot? Aren't we paying for that 24-hour staff resource? Why didn't that happen?

That would only work if the project had sufficient spare drives of the "right type" down at the colo; otherwise someone has to "run down the hill" to restock the spares, remove the dead ones, and do some general face-to-face stuff.

We were told at the time that sufficient spare drives of the right type would be stored down there, and further that the 24-hour staffing was one of the prime movers for the change. Were we fed a load of old bunkum, or, more likely, has their eye been taken off maintaining the arrangement? Let's also realise that the lab at SSL does other work for UCB besides SAH, Eric's hydrogen project for example.

Whatever, let's cut through the crap. I am quite happy to put my money where my mouth is. If I can be told what spare drives they need at the CoLo and the cost, I'll see what I can do to help.

Meanwhile, I expect campus networking to continue to be pressured to do what we pay them for. Otherwise, a swift letter to the San Francisco press and a halt to the monthly payments could be in order. Naming and shaming, plus withdrawal of funding, usually produces a timely response in these situations.

Yes, I remember that too. There were supposed to be ready spares on site at the colo. When you contract for server space and maintenance at a server farm, you normally contract for the labor to maintain those server resources. That means ready spares, staff to replace drives, and even having them ship the defective hardware off to the manufacturer for warranty replacement under either your shipping account or the contracted organization's shipping account.

That has been my experience in past jobs with robotic data library companies. So what services did we actually contract for at the colo?

There may be problems with third parties maintaining our rather eclectic collection of bespoke servers. IIRC, it matters in which order they're rebooted, so that all the interconnect mount points can be established. Our staff may prefer to do it themselves, knowing that we're watching over their shoulders for the slightest glitch.

I think you've gone a bit off track there, Richard. It was never intended that the CoLo staff would reboot the whole collection of servers at any point, and certainly not during the Tuesday outages. That is why all of the kit was put on remote-controlled power strips, so that it could be done from the Lab, or from home at other times. The CoLo staff would be called in on those occasions where a recalcitrant server needed a physical kick up the rear end to come back to life.
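On the reboot-ordering point, here is a rough Python illustration of why the order matters and what "done from the Lab, or from home" could look like in practice: bring a box up (however that is actually triggered, e.g. via the remote-controlled strips), then wait until it is serving before starting the machines that mount from it. The hostnames, the NFS port check, and the timeouts are all hypothetical; this is not how the Berkeley staff actually script their restarts.

import socket
import time

BOOT_ORDER = ["storage1.example.edu", "storage2.example.edu", "download.example.edu"]
PORT = 2049          # NFS; any "is it serving yet?" port works for the check
TIMEOUT_S = 600      # give up on a host after ten minutes

def wait_until_up(host, port, timeout_s):
    """Poll a TCP port until it accepts connections or we run out of time."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=5):
                return True
        except OSError:
            time.sleep(10)
    return False

for host in BOOT_ORDER:          # exporters first, then the machines that mount them
    print(f"power on {host} (e.g. via its remote-controlled power strip) ...")
    if not wait_until_up(host, PORT, TIMEOUT_S):
        raise SystemExit(f"{host} never came up; stopping so later mounts don't fail")
    print(f"{host} is up; safe to start the next box")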

I've made the points that I wanted to make, i.e. that we are not getting value for money from either the CoLo or campus networking, and that we should be making more fuss about it. I'll see you sometime in March.