Not much news. Eric, Jeff, and I are still poking and prodding the servers trying to figure out ways to improve the current bandwidth situation. It's all really confusing, to tell you the truth. The process is something like: scratch head, try tuning the obvious parameter, observe the completely opposite effect, scratch head again, try tuning it the other direction just for kicks, it works so we celebrate and get back to work, we check back five minutes later and realize it wasn't actually working after all, scratch head, etc.

Thanks for all the suggestions the past couple of days (actually the past ten years). Bear in mind I'm really more of a software guy, so I'm well aware that there's far more expertise out there regarding the nitty-gritty network stuff. That said, like all large ventures of this sort, the set of resources and demands is quite random, complicated, and unique - so a solution that seems easy/obvious may be impossible to implement for unexpected reasons, or there's some key detail that's misunderstood. This doesn't make your suggestions any less helpful/brilliant.

Okay.. back to multitasking..

- Matt-- BOINC/SETI@home network/web/science/development person
-- "Any idiot can have a good idea. What is hard is to do it." - Jeanne-Claude

In reality, we have a 1 Gbit connection to the world via Hurricane Electric, but alas this is constrained by a 100 Mbit fiber running from the lab down to campus - it will take some big $$$ to upgrade that, which may happen sooner or later (as it would benefit more than just us).

I mentioned how we seem to have a 60Mbit ceiling...

... We have gigabit all over our server closet (more or less - some older servers are 100Mbit going into the 1Gbit switch).

What the software guy Matt doesn't know - it's outside his expertise - is whether that fiber is capable of gigabit speed, in which case a couple of [approved] boxes at each end is all that's needed. Crosses fingers.

As to the ticket, one wonders if some home-made sweets for a worker bee in CNS might turn up just what other fiber links there are, what boxes are in use, and maybe even whether the fiber is capable. Someone knowing that and adding it in the remarks section of a ticket might smooth things through.

If that fibre will take 1 Gb/s traffic, then it could well bring it back into the frame. Current Cisco routers can routinely handle multiple Gb/s, and a few years ago Cisco was not averse to providing some hardware for high-profile network tasks that could give some marketing leverage.

Cisco and IBM Tivoli are long-time business partners. It's not beyond imagination that Cisco could bring some IBM Tivoli technology along to stitch together the whole server/database/network management mix. Tivoli would be a sledgehammer to crack a nut, for sure (only a small Tivoli subset would be needed), as the SETI volume and complexity would be no issue to Tivoli in systems-management terms; it just needs marketing-management clout to make it happen. Both have done it before, where marketing leverage gave the payback on the hardware.

All depends on the reality of the fibre capacity ...... but given that, a phone call in the right place could produce results....

One thing to remember is that by solving the bandwidth problem, we probably relocate the choke point. Recall not that far back, disk space was a big issue.

Here is a crazy thought to consider - replicate the project somewhere else. There are now literally dozens of BOINC projects running out there, all running different things. Is there a partner/supporter out there with BOINC ambitions, but not quite the same Nobel Prize aspirations, willing to work with the lab: split some tapes, collect the science, and ship the results back to the lab? Ideally in a different part of the world with a gigabit connection. Clearly there will be some NRE required to set it up, but the running costs should be less than 2x. I see lots of tangible benefits: bandwidth, storage, support for more users, staggered downtimes, etc.

I don't have the wherewithal to run this down, and I imagine there are likely policy/political/practical/financial reasons that make this a long shot.

Want more crazy - once you have done this once, you can do it again.

Really, really crazy - get Google to donate a little spare server time. They have a bazillion servers, acres of disk farms and more bandwidth than most developed countries.

As a long time lurker, I know how much effort Matt and the team have put in to get the project from nothing to where it is today. So if they say this is untenable, I can respect that. I am just trying to look past what a length of cable and some new switches can do to see where the vision of an ideal future lies. I have never seen a flying pig, but some crazy ideas can bear fruit.

Just an out of the box thought.

One problem: it could be multimode fiber with a length short enough for a 100 Mbit connection to work - less than 2 km. But to do gigabit over multimode with a WS-G5486 LX/LH GBIC you are limited to 550 m. To get gigabit to go 2 km or more you need single-mode fiber, and with single-mode fiber and an LX/LH GBIC you get 10 km.
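To make the reach limits above concrete, here's a tiny lookup sketch. The figures are the commonly cited maxima for these media/optic combinations; real reach depends on fiber grade and patch cords, so treat them as ballpark numbers, not gospel.

```python
# Rough reach limits (in meters) for the media/optics discussed above.
# Ballpark standard figures only; actual reach varies with fiber grade.
REACH_M = {
    ("multimode", "100BASE-FX"): 2000,         # 100 Mbit works to ~2 km
    ("multimode", "1000BASE-LX/LH"): 550,      # gigabit over MMF with LX/LH GBIC
    ("single-mode", "1000BASE-LX/LH"): 10000,  # gigabit over SMF
}

def link_ok(fiber: str, optic: str, length_m: int) -> bool:
    """True if the run is within the cited reach for that combination."""
    return length_m <= REACH_M[(fiber, optic)]

# A 2 km lab-to-campus run: fine at 100 Mbit over multimode,
# too long for gigabit over multimode, fine over single-mode.
print(link_ok("multimode", "100BASE-FX", 2000))        # True
print(link_ok("multimode", "1000BASE-LX/LH", 2000))    # False
print(link_ok("single-mode", "1000BASE-LX/LH", 2000))  # True
```

Which is exactly why the question of whether the installed fiber is single-mode matters so much here.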

What the software guy Matt doesn't know - it's outside his expertise - is whether that fiber is capable of gigabit speed, in which case a couple of [approved] boxes at each end is all that's needed. Crosses fingers.

Yeah, yeah, yeah, I know:

Question: how many software guys does it take to change a light bulb?

Answer: they can't, light bulbs are hardware.

Seriously, some of us software dudes know a little about hardware. A few of us even have DVMs, oscilloscopes, and soldering irons, and know how to use them.

Matt has said many times that the servers project is pretty "atomic" -- by which he means it'd be pretty hard to put some parts of the project here and other parts there. Most recently, in the "On Bandwidth" thread:

Of course, another option is relocating our whole project down the hill (where gigabit links are readily available), or at least the server closet. Since the backend is quite complicated with many essential and nested dependencies it's all or nothing - we can't just move one server or functionality elsewhere - we'd have to move everything (this has been explained by me and others in countless other threads over the years). If we do end up moving (always a possibility) then all the above issues are moot.

Someone else mentioned a SETI@Home based at Parkes or some other "Son of SERENDIP" site, and one could leverage the work at Berkeley and put up a complete second project -- with permission, I'm sure.

OK so the distributed computing part they thought up works. Now to create a distributed server side to keep up with all the clients!

I know that sorta sounds silly and a bit "OMG we can't do that" but I bet people said that when the whole distributed computing thing started.

Even if the answer to the bandwidth issue is just swapping out a few routers and getting the gigabit connection up at full tilt, then, like it was stated, drive space and other resources may start to strain. If companies such as Google or IBM are willing to donate some of their datacenter capacity to the project, tweaking or redoing some of the backend to allow for this could prove valuable in the future.

That's what we're talking about when we add "distributed downloads" -- instead of the splitter loading up the download server (and telling the database) we have the splitter sending stuff to local storage, a new bit pushing the work down the same pipe out to distributed servers, and then work goes from there.

If those are "volunteer servers" (following the P2P model) then BOINC has to deal with another layer of failures: disappearing servers.

CPDN did something like this, and lost an upload server -- I don't remember how it was ultimately solved, but I do remember it was ugly.

At the end of the (proverbial) day, work has to originate with the project, and end up at the project.
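The flow described above - splitter to local storage, a pusher replicating work out to distributed servers, with the project as the source of truth when a "volunteer server" disappears - can be sketched roughly like this. All the names here are invented for illustration; this is not how the actual BOINC backend is written.

```python
# Hypothetical sketch of "distributed downloads": mirrors hold copies of
# workunits, clients try mirrors first, and a vanished mirror (the
# disappearing-server failure mode) falls back to the project origin.

class Mirror:
    def __init__(self, name: str):
        self.name = name
        self.alive = True       # volunteer servers can vanish at any time
        self.store = set()      # workunit IDs this mirror holds

    def push(self, wu: str):
        """The 'new bit' that pushes work from local storage to a mirror."""
        if self.alive:
            self.store.add(wu)

def fetch(wu: str, mirrors: list, origin: str) -> str:
    """Return the name of the server a client would download from.
    Work always originates with, and falls back to, the project."""
    for m in mirrors:
        if m.alive and wu in m.store:
            return m.name
    return origin

mirrors = [Mirror("mirror-a"), Mirror("mirror-b")]
for m in mirrors:
    m.push("wu_0001")
mirrors[0].alive = False                      # a volunteer server disappears
print(fetch("wu_0001", mirrors, "origin"))    # mirror-b
```

The extra layer of failure handling is exactly the cost mentioned above: every client-side fetch now needs fallback logic, and the project still has to track which work went where.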

All of the truths apply to BOINC (and everything else).

lol, I like RFC-1925

"(3) With sufficient thrust, pigs fly just fine. However, this is not necessarily a good idea. It is hard to be sure where they are going to land, and it could be dangerous sitting under them as they fly overhead."

I have said something similar to this myself.

No "lol"... That is all Very Serious Stuff and highly and widely applicable.

I will admit that Section 2-(3) is a rather apt humourism...!

(Pink elephants not needed.)

I'm just a little bit worried for when Matt gets flattened by the 2-(3) or just finally gets blown away in the downdraught... :-(

Ned, the correct answer is "they don't, etc..." - it's not that they can't, it's just not in their job description! (So, like government workers everywhere, their attitude is "Not My Job!") ;-) (from a former government employee...)

Those of us here who actually practice these dark and arcane arts don't laugh at RFC-1925. We're all sitting here saying "Whoa, reality."

I know I pointed to (6a), but many of the solutions proposed (P2P, torrents, offsite upload/download servers) feel like taking a problem and moving it around.

The problem is the 100 megabit pipe, and the servers themselves.

It looks to me like the current infrastructure can handle the current average load. The problem is how the load builds during an outage, and how networking works when the load is near or above 100%.

Increasing the bandwidth (and adding more/better servers) raises the 100% mark, and is always going to be a good idea.

I just wonder if there are some others that could help, without reshuffling the problem.
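The "load builds during an outage" point can be put in rough numbers: if demand runs at some steady rate and the pipe drains at a fixed capacity, the backlog built up during an outage takes backlog/(capacity - demand) to clear - which blows up as demand approaches capacity. The figures below are illustrative, not measured:

```python
def catchup_hours(demand_mbit: float, capacity_mbit: float, outage_h: float) -> float:
    """Hours needed to drain the backlog accumulated during an outage,
    assuming steady demand and a pipe running flat out afterwards."""
    spare = capacity_mbit - demand_mbit
    if spare <= 0:
        return float("inf")  # at or above 100% load the queue never drains
    backlog = demand_mbit * outage_h   # Mbit-hours of deferred traffic
    return backlog / spare

# Illustrative: 60 Mbit/s average demand after a 6-hour outage.
print(catchup_hours(60, 100, 6))   # 9.0 hours on a 100 Mbit pipe
print(catchup_hours(60, 1000, 6))  # ~0.4 hours with a gigabit pipe
```

Which is why the post-outage crunch feels so much worse than the averages suggest: the closer the steady-state load sits to the ceiling, the longer every recovery takes.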

In all seriousness, though, I've spent the past four decades primarily as a software-type, but my first serious job was in "Design Automation" -- helping build the Burroughs B6800 and B6900 mainframes through software.

The line between software and hardware has always been a little blurry for me.

I will admit that because I'm a software guy, I prefer a smaller iron when doing PWB work, because things happen a little slower. If I did it all the time, I'd want a hotter iron so I could go fast.