John3 writes "InternetNews is reporting that PlanetLab is getting closer to reality. According to this article, a consortium of universities (including Princeton) is launching a test-bed platform based on Red Hat Linux. This project is different from Internet2 and some of the other "alternate Internet" networks being developed, and seems to offer the most benefit to distributed computing projects rather than generic WAN/Internet communications."

Wouldn't it be cheaper to use a station wagon full of hard disks? The cost per GB of hard disks isn't that much higher than that of DVD-R media, and if you bother to factor in the amount of time it would take to burn the DVD-Rs versus filling the hard drives, the disks might come in cheaper. They should be faster to read in, too.
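For what it's worth, here's a quick back-of-envelope sketch of that comparison (every figure below is an assumption picked for illustration, not a number from the article):

```python
# Hypothetical back-of-envelope comparison: filling hard disks vs. burning DVD-Rs.
# Every number here is an assumption for illustration, not a figure from the article.

data_gb = 1000                     # assumed dataset size: 1 TB

# DVD-R path
dvd_capacity_gb = 4.7              # single-layer DVD-R
dvd_burn_minutes = 15              # assumed burn time per disc
dvd_count = data_gb / dvd_capacity_gb
dvd_hours = dvd_count * dvd_burn_minutes / 60

# Hard-disk path
disk_capacity_gb = 200             # assumed large IDE drive
disk_write_mb_s = 30               # assumed sustained write speed
disk_count = data_gb / disk_capacity_gb
disk_hours = (data_gb * 1024 / disk_write_mb_s) / 3600

print(f"DVD-R: {dvd_count:.0f} discs, roughly {dvd_hours:.0f} hours of burning")
print(f"Disks: {disk_count:.0f} drives, roughly {disk_hours:.0f} hours of copying")
```

With those made-up numbers the disks win on time by a wide margin, which is the poster's point; plug in your own figures as they change.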

I know that some companies are offering their GIS datasets on hard disk instead of CD-R now, but they do charge a bit more. Backing up 40 GB of data to CD-R is pretty useless, though. Ramble ramble.

Thank you for your feedback. At generic-man joke productions, your satisfaction is our highest concern. Enclosed please find a refund of the cost you incurred in acquiring our defective joke.

I appreciate your constructive criticism. In the future, I would appreciate it if you would pre-emptively contact me via e-mail to ensure that my exaggerated comparisons are accurate and scientifically just.

I apologize for any insult that may have been transmitted via my inaccurate comparison. We resp

All of the mental masturbation behind "never underestimate the bandwidth of a (VEHICLE) full of (TYPE OF MEDIA)" has already been done. Rather than perpetuate this stupid plug-and-chug exercise, read the results [everything2.com].

As the owner of this thread, I would like to announce that this thread is closed to new posts.

...or was the article blurb just a bunch of buzzwords stuck together? I mean, each of the clauses in it on its own made sense but the whole blurb just seemed kind of incoherent. It's very thin on actual specifics; this sounds like it could just be more vapourware, unfortunately.

You're right to say that the blurb sounds like a bunch of buzzwords, but this actually isn't vaporware... PlanetLab has a lot of big sponsors (Intel, HP...) behind it, and while I don't see this being used by the everyday Internet user, PlanetLab is the kind of thing corporations will find very useful for its distributed computing capabilities. It's still in its infant stages now, but this is definitely a project with potential.

It took a few readings of that article, as well as a visit to the PlanetLab site, for me to get an idea of what they are trying to do. In simple terms, it looks like a network designed specifically for distributed computing projects like SETI@Home (as an example of a publicly accessible research project). Instead of relying on the Internet to link up your distributed machines, PlanetLab would be a closed, high-performance network that would allow the researchers to avoid the usual Internet traffic jams.

Close, but not quite. Planetlab is not a closed, high performance network. Rather, it's more of an overlay testbed: the machines reside on the Internet (companies that host nodes) and on Internet2 (research universities). That's part of what's so cool about it - the machines reside all over the world (see the map on the planetlab website [planet-lab.org] - it's an accurate reflection of the location of the nodes). They have a lot of visibility into nooks and crannies of the Internet, and they're beginning to be deployed widely enough that there's often a planetlab node nearby, wherever in the network you are.

After yet another read of the article, it looks like they are just building a mock-up Internet on which to test their distributed apps. This would allow them to see how their apps will perform when linked over the Internet rather than in a closed 100 Mbit lab network environment.

This would help them avoid comments like "Gee, those data packets sure take a long time to get back to us" once they move their app to the real world outside the lab.

Article fluffy, planetlab not fluffy. For the moment, planetlab is primarily a research testbed. It has about 160 nodes deployed at 65 sites; these nodes are in use most of the time by a decently large group of researchers conducting internet measurement studies and research into distributed computation.

But - that's only part of the goal. Ultimately, I believe that the goal of Planetlab is to help transition these research technologies into deployed, useful services; so the network becomes more than just a research platform, it becomes the next DNS infrastructure, or the next Akamai, or the next Napster (ok, ok, don't sue!).

So, some of the examples the article cited are pretty illustrative. For example, the MIT Chord [mit.edu] project is a Distributed Hash Table. DHTs are peer-to-peer storage/retrieval systems that allow completely decentralized resource sharing between cooperating hosts. And so on, and so on. The hope of the PlanetLab folk is that some of these projects will become the foundation for the next Internet architecture, or Internet middleware, or whatever it is you want to call it -- the next set of critical services that change the way we use the 'net.
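For anyone who hasn't run into the idea before, here's a minimal sketch of the consistent-hashing lookup that DHTs are built on (a toy illustration of the general technique with made-up node names, not Chord's actual protocol):

```python
import hashlib
from bisect import bisect_left

RING_BITS = 32

def ident(name: str) -> int:
    """Hash a key or node name onto a circular identifier space."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** RING_BITS)

class ToyDHT:
    """Toy, single-process illustration: each key is stored on the first node
    whose identifier is >= the key's identifier, wrapping around the ring."""

    def __init__(self, node_names):
        self.nodes = sorted((ident(n), n) for n in node_names)
        self.ids = [i for i, _ in self.nodes]

    def owner(self, key: str) -> str:
        idx = bisect_left(self.ids, ident(key)) % len(self.nodes)
        return self.nodes[idx][1]

dht = ToyDHT(["node-a.example", "node-b.example", "node-c.example"])
print(dht.owner("some-file.mp3"))  # any host can compute this locally, no central index
```

The real systems add routing (so each node only knows a few others) and replication, but the "hash the key, find the responsible node" core is the same.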

But even before that, Planetlab is one heck of a useful research tool. There are several papers at this year's Sigcomm [acm.org] conference (a big computer networking conference) that took their measurements using Planetlab. There are a number of other papers and projects in the pipeline that are using planetlab as their research testbed. The cool thing about planetlab is that it's now considerably larger than most prior testbeds, and has a lot more momentum for future growth. Full disclosure: I spend part of my time working on planetlab, but this post is not any kind of official view; it's just my interpretation.

One of the things I find so interesting about PlanetLab is the way employing standards has actually increased the flexibility of the whole product. Too often, standards are a primary ossifying force in technological development, especially when created after the fact; by coming up with a common platform and software package at the outset, and by having flexibility as one of the primary goals considered in development, standards will actually help ensure PlanetLab works as it was intended.

I can't help but say that most CS/IT majors need this. I've seen too many people write apps (even simple ones) that relied on the Ethernet connection the dorms provide: 10 Mbit between machines. "Scale down? Who has less than a fast cable modem these days?"

Now they just need to break the schedulers on the machines, to make them randomly almost-starve a process to make sure it can cope with a slow machine.
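That kind of starvation is easy enough to fake without actually breaking the scheduler; a hypothetical harness could just suspend the target process at random (assuming a POSIX box and a PID you're allowed to signal - this isn't any real PlanetLab tool):

```python
# Hypothetical sketch: mimic an overloaded machine by repeatedly freezing a process.
# Assumes a POSIX system and a PID you are allowed to signal.
import os
import random
import signal
import time

def randomly_starve(pid: int, duration_s: float = 60.0) -> None:
    """Alternate SIGSTOP/SIGCONT at random intervals to simulate near-starvation."""
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        time.sleep(random.uniform(0.5, 2.0))    # let the process run for a bit
        os.kill(pid, signal.SIGSTOP)            # freeze it
        time.sleep(random.uniform(0.1, 1.0))    # hold it "starved"
        os.kill(pid, signal.SIGCONT)            # let it resume
```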

I can't help but say that most CS/IT majors need this. I've seen too many people write apps (even simple ones) that relied on the Ethernet connection the dorms provide: 10 Mbit between machines. "Scale down? Who has less than a fast cable modem these days?"

PlanetLab won't help much with that. Most of the PlanetLab nodes are pretty well connected, certainly better than modems. It lets you test latency pretty realistically, given that the nodes span the globe.
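As a rough illustration of the sort of measurement that globe-spanning nodes make easy, here's a minimal sketch of an RTT probe you might run from each node (the target hosts are placeholders, not real PlanetLab machines):

```python
# Hypothetical sketch of a latency probe; target hosts are placeholders.
import socket
import time

def tcp_rtt(host: str, port: int = 80, timeout: float = 5.0) -> float:
    """Approximate round-trip time as the cost of a TCP connect, in seconds."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        return time.monotonic() - start

# On a real testbed these would be the other nodes, not example domains.
for target in ["example.org", "example.com"]:
    try:
        print(f"{target}: {tcp_rtt(target) * 1000:.1f} ms")
    except OSError as err:
        print(f"{target}: unreachable ({err})")
```

Run the same probe from nodes on several continents and you get a far more realistic latency distribution than anything a single lab can produce.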

Ohh, oohh, and next they'll have the "Internet on DVD!!!" Yeah, only 20k per month, with 100k DVDs per month, cancel anytime!
In other words, there is no replacement for the Internet; nothing can really beat it. Except, you know... maybe the Internet on DVD, for long car rides through Nevada?

Except, you know... maybe the Internet on DVD, for long car rides through Nevada?

All I'd need on a long car trip would be my e-mail, the last two weeks of Kuro5hin and Slashdot stories, plus caches of pages that the stories and the highest-rated comments link to. I don't need the whole web on DVD, just the part that I'm likely to read in the next couple hours. It's like The Matrix: when no human is looking, the Matrix does computations on its world model at a coarse precision.

So, are they looking at an infrastructure/physical test, a protocol test, or perhaps both?

I'd expect the test system to make at least some use of existing infrastructure, but perhaps they'll find something to replace the current TCP/IP protocol, or something more along the lines of IPv6.

It will be interesting to see the Internet evolve in such a way. The content has changed, but much of the mechanism behind it is still rooted in legacy. I wonder if this is intended to be a full switchover or just an upgrade.

Oh, and I wonder if private entities (such as myself) can also participate to test it out...?

Per your last question, probably not. I was looking through the site and as an individual user you've got to be affiliated with an organization that is on the PlanetLab network. They unhelpfully mention that you can achieve this by persuading your organization to join.

Different than the Internet 2 project or even Grid computing, the group says the most obvious benefit is that network services installed on PlanetLab experience all of the behaviors of the real Internet where the only thing predictable is unpredictability (latency, bandwidth, paths taken).

If you want to emulate all the behaviors of the real Internet, you need to welcome the hackers, crackers, and script kiddies, not to mention the "moms".

PlanetLab also serves as a meta testbed on which multiple, more narrowly-defined virtual testbeds can be deployed. That is, if we generalize the notion of a service to include what might traditionally be thought of as a testbed, then multiple virtual testbeds can be deployed on PlanetLab.

Any time a discussion starts to use the word meta you know you have achieved buzzword satori and can stop reading.

"[The Web is] so successful and so many people depend on it, it's become impossible to go to the core of the Internet and make radical changes to introduce the kind of new services we see people wanting to deploy," Princeton University scientist and Intel Research member Larry Peterson said during a conference call to the press.

How are changes so "radical" that they need a newly designed system merely for development and testing ever going to be gradually introduced into the "core of the Internet"?

I agree with other posters that the article seems high in fluff and low in content (understandable, since anything else would be a technical paper, not an article). But the things that stood out for me when I read the article were the part mentioned in the parent ("go to the core of the Internet and make radical changes"), and this:

"This is about pooling resources and to build out the infrastructure, but in the end this about lowering the barrier to entry to developing on the Internet," Peterson said.

"Lowering the barrier?" My goodness, my 12-year-old daughter could be designing Flash-enabled websites if she weren't so busy on AIM. What "barrier" are they talking about? I'd almost suggest we need higher "barriers" to keep out the "wELCOM tO MY wEBSIGHTE" kiddies.

Now read that last sentence again.

Maybe I'm letting paranoia run loose, but there are more than a few folks in industry that would also like to keep those kiddies off the 'net, raise the bar, have an Internet that is "more useful everyday," as Bill would say. The net effect, though, is to remove the internet [aolsucks.org] gadflies [toronto.edu] that make the 'net such a democratizing medium.

The web's success isn't due to the Microsofts and the AOLs -- it's the little guys like me [dixie-chicks.com] and you [texturedigital.com] who rub the fat cats the wrong way.

With "high-tech companies... key to the project's success" (and Intel and HP specifically mentioned), I'm afraid their goal is to make the 'net better for those high-tech companies... and to leave the rest of the masses out of the "New Internet".

As usual, someone is confusing Internet2 with Abilene [internet2.edu], which is Internet2's high-speed network. Abilene is just a part of what Internet2 does. If you ask me (and I know you didn't), Internet2's middleware [internet2.edu] stuff is much more interesting and groundbreaking than a silly high-speed network. Check out Shibboleth if you want to know where the Liberty Alliance got pretty much all their ideas. :)

Right now the project is still getting started (we in the Computer Systems Lab just finished building them 75 P4 2.4 GHz machines with gigabit cards solely for the purpose of packet generation, as far as I understand), but it should be really interesting when it gets done. Basically, it's a simulation of the Internet all in one room. It's a cool room to be in... lots of wires and Cisco crap everywhere. Almost as cool as the main CS server room...