A prototype of the next internet architecture, PlanetLab is a set of more than 700 servers spread across the globe, connected to the internet at 340 sites in more than 35 countries. You can think of PlanetLab as an entry point onto a new internet, one that supports a diversity of services and brings together a range of new and established ideas and techniques within a comprehensive design. Just as Princeton users are within a short hop of local routers to access the information they need, each of the approximately 3 million users of PlanetLab is within a short hop of the information and services they require.

The key idea, explains Peterson, is “distributed virtualization.” Each PlanetLab server can be shared among research groups and virtualized so that each group appears to have access to resources on all of the machines. The virtual servers become dedicated to a group’s research, and the group can deploy experiments of various kinds on those virtual machines.
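The idea of distributed virtualization can be sketched in a few lines. This is a toy model, not PlanetLab's actual API: all class and method names here are hypothetical, chosen only to illustrate how one “slice” gets a virtual machine on every node at once.

```python
# Toy model of distributed virtualization: creating a slice gives a
# research group a virtual machine on every node, so the group sees
# the whole platform as its own. All names are illustrative.

class Node:
    def __init__(self, hostname):
        self.hostname = hostname
        self.vms = {}  # slice name -> virtual machine state

    def instantiate(self, slice_name):
        self.vms[slice_name] = {"status": "running", "experiments": []}

class Platform:
    def __init__(self, hostnames):
        self.nodes = [Node(h) for h in hostnames]

    def create_slice(self, slice_name):
        # Distributed virtualization: one VM per node, all at once.
        for node in self.nodes:
            node.instantiate(slice_name)

    def deploy(self, slice_name, experiment):
        # The group deploys its experiment into its own slice everywhere.
        for node in self.nodes:
            node.vms[slice_name]["experiments"].append(experiment)

platform = Platform(["node1.example.edu", "node2.example.edu"])
platform.create_slice("routing-research")
platform.deploy("routing-research", "route-measurement")
```

The point of the sketch is the fan-out: one administrative action touches every machine, while each group's slice stays isolated from the others.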

Many institutions of higher education are involved, and there are currently on the order of 600 research projects and 2,500 researchers making use of PlanetLab. Some projects are simply short-term experiments, run just long enough to achieve a result for a scholarly paper. Some run continuously. For example, there are projects in anomaly and fault detection, observing when the internet is or is not behaving as it should, and there is research into different methods of routing. Researchers are able to observe hiccups. If just one of the 25,000,000 daily requests times out, researchers can trace routing paths to key nodes, triangulate to the point of failure, and by so doing learn how to route around such potential points of failure. There are also experiments probing the various characteristics of the internet and its behavior, and new management services are being explored.
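The triangulation step can be illustrated with a minimal sketch. This is not the researchers' actual method, only an assumed simplification: each failed trace is the list of hops that responded before the request timed out, and the hop where most failed traces stop is flagged as the likely point of failure.

```python
# Hypothetical failure localization: compare traced paths from several
# vantage points and flag the hop at which most failed traces die.

from collections import Counter

def suspect_node(failed_traces):
    """Return the hop that most often appears as the last responder."""
    last_hops = Counter(trace[-1] for trace in failed_traces if trace)
    node, _count = last_hops.most_common(1)[0]
    return node

# Three traces from different vantage points, all dying after hop "c".
traces = [
    ["a", "b", "c"],
    ["d", "b", "c"],
    ["e", "c"],
]
print(suspect_node(traces))  # prints: c
```

With the suspect identified, a service could then prefer paths that avoid that node, which is the “route around” step described above.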

But Peterson emphasized that this is not a self-contained laboratory. It was the intent from the beginning to use PlanetLab to deploy long running services. Researchers are therefore encouraged to attract real clients to access whatever value-added content can be provided. Users can take advantage of the services, and often find that through PlanetLab, they are obtaining better performance essentially by using a new access mechanism to the content of the internet. PlanetLab transfers 3-4 terabytes of data and touches about one million unique IP addresses every day… from users reaching out to download content or to access a PlanetLab service.

PlanetLab works particularly well with very large file transfers. It is used now, for example, to distribute video lectures from the University Channel, reaching many endpoints efficiently. Users need not know where the lecture is being served from. PlanetLab will often automatically locate the nearest server and deliver the desired content more responsively.
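One simple way to realize “nearest server” selection, sketched here under the assumption that nearness is measured by round-trip latency (the replica names and latency figures are made up for illustration):

```python
# Assumed sketch: probe each replica's round-trip latency and send the
# client to the fastest responder. Numbers below are invented.

def nearest_server(latencies_ms):
    """Pick the replica with the lowest measured latency."""
    return min(latencies_ms, key=latencies_ms.get)

probes = {"tokyo": 180.0, "frankfurt": 35.0, "princeton": 90.0}
print(nearest_server(probes))  # prints: frankfurt
```

Real systems weigh more than latency (load, availability), but the client-facing effect is the same: the content arrives from whichever server answers fastest.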

Just a few weeks ago, for example, Princeton Professor Ed Felten published a video about American voting systems. He put a two-megabyte video on the web and, given the interest, the number of hits would have swamped Princeton’s internet connection. Instead, PlanetLab distributed that content, sustaining about 700 megabits per second to clients all over the world. PlanetLab essentially broke the file into discrete chunks and distributed them among PlanetLab servers, balancing the load across the sites on which the content was hosted.
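The chunk-and-distribute approach can be sketched as follows. This is an assumed simplification of what such a service does, using round-robin placement; the chunk size and site names are invented for illustration.

```python
# Illustrative sketch: split a file into fixed-size chunks and spread
# them round-robin across server sites, so no single site carries the
# whole load. Chunk size and site names are made up.

def split_into_chunks(data, chunk_size):
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def assign_chunks(chunks, servers):
    placement = {server: [] for server in servers}
    for i, chunk in enumerate(chunks):
        # Round-robin: chunk i goes to server i mod N.
        placement[servers[i % len(servers)]].append((i, chunk))
    return placement

video = b"x" * 2_000_000  # stand-in for a two-megabyte file
chunks = split_into_chunks(video, 256_000)
placement = assign_chunks(chunks, ["site-a", "site-b", "site-c"])

# A client reassembles by fetching chunks by index, from whichever
# site holds each one, then concatenating in order.
reassembled = b"".join(chunk for _, chunk in sorted(
    item for items in placement.values() for item in items))
assert reassembled == video
```

Because each site serves only a fraction of the chunks, aggregate throughput scales with the number of sites rather than with any one site's uplink, which is how a single campus connection avoided being swamped.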

Peterson explained that from the start, PlanetLab’s founders had a very strong conviction that such real life experiences were necessary to test assumptions. Only when the system was so deployed would researchers begin to be able to view the impact of hidden assumptions and to understand fully the problems associated with such deployment.

Says Peterson: “Build it, learn, build more, learn more. And at the end of the day you wind up with a service that people can use.”