Does that just mean more and more bandwidth? It turns out that's only part of the story.

When people and machines interact across the network, even sending a small message and getting a reply can take significant time.

That is what's called network latency. “Latency is the key challenge for our interactive experiences on the Internet, whether between people or computers,” Godfrey said. “And today every millisecond matters.”

He calls his project “Networking at the Speed of Light.” Godfrey is proposing a mission for the computer networking research community as a whole: to strive to attain an Internet with as close as possible to speed-of-light latency, which is the ultimate physical limit of network speed.

Some of the largest web players, like Google, Amazon, and Bing, have studied how much latency matters by artificially inserting delays for a small random sample of their users. These studies showed that even seemingly insignificant increases in latency have a real impact on how people interact with the technology. People decrease their interactions markedly when latency is increased, often responding to delays of a fraction of a second when browsing the Web. Low latency is also critical for playing an online game, or conversing with audio or video.

A speed-of-light Internet could have a more transformational effect on how we use the medium. "Humans perceive visual events within about 30 milliseconds as indistinguishable," said CS PhD candidate Ankit Singla, who leads Internet-wide measurement work on the project. "If we can push latencies down that low, this effectively instant response would be an important threshold in user experience."

What's more, improvements in latency compound quickly. "The Earth's surface is two dimensional, so as we shrink Internet latency, the physical area we can reach within a certain time limit grows quadratically," Godfrey said. This means even a modest improvement in latency can lead to connecting dramatically more people with very low latency. Such online communities are crucial for applications like gaming, interactive music performance, and telepresence. This "might just allow some new applications to hit critical mass," Godfrey said.
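The quadratic-reach argument can be made concrete with a back-of-the-envelope calculation. This sketch is illustrative only (the fiber propagation speed and latency budgets are my example numbers, not the project's): it shows that halving the round-trip-time budget doubles the reachable radius and therefore quadruples the reachable area.

```python
# Back-of-the-envelope sketch of why latency gains compound:
# reachable area grows with the square of the reachable radius.
import math

C_FIBER_KM_PER_MS = 200.0  # ~2/3 of c, a typical propagation speed in optical fiber

def reachable_area_km2(rtt_budget_ms: float) -> float:
    """Area reachable within a round-trip-time budget, ignoring routing detours."""
    radius_km = (rtt_budget_ms / 2.0) * C_FIBER_KM_PER_MS  # one-way distance
    return math.pi * radius_km ** 2

# Doubling the RTT budget (or halving the latency overhead) quadruples the area.
print(reachable_area_km2(30.0) / reachable_area_km2(15.0))  # → 4.0
```

The same ratio holds for any pair of budgets in a 2:1 relationship, which is why even modest latency reductions can sharply enlarge the population reachable at interactive speeds.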

The first stage of the project, which is a collaboration with Balakrishnan Chandrasekaran and Bruce Maggs of Duke University, is to understand the state of the Internet today.

Despite the importance of latency, Singla's measurements have found that even simple operations, like downloading the first small piece of a web page, commonly take 30 times longer than the speed-of-light lower bound would allow, and often 100 times longer.

That means that even a small communication with a distant server might easily take seconds, when the server could be reached in a few tens of milliseconds.
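As a rough illustration of that gap, the sketch below computes the physical lower bound on round-trip time for a given distance and what a 30x slowdown over that bound means in practice. The distance is an example value of mine, not one of the project's measurements.

```python
# Rough sketch: the speed-of-light lower bound on round-trip time,
# and what the measured ~30x slowdown over that bound looks like.
C_VACUUM_KM_PER_MS = 299.792  # speed of light in vacuum, km per millisecond

def c_latency_rtt_ms(distance_km: float) -> float:
    """Minimum possible round-trip time in milliseconds over `distance_km`."""
    return 2.0 * distance_km / C_VACUUM_KM_PER_MS

rtt = c_latency_rtt_ms(4000.0)  # e.g. roughly a US coast-to-coast path
print(round(rtt, 1))        # → 26.7 (ms at the physical limit)
print(round(rtt * 30, 1))   # → 800.6 (ms at 30x the bound: nearly a second)
```

A server a few thousand kilometers away is physically reachable in tens of milliseconds, yet at a 30x inflation the same exchange approaches a full second, which matches the article's observation.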

The team's task is to examine the causes of the latency. That is, where does the time go? “Why is the Internet actually so slow, compared to what it could be?” Singla said. “That involves a measurement of factors at every layer of the Internet, all the way from where the fiber lines and the routers are physically located, to business policies of ISPs that can direct packets along circuitous routes, to protocols that are used to transfer data, to delays in the cloud servers and applications. Fully end to end—let’s understand the problem of where the time goes.”

With the measurement work serving as a guide, Godfrey's research group is also developing new technology to reduce latency in some of the Internet's most important protocols.

One of the hardest problems is dealing gracefully with the many unusual conditions that occur in the Internet, which can cause high variability in latency. "We can actually use that response time variability to our advantage," Godfrey explained, "by sending the same request to many different servers simultaneously, and using the first answer that comes back." CS PhD student Ashish Vulimiri developed this technique, using it to halve DNS resolution time, a key step in loading a web page. Vulimiri demonstrated that the technique remains effective despite the extra load it places on servers.
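The redundancy idea can be sketched in a few lines. This is an illustrative mock-up, not Vulimiri's implementation: `query_server` here simulates a lookup with variable latency, and the client keeps whichever reply arrives first.

```python
# Illustrative sketch of latency reduction through redundancy:
# issue the same lookup to several servers in parallel, keep the first reply.
import concurrent.futures as cf
import random
import time

def query_server(server: str) -> str:
    """Stand-in for a DNS lookup; response time varies per server."""
    time.sleep(random.uniform(0.01, 0.2))  # simulated, variable latency
    return f"answer-from-{server}"

def redundant_query(servers: list[str]) -> str:
    with cf.ThreadPoolExecutor(max_workers=len(servers)) as pool:
        futures = [pool.submit(query_server, s) for s in servers]
        # Take whichever server answers first; the slower replies are ignored.
        done, _ = cf.wait(futures, return_when=cf.FIRST_COMPLETED)
        return next(iter(done)).result()

print(redundant_query(["ns1", "ns2", "ns3", "ns4"]))
```

The trade-off is exactly the one noted above: every redundant copy consumes server capacity, so the technique's value depends on the win in tail latency outweighing that extra utilization.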


Meanwhile, CS PhD students Mo Dong and Qingxi Li are taking on one of the biggest causes of high latency – the venerable TCP protocol, which is responsible for controlling data transmission rates for most communication on the Internet today. Their "Performance-Oriented Congestion Control", or PCC, uses online learning algorithms to dynamically find the most effective strategy for data transmission, rather than hardwiring predefined assumptions about the right control strategy. A demo of the software, which in some situations bests TCP by more than an order of magnitude, appeared at the ACM SIGCOMM conference in Chicago in August.
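PCC's core loop can be caricatured in a toy sketch. The utility function, the probing scheme, and the simulated link below are simplified assumptions of mine, not PCC's actual design: the point is only that the sender probes slightly higher and lower rates, scores each by observed performance, and moves toward whichever scored better, with no hardwired assumptions about what a loss signal means.

```python
# Toy sketch of online-learning rate control in the spirit of PCC
# (simplified; PCC's real utility functions and control laws differ).
def utility(rate_mbps: float, loss_rate: float) -> float:
    """Reward delivered throughput, penalize loss."""
    return rate_mbps * (1.0 - loss_rate) - 10.0 * rate_mbps * loss_rate

def pcc_step(rate, measure_loss, epsilon=0.05, step=0.5):
    """One learning step: compare utilities at rate*(1±epsilon), move toward the winner."""
    up, down = rate * (1 + epsilon), rate * (1 - epsilon)
    u_up = utility(up, measure_loss(up))
    u_down = utility(down, measure_loss(down))
    return rate + step if u_up > u_down else rate - step

# Hypothetical link: losses begin once the rate exceeds 10 Mbps of capacity.
link = lambda r: (r - 10.0) / r if r > 10.0 else 0.0

rate = 2.0
for _ in range(40):
    rate = pcc_step(rate, link)
print(round(rate, 1))  # settles near the 10 Mbps link capacity
```

Because the controller learns from measured utility rather than from a fixed rule like "halve on loss," it can adapt to conditions, such as random non-congestion loss, where TCP's hardwired reactions perform poorly.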

To help launch and support his efforts to build speed-of-light networking, Godfrey recently received a Beckman Fellowship in the Center for Advanced Study for the 2014-2015 school year. With this distinction comes a semester of release time to pursue a particular project. “This time will allow me to focus in a way that’s usually very difficult while you are trying to teach classes and advise students and work on service and apply for grants,” said Godfrey. “The CAS Fellowship lets you take something that is potentially high impact and jump start it. That will lead to a broad research direction that will carry us forward for years.”

Godfrey said that he is grateful for the opportunity to launch this research with the CAS Fellowship. “Giving faculty the ability to focus deeply on a topic is a rare and valuable opportunity.”

In addition, interest from industry has been growing. Google will provide an $84,000 Google Research Award to help fund the project.