Cracking Open Internet Hardware

With unprecedented access to network hardware, researchers have the chance to remake the Internet.

For decades, the physical Internet has been in a state of suspended animation. It was designed in the 1960s to transmit files and e-mail, and even the advent of YouTube, Internet phone calls, streaming music, and networked video games has done little to change it. In part, that's because the only network big enough to provide a test bed for new hardware tricks is the Internet itself; in part, it's because the routers and switches that make up the Internet are closed technologies, sold by a handful of companies.

A project led by Nick McKeown of Stanford University, however, has begun to open up some of the most commonly used network hardware, from companies such as HP, Cisco, NEC, and Juniper. Allowing researchers to fiddle with Internet hardware, McKeown says, will make the Internet more secure, more reliable, more energy efficient, and more pervasive.

“In the last 10 years, there’s been no transfer of ideas into the [Internet] infrastructure,” says McKeown, a professor of electrical engineering and computer science. “What we’re trying to do is enable thousands of graduate students to demonstrate ideas at scale. That could lead to a faster rate of innovation, and ultimately these ideas can be incorporated into products.”

Under the auspices of a project called OpenFlow, McKeown's team has secured permission from equipment vendors to write a small amount of code that, essentially, grants access to a critical part of a router or switch called a flow table. When a packet (a chunk of data) arrives at a switch, for instance, software in the switch looks up instructions in the flow table to decide where to send the packet.
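The lookup the article describes can be sketched in a few lines. This is a simplified, hypothetical model of a flow table, not the actual OpenFlow format: each entry pairs some packet-header fields with an action, and the first matching entry wins.

```python
# A minimal sketch of a flow-table lookup, assuming a toy model:
# entries match packet header fields and name an action. Field names
# and the wildcard convention are illustrative, not the real spec.

WILDCARD = None  # matches any value for that field

flow_table = [
    # (match fields, action)
    ({"dst_ip": "10.0.0.5", "dst_port": WILDCARD}, "forward:port2"),
    ({"dst_ip": WILDCARD, "dst_port": 80},         "forward:port1"),
]

def lookup(packet):
    """Return the action of the first matching entry, or drop."""
    for match, action in flow_table:
        if all(v is WILDCARD or packet.get(k) == v for k, v in match.items()):
            return action
    return "drop"  # no entry matched

print(lookup({"dst_ip": "10.0.0.5", "dst_port": 443}))  # forward:port2
print(lookup({"dst_ip": "10.0.0.9", "dst_port": 80}))   # forward:port1
print(lookup({"dst_ip": "10.0.0.9", "dst_port": 22}))   # drop
```

Real switches do this lookup in specialized hardware at line rate; the point of the sketch is only the match-then-act structure.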

“What OpenFlow does is give you direct access to the flow table, to add and delete instructions,” says McKeown. “It’s a completely brain-dead idea.” But it hasn’t been implemented before because the assumption was that vendors wouldn’t open up their hardware. “We figured out that there was a minimum amount of access to the flow table that network vendors were okay with allowing that was still extremely useful to us for testing out our ideas,” McKeown says.
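What "add and delete instructions" might look like from a researcher's side can be sketched as below. The class and method names are hypothetical stand-ins, not the real OpenFlow API; the idea is simply that an external program, rather than the vendor's firmware, decides what goes in the table.

```python
# A hypothetical sketch of the add/delete access the article describes:
# a controller program installs rules in a switch's flow table and can
# remove them again. Names are illustrative, not the OpenFlow protocol.

class FlowTable:
    def __init__(self):
        self.entries = []  # (match, action) pairs, checked in order

    def add(self, match, action):
        """Install a rule: packets matching `match` get `action`."""
        self.entries.append((match, action))

    def delete(self, match):
        """Remove every rule whose match fields equal `match`."""
        self.entries = [(m, a) for m, a in self.entries if m != match]

table = FlowTable()
table.add({"dst_ip": "10.0.0.5"}, "forward:port2")
table.add({"dst_ip": "10.0.0.9"}, "forward:port3")
table.delete({"dst_ip": "10.0.0.5"})
print(len(table.entries))  # 1
```

The "brain-dead" simplicity McKeown mentions is visible here: the whole interface is two operations on a list of rules.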

At a recent demonstration, McKeown and his team showed off their ability to control the traffic in a network via a simple cartoonlike interface on a PC. One test was designed to let people play a first-person-shooter video game on laptops, while moving between wireless access points, without losing any information or experiencing any lags. (First-person-shooter games are commonly used in network tests because they are resource intensive, and if the network fails, it’s immediately obvious.) In the demonstration, the researchers instructed a server on Stanford’s network to find the most efficient connection to the device at any given moment. “It’s a good idea for a game, but today you can’t do that because you can’t control the routing,” McKeown says.
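The handoff logic in that demonstration can be caricatured as follows. This is an illustrative sketch with made-up access-point names and signal numbers, not Stanford's actual controller: the server watches which access point reports the strongest signal for a device and reroutes whenever the answer changes.

```python
# An illustrative sketch of the handoff idea: track the signal strength
# each access point reports for a device (in dBm, higher is stronger)
# and always route through the strongest one. All names and numbers
# here are hypothetical.

def best_access_point(signal_strengths):
    """Pick the access point reporting the strongest signal."""
    return max(signal_strengths, key=signal_strengths.get)

# As the player walks across campus, reported strengths change...
readings = [
    {"ap_east": -40, "ap_west": -75},  # near the east access point
    {"ap_east": -60, "ap_west": -55},  # midway; west is now stronger
    {"ap_east": -80, "ap_west": -42},  # near the west access point
]

route = None
for reading in readings:
    chosen = best_access_point(reading)
    if chosen != route:
        # In OpenFlow terms: rewrite the flow-table entries so the
        # game's packets exit via the newly chosen access point.
        route = chosen
print(route)  # ap_west
```

The punchline of the demo is in that `if` branch: without control of the routing, the rewrite step is impossible, and the game stalls on every handoff.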

In another demonstration, the researchers showed that OpenFlow can enable direct manual control of network traffic: using a mouse cursor, researchers rerouted data traffic from Stanford to a network in Japan. “The goal is not to show that you are controlling your network from a mouse, but that you now have control,” McKeown says. “It’s not left up to whatever the box vendor decides … This infrastructure that’s been held close is being opened and democratized.”

OpenFlow is creating an entirely new field of research, with benefits that the average person could enjoy within the next couple of years. “This could take over the Internet,” says Rick McGeer, a researcher at HP Labs who’s working on projects similar to McKeown’s. “This actually looks like an elegant, efficient solution that we can use to take all of these ideas that we’ve been exploring for the past five years and start implementing them, and start putting them in the network.”

There could, however, still be some challenges ahead, McGeer warns. First, he says, vendors would need to continue to support the project as it moves out of the lab and onto the live Internet. Second, companies that provide Internet service need to see the benefits of opening up their networks. “If I had to guess what would happen first,” McGeer says, “Comcast might want to offer multicast trees [a way to distribute the burden of data-intensive Web functions] for efficient YouTube videos, and they’ll start to put that in for their services.”
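The appeal of multicast trees can be shown with back-of-the-envelope arithmetic. The topology and numbers below are hypothetical: one video source, many viewers, and a single shared backbone link between them.

```python
# A toy calculation of why a multicast tree saves bandwidth
# (hypothetical numbers): one source streams video to many viewers
# on the far side of a shared backbone link.

viewers = 1000
stream_mbps = 5

# Unicast: a separate copy of the stream crosses the backbone
# for every viewer.
unicast_backbone_mbps = viewers * stream_mbps

# Multicast tree: one copy crosses the backbone; the tree fans the
# stream out toward viewers only after the shared link.
multicast_backbone_mbps = 1 * stream_mbps

print(unicast_backbone_mbps)   # 5000
print(multicast_backbone_mbps) # 5
```

The thousand-fold difference on the shared link is why a provider might, as McGeer speculates, deploy this before anything else.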

McKeown sees the potential to completely open up the airwaves, allowing portable devices to access any wireless network that they can detect. In a city, for instance, a Wi-Fi-enabled cell phone can probably recognize dozens of networks, McKeown says–from Wi-Fi access points to the cell networks maintained by different carriers. But if a user needs more bandwidth for a download, or a stronger signal for a clearer call, or if she moves out of range of a wireless transmitter, switching to another network is difficult, if not impossible. “Our goal is seamless mobility,” McKeown says. “We’d love to come up with a way to re-architect cellular wireless networks. But that’s further out. We’re talking 10 years.”


I’m a freelance science and technology journalist based in San Francisco. I was the information technology editor at MIT Technology Review from 2005 to 2009, where I wrote more than 350 stories about emerging technologies in areas that include computers, mobile devices, displays, communication networks, Internet startups, and more.
I was an integral part of a technology trend-spotting team, highlighting early work in reality mining, plasmonics, adaptable networks, and racetrack memory. I’ve contributed to The Economist, U.S. News & World Report, Gizmodo, New Scientist, Science News, and SELF, among other publications. And I’m currently working on a book with Nathan Eagle called Reality Mining: Using Big Data to Engineer a Better World (MIT Press).