The Internet economy is built on speed. So why does the Internet still feel so slow? David P. Reed, former chief scientist at Lotus Development Corp. and a self-styled “digitalist,” is advancing a provocative answer to that pressing question.

By Paul C. Judge

Speed is the mantra of the Internet economy. Executives make decisions fast, companies launch and revise products fast, stock prices rise and fall fast, and customers expect fast answers to their questions. But when it comes to using the Net, many of us still spend an awful lot of our time waiting. We wait for images to download. We wait for email files to be transferred. Jokes about the “World Wide Wait” and “America on Hold” have quickly become outdated, but the frustration behind the stale humor remains relevant: If we’re so fast, then what are we waiting for?

That’s a question that David P. Reed has been asking for some time. Now he’s championing some provocative answers and hoping that some influential companies will adopt them. Reed, 48, a self-proclaimed “digitalist” and the former chief scientist at Lotus Development Corp., is on something of a crusade to change how telecom companies and Internet-service providers (ISPs) think about “latency” — the time that elapses between a network request and the moment when that request is met. Latency, says Reed, directly affects the quality of users’ experience on the Net. Although ISPs aren’t blind to this issue, too few of them agree that latency is the defining metric of their networks’ performance. “What customers really care about is how long it takes for a request to come back after they send it,” he says. “And latency is controlled by network architecture, not by plumbing.”
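Reed's definition of latency — the time from sending a request until the answer comes back — can be made concrete with a toy measurement. The sketch below (an illustration for this article, not anything Reed or UUNet published) times one request/response round trip over a local socket pair; a real network adds propagation, queueing, and processing delays on top of this:

```python
import socket
import time

def round_trip_latency() -> float:
    """Time one request/response round trip over a local socket pair.

    A toy illustration of latency in Reed's sense -- the elapsed time
    between a request and its reply -- not a real measurement of
    Internet performance.
    """
    client, server = socket.socketpair()
    try:
        start = time.perf_counter()
        client.sendall(b"ping")   # the request goes out
        server.recv(4)            # the "server" receives it
        server.sendall(b"pong")   # ...and answers
        client.recv(4)            # the reply arrives
        return time.perf_counter() - start
    finally:
        client.close()
        server.close()

print(f"round trip took {round_trip_latency() * 1e6:.0f} microseconds")
```

Even this in-process round trip takes measurable time; every router, queue, and server a real request traverses adds more, which is why Reed argues latency, not raw bandwidth, determines how fast the Net feels.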

Don’t get the wrong idea, Reed urges. Excessive latency isn’t caused by technical challenges that can simply be fixed with higher bandwidth, better compression algorithms, or more MIPS (millions of instructions per second). Instead, he blames an outdated mind-set — a failure on the part of the companies that are building networks to embrace the new logic of network economics. That attitude grew out of decades marked by regulation and by predictable growth in the telephone business. Demand for telephone service grew at a steady rate, and the underlying technology was well understood. Meanwhile, it was expensive to build and maintain such networks. So the industry focused on maximizing short-term efficiency — on keeping the growth of its costly infrastructure to a minimum and on managing its networks to maximize traffic.

But with the advent of the Internet, the situation has changed: Demand is growing exponentially, the technology is unpredictable, and the resources required to expand a network are relatively inexpensive. According to Reed, today’s companies need to have a gut-level understanding that “waste” can make good economic sense. Success in the Internet economy, he argues, depends on how quickly you are able to deliver products and services that people need, not on how efficiently you use internal resources. His solution to the Net-lethargy dilemma? Overprovisioning a network rather than optimizing it — that is, building a network with far more capacity than is needed right now, so that data can flow through the system faster.

“By building more and more fat pipes and then stuffing them to capacity,” Reed says, “network operators are creating their own nightmare scenarios. But the resources and the technology required to manage these saturated networks could be used instead to buy extra data pipes and to run the whole system at lower capacity.”

If this sounds like an idea that only a technologist could love, that’s because there’s more than a bit of the geek in Reed. His eye for mathematical models carried him through MIT, where he earned four degrees in engineering and computer science, and eventually landed him a position on the faculty there. He left MIT to head up R&D at Software Arts Inc., the company that created VisiCalc, the first electronic spreadsheet. Reed joined Lotus when the company acquired Software Arts, in 1985, and eventually became chief scientist at Lotus. He later spent four years at Interval Research Corp., the recently disbanded think tank sponsored by Microsoft Corp. cofounder Paul Allen.

Reed has spent much of his career pondering ways to improve network performance. “In some ways, this isn’t a new idea,” he says, “but every generation seems to have the same mental bug.”

When Reed was a professor at MIT, for instance, the university’s computer-systems managers (who ran time-sharing systems for faculty and students) focused on promoting efficient use of their resources. “They felt that their job was to optimize the system so that it was 99% saturated all of the time,” Reed remembers. “They didn’t think about what would make users happy, although they wouldn’t admit to that fact.”

Reed complained about the sluggish network performance, and so did other faculty members. “But if the system ran at 50% capacity, resources would be wasted, and the whole notion of wastefulness offended them,” he says.
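The tradeoff Reed is describing has a classic illustration in queueing theory. In the simple M/M/1 model (an assumption of this sketch — the article itself invokes no formula), the average time a request spends in the system is the service time divided by (1 − utilization), so latency explodes as a link approaches saturation:

```python
def mean_latency(utilization: float, service_time: float = 1.0) -> float:
    """Average time in system for an M/M/1 queue.

    W = service_time / (1 - rho), where rho is utilization.
    As rho approaches 1 (saturation), latency grows without bound.
    """
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return service_time / (1.0 - utilization)

for rho in (0.50, 0.90, 0.99):
    print(f"utilization {rho:.0%}: mean latency = {mean_latency(rho):.0f}x service time")
```

At 50% utilization, a request spends on average only twice the bare service time in the system; at 99% saturation, it spends a hundred times as long — which is why the MIT system managers' fully loaded machines felt so sluggish, and why Reed argues that "wasted" capacity buys speed.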

It turns out that waste doesn’t offend people at some of the world’s most successful high-tech companies — companies that operate on a strategic principle that Reed calls “constant overprovisioning.” Intel and other semiconductor makers would go out of business if they didn’t massively invest in factories before demand for their chips ever materialized. “Because it’s such a competitive industry, the players must make huge capital investments in new chip designs and in fabrication equipment, even though they can’t predict demand,” Reed says. “The idea is to invest early — and always to have overcapacity.”

The downside, of course, is that if demand flattens out, microchip producers are saddled with huge product surpluses. “That can be embarrassing for a while, and there can be real costs when that happens,” says Reed. “But for the most part, customers and companies share those costs.”

Reed’s ideas have gained currency in some key places. Through his involvement on the board of advisers of the influential Vanguard conferences, Reed has spread the word to several leading technologists. And as a fellow with Diamond Technology Partners, a digital-strategy consulting firm based in Chicago, he has been putting his theories about latency and network performance on the agenda of Diamond’s high-powered corporate clients.

Even so, among most network providers, Reed’s theory remains decidedly radical. A few well-known companies, such as UUNet, have bought into the idea of making latency the key measure of customer satisfaction for their Internet service. But even the people at UUNet have trouble seeing how deliberately overbuilding their network would boost performance.

“We spend $2 million to $3 million every day on network development, and we are moving as fast as is humanly possible,” says Jeff Sturgeon, 40, vice president of marketing at UUNet. “I don’t see how we could ever say, ‘We’re going to build excess capacity into the system.’”

Reed’s best chance to test his prescription for improving the Net’s performance lies with a couple of young, fiber-based “backbone” carriers. He’s not prepared to name those companies, and he’s not yet sure if they will embrace his agenda.

So will David Reed’s strategy eventually help reduce the amount of time that we spend waiting to use the Internet? We’ll just have to wait and see.

Paul C. Judge (pjudge@fastcompany.com) is a Fast Company senior editor. Contact David P. Reed by email (dpreed@reed.com), or learn more about his career and thinking on the Web (www.reed.com).

A version of this article appeared in the September 2000 issue of Fast Company magazine.