Brough's writings on the technology, economic and social issues of communications at the intersection of the Internet, telecom and mobility.

March 10, 2014

I've been invited to give a talk on Internet Peering at the upcoming optical conference, OFC 2014, in San Francisco. If perchance you will be attending, it's Wednesday morning at 9:15am. If not, here's a draft of a paper I wrote to go along with my presentation.

Impact of Internet Peering on Network Architectures and Economics

Brough Turner

netBlazr Inc., 18 Bridge St., Watertown, MA USA

brough@netblazr.com

Abstract: The Internet backbone consists of ~6000 independent networks. The technology and economics of how these networks exchange data drive the location of data centers and the location and utilization of high-capacity fiber links. We explain how Internet peering works, how it has evolved, and trends that will influence future network deployments.

1. Introduction

The Internet is made up of millions of independently controlled networks that, together, support nearly 3 billion users [1]. Most of these networks, e.g. home networks and small and medium-sized enterprise networks, pay an Internet Service Provider (ISP) to provide access to any valid Internet address. Some larger enterprise networks pay for separate connections to two or more ISPs. Tens of thousands of ISPs support access and aggregation. Finally, just over 6000 ISP networks [2] form the Internet backbone by exchanging traffic over a sparse mesh of interconnections that use common link protocols and link technology, but two different business arrangements: “Internet transit” and “peering.” With transit, the upstream ISP offers to handle traffic for any Internet address. With peering, operators forward only those packets destined for subscribers on the peer’s network. This difference is key to the economics discussed in section 3.

2. Interconnection Protocols & Technology

All links transport Internet Protocol (IP) data packets with public IP addresses. Peering links (and many transit links) use Border Gateway Protocol (BGP) (currently BGPv4 per RFC 4271) to advertise the IP address ranges for which they will accept traffic. Because BGP is deployed on a link-by-link basis, there is room to negotiate BGP options (of which there are many) in support of the exchange policies of the specific operators.
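To make the transit/peering distinction concrete, here is a minimal Python sketch of the two export policies. The prefixes are hypothetical documentation addresses, not any real operator's routes, and real BGP implementations express this with route maps or filter lists rather than code like this.

    # Minimal sketch of BGP export policy (hypothetical prefixes).
    # Transit: advertise everything we know. Peering: advertise only
    # ourselves and our customers, never routes learned from other
    # peers or from upstream providers.

    OWN_AND_CUSTOMER_PREFIXES = {"203.0.113.0/24", "198.51.100.0/24"}
    LEARNED_FROM_PEERS = {"192.0.2.0/24"}
    LEARNED_FROM_UPSTREAMS = {"0.0.0.0/0"}  # default route from our transit provider

    def routes_to_advertise(session_type):
        """Prefixes we announce on a BGP session of the given type."""
        if session_type == "transit-customer":
            # A customer buying transit gets reachability to the whole Internet.
            return OWN_AND_CUSTOMER_PREFIXES | LEARNED_FROM_PEERS | LEARNED_FROM_UPSTREAMS
        if session_type == "peer":
            # A settlement-free peer gets only our own cone of addresses.
            return OWN_AND_CUSTOMER_PREFIXES
        raise ValueError("unknown session type: " + session_type)

    print(routes_to_advertise("peer"))
    print(routes_to_advertise("transit-customer"))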

Typically, each ISP arranges its own data transport circuits to a common meeting point where each ISP has its own router. The actual peering link is usually an Ethernet connection between these routers, either directly or via an in-building peering fabric (effectively an Ethernet switch provided by an Internet exchange operator).

3. Peering economics

For local or regional ISPs, peering economics are dominated by where you are and how much traffic you have. For example, a regional ISP in mid-state Illinois will have limited options for any kind of Internet transit and no local options for peering. Fiber connections from the incumbent telephone company are widely available but expensive. Competing local fiber is rare, while long-haul fiber routes have relatively few physical access points. Outside of major cities, the rates for Internet transit service can be 10x to 100x greater than at a neutral data center in a major hub like Chicago. So one tradeoff is between locally purchased Internet transit and the combined cost of a transport circuit to Chicago plus Internet transit purchased in Chicago. But additional benefits at a hub like Chicago include the ability to peer with Netflix, Google, Amazon and many others. This can offload 1/2 to 2/3 of the traffic, dramatically reducing the bill for Internet transit.
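As a back-of-the-envelope illustration, here is a sketch of that tradeoff in Python. Every number below is made up for the example (not a quote from any actual market), but the shape of the arithmetic is the point:

    # Hypothetical numbers comparing local transit vs. hauling traffic
    # to a major hub and peering off a share of it there.

    traffic_mbps       = 2000     # 95th-percentile traffic to deliver
    local_transit_rate = 10.00    # $/Mbps/month, bought locally
    hub_transit_rate   = 0.50     # $/Mbps/month at a Chicago-style hub
    transport_circuit  = 4000.00  # $/month for the circuit to the hub
    peered_fraction    = 0.60     # share offloaded via settlement-free peering

    local_option = traffic_mbps * local_transit_rate

    # At the hub, only the un-peered remainder is billed as transit.
    hub_option = transport_circuit + traffic_mbps * (1 - peered_fraction) * hub_transit_rate

    print(f"buy transit locally:   ${local_option:,.0f}/month")  # $20,000/month
    print(f"haul to hub and peer:  ${hub_option:,.0f}/month")    # $4,400/month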

While the preceding example was a small ISP, the same principles apply at every scale. The goal is to deliver as much of your customers’ traffic as possible directly to the networks with the desired destination addresses and to minimize traffic that must transit multiple networks. Larger carriers are typically present in multiple cities and multiple data centers, where additional considerations come into play. The first is “hot-potato routing”: Internet traffic is typically handed off at the first available opportunity. Depending on hand-off points, this could result in one party carrying most of the cross-country traffic while the other party carries mostly local traffic. To counter this, large carriers’ peering policies typically require connectivity at multiple hubs and relatively balanced traffic flows. The second consideration is the economic clout that comes from the scale and nature of the operator’s traffic. This shifts over time and among operators, leading to peering disputes that have briefly made portions of the Internet unreachable for some users.

4. History and futures

When the NSFnet was turned off in 1995, there were six commercial backbone carriers that did settlement-free peering at four major Internet exchange points. These so-called Tier 1 carriers attempted to form a cartel that did not peer beyond the initial group. However, traffic grew more rapidly than the original providers could handle and, by the early 2000s, large groups of secondary carriers were exchanging traffic with each other, effectively forming a donut around the original Tier 1 carriers, so that by the mid-2000s the original cartel was irrelevant [3].

The next major evolution began with the emergence of content delivery networks (CDNs) like Akamai in the early 2000s. Whether for web surfing or buffering video, lower round-trip latency improves user experience, so there is a big incentive to move content closer to the user. This also reduces the amount of data that must be carried long distances. Larger networks deploy CDN servers within their networks, so most such traffic stays entirely local. Additionally, to reach the largest number of networks, most CDN providers host CDN servers at major Internet exchanges. By 2010, the majority of inter-domain traffic went to CDNs and Google had emerged as the 2nd largest ISP in the world, by volume [4].

The recent trend is the emergence of multi-lateral peering sites where dozens or hundreds of networks exchange traffic at one location or across one tightly interconnected set of buildings. These peering points arose first in Europe and Asia, where they reduced the amount of local traffic that was routed to the US and back [5]. For example, there are 144 networks interconnected at the Hong Kong Internet Exchange. The group open-ix.org is trying to bring this model to North America. In any event, the focus is on shorter paths and less long-haul data transmission, as this cuts costs and improves user experience.

5. Peering in practice

A 2011 survey of 4,331 ISP networks (86% of backbone carriers) analyzed 142,210 inter-carrier interconnection agreements, with some interesting results [5]. 99.5% were handshake agreements based on commonly understood peering principles, without any written contract, and 99.7% were symmetric. So settlement-free peering dominates. While the majority of networks have fewer than ten peers, a number of multi-lateral peering agreements are visible in the data, since participating operators have hundreds of peers.

Because of the costs of setting up and managing interconnections, there are normally minimum traffic requirements for peering. For example, Google peers at 70+ Internet exchanges and 60+ other facilities around the world. Under their policy, networks at any of these exchanges can peer if they have adequate traffic destined for Google: 100 Mbps for US or EU peering, 25 Mbps for Asia, but no minimums at African or South American Internet exchanges [6].

6. Peering Politics

As mentioned above, economic clout plays a role in large carrier interconnection agreements, and occasionally, a breakdown in negotiations results in service disruptions for some users. For example, a November 2010 peering dispute between Level 3 and Comcast arose when Netflix changed CDN partners, from Akamai to Level 3, driving an enormous increase in traffic on Comcast-Level 3 interconnection links. How this was resolved is not public, but the two parties came to some agreement and upgraded links so Comcast customers could again get access to Netflix videos, although nearly three years passed before they announced they had a final agreement [7].

Strong competition in the Internet backbone causes prices to reflect marginal costs; however, access networks in the US and many other countries are monopolies or duopolies under dramatically less competitive pressure. Recently, major consumer access ISPs like Verizon, Time Warner, Comcast and France Telecom have been using the size of their subscriber bases to demand payments from major content providers like Netflix and Google [8, 9]. How this will play out remains a matter of speculation (and politics).

7. Conclusions

The Internet is a voluntary agreement among network operators to exchange traffic for their mutual benefit [10]. This exchange is remarkable: essentially unregulated, to a great extent informal, not well understood by outsiders, and yet, successfully responding to explosive growth and dramatic technical change for nearly two decades. And, all indications are the system will continue to work for decades to come.

8. References

[1] "Key ICT indicators for developed and developing countries and the world (totals and penetration rates)," International Telecommunications Unions (ITU), Geneva, 27 February 2013

December 19, 2013

Today is the 100th anniversary of the Kingsbury Commitment, which effectively established AT&T, a.k.a. the Bell System, as a government-sanctioned monopoly.

It was on December 19, 1913 that AT&T agreed to an out-of-court settlement of the US government's antitrust challenge. In return for the government agreeing not to pursue its case, AT&T agreed to sell its controlling interest in the Western Union telegraph company, to allow independent telephone companies to interconnect with the AT&T network, and to refrain from acquisitions the Interstate Commerce Commission did not approve.

This was part of AT&T president Theodore Vail's strategy to make telephony a public utility run by AT&T. It was AT&T Vice President Nathan Kingsbury who signed the letter to the US attorney general, but the strategy was Theodore Vail's, and the credit for the resulting Bell System monopoly goes to him above all others.

The official monopoly was finally sanctioned and regulated in 1934 (after yet another round of antitrust investigations) under the Communications Act of 1934 (which set up the Federal Communications Commission as the regulator), but by then the monopoly was complete. Its start was the Kingsbury Commitment, exactly 100 years ago today.

While that monopoly was supposedly broken into seven pieces by the Bell System breakup of 1984, in fact none of these "Baby Bells" ever competed with each other. After successive acquisitions, these seven local monopolies are now owned by either Verizon Communications or AT&T. And, despite promises of competition made to entice regulators into accepting various acquisitions, none of these local exchange operations has any meaningful business outside of their original monopoly footprint. They were monopolies then and they are monopolies today. The only difference is that the local telephone monopolies are less and less regulated.

Yes, there is local competition today, but local competitors don't have access to the wire or fiber that the monopolies have installed at ratepayer expense and with preferential access to rights of way. This means companies like netBlazr are reduced to using wireless links for most of our connections.

April 26, 2013

There are plenty of studies that show the economic value of better communications, but their conclusions come out something like "a ten percent increased adoption of high speed Internet access results in a 1% increase in the growth rate for gross domestic product." Yes, that's big, but it sounds lame.

Imagine no Internet: no data plans on phones, no ethernet or wi-fi connections at home — or anywhere. No email, no Google, no Facebook, no Skype.

That’s what we would have if designing the Internet had been left up to phone and cable companies, and not to geeks whose names most people don’t know, and who made something no business or government would ever contemplate: a thing nobody owns, everybody can use and anybody can improve — and for all three reasons supports positive economic externalities beyond calculation.

March 23, 2013

The results of an excellent study, conducted (for reasons that will become clear) by an anonymous author, reach this conclusion:

So, how big is the Internet? That depends on how you count. 420 Million pingable IPs + 36 Million more that had one or more ports open, making 450 Million that were definitely in use and reachable from the rest of the Internet. 141 Million IPs were firewalled, so they could count as "in use". Together this would be 591 Million used IPs. 729 Million more IPs just had reverse DNS records. If you added those, it would make for a total of 1.3 Billion used IP addresses. The other 2.3 Billion addresses showed no sign of usage.

Notice that, of the roughly 4 billion possible IPv4 addresses, less than half appear to be "owned" by somebody and only 591 million appear to be active.

The problem is that, to conduct the study, the author created a botnet - that is, he wrote a small program that took advantage of insecure devices to enlist additional machines to help in the study. What is amazing (if you are not a security researcher) is the extent to which he was able to co-opt insecure devices by testing only four name/password combinations, e.g. root:root, admin:admin and both without passwords.

This is very valuable research, and it was apparently done without causing anyone any harm. Nonetheless, the US government has treated this kind of research as a crime in the past, even before all the cybersecurity laws of the past decade. So I hope this researcher's anonymity holds.

November 18, 2012

On Friday (Nov 16th), the Republican Study Committee, chaired by US Congressman Jim Jordan, published "RSC Policy Brief: Three Myths about Copyright Law and Where to Start to Fix it." This was a remarkable and insightful statement of the issues and suggested some possible resolutions, all in just a little over 8 pages of text.

Unfortunately, someone (we assume the copyright lobby) got to them within hours of the document appearing and the publication was pulled from the committee's website. Luckily, in the Internet age, copies were preserved elsewhere. As I write, there is a copy here.

The first myth is "The purpose of copyright is to compensate the creator of the content."

Of course the truth is the US Constitution establishes copyright (in Article I, Section 8, Clause 8) “To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries."

In other words, the purpose of copyrights is not to protect Disney, but to further progress and innovation in our country.

It gets better. Read the eight page study here. If that copy has been taken down, Google for the title. If you can't find it anywhere, write me.

August 04, 2012

Here's the introduction to a document obtained via WCITLeaks. WCITLeaks is our only source of information on what is submitted to an otherwise secret International Telecommunication Union (ITU) process. As others have noted, here for example, there is a group of countries attempting to get the ITU to take control of the Internet. This would certainly be an advantage for countries wishing to limit their citizens' access to the Internet, but it would be a disaster for the Internet at large. Luckily, the US State Department is now on record as strongly opposing any such intervention. Here's the introduction to their submission (with emphasis by yours truly).

This contribution presents proposals to the World Conference on International Telecommunications 2012 (WCIT-12) that have been developed by the United States of America for the revision of the International Telecommunications Regulations (ITRs). The intent of these proposals is to support a revision of the ITRs that advances the worldwide goal of greater competitive and affordable access to telecommunications networks. The ITRs have provided a foundation for growth in the international telecommunications market, contributing to overall economic development around the world. The United States supports efforts to utilize the ITRs as a tool to foster continued development of international telecommunications, without overburdening the telecommunications sector with unnecessary and intrusive regulation. The United States reaffirms its readiness to work with all of the delegations to achieve a successful outcome at WCIT-12.

The United States also notes, however, that the Internet has evolved to operate in a separate and distinct environment that is beyond the scope or mandate of the ITRs or the International Telecommunication Union. Specifically, it emerged from multi-stakeholder organizations such as the Internet Society, the Internet Engineering Task Force (IETF), the World Wide Web Consortium (W3C), the Regional Internet Registries (RIRs), and the Internet Corporation for Assigned Names and Numbers (ICANN). These organizations have played a major role in designing and operating the Internet and have succeeded by their very nature of openness and inclusiveness. The United States believes these existing institutions are most capable of addressing issues with the speed and flexibility required in this rapidly changing Internet environment. As a decentralized network of networks, the Internet has achieved global interconnection without the development of any international regulatory regime. The development of such a formal regulatory regime could risk undermining its growth.

Therefore, the United States will not support proposals that would increase the exercise of control over Internet governance or content. The United States will oppose efforts to broaden the scope of the ITRs to empower any censorship of content or impede the free flow of information and ideas. It believes that the existing multi-stakeholder institutions, incorporating industry and civil society, have functioned effectively and will continue to ensure the continued vibrancy of the Internet and its positive impact on individuals and society. Furthermore, recalling that Member States agreed in Plenipotentiary Resolution 130 (Guadalajara, 2010) that “legal or policy principles related to national defense, national security, content and cybercrime . . . are within [Member States’] sovereign rights,” the United States will oppose any provisions that interfere with those rights. The United States invites other administrations to engage in dialogue consistent with these principles, which are vital to the continuing development of international telecommunications.

Thank you, State Department. This is both a clear statement of the desired outcome and introduces an interesting diplomatic ploy (highlighted in light green) to help accomplish the desired outcome.

July 27, 2012

To get access to content behind the WSJ paywall, go to Google News, type in the headline (Google Ramps Up Challenge to Cable) and then follow the Google News link.

Of course it's a short quote that only captures part of what I discussed. My point in the phone interview with Amir Efrati (at the WSJ) was that Google's KC fiber project is:

a great political statement, showing what the duopoly could be delivering (if it weren't a monopoly or duopoly) and

a great platform for Google to experiment with TV boxes, TV content licensing issues and consumer access services in general,

but it is not disruptive.

Because of the legal and regulatory structure of US access markets, we are stuck with one or a very few (usually 2) vertically integrated services. And even Google can't change that.

Until we get structural separation between the natural monopoly physical layer (dark fiber) and the potentially competitive upper layers, we will be stuck with a cable service provider monopoly or duopoly. Google's announcement of a vertically-integrated, cable TV-like set of services just proves this point.

At this year's Freedom to Connect conference, I had the pleasure of speaking with John Brown, who runs CityLink Fiber in Albuquerque, New Mexico. They offer fiber connections to businesses (everything from dark fiber on up) and, in residential parts of downtown Albuquerque, they offer 100 Mbps and 1 Gbps fiber Internet connections.

Of course it's hard to get a single TCP connection to utilize even 100 Mbps, but if you're running many things at once and/or have several people at home using the net at once, a 1 Gbps connection can't be beat. Look at this speedtest by a residential user on their network:
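That single-connection limit, by the way, is mostly window/RTT arithmetic. A rough sketch (the window sizes and round-trip time below are illustrative, not measurements from CityLink's network):

    # A single TCP connection cannot exceed (window size) / (round-trip time).

    def tcp_throughput_cap_mbps(window_bytes, rtt_s):
        return window_bytes * 8 / rtt_s / 1e6

    # A legacy 64 KB window over a 40 ms path: well under 100 Mbps.
    print(tcp_throughput_cap_mbps(64 * 1024, 0.040))    # ~13 Mbps

    # With window scaling (RFC 1323) and a 1 MB window, the cap rises:
    print(tcp_throughput_cap_mbps(1024 * 1024, 0.040))  # ~210 Mbps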

They are each building fiber access networks independent of the incumbents or the Federal or local government (although all have to deal with local governments of course). Each story is different, but that's excellent. It means there are multiple paths forward.

May 11, 2012

While I have no direct involvement with undersea fiber infrastructure, I've long been interested in the spread of communications, especially to the developing world, so I've tracked international submarine cable deployments for many years. Today, I was looking at the new submarine cable directory from the Submarine Telecoms Forum and a friend looking over my shoulder said "Wow, where does that come from?"

So if you also want a brief distraction from your busy day, here are the submarine cable sites I follow:

But for cable-by-cable data, with maps, I love the Submarine Telecoms Forum's Submarine Cable Almanac. It has detailed data and individual maps for over 200 cable systems. So where overview maps like Greg's above give you a global picture, SubTelForum's Almanac has per-cable maps like this:

April 02, 2012

If you are interested in the FCC's broadband measurement program or in Verizon's FiOS service, here's some info. I've been a FiOS customer since July 2005 and I've been an FCC measurement site since their program began in December 2010. Until last month, the monthly reports routinely showed rock solid performance at the advertised rate of 15/5 Mbps with less than 15 ms delay and less than 0.05% packet loss.

Beginning last month, and more obviously in this month's report, we saw our service degrade and then come back. Of all the measurements, it was delay and packet loss that best reflected what we were sensing.

During the entire period, bandwidth measurements were unaffected:

But over some weeks, something seemed to be happening: some transactions seemed to take longer than expected. Well, it's visible when you look at latency and packet loss. Delay was creeping up above 20 ms and then above 30 ms, with packet loss going up to and above 1%. Then on Feb 24th, Verizon did something (split a PON segment? added capacity on a backhaul?) that fixed the problems. Look at these graphs:

Seeing this with Verizon and SamKnows (the FCC's measurement contractor) was reassuring as, at netBlazr, I'd already decided the most sensitive way to measure performance was by looking at latency and packet loss. We find the open source program SmokePing is the single most useful measure of our network's performance -- a single graphic shows both latency and packet loss.

As an example, here is a SmokePing graphic for a netBlazr member that was having problems, showing before and after the problem was fixed. If only I had been monitoring SmokePing at that time, I could have found and fixed this problem on Monday rather than Tuesday afternoon. :( We know better now!
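For a quick taste of the same measurement without a full SmokePing install, here's a minimal Python sketch that shells out to the system ping (Linux-style flags assumed) and reports median latency and packet loss. SmokePing itself does far more, including the long-term graphing that makes trends visible:

    import re
    import statistics
    import subprocess

    def probe(host, count=20):
        """Send a burst of pings; return (median RTT in ms, loss %)."""
        out = subprocess.run(
            ["ping", "-c", str(count), "-i", "0.2", host],
            capture_output=True, text=True,
        ).stdout
        rtts = [float(m) for m in re.findall(r"time=([\d.]+)", out)]
        loss_pct = 100.0 * (count - len(rtts)) / count
        median_ms = statistics.median(rtts) if rtts else None
        return median_ms, loss_pct

    print(probe("192.0.2.1"))  # placeholder address; substitute a real target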

Then on Thursday afternoon I'm participating in the Knowledge Exchange - Technical, which is described as "a panel discussion on lessons learned by experienced technical leaders." Hopefully I can contribute some theory, as the other panelists are likely well ahead of me in practical experience!

Should the Internet be considered public infrastructure? What are the best ways to increase access for the underserved? What are the impediments to more efficient and economic access for everyone?

The Internet is a major force in the world’s economic and political systems, as well as in how people live, work and play in their daily lives. With over 2 billion users worldwide, the Internet still has huge capacity for growth, and users have tremendous opportunities today to leverage the technology to develop game-changing innovations that could just as radically change the communications landscape. The Internet is integral to GDP growth, economic modernization, and job creation, generating over 10 percent of GDP growth in the past 15 years in the countries studied.

Providing the opportunity that comes with Internet access to everyone will empower millions and millions of people to leverage the information economy to improve their lives and their communities.

Bob Frankston is the co-creator with Dan Bricklin of the VisiCalc spreadsheet program and the co-founder of Software Arts, the company that developed it. Frankston graduated in 1966 from Stuyvesant High School in New York City and in 1970 from M.I.T. Frankston has received numerous honors and awards for his work:

Fellow of the Association for Computing Machinery (1994) "for the invention of VisiCalc, a new metaphor for data manipulation that galvanized the personal computing industry"

The MIT LCS Industrial Achievement Award

The Washington Award (2001) from the Western Society of Engineers (with Bricklin)

Fellow of the Computer History Museum

In recent years, Frankston has been an outspoken advocate for reducing the role of telecommunications companies in the evolution of the internet, particularly with respect to broadband and mobile communications. He coined the term "Regulatorium" to describe what he considers collusion between telecommunication companies and their regulators that prevents change.

Brough Turner is a well established communications industry engineer and entrepreneur. He founded netBlazr to dramatically change the landscape for broadband Internet access in the US. Previously Brough was founder and CTO of Natural MicroSystems (IPO 1994) and NMS Communications, building several successful businesses in fixed and mobile communications equipment. He is an electrical engineering graduate of the Massachusetts Institute of Technology.

While his leading interests are technology and innovation, his career has included roles in engineering, operations, finance, marketing and customer support. He writes and is quoted widely on telecommunications topics in trade and general business publications and he is a frequent speaker at telecom industry events around the world. From 2001-2008, Brough focused on wireless infrastructure and mobile applications. His 3G and 4G tutorials are widely popular (Google ‘3G Tutorial’ for more info). Brough blogs at http://blogs.broughturner.com on the technology, economic and social issues of communications at the intersection of telecom, mobility and the Internet.

Preston Rhea is a Program Associate for the Open Technology Initiative at the New America Foundation. He supports OTI's mission of digital justice for its Broadband Technology Opportunities Program work with research, analysis, writing and program assistance. Preston also researches and writes on community-based communications and technology activism. Before joining the New America Foundation, Preston spent a year in Beijing, China working for an internet content delivery network. He holds a bachelor of science degree in electrical engineering from the Georgia Institute of Technology, as well as a Spanish minor and an International Plan certificate. Preston also studied electrical engineering at the Universitat Politècnica de València in Valencia, Spain, and spent five years in several countries working with the global student-run organization AIESEC.

February 20, 2012

On Friday evening, netBlazr had its first major outage, unfortunately lasting six hours. In the end, the problem was with one of the fibers in Cogent's riser cable inside the John Hancock Tower. It was not a simple cut; that would have been obvious. Instead we had marginal light levels, perhaps due to a nick, an overly tight bend or a poorly assembled connector.

For those who are interested, I wrote up my Friday evening adventures in summary form, followed by more detail for those who are really, really interested. Then this morning I wrote my followup list, again on the netBlazr blog, for those who are really interested in the day-to-day issues of being an ISP.

Shortly after I wrote that blog post, Ove Edfors pointed me to a paper that he wrote with Anders J. Johansson, Is orbital angular momentum (OAM) based radio communication an unexploited area? In this paper they prove that radio vortices are a subset of MIMO. Specifically, they show that OAM radio communication, i.e. using radio vortices, is a sub-class of traditional MIMO communication with circular antenna arrays. So, if you have a set of antenna elements that can create radio vortices, the calculations inherent in a MIMO radio system will automatically create and use vortices to the extent they are useful.

Unfortunately they also proved vortices don't give us the additional independent paths I'd hoped for. To get spatial multiplexing gain, MIMO needs additional independent paths, a.k.a. multi-path. Multipath is easy to come by in an office environment (due to reflections from walls, ceiling, floor, filing cabinets, etc.). But in an outdoor point-to-point link with directional antennas there are no reflections, so the only independent paths we've been able to exploit are those resulting from polarization differences (e.g. horizontal vs vertical). This remains the case - radio vortices don't help.
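A toy numpy example of why independent paths matter for spatial multiplexing (heavily idealized: perfect channel knowledge, tiny noise, made-up channel matrices): with a multipath-rich 2x2 channel the receiver can separate two simultaneous streams by inverting the channel matrix, but a pure line-of-sight channel is effectively rank one and can't be inverted.

    import numpy as np

    rng = np.random.default_rng(1)
    x = np.array([1.0, -1.0])  # two BPSK streams sent at the same time

    # Multipath-rich channel: the two rows are independent, so H is invertible.
    H_rich = rng.standard_normal((2, 2))
    y = H_rich @ x + 0.01 * rng.standard_normal(2)
    print(np.sign(np.linalg.solve(H_rich, y)))  # recovers [ 1. -1.]

    # Pure line-of-sight: both receive antennas see (nearly) the same mixture,
    # so the matrix is singular and the streams cannot be separated.
    H_los = np.array([[1.0, 1.0], [1.0, 1.0]])
    print(np.linalg.cond(H_los))  # infinite condition number: no second stream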

At very short distances, with widely spaced antenna elements at either end, you can get multiple independent paths, just due to the separate fields radiated by separate elements, but this separation gets jumbled as soon as the two ends of a link are separated by more than the Rayleigh distance, as shown in this graph.

At 5.8 GHz with a total antenna aperture of 30 cm, the Rayleigh distance is about 3.5 meters. Since I'm interested in point-to-point links that are typically over 50 meters, we remain limited to two spatial streams. There are no additional paths due to twisted vortices.
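For reference, the Rayleigh distance is 2D²/λ, where D is the total antenna aperture and λ the wavelength. A quick check of the 3.5 meter figure:

    # Rayleigh distance d_R = 2 * D**2 / wavelength

    C = 3e8  # speed of light, m/s

    def rayleigh_distance_m(freq_hz, aperture_m):
        wavelength_m = C / freq_hz
        return 2 * aperture_m ** 2 / wavelength_m

    print(rayleigh_distance_m(5.8e9, 0.30))  # ~3.5 m, as stated above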

October 28, 2011

I'm at the MassTLC's annual Unconference. I've been to each of the previous events and each year it gets larger. I'll post on my Google+ account (that also propagates to my Twitter stream - @brough - and my Facebook stream) with the hashtag #MassTLC.

October 23, 2011

Today, significant money and political capital are being expended to obtain and hold onto usable license-exempt access in the TV white spaces. These efforts are important for applications today but there’s a follow-on spectrum initiative that, if successful, would yield much greater benefits in the long term. We should be seeking similar access to as much as possible of the spectrum above 3 GHz, almost all of which is dramatically under-utilized today.

Throughout the 20th century and right up to today, it's been the case that higher frequencies "don't go as far." But this is the result of technology limits, not ultimate physical limits, and these technology limits are now being overcome.

Within ten years it will be widely apparent that higher frequencies go just as far through the atmosphere, they do just as well at penetrating buildings, and they have other extremely important benefits that lower frequencies lack.

Among their many advantages, directional antennas are smaller and more economical at higher frequencies. Directional antennas reduce received interference and facilitate spatial reuse, thus vastly increasing the utility of higher frequency spectrum.

What’s more, it's easier to send high-speed data because there is more spectrum available at higher frequencies. We'll never be able to send 1 Gbps over any real-world 6 MHz TV channel but, above 3 GHz, we can easily find 200 MHz of spectrum that's temporarily vacant and that's enough to carry more than 1 Gbps of data even with today's technology.
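A rough Shannon-capacity check on that claim (the 20 dB SNR here is an assumption chosen for the sketch, not a measured figure):

    import math

    def shannon_capacity_gbps(bandwidth_hz, snr_db):
        """Shannon upper bound C = B * log2(1 + SNR)."""
        snr = 10 ** (snr_db / 10)
        return bandwidth_hz * math.log2(1 + snr) / 1e9

    print(shannon_capacity_gbps(200e6, 20))  # ~1.33 Gbps from 200 MHz
    print(shannon_capacity_gbps(6e6, 20))    # ~0.04 Gbps from a 6 MHz TV channel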

For the next decade or two, TV white spaces will continue to be important for penetrating foliage, but even with foliage, the physics of what is possible differs from 20th century experience. In the future, the real action will be above 3 GHz.

Finally, while it's never easy to persuade existing licensees to accept secondary users in “their spectrum” even while it’s idle and they are non-interfering, it should be easier to fight the political battles now, when most people don't realize the long term value of spectrum above 3 GHz. Now is the time we should be seeking license-exempt access to as much as possible of the white spaces above 3 GHz.

Details for the technically inclined

All photons (light or radio waves of any frequency) travel at the same speed (the "speed of light"). In our atmosphere, photons at frequencies above 10 GHz are subject to absorption because they excite resonances in atmospheric molecules like water vapor or oxygen. But the atmosphere is transparent to radio signals between 30 MHz and 10 GHz so, with a clear line-of-sight, radio signals at 8 GHz go just as far as signals at 700 MHz or 50 MHz [2].

This physical fact is sometimes missed because the Free Space Path Loss (FSPL) equation (see: http://en.wikipedia.org/wiki/Free-space_path_loss) commonly used to calculate radio frequency (RF) transmission losses actually encapsulates two effects: 1) the actual path loss (which is independent of frequency) and 2) the receiving antenna aperture (which is based on wavelength). Thus the FSPL equation assumes smaller antennas for higher frequencies and, of course, smaller antennas collect less energy. With equal antenna apertures, unobstructed line-of-sight radio transmissions are frequency independent, even with 20th century technology.
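Here's a small numeric illustration of that decomposition (a sketch; the 1 W transmitter, 1 km range and 0.1 m² aperture are arbitrary figures): the power flux at a given distance has no frequency term, and the wavelength dependence in FSPL comes entirely from the effective aperture of an isotropic antenna, λ²/4π.

    import math

    C = 3e8  # speed of light, m/s

    def flux_w_per_m2(p_tx_w, d_m):
        """Spreading loss only -- frequency appears nowhere."""
        return p_tx_w / (4 * math.pi * d_m ** 2)

    def received_power_w(p_tx_w, d_m, aperture_m2):
        """Fix the receive aperture and frequency drops out entirely."""
        return flux_w_per_m2(p_tx_w, d_m) * aperture_m2

    def isotropic_aperture_m2(freq_hz):
        """The wavelength-dependent aperture hidden inside the FSPL equation."""
        return (C / freq_hz) ** 2 / (4 * math.pi)

    # 1 W transmitter, 1 km away, a fixed 0.1 m^2 antenna: the same answer
    # whether the link runs at 700 MHz or 8 GHz.
    print(received_power_w(1.0, 1000.0, 0.1))

    # But an "isotropic" antenna shrinks with frequency -- that is where the
    # extra loss in the FSPL equation comes from.
    print(isotropic_aperture_m2(700e6), isotropic_aperture_m2(8e9))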

The problems that have favored lower frequencies are reflection, refraction, polarization and diffraction. Higher frequencies have shorter wavelengths, and shorter-wavelength signals are more easily scattered. Scattered signals that reach the receiver have taken a longer path and thus arrive a little later. With 20th century technology, these delayed signals (called "multi-path" signals) were just part of the noise degrading the primary signal. Now with Multiple Input Multiple Output (MIMO, see: http://en.wikipedia.org/wiki/MIMO), it's possible to decode multi-path signals, remove them from the noise, align them in time and add them to the primary signal - multi-path signals are no longer a deficit but actually improve system performance!

MIMO technology only began to emerge in the mid-1990s, but it is now an option in the latest Wi-Fi, WiMAX and LTE specifications. MIMO uses multiple radios, and higher-order MIMO requires increasingly sophisticated calculations, but early (2x2) MIMO systems are already widely deployed in 802.11n consumer Wi-Fi products, and continued semiconductor progress (following Moore's Law) will make MIMO calculations and additional radios ever cheaper.

Also inherent in higher-order MIMO are beamforming and beamsteering. As the number of radios and antennas in a MIMO system increases, the system is able to simulate tighter and tighter beams, providing ever more spatial reuse of spectrum and more range or capacity for individual connections. However, tighter beams require more wavelengths of separation between the outermost antenna elements. Again, higher frequencies have shorter wavelengths, so antennas that support tighter beams require less space at higher frequencies. For example, a 10 degree beam at 700 MHz requires an antenna ~10 feet across. To do the same at 8 GHz, the antenna need only be ~10 inches across.
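Checking those numbers with the usual rule of thumb for a uniformly illuminated aperture, beamwidth ≈ 70·λ/D degrees (an approximation; exact sizes depend on the antenna design):

    C = 3e8  # speed of light, m/s

    def aperture_for_beam_m(freq_hz, beamwidth_deg):
        """Aperture D giving roughly the requested beamwidth: D = 70 * wavelength / beamwidth."""
        return 70 * (C / freq_hz) / beamwidth_deg

    print(aperture_for_beam_m(700e6, 10) / 0.3048)  # ~9.8 feet at 700 MHz
    print(aperture_for_beam_m(8e9, 10) / 0.0254)    # ~10.3 inches at 8 GHz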

Long term, the spectrum above 3 GHz will be more valuable than the spectrum below 3 GHz. Let’s get license-exempt access to these white spaces now, while the political stakes are still (relatively) low.

[4] NIST, Electromagnetic Signal Attenuation in Construction Materials, NISTIR 6055, http://fire.nist.gov/bfrlpubs/build97/PDF/b97123.pdf. Note that ordinary window glass is essentially transparent to RF at the frequencies tested (500 MHz to 8 GHz) while most other building materials provide substantial attenuation. One caveat: as this study was done to help design RF-based measurement devices for the construction industry, they post-processed their data to remove delayed signals. In other words, MIMO communications systems will do considerably better than these measurements suggest.

[7] RF Engineering for Wireless Networks: Hardware, Antennas and Propagation, by Daniel M. Dobkin, PhD, ISBN 0750678739 has useful chapters on signal propagation in the atmosphere, in the environment and in buildings.