June 13, 2007

The Spectrum Auction for Dummies (and by Dummies)

by publius

As Matt Stoller and others have noted, the FCC is finalizing plans for its upcoming auction of extremely-valuable wireless spectrum. It’s an incredibly important auction, the outcome of which will shape wireless voice service and, more importantly, wireless broadband for years to come. I’ll be writing more about this, but I thought it would be useful to provide an overview of some of the big-picture issues at stake. I’ll give a basic intro first, and then move on to more complicated policy debates.

As many of you know, the right to use the electromagnetic spectrum is controlled by the federal government. The FCC carves it up and “licenses” pieces of it to various types of parties (radio broadcasters, TV broadcasters, cell phone providers, etc.). Like real estate, though, not all spectrum is created equal. Lower-frequency spectrum is more valuable because it travels farther and is more resilient (i.e., better able to go through walls, hills, etc.) than higher-frequency spectrum. That’s why, for instance, AM radio stations have much wider range than FM stations -- they operate at lower frequencies.

For reasons both historical and political, TV broadcasters have enjoyed access to wide swaths of incredibly valuable low-frequency spectrum. However, as a result of the DTV transition (digital TV), broadcasters will soon be abandoning parts of this spectrum. Specifically, they will soon be required to broadcast digitally rather than in analog. Because digital transmissions are more efficient, the transition will free up spectrum space. Our eminently wise public servants in Congress have decided that this newly-freed-up broadcasters’ spectrum should be reallocated to commercial users via auctions and to public safety agencies (e.g., fire departments, emergency communications). These are all good things.

At long last, the DTV transition draws nigh. Later this year (or early next year), the FCC will auction off big chunks of the broadcasters’ spectrum -- often referred to as the “700 MHz spectrum.” In the wireless world, this spectrum is considered “beachfront property” because it travels farther and is more resilient than the spectrum that wireless providers (voice and broadband) currently use. For instance, one reason why your cell phone doesn’t work in urban office buildings (particularly if you have Sprint or T-Mobile) is that the phones often use higher-frequency spectrum (e.g., PCS spectrum) that can’t penetrate heavy concrete walls very well. In addition, and for similar reasons, the 700 MHz spectrum is far better-suited for mobile broadband than higher-frequency spectrum.

Right now, the FCC is considering the rules (the “service rules”) that will govern the spectrum auction and will likely release those rules in the next month or so. The upshot here is that the rules the FCC adopts will essentially determine who is going to “win” this spectrum. There really is no “neutral” outcome here. The structure of the rules will determine the winners, and thus the shape of the wireless market in the years ahead. That’s why parties are furiously lobbying right now to get the rules they prefer embedded in regulations governing the auctions. To take one example, consider the “block size” rules (i.e., should the pie be sliced in 6 pieces or 4 pieces? Or maybe 2 big pieces and 100 tiny pieces?). Big carriers want the spectrum auctioned in “big” blocks that no one else can afford to bid on. Smaller carriers in turn want “smaller” (and more local) blocks.

Intro aside, let me cut to the chase (this is where it gets a bit wonkier). One key question in the policy debate is whether the wireless market is too concentrated. Most everything else turns on that. Do the ever-growing market shares of the AT&T (Cingular) and Verizons of the world pose a threat? Or do they provide important efficiencies, economies of scale, etc.? That’s really the debate. And depending on how you answer it, the rules can be structured accordingly. In other words, they can be structured to facilitate more competition or to facilitate more concentration (which, again, isn’t necessarily a bad thing).

If the answer to the question is “yes, the wireless market is too concentrated,” then that concentration has two important consequences for the future of the wireless market. First, the cell phone competition we’ve enjoyed over the years will slowly fade away as smaller carriers get pushed out or purchased. Second (and this is critical), wireless broadband will become a flimsy adjunct service run by national carriers rather than something that competes directly with landline (DSL, cable) broadband (more on this below). In other words, this auction will have a strong effect on the future of wireless broadband. Let me unpack both of these points and discuss how they relate to the auction.

On the first point, the fear is that the wireless voice market will eventually consist of a very small number of national carriers in most places and, accordingly, new providers won’t enter or existing ones won’t survive. Competition is difficult largely because it’s hard to become a big wireless carrier. Entering the wireless market (or expanding services) has enormous capital costs (building towers, etc.) that are basically barriers to entry. The use of 700 MHz spectrum, however, lowers these barriers to entry because it significantly lowers capital costs. In short, it’s cheaper to operate a 700 MHz service than an equivalent service at a higher frequency because you need less stuff. For instance, because higher-frequency spectrum (e.g., 2.5 GHz) doesn’t travel as far, you have to build many times more towers than you would if you had 700 MHz spectrum. The point here is that if non-national carriers (or smaller national ones) could obtain this valuable spectrum, they would be in a better position to compete or expand services.
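To put rough numbers on the tower argument: if each tower covers a roughly circular cell, the number of towers needed scales with the inverse square of the usable cell radius. The sketch below is a toy model under loud assumptions -- the radii (10 km at 700 MHz, 3 km at 2.5 GHz) are purely illustrative, and it ignores overlap, terrain, and capacity limits:

```python
import math

def towers_needed(area_km2, cell_radius_km):
    """Idealized tower count: circular cells, no overlap, flat terrain."""
    return math.ceil(area_km2 / (math.pi * cell_radius_km ** 2))

# Hypothetical radii, for illustration only.
print(towers_needed(1000, 10))  # longer 700 MHz-style reach: 4 towers
print(towers_needed(1000, 3))   # shorter 2.5 GHz-style reach: 36 towers
```

Even with made-up radii, the inverse-square scaling is the point: tripling the usable radius cuts the tower count by roughly a factor of nine.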

The second point -- wireless broadband -- is more important though. Many people treat wireless broadband as the Second Coming -- the great next wave of the future of communications. Frankly, I don’t buy it. Barring some technical advances, wires will always beat wireless -- for instance, I suspect few of you have “cut the cord” for whatever crappy Internet service you get on your handheld devices.

But my skepticism aside, the only chance wireless broadband has to be a real competitor is by obtaining more and better spectrum. The current WiFi and WiMax services -- the supposed great competitors of big cable and telcos -- operate on high-frequency spectrum that isn’t ideal. The 700 MHz spectrum, however, could really help make wireless broadband ready for prime time, particularly if it is combined with providers’ existing high-frequency spectrum. (The construction costs argument applies here too).

There are good reasons, however, to fear that wireless broadband won’t be fully developed. Most importantly, the biggest wireless carriers -- AT&T Wireless and Verizon Wireless -- are wings of the behemoths AT&T and Verizon. These carriers are invested heavily in wireline broadband (i.e., DSL or fiber lines). Wireless broadband potentially competes with those lucrative services. For this reason, these carriers have an incentive to make sure that wireless broadband never emerges as a full competitor with their landline services. Instead, they would prefer wireless broadband to be an additional service rather than a substitute for wireline broadband. Thus, these big carriers have strong incentives to buy up the spectrum that poses this threat and not fully exploit it (or, more likely, develop it as a limited service that you use in addition to your landline broadband).

This is getting long (and maybe boring), so I’ll stop for now. The point though is that the FCC’s service rules are going to determine the types of companies who get this valuable spectrum. This, in turn, will have a strong influence on the shape of the wireless market for decades to come (this auction is the last biggie for the foreseeable future). I haven’t really gotten into the nitty-gritty of what rules would be best for facilitating competition, but I’ll try to get into that later.

[On a final point, though I’m no longer a lawyer and no longer have clients, I should disclose that I did represent clients in this proceeding that supported the arguments I outlined above. Take that for what it’s worth.]

Interesting. It had not occurred to me that the strength of the signal had anything to do with the frequency. In my mind it had simply become a pile of 'layers' and there suddenly were some layers available (we already had the auctioning of free frequency bands).

Isn't the development in protocols more important than the frequency though? Just like we now have much higher speed over copper wires than we thought possible a few years ago?

Wireless has the advantage that you can pay for a service that you have at a variety of places. There are places (cafés, restaurants) over here where free wifi access is offered, and lots of people go there to work on their laptops. I like the idea of being able to work wherever I want, and laptops are getting close to giving the same price/performance as desktops.

At the same time: in two years' time the whole town I live in will have access to fiber. It's hard to compete with that even if you have the 'stronger' frequencies.

Great, great post, publius. It's my understanding from my brief involvement in radio and my college studies that few federal agencies have been as subject to regulatory capture as the FCC has. Is there any reason to believe that this spectrum will be made available to small carriers, rather than that the big few will get exactly what they want? We all saw who the DTV frequencies went to as soon as they were made available.

By far the best utilized spectrum is the unlicensed 2.4 GHz range. No commercially used band even comes close.

There is thus not only a question of whom you sell the spectrum to, in which bundles and under what conditions, but also the option NOT to sell all of it and to leave parts of it free for everybody. Imagine how good private mesh networks and WLANs could be if they had access to more spectrum.

Good, useful post and not at all boring. Is there any indication which way this will go? If it is broken down into smaller pieces, will there be some sort of anti-trust provision to prevent a post-auction buy-up of all the smaller licenses by the giants? And finally, what are the chances that this will become an issue to which the public will pay attention, or is it likely to be one of those under-the-radar deals which give less consideration to the public good than to the lawmakers' and corporate good?
Thanks for the heads up, in any case.

Thank you! We don't get cell phone service at my house, and Verizon has let us know it has no intention of ever offering DSL there. It's amazing how much of the internet is realistically unavailable to people on dial-up. I can't even update my daughter's Wii. This information affects me directly and I'm glad I have a better handle on what to write my representatives now.

There’s likely a tradeoff between range and spectrum width that I don’t see anyone considering. Basically, when you decrease the number of towers per square mile of urban area, you increase the number of calls that a given tower has to handle, because you’ve increased the number of customers a given tower will have to service.

So it’s possible that a given chunk of bandwidth at 700 MHz will be worth less than the same bandwidth up at over 2GHz, or that a good chunk of the advantage (less infrastructure cost) will be wiped out. A given chunk of spectrum can support N simultaneous calls, after all, in a given cell.
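The capacity worry in the two comments above can be made concrete with a toy model: users per cell grow with cell area, so a fixed slice of spectrum per cell gets shared among more people as cells get bigger. Everything below is an illustrative assumption (uniform subscriber density, a fixed hypothetical per-cell capacity, equal sharing), not real network engineering:

```python
import math

def users_per_cell(density_per_km2, radius_km):
    """Users a single cell must serve, assuming uniform subscriber density."""
    return density_per_km2 * math.pi * radius_km ** 2

def throughput_per_user_mbps(cell_capacity_mbps, density_per_km2, radius_km):
    """Fixed per-cell capacity shared equally among the users in the cell."""
    return cell_capacity_mbps / users_per_cell(density_per_km2, radius_km)

# Hypothetical numbers: 100 Mbps per cell, 1000 subscribers per km^2.
small = throughput_per_user_mbps(100, 1000, 3)   # smaller, 2.5 GHz-style cell
large = throughput_per_user_mbps(100, 1000, 10)  # larger, 700 MHz-style cell
```

Under these assumptions the larger cell serves (10/3)^2 ≈ 11x as many users from the same spectrum, which is exactly the hit the commenter is pointing at: in dense areas, the range advantage buys you less than the raw infrastructure math suggests.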

And of course it should go without saying that the 36 MHz chunk of bandwidth in question isn’t really what would be considered to be a fat information pipe. For voice, it's quite valuable, but for data, you're talking perhaps a few megabits per second in a given cell. Total.

So I’d guess that the way things might move is we’d see the same number of towers at this frequency, and the selling point would be longer battery life due to reduced transmit power.

Fair warning: the current FCC chairman, Kevin J. Martin, has shown a very strong tendency to favor big users.

Martin is young-ish (40, if memory serves) and had little background in telecoms before becoming chairman of the FCC. He was part of the Bush-Cheney 2000 legal team; politics junkies with good memories may remember him from the Florida recount. His wife was counsel to VP Cheney and, before that, to John Cornyn.

FCC Commissioners are appointed for 5-year terms, and by law, no more than three can be from the same political party. So there are currently 2 Democrats and 3 Republicans. (Chairman Martin counts as one.)

The Commissioner to watch is probably Jonathan Adelstein, a former Tom Daschle staffer who has been a vocal critic of FCC policy under Martin. He'll likely throw out the first ball.

thanks doug - i didn't know that. actually, i think the commissioner to watch is mcdowell. martin, in addition to what you say, is a wholly owned subsidiary of verizon. mcdowell (the new republican) is emerging as the swing vote, the justice kennedy if you will. b/c he chose to abstain from the AT&T merger, we got the first sort of net neutrality like conditions (fleeting, but a good precedent). Tate (the 3d republican) does what Martin and the Bells say.

Slart - I don't think that's really a trade-off. You need to think of the infrastructure involved in setting up a network - building a tower, getting permits, running lines, installing equipment on the tower. Towers are really the key. Once you've built the tower, run the lines, gotten the permits, etc., it's much easier to slap on a new piece of technology to handle more traffic. By analogy, it's easier to add a door to a building you've already constructed.

Now that we've seen digital cellular evolve, we can see the mistakes that have been made over the past quarter-century. What worked and made sense back in the '80s with A-side and B-side bands, with roaming charges for anything out of local service area, is obsolete, but the geographically small areas are still the method of allocating spectrum, some of which are not being served.

My approach would be to guarantee AT&T, Sprint, T-Mobile, Verizon and any other prospective national carrier a license for the entire country available in the higher-band spectrum as long as they guaranteed to provide coverage in all areas with more than a low minimum density, say 1 person per square mile. The nationals would be allowed to pool their receiver/transmitters in other low-density areas and would be required to make any pooled receiver/transmitters or towers available to others for use.

Local, captive, government-sponsored, regional and other services would then have the lower-band spectrum available for their use for voice or data services.

These words from my Communications Theory professor always stuck with me: "The world is, for the most part, a low-pass filter." Then he said something about Nyquist and I fell asleep.

I started along the same path as Slarti, but then realized that even if you need smaller cells to manage users, the towers for those smaller cells will be cheaper to build or modify for lower-frequency use than they would be for higher-frequency use. And the indoor coverage will be better.

I'm not sure why we should have to decide whether or not it is 'too concentrated' now. Divide it up into small enough chunks that small guys have the opportunity to buy it.

If the economies of scale are enough that it is much better to be large, let the big ones buy up the smaller ones. (I know, I'm hopelessly pro-market).

I would tend to think that the stable equilibrium would be 2-4 large carriers nationwide with an additional 1 or 2 smaller ones in huge or super-concentrated markets. But I'm not wedded enough to my concept to force it with the rules.

This question illustrates the split between pro-market Republicans and pro-business Republicans.

I think publius is saying that the towers are the expensive part, hsh.

Which reinforces my point, I think, rather than discounting it: if the cost savings is in fewer transmitters (whether the cost is in the tower itself, or in the antenna and transceiver, is not all that crucial, imo), then you're going to take a hit at some point; that point being where operating at the lower tower density limits your call volume per tower per megahertz of spectrum.

Now, it could be that conventional cellular networks never, ever operate anywhere near peak capacity; I'd be surprised if that were the case, though. Usually you design to worst case, then add margin.

Again, I'm not a telecommunications guy, but I did take a course in communications theory once upon a time, and I occasionally use what I learned. You might say I'm long on signal communications theory (modulation, in other words), but short on network theory.

I guess I'm partly agreeing with you and partly with Publius, but on different levels. I would say it's cheaper to build a lower-frequency network, not because you need fewer towers, but because you need less of a tower for each cell. So I agree with you that you might need the same number of towers to manage your users on a per-cell basis, but I agree with Publius' larger point that a lower-frequency network is cheaper, if for a somewhat different reason. Of course, I could be completely wrong about all of this.

I'm not sure why we should have to decide whether or not it is 'too concentrated' now. Divide it up into small enough chunks that small guys have the opportunity to buy it.

Sounds OK at first glance, but I wonder if you get into holdout or squatter problems. I don't know enough about the technology to know if a small owner has the ability to create problems for a possibly more sensible large-scale provider.

Maybe that can be solved by requiring winning bidders to actually put what they buy to use. Maybe slarti or hsh or someone else who actually understands this stuff could tell us whether that's a problem or not.

Technically you are obligated to use the frequencies you are licensed for. If another party wants some piece of spectrum you're licensed for but not using, that party can petition to have it taken from you and granted to them if they can show non-use. I don't know if the pending auction will follow this same logic, but that's typically how radio licensing works in my experience.

Now, it could be that conventional cellular networks never, ever operate anywhere near peak capacity; I'd be surprised if that were the case, though. Usually you design to worst case, then add margin.

Slarti, this seems contradictory to me. If you design for worst case and add margin, you should expect the network not to operate near capacity. Or are you saying that "margin" is small and "worst case" is common, therefore you will sometimes be somewhat near peak capacity?

I don't know about cell networks, but I know that traditional telephone trunks are not designed for worst case (e.g. Mother's Day). They design for regularly encountered heavy use and leave you to try a few times to get through on Mother's Day. "We're sorry, but all circuits are busy..."

Think about how you would allocate resources if you were a phone company. You would want your service to be reliable enough to keep customers and get their calls through so you could bill them. But, on the other hand, you wouldn't want to make a capital investment that will be under-utilized. It's an optimization problem. Google the word "Erlang" for a bit of insight.
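For the curious, the "Erlang" optimization the comment points to is captured by the Erlang B formula, which gives the probability that a call is blocked when a given traffic load (in erlangs) is offered to a cell with a fixed number of channels. A minimal sketch using the standard recurrence (the traffic load and channel counts below are made-up illustrations):

```python
def erlang_b(offered_erlangs, channels):
    """Erlang B blocking probability via the standard recurrence:
    B(E, 0) = 1;  B(E, k) = E*B(E, k-1) / (k + E*B(E, k-1))."""
    b = 1.0
    for k in range(1, channels + 1):
        b = offered_erlangs * b / (k + offered_erlangs * b)
    return b

# Hypothetical load: 10 erlangs offered to one cell. Adding channels
# drives the blocking probability down sharply once capacity passes
# the offered load -- which is the capital-vs-reliability tradeoff.
for n in (5, 10, 15, 20):
    print(n, round(erlang_b(10, n), 4))
```

This is the optimization problem in miniature: a carrier picks a target blocking probability (say, a few percent at the busy hour) and provisions just enough channels per cell to meet it, rather than sizing for the Mother's Day worst case.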

This puzzles me: "always" is a terribly long time. I'm seriously doubtful that wires will have a significant advantage over wireless, in voice communications, in 5000 years. Or even in 500 years. Or even in 50 years.

I'm even doubtful about that as regards more complex and higher-bandwidth-requiring data communications. In 50 years, or 60, or 70, or 100, nothing will change? Why are "some technical advances" unlikely, rather than absolutely inevitable?

Always? Always?

I'm seriously doubtful that we're going to be installing interstellar wires in a few hundred years, though I certainly could be wrong, and would be thrilled to see the sight of those wires linking solar systems.

In an earlier life I wrote software that dealt with this stuff. You generally establish a service standard which is something like "5% chance of a call being blocked at peak demand time." Then you put in enough capacity to meet that. The work we did was for private companies, not telephone utilities, mostly, so they may do things differently, but that kind of approach is the general idea.

The principles are very well understood and apply to a broad range of activities.

Slarti, this seems contradictory to me. If you design for worst case and add margin, you should expect the network not to operate near capacity. Or are you saying that "margin" is small and "worst case" is common, therefore you will sometimes be somewhat near peak capacity?

More like, design margin is likely going to be the same for a 700 MHz network as it is for, say, a 4 GHz network. So the network that serves a larger number of callers per cell is going to be closer to the edge of failure. That's how I see it, anyway.

Think about how you would allocate resources if you were a phone company. You would want your service to be reliable enough to keep customers and get their calls through so you could bill them. But, on the other hand, you wouldn't want to make a capital investment that will be under-utilized. It's an optimization problem.

This is pretty much the point I was attempting to convey.

I would say it's cheaper to build a lower-frequency network, not because you need fewer towers, but because you need less of a tower for each cell.

Could be, but it also could be that tower cost is driven more by footprint than height. Anyway, way past the limit of what I usefully know or suspect, here.

Say, has anyone heard about the storm we had here this afternoon? My pool screen, which stayed intact during a direct hit from 100mph winds, is now full of larger-than-designed holes.

I know: circular reference. But yes, the failure probably won't be catastrophic, unless there's something that excess connection requests might do to bring a subnet down. Again: way past any semblance of expertise I might have, now.

Still: the performance of a larger cell in a densely-populated area is likely (almost certainly) to be worse than that of a smaller cell, if the smaller cell was sized properly to begin with.

We had a 20-minute-long hailstorm here, evidently, and my tomatoes are somewhat the worse for wear. I've got a friend whose house I'm keeping an eye on (while he's on vacation); he's got half a dozen Sweet 100 plants that are lying prostrate on the ground.

Sweet 100, for the uninitiated, are the only cherry tomato plants worth growing in this part of the country. Possibly the entire world.

Lower frequencies don't inherently result in longer range due to some fundamental law of physics. In the absence of things like trees and buildings in the way, electromagnetic energy becomes more diffuse at a rate proportional to the inverse square of the distance from the transmitter, independent of frequency. What changes is that a receiver antenna of a given gain shrinks at higher frequencies, capturing less energy. Increase antenna size to capture the same amount of energy and free-space range stays constant. However, the antennas become much more directional. http://en.wikipedia.org/wiki/Free-space_loss
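The free-space behavior described here is usually written as the standard path-loss formula between isotropic antennas; the frequency term in it reflects the shrinking effective aperture of a fixed-gain receive antenna, not any extra attenuation by space itself. A small sketch (the distances and frequencies are just examples):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def fspl_db(distance_m, freq_hz):
    """Free-space path loss between isotropic antennas, in dB:
    FSPL = 20 * log10(4 * pi * d * f / c)."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

# At any fixed distance, moving from 700 MHz to 2.4 GHz costs the same
# ~10.7 dB -- that's the 20*log10(2400/700) frequency term, and it comes
# from the antenna aperture, not from the wave weakening faster.
delta = fspl_db(1000, 2.4e9) - fspl_db(1000, 700e6)
```

Note that this delta is independent of distance; the real-world range advantage of 700 MHz on top of this comes from the obstruction losses described in the next paragraph.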

In real world non-line-of-sight (NLOS) signal paths, higher frequencies tend to suffer higher losses for the part of the signal path where radio signals get absorbed and reflected by trees and building materials. In the suburbs, this effect can be reduced by using relatively tall towers and lots of antenna downtilt to reduce the distance radio signals travel through the trees.

Various smart antenna techniques take advantage of physically small antennas at higher frequencies to achieve better spectral efficiency than would be possible at lower frequencies where arrays of antennas become too big. The tradeoff is cost and complexity go up with multiple transmitters, receivers, and antennas operating in parallel.

"Sounds OK at first glance, but I wonder if you get into holdout or squatter problems."

Educated buyers know how to get around these problems (at least those related to opportunistic holdouts rather than crankiness): options. When the pieces are worth much less than the whole, you don't buy piece by piece. Instead, you buy options to pieces. Holding out then becomes a less profitable strategy than selling.

It's the only way they lay pipelines and railways these days (unless there are "eminent domain" laws in place which force sale at below-market price, of course!).