So my joke to satellite operators and insurers at Satellite 2018 was that they’d better watch out for an increase in launch risk, if Elon is going to suck up the best talent at SpaceX for his BFR project. Or perhaps some of those people will simply decide to leave the company, like SpaceX’s director of software engineering.

As Musk continues serenely on with his SpaceX focus, at Tesla things have been going from bad to worse this week, with Tesla’s heads of engineering and production writing a (conveniently leaked) memo on how they planned to free up workers for the Model 3 line in order to achieve the “incredible victory” of producing 300 Model 3s in a day. No one seems to have remarked that back in September 2016, a near identical memo came from Elon Musk personally (and was even leaked to the same reporter), which seems to confirm that Musk has stepped back from direct involvement in Tesla’s production woes.

That’s all in sharp contrast to Musk’s camping expedition on the roof of the Gigafactory last fall, and raises the question of how long Musk will remain absorbed in SpaceX affairs, and how he will divide his time between the two businesses in future. Certainly he will be critical to Tesla’s upcoming fundraising efforts, and I’m sure Tesla’s investors thought they would be getting a lot more of Musk’s attention in exchange for his latest $2.6B pay package.

In fact, ironically enough, the current outcry has made it easier for the new disclosure-based regime to operate effectively: consumer advocates will be watching out for perceived violations of net neutrality principles and if they can drum up sufficient outrage about unfairness or antitrust violations, then the FTC will be forced to take action. However, it’s hard to see technical violations which (at least in the short term) benefit consumers, such as zero rating or content bundling, prompting much of an outcry. And even supporters of net neutrality agree that the big tech companies are likely to benefit from the new rules.

But what I find interesting here is the long political game. It’s amusing to see net neutrality proponents accusing Pai of being a shill for Verizon and the cable companies. While many past FCC commissioners have simply gone through the revolving door to make millions in the industry, Chairman Pai has the talent and ambition to achieve much bigger goals.

Given how easy it will be to portray net neutrality opponents as “fake news”, Pai has a clear political platform to run on and it wouldn’t be in the least bit surprising to me to see him ultimately figuring as part of the Presidential or Vice Presidential race in 2028 or 2032. In that context it’s intriguing to consider what other hot button political issues might come within the overall ambit of the FCC. One area is freeing up spectrum, where there are possibilities for a big bipartisan win with the satellite C-band downlink.

However, an even bigger issue (as highlighted in my last post) is that Pai has already shifted to raising questions about whether you can trust Silicon Valley companies, such as Google, Facebook and Twitter. And as noted above, many people think these companies are likely to get even stronger after the abolition of net neutrality rules. So a winning populist theme in the latter part of this administration could well be to threaten to cut these companies down to size, potentially with the helpful side effect of limiting their influence (which next time around will more likely reflect these companies’ preference for Democrats) in the 2020 election.

As a result, I think Silicon Valley now has to be concerned not just about losing the favors it has been granted on a regular basis for the last 20 years, but a rising hostility within government to the big tech companies and their role both in the economy and in political dialog.

09.14.16

I’ve been thinking a lot about the failure of Google Fiber and whether there are any wider lessons about whether Silicon Valley will ever be able to compete effectively as an owner and builder of telecom networks, or indeed in other large scale capex intensive businesses (such as cars).

One conclusion I’ve come to is that there may be a fundamental incompatibility between the planning horizon (and deployment capabilities) of Silicon Valley companies and what is needed to be a successful operator of national or multinational telecom networks (whether fiber, wireless or satellite). The image above is taken from Facebook’s so-called “Little Red Book” and summarizes pretty well what I’ve experienced living and working in Silicon Valley, namely that the prevailing attitude is “There is no point having a 5-year plan in this industry” and instead you should think just about what you will achieve in the next 6 months and where you want to be in 30 years.

In software that makes a lot of sense – you can iterate fast and solve problems incrementally, and scaling up (at least nowadays) is relatively easy if you can find and keep the customers. In contrast, building a telecom network (or a new car design) is at least two or three years’ effort, and by the time you are fully rolled out in the market, it’s four or five years since you started. So when you start, you need to have a pretty good plan for what you’ll be delivering (and how it is going to be operated at scale) five years down the road.

For an existing wireless operator or car company that planning and implementation is obviously helped by years of experience in operating networks or manufacturing facilities at scale. But a new entrant has to learn all of that from scratch. And it’s not like technology is transforming the job of deploying celltowers, trenching fiber or running a vehicle manufacturing line. Software might change the service that the end customer is buying, but it’s crazy to think that “if tech companies build cars and car companies hire developers, the former will win.”

Of course self-driving cars will drastically change what people do with vehicles in the future. But those vehicles still have to be made on a production line, efficiently and with high quality. Mobile has changed the world dramatically over the last 30 years, but AT&T, Deutsche Telekom, BT, etc. are still around and have absorbed some of the most successful wireless startups.

Moreover, Silicon Valley companies simply don’t spend capex on anything like the scale of telcos or car companies. In 2015 Alphabet/Google’s total capex for all of its activities worldwide was $9.9B and Facebook’s capex was only $2.5B (surprisingly, at least to me, Amazon only spent $4.6B, though Apple spent $11.2B and anticipated spending $15B in 2016).

But the US wireless industry alone invested $32B in capex in 2015, which is more than Facebook, Google, Amazon and Apple put together, and that excludes the $40B+ spent on spectrum in the AWS-3 auction last year. In the car industry, GM and Ford each spent more than $7B on capex in 2015. So in round numbers, total wireless industry and car industry capex on a global basis are both of order $100B+ every year, a sum that simply can’t be matched by Silicon Valley.
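The comparison above is easy to verify directly from the capex figures as quoted (a quick sanity-check sketch, using the 2015 numbers in $B):

```python
# 2015 capex figures quoted above, in $B
silicon_valley = {"Alphabet/Google": 9.9, "Facebook": 2.5,
                  "Amazon": 4.6, "Apple": 11.2}
tech_total = sum(silicon_valley.values())   # 28.2
us_wireless_capex = 32.0                    # US wireless industry alone

print(f"Big four tech capex: ${tech_total:.1f}B")
print(f"US wireless capex:   ${us_wireless_capex:.0f}B")
# The US wireless industry alone outspends all four companies combined,
# before even counting the $40B+ spent on AWS-3 spectrum.
```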

Undoubtedly some other Silicon Valley companies will end up trying to build their own self-driving cars. But after the (continuing) struggles of Tesla to ramp up, it seems more likely that most startups will end up partnering with or selling their technology to existing manufacturers instead. And similarly, in the telecom world, does anyone believe Google (or any other Silicon Valley company) is going to build a new national wireless broadband network that is competitive with AT&T, Verizon and Comcast?

It seems to me that about the best we could hope for is for Google to push forward the commercialization of new shared access/low cost frequency bands like 3.5GHz (e.g. as part of an MVNO deal with an existing operator) so that the wireless industry no longer has to spend as much on spectrum in the future and can deliver more data at lower cost.

If Facebook and Google are now simply going to come up with clever technology to reduce network costs (rather than building rival networks) or even just act as a source of incremental demand for mobile data services, then that will be good for mobile operators. Those operators may just be “dumb pipes,” but realistically, despite Verizon’s (flailing) efforts, that’s pretty much all they could hope for anyway.

02.04.16

Last June, I pointed out that CTIA had taken the odd (but hardly surprising) decision to run away from its own data on US mobile data traffic growth in 2014, which showed only a 26% increase, in favor of an error-strewn Brattle Group paper that used Cisco’s February 2015 mobile VNI estimates to supposedly show the “FCC’s mobile data growth projection in 2010 was remarkably accurate.”

Now that Cisco has released its February 2016 mobile VNI report, CTIA’s attempt to bury its own data has backfired, because Cisco has retrospectively revised its prior estimates of North American mobile data traffic downwards by more than one third. Cisco’s estimate of end-of-year North American mobile data traffic in 2014 has been reduced from 562PB/mo in the Feb 2015 mobile VNI to only 360PB/mo in the new release, which makes it almost the same as the estimate of 2013 traffic in last year’s publication (345PB/mo). Similarly the estimate of 2015 traffic has been reduced from 849PB/mo in last year’s publication to only 557PB/mo this year, a loss of more than an entire year’s growth.
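The scale of the revision can be checked directly from the figures above (the percentage cut is the same whatever units are used):

```python
# Cisco's North American mobile traffic estimates, per month, end of year
feb2015 = {2014: 562, 2015: 849}   # prior VNI estimates
feb2016 = {2014: 360, 2015: 557}   # revised VNI estimates

cuts = {year: 1 - feb2016[year] / feb2015[year] for year in (2014, 2015)}
for year, cut in cuts.items():
    print(f"{year} estimate revised down by {cut:.0%}")
# Both years were cut by more than one third.
```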

The chart below highlights the impact of this massive revision to Cisco’s estimates, showing that when combined with previous revisions, the latest estimate of traffic in 2014 is less than half the figure projected by Cisco back in February 2010 (which was used by the FCC as one of the traffic estimates in its infamous October 2010 paper).

Ironically, Cisco’s latest revision actually brings its estimate of US mobile data traffic for 2014 (323PB/mo at the end of the year – from the online tool; the North America figure above includes Canada) below CTIA’s own total of 4.1EB for the year as a whole (i.e. an average of 340PB/mo), meaning that in fact CTIA would have been better off using its own data. Of course that wouldn’t have had the desired effect of making the FCC feel good about its hopelessly flawed 2010 paper and justifying CTIA’s own attempt to disinter this methodology and use it to claim a spectrum crisis is still imminent.

UPDATE (2/4): Oddly, in a blog post Cisco asserts that the revision was so that the “2014 baseline volume adjustment now aligns with CTIA’s figures.” That’s actually wrong: the Cisco estimate represents an end-of-year figure (the first line of the report states “Global mobile data traffic reached 3.7 exabytes per month at the end of 2015”) and given that the monthly traffic is growing (one assumes relatively consistently) through the year, it is implausible for the final month of the year to have less traffic than the average for the year as a whole. However, it is certainly very ironic that Cisco has chosen to (try and) align its traffic figures with CTIA at precisely the time when CTIA wants to bury its own numbers in favor of more optimistic growth estimates.

Specifically, Cisco’s latest VNI forecast shows that the FCC’s estimate that traffic would grow by 35 times between 2009 and 2014 (a reduction from Cisco’s own estimate of 48 times growth) was actually nearly 60% too high, because according to the latest Cisco data, North American traffic grew by only 22 times between 2009 and 2014 (unless of course Cisco has also made some undisclosed retrospective revision to its 2009 data).

As usual, Cisco has not provided any detailed explanation of this dramatic change in its numbers, though part of the cause seems to be a further increase in the assumption of offloaded traffic in 2014 and 2015: the latest forecast claims offloaded traffic on a global basis will grow from 51% in 2015 to 55% in 2020, whereas the previous version claimed growth from 45% in 2014 to 54% in 2019. However, this certainly can’t account for the scale of the change and Cisco must have revised many other individual elements of its traffic estimates in order to reduce its estimates by such a large amount.

Ironically enough this new report confirms that the FCC was totally wrong in 2010, because the total amount of spectrum in use at the end of 2014 was only 348MHz, not the 822MHz that the FCC projected. Despite this clear demonstration of how ludicrous the original projections were, Brattle reuses the same flawed methodology, which ignores factors such as that new deployment is of cells for capacity not for coverage, and so the ability to support traffic growth is in no way proportional to the total number of cellsites in the US.

Now Verizon’s Q2 results, announced today, highlight another fundamental flaw in the methodology used by Brattle, in terms of the projected gains in spectral efficiency. Brattle assume that the gain in spectral efficiency between 2014 and 2019 is based on the total amount of traffic being carried on 3G, 4G LTE and LTE+ technologies, so with 72% of US traffic in 2014 already carried on LTE, there is relatively little scope for further gains.

This is completely the wrong way to account for the data carrying capacity of a certain number of MHz of spectrum, since it is the share of spectrum used in each technology that is the critical factor, not the share of traffic. Verizon highlighted that only 40% of its spectrum is used for LTE at present, while 60% is still deployed for 2G and 3G, despite the fact that 87% of traffic is now carried on LTE. Of course once that 60% of 2G and 3G spectrum is repurposed to LTE, Verizon’s network capacity will increase dramatically without any additional spectrum being needed.

Brattle’s methodology would suggest that moving the rest of Verizon’s traffic to LTE would only represent a gain of 5% in capacity (assuming an improvement from 0.72bps/Hz to 1.12bps/Hz) but in fact moving all of Verizon’s spectrum to LTE would produce a gain of 27% in network capacity (and an even bigger improvement once LTE Advanced is considered). Adjusting for this error in the methodology reduces the need for more spectrum very sharply, and once it is considered that the incremental cellsites will be deployed to add capacity, not coverage, the need for additional spectrum above the current 645.5MHz is completely eliminated.
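A minimal sketch of the two calculations, assuming the 0.72bps/Hz and 1.12bps/Hz figures above are the blended spectral efficiencies of 2G/3G and LTE respectively, together with Verizon’s reported 87% traffic share and 40% spectrum share on LTE:

```python
# Assumed blended spectral efficiencies (bps/Hz)
eff_3g, eff_lte = 0.72, 1.12

# Brattle-style weighting by TRAFFIC share (87% of Verizon traffic on LTE):
blended_by_traffic = 0.87 * eff_lte + 0.13 * eff_3g
gain_brattle = eff_lte / blended_by_traffic - 1
print(f"Traffic-share weighting:  {gain_brattle:.0%} capacity gain")

# Weighting by SPECTRUM share (only 40% of Verizon spectrum on LTE):
blended_by_spectrum = 0.40 * eff_lte + 0.60 * eff_3g
gain_actual = eff_lte / blended_by_spectrum - 1
print(f"Spectrum-share weighting: {gain_actual:.0%} capacity gain")
```

Traffic-share weighting yields roughly a 5% gain, while spectrum-share weighting yields roughly 27%, because it is idle spectrum capacity, not already-carried traffic, that determines the headroom from refarming.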

06.22.15

Back in October 2012, despite their misleading press release, CTIA’s own data indicated that there had been a significant slowdown in data traffic growth and confirmed that the emperor/FCC Chairman had no clothes when talking about the non-existent spectrum crisis. Now it seems CTIA is at it again, releasing an error-strewn paper today on how the FCC’s October 2010 forecasts of mobile data traffic have supposedly proven to be “remarkably accurate.”

Instead CTIA is praising the “solid analytical foundation” of the FCC’s October 2010 paper, which was recognized at the time, by myself and others, to be fundamentally flawed. So perhaps it’s not so ironic that the CTIA’s new paper mischaracterizes the data that the FCC used, stating that the forecasts “were remarkably accurate: In 2010, the FCC’s growth rate projections predicted mobile data traffic of 562 petabytes (PBs) each month by 2014; the actual amount was 563 PBs per month.”

Firstly, the FCC did not actually state an explicit projection of mobile data traffic, instead giving an assessment of growth from 2009 to 2014, as the (simple arithmetic) average of growth projections by Cisco, Yankee and Coda (use of an arithmetic average in itself is erroneous in this context, a geometric average of multipliers should be used instead).
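To illustrate why the choice of average matters, here is a toy comparison using hypothetical multipliers (only Cisco’s 47x projection is known from the discussion here; the Yankee and Coda figures below are invented for illustration):

```python
from math import prod

# Hypothetical 2009->2014 growth multipliers from three forecasters
multipliers = [47.0, 30.0, 28.0]

arith = sum(multipliers) / len(multipliers)
geom = prod(multipliers) ** (1 / len(multipliers))
print(f"Arithmetic mean: {arith:.1f}x, geometric mean: {geom:.1f}x")
# By the AM-GM inequality the arithmetic mean is always >= the geometric
# mean, so averaging multipliers arithmetically systematically overstates
# the implied compound growth rate.
```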

Secondly, the FCC was projecting US mobile data traffic, not North American data traffic, which is the source of the quoted 563PB per month (which is taken from Cisco’s February 2015 mobile VNI report). We can see the difference, because the February 2010 Cisco report (available here) projects growth for North America from 16.022PB/mo in 2009 to 773.361PB/mo in 2014, a multiplier of 4827%, whereas the FCC paper quotes Cisco growth projections of 4722% from 2009 to 2014. (The reason for the difference is that growth in Canada was expected to be faster than in the US, because Canada was expected to partially catch up with the US in mobile data traffic per user over the period.)
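The North America multiplier can be checked directly from the Feb 2010 Cisco figures quoted above:

```python
# Cisco Feb 2010 mobile VNI projections for North America (PB/month)
na_2009, na_2014 = 16.022, 773.361

na_mult = na_2014 / na_2009
print(f"Projected North America growth 2009-2014: {na_mult:.0%}")
# ~4827%, versus the 4722% US-only figure the FCC quoted -- the gap
# reflects faster expected growth in Canada.
```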

If CTIA had bothered to look at Cisco’s mobile VNI tool, which gives data for major countries, it could have easily found out that Cisco estimates US mobile data traffic grew by 32 times between 2009 and 2014, not 35 times as the FCC forecast, let alone the 47 times that Cisco forecast back in February 2010.

Moreover, CTIA completely fails to mention that Cisco’s figure for 2014 (which according to the VNI tool is 531.7PB/mo for the US, rather than the 562.5PB/mo for North America that CTIA quotes) is completely different from (and far higher than) CTIA’s own data, which is based on “aggregated data from companies serving 97.8 percent of all estimated wireless subscriber connections” and so should obviously be far more accurate than Cisco’s estimates.

However, CTIA is instead running away from its own data, stating in a footnote to the new paper that:

“Note that participation in CTIA’s annual survey is voluntary and thus does not yield a 100 percent response rate from all service providers. No company can be compelled to participate, and otherwise participating companies can choose not to respond to specific questions. While the survey captures data from carriers serving a significant percentage of wireless subscribers, the results reflect a sample of the total wireless industry, and does not purport to capture nor reflect all wireless providers’ traffic metrics. CTIA does not adjust the reported traffic figures to account for non-responses.”

Compare that disclaimer to the report itself, which notes that “the survey has an excellent response rate” (of 97.8%) and that it is adjusted for non-responses (at least so far as subscribers are concerned):

“Because not all systems do respond, CTIA develops an estimate of total wireless connections. The estimate is developed by determining the identity and character of non-respondents and their markets (e.g., RSA/MSA or equivalent-market designation, age of system, market population), and using surrogate penetration and growth rates applicable to similar, known systems to derive probable subscribership. These numbers are then summed with the reported subscriber connection numbers to reach the total estimated figures.”

CTIA’s wireless industry survey states that total US mobile data traffic was 4061PB in 2014, equating to an average of 338.4PB/mo over the year. Even allowing for the fact that Cisco estimates end-of-year traffic, not year averages, it is hard to see how the CTIA number for Dec 2014 could be more than 400PB/mo, some 25% less than Cisco.
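One way to sanity-check the 400PB/mo ceiling, assuming for illustration that monthly traffic grew at a constant rate through the year and using CTIA’s own reported ~26% growth for 2014:

```python
# Infer December 2014 traffic from CTIA's annual total, assuming a
# constant month-on-month growth rate (a simplifying assumption).
annual_total = 4061.0           # PB for 2014, CTIA survey
annual_growth = 1.26            # assumed Dec-over-Dec multiplier
r = annual_growth ** (1 / 12)   # implied month-on-month growth rate

# Geometric series: jan * (r**12 - 1) / (r - 1) == annual_total
jan = annual_total * (r - 1) / (r ** 12 - 1)
dec = jan * r ** 11
print(f"Implied December 2014 traffic: {dec:.0f} PB/mo")
# Comfortably below 400PB/mo, and some 15%+ below Cisco's 531.7PB/mo.
```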

If we instead compare growth estimated by CTIA’s own surveys (which only provide data traffic statistics back to 2010), then the four year growth from 388PB in 2010 to 4061PB in 2014 is a multiplier of 10.47 times, whereas the FCC model is a multiplier of 13.86 times (3506%/253%) and Cisco’s projection is a multiplier of 19.51 times (4722%/242%).
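The three multipliers quoted above can be verified directly:

```python
# Four-year growth comparison, 2010 to 2014
ctia_2010, ctia_2014 = 388.0, 4061.0   # PB/year, CTIA's own surveys

ctia_mult = ctia_2014 / ctia_2010      # actual reported growth
fcc_mult = 3506 / 253                  # FCC model growth indices
cisco_mult = 4722 / 242                # Cisco Feb 2010 projection indices

print(f"CTIA actual: {ctia_mult:.2f}x")
print(f"FCC model:   {fcc_mult:.2f}x")
print(f"Cisco 2010:  {cisco_mult:.2f}x")
```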

Thus by any rational and undistorted analysis, the FCC’s mobile data traffic growth projections have proven to be overstated. Likely reasons for this include the increasing utilization of WiFi (which was dismissed by the FCC paper, stating that “the rollout of such network architecture strategies has been slow to date, and its effects are unclear”) and the effect of dilution, as late adopters of smartphones use far fewer apps and less data than early adopters.

Nevertheless, what the data on traffic growth does confirm is that the FCC’s estimate of a 275MHz spectrum deficit by 2014 was utter nonsense. Network performance has far outpaced expectations, despite cellsite growth being far slower than predicted (3.9% compared to the 7% assumed in the FCC model) and large amounts of spectrum remaining unused: if we simply look at the Brattle paper prepared for CTIA last month, it’s easy to calculate that of the 645.5MHz of licensed spectrum identified by Brattle, at least 260MHz remains undeployed (12MHz of unpaired 700MHz, 10MHz of H-block, 65MHz of AWS-3, 40MHz of AWS-4, 20MHz of WCS, and all but around 40MHz of the 156.5MHz of BRS/EBS).
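Summing the itemized bands confirms the “at least 260MHz” figure:

```python
# Undeployed licensed spectrum itemized above (MHz)
undeployed = {
    "unpaired 700MHz": 12,
    "H-block": 10,
    "AWS-3": 65,
    "AWS-4": 40,
    "WCS": 20,
    "BRS/EBS (156.5 less ~40 deployed)": 156.5 - 40,
}

total_undeployed = sum(undeployed.values())
print(f"Undeployed: {total_undeployed:.1f} MHz of 645.5 MHz licensed")
print(f"Deployed:   {645.5 - total_undeployed:.1f} MHz")
# Deployed spectrum comes out under 400MHz, versus the FCC's 822MHz forecast.
```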

Thus in 2014, the US didn’t require 822MHz of licensed spectrum as the FCC forecast (which would have increased to 861MHz if the FCC model was corrected to the supposed traffic growth of 32x, as estimated by Cisco, and the actual number of 298,055 cellsites, as reported by CTIA), but instead, as CTIA proclaims, US mobile operators enabled “Americans [to] enjoy the best wireless experience in the world” with less than 400MHz of actual deployed spectrum.

06.11.15

I’m told that after a fair amount of difficulty and a month or two of delay, Greg Wyler has now successfully secured commitments of about $500M to start building the OneWeb system, and he will announce the contract signing with Airbus at the Paris Air Show next week. The next step will be to seek as much as $1.7B in export credit financing from COFACE to support the project with an objective of closing that deal by the end of 2015.

This comes despite Elon Musk’s best efforts to derail the project, culminating in an FCC filing on May 29. That filing proposes the launch of 2 Ku-band test satellites in late 2016, which would presumably be aimed at ensuring OneWeb is forced to share the spectrum with SpaceX, as I predicted back in March.

Clearly Musk is not happy about the situation, since I’m told he fired Barry Matsumori, SpaceX’s head of sales and business development, a couple of weeks ago, after a disagreement over whether the SpaceX LEO project was attracting a sufficiently high public profile.

Most observers appear to think that Musk’s actions are primarily motivated by animus towards Wyler and question whether SpaceX is truly committed to building a satellite network (which is amplified by the half-baked explanation of the project that Musk gave in his Seattle speech in January, and the fact that I’m told SpaceX’s Seattle office is still little more than a sign in front of an empty building).

Google also demonstrated what appears to be a lack of enthusiasm for satellite, despite having invested $900M in SpaceX earlier this year, when its lawyers at Harris, Wiltshire & Grannis asked the FCC on May 20 to include a proposal for WRC-15 that consideration should be given to sharing all of the spectrum from 6GHz to 30GHz (including the Ku and Ka-bands) with balloons and drones (see pp66-81 of this document). Needless to say, this last minute proposal has met with furious opposition from the satellite industry.

However, one unreported but intriguing aspect of SpaceX’s application is the use of a large (5m) high power S-band antenna operating in the 2180-2200MHz spectrum band for communication with the satellites. Of course that spectrum is controlled by DISH, after its purchase of DBSD and TerreStar, and so it’s interesting to wonder if SpaceX has sought permission from DISH to use that band, and if so, what interest Charlie Ergen might have in the SpaceX project.

Nevertheless, it looks like Wyler is going to win the initial skirmish, though there are still many rounds to play out in this fight. In particular, if Musk truly believes that the LEO project, and building satellites in general, are really going to be a source of profits to support his visions of traveling to Mars (as described in Ashlee Vance’s fascinating biography, which I highly recommend) then he may well invest considerable resources in pursuing this effort in the future.

If that’s the case, then the first to get satellites into space will have a strong position to argue to the FCC that they should select which part of the Ku-band spectrum they will use, and so Wyler will also have to develop one or more test satellites in the very near future. Fortunately for him, Airbus’s SSTL subsidiary is very well placed to develop such a satellite, and I’d expect a race to deploy in the latter part of 2016, with SpaceX’s main challenge being to get their satellite working, and OneWeb’s challenge being to secure a co-passenger launch slot in a very constrained launch environment.

09.17.14

The report I wrote jointly with LS Telcom on “Mobile spectrum requirements estimates: getting the inputs right” has now been published and is available here. This report critiques the ITU Speculator model, concluding that (as I noted earlier), the Speculator model traffic assumptions vastly exceed any reasonable traffic density that can be expected in 2020, even at events such as the Superbowl or World Cup Final.