Pretty Advanced New Stuff from CCG Consulting

Monthly Archives: June 2016

Now that the DC Circuit Court of Appeals has affirmed the net neutrality ruling in its entirety it’s worth considering where the FCC will go next. Until now the agency has been somewhat tentative about strongly enforcing net neutrality, since it didn’t want a negative court opinion to undo a year of regulatory work. But there are a number of issues that the FCC is now likely to tackle.

Zero-Rating. I would think that zero-rating must be high on their list. This is the practice of offering content that doesn’t count against monthly data caps. It most affects customers in the cellular world, where both AT&T and T-Mobile have their own video offerings that don’t count against data caps. With the tiny data caps on wireless broadband there is no doubt that free content is a major incentive for customers to watch it – driving ad revenues to their own carrier.

But zero-rating exists in the landline world as well. Comcast has been offering some of its own web content to its own customers without counting it against their caps. The company claims this is not zero-rating, but from a technical perspective it is. However, now that Comcast has raised its monthly data cap to 1 terabyte, this might not be of much concern to the FCC right now.

Privacy. The FCC has already proposed controversial privacy rules that apply to ISPs. In those rules the FCC proposes to give customers the option to opt out of receiving advertising from their ISP and, more importantly, to opt out of being tracked. This would put the ISPs at a distinct disadvantage compared to edge providers like Facebook or Google, who would still be free to track online usage.

Last year the FCC also started to look at the ‘supercookies’ that Verizon was using to track customers across the web. This privacy ruling (which is now on much more secure footing thanks to the net neutrality order) could end supercookies and many other ways that ISPs might track customer web behavior. Interestingly, both Verizon and AT&T have been bidding to buy Yahoo, and this potential privacy ruling puts a big question mark on how valuable that acquisition might be if customers can all opt out of being tracked. I think Verizon, AT&T and Comcast are all eyeing the gigantic ad revenues being earned by web companies, and this ruling is going to make it a challenge for them to make big headway in that arena.

Lifeline. I think that the net neutrality ruling also makes it easier for the FCC to defend its new plans to provide a subsidy to low-income data customers in the same manner it always has for voice customers. Now that data is also regulated under Title II, it fits right into the existing Lifeline framework.

Data Caps. At some point I expect the FCC to tackle data caps. It’s been made clear by many in the industry that there are no network reasons for these caps, even in the cellular world. The cellular data plans in most of the rest of the world are either unlimited or have extremely high data caps.

The FCC said in establishing net neutrality that it would not regulate broadband rates. And in the strictest sense, tackling data caps would not be rate regulation. The regulatory rate process is one where carriers must justify that rates aren’t too high or too low, and it has always been used, as much as anything, to avoid obvious subsidies.

But data caps – while they can drive a lot of revenue for ISPs – are not strictly a rate issue, and in fact the ISPs jump through a lot of verbal hoops to say that data caps are not about driving revenue. And so I think the FCC can regulate data caps as an unnecessary network practice. It’s been reported recently that AT&T is again selectively enforcing its 150 gigabyte monthly cap, so expect the public outcry to soon reach the FCC again, as happened last year with Comcast.

There is a lot of recent news of technological breakthroughs that ought to have an impact on telecom and broadband.

Faster Microwave Radios. A collaboration of researchers working for ACCESS (Advanced E Band Satellite Link Studies) in Germany has created a long-range microwave link running at 6 Gbps. The technology uses the very high E band frequencies at 71 – 76 GHz, and in testing the researchers were able to create a data path between radios that were 23 miles apart. It’s the very short wavelength at these frequencies that allows for the very fast data rates.

The radios rely on transistor technology from Fraunhofer IAF, a firm that has been involved in several recent high-bandwidth radio technologies. The transmitting radio broadcasts at a high power of 1 watt while the receivers are designed to detect and reconstruct very weak signals.
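Those design choices make sense given the signal attenuation involved. As a back-of-the-envelope illustration (my own arithmetic using the standard free-space path loss formula, not figures from the researchers), a 23-mile link at mid-band sheds on the order of 160 dB, which is why a 1-watt transmitter must be paired with very sensitive receivers:

```python
import math

def fspl_db(distance_km: float, freq_ghz: float) -> float:
    """Free-space path loss in dB, with distance in km and frequency in GHz."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

miles = 23
km = miles * 1.609
# Mid-band of the 71-76 GHz E band; prints roughly 161 dB of loss
print(round(fspl_db(km, 73.5), 1))
```

This ignores rain fade and atmospheric absorption, which are significant at E band, so the real link budget is even tighter.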

When perfected this could provide a lower cost way to provide bandwidth links to remote locations like towns situated in rough terrain or cellular and other radio towers located on mountaintops. This is a significant speed breakthrough for point-to-point microwaves at almost six times the speed of other existing microwave technologies.

Smarter Chip Processing. A team at MIT’s Computer Science and Artificial Intelligence Laboratory has developed a programming technique to make much better use of denser computer chips. In theory, a 64-core chip ought to be nearly 64 times faster than one with a single core, but in practice that has not been the case. Since most computer programs run sequentially (instructions and decision trees are examined one at a time, in order), most programs do not run much faster on denser chips.
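The limit described above is usually formalized as Amdahl’s law (my framing, not the MIT team’s): the sequential fraction of a program caps the speedup no matter how many cores are added. A minimal sketch:

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Amdahl's law: overall speedup is limited by the sequential fraction."""
    sequential = 1.0 - parallel_fraction
    return 1.0 / (sequential + parallel_fraction / cores)

# A program that is only 50% parallelizable gains almost nothing from 64 cores.
print(round(amdahl_speedup(0.50, 64), 2))   # ~1.97x, nowhere near 64x
# Even 95% parallel code tops out well below the core count.
print(round(amdahl_speedup(0.95, 64), 2))   # ~15.42x
```

This is why techniques that raise the parallel fraction (or shrink the code needed to coordinate it) matter more than simply adding cores.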

The team created a new chip design they call Swarm that will speed up parallel processing and also make it easier to write code for denser chips. In early tests, programs run on the Swarm chip have been 3 to 18 times faster while requiring as little as 10% of the code needed for normal processing.

1,000 Processor Chip. And speaking of denser chips, scientists at the University of California at Davis Department of Electrical and Computer Engineering have developed the first chip that contains over 1,000 separate processors. The chip has a maximum computation rate of 1.78 trillion instructions per second.

They are calling it the KiloCore chip, and it’s both energy efficient and the highest clock-rate chip ever developed. The chip uses IBM’s 32 nanometer technology. Each processor can run its own program, or they can be used in parallel. The scientists envision using a programming technique called the single-instruction-multiple-data approach, which breaks applications down into small discrete steps that can be processed simultaneously.
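As a quick sanity check on those figures (my own arithmetic, not from the researchers), the aggregate rate implies each of the 1,000 processors sustains roughly 1.78 billion instructions per second:

```python
# Figures from the article: 1,000 processors, 1.78 trillion instructions/second
total_ips = 1.78e12
cores = 1000

per_core_ips = total_ips / cores
print(f"{per_core_ips / 1e9:.2f} billion instructions per second per core")
```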

Faster Graphene Chips. Finally, US Army-funded researchers at MIT’s Institute for Soldier Nanotechnologies have developed a technology that could theoretically make chips as much as 1 million times faster than today’s. The new technology uses graphene and relies on the phenomenon that graphene can be used to slow light below the speed of electrons. This slowed-down light emits ‘plasmons’ of intense light that create what the scientists call an optic boom (similar to a sonic boom in air).

These plasmons could be used to greatly speed up the transmission speeds within computer chips. They have found that the optic booms can push data through graphene at about 1/300th the speed of light, a big improvement over photons through silicon.

The researchers have been able to create and control the plasmon bursts and are hoping to have a working graphene chip using the technology within two or three years.

YouTube recently caused a big stir in the broadcast world by announcing that it now reaches more Millennials (individuals between 16 and 36) than any broadcast or cable network. Of course, since massive advertising dollars are at stake, the cable networks all pushed back on the claim. And the fact is, nobody knows the real numbers because YouTube is not measured by Nielsen ratings in the same way as broadcast and cable networks.

But one thing is clear – Millennials are abandoning traditional TV in droves. Just this past TV season there was a huge fall-off in Millennial viewers almost across the board, as measured by Nielsen. This not only has a big impact on advertising and on content providers, but it has to be of great concern to anybody who offers a traditional cable TV product.

Nielsen reports that during the 2014-15 TV season there were 19 primetime broadcast shows that drew 1 million or more Millennial viewers. This past season that dropped to 12 shows. And the drop is almost across the board. ABC’s Millennial viewership was down almost 19%. The CW, which targets programming at younger viewers, was down 16%. NBC dropped 10%, Fox dropped over 7% and CBS was down 3%.

Some individual shows lost a lot of support from Millennials. For example, How to Get Away with Murder and Family Guy each lost an average of 700,000 live weekly viewers. Scandal and Once Upon a Time each lost 500,000 live viewers. The trend isn’t just one of Millennials abandoning live viewing – Nielsen also tracks video-on-demand. In 2014-15 the show How to Get Away with Murder had 2.7 million viewers aged 18 – 34 in live-plus-three viewing and 2.8 million in live-plus-seven viewing. In one year those numbers dropped to 1.7 million and 1.9 million – drops of 37% and 32% respectively.
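As a quick check on those percentages (simple arithmetic on the viewer counts above):

```python
def pct_drop(before: float, after: float) -> float:
    """Percentage decline from one season to the next."""
    return (before - after) / before * 100

# How to Get Away with Murder, viewers aged 18-34 in millions, per Nielsen
print(round(pct_drop(2.7, 1.7)))  # live-plus-three: 37
print(round(pct_drop(2.8, 1.9)))  # live-plus-seven: 32
```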

This trend is one of the primary drivers moving advertising away from traditional TV to web TV like YouTube’s Google Preferred. eMarketer reports that 2017 will be the year when web advertising passes TV advertising, predicting $77.4 billion for web advertising compared to $72 billion for TV. After that, eMarketer expects web advertising to skyrocket while TV advertising remains flat, and predicts that by 2020 mobile advertising alone will eclipse TV advertising.

None of these statistics is a good sign for traditional TV networks or for cable TV operators. An entire generation of viewers is tuning out, and the expectation is that Generation Z behind them will have almost no affinity for television. Recent studies suggest that people’s TV viewing habits are largely set by their experience with the medium as children, and the children of Millennials are going online far more than they are watching traditional TV.

This doesn’t mean that watching video content is down. The average number of hours individuals spend watching some kind of video content has grown slightly over the last decade. But that viewing time is now spent watching YouTube, Netflix, Amazon Prime and other non-TV sources of video. And the viewing is rapidly shifting away from TV screens to other devices.

Millennials are an interesting generation. They are old enough to remember the time just before the explosion of technology, but young enough to have adopted new technologies as they came along. They are the generation that has experienced the biggest change in digital technologies over the shortest period of time. But it seems that as they get older they are becoming more like their kids and are abandoning older technologies like sitting in front of a TV.

I’m not sure that cable companies are really going to have any product that attracts the attention of this generation, and certainly not their children in Generation Z. Cable companies are hoping that things like TV Everywhere and skinny bundles will slow people from dropping TV entirely, but even that might not be enough. Broadcast TV is now largely something that is being produced for – and watched by – Baby Boomers. And they aren’t going to be around forever.

The traditional open access business model to serve residential customers has never worked in this country. I am familiar with the financial performance of most of the existing open access networks and from a purely financial perspective they are all failures. A few networks have failed outright like Provo. A few others have been able to generate enough revenues to cover annual operating costs, but most don’t even do that. And from what I’ve seen, none of the existing open access networks have ever been able to generate enough cash to pay anything towards the cost of building the fiber network, leaving the cities that build the network holding the financial bag for the initial investment.

There are a few reasons that this has never worked. First is that open access naturally drives ISPs towards cherry picking. Open access networks operate by charging fees to ISPs to use the network. If an ISP pays the typical $30 per month fee to use the network, they are not going to sell inexpensive broadband to anybody in the community. So when ISPs only sell high-priced products they don’t get enough customers and the city network owner doesn’t collect enough revenue to pay for the network. This has happened to every traditional open access network. None of them have signed up enough customers to pay for the networks and everyone who has built a network using this model ends up heavily subsidizing the open access network.

The other issue is that most cities have had trouble attracting very many quality ISPs. The whole concept of open access is to offer choices to customers. But most of the open access networks in the country only have a few ISPs, and even the ones they attract are often tiny, undercapitalized businesses. Attracting ISPs is so hard that there is one large open access network today that has been reduced over time to having only one residential ISP on the network. That’s not providing much customer choice.

But there are two cities looking at an alternative model. One is the small city of Ammon, Idaho, and the other is San Francisco. Both cities want residents to pay for the basic cost of the network. It’s an interesting idea.

In Ammon a household that wants broadband access will pay a tax levy of $10 to $15 per month and will also pay a utility fee of $16.50 per month. This means that each subscriber will pay $26 to $31 per month for the fiber network – a very similar charge to what is charged to ISPs on other open access networks. The Ammon commitment is voluntary and only those that sign up for broadband will pay the fees.
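Putting the Ammon numbers together (simple arithmetic on the figures above):

```python
levy_low, levy_high = 10.00, 15.00   # voluntary monthly tax levy range
utility_fee = 16.50                  # monthly utility fee

low = levy_low + utility_fee
high = levy_high + utility_fee
# Works out to roughly the $26 - $31 per month range cited above
print(f"${low:.2f} to ${high:.2f} per subscribing household per month")
```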

San Francisco is considering a similar proposal. There, residents would pay a monthly utility fee of $25 and businesses would pay as much as $115. In San Francisco this fee would be mandatory and everybody in the City would be assessed the fee. In an NFL city the fee probably has to be mandatory to assure that the network will be paid for.

Having customers pay a fee to the city takes the pressure off the cherry picking issue. By lowering what ISPs pay there is a lot better chance of having affordable products on the network. And that ought to result in more customers on the network.

But like any idea this one still leaves some open questions. For instance, how does the city make enough money over time to pay for the inevitable replacement of electronics or catastrophic events like storm damage? Or what does a city do if the ISPs don’t do a good job and customers don’t like them? The Ammon plan requires the payment of fees for a very long time, and small businesses like ISPs often don’t have the staying power to last for a long time. How will the business keep up with inflation – will the fees have to increase every year? And what happens if the city doesn’t get or keep enough customers to pay for this – will the fees go up for everybody else or will the city subsidize the network?

In a voluntary system like Ammon I also wonder what the consequences are for homes that change their mind over time. What if somebody has a financial problem and is unable to pay the fees? What happens when they want to sell their home – is this fee a tax lien of sorts? That’s what has happened to homes that buy solar power systems that are paid for over time. And what happens if a new buyer doesn’t want the fiber and doesn’t want to pay the fee? No doubt over time there will be legal issues to figure out.

The challenge to make this work in San Francisco seems much more difficult. It’s not hard to envision lawsuits from citizens who don’t want to pay the fees. And I can imagine a fierce battle with Comcast and the other current ISPs over the legality of a mandatory fiber utility fee. This seems like a concept that could take a decade of court time to resolve.

But the idea of having citizens somehow pay for the fiber network is an interesting one. Irrevocable customer pledges are a revenue stream that can be used to finance fiber construction. It’s hard to know if this concept will work until we see it in action. But it shows how serious cities are becoming to get good broadband. One has to think that if households are willing to sign long-term pledges to pay for fiber that it has to make a difference. I am sure communities all over the country will be watching to see if this works.

I don’t know how typical I am, but I suspect there are a whole lot of people like me when it comes to dealing with customer service. It takes something really drastic in my life for me to want to pick up the phone and talk to a customer service rep. Only a lack of a broadband connection or having no money in my bank account would drive me to call a company. My wife and I have arranged our life to minimize such interactions. For instance, we shop by mail with companies that allow no-questions-asked returns.

The statistics I hear from my clients tell me that there are a lot of customers like me – ones that would do almost anything not to have to talk to a person at a company. And yet, I also hear that most ISPs average something like a call per month per customer. That means for everybody like me who rarely calls there are people that call their ISP multiple times per month.

It’s not like there aren’t things I want to know about. There are times when it would be nice to review my bill or my open balance, but unless there is no alternative I would never call and ask that kind of question. But from what I am told, billing inquiries and outages are the two predominant reasons that people call ISPs. And if a carrier offers cable TV you can add picture quality to that list.

Customer service is expensive. The cost of the labor and the systems that enable communicating with customers is a big expense for most ISPs. Companies have tried various solutions to cut down on the number of customer calls, and a few of them have seen some success. For instance, some ISPs make it easy for a customer to handle billing inquiries or to look at bills online. This is something banks have been pretty good at for a long time. But it’s apparently difficult to train customers to use these kinds of web tools.

Many companies are now experimenting with online customer service reps who offer to chat with you from their website. I don’t know what experiences other people have had with this, but I generally find it even less satisfactory than talking to a real person – mostly due to the limitations of both parties typing back and forth to each other. Unless you are looking for something really specific and easy, communicating by messaging is not very helpful and might even cost a company more labor than talking to somebody live. Even worse, many companies use a lot of pre-programmed scripts for online reps to reduce messaging response times, and those scripts can be frustrating for a customer.

There are a handful of solutions I have seen that offer new tools for making it easier for customers to communicate with an ISP. For instance, NuTEQ has developed an interactive text messaging system that can answer basic questions for customers without needing a live person at the carrier end. Customers can check account balances, report an outage, schedule and monitor the status of a tech visit, or do a number of other tasks without needing to talk to somebody or having to wade through a customer service web site.

But I think the real hope for me is the advent in the near future of customer service bots. Artificial intelligence and voice recognition are getting good enough that bots are going to be able to provide a decent customer service experience. Early attempts at this have been dreadful. Anybody who has tried to change an airline ticket with an airline bot knows that it’s nearly impossible to do anything useful with the current technology.

But everything I read says that we will soon have customer service bots that actually work as well as a person, without the annoying habits of real service reps – putting a caller on hold, losing a caller when supposedly transferring them somewhere else, or trying to upsell a product you don’t want. And there ought to be no hold times since bots are always available. If a bot could quickly answer my question I would have no problem talking to a bot.

But as good as bots are going to be even within a few years, I am waiting for the next step after that. I have been using the Amazon Echo and getting things done by talking to Alexa, the Amazon AI. Alexa still has a lot of the same challenges as Siri and the other AIs, but I am pleasantly surprised at how often I get what I want on the first try. In my ideal world I would tell Alexa what I want and she (it?) would then communicate with the bots, or even the people at a company, to answer my questions. At the speed at which AI technology is improving I don’t think this is too many years away. I may like customer service when my bot can talk to their bot.

One of the biggest issues that’s not talked about much with fiber deployment is getting fiber to older apartment buildings. According to the US Census there are around 19 million housing units in buildings with 5 or more rental units. Statistics from several apartment industry sources estimate that over half of those units were built before 1980 and over 80% before 2000. This means that a large percentage of apartment buildings are not pre-wired for broadband. There are both network and business issues associated with serving apartments that make this part of the fiber business a real challenge.

The network issues are of two types. First is the issue of access to buildings. Building owners have the right to allow or deny access to service providers. Many building owners will already have some sort of contractual or financial arrangement with the incumbent cable company. A few years ago the FCC outlawed a lot of the specific kinds of onerous contracts between the two parties – the cable companies had used deceptive tactics to lock building owners into allowing only them into buildings. But there are still numerous ways for an apartment owner and a cable company to agree to keep other providers out of apartments.

But even should an apartment owner allow a fiber builder in, the cost of wiring older apartment buildings can be prohibitive. When a fiber builder comes to a single family home they are generally free to use the existing coaxial and telephone wiring in the home if they want it. But unless an apartment owner grants exclusive access to a fiber builder, the existing wires are still going to be used by the incumbent cable and telephone companies.

And that means a total rewiring of the building. That can be a nightmare for older buildings. It might mean dealing with asbestos in ceilings and walls. It often means trying to somehow snake fiber through concrete floors and walls or else having to somehow run wiring through open hallways. And it often means disturbing tenants, and coordinating gaining entrance to multiple apartments or condos can be a challenge.

There are also business issues to deal with in apartments. Probably the number one issue is dealing with tenant churn. Almost by definition apartments have a much higher percentage of turnover than single family homes and it can be a real challenge to get and keep a decent penetration rate in apartments. Companies have tried different ideas, such as getting referrals from apartment owners, but the turnover generally is seen as a problem by most overbuilders.

One way to deal with the churn is to make a financial arrangement with the building owner rather than with each tenant. That generally involves paying some sort of commission. That is not a big problem when selling broadband, but the commissions expected on cable TV could easily push the price below the cost of providing the service. There was a time when seeking wholesale cable arrangements was a good business plan, but the rising cost of programming has made it far less attractive.

There are companies that are concentrating on serving apartment units in metropolitan areas. They bring a fiber to a complex and then serve data and cable to every tenant, often built into the rent. One would have to think that most of these deals are being done with newer apartments that have been built with telecom expansion in mind – lots of empty conduit throughout the building or even fiber already in the walls.

But for the normal fiber overbuilder who mostly serves single family homes and small businesses, apartments are mostly seen as an obstacle more than an opportunity. I have numerous clients who have built whole towns except for the apartments and they have yet to find an affordable and profitable business case for doing so. There are some new and interesting ways to more easily wire older buildings, such as running fiber along the ceilings in hallways – but a lot of my clients are not yet convinced there is a long-term profitable option for serving apartments.

I had a conversation today that is the same one I’ve had many times. I was talking to a city that has already built hundreds of miles of fiber to connect municipal buildings. They were shocked to hear that the network they had built had very little value if they wanted to build FTTP to reach all homes and businesses. To do that would require building additional fiber on just about every route they had built in the past.

In order to understand why this is so, you have to understand that there are three very different kinds of fiber networks in the world – long-haul, distribution and last mile. Each kind of network is built in a very different way, which makes it of little use for the other two purposes.

Consider long-haul fiber. This is the fiber built to connect cities and to stretch across the country and the world. The key to an affordable long-haul fiber route is to carry each leg of the network as far as possible without stopping to repeat the signals. So generally long-haul fibers only stop at major POPs in cities, or at repeater sites placed along the way wherever necessary to boost the signal.

I have a client who is sitting on one of the major east-west Interstates in the country. Every major long-haul carrier in the country is passing very near to him with owned or leased capacity on the long-haul route. My client was shocked when not one carrier would come off of the long-haul route to provide him with bandwidth. But it’s very costly to break into long-haul fibers, and every splice creates a little interference, and so long-haul providers are generally extremely reluctant to provide bandwidth anywhere except at established POPs along the fiber.

Distribution fiber is similar to long-haul fiber, but within local fiber networks. Distribution fibers are built to connect very specific points. The networks that cable companies build to get to their neighborhood nodes are a distribution network. So are networks built by telcos to reach DSL cabinets or networks built by cities to reach traffic signals and networks built by school boards to connect schools. These networks were generally built for the specific purpose of reaching those end points.

Distribution fiber routes share the same issue as the long-haul routes and the companies that own them are generally reluctant to break these routes along the way to connect to somebody that the network was not designed for. Certainly distribution fibers have very little use for providing service to many homes or businesses. There are generally not enough pairs of fiber in distribution routes, but more importantly they are not built with access points along the way.

The major characteristic of last mile fiber (and the one that makes it the most expensive) is the presence of many access points – handholes, manholes, splice boxes or other ways to connect to the fiber where it’s needed. Providing these access points every few hundred feet adds a lot of cost to building fiber, and it’s very expensive – and often impossible – to add access points after the fact to existing distribution fiber.

This is why I actually laughed out loud last year when I read about Comcast’s 2 gigabit product that they would sell to people who were close enough to their fiber. The Comcast fiber network is almost entirely a distribution network that connects a central hub or ring to electronic boxes in neighborhoods known as nodes. When Comcast said a customer had to be close to their fiber, they didn’t mean that the fiber had to be close to the home, because distribution fiber passes lots of homes and businesses. They meant that a customer had to live close to a neighborhood node – the only access points on a cable distribution network. Cable nodes are often placed where they aren’t an eyesore for homeowners, and so I laughed because in most places hardly anybody lives close enough to a Comcast node to buy the touted 2 Gig service. In a city of 20,000 like mine I would be surprised if there are more than a dozen or two homes that would qualify for the Comcast service.

The same is true for most of the big ISPs. Verizon has built a lot of last mile fiber with FiOS and now CenturyLink is starting to do the same. But some of the other large companies like AT&T are very happy to talk about how close they are with fiber to homes and businesses without mentioning that the vast majority of that fiber is distribution fiber, which is close to worthless for providing fiber to homes and small businesses. This is not to say that companies like Comcast or AT&T don’t have any last mile fiber. They just don’t have very much of it and it’s generally limited to business parks or to greenfield neighborhoods where they’ve installed fiber instead of copper.

It’s been apparent for a few years that the 2.4 GHz band of WiFi is getting more crowded. The very thing that has made the spectrum so useful – the fact that it allows multiple users to share the spectrum at the same time – is now starting to make the spectrum unusable in a lot of situations.

Earlier this year Apple and Cisco issued a joint paper on best network practices for enterprises and said that “the use of the 2.4 GHz band is not considered suitable for use for any business and/or mission critical enterprise applications.” They recommend that businesses avoid the spectrum and instead use the 5 GHz spectrum band.

There are a number of problems with the spectrum. In 2014 the Wi-Fi Alliance said there were over 10 billion WiFi-enabled devices in the world with 2.3 billion new devices shipping each year. And big plans to use WiFi to connect IoT devices means that the number of new devices is going to continue to grow rapidly.

And while most of the devices sold today can work with both the 2.4 GHz and the 5 GHz spectrum, a huge percentage of devices are set to default to several channels of the 2.4 GHz spectrum. This is done so that the devices will work with older WiFi routers, but it ends up creating a huge pile of demand in only part of the spectrum. Many devices can be reset to other channels or to 5 GHz, but the average user doesn’t know how to make the change.

There is no doubt that the spectrum can get full. I was in St. Petersburg, Florida this past weekend and at one point I saw over twenty WiFi networks, all contending for the spectrum. The standard gives each user on each of these networks a little slice of the available bandwidth, which degrades performance for everyone in the local neighborhood. And in addition to those many networks I am sure there were many other devices trying to use the spectrum. The 2.4 GHz band is also filled with Bluetooth devices and signals from video cameras, and it is one of the primary bands of interference emitted by microwave ovens.
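One way to see why twenty overlapping networks degrade everyone’s service: the 2.4 GHz band has only three non-overlapping 20 MHz channels, and networks on the same channel must share airtime. A toy model (my own simplification – it assumes an 802.11g-era 54 Mbps channel rate and ignores protocol overhead, interference and differing data rates):

```python
def per_network_throughput(channel_rate_mbps: float,
                           networks_on_channel: int) -> float:
    """Toy model: networks sharing one channel split the airtime evenly."""
    return channel_rate_mbps / networks_on_channel

networks = 20     # overlapping networks observed in one spot
channels = 3      # non-overlapping 20 MHz channels in the 2.4 GHz band
busiest = -(-networks // channels)   # ceiling division: at least 7 per channel
print(round(per_network_throughput(54.0, busiest), 1))  # ~7.7 Mbps each, at best
```

In practice contention overhead makes the real per-network share considerably worse than this even split suggests.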

We are an increasingly wireless society. It was only a decade or so ago when people were still wiring new homes with Category 5 cable so that the whole house could get broadband. But we’ve basically dropped the wires in favor of connecting everything through a few channels of WiFi. For those that live in crowded places like apartments, dorms, or businesses, the sheer number of WiFi devices within a small area can be overwhelming.

I’m not sure there is any really good long-term solution. Right now there is a lot less contention in the 5 GHz band, but one can imagine that in less than a decade it will be just as full as the 2.4 GHz spectrum is today. We just started using the 5 GHz spectrum in our home network and saw a noticeable improvement. But soon everybody will be using it as heavily as the 2.4 GHz spectrum. Certainly the FCC can put bandaids on WiFi by opening up new swaths of spectrum for public use. But each new band of spectrum is going to quickly get filled.

The FCC is very aware of the issues with the 2.4 GHz spectrum and several of the Commissioners are pushing for the use of 5.9 GHz spectrum as a new option for public use. But this spectrum, set aside in 1999 as the dedicated short-range communications (DSRC) band, was reserved for smart vehicles to communicate with each other to avoid collisions. Until recently the spectrum has barely been used, but with the rapid growth of driverless cars we are finally going to see big demand for it – and that’s a use we don’t want to muck up with other devices. I, for one, do not want my self-driving car competing for spectrum with smartphones and IoT sensors in order to make sure I don’t hit another car.

The FCC has a big challenge in front of them now because as busy as WiFi is today it could be vastly more in demand decades from now. At some point we may have to face the fact that there is just not enough spectrum that can be used openly by everybody – but when that happens we could stop seeing the amazing growth of technologies and developments that have been enabled by free public spectrum.


Tony Goncalves, the senior VP of strategy and business development for AT&T, said last week that he thinks pay-TV’s experiment with skinny bundles can’t last. He said the economics are not there for it today and that his expectation is that over time there will be pressure on skinny bundles to grow back into fat bundles. What if Goncalves is right and skinny bundles don’t work?

It’s clear that the traditional cable TV model is broken. Programmers seem to have lost their collective minds and are raising the cost of programming more each year to the point where many of my clients have seen several years in a row with programming cost increases over 10%.

And those big cost increases for programming turn directly into big rate increases on their cable products. For most cable companies it takes a 6% to 7% overall annual rate increase on cable just to cover the increased cost of programming. One doesn’t have to do much math to see that cable rates will be over $100 per month in just a few more years of continual rate increases.
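The compounding here is worth spelling out. A quick sketch, assuming a hypothetical $80 current cable rate (my assumption for illustration) and the 7% annual increase from the programming math above:

```python
# Compounding check: how quickly 7% annual increases push a cable bill
# past $100. The $80 starting rate is a hypothetical figure.
rate = 80.00        # assumed current monthly cable rate
increase = 0.07     # annual rate increase driven by programming costs
years = 0

while rate <= 100.00:
    rate *= 1 + increase
    years += 1

print(f"Rate tops $100 after {years} years (${rate:.2f})")
# -> Rate tops $100 after 4 years ($104.86)
```

From a typical bill today, "just a few more years" is exactly right: four annual increases at that pace clears the $100 mark.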

Meanwhile, customers cite the cost of a cable subscription as the single biggest factor that makes them consider alternatives. A lot of households are attracted to the idea of downsizing to a skinny bundle and adding Netflix or Amazon to reduce overall spending while still having decent viewing options.

You can look at the early skinny bundles like Sling TV to see what Goncalves is talking about. They started with a small line-up of some of the most popular channels, but have since added more and more options. It’s now possible to spend almost as much with Sling TV as with a traditional cable subscription while getting far fewer channels. But a lot of customers seem to be finding the skinniest Sling TV option to be good enough. It includes ESPN and some of the more popular channels like the Food Network.

I know a lot of small telcos and cable companies are really hoping that customers like skinny bundles. Their biggest fear is that they continue to lose voice customers and that as customers continue to drop traditional cable that they will be left with only broadband as a product. I advise companies to do a simple test – look to see what your existing data rates would be if your only product was data. Most of them don’t like the answer, which often shows that data rates might have to climb to over $100 per month to keep companies whole.
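The simple test above is really just margin reallocation. Here is a sketch with hypothetical per-customer numbers – every dollar figure below is invented for illustration, not drawn from any real provider:

```python
# Sketch of the standalone-data test: if broadband were the only product,
# it would have to carry the margin the whole bundle carries today.
# All dollar figures are hypothetical per-customer monthly amounts.
revenue = {"data": 60.00, "cable": 95.00, "voice": 35.00}
direct_costs = {"data": 10.00, "cable": 75.00, "voice": 10.00}  # programming, access, etc.

# Margin the company earns today across all three products.
total_margin = sum(revenue[p] - direct_costs[p] for p in revenue)

# A data-only rate that keeps the company whole must cover data's own
# direct costs plus the entire margin the bundle used to provide.
standalone_data_rate = direct_costs["data"] + total_margin

print(f"Standalone data rate needed: ${standalone_data_rate:.2f}")
```

With numbers like these the answer lands above $100 – which is exactly the uncomfortable result most companies see when they run the test.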

And so the hope in the industry is that there will be some decent margin on skinny bundles. Selling skinny bundles would basically recalibrate cable TV as a product. While the costs for providing skinny bundles might grow quickly, starting over with a base rate of $25 or $30 can mean many years of providing affordable options for customers.

I’ve heard that the NCTC is negotiating skinny bundles for the small cable providers. Everybody is hoping there will be several options and that cable operators can make some decent margin on the skinny bundles. If so, I have a number of clients who will be aggressive in moving people from today’s giant packages down to the skinny bundles.

But if Goncalves is right and the math doesn’t work, then I think we can all just watch cable start fading away over the next five years. We are starting to see the same kind of changes in the marketplace with cable that we saw for many years with telephone service. Consumer advocates are now advising people to drop traditional cable. Even Walmart has come out with a package that invites people to drop cable. As more outside forces tell your customers that the big cable packages are a bad idea, the cord cutting movement will gain momentum if there isn’t an alternative product to offer customers.


The federal appeals court for Washington DC just upheld the FCC’s net neutrality order in its entirety. There was a lot of speculation that the court might pick and choose among the order’s many different sections, or that they might like the order but dislike some of the procedural aspects of how it was reached. But while there was one dissenting opinion, the court accepted the whole FCC order without change.

There will be a lot of articles telling you in detail what the court said. But I thought this might be a good time to pause and look to see what net neutrality has meant so far and how it has impacted customers and ISPs.

ISP Investments. Probably the biggest threat we heard from the ISPs is that the net neutrality order would squelch investment in broadband. But it’s hard to see that it’s done so. It’s been clear for years that AT&T and Verizon are looking for ways to walk away from the more costly parts of their copper networks. But Verizon is now building FiOS in Boston after many years of no new fiber construction. And while few believe that AT&T is spending as much money on fiber as they are claiming, they are telling the world that they will be building a lot more fiber. And other large ISPs like CenturyLink are building new fiber at a breakneck pace.

We also see all of the big cable companies talking about their upgrades to DOCSIS 3.1. Earlier this year the CEO of Comcast was asked at the INTX show in Boston where the company had curtailed capital spending and he couldn’t cite an example. Finally, I see small telcos and coops building as much fiber as they can get funded all over the country. So it doesn’t seem like net neutrality has had any negative impact on fiber investments.

Privacy. The FCC has started to pull the ISPs under the same privacy rules for broadband that have been in place for telephone for years. The ISPs obviously don’t like this, but consumers seem to be largely in favor of requiring an ISP to ask for permission before marketing to you or selling your information to others.

The FCC is also now looking at restricting the ways that ISPs can use the data gathered from customers from web activity for marketing purposes.

Data Caps. The FCC has not explicitly made any rulings against data caps, but they’ve made it clear that they don’t like them. This threat (along with a flood of consumer complaints at the FCC) seems to have been enough to get Comcast to raise its data cap from 300 GB per month to 1 TB. It appears that AT&T is now enforcing its data caps and we’ll have to see if the FCC is going to use Title II authority to control the practice. It will be really interesting if the FCC tackles wireless data caps. It has to be an embarrassment for them that the wireless carriers have been able to sell some of the most expensive broadband in the world under their watch.
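For a sense of how big that cap change is, here’s the arithmetic in hours of HD video, assuming roughly 3 GB per hour of HD streaming (a commonly cited figure, and an assumption on my part):

```python
# How many hours of HD streaming each Comcast cap allows per month,
# assuming ~3 GB per hour of HD video (an illustrative figure).
gb_per_hour = 3.0

for cap_gb in (300, 1000):
    hours = cap_gb / gb_per_hour
    print(f"{cap_gb} GB cap = about {hours:.0f} hours of HD video per month")
```

The old cap worked out to barely three hours of HD video a day for a whole household, which explains the flood of complaints.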

Content Bundling and Restrictions. Just as the net neutrality rules were passed there were all sorts of rumors of ISPs making deals with companies like Facebook to bundle their content with broadband in ways that would have given those companies priority access to customers. That practice quickly disappeared from the landline broadband business, but there are still several cases of providers using zero-rating to give their own content priority over other content. My guess is that this court ruling is going to give the FCC the justification to go after such practices.

It’s almost certain that the big ISPs will appeal this ruling to the Supreme Court. But appealing a ruling that has already been affirmed on appeal is a hard thing to win, and the Supreme Court would have to decide that the DC appeals court made a major error in its findings before they would even accept the case, let alone overturn the ruling. I think this court victory gives the FCC the go-ahead to fully implement the net neutrality order.