Pretty Advanced New Stuff from CCG Consulting


Monthly Archives: August 2018

NCTA, the lobbying group for the big cable companies, filed a pleading with the Federal Trade Commission (FTC) asking the agency not to get involved in regulating the broadband industry. When the FCC killed net neutrality, Chairman Ajit Pai promised that it was okay for the FCC to step away from broadband regulation since the FTC was going to take over much of the regulatory role. Now, a month after the net neutrality repeal went into effect, we have the big cable ISPs arguing that the FTC should have a limited role in regulating broadband. The NCTA comments were filed in a docket that asks how the FTC should handle the regulatory role handed to it by the FCC.

Pai’s claim was weak from the outset because of the nature of the way that the FTC regulates. They basically pursue corporations of all kinds that violate federal trade rules or who abuse the general public. For example, the FTC went after AT&T for throttling customers who had purchased unlimited data plans. However, FTC rulings don’t carry the same weight as FCC orders. Rulings are specific to the company under investigation. Rulings might lead other companies to modify their behavior, but an FTC order doesn’t create a legal precedent that automatically applies to all carriers. In contrast, FCC rulings can be made to apply to the whole industry and rulings can change the regulations for every ISP.

The NCTA petition asks the FTC not to pursue regulatory complaints against ISPs. For example, they argue that the agency shouldn’t single out ISPs for unique regulatory burdens, but should instead pursue the large Internet companies like Facebook and Google. The NCTA claims that market forces will prevent bad behavior by ISPs and will punish a carrier that abuses its customers. They claim there is sufficient competition for cable broadband, such as from DSL, that customers will leave an ISP that is behaving poorly. It takes some nerve to make that argument in a world where the cable companies have demolished DSL and where cable is a virtual monopoly in most markets. We have a long history in this industry that says otherwise – even when ISPs were regulated by the FCC there were long laundry lists of ways that carriers mistreated their customers.

One of the more interesting requests is that the ISPs want the FTC to preempt state and local rules that try to regulate them. I am sure this is due to the vigorous activity currently underway at the state level to create net neutrality and privacy rules. They want the FTC to issue guidelines to state Attorneys General and state consumer protection agencies reminding them that broadband is regulated only at the federal level. It’s an interesting argument to make after the FCC has punted on regulating broadband and while this filing asks the FTC to do the same. The ISPs want the FTC to leave them alone while acting as the watchdog that stops others from trying to regulate the industry.

I think this pleading was inevitable since the big ISPs are trying to take full advantage of the FCC walking away from broadband regulation. The ISPs view this as an opportunity to kill regulation everywhere. At best the FTC would be a weak regulator of broadband, but the ISPs don’t want any scrutiny of the way they treat their customers.

The history of telecom regulation has always come in what I call waves. Over time the amount of regulation builds up to a point where companies can make a valid claim of being over-regulated. Over-regulation can then be relieved either by Congress or by a business-friendly FCC that loosens regulatory constraints. But when regulations get too lax, the big carriers inevitably break enough rules to attract a new round of regulation.

We are certainly hitting the bottom of a trough in the regulatory wave as regulations are being eliminated or ignored. Over time the large monopolies in the industry will do what monopolies always do: they will take advantage of this period of light regulation, abuse customers in various ways, and invite new regulations. My bet is that customer privacy will be the issue that starts the climb back to the top of the regulatory wave. The ISPs’ argument that market forces will force good behavior on their part is pretty laughable to anybody who has watched the big carriers over the years.


It seems like one of the big digital platforms is in the news almost daily – and not in a positive way. Yet there has been almost no talk in the US of trying to regulate digital platforms like Facebook and Google. Europe has taken some tiny steps, but regulation there is still in its infancy. In this country the only existing regulations that apply to the big digital platforms are antitrust laws, some weak privacy rules, and the general corporate oversight from the Federal Trade Commission that protects against consumer fraud.

Any time there has been the slightest suggestion of regulating these companies we instantly hear the cry that the Internet must be free and unfettered. This argument harkens back to the early days of the Internet when the Internet was a budding industry and seems irrelevant now that these are some of the biggest corporations in the world that hold huge power in our daily lives.

For example, small businesses can thrive or die due to a change in an algorithm on the Google search engine. Search results are so important to businesses that the billion-dollar SEO industry has grown to help companies manipulate their search results. We’ve recently witnessed the damage that can be done by nefarious parties on platforms like Facebook to influence voting or to shape public opinion around almost any issue.

Our existing weak regulations are of little use in trying to control the behavior of these big companies. For example, in Europe there have been numerous penalties levied against Google for monopoly practices, but the fines haven’t been very effective in controlling Google’s behavior. In this country our primary anti-trust tool is to break up monopolies – an extreme remedy that doesn’t make much sense for the Google search engine or Facebook.

Regulating digital platforms would not be easy because one of the key concepts of regulation is understanding a business well enough to craft sensible rules that can throttle abuses. We generally regulate monopolies and the regulatory rules are intended to protect the public from the worst consequences of monopoly use. It’s not hard to make a case that both Facebook and Google are near-monopolies – but it’s not easy to figure out what we would do to regulate them in any sensible way.

For example, the primary regulation we have for electric companies is to control the profits of the monopolies in order to keep rates affordable. In the airline industry we regulate issues of safety to force the airlines to do the needed maintenance on planes. It’s hard to imagine how to regulate something like a search engine in the same manner when a slight change in a search engine algorithm can have big economic consequences across a wide range of industries. It doesn’t seem possible to somehow regulate the fairness of a web search.

Regulating social media platforms would be even harder. The FCC has occasionally in the past been required by Congress to try to regulate morality issues – such as monitoring bad language or nudity on the public airwaves. Most of the attempts by the FCC to follow these congressional mandates were ineffective and often embarrassing for the agency. Social platforms like Facebook are already struggling to define ways to remove bad actors from their platform and it’s hard to think that government intervention in that process can do much more than to inject politics into an already volatile situation.

One of the problems with trying to regulate digital platforms is defining who they are. The FCC today has separate rules that can be used to regulate telecommunications carriers and media companies. How do you define a digital platform? Facebook, LinkedIn and Snapchat are all social media – they share some characteristics but also have wide differences. Just defining what needs to be regulated is difficult, if not impossible. For example, all of the social media platforms gain much of their value from user-generated content. Would that mean that a site like WordPress that houses this blog is a social media company?

Any regulations would have to start in Congress because there is no other way for a federal agency to be given the authority to regulate the digital platforms. It’s not hard to imagine that any effort out of Congress would concentrate on the wrong issues, much like the rules that made the FCC the monitor of bad language. I know as a user of the digital platforms that I would like to see some regulation in the areas of privacy and use of user data – but beyond that, regulating these companies is a huge challenge.


Earlier this month, in WC Docket No. 17-84 and WT Docket No. 17-79, the FCC released new rules for one-touch make-ready (OTMR) for connecting wires to poles. These new rules allow a new attacher to a pole to use a single contractor to perform simple make-ready work, which the order defines as work where “existing attachments in the communications space of a pole could be transferred without any reasonable expectation of a service outage or facility damage and does not require splicing of any existing communication attachment or relocation of an existing wireless attachment.” The new rules will go into effect on February 1, 2019, or sooner – 30 days after a notice announcing approval by the Office of Management and Budget is published in the Federal Register.

The OTMR rules don’t apply to more complex make-ready work where poles need to be replaced or where existing cables must be cut and spliced to accomplish the needed changes. The new rules don’t cover wireless attachments, so this is not an order that lets wireless companies place devices anywhere on poles at their choice (something the wireless companies are lobbying for). These rules also don’t apply to any work done above the power space at the top of poles.

For those not familiar with make-ready, a new attacher must pay to rearrange existing wires if there is not enough space on the poles for the new wire to meet safety standards. In most cases this can be accomplished by shifting existing wires higher or lower on the pole to create the needed clearance.

Possibly the most interesting part of the new order is that the FCC says that a new attacher is not responsible for the cost of fixing problems that are due to past attachers being out of compliance with safety codes. The reality is that most make-ready work is due to past attachers not spacing their wires according to code. This FCC language opens the door for new attachers to argue that some of the cost of make-ready should be charged to past attachers. Anybody who wants to make such claims needs to photograph and document existing violations before doing the work. I can foresee big fights over this issue after the make-ready work is completed.

These rules end some of the practices that have made it time consuming and costly to put a new wire on a pole. Existing rules have allowed for sequential make-ready, where each existing utility can send out a crew to do the work, adding extra time as each separate crew coordinates the work, as well as adding to the cost since the new attacher has to pay for the multiple crews.

The new rules don’t apply everywhere and to all pole owners. There is still an exception for poles owned by municipalities and by electric cooperatives. The rules also don’t automatically apply to any state that has its own set of pole attachment rules. There are currently 22 states that have adopted at least some of their own pole attachment rules and the states still have the option to modify the new FCC rules. Expect delays in many states past the February 1 effective date as states deliberate on the issue. Interestingly, there are also two cities, Louisville, KY and Nashville, TN, that have already adopted their own version of OTMR and the order does not say if local governments have this right.

The order considerably shortens the time required to perform simple make-ready. There are many nuances in the new timelines that make them hard to condense into a paragraph, but they are considerably shorter than under the previous FCC rules. The FCC also shortened the timelines for some of the steps in complex make-ready. Unfortunately, in many cases it’s the complex make-ready timelines that will still impact a project, because a few poles needing complex make-ready can delay the implementation of a new fiber route.

The order encourages pole owners to publish a list of contractors that are qualified to do make-ready work. The new rules also define the criteria for selecting a contractor in the case where the pole owner doesn’t specify one. Pole owners can veto a contractor suggested by the new attacher, but in doing so they must suggest a qualified contractor they find acceptable. Not addressed in the order is the situation where a utility insists on doing all the work itself.

As a side note, this order also prohibits state and local governments from imposing moratoria on new wireless pole attachments. The ruling doesn’t stop states from imposing new rules, but it prohibits them from blocking wireless carriers from getting access to poles.

Overall this is a positive order for anybody that wants to add fiber to existing poles. It simplifies and speeds up the pole attachment process, at least for simple attachments. It should significantly hold down pole attachment costs by allowing one contractor to do all of the needed work rather than allowing each utility to bill for moving their own wires. There are still some flaws with the order. For instance, although the time frames have been reduced, the pole attachment process can still take a long time when complex pole attachment work is needed. But overall this is a much needed improvement in the process that has caused most of the delays in deploying new fiber.


There is new lingo being used by the large telecom companies that will be foreign to the rest of the industry – containers. In the simplest definition, a container is a relatively small, self-contained set of software that performs one function. The big carriers are migrating to software systems built on containers for several reasons, the primary one being the migration to software-defined networks.

A good example of a container is a software application for a cellular company that can communicate with the sensors used in crop farming. The cellular carrier would install this particular container at cell sites where there is a need to communicate with field sensors, but would not install it at the many cell sites where such communication isn’t needed.

The advantage to the cellular carrier is that they have simplified their software deployment. A rural cell site will have a different set of containers than a small cell site deployed near a tourist destination or a cell site deployed in a busy urban business district.

The benefits of this are easy to understand. Consider the software that runs our PCs. The manufacturers fill the machine with every application a user might ever want, yet most of us use perhaps 10% of what comes pre-installed. The downside to having so many software components is that it takes a long time to upgrade the software on a PC – my Mac laptop has at times taken an hour to install an operating system update.

In a software-defined network, the ideal configuration is to move as much of the software as possible to the edge devices – in this example, to the cell site. Today every cell site must hold and process all of the software needed by any cell site anywhere. That’s both costly, in terms of the computing power needed at the cell site, and inefficient, in that cell sites are running applications that will never be used. In a containerized network each cell site runs only the modules needed locally.

The cellular carrier can make an update to the farm sensor container without interfering with the other software at a cell site. That adds safety – if something goes wrong with that update, only the farm sensor network will experience a problem instead of possibly pulling down the whole network of cell sites. One of the biggest fears of operating a software defined network is that an upgrade that goes wrong could pull down the entire network. Upgrades made to specific containers are much safer, from a network engineering perspective, and if something goes wrong in an upgrade the cellular carrier can quickly revert to the back-up for the specific container to reestablish service.
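As a rough illustration of the idea, here is a minimal sketch of per-site container manifests with a revert-on-failure upgrade. All site names, container names and version numbers are hypothetical – this shows the concept, not any carrier’s actual system.

```python
# Hypothetical sketch: each cell site runs only the container modules
# it needs, and a failed container upgrade reverts without touching
# the rest of the site's software.

SITE_MANIFESTS = {
    "rural-farm-01":   ["core-radio", "farm-sensor-gateway"],
    "tourist-cell-07": ["core-radio", "high-density-scheduler"],
    "urban-biz-12":    ["core-radio", "high-density-scheduler", "enterprise-vpn"],
}

# Currently deployed version of each container, per site.
deployed = {site: {c: "1.0" for c in containers}
            for site, containers in SITE_MANIFESTS.items()}

def upgrade_container(site, container, new_version, health_check):
    """Upgrade one container at one site; revert if the check fails."""
    old_version = deployed[site][container]
    deployed[site][container] = new_version
    if not health_check(site, container):
        # Only this container is affected -- revert to the backup
        # version instead of pulling down the whole site.
        deployed[site][container] = old_version
        return False
    return True

# A farm-sensor upgrade that fails its health check reverts cleanly,
# leaving every other container at the site untouched.
ok = upgrade_container("rural-farm-01", "farm-sensor-gateway", "1.1",
                       health_check=lambda site, container: False)
print(ok, deployed["rural-farm-01"])
```

The point of the sketch is the blast radius: the rollback touches one container at one site, which is the safety property that makes software-defined networks practical to upgrade.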

The migration to containers makes sense for a big telecom carrier. Each carrier can develop unique containers that define its specific product set. In the past most carriers bought off-the-shelf applications like voicemail – but with containers they can more easily customize products to operate however they wish.

Like most things that are good for the big carriers, there is a long-term danger from containers for the rest of us. Over time the big carriers will develop their own containers and processes that are unique to them. They’ll create much of this software in-house and the container software won’t be made available to others. This means that the big companies can offer products and features that won’t be readily available to smaller carriers.

In the past the products and features available to smaller ISPs were the byproduct of product research done by telecom vendors for the big ISPs. Vendors developed software for cellular switches, voice switches, routers, settop boxes, ONTs and all of the other hardware used in the industry. Vendors could justify spending money on software development because of expected sales to the large ISPs. However, as the big ISPs migrate to a world where they buy empty boxes and develop their own container software, there won’t be a financial incentive for hardware vendors to put effort into software applications. Companies like Cisco are already adapting to this change, and it’s going to trickle through the whole industry over the next few years.

This is just one more thing that will make it a little harder in future years to compete with the big ISPs. Perhaps smaller ISPs can band together somehow and develop their own product software, but it’s another industry trend that will give the big ISPs an advantage over the rest of us.


Ronan Dunne, an EVP and President of Verizon Wireless recently made Verizon’s case for aggressively pursuing 5G. In this blog I want to examine the two claims based upon improved latency – gaming and stock trading.

The 5G specification sets a goal of near-zero latency for the connection from the wireless device to the cellular tower. We’ll have to wait to see whether that can be achieved, but obviously the many engineers who worked on the 5G specification think it’s possible. It makes sense from a physics perspective – a radio signal through the air travels, for all practical purposes, at the speed of light (there is a minuscule amount of slowing from interaction with air molecules). That makes a signal through the air slightly faster than one through fiber, since light slows down when passing through glass – adding roughly a quarter of a millisecond of one-way delay for every hundred miles of fiber optic cable compared to a through-the-air path.
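The fiber-versus-air comparison is easy to check with back-of-the-envelope arithmetic. Assuming a typical fiber refractive index of about 1.47 (the exact value varies by cable type), light in glass covers 100 miles in about 0.79 ms versus about 0.54 ms through the air – an extra delay of roughly 0.25 ms per hundred miles:

```python
# One-way propagation delay over 100 miles, through air vs. fiber.
# Assumes a typical fiber refractive index of ~1.47.

C_KM_PER_S = 299_792       # speed of light in a vacuum (~air)
FIBER_INDEX = 1.47         # light in glass travels at c / index
KM = 100 * 1.609           # 100 miles, ~160.9 km

air_ms = KM / C_KM_PER_S * 1000
fiber_ms = KM / (C_KM_PER_S / FIBER_INDEX) * 1000

print(f"air:   {air_ms:.2f} ms")             # ~0.54 ms
print(f"fiber: {fiber_ms:.2f} ms")           # ~0.79 ms
print(f"extra: {fiber_ms - air_ms:.2f} ms")  # ~0.25 ms per 100 miles
```

Either way, these are fractions of a millisecond – tiny compared to the delays added by the electronics along the path, which is the point made below.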

This means that a 5G signal will have a slight latency advantage over FTTP – but only for the first leg of the connection from a customer. A 5G wireless signal almost immediately hits a fiber network at a tower or a small cell site in a neighborhood, and from that point forward it experiences the same latency as an all-fiber connection.

Most of the latency in a fiber network comes from devices that process the data – routers, switches and repeaters. Each such device in a network adds some delay to the signal – and that starts with the first device, be it a cellphone or a computer. In practical terms, when comparing 5G and FTTP the network with the fewest hops and fewest devices between a customer and the internet will have the lowest latency – a 5G network might or might not be faster than an FTTP network in the same neighborhood.

5G does have a latency advantage over non-fiber technologies, but it ought to be about the same advantage enjoyed by an FTTP network. Most FTTP networks have latency in the 10-millisecond range (one hundredth of a second). Cable HFC networks have latency in the range of 25-30 ms, DSL latency ranges from 40-70 ms, and satellite broadband connections run from 100-500 ms.

Verizon’s claim for improving the gaming or stock trading connection also implies that the 5G network will have superior overall performance. That brings in another factor which we generally call jitter. Jitter is the overall interference in a network that is caused by congestion. Any network can have high or low jitter depending upon the amount of traffic the operator is trying to shove through it. A network that is oversubscribed with too many end users will have higher jitter and will slow down – this is true for all technologies. I’ve had clients with first generation BPON fiber networks that had huge amounts of jitter before they upgraded to new FTTP technology, so fiber (or 5G) alone doesn’t mean superior performance.

The bottom line is that a 5G network might or might not have an overall advantage compared to a fiber network in the same neighborhood. The 5G network might have a slight advantage on the first connection from the end user, but that also assumes that cellphones are more efficient than PCs. From that point forward, the network with the fewest hops to the Internet, as well as the network with the least congestion, will be faster – and that will vary case by case, neighborhood by neighborhood, when comparing 5G and FTTP.

Verizon is claiming that the improved latency will improve gaming and stock trading. That’s certainly true where 5G competes against a cable company network. But any trader who really cares about making a trade a millisecond faster is already on a fiber connection, probably one that sits close to a major Internet POP. Such traders engage in computerized trading where a person does not intervene in the trade decision. For any stock trade that involves a human, an extra few thousandths of a second in executing the trade is irrelevant, since the human decision process is far slower than that (for someone like me these decisions can be measured in weeks!).

Gaming is more interesting. I see Verizon’s advantage for gaming in making game devices mobile. If 5G broadband is affordable (not a given), then a 5G connection allows a game box to be used anywhere there is power. I think that will be a huge hit with the mostly-younger gaming community. And, since most homes buy broadband from the cable company, the lower latency of 5G ought to be attractive to a gamer now using a cable network, assuming the 5G network has adequate upload speeds and low jitter. Gamers who want a fiber-like experience will likely pony up for a 5G gaming connection if it’s priced right.


The research firm eMarketer is predicting that cord cutting is accelerating this year at a pace faster than predicted by the industry. They’ve done surveys and studies and conclude that 187 million people will watch Pay TV this year (satellite or cable TV), a drop of 3.8% in viewership.

The drop in 2017 was 3.4%, but the big cable companies like Comcast and Charter hoped they could slow cord cutting this year by offering Netflix and other alternative programming on their platforms. Perhaps that is working to a degree, since cable companies are losing customers at a slower pace than the satellite providers or the big telcos delivering cable over DSL, like AT&T.

eMarketer looks at the statistics in a different way than most others and predicts the people who will watch the various services – which is different than counting households. I suppose that some members of a household could stop watching traditional Pay TV while the home continues to pay for a subscription. They are predicting that the total number of people who will stop watching Pay TV will rise to 33 million by the end of 2018, up from 25 million just a year ago.

As you would expect, if Pay TV viewership is dropping, then viewership of online services ought to be increasing. eMarketer predicts the number of viewers of the major OTT services for 2018 as follows: YouTube – 192 M; Netflix – 147.5 M; Amazon – 88.7 M; Hulu – 55 M; HBO Now – 17.1 M; and Sling TV – 6.8 M. eMarketer says that in 2018, 52% of homes watch both Pay TV and an online service.

We know that Netflix’s growth has slowed and they added only 670,000 net customers in the US in the second quarter of this year and only 4.5 million worldwide. It appears, however, that the other online services are all growing at a faster pace as people are diversifying to watch more than just Netflix.

eMarketer credits a lot of the exodus of Pay TV subscribers to the proliferation of original content. In 2010 there were 216 original TV series produced: 113 from the broadcast networks, 74 from cable-only networks, 25 from premium movie channels and 4 from online providers like Netflix. By 2017 that number had grown to an astonishing 487 original series: 153 from the broadcast networks, 175 from cable-only networks, 42 from premium movie channels and 117 from online providers. A large percentage of the 487 series are now available online to anybody willing to track them down. These figures also ignore the proliferation of other content available online such as movies, documentaries and comedy specials.
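The category counts above can be cross-checked with quick arithmetic – the per-source numbers sum to the cited totals, and the premium-plus-online slice is what a basic cable subscriber misses:

```python
# Cross-check of the original-series counts cited in the text.
series = {
    2010: {"broadcast": 113, "cable": 74, "premium": 25, "online": 4},
    2017: {"broadcast": 153, "cable": 175, "premium": 42, "online": 117},
}

for year, sources in series.items():
    print(year, sum(sources.values()))   # 2010 -> 216, 2017 -> 487

# Series a basic cable subscription misses (premium + online, 2017):
missed = series[2017]["premium"] + series[2017]["online"]
print(missed)                            # 159
```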

The proliferation of content from multiple sources is making it harder to rely on just one source of content these days. Somebody with a basic cable subscription is missing out on the 159 series produced by the premium movie channels and the online providers. Somebody cutting the cord and only using Netflix would be missing out on even more content. Some of the content generated by the broadcast and cable networks is available for free online, with commercials from places like Hulu. If a cord cutter wants to have access to a lot of the available content they’ll have to subscribe to multiple services – perhaps Netflix plus something like Hulu or Sling TV.

The eMarketer survey didn’t ask about the affordability of traditional cable – a factor that is at the top of the list in other surveys that have studied cord cutting. This particular survey concentrated on what people are watching without delving into the issues that drive somebody to cut the cord.

I don’t know about my readers, but I’m a cord cutter and I’ve already reached the point of content saturation. I probably have fifty items on my Netflix watchlist, and it would take more than a year to watch it all, even if I never add anything new. I have a similar list on Amazon Prime and a smaller list on Hulu. I never sit down to watch content without more options than I know what to do with. I have the luxury these days of watching content that fits my mood and available time – a real luxury compared to even a decade ago.


The FCC recently issued a Notice of Inquiry (NOI) seeking input on next year’s broadband progress report. As usual, and perhaps every year into the future, this annual exercise stirs up the industry as we fight over the regulatory definition of broadband. That definition matters because Congress has tasked the FCC with making sure that everybody in the country has access to broadband. Today broadband is defined as 25 Mbps downstream and 3 Mbps upstream; households that can’t buy that speed are considered underserved if they can get some broadband and unserved if they have no broadband option at all.
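The served/underserved/unserved distinction described above boils down to a simple test against the definition thresholds. This is an illustrative sketch only – the FCC’s actual data collection and mapping is far more involved:

```python
# Sketch of the classification logic: a household is judged by the
# fastest connection it can actually buy, against the current 25/3
# Mbps definition. The thresholds change if the FCC raises them.

BROADBAND_DOWN_MBPS = 25
BROADBAND_UP_MBPS = 3

def classify(best_down_mbps, best_up_mbps):
    """Classify a household by its best available connection."""
    if best_down_mbps <= 0:
        return "unserved"       # no broadband option at all
    if (best_down_mbps >= BROADBAND_DOWN_MBPS
            and best_up_mbps >= BROADBAND_UP_MBPS):
        return "served"
    return "underserved"        # some broadband, but below 25/3

print(classify(130, 10))  # served (e.g. an upgraded cable connection)
print(classify(10, 1))    # underserved (CAF II-level rural DSL)
print(classify(0, 0))     # unserved
```

Raising `BROADBAND_DOWN_MBPS` to 100 in this sketch is exactly the move that would reclassify millions of homes from served to underserved, which is the dilemma discussed below.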

The NOI proposes keeping the 25/3 Mbps definition of broadband for another year. The FCC knows that if it raises the definition, millions of homes will suddenly be considered underserved. However, the FCC is bowing to pressure and this year will gather data to see how many households have access to 50/5 Mbps broadband.

It was only a year ago that this FCC set off a firestorm by suggesting a reversion to the old definition of 10/1 Mbps. That change would have instantly classified millions of rural homes as having adequate broadband. The public outcry was immediate, and the FCC dropped the idea. For last year’s report the FCC also considered counting mobile broadband as a substitute for landline broadband – another move that would have reclassified millions into the served category. The FCC is not making that same recommendation this year – but it is gathering data on the number of people who have access to cellular data speeds of 5/1 Mbps and 10/3 Mbps.

The FCC has also been tasked by Congress with getting faster broadband to schools. This year’s NOI recommends keeping the current FCC goal for all schools to immediately have access to 100 Mbps per 1,000 students, with a longer-term goal of 1 Gbps per 1,000 students.

Commissioner Jessica Rosenworcel has suggested in the current NOI that the official definition of broadband be increased to 100 Mbps download. She argues that our low target for defining broadband is why “the United States is not even close to leading the world” in broadband.

I think Commissioner Rosenworcel is on to something. The gap between the fastest and slowest broadband speeds is widening. This year both Comcast and Charter are unilaterally raising broadband speeds to customers. Charter kicked up the speed at my house from 60 Mbps to 130 Mbps a few weeks ago. AT&T is building fiber to millions of customers. Other fiber overbuilders continue to invest in new fiber construction.

The cable companies decided a decade ago that their best strategy was to stay ahead of the speed curve. This is at least the third round of unilateral speed increases that I can remember. A customer who purchased and kept a 20 Mbps connection a decade ago is probably now receiving over 100 Mbps for that same connection. One way to interpret Commissioner Rosenworcel’s suggestion is that the definition of broadband should grow over time to meet the market reality. If Charter and Comcast both think that their 50 million urban customers need speeds of at least 100 Mbps, then that ought to become the definition of broadband.

However, a definition of broadband at 100 Mbps creates a major dilemma for the FCC. The only two widely deployed technologies that can achieve that kind of speed today are fiber and cable company hybrid fiber/coaxial networks. As I wrote just a few days ago, there are new DSL upgrades available that can deliver up to 300 Mbps for 3,000 – 4,000 feet from a DSL hub – but none of the US telcos are pursuing the technology. Fixed wireless technology can deliver 100 Mbps – but only to customers living close to a wireless tower.

If the FCC was to adopt a definition of broadband at 100 Mbps, they would be finally recognizing that the fixes for rural broadband they have been funding are totally inadequate. They spent billions in the CAF II program to bring rural broadband up to 10/1 Mbps broadband. They are getting ready to give out a few more billion in the CAF II reverse auction which will do the same, except for a few grant recipients that use the money to help fund fiber.

By law, the FCC would have to undertake programs to bring rural broadband up to a newly adopted 100 Mbps standard. That would mean finding many billions of dollars somewhere. I don’t see this FCC being bold enough to do that – they seem determined to ignore the issue hoping it will go away.

This issue can only be delayed for a few more years. The country is still on the curve where the need for broadband at households doubles every three or so years. As the broadband usage in urban homes grows to fill the faster pipes being supplied by the cable companies it will become more apparent each year that the definition of broadband is a lot faster than the FCC wants to acknowledge.
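That doubling curve is easy to project. As a rough illustration – assuming demand doubles exactly every three years, which is only an approximation of the trend – today’s 25 Mbps definition falls behind very quickly:

```python
# Rough projection of household broadband demand, assuming it
# doubles every three years (an approximation of the trend).

def projected_need(base_mbps, years, doubling_period=3):
    """Projected demand after `years`, doubling every `doubling_period` years."""
    return base_mbps * 2 ** (years / doubling_period)

for years in (3, 6, 9):
    print(years, round(projected_need(25, years)))
# 3 -> 50, 6 -> 100, 9 -> 200
```

On that curve, a 25 Mbps definition set today looks like a 100 Mbps definition six years out – roughly the gap between the current standard and what the cable companies are already delivering.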

For the second time in three years the municipally owned and operated ISP in Chattanooga got the highest ranking in the annual Consumer Reports survey of ISPs. They were the only ISP in the survey that received a positive ranking for value. This is a testament to how much consumers prefer independent ISPs over big ISPs like Comcast, Charter, AT&T and Verizon.

Chattanooga’s EPB makes it into the ranking due to their size, but there are numerous other small ISPs offering an alternative to the big companies. There are about 150 other municipal ISPs around the country providing residential ISP service and many more serving their local business communities. There are numerous cooperatives that provide broadband – many of these are historically telecom cooperatives with the ranks recently growing as electric cooperatives become ISPs. There are hundreds of independent telephone companies serving smaller markets. There is also a growing industry of small commercial ISPs who are building fiber or rural wireless networks.

As somebody who works with small ISPs every day, I find it easy to understand why consumers love them.

Real customer service. People dread having to call the big ISPs. They know that the person who answers the phone will be reading from a script and that every call turns into a sales pitch. It’s this negative customer service experience that drives consumers to rank big ISPs at the bottom among all corporations. Small ISPs tend to have genuine, non-scripted service reps who can accurately answer questions and instantly resolve issues.

Transparent pricing. Most big ISPs have raised rates significantly in recent years through the introduction of hidden fees and charges. People find it annoying to see broadband advertised in their market at $49.99 when they know they are paying far more than that. Smaller ISPs mostly bill what they advertise and don’t disguise their actual prices. I’m sure that’s one of the reasons consumers in Chattanooga feel they are getting value for their money.

No negotiations on prices. Big ISPs make customers call every few years and negotiate for lower prices. It’s obvious why they do this: many customers won’t run the gauntlet and end up paying high prices just to avoid making that call. The big ISPs probably think customers feel like they got a bargain after each negotiation – but customers almost universally hate the process. The ISP triple play and cellular service are the only two common commodities that put consumers through such a dreadful process. Most small ISPs charge their published prices, and consumers love the simplicity and honesty.

Quality networks. The big ISPs clearly take shortcuts on maintenance, and it shows. Big ISPs have more frequent network outages – my broadband connection from Charter goes out for short periods at least a dozen times a week. Small ISPs work hard to have quality networks. They do the needed routine maintenance and spend money to create redundancy to limit and shorten outages.

Responsive repair times. The big ISPs, particularly in smaller markets, can take seemingly forever to fix problems. Most of us now rely on broadband in our daily routine, and nobody wants to wait days to see a repair technician. Most of my small ISP clients won’t end a work day until open customer problems are resolved.

Fast resolution of problems. Big ISPs are not good at dealing with things like billing disputes. If a customer can’t resolve something on a first call with a big ISP they have to start all over from the beginning the next time they call. Small ISPs tend to resolve issues quickly and efficiently.

Privacy. The big ISPs are all harvesting data from customers to use for advertising and to monetize in general. ISPs, by definition, can see most of what we do online. Small ISPs don’t track customers and are not collecting or selling their data.

Small ISPs are local. Their staff lives and works in the area and they know where a customer lives when they call. It’s common when calling a big ISP to be talking to somebody in another state or country who has no idea about the local network. Small ISPs keep profits in the community and generally are a big part of local civic life. Big ISPs might swoop in occasionally and give a big check to a local charity, but then disappear again for many years.

Big ISPs are really the Devil. Not really, but when I see how people rank the big ISPs – below banks, insurance companies, airlines and even the IRS – I might be onto something!

It’s been obvious for over a decade that the big telcos have given up on DSL. AT&T was the last big telco to bite on upgraded DSL. The company sold millions of U-verse connections that combined two pairs of copper using VDSL or ADSL2 to deliver download speeds of up to 50 Mbps. Those speeds were only available to customers who lived within 3,000 – 4,000 feet of a DSL hub, but for a company that owns all of the copper, that was a lot of potential customers.

Other big telcos didn’t bite on paired-copper DSL, and communities served by Verizon, CenturyLink, Frontier and others are still served by older DSL technologies that deliver speeds of 15 Mbps or less.

The companies that manufacture DSL equipment have continued to do research and have developed faster DSL technologies. The first breakthrough was G.fast, which is capable of delivering speeds approaching a gigabit, but only over short distances of up to a few hundred feet. The manufacturers hoped the technology would be used to build fiber-to-the-curb networks, but that economic model never made much sense. However, G.fast is finally seeing use as a way to distribute high bandwidth inside apartment buildings or larger businesses using the existing telephone copper, without having to rewire the building.

Several companies like AdTran and Huawei have continued to improve DSL and, through a technique known as supervectoring, have been able to goose speeds as high as 300 Mbps. The technology achieves improved bandwidth in two ways. First, it uses higher frequencies on the telephone copper. DSL works somewhat like an HFC cable network in that it uses RF techniques to create the data transmission waves inside the captive wiring network. Early generations of DSL used frequencies up to 8 MHz, while the newer technologies climb as high as 35 MHz. The supervectoring aspect comes from techniques that cancel interference at those higher frequencies.
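
A back-of-envelope Shannon-capacity calculation shows roughly what widening the band from 8 MHz to 35 MHz buys. The 30 dB average SNR below is a hypothetical figure chosen only for illustration – real copper SNR varies sharply with distance and frequency:

```python
import math

def shannon_capacity_mbps(bandwidth_hz, snr_db):
    """Back-of-envelope Shannon limit: C = B * log2(1 + SNR)."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear) / 1e6

# Assuming a hypothetical 30 dB average SNR across the whole band:
print(round(shannon_capacity_mbps(8e6, 30)))   # older DSL band  → 80
print(round(shannon_capacity_mbps(35e6, 30)))  # supervectoring  → 349
```

Crude as it is, the sketch lands in the same neighborhood as the vendors’ claims – roughly 100 Mbps from the old band and roughly 300 Mbps from the wider one.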

In the US this new technology is largely without takers. AdTran posted a blog saying there doesn’t seem to be a US market for faster DSL. That’s going to be news to the millions of homes still using slower DSL. The telcos could upgrade speeds to as much as 300 Mbps for probably no more than a few hundred dollars per customer. This would provide another decade of robust competition from telephone copper. While 300 Mbps is not as fast as the gigabit speeds now offered by cable companies using DOCSIS 3.1, it’s as fast as the cable broadband still sold to most homes.

This new generation of DSL technology could enable faster broadband to millions of homes. I’ve visited dozens of small towns around the country, many without a cable competitor, where DSL speeds are still 6 Mbps or less. The big telcos have milked customers for years to pay for the old DSL and are not willing to invest some of those earnings back into another technology upgrade. To me this is largely a consequence of deregulating the broadband industry – there are no regulators pushing the big telcos to do the right thing. Upgrading would be the right thing, since the telcos could retain millions of DSL customers for another decade – it would be a smart economic decision.

There is not a lot of telephone copper around the globe – it was only widely deployed in North America and Europe. In Germany, Deutsche Telekom (DT) is deploying supervectoring DSL to about 160,000 homes this quarter. The technical press there is lambasting the company for not making the leap straight to fiber. DT counters that it can deliver the bandwidth households need today. The new deployment is driving DT to build fiber deeper into neighborhoods, and the company expects to make the transition to all-fiber within a decade. Households willing to buy bandwidth between 100 Mbps and 300 Mbps are not going to care what technology is used to deliver it.

There is one set of companies willing to use the faster DSL in this country. There are still CLECs that layer DSL onto telco copper, and I’ve written about several of them over the last few months. I don’t know of any that are specifically ready to use the technology, but I’m sure they’ve all considered it. They are all leery of making new investments in DSL upgrades since the FCC is considering eliminating the requirement that telcos provide access to their copper wires. That would be a bad regulatory decision, since there are companies willing to deliver a faster alternative to cable broadband using the telco copper lines. It’s obvious that none of the big telcos are going to consider the faster DSL, and we shouldn’t shut the door on companies willing to make the investments.

This is the second in a series of blogs that look at Verizon’s list of ways the company thinks it can monetize 5G. The first blog looked at medical applications. Today I look at the potential for using 5G in retail.

Verizon’s retail vision is interesting. They picture stores that offer an individualized shopping experience and that use augmented and virtual reality to communicate with and sell to customers. This is not a new idea – using 3D graphics and holograms in stores was one of the first future visions touted by augmented reality developers. We are just now on the verge of having technology that could make this possible.

Verizon obviously envisions using 5G bandwidth to enable these applications. Stores will want the flexibility to put displays anywhere in the store and change them at will, so doing this wirelessly would be a lot cheaper than stringing fiber all over the store. Streaming holograms requires a lot of bandwidth, so this seems like a natural application for millimeter wave spectrum. Our current cellular frequencies are not sufficient to support holograms.

The new 5G standard calls for the use of millimeter wave spectrum to deliver gigabit data paths wirelessly indoors. These frequencies don’t pass through walls, so transmitters in the ceilings could be used to beam down to displays anywhere in a store.

Verizon envisions companies using Verizon-licensed spectrum. However, the FCC has already set aside several bands of millimeter wave spectrum for public use, and there will soon be a whole industry developing millimeter wave routers for indoor wireless networks – likely the same companies that today make WiFi routers. I have a hard time seeing how Verizon will have any market advantage over the many other companies that will be developing millimeter wave networks using public spectrum.

The personalized shopping experience is a different matter. Verizon is envisioning a network that identifies customers as they enter the store, either through facial recognition, through cell phone signals, or perhaps because customers voluntarily use an app that identifies them. Verizon envisions using the 5G network tied into big data applications to enable stores to craft a unique shopping experience for each customer. For regular customers that would mean using a profile based on their past shopping history, and for everybody else it means using a profile cobbled together from the big data all of the ISPs are gathering on everybody.

Verizon and the other big ISPs have invested in subsidiaries that can crunch big data and they are hungry to snag a piece of the advertising revenue that Google has monetized so well. Using big data to enhance the shopping experience will likely be popular with the kinds of shoppers who use in-store apps today. Customers can be offered live specials as they walk down aisles, with offers personalized to them. This could be tied into the holographic product displays and other in-store advertising systems.

However, this application could quickly get creepy if applied to all shoppers. I know I would never return to a store that recognized me as I walked in the door and used a cloud-based profile to try to direct my shopping. Perhaps my distaste for this kind of intrusion is generational and it might be attractive to younger shoppers – but I would find it invasive.

There are physical issues to consider with this kind of network. I tried to use my cellphone from the rear of a grocery store yesterday and I had zero bars of data and couldn’t connect to the voice network. Dead spots can be fixed by installing one or more small cell sites inside a store to reach all parts of a store – something that will become more affordable over time.

Verizon will have an advantage if smartphones are a needed component of the customized shopping experience. But the shopping applications don’t necessarily require smartphones. For example, screens built into shopping carts could fulfill the same functions without tying a retailer to paying Verizon.

One of the biggest hurdles I see for Verizon’s vision is that retail stores are slow adopters of new technology. This kind of application would likely start at big nationwide chains like Target or Walmart, but it’s a decades-long sales cycle to get stores everywhere to accept it. Verizon’s vision also assumes that stores want this – but they are already fighting for survival against online shopping and fast delivery, and they might be leery of a technology that could drive away a portion of their customer base. From what I can see, the stores that provide a personal touch are the ones competing best with online shopping.

To summarize, Verizon is espousing a future vision of retail where the retailer can interact electronically with shoppers on a personalized basis. The first big hurdle will be convincing retailers to try the idea, because it could easily go over the top and be viewed by the public as invasive. More importantly, licensed 5G from Verizon isn’t the only technology that can deliver Verizon’s vision since there will be significant competition in the indoor millimeter wave space. This is one of those ideas that might come to pass, but there are enough hurdles to overcome that it may never become reality.