Pretty Advanced New Stuff from CCG Consulting

Monthly Archives: March 2015

There has always been some uncertainty in the telecom industry and over my career I have seen some giant companies come and go from the scene. But as I watch the big companies today I am seeing more unease about the future than I can ever remember.

This unease is justified, and I think it is due largely to the cumulative effect of the choices customers are making. In the past the large telcos and cable companies had a limited portfolio of products that they offered, with reasonable assurance that a significant portion of their potential customer base would buy them.

But in today’s world, customer choice is expanding rapidly. People have a huge number of options compared to the past, and as customers pick what they like we see winners and losers in the industry. This has to scare the big companies to death.

This happens at both the macro and the micro level. Let me start with the micro and look at programming choices. Recent Nielsen surveys show that the time spent watching traditional television programming, particularly on a real-time basis, is starting to decrease significantly. But the time spent by people watching some kind of content is increasing.

I’m not sure that older people understand how fundamentally differently our kids watch content than the way that we do. As an example, my daughter watches a lot of YouTube, in particular videos on how to make crafts. She has an artistic bent and she now finds content that pleases her rather than watching what some media company thinks that kids ought to watch. Basically, every kid is creating their own channel of content, and most of it is free.

As kids make these choices, and as that generation ages, traditional content is going to be in a world of hurt. For example, somebody you've probably never heard of who goes by the name of Stampy Longnose started making Minecraft tutorials and walk-throughs and putting them on YouTube. He's been a huge hit with kids under 10 and is now a millionaire due to his work on YouTube. We are now at a time when even 4-year-olds are able to up-vote their favorite content. Certainly there will always be some content like Game of Thrones or House of Cards that grabs national attention and gets millions of viewers. But over time a lot of the content that the various networks are putting together is going to go largely unwatched.

In this new world of micro-content, viewers find content by word-of-mouth. For instance, I have been watching a hilarious comedy on YouTube called The Katering Show that was recommended by a friend. This is a small budget ‘cooking show’ by two Australians that I have found to suit my own sense of humor (caution, it’s a bit bawdy). This is my first foray onto YouTube other than to watch music videos, but I know I will now be looking for other content there.

The same thing happens with cellphone apps. While there are a handful of apps that a whole lot of people use, over time we each go find things that please us. My favorite app is Flipboard; I get most of my news from it these days. Flipboard allows you to choose from a wide array of news sources and end up with a customized newsfeed. Every cellphone user has their own set of favorite apps. If enough people use a given app it succeeds, and if they don’t it fails.

On the macro level, there is a huge tug-of-war going on between platforms and devices. Anybody who is in those businesses has to be worried. For example, smartphones are becoming a serious competitor to PCs and tablets and even to televisions. My wife and daughter watch a surprising (to me) amount of content on their phones.

The industry still has some sway, of course, in the device market. They can make a huge marketing splash and get people interested in something new like wearables or smartwatches. But in the end, the public is going to pick the winners and losers in any new area. Countless companies have already come out with devices that they were sure would be a hit but that flopped badly.

Almost every segment of the industry is being tugged at by significant (or soon to be significant) competition. We are going to see WiFi battling it out with cellular, WebRTC battling for a big chunk of the voice business, OTT programming battling the cable companies, and gigabit fiber networks challenging the incumbent ISPs.

In the hardware world we see cloud services going head to head with company routers and IT departments. Manufacturers of headends and cell sites are worried about software-defined networks that will eliminate the need for their equipment. Set-top boxes are being replaced by smart TVs, Roku boxes, and game platforms.

It’s hard to find many parts of the industry that are not in turmoil in some fashion, though there are a few. Makers of fiber optic cables are working at a feverish pitch to keep up with demand. ESPN is making tons of money due to exclusive sports content. But more and more it seems that for the first time that I can remember in our industry, customers are picking the winners. That is something very new.

Barely two weeks after the release of the FCC's new net neutrality rules there have been two lawsuits filed asking the courts to set aside the new rules. This is one of the quickest reactions to an FCC order that I can remember.

One petition was filed by the USTelecom Association with the US Court of Appeals for the District of Columbia. This is the trade group representing Verizon, AT&T, and other large telcos and ISPs. The second was filed with the US Court of Appeals in New Orleans by Alamo Broadband, a small ISP from Texas.

The lawsuits are a bit surprising because the FCC hasn't yet published the new rules in the Federal Register, meaning that they are not yet in effect. There is a good chance that these two suits will be dismissed for being prematurely filed, but there is no doubt that these and other cases will be filed once the order has been officially published. It's rumored that the CTIA plans to file an appeal on behalf of the large wireless carriers. Challenges by the trade groups will save AT&T, Verizon, and Comcast from having to challenge the FCC directly.

As expected, the suits challenge the FCC’s authority to impose Title II regulation on broadband. USTelecom refers to this as ‘utility-style regulation’, although the FCC provided forbearance on most of the regulatory requirements that apply to telcos and CLECs.

I’m not a lawyer, but I recall a lot of dismay in the industry when the FCC decided many years ago to classify the Internet as an information service rather than as a utility. My opinion is that all they have done by this ruling is to set straight the mistake they made years ago, and that they always have had the option to regulate the Internet under Title II. But of course, the courts are going to be the ones to decide the extent of the FCC’s jurisdiction.

One thing is clear: if these lawsuits succeed, the FCC is basically out of options and net neutrality will probably be dead.

The final net neutrality rules are somewhat simple and straightforward and make four distinct points:

No Blocking. The order says there can be no blocking of transmissions of lawful content, although the order allows ISPs to refuse to transmit unlawful material such as child pornography or copyright-infringing materials.

No Throttling. ISPs are not allowed to slow down content. This means that everything delivered to customers must have the same priority.

No Paid Prioritization. While this is similar to the no throttling rule, it applies more to the network between content providers and the ISPs and means that companies can’t arrange deals that provide for preferential treatment. They might still need to pay for interconnection, but they can’t use that process to gain an advantage over other content.

Case-by-Case Challenges. The FCC took an approach similar to Canada's net neutrality rules, and rather than lay out a lot of specific net neutrality rules they will look at specific cases in the future that are brought to their attention. I think this is a wise choice because our networks and technology can change faster than any rules, and this process will allow net neutrality rules to keep up with changes in technology.

Assuming the courts allow the current rules to stand, that last rule means that net neutrality is never going to be finished. The intent of the three major rules is pretty clear, but as the FCC hears future cases they will be crafting more detailed rulings on specific topics. So, while there are no detailed restrictions in this order, over time a body of rules concerning net neutrality will grow as the result of rulings on challenges. This means that it's likely there will always be lawsuits floating around on net neutrality topics.

I also foresee one other danger for net neutrality. Modifying net neutrality over time on a case-by-case basis will make the whole process subject to the whim of future FCC commissioners. In recent years the FCC has tended to bend and sway with changes in the administration, and so we may suffer through conflicting rulings from different FCC commissioners. But I guess before we worry too much about this we’ll have to wait a while to see if the courts allow these rules to stand.

As if you needed even one more reason to be wary of social media, today's topic is social media bots. Bots are algorithms that operate in social media networks. They are created to look like real users but are actually software that mimics human behavior.

You may think this is somewhat of a silly topic, but consider the following statistics. Over 20% of users on social media sites will accept an invitation from an unknown person, making them open to accepting bots as friends. Surveys have also shown that 30% of users can be fooled by a bot into believing it is a real person. It's also been estimated that about 7% of tweeps are bots (a tweep being a follower that responds when you create a tweet).

That sounds innocuous enough, but consider how bots are being used today:

Bots are being used to try to influence public opinion. There have been a number of bots created to be somewhat conversant on a specific topic in order to spread a specific political agenda. So when you are having a conversation on Twitter with somebody who readily spouts lots of facts about global warming, net neutrality, the minimum wage, or almost any topic, you might be talking to a bot.

Bots can be used to create web fame. For example, there are tools available that let you quickly produce tens of thousands of fake Twitter accounts to follow you, making you appear famous and worth tracking by others. While that may sound innocuous, there is money to be made by gathering real followers, and bots can be used to exaggerate one's web influence. If somebody makes a living by selling books, for example, having a mountain of fake followers can make them seem more famous than they really are.

Bots also might be responsible for some of the trending topics on Twitter, since an army of bots can be used to discuss any keyword and make it trend. That may not sound like a big deal, but stories that trend on Twitter are often followed up with actual news coverage. So this can be a tool to influence news coverage, and as such is a propaganda tool.

Now that tweets feed into the Google search engine, it's not hard to envision bots being used to influence a company's standing in web searches. This means a company might pay to get a high priority on Google but then be trumped by somebody who instead used an army of bots to make themselves look popular.

Bots are also a new way of spamming. If you get a tweet recommending that you buy something, it's most likely from a bot. But unlike spam for Viagra or Nigerian banking, spam from bots is more likely to recommend a book you should buy or a movie you should see in response to something you've said on social media.

But bots are used for other kinds of spam that are more lucrative. For example, I get probably twenty spam comment attempts on this blog every day. WordPress is pretty good at identifying such spam, but every once in a while these comments get through and I have to delete them. This spam is being undertaken in order to improve standing in the Google search engine, since being linked to credible web sites like this blog adds credence to a commercial or scam website.

Bots are often used to fill up newsfeeds and Twitter feeds with negative comments about somebody the bot creator doesn't like. So when you hear about something getting a lot of negative attention, this might be due to bot traffic.

Bots can be used to block somebody from being heard. As an example, during the Arab Spring movement, Arab governments flooded protestors' Twitter feeds with spam so that any tweet a protestor made quickly got lost in the volume of noise.

Bots can be used to gather big data on specific topics. For instance, there could be a bot that follows as many people as possible to gather everything said on Twitter on a specific topic. One could picture political action groups, trade organizations, or corporations that might use bots to keep an ear out for what is being said about them or the topics that they care about. It’s not hard to make sense of huge volumes of tweets if you are using data analytics to parse them.

In a use that scares me, the US Air Force revealed that it was creating a program that would allow it to mass-produce bots for influencing political opinion. (I find it scarier that the military thinks that part of their mission is to influence public opinion than that they might use bots to do so).
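Parsing a flood of tweets for mentions of a topic, as in the data-gathering use above, takes remarkably little code. Here is a minimal sketch in Python using made-up sample tweets (nothing here touches any real Twitter API):

```python
from collections import Counter
import re

def topic_mentions(tweets, keywords):
    """Count how many tweets mention each tracked keyword."""
    counts = Counter()
    for tweet in tweets:
        # Crude tokenizer: lowercase words only
        words = set(re.findall(r"[a-z']+", tweet.lower()))
        for kw in keywords:
            if kw in words:
                counts[kw] += 1
    return counts

# Hypothetical sample data for illustration
tweets = [
    "Net neutrality is the issue of the decade",
    "The FCC ruling on net neutrality is out",
    "Minimum wage debate heats up again",
]
print(topic_mentions(tweets, ["neutrality", "wage"]))
```

Scaled up to millions of tweets, this same counting idea is roughly what lets an organization keep an ear out for the topics it cares about.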

This phenomenon lends itself to social media that is somewhat impersonal. There are certainly Facebook bots that people can pick up if they will friend people they don’t know, but bots are more likely to try to friend you on LinkedIn or follow you on Twitter since those sites promote making new contacts. I look at my own Twitter account and there are a number of followers who could very easily be bots – I have no way of knowing if these are real people or real organizations.

A number of social media sites have undertaken steps to identify and eliminate bots, but it’s an uphill battle when somebody can create a new army of bots within hours to replace ones that are taken down. The presence of bots is one more thing that should make you skeptical of things like the trending news topics on Twitter. It’s possible for one person to influence such things for their own purposes, and sadly, most people have no idea that these bots even exist.
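The detection side is an arms race, but the basic approach is simple scoring of behavioral signals. The signals and thresholds below are purely illustrative assumptions on my part, not any platform's actual rules:

```python
def bot_score(account):
    """Crude heuristic: sum of suspicious signals. Higher = more bot-like.
    All thresholds here are made up for illustration."""
    score = 0
    # Following thousands while having almost no followers is a classic bot pattern
    if account["following"] > 1000 and account["followers"] < 50:
        score += 2
    # Very young accounts that tweet constantly are suspect
    if account["age_days"] < 30 and account["tweets_per_day"] > 100:
        score += 2
    # A bare profile with no bio costs a point
    if not account["has_bio"]:
        score += 1
    return score

suspect = {"following": 5000, "followers": 10,
           "age_days": 3, "tweets_per_day": 400, "has_bio": False}
print(bot_score(suspect))
```

The catch, as noted above, is that bot makers can read the same signals and tune new accounts to slip under whatever thresholds the platforms pick.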

In a blog last week I talked about an alternate model for the Internet that can make it safer to communicate with others. The idea that I explored last week was to base web transactions on block chains, which is a technology that decentralizes communications without needing to pass through centralized servers.

Today I want to talk about mesh networks as another idea for developing safer communications. There is now a movement within the country to create mesh networks as an alternative to the traditional web. Mesh networks have been around a long time, and the concept is simple. Today's Internet relies upon making every connection for every transaction through an ISP. The ISP, using a series of servers and routers, then directs your traffic to where it's supposed to go.

But it is these servers and routers that are the weak points in today's web. First, the ISP is recording everything you do and mining every piece of data you send through its network. These servers and routers are also where malicious entities get access to your data, making you vulnerable to everybody from hackers to the NSA.

The idea of a mesh network is to skip these intermediate checkpoints whenever possible. In a mesh network every device in the mesh is able to communicate directly with the other devices within the mesh. Picture, as an example, a neighborhood where all of the households meshed their WiFi networks together. In such a network you could communicate with anybody in the neighborhood and exchange data with them without having to go back to the ISP network. It would function as if you were all on the same WiFi network within a home. Granted there is not generally that much traffic exchanged with your neighbors, so such a network would be of limited use. But it’s an example of how a mesh works.
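The neighborhood WiFi example can be pictured as a graph in which messages hop directly from device to device. This toy sketch (with hypothetical house names) shows how a message relayed neighbor-to-neighbor reaches the whole mesh without ever touching a central server:

```python
from collections import deque

def flood(mesh, origin):
    """Deliver a message from `origin` to every reachable node by relaying
    it hop-by-hop through neighbors -- no central router involved."""
    reached = {origin}
    queue = deque([origin])
    while queue:
        node = queue.popleft()
        for neighbor in mesh[node]:
            if neighbor not in reached:
                reached.add(neighbor)
                queue.append(neighbor)
    return reached

# Hypothetical neighborhood: each house lists the WiFi neighbors it can hear
mesh = {
    "house_a": ["house_b"],
    "house_b": ["house_a", "house_c"],
    "house_c": ["house_b", "house_d"],
    "house_d": ["house_c"],
}
print(sorted(flood(mesh, "house_a")))
```

Real mesh routing protocols are smarter than blind flooding, but the principle is the same: every node is also a relay, so there is no single chokepoint to record the traffic.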

There are other existing examples of mesh technology today. For example, there are now a lot of smartphone applications that use Bluetooth. These applications let people in close proximity to each other exchange business cards or texts or other forms of communication that don't first get routed back to the cellphone hub. Any data exchanged in this local manner is not subject to being recorded or tracked at the hub.

Probably the biggest use of mesh today is FireChat. This is a mesh app for smartphones that lets users within close proximity communicate with each other using Bluetooth. This is big with younger people and there are over five million people using the app. With FireChat kids can message each other privately at concerts or at school if there are enough users in close proximity to create a mesh network. The app keeps communications private and also provides the ability to message in places where cellular or texting doesn't work.

Is it possible to take the idea of mesh networks further than is being done today to make them practical in a wider setting? I can picture such networks. For example, the students at a university could band together to create a mesh network from all of their WiFi connections. Inside such a mesh the students could communicate with each other without the university's servers recording everything they do and say. Such a mesh network would be decentralized and nobody would be able to monitor or record what was done on the mesh. Over time, private connections could be established between different university mesh networks, which would allow students to also communicate with students at other universities.

It would be a lot of work to establish and maintain such a network, but there are a lot of people who are growing alarmed at all of the spying done upon us on the open web. Mesh networks would pull people-to-people communication out of the open web and make it private. When somebody on the mesh chose to connect to the open web, say to run a Google search, then Google would be able to track them just as it does today. But communication inside the mesh network could provide an alternative to social media or to using one of the public messaging services, all of which are monitored by somebody.

Mesh networks are certainly not a total solution to achieving privacy on the web. But they can be one more tool in creating an alternate mode of communication that is not subject to spying. One can picture the ability to join different mesh networks for different purposes, each of which provides you some privacy and keeps some parts of your web life off the existing Internet, which today suffers from cybercrime, data mining, constant surveillance, and the fear of hacking.

There are a few organizations like Commotion and The Free Network Foundation that distribute software and information on how to establish a mesh network. But so far this has been a very tiny effort promulgated by privacy advocates. It will probably take some entrepreneurs establishing more widespread mesh networks if they are ever going to take hold as an alternative to the existing web. The tools needed to do this already exist, and perhaps somebody will take the initiative to create a nationwide mesh network for messaging, chat, and texting, away from the existing web.

I think there is at least some chance that hacking will become so invasive that people will be seeking alternative ways to communicate. If mesh networks are combined with other tools like block chains, then perhaps we can all take back some of our online life from the many entities who spy on us today.

As I mentioned in an earlier blog, I signed up with Sling TV because I wanted to see what web TV is like. My household is already a big user of Netflix, Amazon Prime, and Hulu, and I have a very good opinion of all of those services. I will admit that I don’t watch Hulu as much as the other two since I have yet to buy the premium service there, and so for now I suffer through the commercials. But all three of these services have a decent level of quality and I have rarely had problems watching what I want to watch.

I also have access to HBO Go. Comcast forced me into a small TV bundle in order to get faster Internet, and so I have the basic package that consists of the major networks plus they threw in HBO. I like the quality of the HBO online product, and in fact it seems to have better picture quality than the other web services, although it boots me from time to time.

But sadly I have not had the same experience with Sling TV. One of the reasons I got Sling TV was to watch the NCAA basketball tournament. It turns out that my favorite team, the University of Maryland, was playing two games on TNT, and these games were not available anywhere else on the web. I also caught U of M’s women’s basketball game on ESPN2.

The experience of trying to watch basketball on Sling TV was painful. It started when I first tried to log on to the service and repeatedly got the message that the feed was not currently available. It took me almost ten login attempts to make a connection. When I finally connected, the ‘reception’ was pretty good, about the same quality that comes with standard definition service on Netflix. The picture was clear enough and it looked good on my 27 inch monitor.

But after about ten minutes it started to have problems. First, I lost my connection and it took me a full ten minutes to reconnect. To a basketball fan that's an eternity. I finally got the game back and it was pretty decent quality again. But then I started having problems with the audio. The announcers' voices started clipping to the point where I had a hard time understanding them. Within another ten minutes the audio had also gotten almost two seconds out of sync with the video, with the voice coming in before the picture. This was really disconcerting.

I found that if I restarted the service I could fix the voice, but I again needed multiple attempts to get reconnected. By the second half of the basketball game the audio was just so awful that I turned off the sound and listened to the rest of the game on Sirius radio while watching the video. There were times during the game when I got significant pixelation, although this tended to clear itself after a few minutes each time.

I had this same thing happen on other channels including ESPN, ESPN2, and the Food Network. The problem was not as pronounced on ESPN, but the audio problems were still there.

I have a 50 Mbps cable modem with low latency, and I can't remember ever having any major issues on Netflix or Amazon Prime. In hundreds of hours of viewing I may have been booted from those services maybe three times. So I know it's not my Comcast connection. The problems I had with Sling TV are puzzling since the service is delivered as unicast, with each viewer getting their own stream. I'm curious how many other viewers had the same problems I did.
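As a sanity check on the bandwidth question, a quick back-of-the-envelope calculation shows the connection should have had enormous headroom. The stream bitrates below are rough assumptions on my part, not Sling TV's published numbers:

```python
# Rough, assumed numbers for illustration -- actual stream bitrates vary
connection_mbps = 50.0   # my cable modem's rated speed
sd_stream_mbps = 3.0     # typical standard-definition stream
hd_stream_mbps = 5.0     # typical high-definition stream

print(f"SD streams supportable: {connection_mbps / sd_stream_mbps:.0f}")
print(f"HD streams supportable: {connection_mbps / hd_stream_mbps:.0f}")
```

Even the HD figure leaves a tenfold margin, which is why the problems look like a service-side issue rather than a last-mile one.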

There are some good features of the service. While they advertise that you get ESPN and ESPN2, a subscription also gets you a feed into ESPN3, the online ESPN programming. I looked at several college baseball games, some wrestling, and soccer on ESPN3. The feeds I watched for past events did not have the same issues, so perhaps the problem is only with real-time feeds. I was not given access on ESPN3 for content on the SEC network, but I find that understandable.

But for now, until Sling TV figures out these issues, the service is not ready for prime time. This makes me sad because I want web TV to be successful. But my experience of watching several basketball games was horrible and was one of the worst sports viewing experiences I have ever had. This was even worse than trying to watch sports via satellite on days when the pixelation is bad. Luckily you only have to buy it one month at a time, and I will come back in the fall when it's football season and try again. I would certainly caution folks against signing up for the three-month subscription they are offering without trying the service first.

If web TV is going to succeed, these services have to be able to offer the same quality that people expect elsewhere. They are directly competing with Netflix and Amazon Prime, and customers can easily compare their quality against those services. But they are also competing against the quality of normal cable TV and satellite systems. If web TV isn't at least as good as those two alternatives it will have a hard time retaining customers.

There is yet another new threat/opportunity for the telecom industry in WebRTC. That stands for Web Real-Time Communication and is a project to create an open standard for delivering high-quality voice and data applications on a wide variety of platforms, including browsers and mobile phones, all using the same set of protocols.

The most immediate use for the new standard is building direct voice and video communication applications from every major web browser. The project is being funded and developed by Google, Mozilla, and Opera. Microsoft has said that they are working towards developing a real-time WebRTC app for Internet Explorer.

From a user perspective, WebRTC will enable anybody to initiate voice and/or video communication with anybody else using a browser or using a WebRTC-enabled device. What is unique about this effort is that the brains of the communication platform will be built into the browser, meaning that an external communications program will not be required to make such a connection. This creates browser-to-browser communication and cuts out a host of existing software platforms used today to perform this function.

This means that the big browser companies are making a big play for a piece of the communications market. The WebRTC platform will put a lot of pressure on other existing applications. For example, WebRTC could become the de facto standard for unified communications. This would let the browser companies tackle this business, which is today controlled by softswitch, router, or software vendors.

WebRTC is also going to directly compete with all of the various communication platforms like GoToMeeting and Skype. I know I maintain half a dozen such platforms on my computer that I've needed in order to view slide shows from different clients or vendors. WebRTC would do away with these intermediate platforms and let anybody on a WebRTC browser communicate with anybody else on WebRTC. You should be able to have a web meeting where there are participants on Google Chrome, Mozilla Firefox, or Internet Explorer, all viewing and discussing a slide show together from their different platforms.

In the next generation of the standard the group will be developing what they call Object-RTC, which will be a platform that will integrate the Internet of Things into the same communications platform. This will enable anybody from any browser to easily communicate with devices that are on the Object-RTC platform, making it far easier for the normal person to integrate the IoT into their daily lives. This could become the standard platform that will allow you to communicate with your IoT devices equally easily from your PC, tablet, or smartphone. This is presumably a market grab by the browser companies to make sure that the smartphone doesn’t become the only interface to the IoT.

While the WebRTC development effort is largely being funded by Google and the other browser companies, numerous other companies have been developing WebRTC applications in an effort to keep themselves relevant in the future communications market.

Since the WebRTC platform is browser-based, it’s estimated that it will be available to 6 billion devices by the end of 2019. One would think that browser-based communications will grow to be a major means of communicating by then, putting additional pressure on companies today that make a living from providing voice.

Because it's browser-based, WebRTC is likely to have more of an initial impact on the residential market. Larger businesses today communicate using custom software packages, and as WebRTC becomes the standard those platforms will likely all incorporate it. To that effect we have already seen some large companies snag some of the early WebRTC developers. For example, Telefónica acquired start-up TokBox in 2012. More recently, the education software services company Blackboard bought Requestec. And Snapchat paid $30 million to buy WebRTC startup AddLive.

One can expect a mature WebRTC platform to transform online communications. If people widely accept WebRTC (or one of the many different programs that will use it), then it could quickly become the standard way of communicating. What is clear is that with companies like Google, Microsoft, and Mozilla behind the effort, this new communications standard is going to become a major player in the communications business. This is going to mean fewer minutes on the POTS telephone network. It will also put huge pressure on intermediate communications platforms like GoToMeeting, and those kinds of services might eventually disappear. I remember hearing somebody say a decade ago that voice would eventually be a commodity, and this is yet another step towards making voice free.

At a time when AT&T wants to ditch millions of copper lines, and when Verizon apparently wants to phase out of the wireline business and is even selling off FiOS, CenturyLink is taking a different approach.

The company has begun building gigabit fiber in a few cities and has announced plans to build in many more. CenturyLink has already deployed gigabit fiber to some residential customers in some parts of Omaha and Las Vegas, and to some businesses in Salt Lake City. The company has announced plans to provide new residential gigabit fiber in new markets including Seattle, Portland, Salt Lake City, Denver, Minneapolis / St. Paul, and in Columbia and Jefferson City in Missouri. Additionally the company plans gigabit fiber for businesses in Spokane, Sioux Falls, Colorado Springs, Albuquerque, Phoenix and Tucson.

This initiative makes CenturyLink the only large incumbent telco that is investing in fiber. And since the cable companies are mostly upgrading speeds in response to competition, this makes CenturyLink the only large ISP that is being proactive with fiber.

With that said, I have no idea how much fiber they are actually going to build. CenturyLink inherited a company from Qwest with a very ugly balance sheet and which still today does not spin off enough cash to make a huge fiber investment. And so there is the possibility that they are building a little fiber in each market for press release purposes and not intending (or able) to finance the construction of a lot of fiber in the same way that Verizon invested in FiOS.

But reading between the lines, I think they really want to invest in fiber. CenturyLink inherited possibly the worst local network in the country when they merged with Qwest. Qwest had been in marginal financial shape for so long that they had let the networks in most markets deteriorate significantly. Qwest instead invested in long-haul and large-city downtown fiber to make money in transport, long distance, and sales to large businesses. And they did okay in those areas and have one of the best nationwide fiber networks.

CenturyLink has the most to lose of the large ISPs. AT&T and Verizon have become cellular companies that also happen to be in the landline business. The cable companies have captured the lion’s share of the residential data market almost everywhere. But CenturyLink has no fallback if they lose landline-based revenues. They inherited a network that lost the residential battle everywhere in head-to-head competition with the cable companies. And in every large city they have significant competition for business customers from CLECs, cable companies and fiber providers.

So I think CenturyLink has hit upon the right strategy. In every market (or at least in every neighborhood) there is likely to only be one fiber provider who is willing to build to everybody. Over time, as households and businesses want more data, fiber is going to be the only long-term network that will be able to satisfy future data demand.

I keep hearing about having gigabit wireless products someday, but the physics of that product will require mini cell sites that are close to customers. And that means having a cellular network that is fed by neighborhood fiber. Anybody who thinks that the cellular companies are going to be able to supply that kind of bandwidth with the current cellular networks doesn’t understand the physics of spectrum.
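A rough way to see that spectrum constraint is the Shannon-Hartley capacity bound. The channel width and SNR below are illustrative assumptions, not any carrier's actual figures; this is a back-of-envelope sketch, not an engineering claim:

```python
import math

def shannon_capacity_bps(bandwidth_hz, snr_db):
    """Shannon-Hartley upper bound on channel capacity: C = B * log2(1 + SNR)."""
    return bandwidth_hz * math.log2(1 + 10 ** (snr_db / 10))

# Assumed numbers: a 20 MHz channel at a healthy 20 dB SNR
cell = shannon_capacity_bps(20e6, 20)
print(f"20 MHz @ 20 dB tops out near {cell / 1e6:.0f} Mbps, shared by the whole cell")

# Spectrum needed to deliver a dedicated gigabit at that same SNR
needed_hz = 1e9 / math.log2(1 + 10 ** 2)
print(f"A 1 Gbps link at 20 dB needs roughly {needed_hz / 1e6:.0f} MHz of spectrum")
```

That gap is why gigabit wireless implies many small, fiber-fed cells reusing the same spectrum over short distances rather than today's macro towers.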

I wish CenturyLink well in this endeavor. Most of the potential markets want fiber and the company will do really well if they can find the financial resources needed to build significant fiber. Their copper networks are dying and there is very little they can do about that. There are currently some industry patches on copper such as using two copper pairs joined together, but these are band-aids being applied to a dying network. Looking twenty years into the future, if CenturyLink doesn’t build fiber they won’t have much left.
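The copper patch mentioned above is pair bonding, which roughly sums the per-pair rates less some overhead. The per-pair rate and overhead figures below are assumptions for illustration only:

```python
def bonded_rate_mbps(per_pair_mbps, pairs, overhead=0.05):
    """Approximate throughput of bonded copper pairs: bonding
    roughly sums the per-pair rates, minus a small protocol overhead."""
    return per_pair_mbps * pairs * (1 - overhead)

# Assumed: a 50 Mbps DSL pair on a short loop, bonded with a second pair
print(f"{bonded_rate_mbps(50, 2):.0f} Mbps from two bonded pairs")
```

Even doubled, the per-pair rate still falls off quickly with loop length, which is why bonding is a band-aid rather than a fix.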

I am still surprised that Verizon is selling off mature cash-cow FiOS fiber networks, as they recently announced. But Verizon has obviously been taken over by the wireless guys, who seem to want the company out of the wireline business. CenturyLink has no other options, so I think they either go to fiber or watch their networks and their business slowly die.


It seems there has been a flip sometime in the last few years in how quickly predictions made by futurists become reality. Historically, new technologies have taken longer to come to fruition than visionaries imagined. But lately I have read numerous articles from futurists saying that they are seeing just the opposite, and that things are becoming reality much faster than any expert predicted.

I have heard for years that the rate of acceleration of the growth of human knowledge is getting faster all the time. I may be remembering the quote wrong, but I recall an article I read a few years ago that claimed that the knowledge mankind has gained in just the last few years is greater than all of the knowledge that has been gathered in all of mankind’s history.

That is an amazing claim, but there is a lot of evidence that it’s true. Consider this article that talks about the major scientific announcements that have been made public just in January and February of this year. The list is astounding. Here are just a few of the things on that list:

Scientists have discovered teixobactin, the first new antibiotic in 30 years.

The first map of the human epigenome has been completed: these are the switches that can turn individual genes in our DNA on and off.

There is a new electron microscope that can see individual atoms.

Physicists have found a way to accelerate particles to nearly the speed of light without the application of any external forces.

Researchers have been able to grow human skeletal material in the lab that acts just like the real thing.

Stem cells have been used to create cells that can grow human hair.

Astronomers have found a black hole that is 12 billion times as massive as our sun.

Cosmologists have developed a new physics model that suggests there was no big bang and that the universe has existed forever.

Scientists believe there are two more planets beyond Pluto.

Every one of these claims is a big breakthrough, and yet there are so many scientific discoveries being made that most of them barely get any press. I follow tech and science and I had not heard of nearly a third of the items on this list.

I was always interested in science as a kid and I remember even at a young age avidly reading articles in places like Life Magazine that talked about the discovery of how DNA worked, the invention of polymers, or finding the early hominid Lucy fossils. It seemed like there was a major scientific breakthrough a few times each year and such things got wide coverage. As I got a little older I would read Scientific American and other sources of information about science and would see the same thing. There was progress here and there in scientific fields, but nowhere near the pace of what we are seeing today. I have no idea how scientists stay current since there is so much happening in so many fields. It’s always been understood that any important discovery often leads to progress in other fields of study as scientists understand the implications of various discoveries.

Certainly there are good reasons for the breakthroughs today. Probably first is that we have better tools. We are able to look deeper in space with amazing light and radio telescopes; we can look at smaller things with electron microscopes. And with modern computers, we can crunch the data from experiments faster and more accurately. Science for many years was more about handling the data from studies than it was about doing the actual research.

What is most amazing is how un-amazing this all seems. Twenty or thirty years ago most of the above recent announcements would have been major news. But when we are bombarded by amazing discoveries every time we browse news articles, the amazement gets a bit dulled.

The real excitement for me is all of the areas of research that are getting close to major discoveries. Just in the medical area there are breakthroughs expected in areas like cryonics (keeping people in suspended animation), nanobots for fighting cancer and other diseases from inside the bloodstream, laboratory-made replacement organs, reversing or halting aging, and brain and memory enhancement. And every field of science and technology has its own similar list of amazing things that will probably become reality within just a few years.


I was recently at my mother-in-law’s house and saw an example of what competition can do for the country. She lives in Kyle, Texas, which is an outer suburb of Austin. When I say outer, it’s an hour’s drive to downtown Austin.

As I was working on my laptop using her WiFi, it felt faster than on previous visits, so I ran a speed test. And sure enough, her bandwidth measured in at a little over 70 Mbps download and 10 Mbps upload.

She buys only the basic Internet product from Time Warner. I am pretty sure that in the past this was a much slower product, closer to 15 Mbps, and possibly less. But her speed has certainly been increased significantly due to competition. By now everybody knows that Austin is in the midst of significant competition, with Google, Grande and AT&T each selling a gigabit data product, while Time Warner now offers speeds up to 300 Mbps. This competition has upped the game for everybody in the market.

The sad thing is that it takes competition to get the cable companies to up their game. I doubt that many other Time Warner markets around the country have base speeds of 70 Mbps, and probably none of their other markets has speeds of 300 Mbps.

I really don’t understand why the cable companies don’t just increase speeds everywhere as a way to fend off competition. One would think Google might be a lot less likely to build fiber into a market if every customer there already had 300 Mbps data speeds. The cable companies in most markets clearly have the majority of customers, and certainly have all of the customers who are interested in fast speeds. They have it within their power to be market leaders and to bring fast speeds today, so that any future competitor will have a hard time denting their lucrative markets.

Instead many of them sit and wait until the inevitable announcement of competition before they do the upgrades needed to get faster speeds. For example, Cox has announced that in Omaha and Las Vegas they will have speeds as high as a gigabit in response to fiber deployment by CenturyLink in those markets. But not all of the cable companies are waiting. For example, Charter recently doubled the speeds on most of their products. That is not the same as offering blazingly fast speeds, but it really makes a difference to boost their base residential product to 60 Mbps.

I know that there is a cost to upgrading data speeds. But recently Time Warner Cable said in their annual report that they have a 97% margin on their data products, a number that opened a lot of eyes nationally. One would think that the cable companies would do anything to protect a product with margins that high and that they might spend some of that margin to fend off competition.
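To put that 97% figure in per-subscriber terms, here is a back-of-envelope sketch; the $50 monthly price is a hypothetical round number, not Time Warner's actual rate:

```python
def annual_gross_margin(monthly_price, margin_pct):
    """Annual gross margin per subscriber at a given margin percentage."""
    return monthly_price * margin_pct * 12

# Hypothetical $50/month data product at the reported 97% margin
margin = annual_gross_margin(50.00, 0.97)
print(f"${margin:,.2f} of gross margin per subscriber per year")
```

Against margin like that, even a substantial speed-upgrade spend looks small.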

I have no idea how well Google does when they come into a new market. I know that when a municipal provider comes to a market they generally get 40% to 60% market penetration with their data products. But the Google product, at a premium price of $70 per month, is probably not going to attract quite as many customers. Still, one has to think that they probably get at least 30% of households.

Cable companies have a lot to lose if they lose 30% or more of their customers in the large urban markets. It’s clear that the cable TV product today has very poor margins (if not negative margins) and so the future of the cable companies comes from data sales. They are in the enviable position of already having gotten most of the customers in most markets, and one would think they would want to jump in front of potential competition and head it off before it even starts.
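That "a lot to lose" is easy to quantify with a hypothetical market; the subscriber count and ARPU below are assumptions for illustration:

```python
def annual_revenue_at_risk(subscribers, arpu_monthly, lost_share):
    """Annual data revenue lost if a competitor takes lost_share of subscribers."""
    return subscribers * lost_share * arpu_monthly * 12

# Assumed: a 100,000-subscriber urban market, $50 ARPU, 30% share loss
at_risk = annual_revenue_at_risk(100_000, 50, 0.30)
print(f"${at_risk:,.0f} of annual data revenue at risk")
```

And at a 97% margin, nearly all of that lost revenue would have been profit.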

But they are not acting like companies with a lot to lose. To me it feels like they are making a strategic error by not being more proactive with data speed upgrades. The cable companies are largely disliked by their customers, and they could go a long way to change that perception by unilaterally raising data speeds to be as fast as they can make them.

I am glad to see competition forcing data speed increases, but the majority of markets are not competitive. In my mind, if the cable companies wait to increase speeds until after a competitor has been announced in each market, they will have lost the game. People are going to perceive that as too little, too late. And it’s a shame, because we know from Austin what a cable company can do when it is motivated by competition. I just scratch my head and wonder why maintaining markets with a 97% margin data product is not enough motivation to fight to keep the customers they already have.


It’s a rather new phenomenon, but we are seeing the beginning of a shift to making more voice calls on WiFi networks than on cellular networks. As Americans have become more conscious about making data connections on WiFi they have opened the door to using WiFi for their voice usage.

The trend of using WiFi for voice, as it matures, could really shake up the cellular industry. The AT&T and Verizon cellphone plans are among the most profitable products sold by any corporation and that makes them a target for competitors, and a place for consumers to save money.

It’s funny how the industry has changed so much. I remember twenty years ago going to state commissions and asking, and being rejected, for $2 rate increases in local telephone rates because the regulators feared that people couldn’t afford to pay it. And yet a decade later families went from having a $30 home phone to paying three and four times that much for cell phone plans.

There are several companies that have been selling WiFi calling for the last few years. FreedomPop, which started in 2012, offers a product that uses a network of over 10 million hot spots in places like McDonald’s or Starbucks. FreedomPop’s phones will automatically join WiFi networks much like a normal cellphone automatically connects to a cell tower. Their rates are really low and for $5 a month a customer can have a WiFi-only plan that connects to the network of WiFi hot spots. There are other slightly more expensive plans that use a combination of WiFi hot spots and Sprint’s cellular network when WiFi isn’t available.

Republic Wireless has a similar set of products. For $5 a month, customers can make calls or connect to the Internet solely over WiFi. For $10 a month, they can use both WiFi and Sprint’s cellular network. Republic Wireless has developed a technique that lets customers roam between hot spots (but this roaming is more suited to walking than driving in a car).

Scratch Wireless has an even more aggressive plan and using their WiFi network for voice, text, and data is free as long as you buy their $99 Motorola Photon Q phone. They then sell pay-as-you-go access to voice on Sprint’s cellular network starting as low as $1.99 per month.
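The consumer math behind these plans is straightforward, using prices already cited in this post (the $90 figure is the "three times $30" cellular bill mentioned above; the calculation ignores handset costs):

```python
def annual_savings(big_carrier_monthly, wifi_plan_monthly):
    """Yearly savings from switching a line to a WiFi-first plan."""
    return (big_carrier_monthly - wifi_plan_monthly) * 12

# $90/month big-carrier plan vs Republic Wireless's $10 WiFi+cellular plan
print(f"${annual_savings(90, 10):,} saved per year")
```

Savings on that order are what make these carriers attractive to cost-conscious customers despite the coverage trade-offs.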

These companies are growing rapidly. FreedomPop says it is doubling its customer base roughly every four to six months; Republic Wireless says its customer base is growing 13 percent a month. But both companies are still really tiny compared to the big carriers and are mostly catering to cost-conscious customers who spend most of their time around WiFi. From what I can see, both companies get rave reviews from their customers.

Cablevision recently announced a WiFi-only plan for $30 a month for non-cable customers but only $10 for bundled customers. I don’t understand their pricing, which obviously is not going to be very attractive to non-Cablevision customers. Cablevision operates an extensive network of hot spots in New York, New Jersey, and Connecticut.

The real disruptor might be Google. They announced that they are going to be offering cellular phone plans and the industry seems to think that they will be WiFi-based. Certainly in the markets where they have fiber networks they could saturate the market with outdoor WiFi hotspots and offer a true competitor to cellular. Google has always said that they think bandwidth ought to be ubiquitous, and since they don’t own cellular spectrum, they are going to have to go the WiFi route and also make a deal for off-network minutes from Sprint or T-Mobile.

One also has to think that Comcast has their eye on this. They certainly are rolling out a huge WiFi network as they turn customer routers into public hot spots.

And so the phenomenon is starting to grow. The large cellular companies say they aren’t worried about this, but one has to think that in the boardrooms they are keeping an eye on this trend. For now there are issues with using these products. One is data security, as it’s fairly well known that public WiFi hot spots are loaded with danger for users. That is the case whether you are hitting a hot spot with a PC or a cellphone.

I know that personally I will probably stick to a bigger company plan. When I travel it is more often to out-of-the-way places than to big cities. And those kinds of places generally have coverage of some sort by the big carriers, but are often uncovered by smaller carriers like Sprint and T-Mobile. I would not like to find myself in a small town for a few days with no cellphone coverage. Other than that travel, I work at home and could easily use my own WiFi rather than pay for cellular.

For the product to be competitive, it’s also going to have to be usable on the major phones being sold. Not having this product for the iPhone or Samsung Galaxy limits the target audience. For now the small carriers like Republic load their own proprietary software on the phones they sell to users. But as that turns into a downloadable app I could see this product picking up a lot of traction in cities.

AT&T and Verizon are right not to be worried about this today. But if you look forward a few years, this could grow into a significant competitor to cellular. Even if that doesn’t mean the loss of a lot of customers for the big companies, it will mean overall lower prices for cellphone plans. That is something they ought to be worried about.