Pretty Advanced New Stuff from CCG Consulting


Monthly Archives: November 2014

Since Thanksgiving is here, I made my list of the telecom things I am thankful for this year. Here are my good thoughts for this season:

An FCC Chairman that Talks the Talk. We have a new FCC Chairman in Tom Wheeler who seems to talk the talk. He has said the right things about a whole range of topics. He wants to increase the definition of broadband to 10 Mbps. He wants to allow municipalities and anybody else to build fiber networks. He wants to make net neutrality apply to wireless as well as landline data connections. He has speculated that Comcast and Time Warner are too large to merge. He has even talked about allowing competitors to use unbundled fiber networks.

There was a big worry when he took office that he would support large cable and wireless companies due to his history as the head of those industry groups. And he still might. While he has talked the talk, nothing he has talked about has yet come to pass. All that will matter in the end is what he does, not what he says. But for now I am at least thankful that he is talking the right talk.

Moore’s Law Has Not Yet Broken. For the last fifteen years it seems some expert has always been predicting the end of Moore’s Law – the observation that computer processing power doubles roughly every 18 months. But this year alone I’ve seen dozens of incremental improvements in computing power, and it doesn’t look like we are anywhere near the end of technology history that the pessimists have so often predicted.

Data Speeds are Getting Faster. Network technology is improving so quickly that the incumbent providers find themselves increasing data speeds almost in spite of themselves. Of course, some of the data speed increases we have seen are the result of competition. But we are seeing gradually faster speeds in many other places as Verizon FiOS, Comcast and Time Warner have all unilaterally increased speeds.

We have a long way to go with data speeds, but as we can see in Austin, TX, the cable companies are capable of delivering 300 Mbps, but they only do so under stiff competition. Even AT&T can be prompted to build fiber when faced with losing a major market.

The Country Is Waking Up to the Digital Divide. The digital divide is no longer just between those who have computers and broadband and those who do not. The wider digital divide is now between communities stuck with relatively slow broadband and those with fast broadband. More and more communities who are on the wrong side of this divide are starting to demand faster broadband. Many of these communities thought they had solved the broadband issue a decade ago when they got 3 Mbps DSL or cable modems. But a decade later they find themselves with that same technology and speeds, which are no longer acceptable (and which soon may not even qualify as broadband per the FCC).

The Brains of the Network Are Moving to the Cloud. We now have the ability with software defined networking for the more expensive functions of the network to move to the cloud. One of the hardest things about bringing broadband to a rural area is that it’s not cost effective to also bring voice and cable TV. But we are seeing the beginnings of having voice switching, cable TV headends and even cellular headends moving to a cloud. This is going to turn these functions into services rather than capital requirements.

Technology is Making Everything Better. You can barely read the trade press without seeing some new technology breakthrough that will improve telecom. This year alone I have seen a dozen announcements about ways to increase the speed and efficiency of fiber. There are constant improvements in chipsets, batteries, use of spectrum, materials and processes that make it easier to deliver telecom products.

My Cellphone. I am not a sophisticated cellphone user. I don’t run dozens of apps and my main computing tools are still my desktop and my laptop. I don’t play games on the cellphone or watch videos on it. But I use my cellphone in the typical ways of keeping connected when I am away from the office. It is so convenient to be able to answer an email or look something up on the web from anywhere. But I am also thankful that I am not one of those people who sit at a restaurant with their heads buried in their cellphones.

This Blog. I am first thankful that there are people who find this interesting enough to read. Thank you all! But I am mostly thankful for the discipline that this blog has given me; the act of writing daily has reinvigorated my creative drive.


There is a lot of progress being made with biometrics and it should not be too many years until biometric techniques are the preferred way to authenticate transactions. The field has been around for many years, but the historical biometric technologies have been too expensive for widespread use. Today I report on some of the latest in biometric technology.

Behaviometrics. This is a new field that can keep track of people by their behavioral traits. For example, you can track computer users using their typing characteristics – everybody has a certain cadence and rhythm when typing and Behaviosec of Palo Alto has developed a technology that can verify that the person behind the keyboard is who they are supposed to be.

This is certainly one more security tool and is a good idea when giving people access to sensitive data. I do see a flaw in that people’s typing rhythms can change due to injury or other reasons, but this provides another general tool to know who is accessing your network.
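The core idea behind keystroke dynamics is easy to sketch in code. The toy example below is my own illustration, not BehavioSec’s actual algorithm: it enrolls a profile of average inter-key timings (in milliseconds) and accepts a login attempt only when it stays close to that profile.

```python
# Toy keystroke-dynamics check: enroll a profile of average inter-key
# timings, then verify an attempt by its mean deviation from the profile.

def build_profile(samples):
    """Average each inter-key interval (ms) across enrollment samples."""
    n = len(samples[0])
    return [sum(s[i] for s in samples) / len(samples) for i in range(n)]

def matches(profile, attempt, tolerance_ms=40):
    """Accept if the mean absolute deviation is within the tolerance."""
    dev = sum(abs(p - a) for p, a in zip(profile, attempt)) / len(profile)
    return dev <= tolerance_ms

# Enrollment: three typing samples of the same passphrase.
profile = build_profile([[120, 95, 210, 150],
                         [130, 90, 200, 160],
                         [125, 100, 205, 155]])

print(matches(profile, [122, 93, 208, 152]))  # True - same typist
print(matches(profile, [60, 240, 80, 300]))   # False - an impostor
```

A production system would use far richer features (key hold times, pressure, statistical models) and would adapt the profile over time, which is also how it could cope with the injury problem mentioned above.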

Directed Advertising. Tesco, one of the world’s largest retail chains, is introducing facial recognition at gas pumps for some of its stores. The facial recognition will determine who you are (if you are a regular customer) or classify you by sex and age, and then will display ads aimed at you. There are also billboards in Japan that change their message depending on who is walking past. These are early steps in using biometrics to pinpoint advertising aimed directly at specific customers.

Facial Recognition for Payments. China will be broadly implementing a facial recognition system that will become the preferred method for authorizing payments at stores and other places. This should be deployed during 2015 and the goal is that a person’s face becomes their PIN. Validation is supposed to be nearly instantaneous and will speed up payments while also reducing fraud.

Retina Scanning App. Very high-end security systems have used retina scans for many years. But EyeVerify of Kansas City, a leader in this field, has found a way to use a smartphone to verify a user with a quick eye scan. This can be a way to unlock your phone, but the firm is working towards making this a way for banks to verify customers and transactions.

Pre-Crime Biometrics. The Israeli firm BioCatch uses a technology that builds a profile of users to identify questionable behavior. They build a database of where you shop, what you buy, etc. to be able to spot when somebody is doing something unusual. Banks have been doing this to some extent for years but this new technology develops a far more detailed profile than banks have used in the past to spot fraud.

Fingerprint Verification. Apple introduced fingerprint verification in 2013 to allow users to lock or unlock the phone or sensitive content. Samsung is now working with PayPal to introduce similar technology in 25 countries to verify payment transactions.

India Going Biometric. A large portion of India’s population has been undocumented in that there is no equivalent there of a social security number. So the country has launched a program that has gathered fingerprints, retina scans and photographs of 500 million of its citizens in order to develop an easier way for people to be identified. They are working towards having biometrics be the normal way to identify people and are also going to make the database available to merchants for purchase verification.

Summary. One has to wonder if the methods being used in China and India, for example, would fly in the West. For instance, while the Chinese systems of identifying everybody by facial recognition can make it easier there for people to buy things, it also gives the government a way to closely track where everybody goes in public. Every cash register becomes a tool for the state to track people’s movements and one has to wonder if most of the world is ready for that level of surveillance.

Certainly there is a lot of room for improvement in security and Americans in recent polls have said that identity theft is one of their largest concerns. So one can imagine that technologies like using fingerprints on a smartphone app might be a good way to add more security for purchases.

I know I would not be comfortable with directed ads where a store flashes ads meant specifically for me. We know that Google and others have built detailed databases about us, but the idea of having that shoved in your face when these databases are matched to facial recognition feels like going too far. I would probably avoid a store that flashed an ad aimed directly at me. But everybody is different and I suspect my wife would love stores to present her with specials as she shopped. This is already being done today to some extent using your cellphone’s ID, and facial recognition expands this to everybody, not just those using smartphones.

Phase Changing Computer Chips. Researchers at the University of Cambridge, the Singapore A*STAR Data-Storage Institute and the Singapore University of Technology and Design have announced a new technology that could increase chip processing speed by as much as 1,000 times. This is made possible by replacing silicon with a material that can switch back and forth between electrical states. Such materials are called phase-change materials (PCMs). The researchers have been using a PCM based on a chalcogenide glass, which can be melted and recrystallized in as little as half a nanosecond using appropriate voltage pulses. This process allows each tiny portion of the chip to swap between a crystalline state that is conducting and a glassy state that is insulating, meaning that the chip can be reconfigured on the fly.

In these new chips the logic operations and memory are co-located, rather than separated as they are in silicon-based computers. Currently, the smallest logic and memory devices based on silicon are about 20 nanometres in size. But we’ve reached a limit with silicon chips since electrons leak if the insulating layer is too thin. PCM devices can overcome this size-scaling limit since they have been shown to function down to about two nanometers.
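As a mental model (a deliberately crude one of my own, not how a real PCM device is actually programmed), each cell behaves like a bit that a voltage pulse flips between a conducting crystalline state and an insulating glassy state:

```python
# Crude model of a phase-change cell: pulses flip the material between
# two states, and the cell is read by whether or not it conducts.

class PCMCell:
    def __init__(self):
        self.crystalline = False   # start in the glassy (insulating) state

    def set_pulse(self):
        """A moderate pulse recrystallizes the material (conducting = 1)."""
        self.crystalline = True

    def reset_pulse(self):
        """A sharp melt-quench pulse leaves it glassy (insulating = 0)."""
        self.crystalline = False

    def read(self):
        return 1 if self.crystalline else 0

cell = PCMCell()
cell.set_pulse()
print(cell.read())   # 1
cell.reset_pulse()
print(cell.read())   # 0
```

What makes the real devices remarkable is that the same switching can implement logic as well as storage, which is why the researchers can co-locate the two.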

Forever Batteries. Professor Yi Cui and his team at Stanford University have announced a huge breakthrough for batteries. His team has found a way to stabilize lithium, which could result in vastly improved battery performance. This is a big deal because lithium is a highly reactive substance and prone to overheating and catching on fire.

Today’s lithium batteries deal with the instability by using lithium as the cathode and silicon or graphite as the anode. But a battery with lithium for both terminals is far more efficient. Cui and his researchers solved the problem by building ‘nanospheres’, or protective layers of carbon domes on top of the pure lithium anode.

The result of the technology is a staggering improvement in battery life. The lithium anodes in today’s experimental batteries have a coulombic efficiency of only about 96%, meaning that they lose about 4% of their capacity each time they are recharged and die after about 25 charges. The new batteries have a coulombic efficiency of nearly 99.99%, meaning they can be recharged tens of thousands of times. And the battery is about four times more efficient in terms of the life of a charge. So this means a cellphone charge that will last for several days or an electric car that can be driven nearly 300 miles between charges. And the batteries can be recharged practically forever.
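The compounding at work here is worth spelling out (my own arithmetic; note that an efficiency of 96% is what corresponds to losing 4% per cycle):

```python
# Capacity retention compounds per recharge cycle: efficiency ** cycles.

def retention(efficiency, cycles):
    """Fraction of original capacity left after `cycles` recharges."""
    return efficiency ** cycles

print(round(retention(0.96, 25), 2))        # 0.36 - largely dead at 25 charges
print(round(retention(0.9999, 10_000), 2))  # 0.37 - similar loss takes 10,000 charges
```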

Better Storage Batteries. Imergy Power Systems is making batteries from recycled vanadium. These are large batteries used industrially to store power. The first generation batteries store 200 kilowatt hours of electricity and there are much larger batteries on the way. The big advantage of these batteries is the cost, which is down to $300 per kilowatt hour and dropping.

These are large redox flow batteries that contain a liquid electrolyte. Imergy is claiming that these batteries ought to last ‘forever’ since vanadium can serve as the electrolyte on both the negative and positive sides without degrading.

Nanoparticle Pill. Google is working on a nanoparticle pill that can identify cancer or chemical imbalances in the body. The nanoparticles are really tiny, about 10,000 times thinner than a human hair. In the blood stream they are attracted to and attach to whatever specific chemical or protein they are seeking. By tracking where the nanoparticles congregate, doctors will be able to pinpoint clusters of cancer cells or whatever the markers are looking for.

Space Elevator. Japan’s giant construction company, Obayashi Corporation, has announced plans to build an elevator into space by the year 2050. For those of you who are not science fiction fans, a space elevator is just what it sounds like: a long tether anchored to the earth and extending into space that provides a way to mechanically move materials into and out of orbit.

Obayashi says that this is going to be possible due to carbon nanotube technology. These new materials are almost a hundred times stronger than steel cables. The plans are to build an elevator that extends almost 60,000 miles into space. Once built, robotic cars could climb and descend the carbon tether to ferry people and materials affordably into space. This would provide an affordable way to let mankind build the huge ships needed to explore other worlds or to bring raw materials from space back to earth.


In the last month I have seen several announcements of groups claiming they will be launching 5G cellular in the next few years. For example, both South Korea and Japan have announced plans to introduce 5G before they host the Olympics in 2018 and 2020. Three of the Chinese ministries have announced plans to jointly develop 5G. And the Isle of Man says they are going to have the first 5G network (and before you laugh, they had the second LTE network in the world).

I have written before about the inflation of claims in wireless technologies, and so I have to ask what these groups are talking about. There is nobody in the world today who is delivering wireless that comes close to meeting the 4G specification. That spec calls for the ability to deliver 100 Mbps to a moving vehicle and 1 Gbps to a stationary customer. What is being sold as 4G everywhere is significantly slower than those speeds.

For example, OpenSignal studies wireless speeds all over the world. In February 2014 they reported that the average speed of US 4G networks was only at 6.5 Mbps in the second half of 2013, down from 9.65 Mbps the year before. The US speeds have rebounded some in 2014, but even the fastest 4G networks, in Australia, average only 17 – 25 Mbps. That is a long way from 1 Gbps.

Moreover, there aren’t yet any specifications or standards for 5G, so these announcements mean little, since there is no 5G specification to shoot for. The process to create a worldwide 5G standard hasn’t even begun and the expectation is that a standard might be in place by 2020.

I am not even sure how much demand there is for faster wireless networks. It’s not coming from cellular data for smartphones. That business in the US has been growing about 20% per year, meaning it doubles roughly every four years, and it’s expected to stay on that pace. New demand might come from the Internet of Things, from devices that want to use bandwidth from the cellular network. IoT usage of cellular networks is new and, for example, there are utilities now using cellular bandwidth to read meters. And while industry experts expect a huge increase in this machine-to-machine traffic by 2020, I’m not sure that it needs greater speeds.
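As a quick sanity check on that growth figure (my own arithmetic, not an industry source), compound growth of 20% per year doubles traffic in just under four years:

```python
import math

def doubling_time(annual_growth):
    """Years to double at a compound annual growth rate."""
    return math.log(2) / math.log(1 + annual_growth)

print(round(doubling_time(0.20), 1))   # 3.8 years
```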

The other thing we have to always remember with cellular traffic is that it handles only a tiny fraction of the total data used in the country today. Reports from Sandvine have shown that cellular traffic only carries about 1% of the total volume of data delivered to end users in the US today, and landline data usage is still growing faster than cellular data. This is probably due to the expensive data plans that cellular companies sell and which have taught customers to be frugal with smartphone data. But it’s also a function of the much slower speeds on 4G compared to many landline connections.

Another limiting factor on 4G, or 5G or any G getting faster is the way we allocate spectrum. In the US we dole out spectrum in tiny channels that were not designed to handle large data connections. Additionally, any given cell site is limited in the number of data connections that can be made at once.

So I am completely skeptical about these announcements of upcoming 5G networks. I am still waiting for a cellular company to actually meet the 4G standard – what we are calling 4G today is really a souped-up version of 3G technology. It’s very hard to foresee any breakthroughs by 2020 that will let cell sites routinely deliver the 1 Gbps that is promised by 4G. My guess is that by the time somebody does deliver 1 Gbps to a cellphone, the breakthrough is going to be marketed as 10G.

I don’t think that any of the groups that are promising 5G by 2020 are anticipating any major breakthroughs in cellphone technology. Instead the industry is constantly making tweaks and adjustments that boost cell speeds a little more each time. All of these technology boosts are significant and we all benefit as the cellular network gets faster. But the constant little tweaks are playing hell with handset makers and with cellular companies trying to keep the fastest technology at all of their cell sites.

We are not really going to get a handle on this until we have fully implemented software defined networking. That is going to happen when the large cell companies migrate all of the brains in their networks to a few hub cell sites that will service all of the cellular transmitters in their network. This means putting the brains of the cellphone network into the cloud so that making an update to the hub will update all of the cell sites in the network. AT&T and Verizon are both moving in that direction, but it might be a decade until we see a fully cloud-based cellular network.


I recently read The Fourth Revolution: How the Infosphere is Reshaping Human Reality by Luciano Floridi. He is a leading figure in modern philosophy and in this book he looks at how our relationship with information technology is changing us. Floridi believes that mankind is in the midst of profound change due to our interactions with and immersion in computer technology.

He thinks that information technology is the fourth scientific breakthrough in our history that has fundamentally changed the way that we see ourselves in relation to the universe. The first transformational scientific breakthrough was when Copernicus shook mankind out of the belief that we were the center of the universe. The second was when Darwin showed mankind that it was not the center of the animal kingdom but had evolved alongside and was related to all other life on earth. The third revolution started when Freud showed us that we are not even transparent to ourselves and that we have an unconscious side that is not under our direct control. The fourth big change in our perception of our role in the universe has come through the development of computers and information technology. Our relationship with computers and data has shown mankind that people are not disconnected and individual agents, but instead with the web and computer technology we have become an integral part of the global environment.

He labels any technology that enables the transmission of information as an ICT (Information and Communication Technology). The first ICT was writing, but we have now become inundated by ICTs such as the Internet of Things, Web 2.0, the semantic web, cloud computing, smartphone apps, augmented reality, artificial companions, driverless cars, wearable tech, virtual learning, social media and touch screens. ICTs are changing so rapidly that today’s examples will quickly become obsolete and will be replaced by many more that we can’t even imagine. Increasing computer power, smaller chip sizes and ways to handle big data mean that mankind is headed for a time when technology is indispensable to our lives and integrated into them.

His most surprising conclusion is that this new technology and our interface with it is fundamentally changing us as people. I have recently read some literature about childhood development that corroborates this concept, in that kids who are immersed in advanced technology from birth develop differently than those before them. They literally develop different neural pathways and different brain characteristics than historical mankind. He thinks we are entering an age of not only new technology, but of a new mankind.

Floridi argues that the boundaries between life online and life offline are blurring and that our kids will always be online, even if not physically connected to a computer. We already see the beginning of this in that our roles in social networks and other online activities no longer rely on us always being actively there. As computers become more and more a part of our lives we clearly will always be connected. Floridi labels this new phenomenon ‘onlife’.

Our onlife now defines a lot of our daily activities – how we shop, learn, care for our health, get entertainment, relate to other people. It affects the way we interface with the realms of law, politics, religion and finance. It even has changed the way we wage war. Floridi says that what we are experiencing as a society is more than us just using newer technologies and that the real significance is how these technologies are changing us. He says that ICTs are transforming the way that we interface with the world.

I found this book fascinating. It brings a way to understand a lot of the things we see in modern life. For instance, it gives us a way to understand why young kids seem to think differently than we do. If Floridi is right then the world is at a crucial point in its history. We still have a tiny number of primitive people on the planet who are living in pre-history. But most of the people on the planet are living in history; that is, they are of a mindset that we have had for thousands of years since the advent of writing and other forms of communication. But we also now have a generation of people who are moving into hyper-history and are becoming part of the infosphere. Children growing up in the infosphere and particularly their children will think differently than the rest of mankind. People of my generation are users of technology, but this next mankind is immersed in technology and is a part of that technology. It’s going to be interesting to see how the world deals with a generation that is fundamentally different than the rest of mankind.


I saw an article earlier this year that said that some smaller triple-play providers have decided to get out of the cable business. Specifically the article mentioned Ringgold Telephone Company in Georgia and BTC Broadband in Oklahoma. The article said that small companies have abandoned over 53,000 customers over the last five years, with most of this being recent.

I’m not surprised by this. I have a lot of small clients in the cable business and I don’t think any of them are making money with the cable product. There are a myriad of outlays involved, such as programming, capital, technical and customer service staff, and software like middleware and encryption. And all of these costs are climbing, with programming increasing much faster than inflation. And there is pressure to keep up with the never-ending new features that come along every year, like TV everywhere or massive DVR recorders. I have a hard time seeing any cable company that doesn’t have thousands of customers covering these costs.

But small cable providers are often in a bind because they operate in rural areas and compete head-to-head with a larger cable company. They feel that if they don’t offer cable they might not survive. But it is getting harder and harder for a company that doesn’t face stiff competition to justify carrying a product line that doesn’t support itself.

I’ve written several blogs talking about how software defined networking is going to change the telecom industry. It is now possible to create one cable TV headend, one cell site headend or one voice switch that can serve millions of customers. This makes me ask the question: why isn’t somebody offering cable TV from the cloud?

There are big companies that already are doing headend consolidation for their own customers. For instance, it’s reported that AT&T supports all of its cable customers from two headends. A company like AT&T could use those headends to provide wholesale cable connections to any service provider that can find a data pipe to connect to AT&T – be that a rural telephone company, a college campus or the owner of large apartment complexes.

This wholesale business model would swap the cost of owning and operating a headend for transport. A company buying wholesale cable would not need a headend, which can still cost well over a million dollars, nor technical staff to run it. In place of headend investment and expense they would pay for the bandwidth to connect to the wholesale headend.

As the price of transport continues to drop this idea becomes more and more practical. Many of my clients are already buying gigabit data backbones for less than what they paid a few years ago for 100 Mbps connections. The only drawback for some service providers is that they are too far from the primary fiber networks to be able to buy cheap bandwidth, but the wholesale model could work for anybody else with access to reasonably priced bandwidth.
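The trade can be sketched as a toy break-even comparison. Every number below is an illustrative assumption of mine, not an actual quote from any provider:

```python
# Compare the monthly cost of owning a headend (capex amortized over
# its life, plus staff and maintenance) against buying wholesale and
# paying mainly for transport. All figures are hypothetical.

def monthly_headend_cost(capex, years, monthly_opex):
    """Straight-line amortization of the headend plus running costs."""
    return capex / (years * 12) + monthly_opex

def monthly_wholesale_cost(transport_per_month):
    """Wholesale model: the transport pipe is the main recurring cost."""
    return transport_per_month

own = monthly_headend_cost(capex=1_500_000, years=10, monthly_opex=8_000)
buy = monthly_wholesale_cost(transport_per_month=5_000)
print(own > buy)   # True - under these assumptions wholesale wins
```

The point of the comparison is not the specific numbers but the shape of the trade: a fixed capital commitment becomes a recurring service cost that shrinks as transport prices fall.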

The wholesale concept could be taken even further. One of the more expensive costs of providing cable service these days is settop boxes. A normal settop box costs over $100, one with a big DVR can cost over $300, and the average house needs two or three boxes. The cost of cloud storage has gotten so cheap that it’s now time to move the DVR function into the cloud. Rather than put an expensive box into somebody’s house to record TV shows, it makes more sense to store video in the cloud, where a terabyte of storage now costs pennies.

Putting cable in the cloud also offers interesting possibilities for customers. I’ve heard that in Europe some of the cable providers give customers the ability to look backwards a week for all programming and watch anything that has been previously broadcast. This means that they store a rolling week of content in memory and provide DVR service of a sort to all customers.
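It’s easy to get a feel for how much storage a rolling week would take. The sketch below uses assumed figures (200 channels at an average 4 Mbps stream), not any provider’s actual numbers:

```python
# Storage to keep 7 days of every channel: channels x bitrate x time.

def rolling_week_storage_tb(channels, mbps_per_channel):
    """Terabytes needed to retain a rolling week of all channels."""
    seconds = 7 * 24 * 3600
    bits = channels * mbps_per_channel * 1_000_000 * seconds
    return bits / 8 / 1e12   # bits -> bytes -> terabytes

print(round(rolling_week_storage_tb(200, 4), 1))   # 60.5 TB
```

Tens of terabytes is modest at data-center scale, which is the point: the storage is cheap and shared across every customer instead of being duplicated in set-top boxes.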

The ideal cloud-based cable headend would offer line-ups made up of any mix of the channels that it carries. It would offer built in cloud DVR storage and the middleware to use it. I think that within a decade of hitting the market that such a product would eliminate the need for small headends in the country. This would shift video to become a service rather than a facility-based product.

There would still be details to work out, as there are in any wholesale product. Which party would comply with regulations? Who would get the programming contracts? But these are fairly mundane details that can be negotiated or offered in various options.

It is my hope that some company that already owns one of the big headends sees the wisdom in such a business plan. Over a decade, anybody who does this right could probably add millions of cable lines to their headend, improving their own profitability and spreading their costs over more customers. AT&T, are you listening?


I often report on how industry experts see the future of our industry. It’s an interesting thought experiment, if nothing else, to speculate where technology is moving. In 2004 the Pew Internet Project asked 1,286 industry experts to look ten years forward and to predict what the Internet would be like in 2014. I found it really interesting to see that a significant percentage of experts got many of the predictions wrong. Here are some of the specific predictions made in 2004:

66% of the experts thought that there would be at least one devastating cyberattack within the following ten years. While there have been some dramatic hacks against companies, mostly to steal credit card numbers and related information, there have been no cyberattacks that could be categorized as crippling. The experts at the time predicted that terrorists would be able to take over power plants or do other drastic things that have never materialized.

56% thought that the internet would lead to a widespread expansion of home-schooling and telecommuting. There certainly has been growth in telecommuting, but not nearly to the extent predicted by the experts. It’s the same with home schooling, and while it’s grown there is not yet a huge and obvious advantage of home schooling over traditional schooling. The experts predicted that the quality and ease of distance learning would make home schooling an easy choice for parents and that has not yet materialized.

50% of them thought that there would be free peer-to-peer music sharing networks. Instead the recording industry has been very successful in shutting down peer-to-peer sites and there are instead services like Spotify that offer a huge variety of free music legally, paid for by advertising.

Only 32% thought that people would use the Internet to support their political bias and filter out information they disagree with. Studies now show that this is one of the major consequences of social networking, in that people tend to congregate with others who share their world view. This finding is related to the finding that only 39% thought that social networks would be widespread by 2014. The experts en masse did not foresee the wild success that would be enjoyed by Facebook, Twitter and other social sites.

52% said that by 2014, 90% of households would have broadband that was much faster than what was available in 2004. At the end of 2013 Leichtman Research reported that 83% of homes had some sort of broadband connection. That number was lower than predicted by the majority of experts, but what fell even further short was the average speed that people actually purchase. Akamai reports that the average connection speed in the US at the end of 2013 was 8.7 Mbps. But this was not distributed in the expected bell curve; that average consists of a small percentage of homes with very fast connections (largely driven by Verizon FiOS and other fiber providers) along with many homes with speeds that are not materially faster than what was available in 2004. For example, Time Warner just announced this past week that they are finally increasing the speed of their base product from 3 Mbps to 6 Mbps.

32% thought that online voting would be secure and widespread by 2014. There are now a number of states that allow on-line voter registration, but only a tiny handful of communities have experimented with on-line voting. It has become obvious that there is a real potential for hacking and fraud with on-line voting.

57% of them thought that virtual classes would become widespread in mainstream education. This has become true in some cases. General K-12 education has not moved to virtual classes. Many schools have adopted distance learning to bring distant teachers into the classroom, but there has been no flood of K-12 students moving to virtual education. Virtual classes, however, have become routine for many advanced degrees. For example, there are hundreds of master’s degree programs that are almost entirely on-line and self-paced.

But the experts did get a few things right. 59% thought that there would be a significant increase in government and business surveillance. This has turned out to be true in spades. It seems everybody is now spying on us, and not just on the Internet, but through our smartphones, our smart TVs, and even our cars and the IoT devices in our homes.

The Pew Research Center continues to conduct similar surveys every few years and it will be interesting to see if the experts of today can do better than the experts of 2004. What those experts failed to recognize were things like the transformational nature of smartphones, the widespread phenomenon of social networking and the migration from desktops to smaller and more mobile devices. Those trends are what drove us to where we are today. In retrospect, if more experts had foreseen those few major trends correctly then they probably would have also guessed more of the details correctly. Within the sample there were undoubtedly some experts who guessed really well, but the results were not published per respondent, so we can’t see who had the best crystal ball.

I ran across so many cool new technologies recently that I will have to stretch talking about them over two blogs this month.

Twisted Lasers. Physicists at the University of Vienna have been able to transmit a twisted laser signal through the air. This is a fairly common practice in fiber optic cables, where multiple beams of light are sent through the fiber simultaneously and twist around each other as they bounce off the walls of the fiber.

But this is the first time that this has been accomplished through the air. The specific technology involved is called orbital angular momentum (OAM) and refers to the ability to twist light. In this case the scientists were able to make light travel in a corkscrew pattern, and they were able to intermingle two different colored beams through the air for over two miles. The technology is important because it would allow more data to be pumped through a single open-air path. It also increases security, since an eavesdropper would have multiple intertwined light paths to untangle.

Terabit Fiber. A team of scientists from Eindhoven University of Technology in the Netherlands and the University of Central Florida have developed a technology that vastly increases the bandwidth on fiber. Today the fastest commercially available fibers can transmit at 100 Gbps, but this team has demonstrated a fiber that transmits 2,550 times faster than that, or 255 terabits per second.

They accomplished this by combining several different existing technologies. First, they used multi-core fiber. Normal long-haul fibers are single-mode, meaning that each fiber can only support a signal from one laser source. But the new fiber contained seven separate ‘cores’, or available laser paths. For now this kind of fiber is expensive, but the cost would drop through mass production.

The team of scientists also used several data transmission techniques to boost the speed even further. They leveraged a technique called spatial multiplexing (SM), where data signals from multiple sources are transmitted in parallel, which can boost the speed up to 5.1 terabits per path. This is somewhat akin to the time division multiplexing used for T1s, which opens a slot for each data bit so that everything can be packed tightly together. The team also used wavelength division multiplexing (WDM), which separates and transmits different data streams using different wavelengths of light. Together these techniques allowed them to create 50 separate paths through the fiber. This kind of breakthrough is probably a decade or more away from commercial deployment, but it lets us foresee fiber paths that can handle vast amounts of data where it is really needed, such as in undersea fiber routes and inside supercomputers.
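For those who like to check the arithmetic, the reported figures reconcile neatly: taking the 2,550× figure at face value against the 100 Gbps baseline gives 255 Tbps in total, which spread over the 50 multiplexed paths works out to the quoted 5.1 Tbps per path. A quick Python sketch:

```python
# Figures reported in the article; the reconciliation is simple arithmetic.
baseline_gbps = 100   # fastest commercially available fiber today
speedup = 2_550       # the demonstrated multiple over that baseline
paths = 50            # separate paths created by combining SM and WDM

total_tbps = baseline_gbps * speedup / 1_000  # convert Gbps to Tbps
per_path_tbps = total_tbps / paths

print(total_tbps)     # 255.0 Tbps in aggregate
print(per_path_tbps)  # 5.1 Tbps on each of the 50 paths
```

Nothing deep here, but it is a useful sanity check that the headline speedup and the per-path numbers describe the same demonstration.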

Frozen Light. Another team of researchers, this one from Princeton, report that they have been able to freeze light into a crystal, stopping photons and gathering them into a crystalline form. This is the first time that anybody has been able to crystallize photons.

This was accomplished by building a structure of superconducting materials that acted like an artificial atom. They placed the artificial atom close to a superconducting wire containing photons. By the rules of quantum mechanics, the photons on the wire inherited some of the properties of the nearby atom and began interacting with each other, a bit like the particles in a solid. So far this has only been done to create very tiny crystals. But the hope is that the technique might be used to create larger crystals, which would lead to a whole new category of exotic materials with weird properties. And who knows what that might lead to?

Personal RFID. One a more down to earth note, Robert Nelson decided to implant an NFC RFID chip into his hand. After it healed he programmed it to unlock his cell phone. All he has to do is hold the phone near his hand and it unlocks. He is investigating adding more chips and is working towards implementing activities like opening the garage door with a wave of the hand or unlocking and starting his car. He sees this technology as the ultimate in personal security since only your own chip would be able to control your devices.

Smartphone Spectrometer. Finally, I saw a device that turns your cellphone into a spectrometer. A company called Public Lab has introduced a product called the Homebrew Oil Testing Kit, and the first use for the device is to find out whether your drinking water contains any contaminants from fracking. It consists of a refractor that fits over the camera lens of a cellphone. The device uses a Blu-ray laser to illuminate the water sample you want to test, which creates a spectral image that is captured by your phone.

Of course, unless you are a chemist you don’t know how to read spectral images, but there is an online database that can quickly be used to identify any contaminants in your water. Over time the device could be used to test a far wider range of pollutants and other substances, but for now the makers seem to be concentrating on the fears that many people have about fracking.

Over the last few weeks C-Spire has begun rolling out gigabit fiber in Mississippi. Unlike Google, which is mostly concentrating on large and fast-growing cities, C-Spire is rolling fiber out to small and rural towns throughout Mississippi. The C-Spire story is an amazing success. C-Spire is part of a holding company that includes a large wireless company, a large CLEC and a significant fiber network throughout the region. The company got its start with the Creekmore family. Wade and Jimmy Creekmore are two of the nicest people in the telephone industry, and they started out working at the family business, which consisted of Delta and Franklin Telephone Companies, two small rural ILECs.

When the FCC distributed some of the first cellular spectrum it did so through a lottery. The company won spectrum in that lottery and started Cellular South, which has grown to become the sixth largest cellular company in the country; many people are surprised when they visit Mississippi and find that C-Spire is more dominant there than AT&T or Verizon. The company was rebranded a few years ago as C-Spire Wireless. Most of the other sizable independent wireless carriers, like Alltel, have since been swallowed by the two big wireless companies.

A little over a year ago C-Spire announced that it was going to roll out gigabit fiber to towns in the region. They modeled this after Google: towns that signed up enough potential customers qualified to get fiber. A number of towns have now qualified and many others are striving to get onto the C-Spire list. In the last few weeks the company began turning up gigabit service in small towns like Starkville and Ridgeland.

The company is offering 1 Gbps data service for $70 a month, combined Internet and home phone for $90 per month, Internet and HD digital TV for $130 per month, and the full triple play for $150 a month. These are bundled prices, and customers who do not have C-Spire wireless will pay an additional $10 a month for each package.

It is really refreshing to see somebody investing back into the communities that supported them for many years. It’s pretty easy to contrast this to the big telcos and cable companies which are not reinvesting. C-Spire is building needed infrastructure, creating jobs and bringing a vital service. I view this as a true American success story. This is a win for both the Creekmores and for the people of Mississippi.

This is not the only place in the country where telephone company owners are reinvesting in their communities. There are hundreds of independent telephone companies and cooperatives around the country quietly building fiber and bringing very fast Internet to some of the most rural places in the country. For example, Vermont Telephone Company grabbed headlines when they announced gigabit fiber for $35 per month. There are wide swaths of places like the Dakotas where fiber has been built to tiny towns and even to farms.

What these companies are doing is great. They are doing what businesses are expected to do, which is to modernize and grow when the opportunity is there. This is especially what regulated utilities should be doing, since they have benefitted for decades from guaranteed profits. But unfortunately, most of rural America is served by AT&T, Verizon, CenturyLink and other large telephone companies like Frontier, Windstream and FairPoint. These companies share at least one thing in common: they are public companies.

It seems like public companies in this country are unable to pull the trigger on investing in infrastructure. The exception is Verizon, which has invested many billions in FiOS, but even Verizon has stopped building new fiber, and it is not investing in small towns like Starkville. Rather than investing in rural America, the large companies are doing what they can to hold down costs there. In fact, AT&T has told the FCC that it would like to retire all of its rural copper lines within a decade and replace them mostly with cellular.

The Creekmores aren’t building fiber just because it’s the right thing to do. They are doing it because they see a solid business plan in investing in fiber. They will make money with this venture, which is the way it is supposed to work. But the public companies like AT&T only seem to invest in fiber when they face a big competitive threat, as AT&T does in Austin, Texas. I get the sense that CenturyLink would build fiber if it had the financial resources, but most of the big companies are doing the opposite of reinvesting in rural places.

Unfortunately, the big companies are driven by stock prices and dividends. They don’t want to take the hit from making large investments, because building depresses profits for a few years. And that is the real shame, because in the long run these large companies would increase profits if they reinvested the billions that they instead pay out as dividends. They would end up with fresh new networks that could generate profits for the next century.

It’s going to be interesting to see how gigabit fiber transforms the small pockets of rural America lucky enough to get it. The broadband map of the country is a real hodgepodge, because right next to areas that have fiber are often areas with no broadband at all other than cellular or satellite.

It is also going to be interesting to see over the next twenty years how the two types of areas fare economically. There is a company in Minnesota, Hiawatha Broadband, that has been building fiber to small towns for a decade, and they claim that every town where they have built a network has grown in population while every surrounding town has shrunk. They have been at this about as long as anybody, so their evidence is some of the earliest proof that having fiber matters. Within another decade we are going to have evidence everywhere, and we will be able to compare the economic performance of rural areas with and without fiber.


As the FCC crawls slowly towards a decision on net neutrality, I thought it would be useful to talk a bit about forbearance. Forbearance means refraining from doing something, and all of the proposals to protect net neutrality through Title II regulation require forbearance from some of the FCC’s rules.

The FCC is somewhat unusual among regulators, because Congress has given it the right of forbearance, meaning the FCC can selectively decide when to apply certain laws and regulations. Most federal agencies don’t have this power. But it makes sense for the FCC, since it regulates companies as diverse as cable companies, telephone companies, cellular companies, fiber networks, microwave companies and a number of other niche technologies. It has always been obvious that rules that make sense for one of these industries might not make any sense when applied to another.

If the FCC were to put broadband providers under Title II, that would subject them to all of the rules that are still in place from the Communications Act of 1934 as well as many of the rules in the Telecommunications Act of 1996. It is the fear of having to comply with all of these rules that is causing the harsh reaction of ISPs to the idea of being regulated. (Well, that, or just the idea of being regulated at all.)

Let’s look at one example of the kinds of rules required by the Communications Act of 1934. That Act requires all telephone companies to issue tariffs. People tend to think of a tariff as a price list and a description of the products offered by a telephone company. But tariffs are much more than that. They include details of the way that a carrier must interact with its customers. Tariffs define things like how much notice you have to give a customer before you can disconnect them for non-payment. They require a carrier to give notice before changing rates, meaning that rates can’t be changed on the fly but must wait for a period of time before being implemented. Tariffs also require nondiscrimination between customers, and that might be the part of tariffs that scares ISPs the most, since they routinely offer different deals to different customers.

Additionally, every state has developed specific rules for what must be contained in tariffs filed in that state. This means that a nationwide ISP would have to file a different tariff in each state and follow different rules in each state. If forbearance is not applied to these parts of Title II then ISPs would not just be regulated by the FCC, but by each of the fifty states.

There are many other parts of Title II that would not make sense to apply to ISPs. For example, there are sections of the various Acts, such as those protecting customers from obscene phone calls or requiring the provision of operator services, that obviously don’t apply to data services.

But there are other requirements that have the ISPs running scared. For example, the Telecommunications Act of 1996 requires the large telephone companies to unbundle their networks and to give competitors access to them. And this does not just apply to telephone lines, but also to DSL. There is no reason why this could not be applied to cable companies to bring competition into the data market. And there are related rules that regulate things like collocation and that require interconnection between carriers that exchange voice and data traffic.

There are yet other portions of the Title II rules where it is not clear whether forbearance ought to be applied. For example, the FCC requires jurisdictional separation of revenues and costs to determine what is under the control of the FCC versus the control of the states. Would the FCC just declare broadband to be an interstate service to keep it all under its control? That is what was done with DSL, and yet the states are still involved in many aspects of regulating DSL.

It appears to me that forbearance in this case is going to be extremely complicated. There are repercussions to deciding to forbear or not to forbear from different parts of the existing telecom rules. It’s a huge puzzle to solve, and I am going to guess that every decision to forbear or not will present a chance for a legal challenge.

But the FCC forbears all of the time. In fact, there is a legal process that allows carriers to ask for forbearance from a specific rule, and if the FCC does not act within a year the forbearance is deemed granted.

We already know that Verizon and AT&T are threatening to sue the FCC should it try to regulate broadband under Title II. Even if the FCC wins such a challenge, it should expect a decade in which ISPs constantly ask for additional forbearance from whatever regulations the FCC chooses to apply to broadband. If nothing else, this sounds like a full employment act for telecom lawyers.