When ONF Executive Director Dan Pitt invited me to contribute a blog post, it brought to mind our interaction in the summer of 2012 on how to treat SDN in the seminal NFV White Paper I was then editing. The operator co-authors were keen to ensure that SDN and NFV were positioned in the paper as complementary. This was important because we wanted to create momentum for NFV by highlighting use cases that did not require the then-perceived complexity of SDN. As soon as the ETSI NFV Industry Specification Group (NFV ISG) was launched, we engaged with ONF, recognizing its key role in championing an open SDN ecosystem. And in 2014 the NFV ISG entered into an MoU with ONF to facilitate joint work.

The vision for NFV was compelling because the benefits could be readily attained. By replacing network appliances based on proprietary hardware with virtualized network functions (VNFs) running on industry standard servers, operators could greatly accelerate time to market for new services and streamline operations through automation. Moreover, important NFV use cases (e.g. virtualized CPE) would not require massive systems upgrades — a huge barrier to innovation in telecoms. We are seeing this first-hand at CableLabs, where we have been able to prototype virtualized CPE for business services and home networks on a two-month development cycle.

In contrast, the simplified definition of SDN — the separation of the control plane from the data plane — in my mind does not adequately convey the compelling benefits of SDN. The term ‘Software Defined Networking’ should mean just that: every element of the network, including the VNFs and network control, should be implemented within a fully programmable software environment, exposing open interfaces and leveraging the open source community. This is the only way to create an open ecosystem and to unleash a new and unprecedented wave of innovation in every aspect of networking.

NFV releases network functions “trapped inside hardware” (a description I stole from an HP colleague), achieving tremendous benefits. But VNFs must be dynamically configured and connected at scale to deliver tangible value. While today’s telecommunications operations support systems (OSS) are adequate for static NFV use cases, the real potential for NFV to transform networking can only be realized through SDN control. Consequently, SDN represents much more than the mere separation of control plane and data plane.

Given that telecommunications networks are deployed at massive geographic scale, it is a hard sell to convince thousands, or even millions, of customers that their services will be migrating to a new network platform where those services will not be quite the same but prices won’t go down. Couple that with the significant time and cost to upgrade the OSS, wide-ranging changes to operational processes, the need to validate that the new platforms are sufficiently stable and reliable, and regulatory obligations, and it is not surprising that there is hesitancy to contemplate significant telecommunications network transformations.

Consequently, the telecoms industry has resorted to decades of incremental network upgrades, piling legacy functionality on top of legacy functionality to avoid the costs and risks of wholesale network and services migration. In the face of these realities, SDN was perceived to offer insufficient benefit to justify significant investment except in niche areas where it could be overlaid on top of existing systems. Furthermore, the idea of logically centralized SDN control is very scary to network designers who don’t readily understand abstract software concepts and who lose sleep striving to deliver reliable connectivity at massive scale, under relentless downward pressure on costs.

Just over two years into the NFV revolution, it is clear that the emergence of NFV has galvanized the industry to embrace software-based networking; short-circuiting a transition that might otherwise have taken years. The revelation that NFV can be deployed in digestible chunks, without massive system upgrades has forced network designers to take notice. After all, it is difficult to ignore a pervasive industry trend when vendors’ product plans have morphed into software roadmaps!

Given that NFV is now accepted by all major network operators and some have already made significant announcements, there is no turning back. Leading vendors have committed to NFV roadmaps and analysts talk about ‘when’ and not ‘if’ NFV will be deployed. More importantly, SDN and NFV are now frequently discussed in the same breath. In my mind, the distinction between NFV and SDN is becoming an artifact of history, and the terms will ultimately be subsumed by a software-based networking paradigm, which itself will emerge as an integral aspect of Cloud technology.

The emergence of NFV with SDN is accelerating the evolution of cloud technologies to satisfy the stringent requirements of software-based telecommunications networks. Whereas a web service could momentarily stall with minimal customer impact while a virtual machine reboots, some business-critical network services cannot tolerate loss of connectivity even for a few milliseconds. Therein lies both challenge and opportunity. Challenge because meeting stringent telecommunications availability and performance requirements is not easy as evidenced by the ETSI NFV ISG’s deliberations. Opportunity, because I foresee an unprecedented wave of telecommunications innovation on a par with the birth of the Internet.

Carrier-grade network resilience (e.g. 5-nines and beyond) will be achieved by pooling virtualized resources. Fault management will be supplanted by autonomic self-healing networks that can not only withstand equipment failures but can even rapidly recover from large-scale natural disasters by instantly migrating network capacity to remote locations, as demonstrated by NTT DOCOMO et al. in the aftermath of the Fukushima disaster. And exciting new routing paradigms such as intent-based networking and content-based networking will become feasible much earlier, with innovation galvanized by the potential for imminent experimentation on deployed infrastructures. I could go on…

The genie of software-based networking — where synergies between NFV and SDN result in significantly greater capability than either could deliver alone — is now truly out of the bottle. The ultimate challenge is to encourage growth of an open telecommunications ecosystem, where operators and vendors can work together to create and deliver value to their customers. Energized by the NFV ISG and ONF, among other industry groups, and open source projects that are becoming increasingly important, the reality is just around the corner.

Don Clarke is Principal Architect for Virtualization Technologies at CableLabs and Chairman of the ETSI NFV ISG Network Operator Council.

CableLabs Tek Stadium 2015 will form an integral part of NCTA’s INTX — the Internet & Television Expo — to be held May 5-7 in Chicago at The McCormick Center. We encourage you to be a part of this new destination for the digital economy.

Tek Stadium is an educational and technical exhibit designed to showcase the technology, products, services, and applications that enrich the digital universe.

With 6470 square feet of booth space, offering demo space and VIP booth tour access, CableLabs will host more than 30 demonstrations covering a wide range of areas including video, broadband, wireless, the internet of things, security and much more. Tek Stadium is your opportunity to be a part of the larger INTX.

Top Reasons why you should join us at Tek Stadium 2015

A turnkey solution for your business including booth design, setup and connectivity

Tours of Tek Stadium by industry executives, analysts, and government & policy officials to raise awareness of your business to key audiences

A crossroads for media interactions

Private meeting rooms for you to greet your target customers in a professional setting

CableLabs web and social media presence to amplify activities at Tek Stadium throughout and after the event

Join Amdocs, Envivio, ESRI, Front Porch, Irdeto, JetHead Development, S3 Group, Zodiac Interactive, just to name a few, for exposure to top companies and influential players in the industry and beyond!

The modern-day cable industry is represented by a complex framework of legacy data systems. Problems arise in the integration of these systems.

Traditionally, the industry has relied on project-based integration approaches, which are often architected in isolation, removed from a more holistic enterprise view. Cable operator business owners and technology partners often find themselves implementing solutions that solve their immediate needs in the most pragmatic way possible, without consideration for a larger enterprise-centric strategy. Project-based integrations provide a simple and quick solution, but at the expense of long-term negative impacts on future integration attempts. These isolated solutions pile up and eventually create a data integration monster that hinders flexibility for future development and creates ongoing costs. The worst part is that every new project feeds the monster a little more, leading to delays, increased development costs, and a growing number of headaches.

The landscape is changing as the demand for new products and interconnected systems grows and network virtualization becomes a reality. Cable operators’ product offerings are evolving and extending through configuration and personalization. Opportunities to innovate, rising consumer expectations and new competition are driving an increase in service provider system integration activity. Effective operational support requires more integration, more interfaces, and more data exchange than ever before. The solution lies in using technologies in conjunction with a shared information framework to create unity across projects, people, and data sources.

Technology paired with a cable-centric information framework, which provides an enterprise view of data integration, can put the cable industry on a path to not only stopping the rising cost of point-to-point integration, but also overcoming the on-going cost incurred from past project-based integrations and stepping into the future with a holistic view on data integration.

Taking a Different Integration Approach

Rather than taking the approach of point-to-point, project-based integration, operators can achieve greater scale through new integration approaches.

Cable operators have an opportunity to move away from building interfaces on a per-project basis and start developing from an industry-wide specification—enabling agile data integration at the enterprise level and across the industry.

CableLabs, in collaboration with our members and leading industry suppliers, has developed a cable information framework that allows cable operators to move away from the previous approaches, and move towards an enterprise view of data integration.

Impact of a New Approach

With an enterprise information integration approach, similar to those utilized by Google, Apple and Netflix, the systems and the business get the data they need and technology groups are able to provide greater value. Technology teams are able to deploy data services that can be easily reused for all applications without rework, resulting in:

Reduced Costs: Improved efficiency from cutting development and maintenance costs

Increased Productivity: Lowered risk and improved data quality

Leveraging the cable information framework can accelerate the transition to a more agile view of data exchange. Eliminating point-to-point integrations in favor of a hub-and-spoke architecture will provide greater agility, more reuse, faster product launches, a data-driven business, and ultimately a better customer experience. With an enterprise approach to information architecture, each system can exchange information in a common format comprehensible to all applications. This loosely-coupled architecture will minimize the impact of change by avoiding the per-project approach. Using a cable information architecture will enable rapid prototyping and development and provide agility and reuse.
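To make the hub-and-spoke idea concrete, here is a minimal sketch, assuming a hypothetical canonical record and made-up billing and provisioning schemas: each system translates once to and from the shared format, rather than maintaining pairwise point-to-point mappings.

```python
# Minimal hub-and-spoke sketch: systems exchange a canonical record instead
# of N*(N-1) point-to-point translations. All field names are illustrative.
CANONICAL_FIELDS = {"customer_id", "name", "service_tier"}

def billing_to_canonical(rec: dict) -> dict:
    # One adapter per system: billing's schema -> canonical schema.
    return {"customer_id": rec["acct_no"], "name": rec["acct_name"],
            "service_tier": rec["tier"]}

def canonical_to_provisioning(rec: dict) -> dict:
    # Canonical schema -> provisioning's schema.
    return {"subscriber": rec["customer_id"], "plan": rec["service_tier"]}

# Billing talks to provisioning through the hub, never directly.
billing_record = {"acct_no": "c-42", "acct_name": "Acme", "tier": "gold"}
canonical = billing_to_canonical(billing_record)
print(canonical_to_provisioning(canonical))
```

Adding a new system then means writing one pair of adapters to the canonical format, not one per existing system.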
To find out more about how your team can utilize cable frameworks, please contact dia@cablelabs.com.

Voice traffic has consumed the airwaves of cellular and Wi-Fi networks for years, popular with subscribers and a source of revenue for operators. Wi-Fi calling is rapidly expanding as a valued service for subscribers, and operators who offer Wi-Fi calling on their own Wi-Fi networks are uniquely positioned to provide a superior Wi-Fi calling user experience. Operators have significantly expanded their networks while greatly improving performance and ease of access for subscribers. As a result, operators can successfully offer Wi-Fi-only voice services, or Wi-Fi-preferred services in cooperation with a macro cellular operator. Furthermore, the deployment of LTE networks has made handovers between Wi-Fi and cellular operators much more achievable compared to legacy 3G networks. What are the recent developments to be aware of? How can these developments be leveraged?

Voice over Wi-Fi

Manufacturers, operators and the Wi-Fi Alliance (WFA) are taking several initiatives to make voice calling on Wi-Fi substantially more available, reliable and easier to use. Operators are expanding the size and capacity of their networks, and adding 5 GHz radios, to help ensure sufficient network capacity. Device manufacturers are improving the voice-calling-friendly Wi-Fi capabilities of their smartphones. Through CableLabs testing services, we have seen that device manufacturers are also delivering improved Wi-Fi RF performance that enhances voice calling quality and connectivity. Read more on RF performance tests in a past blog post, “Are all Wi-Fi Channels Created Equal?” The WFA has a number of device certifications in place, and in development, that help make Wi-Fi calling much more seamless with reliable performance. Recent industry trends indicate accelerating growth of voice calls over Wi-Fi networks.

All IP Wi-Fi and Cellular Handovers Thanks to LTE

LTE is a 4G cellular air interface well suited for VoIP calling. Unlike circuit-switched 3G networks, LTE provides low-latency, all-IP mobile access. VoIP services, including VoLTE, can be deployed as an application over the LTE data service. Therefore, a single VoIP application can be used over both LTE and Wi-Fi access. Handovers between Wi-Fi and LTE networks are about managing the IP address (or network address) presented to the VoIP client on the smartphone. Operators can manage this either in their core networks or at the VoIP application itself. Today’s options with LTE are much simpler than attempting handovers between IP-based Wi-Fi networks and circuit-switched 3G voice networks.
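To illustrate the application-level option, here is a minimal sketch, assuming a SIP-style client: the app watches the device’s local IP address and re-registers whenever it changes, so the session follows the device across a Wi-Fi/LTE transition. The re_register() helper is hypothetical; a real client would use an actual SIP stack.

```python
# Sketch of application-managed handover: detect an IP address change
# (e.g., Wi-Fi -> LTE) and re-anchor the VoIP session to the new address.
import socket
import time

def current_ip() -> str:
    """Discover the local IP the OS would use for outbound traffic."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.connect(("8.8.8.8", 53))  # UDP connect sends no packets
        return s.getsockname()[0]

def re_register(ip: str) -> None:
    """Placeholder: a real client would send a SIP REGISTER via its stack."""
    print(f"re-registering VoIP client from {ip}")

def monitor(poll_seconds: float = 1.0) -> None:
    last = current_ip()
    while True:
        time.sleep(poll_seconds)
        ip = current_ip()
        if ip != last:          # network changed: Wi-Fi <-> LTE handover
            re_register(ip)     # anchor the session to the new address
            last = ip

if __name__ == "__main__":
    monitor()
```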

It is not strictly necessary for the Wi-Fi operator to arrange a relationship with an LTE operator in order to provide continuous voice service to subscribers across Wi-Fi and LTE networks. A relationship can, however, provide additional levels of network integration using standardized interfaces that may improve the handover experience depending upon the capabilities of the LTE network.

Device Configuration Impacts User Experience

Device configuration has a direct impact on the user experience. Operators have deployed over-the-top (OTT) VoIP applications on mobile devices to offer a compelling voice service on either Wi-Fi or LTE networks. These applications may also provide service across Wi-Fi and LTE networks with varying degrees of user involvement. OTT applications often include their own custom dialers.

Operators that build direct relationships with device manufacturers may implement further optimizations to enhance the user experience. The mobile device can be optimized to support faster handovers using air interface parameters not normally available to applications. The device can be customized to exploit mobility and network selection optimizations that may include additional interfaces with network servers. Furthermore, the device can make use of the native dialer for voice services, regardless of Wi-Fi or LTE access.

Network Selection

The best user experience starts with network selection. When should a Wi-Fi or LTE network be automatically selected by the device? Network selection per operator policy can provide subscribers more seamless and reliable voice calls. The operator’s network can either direct the smart phone to the proper network, or the operator can provision a policy so that the device can execute network selection. Many vendors offer proprietary solutions for network selection. Wireless forums such as the Wi-Fi Alliance and 3GPP are specifying standard device interfaces to support operator provisioned network selection policy.
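As a rough illustration of what device-executed, operator-provisioned network selection might look like, here is a hedged sketch; the thresholds, field names, and policy shape are illustrative assumptions, not any standardized or operator-specific policy format.

```python
# Toy network selection policy: prefer Wi-Fi for voice unless its signal
# is too weak, in which case fall back to LTE. Thresholds are invented.
from dataclasses import dataclass

@dataclass
class NetworkState:
    wifi_available: bool
    wifi_rssi_dbm: int      # e.g., -60 is strong, -85 is weak
    lte_available: bool

@dataclass
class Policy:
    prefer_wifi: bool = True
    min_wifi_rssi_dbm: int = -75   # below this, Wi-Fi is too weak for voice

def select_network(state: NetworkState, policy: Policy) -> str:
    wifi_usable = (state.wifi_available
                   and state.wifi_rssi_dbm >= policy.min_wifi_rssi_dbm)
    if policy.prefer_wifi and wifi_usable:
        return "wifi"
    if state.lte_available:
        return "lte"
    return "wifi" if state.wifi_available else "none"

# Example: weak Wi-Fi signal, so the policy steers the voice call to LTE.
print(select_network(NetworkState(True, -82, True), Policy()))  # -> "lte"
```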

Summary

Wi-Fi calling is growing rapidly. More operators offer voice over Wi-Fi services. Wi-Fi operators are expanding the reach of their Wi-Fi networks while improving performance, which can provide superior service to their Wi-Fi calling subscribers. LTE networks provide an opportunity to offer simplified and more seamless all-IP handovers between Wi-Fi and cellular networks. Industry trends among operators, manufacturers and wireless forums point to greater use of voice over Wi-Fi by subscribers, which means increased opportunities for operators.

In 2013, CableLabs embarked on an ambitious project to re-imagine its Louisville, Colorado location into a work environment that better aligns with CableLabs’ growing innovative culture and strong sense of collaboration. Traditional high walled cubes and offices were no longer conducive to the level of interaction, flexibility, and innovation that CableLabs’ team members, projects, and visitors need.

CableLabs’ CEO Phil McKinney articulated three “must achieve” goals, establishing the underpinnings of the architectural design and function of the remodeled space:

Growth – Accommodate our growth within the existing building constraints

Excitement/Energy – Create a new and exciting space with team member amenities such as open space, natural light, and café style gathering areas

We remodeled while still using the building. Teams were moved across the street into overflow offices during the different remodeling phases. One year and six days from initiating construction, all three phases of the remodel were completed within budget, with all staff relocated into their newly remodeled work environment on January 6, 2015.

In the same 79,186 sq. ft. Louisville office space, the redesigned floor plan:

Increased seating from 204 seats to 237 modernized workstations

Added open, flexible work areas for project, experimentation, and innovation work

Reception Area

Open Space

Modernized Workstation

Creede Conference Room

Lessons Learned

Experiment early, apply learnings – During 2013, CableLabs opened a new Silicon Valley office and lab facility in Sunnyvale, California. During this time, experiments were also set up in various parts of the Louisville, Colorado building with lower-height cubicles, open-style meeting space furniture, glass wall modular partitions, whiteboard wall paint, plant walls, AV, and other ideas under test. Some ideas worked and others were failures. All of those experiences were invaluable in setting the direction for the Louisville workplace redesign.

Strong team with collaborative relationships – One-year construction projects rarely complete on time and within budget. The tone of the remodel project was set up front: collaboration and teamwork are essential to our success. This greatly influenced the selection of the architect, construction and project management firms, and other partners, resulting in very strong, collaborative relationships across the remodel team. This proved invaluable in many aspects of the project, including logistics and coordination across the three construction phases, communicating with and intently listening to staff during the project, getting decisions made quickly, managing construction noise during business hours, resolving unexpected problems as they occurred, managing within the budget, and dust mitigation in the office areas, labs and computer rooms.

Create abundant meeting and collaborative work areas – The new workplace design very intentionally eliminated offices and lowered cube walls to create more openness and increase communications and transparency. The approach also created concerns about whether meeting areas, and quiet locations for work, phone or conference calls, and discussions were sufficient.

To foster this new collaborative style of work environment, six types of work areas were designed into the new floor plan: scheduled conference rooms, unscheduled 2-4 person huddle rooms, phone rooms, small furniture amongst cubicle areas, café style open seating areas, and flexible workspaces such as the Garage and the Carport. Since occupying the remodeled space, team members report they rarely if ever have difficulty finding a place to meet, talk, or work.

Self-forming teams innovate solutions – With the move to all workstations and no offices, staff expressed concern that they would spend too much time searching for available huddle or phone rooms. A challenge went out and a small, self-directed team formed, consisting of a few engineers, developers, and administrative staff. Utilizing Agile and Lean techniques, the team quickly created the Huddle web app. Using IoT motion and door sensors, floor plan JPEG images, and information already in Active Directory, Huddle provides a quick at-a-glance view of available huddle and phone rooms. Huddle also makes it easy to locate team members’ workstations, meeting spaces, printer locations, and staff contact information.
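The core of such an app can be quite small. Here is a hedged sketch of the idea, not the actual Huddle implementation: sensor readings are aggregated into a simple availability view served over HTTP. The room names, data shape, occupancy rule, and route are assumptions for illustration.

```python
# Illustrative room-availability service: aggregate motion/door sensor
# readings into an at-a-glance view, in the spirit of the Huddle app.
from flask import Flask, jsonify

app = Flask(__name__)

# In a real deployment this would be fed by IoT motion/door sensors;
# here it is a static stand-in.
SENSOR_READINGS = {
    "huddle-201": {"motion": False, "door_closed": False},
    "huddle-202": {"motion": True,  "door_closed": True},
    "phone-1":    {"motion": False, "door_closed": True},
}

def is_available(reading: dict) -> bool:
    # Assumed rule: a room is occupied if motion was seen or the door is shut.
    return not (reading["motion"] or reading["door_closed"])

@app.route("/rooms")
def rooms():
    # e.g. {"huddle-201": true, "huddle-202": false, "phone-1": false}
    return jsonify({room: is_available(r) for room, r in SENSOR_READINGS.items()})

if __name__ == "__main__":
    app.run()
```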

A second example occurred with the redesign of the lab work area. This was not part of the original office space remodel scope, but technicians working in the lab expressed their desire to create more meeting space, decrease equipment noise, and add similar workstations to their work area. A self-forming team of lab technicians and engineers came up with the plan and budget. The team proposed and then implemented the solutions themselves, reorienting portions of the lab, decreasing noise, and repurposing a small storage area as new meeting space.

Work style change is cultural change – Changing from high cubes and offices to a more open, collaborative and flexible workspace is as much, if not more, about culture change than it is about moving to a new floor plan. The new work environment changes our work style, increasing the frequency of and number of locations for interaction. It pulls staff out of cubes and offices and into visible meeting and unscheduled work areas, increasing opportunities for creative collisions amongst staff of different disciplines and skill sets. In many ways, moving to this type of work style is something that can only be fully appreciated by experiencing and working in an open office environment. Staff with experience working in startups and other tech companies can assist in this culture change by sharing their experiences with others, and talking with coworkers about reducing distractions in this type of environment.

More photos of the new space

Torreys Peak Huddle Room

West End Break Room

Open Space

Larkspur Conference Room

Garage Conference Room

Inside Garage Conference Room

East End Break Room

Denver Conference Room

Demo Room

Cafe on the 1st Floor

Mitchell Ashley is Vice President of Information Technologies at CableLabs.

Advances in nanotechnology, internet of things, 3D printing, personalized medicine, genomics, and big data are creating a convergence that will allow lower-cost, more effective, and more convenient medical practices to become the norm over the next few years. These advances will change the medical landscape significantly, and create large opportunities for those who can integrate enabling and underlying technologies.

Environmental and Demographic Challenges

Now is a very interesting time in medical technology. Researchers at McGill University and the UCLA Fielding School of Public Health analyzed the efficacy of health care systems across the world and found the U.S. ranks 22nd out of 27 high-income nations when it comes to increasing life expectancy (per dollar spent), meaning the US health care market is especially inefficient. In addition, in the next twenty years, due to an aging population, the US health care system will for the first time have fewer payers than payees – more people will be on Medicare and Medicaid than paying into the system. An aged population requires different (and frequently more expensive) services, which will add economic pressure on the system. For this reason, the next few years will see a shift towards demonstrable value in services, as well as shifts in the technological and entrepreneurial landscape, with a goal of providing more services at a lower cost (also called medical efficiency) – and thanks to technological innovation, this should come without compromising healthcare outcomes. Although the landscape may appear dismal, technological opportunities may save the day.

Technological Factors at Work

Several technological factors are at work right now that should help make this a reality:

Nanotechnology is becoming mature, especially as applied to in-the-field testing. Several technologies are currently being developed and tested, including a portable dengue fever test and HIV test. By 2020, most blood tests that previously required a trip to a regional lab may be available from anywhere, at a very low cost.

The Internet of Things (IoT) and 3D Printing are creating an innovation environment for devices where the cost of prototyping a new device has dropped over 75% in the last 3 years, with a similar drop in cost of end user healthcare devices. 3D printing has also allowed for the creation of customized health care, such as custom prosthetics.

Lower costs due to the above should make possible near-continuous testing of such things as blood pressure, blood glucose and hormone levels, leading to significantly better well-care outcomes for patients with diabetes, high blood pressure, and hormonal issues – the three most common chronic issues in the population.

In addition, low cost remote devices will allow better follow-up and post-procedure compliance on the part of patients. A recent study showed that average compliance after hospital stays is less than 50%, mostly because of inability to remember or follow post-care instructions. Several companies are developing software that is used both in-hospital and once the patient is home (for example, GetWell Networks), and will be integrating these systems with home care products that provide reporting and alerting on everything from outpatient activity levels to pharmaceutical consumption, allowing for far more comprehensive and effective follow-up care and significantly better outcomes.

Electronic Health Records (EHR) are rapidly getting standardized, and devices are beginning to interoperate more effectively with these systems. This allows big data analysis and patient monitoring automation at levels not previously seen.

Genomic testing is becoming available, allowing “personalized medicine” – testing against a user’s genetic information to determine whether a treatment is likely to work for an individual (rather than statistically across a broader swath of the general population).

These technological factors will enable more efficient analysis of patient records, which in turn allows “continuous analysis” of medical device, pharmaceutical, treatment, and procedural effectiveness across a broad population – a continuous clinical trial for existing and emerging treatments. This would allow innovative reimbursement and treatment models on the part of the medical insurance industry – keying reimbursement rates and copayments to the efficacy of treatments in the general population. These models can lower one of the key factors that increase the cost of health care: adapting the standard of care based on what’s new and more effective for only a minority of the population. These technical advances also allow for personalization of medicine, potentially providing incentives for pharmaceutical companies to develop tests that indicate the efficacy of a medicine for a particular patient. Big data models can also help in fraud detection, the approval process, and detection of cross-indicators that define populations at high risk of complication – all additional causes of inefficiency in the health care system.
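As a toy illustration of the “continuous clinical trial” idea, the sketch below recomputes per-treatment efficacy across a population as outcome records accumulate; the column names and success metric are invented for the example.

```python
# Minimal sketch of continuous efficacy analysis: given a growing table of
# outcome records, recompute per-treatment success rates for the population.
import pandas as pd

records = pd.DataFrame({
    "treatment": ["A", "A", "B", "B", "B"],
    "improved":  [True, False, True, True, False],
})

# Efficacy = share of patients whose condition improved, per treatment.
efficacy = records.groupby("treatment")["improved"].mean()
print(efficacy)  # reimbursement models could key rates to figures like these
```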

Where Cable Adds Value to the Healthcare Equation

There are a number of opportunities for the Cable industry regarding these developments:

Network Services: Remote testing and monitoring requires a highly secure, private backbone for data transmission, as well as the ability to transmit large quantities of imaging data.

Inter-Clinic Connectivity: As data interoperability standards for medical devices mature, they should allow independent remote “clinics” that can interconnect with any hospital – these could exist in caregiver facilities, offices, or neighborhoods. These clinics should be able to “dial up” to a larger care facility and interoperate securely for the duration of a care visit, without having to be a part of that facility’s network. Again, these clinics need secure, private, high-bandwidth services.

Data Centers: Big data and machine learning requirements of healthcare will require huge amounts of data and compute, an opportunity for large-scale datacenters. In addition, these services may require the ability to anonymize data for remote application consumption, and this will be a new class of cloud service.

For more information, contact Ken Fricklas k.fricklas@cablelabs.com. Ken Fricklas is a Director of Application Technology Development at CableLabs.

Once again, this year’s CES was the biggest ever. With over 170,000 attendees, 2.2 million square feet of floor space, and over 3,600 exhibitors, it was, by far, the highest nerd-density location in history. The densest of the dense were in Eureka Park, where over 375 startups debuted the products of their unfiltered imaginations.

Those of you who have read my reports in the past are aware that I claim to cover the entire exhibit space of the Consumer Electronics Show (CES). I have done this for the last 15 years or so. This serves two purposes. First, it keeps me (and presumably you) abreast of the latest technology developments and second, it serves as my official exercise program.

This year, as last year, was all about the Internet of Things. This year, though, the technology has matured. Instead of ridiculous things like connected Geiger counters and pet trackers, this year we were treated to Bluetooth diapers and even more elaborate pet trackers.

While there weren’t really any new breakout categories at CES this year, there were significant improvements in some of the existing categories. Ultra-high-definition televisions (UHDTV or 4K TV with about 4 thousand horizontal pixels) were even more ubiquitous and featured things like OLED (organic light emitting diode) technology, high dynamic range and expanded color gamut. 3D TV made a respectable showing with glasses-free 3D technologies that didn’t immediately induce motion sickness.

High-definition audio became more real (but is still a bit pricey) with Neil Young’s Pono player and other HD Audio offerings from traditional audio equipment vendors. Object audio in the form of Dolby Atmos and DTS also became a common feature in high-end receivers and amplifiers.

The obsession with selfies literally reached new heights with drone mounted cameras and cameras that are always on and documenting our lives. Yes, there are people wearing cameras and documenting everything they do – in Vegas!

Here are some of my personal observations that may be relevant to cable television providers:

Sling TV’s OTT linear video service is a harbinger of a new competitive environment for video services. Video delivery technology is a shrinking differentiator for cable. Other core cable differentiators will have to be increasingly leveraged, such as content aggregation, seamlessly integrated services and a reliable customer service infrastructure.

Cable infrastructure must deliver on Internet time. New features like 4K video, high-dynamic-range, and expanded color gamut must be delivered as soon as they are available. This is consistent with a move to IP-delivered video and features being integrated into televisions and other consumer-purchased equipment. A dedicated, cable-owned set-top box will become less important.

The Internet of Things requires a little adult supervision. It’s the Wild West right now with differentiation and exclusive applications being primary motivators, but that doesn’t scale. Standards can help in the long run, but for now, an application provider with extensive support resources and the ability to integrate in a rapidly evolving environment will be required. This is an opportunity for cable.

Building bridges between consumers and non-consumers (businesses, medical providers, governments, etc.) is another opportunity for cable. Cable already provides an extensive consumer solution. The producer and provider side of the equation should not be neglected.

It seems there is more cable can do with infrastructure and data. Consumers are producing video and creating personal data at an unprecedented level. They will own this data, but will need to store and manage it in a secure way. The problem of how to do this is not solved.

In general, cable is doing well in providing the connectivity consumers demand. However, it is critical that these connections are augmented with attractive services if cable is to avoid being a commoditized pipe. Looking outside the home to mobile services and car-based services is a possible path to new opportunities.

Clearly, I have provided you with a small snapshot of the things at CES and how they might be relevant to cable operators. Eleven months remain in 2015 for you to conduct your own research.

Clarke Stevens is a Principal Architect in the Applications Technologies group at CableLabs.

In-home wireless device and data explosion

Over the past decade, advances in mobile technology have been growing by leaps and bounds. The average home now has 5.7 data-consuming devices, according to a study by the NPD Group. Wireless data usage is expanding at an exponential rate and will only continue to increase in years to come. As a result of this enormous growth, the onus is on operators to provide high-throughput, low-latency service to their customers, thereby supporting high-bandwidth applications in the home. A number of applications and services, like Internet browsing, Video on Demand (VoD), interactive programming, 3D gaming and location-based services, are being offered to customers as part of cable services.
Not only in the home but also on the access side, networks are becoming more and more complex, and with the advent of new technologies, traffic offload and inter-networking are becoming more common. It is therefore incumbent on network operators not only to manage their access networks successfully but also to provide the necessary bandwidth and speeds to support the above-mentioned applications and services in the home.

How can network growth be tackled efficiently?

With advances in network architecture and deployment, there are new and efficient ways of handling the traditional radio network planning and management to meet the growing traffic volumes and network complexities in a cost effective manner. Additionally, with an increase in network elements, there is a need to manage certain functionalities of the network elements automatically.

Figure – 3GPP SON attributes

The 3rd Generation Partnership Project (3GPP) defines Self Organizing Networks (SON) as a technology to configure network elements, optimize performance and provide self-healing capabilities in case of network interference or faulty network elements. SON is a well-established concept in the world of LTE. Infonetics Research released excerpts from its 2013 SON and Optimization Strategies: Global Service Provider Survey, for which Infonetics interviewed wireless, incumbent, and competitive operators around the world about their network optimization strategies and SON deployment plans. 87% of the network operators who responded to the survey have deployed SON in their networks [Infonetics SON report].

SON for Wi-Fi is a nascent albeit powerful technology that is being looked at by operators all across the world. Some operators have publicly deployed Wi-Fi SON servers that manage tens of thousands of their APs, while others are actively conducting field trials.

What exactly is RRM/SON?

The primary goal of RRM/SON (Radio Resource Management / Self Organizing Networks) is to provide efficient operator managed and vendor interoperable Wi-Fi radio performance in the presence of large and dynamically changing numbers of APs and heavy user traffic. One way of achieving this is by placing a Wi-Fi SON server in the cloud/network.

The Wi-Fi SON server runs RRM algorithms that take read parameters from the network as inputs and produce write parameters as outputs back into the network. The periodicity at which a SON server runs these algorithms is left to operator discretion; it could range anywhere from once every three days to once a week.
Additionally, a Wi-Fi SON server does the following:

Optimizes network performance by modifying the RF parameters.

Maintains a real-time database of the RF parameters, which is used as input to the SON algorithms.

RRM/SON allows operators to manage the radio environment across various vendor solutions, or even to unify the RRM approach among certain AP vendors.
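To make the read/write parameter model above concrete, here is a minimal sketch of one optimization pass, assuming a trivially simple least-interference channel picker; the function names and interference metric are invented, and a real SON server would act over TR-069/SNMP with far richer algorithms.

```python
# Sketch of one RRM/SON pass: read RF state from each AP, choose the
# least-interfered 2.4 GHz channel, and write the result back.
from typing import Dict, List

# "Read parameters": per-AP interference observed on each candidate channel.
def read_interference(ap_id: str) -> Dict[int, float]:
    return {1: 0.7, 6: 0.2, 11: 0.5}   # stand-in for live measurements

# "Write parameter": push the chosen channel back to the AP.
def write_channel(ap_id: str, channel: int) -> None:
    print(f"{ap_id}: set channel {channel}")

def optimize(aps: List[str]) -> None:
    for ap in aps:
        readings = read_interference(ap)
        best = min(readings, key=readings.get)   # least-interfered channel
        write_channel(ap, best)

optimize(["ap-001", "ap-002"])
```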

CableLabs and RRM/SON

A number of cable companies are interested in Wi-Fi RRM/SON to optimize network performance in dense residential deployments.

A CableLabs focus team, consisting of both cable companies and vendors, addressed the dense deployment problem. The focus team was instrumental in converging on and defining a set of read and write parameters, which govern the RRM/SON interface from the SON server. CableLabs has also defined the architectures and use cases for RRM/SON. More information can be found in the Wi-Fi RRM/SON technical report.

Additionally, CableLabs was involved in leading this effort with a Tier 1 mobile network operator through involvement in the Wireless Broadband Alliance (WBA).

Status of the specifications

One of the major goals and achievements of the RRM/SON effort was to update the industry specifications. RRM/SON is now a part of the WBA’s Carrier Wi-Fi Guidelines and the CableLabs Wi-Fi Gateway Management specification.

Per the WBA’s Carrier Wi-Fi Guidelines, a Carrier Wi-Fi LAN (CWLAN) is defined as a carrier-operated public Wi-Fi network, distinct from consumer and enterprise networks. This means operators shall have the means to manage radio resources, including, but not limited to, the ability to manage the read/write parameters through standardized interfaces.

What is next?

CableLabs is soliciting vendor participation in hosting an interoperability event to demonstrate the effectiveness of SON algorithms and to check RRM/SON interoperability among vendors in terms of the read/write parameters over standard TR-069 and SNMP interfaces.

Wi-Fi & RF Performance

Over the years, Wi-Fi networks have evolved into planned, managed networks, delivering faster data rates and reliable service using outdoor Hotspots, Enterprise access points (APs), and Homespot gateways. As Wi-Fi operators continue to roll out and expand Wi-Fi networks, ensuring reliable service and quality is a critical goal. ‘Carrier Grade Wi-Fi’ is an industry movement that will offer solutions and features to harden Wi-Fi infrastructure and sustain connectivity to achieve this. One main area of focus is the RF performance of a Wi-Fi device. Maintaining consistent RF performance across all Wi-Fi devices and across all Wi-Fi channels will provide a more consistent RF link budget and dependable link balance. This will enable Wi-Fi operators to provide reliable, quality network performance and deliver consistent data rates at expected ranges.

Making Wi-Fi Better

To facilitate the best customer experience on behalf of its members, CableLabs is collaborating extensively with the Wi-Fi industry on studies, standards, methodologies and solutions to establish RF performance criteria for Wi-Fi devices. To provide vendor-neutral, unbiased test results to the cable industry, CableLabs tests Wi-Fi devices in an anechoic chamber using industry and customized test plans. The goal of our work is to establish the RF performance criteria of a Wi-Fi device by assessing hardware design, device performance, and network performance.

The Unexplored Territory of Wi-Fi: RF Performance

CableLabs has completed an initial set of RF performance tests on Wi-Fi devices using its in-house RF anechoic chamber with a state-of-the-art Over-The-Air (OTA) measurement system. RF test measurements follow an industry-based CTIA/WFA (Cellular Telecommunications Industry Association/Wi-Fi Alliance)[1] methodology and test plan. Of particular interest is the measured RF power of a Wi-Fi device. The Wi-Fi industry has adopted a measurement from the cellular industry called ‘total radiated power’ (TRP). TRP is the measurement of the overall RF power of the Wi-Fi device in an RF-free environment (i.e. an anechoic chamber) taken at multiple positions (every 15 degrees), as shown in Figure 1.

TRP is a standard and repeatable method that provides RF characterization of the Wi-Fi device radio and antenna chain as a whole system. TRP is used to characterize the RF performance of the transmitter portion of the device.
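For readers who want the math, TRP is typically estimated by sampling EIRP over a theta/phi grid and integrating over the sphere. The discrete form below is the commonly used CTIA-style estimate (with 15-degree steps, N = 12 theta samples and M = 24 phi samples); it is quoted here as a general reference, not verbatim from the cited test plan:

```latex
\mathrm{TRP} \approx \frac{\pi}{2NM}
  \sum_{i=1}^{N-1} \sum_{j=0}^{M-1}
  \left[ \mathrm{EIRP}_{\theta}(\theta_i,\varphi_j)
       + \mathrm{EIRP}_{\varphi}(\theta_i,\varphi_j) \right]
  \sin\theta_i
```

The sin(theta_i) weight accounts for the smaller solid angle each sample covers near the poles of the measurement sphere.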

All Wi-Fi devices must meet ‘not-to-exceed’ regulatory limits that address safety and interference protection to co-existing and adjacent channel inhabitants in unlicensed spectrum. However, historically, there are no minimum RF performance requirements that enforce consistent performance for Wi-Fi devices. Having no minimum RF performance requirements for Wi-Fi devices can introduce inconsistency and variation into Wi-Fi network performance.

Wi-Fi Channel Performance

Results of an initial set of RF characterization measurements on five commercially available Wi-Fi Access Points are shown in Figure 2. The TRP was measured on the low, middle and high channels (1, 6, and 11) in the 2.4 GHz band. The results show a variance in TRP performance within a single access point and across APs on three commonly used channels in the 2.4 GHz band. Such variances make it challenging for a Wi-Fi Operator to provide reliable service and manage a network.

The significance of these variances to a Wi-Fi user is shown by using an RF indoor prediction tool to generate coverage differences between two channels of an AP in a common residential house floor plan. Figure 3 uses TRP measurements of AP Vendor 1 to illustrate how two different channels can have different downlink coverage at the same data rates. The heat maps show that Channel 6 provides 20% greater 65 Mbps coverage than Channel 1 in a 2,300 square foot residential home.

Wi-Fi and You, Today

While the above studies will help cable operators effectively manage the overall Wi-Fi network across a neighborhood, consumers with wireless routers in their homes can also perform some easy steps to assure the strongest signal strength. This blog provides some tips.

Mark Poletti is a Wireless Architect at CableLabs. Neeharika Allanki is an architect at CableLabs.

Last week, as I was walking down the hallway, I came across two gentlemen painting a hallway door. They paused and kindly let me through. While I waited, I caught a glimpse of a door with a sign that read, in big bold red letters, “pre-action sprinkler control”. What exactly is a “pre-action”? I thought, as I started walking again. Is it an action that occurs before the actual action? Wouldn’t that be an action itself? I was a bit puzzled.

In fire systems, I later learned, “pre-action” is the act of validating the existence of a fire, and filling up the pipes with water, in preparation for the sprinklers to open. The system prepares itself for instantaneous flow of water (or a fire-suppressing agent).

The Role of Network Management Systems

Isn’t that what business and operations support system (BSS/OSS) applications try to accomplish: prepare the network for the instantaneous flow of data? A customer order functions as the “pre-action” to automatically and seamlessly trigger network provisioning. That’s true but, unlike pipes in a fire system, networks are subject to constant change. Businesses, for instance, want dynamic topologies and bandwidth-on-demand to support their changing needs. At the same time, they expect high reliability, high availability and low cost.

One of the main problems enterprise software systems experience is “pre-action” signals (messages) that do not propagate deep enough into the software stack to reach the network elements and enable the flow of data. In the current BSS/OSS environment, data paths lack integration capabilities, such as standard API endpoints; and data structures are incomplete or lack minimal structure. Overcoming these gaps requires labor-intensive activities such as custom, point-to-point integrations that are brittle, difficult to modify and, above all, expensive to maintain. This is the equivalent of manually filling the pipes with water, and then turning on each sprinkler head individually within a fire system.

The lack of an enterprise integration strategy also has a negative impact on the bottom line: long lead times, high integration costs, low customer satisfaction, lost business opportunities and revenue.

Poor enterprise integration also slows the pace of innovation, making it difficult and costly for current and future technology partners to successfully contribute new solutions to the operator’s ecosystem.

The Advent of Network Softwarization

As data networks are undergoing a fundamental shift towards virtualization and “softwarization”, enterprise integration strategies must become a reality in order for operators to remain competitive. Hardware commoditization, cloud and virtualization technologies are increasing the pace of innovation and the delivery of better products and services. At the same time, our daily life demands a higher degree of connectivity. For example, current content consumption trends demand multi-media integration. Beyond linking social and online presence with TV content, media integration requires multiple media streams, applications and network resources to allow access to content whenever and wherever the consumer is. This is the approach Netflix has embraced to go from zero to 29.8 million streaming subscribers in the US in the span of only 7 years.

As the software stack pushes down into the network, network functions become software applications. We must recognize software-defined networks (SDN) and network function virtualization (NFV) for what they are: layers of the enterprise software architecture. Several architectures have been proposed to tackle network “softwarization.” Most rely on the layers and enterprise integration boundaries depicted in Figure 1.

Figure 1. Typical Enterprise Architecture

Operators will need to go beyond SDN and NFV alone to make their businesses innovative and competitive. Now more than ever, the OSS/BSS layers will need to play a significant role in seamlessly incorporating new vendor-neutral network technologies into the business.

For networks to become truly agile, SDN and NFV will need to be able to seamlessly “plug” into an end-to-end enterprise integration strategy. This demands a different approach to BSS/OSS. Services will need to be provisioned, managed and monitored the way a virtualized IT function manages virtual resources, applications lifecycle, commodity hardware and data centers. Companies with large networks, such as Google, have realized the importance of the management plane (another name for OSS/BSS) and have decided to open their models and protocols in the hope of getting help solving the provisioning and orchestration problems.

At the same time, global, business-to-business networks require intra-organizational as well as inter-organizational enterprise integration patterns, APIs and data structures.

Tackling the Network Virtualization Challenge

Several organizations and trade groups have recognized the impact of network virtualization on the enterprise. They have traditionally attacked the problem from different points of view, targeting different audiences and providing different artifacts.

TM Forum, a consortium that helps network operators prepare for the digital economy, looks at the entire enterprise and provides abstract data models (SID, eTOM, etc.) and aggregate business entities. In comparison, the Metro Ethernet Forum (MEF), with its focus on Carrier Ethernet services, remains closer to the network constructs and models. MEF has done some groundbreaking work developing data models for Carrier Ethernet service definition and, to a lesser extent, management. From an architectural point of view, TM Forum has taken a top-down approach while MEF has taken a bottom-up approach.

Although both TM Forum and MEF have recently innovated beyond their traditional spaces with initiatives such as ZOOM and Third Network, cable operators need a comprehensive enterprise integration strategy that merges these approaches to enable network-driven services and business models. Such a strategy demands a way to capture a unified and comprehensive view of the enterprise.

A Cable Industry Solution: CL-IM

CableLabs, its member operators and suppliers have come together to develop the CableLabs Information Model (CL-IM), a cable-friendly information model that distills, reconciles and extends the work being done by TM Forum and MEF. One of the outcomes of this joint effort is a set of implementable artifacts: a reusable library of data models and application programming interfaces (APIs). The approach has proven successful, and several use cases have already been implemented using these artifacts.
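To give a feel for what an implementable data-model artifact can look like, here is a hedged sketch; the entity and its fields are hypothetical, not actual CL-IM definitions, but they show the pattern of a shared model serialized to a common wire format that every application can consume.

```python
# Illustrative reusable artifact: a shared data model plus a serializer to a
# common wire format (JSON here). Entity and field names are hypothetical.
import json
from dataclasses import dataclass, asdict

@dataclass
class ServiceOrder:
    order_id: str
    customer_id: str
    product: str
    bandwidth_mbps: int

def to_api_payload(order: ServiceOrder) -> str:
    """Serialize the shared model for exchange over an API."""
    return json.dumps(asdict(order))

# Any system (billing, provisioning, inventory) can parse the same payload
# instead of maintaining pairwise, point-to-point translations.
print(to_api_payload(ServiceOrder("o-1", "c-42", "metro-ethernet", 500)))
```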

CableLabs members now have at their disposal actionable enterprise integration artifacts that can enable seamless data paths, from the backend to the network. Just like in a fire system, such data paths will prepare the network for the instantaneous, on-demand flow of data in response to customer demands. With instantaneous response comes agility. Quick response and agility can be the catalysts that accelerate the development of innovative applications built over virtual networks. They can also enable platforms where members, suppliers and business partners can come together to leverage positive network effects and implement high value business opportunities.

Adolfo Perez-Duran is a Lead Data Architect on the Data & Information Architecture team at CableLabs. Adolfo has over 20 years of experience in software development and architecture, artificial intelligence, and data analysis, and currently leads the API and data modeling effort within the team researching SDN and NFV at CableLabs.