Liability Issues Increase in Virtual World

NICE -- TM Forum Live! -- For all the admitted enthusiasm about network functions virtualization here, a growing number of concerns is also being raised, the most recent being how liability is shared for service failures or outages once multiple logically separate software-based functions run on shared hardware.

In multiple sessions on NFV, service providers have admitted there is not yet a clear understanding of how liability will be shared and how service level agreements (SLAs) will be designed, delivered, and guaranteed once they move off purpose-built telecom hardware and onto more agile virtualized network gear.

"When you have different providers providing virtual network functions on the same NFV infrastructure, you need to take into account the different roles," said Laurent Leboucher, VP of APIs and Digital Ecosystems for Orange (NYSE: FTE) in a Wednesday morning panel. "When something goes wrong, who is responsible? This is a new source of complexity, making fault management more difficult. We will need to manage this complexity."

The overall role of managing VNFs will fall to the network orchestrator, says Caroline Chappell, senior analyst with Heavy Reading and chair of the session at which Leboucher was speaking. "The orchestrator will have to understand the implications of where it places specific VNFs."

Those decisions will be based on factors such as liability, security, network performance and other business issues, all of which will now typically become part of the contract process. A company providing a VNF will also provide requirements for running it. Those requirements become part of the policies put in place with the network orchestrator, Chappell says. As for the hardware running the virtual functions, it is part of the NFV infrastructure, which is what provides SLAs up to those VNFs, she says.
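The placement logic Chappell describes can be pictured as a simple compatibility check: the VNF vendor's requirements become policy, and the orchestrator only places the function on infrastructure that satisfies them. The sketch below is illustrative only; all host names, VNF names and requirement fields are invented for the example, not taken from any real orchestrator.

```python
# Minimal sketch of policy-driven VNF placement (all names and values are
# hypothetical). Each VNF carries requirements supplied by its vendor; the
# orchestrator places it only on an NFVI host whose advertised capabilities
# satisfy those requirements.

def satisfies(host, requirements):
    """Return True if a host meets every requirement the VNF declares."""
    return (host["vcpus_free"] >= requirements["vcpus"]
            and host["security_zone"] in requirements["allowed_zones"]
            and host["max_latency_ms"] <= requirements["latency_ms"])

def place_vnf(vnf, hosts):
    """Pick the first host that satisfies the VNF's contracted requirements."""
    for host in hosts:
        if satisfies(host, vnf["requirements"]):
            return host["name"]
    return None  # no compliant host: placement (and liability) stays unresolved

hosts = [
    {"name": "edge-1", "vcpus_free": 2, "security_zone": "dmz", "max_latency_ms": 20},
    {"name": "core-1", "vcpus_free": 16, "security_zone": "trusted", "max_latency_ms": 5},
]
firewall = {"name": "vFW", "requirements": {
    "vcpus": 4, "allowed_zones": {"trusted"}, "latency_ms": 10}}

print(place_vnf(firewall, hosts))  # core-1
```

A real orchestrator would weigh far more dimensions (affinity, licensing, liability terms), but the shape of the decision is the same: vendor requirements in, compliant placement out.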

"The service provider will have to have a well-mapped, well-managed NFV infrastructure," she adds. The hope is the industry can get a common approach to the NFV infrastructure that can be trusted.

"The liability then lies with whoever runs the NFV infrastructure, and that can be an operator or a systems integrator," Chappell says. "The role of systems integrators could change under virtualization."

The Heavy Reading analyst says the challenge will be abstracting the virtual infrastructure at each layer of the network, which could lead to separate SLAs and performance requirements for individual layers, and a corresponding need to understand who is responsible for what.

It's one more layer of complexity for the technology ultimately being counted on to simplify the network operator's efforts to bring services to market more quickly.

Not so much unclear; just obscure, maybe. Well, it's unclear unless you're an attorney or insurance specialist with a modicum of data privacy knowledge. Mostly it has to do with basic contract law, although special regulations may come into play as well, depending on the jurisdiction (as discussed here).

This is why cyber insurance is becoming more popular. Your cyber insurance carrier can help clarify your liabilities for you, and a good cyber insurance policy can cover a wide range of things you wouldn't normally think of.

You are missing the point. Let's go down the road 10 years and say there are 100 different products approved for deployment in a virtual environment by one carrier. The carrier now has to know the valid combinations of instances to run on the shared infrastructure. Even if it is ONLY sharing with itself... it still has to know.

Unless we are saying that one hardware server = one application, we are running a number of possibly different applications on the same hardware. Once you do that, the ability to prove that a server can support that load becomes difficult in a test and integration environment.

Now, I have done this at a SaaS vendor in the IT space. The way we dealt with it was to build standard packages of VMs that could be run on the same server, and test each package as a unit. We could do that because we were not offering the broad array of services that you see in a carrier. We only did this on our servers in our data centers, and we only bought one type of server. With that, we were able to predict performance across a number of load variations. Even then, we knew there were potential risks in what we were doing.
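The "standard packages tested as a unit" approach described above can be reduced to a small certification check: rather than proving every possible mix of VMs, the operator load-tests a handful of combinations together and deploys only mixes that match a certified package. This is a rough, illustrative sketch; the package contents and VNF names are made up.

```python
# Sketch of the "certified packages" idea (illustrative names only).
# Only VM combinations that were load-tested together as a unit are
# allowed onto a shared server; anything else is rejected up front.

CERTIFIED_PACKAGES = {
    frozenset({"vFW", "vLB"}),
    frozenset({"vFW", "vDPI", "vCache"}),
}

def is_certified(vm_mix):
    """Deploy only combinations that were tested together as one unit."""
    return frozenset(vm_mix) in CERTIFIED_PACKAGES

print(is_certified(["vFW", "vLB"]))   # True: tested as a package
print(is_certified(["vLB", "vDPI"]))  # False: never tested as a unit
```

Using frozensets makes the check order-independent, which mirrors the point: it is the combination that was qualified, not any particular deployment sequence.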

Which is sort of the problem here. To keep the same level of assurance that is normal in the custom-hardware space, a lot of the desired flexibility is lost. Remember, you have to be able to support this network with relatively low-level people: the software equivalent of card swappers.

Again, I repeat: many of these challenges already exist inside business-critical SaaS operations in the IT space. They have been dealt with. Let me use one that most folks are familiar with... has anybody in your group studied the implementation of Gmail? It's a 24/7/365 service that has scaled massively. People complain bitterly when there are interruptions. Just saying, it might be a good place to start.

Re: It's all about Federation I think that's the point operators are raising with Carol, in two dimensions. First, it's complex enough to figure out what an SLA means when the service elements are hosted components instead of fixed appliances. Second, it's harder if you assume that the pool of virtual functions and NFV Infrastructure isn't totally homogeneous; that there are different contributors perhaps demanding different configurations. I think that most operators will never support a vast mix-and-match component/resource universe for that reason; they'll offer users some preconfigured choices to let them pick what they want, and provide SLAs on the combinations they've prequalified.

Re: It's all about Federation @TomNolle: You are right to refer to it being magnified. The risk is indeed always there. Even in "traditional" or non-virtualized networks, operators are not managing the end-to-end chain of devices, network elements and applications very well. They sometimes do a good job on a per-segment basis, but they fail to really know their performance throughout the end-to-end chain at any one specific moment in time, or to understand how or what degrades the service. This is also why they never really commit to a minimum SLA, but rather quote what the maximum can be under optimal conditions. So I guess it will indeed be magnified in virtualized or SDN environments.

Re: It's all about Federation I didn't make that assumption, actually. I was responding to Carol's point about the diversity of suppliers, which is different from the question of whether you can write an SLA for a VNF hosted on shared infrastructure. Whatever the crosstalk issues are with respect to VNFs on VMs, they're similar to the issues of how traffic from one source impacts other users who share routers or trunks.

There is an assumption in your comment that the workings of one VM on a server cannot in any way impact the workings of another. That is not true. There are all kinds of separation, but if you load up instances on other VMs on the same machine, there is less CPU available for your VM. The challenge is that these compute peaks can come and go faster than the time it takes to trigger the creation of a new instance to offload work.
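The timing problem described above can be shown with a toy simulation (all thresholds and load figures here are invented for illustration): an autoscaler that waits for several consecutive hot samples before launching a new instance never fires on a short neighbor spike, even though that spike starves the co-resident VM while it lasts.

```python
# Illustrative sketch (made-up numbers) of a noisy-neighbor CPU spike that
# fits entirely inside an autoscaler's trigger window, so no new instance
# is ever created to offload the work.

TRIGGER_SAMPLES = 3  # autoscaler acts only after 3 consecutive hot samples

def scale_out_fires(load_samples, threshold=0.8):
    """Return True if sustained load would ever trigger a new instance."""
    streak = 0
    for load in load_samples:
        streak = streak + 1 if load > threshold else 0
        if streak >= TRIGGER_SAMPLES:
            return True
    return False

short_spike = [0.2, 0.95, 0.97, 0.3, 0.2]  # 2-sample burst starves the VM...
print(scale_out_fires(short_spike))        # ...but never triggers: False

sustained = [0.9, 0.9, 0.9, 0.9]
print(scale_out_fires(sustained))          # True
```

Real schedulers and autoscalers are far more sophisticated, but the structural point stands: contention that is shorter than the reaction window is invisible to the scaling mechanism yet fully visible in the SLA.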

You can't say that you are on a shared infrastructure and then assume that you have absolute control over the things you are sharing.

Re: It's all about Federation If that's their concern it would be interesting, because it suggests a lot more disorder in selecting and on-boarding VNFs than we now have in controlling how devices are admitted into networks. Most of the operators I've talked with are assuming that they would have a very specific process for certifying functionality in their labs, just as they have for physical devices like switches, routers, firewalls, etc. The security risk associated with relaxing VNF certification processes would IMHO be more of an issue than SLA risks.

I think the biggest risk on the SLA front is that NFV is still a kind of microcosmic process and SLAs are still an end-to-end requirement. If you can't manage all of the components of a service in a consistent way you can't guarantee it, no matter what technology you use. NFV introduces new technology choices and so it magnifies that risk, but SDN would do exactly the same thing.

That's an interesting perspective. What I'm hearing here is individual carriers talking about how they guarantee SLAs for their own customers and negotiate with their own hardware and software suppliers when their own networks are based on VNFs from different companies running on generic hardware from someone else.
