Monday, 13 August 2018

Earlier, I wrote a detailed post on how Telefónica is on a mission to connect 100 million unconnected people with its 'Internet para todos' initiative. The video below is a good illustration of what Telefónica is trying to achieve in Latin America.

I recently came across a LinkedIn post on how Telefónica uses AI/ML to connect the unconnected, by Patrick Lopez, VP Networks Innovation @ Telefonica. It was a no-brainer that it needed to be shared.

To deliver internet in these environments in a sustainable manner, it is necessary to increase efficiency through systematic cost reduction, investment optimization and targeted deployments. Systematic optimization necessitates continuous measurement of the financial, operational, technological and organizational data sets.

1. Finding the unconnected

The first challenge the team had to tackle was to understand how many unconnected people there are, and where. The available data sets were scarce and incomplete: the census was old and the population highly mobile. The team therefore used high-definition satellite imagery at the scale of the country and trained neural network models on it, using census data as the training set. With these visual machine learning algorithms, the model literally counted each house and each settlement across the whole country. The model was then enriched with cross-referenced coverage data from regulatory sources, as well as Telefónica's proprietary data set of geolocalized data sessions and deployment maps. The result is a model with a visual representation: a map of population dispersion with superimposed coverage polygons, which makes it possible to count and localize the unconnected population with good accuracy (95% of the population captured, with less than 3% false positives and less than 240 metres deviation in the location of antennas).

2. Optimizing transport

Transport networks are the most expensive part of deploying connectivity to remote areas, so optimizing transport routes has a huge impact on the sustainability of a network. This is why the team selected this as the next challenge to tackle. The team started by adding road and infrastructure data to the model from public sources, and used graph generation to cluster population settlements. Graph analysis (shortest path, Steiner tree) then yielded transport routes optimized for population density.

3. AI to optimize network operations

To connect very remote zones, optimizing operations and minimizing maintenance and upgrades is key to a sustainable operational model. This line of work is probably the most ambitious for the team. When it can take 3 hours by plane and 4 days by boat to reach some locations, being able to detect, or better, predict if and when you need to perform maintenance on your infrastructure is essential. Equally important is how you devise your routes so that maintenance visits are as efficient as possible. In this case, the team built a neural network trained on historical failure analysis and fed with live network metrics, giving a model capable of supervising network health in an automated manner, predicting possible failures and producing optimized maintenance routes.

I think that the type of data-driven approach to complex problem solving demonstrated in this project is the key to network operators' sustainability in the future. It is not only a rural problem: it is necessary to increase efficiency and optimize deployment and operations everywhere to keep driving down costs.
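
The graph-analysis step described in part 2 (shortest path, Steiner tree) can be illustrated with a toy example. The settlement names, road links and costs below are invented for illustration; a real deployment would run such algorithms over country-scale road and settlement data.

```python
import heapq

# Toy road graph: nodes are settlements, edge weights are transport costs
# (all names and costs are illustrative, not real Telefonica data).
roads = {
    "hub":       {"river_a": 4, "ridge": 7},
    "river_a":   {"hub": 4, "village_b": 3, "ridge": 2},
    "ridge":     {"hub": 7, "river_a": 2, "village_c": 5},
    "village_b": {"river_a": 3, "village_c": 1},
    "village_c": {"ridge": 5, "village_b": 1},
}

def shortest_path(graph, src, dst):
    """Dijkstra's algorithm: cheapest transport route from src to dst."""
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph[node].items():
            if nbr not in seen:
                heapq.heappush(queue, (cost + w, nbr, path + [nbr]))
    return float("inf"), []

cost, route = shortest_path(roads, "hub", "village_c")
print(cost, route)  # 8 ['hub', 'river_a', 'village_b', 'village_c']
```

A Steiner tree generalizes this to connecting a whole set of settlements at minimum total cost, which is why it is the natural tool for transport route planning.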

Finally, it's worth mentioning again that I am helping CW (Cambridge Wireless) organise their annual CW TEC conference on the topic 'The inevitable automation of Next Generation Networks'. There are some good speakers, and similar topics will be covered from different angles, using some other interesting approaches. The fees are very reasonable, so please join if you can.

Thursday, 25 January 2018

Prof. Andy Sutton, Principal Network Architect, Architecture & Strategy, TSO, BT, provided an update on 5G Network Architecture & Design last year, which was also the most popular post of 2017 on the 3G4G blog. This year he has delivered an update on the same topic at the IET '5G - State of Play' conference. He has kindly shared the slides (embedded below), which are available to download from Slideshare.

Friday, 1 July 2016

Many of my readers will be aware that the UK is probably the first country to have decided to move its emergency services network from an existing TETRA network to a commercial LTE network, operated by EE.

While some people have hailed this as a very bold move in the right direction, there is no shortage of critics. Around 300,000 emergency services users will share the same infrastructure used by over 30 million general users.

Steve Whatson, deputy director Delivery, Emergency Services Mobile Communications Programme (ESMCP) – the organisation within the UK Home Office procuring ESN – assured delegates that ESN will match the existing dedicated Airwave emergency services communication network in terms of coverage for roads, outdoor hand-portable devices and marine coverage. Air to ground (A2G) will extend its reach from 6,000ft to 12,000ft.

Whatson also pointed out that coverage is not one single piece, but will comprise a number of different elements, which all need to mesh into one seamless network run by the ESN Lot 3 Mobile Services (main 4G network) provider – EE. This includes: EE’s main commercial 4G network; Extended Area Services (hard-to-reach areas of the UK where new passive sites are to be built under a separate contract and then equipped with EE base stations); air-to-ground; London Underground; Crossrail; marine coverage (to 12 nautical miles); and special coverage solutions.

EE is currently rolling out new 4G sites – it will eventually have some 19,500 sites – and is upgrading others with 800MHz spectrum, which propagates over longer distances and is better at penetrating buildings than its other 4G spectrum holdings. Crucially for ESN, it is also switching on a Voice over LTE (VoLTE) capability, starting with the UK’s main cities.

Mission critical networks must be always available and have levels of resilience far in excess of commercial networks.
Speaking exclusively to Wireless in early May, Tom Bennett, group director Technology Services, Architecture & Devices at EE, said: ‘We already achieve a very high availability level, but what the Home Office was asking for effectively was about a 0.3% increase against our existing commercial availability levels. Now for every 0.1% increase in availability there is a significant investment, because you are at the extreme top end of the curve where it is harder and harder to make a noticeable difference.’

There are very specific requirements for coverage and availability of the ESN network for the UK road system. Bennett says: ‘Mobile is based on a probability of service. No more than 1% of any constabulary’s roads are allowed to be below 75% availability, and on major roads it is 96% availability. A coverage gap in this context is no more than 1km.’

The current Airwave network has approximately 4,000 sites, many with back-up generators on site with fuel for seven days of autonomous running if the main power is cut, along with a range of resilient backhaul solutions. Bennett says that out of EE’s 18,500 sites it has about the same number of unique coverage sites (i.e. no overlapping coverage) – around 4,000. ‘As part of our investment programme, those unique coverage sites will need a significant investment in the causes of unavailability – i.e. resilient backhaul and back-up batteries.’

He explains that EE has undertaken a lot of analysis of what causes outages on its network, and it has combined that data with the Home Office’s data on where natural disasters in the UK have occurred over the past 10 years. From this, EE is able to make a reasonable assessment of which sites are likely to be out of action due to flooding or other disasters for more than three or four days.
‘For those sites – and it is less than 4,000 – you need generators too, because you may not be able to physically access the sites for some days,’ says Bennett.

For obvious reasons, the unique coverage sites are mostly in rural areas. But as Bennett points out, the majority of cases where the emergency services are involved are where people are – suburban and urban areas. ‘In these areas EE has overlapping coverage from multiple sites to meet the capacity requirements, so if a site goes down, in the majority of cases we have compensation coverage. A device can often see up to five tower sites in London, for example,’ he says.

Having adequate backhaul capacity – and resilient backhaul at that – is vital in any network. Bennett says EE is installing extra backhaul, largely microwave and fibre, but other solutions will also be used, including satellite and LTE relay from base station to base station – daisy chaining. On 9 May 2016, EE announced a deal with satellite provider Avanti to provide satellite backhaul in some areas of the UK.

Additional coverage and resilience will be offered by RRVs (rapid response vehicles), which EE already has in its commercial network today, for example to provide extra capacity in Ascot during the racing season. ‘We would use similar, although not exactly the same, technology for disaster recovery and site/service recovery, but with all the backhaul solutions,’ says Bennett. ‘Let’s say we planned some maintenance or upgrade work that involved taking the base station out for a while. We’d talk to the chief inspector before the work commences. If he says there’s no chance of doing that tonight, we can put the RRV there, and provided we maintain coverage, we can carry out the work. RRVs are a very good tool for doing a lot of things.’

At the British APCO event, Mansoor Hanif, director of Radio Access Networks at EE, said it was looking at the possibility of using ‘airmasts’ to provide additional coverage.
Meshed small cells, network in a box and repeater solutions are becoming available for these ‘airmasts’, which will provide coverage from balloons, or UAVs – tethered drones with power cables and optical fibre connected to them.
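
As an aside on the availability figures Bennett quotes: a 0.3% uplift sounds small, but translated into annual downtime it is substantial. A quick back-of-the-envelope calculation (the 99.5% baseline is illustrative; EE's absolute availability levels were not disclosed):

```python
MINUTES_PER_YEAR = 365 * 24 * 60

def downtime_minutes(availability_pct):
    """Expected downtime per year for a given availability percentage."""
    return MINUTES_PER_YEAR * (100 - availability_pct) / 100

# A 0.3% availability uplift, e.g. from 99.5% to 99.8% (figures illustrative):
before = downtime_minutes(99.5)
after = downtime_minutes(99.8)
print(round(before), round(after), round(before - after))  # 2628 1051 1577
```

Roughly 26 hours of downtime per site per year have to be engineered out, which is why Bennett describes each 0.1% as a significant investment.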

Mansoor Hanif, Director of RAN at EE gave a presentation on this at Critical Communications World 2016 and has also given an interview. Both are embedded below.

Wednesday, 28 May 2014

The last post on network sharing by NEC was surprisingly popular, so I thought it was worth sharing a case study by Orange in Poland on how they successfully managed to share their network with T-Mobile. The full presentation is embedded below:

Applications have diverse requirements on the mobile network in terms of throughput, relative use of uplink vs. downlink, latency and variability of usage over time. While the underlying IP-based Layer 3 infrastructure attempts to meet the needs of all applications, significant network capacity is lost to inefficient use of the available resources. This inefficiency stems primarily from the non-deterministic nature of the aggregate requirements placed on the network by the numerous applications and traffic flows that are live at any time.

This reduction in network utilization can be mitigated by incorporating application awareness into network traffic management through use of Application or Service Layer optimization technologies. A Service Layer optimization solution would incorporate awareness of:

1) device capabilities such as screen size and resolution;

2) user characteristics such as billing rates and user location;

3) network capabilities such as historic and instantaneous performance and;

4) application characteristics such as the use of specific video codecs and protocols by an application such as Video on Demand (VOD) to ensure better management of network resources.

Further, in a service provider’s network the optimization function may be deployed in the core network, at edge aggregation locations, or both. When Service Layer optimization entities are deployed at both core and edge locations, they may operate in conjunction with each other to form a hierarchy, with an adequate level of processing to match the traffic volume and topology. Such a hierarchy of network entities is especially effective in the case of caching.
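
The core/edge caching hierarchy can be sketched in a few lines. This is a minimal illustration under invented assumptions (the class, capacities and naive eviction policy are made up for clarity), not any vendor's implementation:

```python
# Two-tier cache sketch: an edge cache is consulted first, then the core
# cache, then the origin. Capacities and content are illustrative.
class Cache:
    def __init__(self, name, capacity):
        self.name, self.capacity, self.store = name, capacity, {}

    def get(self, key):
        return self.store.get(key)

    def put(self, key, value):
        if len(self.store) >= self.capacity:   # naive eviction: drop oldest
            self.store.pop(next(iter(self.store)))
        self.store[key] = value

def fetch(key, edge, core, origin):
    """Look up content edge-first; populate both tiers on a miss."""
    hit = edge.get(key) or core.get(key)
    if hit is None:
        hit = origin[key]        # fall back to the origin server
        core.put(key, hit)
    edge.put(key, hit)           # popular content migrates toward the edge
    return hit

edge, core = Cache("edge", 2), Cache("core", 8)
origin = {"video42": b"...bytes..."}
fetch("video42", edge, core, origin)   # miss: fills both core and edge
```

The point of the hierarchy is that the second request for the same content is served entirely from the edge, never touching the core or the backhaul behind it.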

The 3GPP standard network architecture defines a number of elements such as QoS levels that are understood and implemented in the network infrastructure. However, much of this network capability is not known or packaged for use in the Service Layer by application developers. One approach to resolving this discrepancy may be to publish standard Service Layer APIs that enable application developers to request network resources with specific capabilities and also to get real-time feedback on the capabilities of network resources that are in use by the applications. Such APIs may be exposed by the network to the cloud or may be exposed to application clients resident on mobile devices through device application platforms and SDKs. The network APIs being defined by the Wholesale Application Community are an example of the recognition of the need for such Service Layer visibility into network capabilities. Future versions of the WAC standards will likely incorporate and expose network Quality of Service (QoS) capabilities.
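
A Service Layer QoS API of the kind described might look something like the sketch below. The function name, parameters and response fields are all hypothetical, invented for illustration; they do not correspond to an actual WAC or 3GPP interface:

```python
# Hypothetical sketch of a Service Layer QoS API as described above.
def request_qos(session_id, min_downlink_kbps, max_latency_ms):
    """An application asks the network for a bearer with given capabilities."""
    # A real implementation would call into the operator's policy function;
    # here we just echo back a granted profile capped by a fixed cell budget.
    CELL_BUDGET_KBPS = 5000
    granted = min(min_downlink_kbps, CELL_BUDGET_KBPS)
    return {"session": session_id,
            "granted_kbps": granted,
            "latency_ms": max_latency_ms,
            "accepted": granted >= min_downlink_kbps}

resp = request_qos("sess-1", min_downlink_kbps=2000, max_latency_ms=100)
print(resp["accepted"])  # True: the request fits within the cell budget
```

The real-time feedback channel mentioned in the text would be the mirror image: the network pushing updated `granted_kbps`-style figures back to the application as radio conditions change.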

Why does optimisation matter? A good answer to this question is provided in a Telecoms.com article, as follows:

For many people, says Constantine Polychronopoulos, founder and chief technology officer of mobile internet infrastructure specialist Bytemobile, the definition of optimisation as it relates to mobile networks is too narrow; restricted to compressing data or to the tweaking of the radio access network in a bid to improve throughput. While these are key elements of optimisation, he says, the term ought to be interpreted far more broadly. “The best way for us to think of optimisation,” he says, “is as a set of synergistic technologies that come together to address everything that has to do with improving network and spectrum utilisation and user experience. If you stretch the argument, it includes pretty much every thing that matters. This holistic, end-to-end approach to optimisation is the hallmark of Bytemobile’s solutions. Point products tend to be costly and difficult or impossible to evolve and maintain.”

And optimisation matters, he says, because the boom in mobile data traffic experienced in some of the world’s most advanced mobile markets represents a serious threat to carrier performance and customer satisfaction. US operator and pioneer iPhone partner AT&T is a case in point, Polychronopoulos says.

“If you look at what’s been said by Ralph de la Vega (president and CEO of AT&T Mobility) and John Donovan (the firm’s CTO), they have seen a 5,000 per cent increase in data traffic over the past two years. The data points from other operators are similar,” he continues. “They see an exponential growth of data traffic with the introduction of smartphones, in particular the iPhone.”

Operators may have received what they’d been wishing for but the scale of the uptake has taken them by surprise, Polychronopoulos says. The type of usage consumers are exhibiting can be problematic as well. Bytemobile is seeing a great deal of video-based usage, which can often be a greater drain on network resource than web browsing. Given the increasing popularity of embedding video content within web pages, the problem is becoming exacerbated.

Dr. Polychronopoulos is keen to point out that there are optimisation opportunities across different layers of the OSI stack—Bytemobile offers solutions that will have an impact on layers three (the IP layer) through seven (the application layer). But he stresses that some of the most effective returns from optimisation technologies come from addressing the application layer, where the bulk of the data is to be found.

“An IP packet can be up to 1,500 bytes long,” he says. “So at layer three, while you can balance packet by packet, there is only so much you can do to optimise 1,500 bytes. At the top layer, the application can be multiple megabytes or gigabytes if you’re watching video. And when you’re dealing with those file sizes in the application layer, there is a whole lot more you can do to reduce the amount of data or apply innovative delivery algorithms to make the content more efficient,” he says.

By optimising content such as video, Polychronopoulos says, significant gains can be made in spectral and backhaul network utilisation. A range of options are open to operators, he says, with some techniques focused on optimising the transport protocol, and others designed to reduce the size of the content.

“With video, we can resize the frame, we can reduce the number of frames, we can reduce the resolution of the frame or apply a combination of the above in a way that does not affect the video quality but greatly improves network efficiencies,” he says. “So if you go to a site like YouTube and browse a video, you might download something like 100MB of data. But if you were to go through a platform like ours, you may download only 50MB when the network is congested and still experience not only the same video quality, but also fluid video playback without constant re-buffering stalls.”

It is possible, he explains, to run these solutions in a dynamic way such that data reduction engages only when the network is congested. If a user seeks to access high-volume data like video during the network’s quiet time, the reduction technologies are not applied. But when things are busier, they kick in automatically and gradually. This could have an application in tiered pricing strategies. Operators are looking at such options in a bid to better balance the cost of provisioning mobile data services with the limited revenue stream that they currently generate because of the flat rate tariffs that were used to stimulate the market in the first place. Being able to dynamically alter data reduction and therefore speed of delivery depending on network load could be a useful tool to operators looking to charge premium prices for higher quality of service, Polychronopoulos says.

If it is possible to reduce video traffic in such a way that data loads are halved but the end user experience does not suffer proportionally, the question arises as to why operators would not simply reduce everything, whether the network is busy or not. Polychronopoulos argues that in quiet times there are no savings to be made by reducing the size of content being transported.

“The operator has already provisioned the network one way or another,” he says, “so there is a certain amount of bandwidth and a certain amount of backhaul capacity. When the network is not congested, the transport cost is already sunk. When it becomes congested, though, you get dropped calls and buffering and stalled videos and the user experience suffers. That’s where optimisation shines. Alternatively, media optimisation can be factored in during top-level network provisioning, when the savings in CAPEX can be extremely compelling.”

While LTE is held up by some within the industry as the panacea to growing demand for more mobile broadband service, Polychronopoulos is unconvinced. If anything, he says, the arrival of the fourth generation will serve only to exacerbate the situation.

“LTE is going to make this problem far more pronounced, for a number of reasons,” he says. “As soon as you offer improved wireless broadband, you open the door to new applications and services. People are always able to come up with new ways of inundating any resource, including bandwidth. We’re going to see more data-driven applications on mobile than we see on the typical desktop, because the mobile device is always with you.” And while LTE promises greater spectral efficiency than its 3G forebears, Polychronopoulos says, the fact that spectrum remains a finite resource will prove ever more problematic as services evolve.

“We’re reaching the limits of spectral efficiency,” he says. “Shannon’s Law defines the limit as six bits per Hertz, and while we may be moving to higher-bandwidth wireless broadband, spectrum remains finite. To offer 160Mbps, you have to allocate twice the amount of spectrum than in 3G, and it’s a very scarce and very expensive resource.”
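
For reference, Shannon's theorem does not fix a universal bits-per-Hertz ceiling: the capacity C = B·log2(1 + SNR) depends on the signal-to-noise ratio, so figures like "six bits per Hertz" only describe typical operating conditions. A quick calculation with illustrative SNR values shows how steeply capacity depends on link quality rather than on the standard generation:

```python
import math

def shannon_capacity_mbps(bandwidth_mhz, snr_db):
    """Shannon capacity C = B * log2(1 + SNR), returned in Mbps."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_mhz * math.log2(1 + snr_linear)

# A 20 MHz carrier (a typical LTE bandwidth) at two illustrative SNR points:
print(round(shannon_capacity_mbps(20, 20), 1))  # at 20 dB SNR
print(round(shannon_capacity_mbps(20, 30), 1))  # at 30 dB SNR
```

Even a tenfold SNR improvement adds only about 50% more capacity in the same spectrum, which is the arithmetic behind the argument that spectrum, not air-interface generation, is the binding constraint.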

Operators have been wrong to focus exclusively on standards-based solutions to network optimisation issues, Polychronopoulos says. In restricting themselves to 3GPP-based solutions, he argues that they have missed what he describes as “the internet component of wireless data.” Internet powerhouses like Google, Yahoo and Microsoft (which he dubs ‘the GYM consortium’) have established a model that he says is a great threat to the mobile operator community in that it establishes a direct consumer relationship and disregards the “pipe” (wireless broadband connection) used to maintain that relationship.

“The operators have to accelerate the way they define their models around wireless data so that they’re not only faster than the GYM consortium in terms of enabling popular applications, but smarter and more efficient as well,” he says. Dr. Polychronopoulos then makes a popular case for the carriers’ success: “The operators have information about the subscriber that no other entity in the internet environment can have; for example, they know everything the subscriber has done over the lifetime of their subscription and the location of each event. They don’t have to let this data outside of their networks, so they are very well positioned to win the race for the mobile internet.”

Tuesday, 14 December 2010

HetNets are hot. I hear about them in various contexts. It's difficult to find out exactly what they are and how they will work, though. There is a HetNets special issue of IEEE Communications Magazine coming out next year, but that's a long way off.

I found an interesting summary of HetNets in a Motorola Ezine, which is reproduced below:

“The bigger the cell site, the less capacity per person you have,” said Peter Jarich, research director with market intelligence firm Current Analysis. “If you shrink coverage to a couple of blocks, you are having that capacity shared with a fewer number of people, resulting in higher capacity and faster data speeds.”

This is a topic the international standards body, the Third Generation Partnership Project (3GPP), has been focusing on to make small cells part of the overall cellular network architecture.

“What we’re seeing is a natural progression of how the industry is going to be addressing some of these capacity concerns,” said Joe Pedziwiatr, network systems architect with Motorola. “There is a need to address the next step of capacity and coverage by introducing and embracing the concepts of small cells and even looking at further advances such as better use of the spectrum itself.”

As such, discussion around this small-cell concept has given rise to what are called heterogeneous networks, or Het-Nets for short. The idea is to have a macro wireless network cooperating with intelligent pico cells deployed by operators, working together within the macro network to significantly improve coverage and augment overall network capacity. Small cells can also be leveraged to improve coverage and deliver capacity inside buildings. Indoor coverage has long been the bane of mobile operators. Some mobile operators are already leveraging this concept, augmenting their cellular service offering with WiFi access for their subscriber base in order to address the in-building coverage and capacity challenges facing today’s cellular solutions.

Pedziwiatr said this Het-Net structure goes far beyond what is envisioned for femtocells or standard pico cells for that matter. Introducing a pico cell into the macro network will address but just one aspect of network congestion, namely air interface congestion. The backhaul transport network may become the next bottleneck. Finally, if all this traffic hits the core network, the congestion will just have shifted from the edge to the core.

“This requires a system focus across all aspects of planning and engineering,” Pedziwiatr said. “We’re trying to say it goes beyond that of a femto. If someone shows up at an operator and presents a pico cell, that is just one percent of what would be needed to provide true capacity relief for the macro network.”

Femtocells, otherwise known as miniature take-home base stations, are obtained by end users and plugged into a home or office broadband connection to boost network signals inside buildings. A handful of 3G operators worldwide are selling femtocells as a network coverage play. For the LTE market, the Femtocell Forum is working to convince operators of the value of a femtocell when it comes to better signal penetration inside buildings and delivering high-bandwidth services without loading the mobile network. This is possible, because the backhaul traffic runs over the fixed line connection. However, this femtocell proposition largely relies on end user uptake of them—not necessarily where operators need them, unless they install femtocells themselves or give end users incentives to acquire them.

As with any new concept, there are challenges to overcome before Het-Nets can become a reality. Het-Nets must come to market with a total cost of ownership that makes it worthwhile for an operator to realize the benefits of better capacity, higher data speeds and, most of all, a better end-user experience, said Chevli.

“The level of total cost of ownership has to be reduced. That is where the challenge lies for vendors: to ensure that any new solution revalidates every existing tenet of cellular topology and evolves it to the new paradigm being proposed,” Chevli said. “You can’t increase the number of end nodes by 25X and expect to operate or manage this new network with legacy O&M paradigms and a legacy backhaul approach.”

One of the issues is dealing with interference and Het-Net network traffic policies. “How do you manage all of these small cell networks within the macrocell network?” asked Jarich. “Right now if you have a bunch of femtocells inside a house, there is this concept that the walls stop the macrocell signals from getting in and out. You get a separation between the two. Go outdoors with small cells underlying bigger cells and you get a lot more interference and hand-off issues because devices will switch back and forth based on where the stronger signal is.”

Pedziwiatr said for a Het-Net to work, it would require a change in node management, whereby an operator isn’t burdened with managing big clusters of small cells on an individual basis. “We see elements of SON (self organizing networks), self discovery and auto optimization that will have to be key ingredients in these networks. Otherwise operators can’t manage them and the business case will be a lot less attractive,” he said.

Fortunately, the industry has already been working with and implementing concepts of SON in LTE network solutions. In the femtocell arena also, vendors have been incorporating some elements and concepts of SON so that installing them is a plug-and-play action that automatically configures the device and avoids interference. But even then, Het-Nets will require further SON enhancement to deal with new use cases, such as overlay (macro deployment) to underlay (pico deployments) mobility optimization.

When it comes to LTE, SON features are built into the standard, and are designed to offer the dual benefit of reducing operating costs while optimizing performance. SONs will do this by automating many of the manual processes used when deploying a network to reduce costs and help operators commercialize their new services faster. SON will also automate many routine, repetitive tasks involved in day-to-day network operations and management such as neighbor list discovery and management.
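
One of the SON tasks mentioned, neighbour list discovery (Automatic Neighbour Relations), can be sketched as follows. The cell identifiers and report handling are simplified for illustration, and the X2 setup a real eNodeB would trigger is omitted:

```python
# Sketch of Automatic Neighbour Relations (ANR): the eNodeB learns its
# neighbour list from UE measurement reports instead of manual planning.
class NeighbourTable:
    def __init__(self):
        self.relations = {}   # target cell id -> relation attributes

    def handle_measurement_report(self, reported_cell_id):
        """Add a previously unknown cell reported by a UE."""
        if reported_cell_id not in self.relations:
            self.relations[reported_cell_id] = {"no_handover": False,
                                                "no_x2": False}

reports = ["cell_17", "cell_23", "cell_17", "cell_9"]
table = NeighbourTable()
for cell in reports:
    table.handle_measurement_report(cell)
print(sorted(table.relations))  # duplicates collapse into one relation each
```

This is the automation the text describes: the neighbour list grows from what UEs actually observe, removing a classically manual and error-prone planning task.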

Other key sticking points are deployment and backhaul costs. If operators are to deploy many small cells in a given area, deploying them and backhauling their traffic should not become monumental tasks.

Chevli and Pedziwiatr envision Het-Nets being deployed initially in hot zone areas – where data traffic is the highest – using street-level plug-and-play nodes that can be easily installed by people with little technical know-how.

“Today, macro site selection, engineering, propagation analysis, rollout and optimization are long and expensive processes, which must change so that installers keep inventories of these units in their trucks, making rollout simple installations and power-ups,” said Pedziwiatr. “These will be maintained at a minimum with quick optimization.”

The notion of backhauling traffic coming from a large cluster of Het-Net nodes could also stymie Het-Nets altogether. Chevli said that in order to keep costs down, Het-Net backhaul needs to be a mix of cost-effective wireless or wired backhaul technology to aggregate traffic from what likely will be nodes sitting on lamp posts, walls, in-building and other similar structures. The goal then is to find a backhaul point of presence to aggregate the traffic and then put that traffic on an open transport network in the area.

Backhaul cost reductions may also be a matter of finding ways to reduce the amount of traffic forwarded to the core network, Pedziwiatr said. These types of solutions are already being developed in the 3G world to cope with the massive data traffic that is beginning to crush networks. For traffic such as Internet traffic, which doesn’t need to travel through an operator’s core network, offloading as close to the source as possible would further drive down the cost of operation by reducing the backhaul and capacity needs of the core network.

In the end, operators will incorporate smaller cells as an underlay to their macro network layer rather than relying on data offloading techniques such as femtocells and WiFi, which largely depend on the actions of subscribers and are affected by surrounding cells operating in the same unlicensed frequency. Het-Nets in licensed spectrum will soon become the keystone in attacking the ever-present congestion issue that plagues big cities, and which is only likely to get worse over time.

Thursday, 22 April 2010

Femtocells are not really becoming picocells, but when you read about the new features coming up in femtocells, you can see why operators are embracing them.

A typical picocell offers limited coverage but the same capacity as a macro cell, and can cost between £5,000 and £10,000. A femtocell offers very limited coverage and serves very few users, but it's dirt cheap.

If a compromise femtocell can be made that solves both the coverage/capacity and price problems, then it's a win-win situation for everyone.

This is where "Metro Femtocells" come into the picture. They go by different names, but let's stick with Metro Femtos.

Ubiquisys's press release about the Colo-Node HSPA Femtocell shows the direction in which device manufacturers are moving. It supports 16 users (as opposed to 4) and a range of 2km (as opposed to a couple of hundred metres). Picochip has already released a chip that can serve 32 users at 2km range. These femtos are Release 7 compliant, with a 42Mbps peak downlink and an 11Mbps peak uplink.

The good thing is that they may soon be used to fill coverage black holes, but that could also mean operators stop putting as much effort into network optimisation.

Tuesday, 16 February 2010

SON concepts are included in the LTE (E-UTRAN) standards starting from the first release of the technology (Rel-8) and expand in scope with subsequent releases. A key goal of 3GPP standardization is the support of SON features in multi-vendor network environments. 3GPP has defined a set of LTE SON use cases and associated SON functions. The standardized SON features effectively track the expected LTE network evolution stages as a function of time. With the first commercial networks expected to launch in 2010, the initial focus of Rel-8 has been functionality associated with initial equipment installation and integration.

The scope of the first release of SON (Rel-8) includes the following 3GPP functions, covering different aspects of the eNodeB self-configuration use case:

• Automatic Inventory

• Automatic Software Download

• Automatic Neighbor Relations

• Automatic PCI Assignment
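To give a feel for the last item in the list above, automatic PCI assignment can be viewed as a graph-colouring problem: each cell must pick one of LTE's 504 physical cell identities so that it never collides with a direct neighbour. The sketch below is purely illustrative (the greedy strategy, cell names and neighbour graph are my own, not a standardized 3GPP algorithm):

```python
# Illustrative sketch: PCI assignment as greedy graph colouring.
# The neighbour graph and cell names are hypothetical.

NUM_PCIS = 504  # LTE defines 504 physical cell identities (0..503)

def assign_pcis(neighbours):
    """Assign each cell the lowest PCI not used by an already-assigned neighbour."""
    pci = {}
    for cell in sorted(neighbours):
        used = {pci[n] for n in neighbours[cell] if n in pci}
        pci[cell] = next(p for p in range(NUM_PCIS) if p not in used)
    return pci

# Hypothetical neighbour relations between four cells
graph = {
    "A": {"B", "C"},
    "B": {"A", "C"},
    "C": {"A", "B", "D"},
    "D": {"C"},
}
pcis = assign_pcis(graph)
# No two direct neighbours end up with the same PCI
assert all(pcis[c] != pcis[n] for c in graph for n in graph[c])
```

A real implementation would also have to avoid "confusion" (two neighbours of the same cell sharing a PCI), which means colouring against neighbours-of-neighbours as well.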

The next release of SON, as standardized in Rel-9, will provide SON functionality addressing more mature networks. It includes the following additional use cases:

• Coverage & Capacity Optimization

• Mobility optimization

• RACH optimization

• Load balancing optimization

Other SON-related aspects being discussed in the framework of Rel-9 include improvements to the telecom management system to increase energy savings, a new OAM interface to control home eNodeBs, UE reporting functionality to minimize the amount of drive testing required, and studies on self-testing and self-healing functions. It should be clarified that SON-related functionality will continue to expand through subsequent releases of the LTE standard.

The SON specifications have been built over the existing 3GPP network management architecture, reusing much functionality that existed prior to Rel-8. These management interfaces are being defined in a generic manner to leave room for innovation on different vendor implementations. More information on the SON capabilities in 3GPP can be found in 3G Americas’ December 2009 white paper, The Benefits of SON in LTE.

SON technologies have been introduced in Rel-8/Rel-9 to help decrease the CAPEX and OPEX of the system. LTE-Advanced further enhances the SON with the following features:

Coverage and Capacity Optimization. Coverage and Capacity Optimization techniques are currently under study in 3GPP and will provide continuous coverage and optimal capacity of the network. The performance of the network can be obtained via key measurement data and adjustments can then be made to improve the network performance. For instance, call drop rates will give an initial indication of the areas within the network that have insufficient coverage and traffic counters can be used to identify capacity problems. Based on these measurements, the network can optimize the performance by trading off capacity and coverage.
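The coverage/capacity trade-off described above can be caricatured as a simple measurement-driven rule: poor coverage suggests widening the cell footprint, while congestion suggests shrinking it. The thresholds, step size and use of antenna tilt below are entirely hypothetical, just a sketch of the feedback idea:

```python
# Illustrative sketch of a coverage/capacity feedback rule.
# All thresholds and the tilt step are hypothetical, not from 3GPP.

def tune_cell(drop_rate, prb_utilisation):
    """Suggest an antenna tilt adjustment (degrees) from two key measurements."""
    if drop_rate > 0.02:        # too many drops: coverage hole, so up-tilt
        return -1.0
    if prb_utilisation > 0.80:  # resource blocks congested: down-tilt to shed edge traffic
        return +1.0
    return 0.0                  # network performing acceptably, leave as-is
```

In practice the same loop could drive transmit power or handover offsets instead of tilt; the point is that adjustments are derived from key measurement data rather than manual tuning.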

Mobility Robustness Optimization. Mobility Robustness Optimization aims at reducing the number of handover-related radio link failures by optimally setting the handover parameters. A secondary objective is to avoid the ping-pong effect or prolonged connection to a non-optimal cell.
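One way to picture this is as a counter-driven nudge on the handover hysteresis: too-late handovers argue for triggering earlier, too-early ones (and ping-pong) for triggering later. The failure-rate thresholds and step size here are hypothetical, a sketch rather than the standardized mechanism:

```python
# Illustrative sketch of a mobility-robustness adjustment rule.
# Thresholds and step size are hypothetical.

def adjust_handover(too_late, too_early, total_ho, hysteresis_db):
    """Nudge handover hysteresis based on observed failure counts."""
    STEP = 0.5  # dB per adjustment round (assumed)
    if total_ho == 0:
        return hysteresis_db
    if too_late / total_ho > 0.02:
        return hysteresis_db - STEP   # trigger handovers sooner
    if too_early / total_ho > 0.02:
        return hysteresis_db + STEP   # trigger later; also damps ping-pong
    return hysteresis_db
```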

Mobility Load Balancing. Related to Mobility Robustness is Mobility Load Balancing, which aims to optimize the cell reselection and handover parameters to deal with unequal traffic loads. The goal of the study is to achieve this while minimizing the number of handovers and redirections needed to achieve the load balancing.
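The load-balancing idea above can be sketched as shifting the effective handover boundary between two cells via an offset: when one cell is much busier, make its neighbour look more attractive to edge users. The offset name, step and threshold below are assumptions for illustration:

```python
# Illustrative sketch of a load-balancing offset adjustment between two cells.
# Step size and load threshold are hypothetical.

def balance(load_a, load_b, offset_db, step=0.5, threshold=0.2):
    """Shift the handover boundary away from the busier cell."""
    if load_a - load_b > threshold:
        return offset_db + step   # push A's edge users towards B
    if load_b - load_a > threshold:
        return offset_db - step   # push B's edge users towards A
    return offset_db              # loads roughly equal: no change
```

Keeping the step small is one way to honour the stated goal of minimizing the number of handovers and redirections needed to achieve the balance.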

RACH Optimization. To improve the access to the system, RACH Optimization has been proposed to optimize the system parameters based upon monitoring the network conditions, such as RACH load and the uplink interference. The goal is to minimize the access delays for all the UEs in the system and the RACH load.
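A toy version of this monitoring loop might scale the random-access backoff with the observed collision rate: heavy load backs off harder, light load shortens access delay. The rates, bounds and doubling/halving policy are my own illustrative choices, not 3GPP parameters:

```python
# Illustrative sketch of a RACH backoff adjustment from observed load.
# Collision-rate thresholds and backoff bounds are hypothetical.

def tune_rach(attempts, successes, backoff_ms):
    """Scale the RACH backoff window with the observed collision rate."""
    if attempts == 0:
        return backoff_ms
    collision_rate = 1 - successes / attempts
    if collision_rate > 0.10:
        return min(backoff_ms * 2, 960)   # heavy load: spread retries out
    if collision_rate < 0.02:
        return max(backoff_ms // 2, 10)   # light load: cut access delay
    return backoff_ms
```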

In addition to the enhanced SON technologies described above, minimization of manual drive testing is also currently under examination in 3GPP to reduce the effort needed to optimize the LTE-Advanced network. The main goal is to automate the collection of UE measurement data, minimizing the need for operators to rely on manual drive tests to optimize the network. In general, a UE that is experiencing issues, such as lack of coverage, unevenly distributed traffic or low user throughput, will automatically feed measurement data back to the network, which may be used as a foundation for network optimization.