Friday, 1 September 2017

Bell Labs, which has played a significant role in telecoms history and has a glorious list of achievements, has created a collection of short films highlighting the brilliant minds who created the invisible nervous system of our society. Some of you may be aware that Bell Labs is now part of Nokia, having previously belonged to Alcatel-Lucent, Lucent and, before that, AT&T.

The playlist of five videos is embedded below, followed by short descriptions of each.

Video 1: Introduction
Introducing 'Future Impossible', a collection of short films highlighting the brilliant minds who created the invisible nervous system of our society, a fantastic intelligent network of wires and cables undergirding and infiltrating every aspect of modern life.

Video 2: The Shannon Limit
In 1948, father of communications theory Claude Shannon developed the law that dictated just how much information could ever be communicated down any path, anywhere, using any technology. The maximum rate of this transmission would come to be known as the Shannon Limit. Researchers have spent the following decades trying to achieve this limit, and to go beyond it.

Video 3: The Many Lives of Copper
In the rush to find the next generation of optical communications, much of our attention has moved away from that old standby, copper cabling. But we already have miles and miles of the stuff under our feet and over our heads. What if instead of laying down new optical fiber cable everywhere, we could figure out a way to breathe new life into copper and drive the digital future that way?

Video 4: The Network of You
In the future, every human will be connected to every other human on the planet by a wireless network. But that's just the beginning. Soon the stuff of modern life will all be part of the network, and it will unlock infinite opportunities for new ways of talking, making and being. The network will be our sixth sense, connecting us to our digital lives. In this film, we ponder that existence and how it is enabled by inventions and technologies developed over the past 30 years, and the innovations that still lie ahead of us.

Video 5: Story of Light
When Alexander Graham Bell discovered that sound could be carried by light, he never could have imagined the millions of written text and audio and video communications that would one day be transmitted around the world every second on a single strand of fiber with the dimensions of a human hair. Follow the journey of a single text message zipping around the globe at the speed of light, then meet the researchers who have taken up Bell's charge.
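As an aside, the Shannon Limit described in Video 2 is easy to compute. Here is a minimal sketch in Python; the 20 MHz / 20 dB numbers are purely illustrative:

```python
import math

def shannon_limit_bps(bandwidth_hz, snr_db):
    """Shannon capacity C = B * log2(1 + SNR), in bits per second."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 20 MHz channel at 20 dB SNR can never carry more than ~133 Mbps,
# no matter how clever the modulation and coding scheme.
print(f"{shannon_limit_bps(20e6, 20) / 1e6:.1f} Mbps")
```

This is the bound researchers have been chasing ever since: you can approach it with better coding (turbo codes, LDPC), but you cannot exceed it.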

For anyone interested, Wikipedia has good, detailed information on the history of Bell Labs here.

Sunday, 25 October 2015

Continuing with the updates from the 5G RAN workshop; part 1 and part 2 are here.

Dish Network wants to build a satellite-based 5G network. A recent article from Light Reading reports the following:

Dish states that there are misconceptions about what satellite technology can deliver for 5G networks. Essentially, Dish says that satellites will be capable of delivering two-way communications to support 5G.

A hybrid ground-and-space 5G network would use small satellites that each use a "spot beam" to provide a dedicated area of two-way coverage on the ground. This is different from the old model of using one satellite with a single beam to provide a one-way service, like a TV broadcast, over a landmass.

Dish argues that newer, smaller satellites equipped with the latest multi-antenna arrays (MIMO) would allow for "ubiquitous connectivity through hybrid satellite and terrestrial networks," the operator writes. In this model, satellites could connect areas that would otherwise be hard to network, like mountains and lakes.

The presentation from Dish is as follows:

Alcatel-Lucent provided a whitepaper along with their presentation. The paper gives an interesting view of 5G from their point of view. It's embedded below:

The presentation from Kyocera focused on TD-LTE, which I think will play a prominent role in 5G. With wide channels, TDD allows the channel to be estimated accurately, something FDD struggles with at high frequencies. Their presentation is available here.

The presentation from NEC focused on the different technologies that will play a role in 5G. Their presentation is available here.

The final presentation we will look at this time is by the South Korean operator KT. What is interesting is that in part 1 we saw in the chairman's summary that 5G will come in two phases: Rel-15 will be phase 1 and Rel-16 will be phase 2. In the summary slide of KT's presentation, however, it looks like they are going to consider Rel-14 as 5G. It's not at all surprising, considering that Verizon has said they want to commercialise 5G by 2017, even though 5G will not be fully specified by 3GPP by then. Anyway, here is the presentation by KT.

Sunday, 14 June 2015

People often ask at conferences whether TD-LTE is a fad or something that will continue to exist alongside FDD networks. TDD networks were a bit tricky to implement in the past because the whole network has to be time-synchronised to avoid interference. If there was another TDD network in an adjacent band, it would have to be time-synchronised with the first network too; and in areas bordering another country that had its own TDD network in the same band, that network would also have to be synchronised. This complexity meant that most operators were happy to stick with FDD.

In 5G networks, at higher frequencies it would also make much more sense to use TDD to estimate the channel accurately. Because the same channel is used in downlink and uplink, the downlink channel can be estimated accurately from the uplink channel conditions. Due to the small transmission time intervals (TTIs), these channel estimates remain fresh and accurate. Another advantage is that a beam can be formed and directed exactly at the intended user while appearing as a null to other users.

This is where 8T8R, or 8 Transmit and 8 Receive antennas at the base station, can help. The more antennas, the narrower and stronger the beam they can create. This helps send more energy to users at the cell edge and hence provides better, more reliable coverage there.
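To put a number on this: with TDD reciprocity, the base station can derive its transmit beamforming weights directly from the uplink channel estimate. Here is a toy sketch of conjugate (maximum-ratio) beamforming; the random channel coefficients are purely illustrative:

```python
import math
import random

random.seed(1)

N = 8  # 8T8R: eight antenna branches at the base station

# Per-antenna uplink channel estimate (Rayleigh-like fading). With TDD,
# reciprocity means the same coefficients apply to the downlink.
h = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]

# Conjugate (maximum-ratio) beamforming weights, normalised to unit power.
norm = math.sqrt(sum(abs(x) ** 2 for x in h))
w = [x.conjugate() / norm for x in h]

# Received power with the beam pointed at the user, versus the average
# power a single antenna would deliver at the same transmit power.
bf_power = abs(sum(wi * hi for wi, hi in zip(w, h))) ** 2
single_antenna = sum(abs(x) ** 2 for x in h) / N

gain_db = 10 * math.log10(bf_power / single_antenna)
print(f"array gain ≈ {gain_db:.1f} dB")  # 10*log10(8) ≈ 9.0 dB
```

The 9 dB array gain over a single antenna is exactly what lets the base station push more energy to the cell edge without raising total transmit power.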

What do these antennas look like? 8T8R needs 8 antenna elements at the base station cell, typically delivered using four X-polar columns about half a wavelength apart. I found the above picture on antenna specialist Quintel's page here, where the four-column example is shown on the right. At spectrum bands such as 2.3GHz, 2.6GHz and 3.5GHz, where TD-LTE networks are currently deployed, the antenna width is still practical. Quintel's webpage also describes how their technology allows 8T8R to be effectively emulated using only two X-polar columns, promising slimline antenna solutions at lower frequency bands. China Mobile and Huawei claim to be the first to deploy these four X-polar column 8T8R antennas. Sprint in the USA is another operator that has been actively deploying 8T8R antennas.

Sprint's deployment of 8T8R (eight-branch transmit and eight-branch receive) radios in its 2.5 GHz TDD LTE spectrum is resulting in increased data throughput as well as coverage according to a new report from Signals Research. "Thanks to TM8 [transmission mode 8] and 8T8R, we observed meaningful increases in coverage and spectral efficiency, not to mention overall device throughput," Signals said in its executive summary of the report.

The firm said it extensively tested Sprint's network in the Chicago market using Band 41 (2.5 GHz) and Band 25 (1.9 GHz) in April using Accuver's drive test tools and two Galaxy Note Edge smartphones. Signals tested TM8 vs. non-TM8 performance, Band 41 and Band 25 coverage and performance as well as 8T8R receive vs. 2T2R coverage/performance and stand-alone carrier aggregation.

Sprint has been deploying 8T8R radios in its 2.5 GHz footprint, which the company has said will allow its cell sites to send multiple data streams, achieve better signal strength and increase data throughput and coverage without requiring more bandwidth.

The company also has said it will use carrier aggregation technology to combine TD-LTE and FDD-LTE transmission across all of its spectrum bands. In its fourth quarter 2014 earnings call with investors in February, Sprint CEO Marcelo Claure said implementing carrier aggregation across all Sprint spectrum bands means Sprint eventually will be able to deploy 1900 MHz FDD-LTE for uplink and 2.5 GHz TD-LTE for downlink, and ultimately improve the coverage of 2.5 GHz LTE to levels that its 1900 MHz spectrum currently achieves. Carrier aggregation, which is the most well-known and widely used technique of the LTE Advanced standard, bonds together disparate bands of spectrum to create wider channels and produce more capacity and faster speeds.
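As a rough illustration of the carrier-aggregation arithmetic mentioned in the report: peak throughput scales roughly linearly with aggregated bandwidth. The ~150 Mbps per 20 MHz figure below assumes 2x2 MIMO with 64QAM, and a TDD carrier would deliver less in practice because its subframes are shared between uplink and downlink; the numbers are ballpark only.

```python
# Ballpark LTE peak rate: ~150 Mbps per 20 MHz carrier (2x2 MIMO, 64QAM),
# i.e. roughly 7.5 Mbps per MHz. TDD carriers deliver less in practice
# because subframes are shared between uplink and downlink.
PEAK_MBPS_PER_MHZ = 150 / 20

carriers_mhz = {
    "Band 25 (1.9 GHz FDD)": 10,
    "Band 41 (2.5 GHz TDD)": 20,
}

for band, bw in carriers_mhz.items():
    print(f"{band}: {bw} MHz -> ~{bw * PEAK_MBPS_PER_MHZ:.0f} Mbps peak")

total_bw = sum(carriers_mhz.values())
total_mbps = total_bw * PEAK_MBPS_PER_MHZ
print(f"Aggregated: {total_bw} MHz -> ~{total_mbps:.0f} Mbps peak")
```

The point is simply that bonding two carriers gives one wider logical pipe, so the device sees the sum of the per-carrier peaks.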

Alcatel-Lucent has a good article in their TECHzine, an extract from that below:

Field tests on base stations equipped with beamforming and 8T8R technologies confirm the sustainability of the solution. Operators can make the most of transmission (Tx) and receiving (Rx) diversity by adding in Tx and Rx paths at the eNodeB level, and beamforming delivers a direct impact on uplink and downlink performance at the cell edge.

By using 8 receiver paths instead of 2, cell range is increased by a factor of 1.5 – and this difference is emphasized by the fact that the number of sites needed is reduced by nearly 50 per cent. Furthermore, using the beamforming approach in transmission mode generates a specific beam per user which improves the quality of the signal received by the end-user's device, or user equipment (UE). In fact, steering the radiated energy in a specific direction can reduce interference and improves the radio link, helping enable a better throughput. The orientation of the beam is decided by shifting the phases of the Tx paths based on signal feedback from the UE. This approach can deliver double the cell edge downlink throughput and can increase global average throughput by 65 per cent.

These types of deployments are made possible by using innovative radio heads and antenna solutions. In traditional deployments, it would require the installation of multiple remote radio heads (RRH) and multiple antennas at the site to reach the same level of performance. The use of an 8T8R RRH and a smart antenna array, comprising 4 cross-polar antennas in a radome, means an 8T8R sector deployment can be done within the same footprint as traditional systems.
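The 1.5x range claim in the extract can be sanity-checked with a simple link-budget calculation. The path-loss exponent of 3.5 below is my assumption for an urban macro environment, not a figure from the article:

```python
import math

rx_gain_db = 10 * math.log10(8 / 2)  # 8 Rx branches vs. 2 -> ~6 dB
path_loss_exponent = 3.5             # assumed urban macro value

# Extra gain lets the link absorb more path loss, which translates into
# extra range: range_factor = 10^(gain_dB / (10 * n))
range_factor = 10 ** (rx_gain_db / (10 * path_loss_exponent))

# Cell area scales with range squared, so fewer sites cover the same area.
site_reduction = 1 - 1 / range_factor ** 2

print(f"range factor ≈ {range_factor:.2f}")            # ~1.5x, as quoted
print(f"site count reduction ≈ {site_reduction:.0%}")  # 'nearly 50 per cent'
```

With these assumptions the 6 dB receive gain yields roughly a 1.5x range increase and about half the site count, consistent with the figures Alcatel-Lucent quotes.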

If you are interested in seeing pictures of different 8T8R antennas like the one above, see here. While that page shows Samsung's antennas, you can navigate to equipment from other vendors.

Finally, if you can provide any additional information, or feel something is incorrect, please let me know via the comments below.

Sunday, 12 April 2015

We have talked about unlicensed LTE (LTE-U), re-branded as LTE-LAA, many times on this blog and the 3G4G Small Cells blog. In fact, some analysts have decided to call the current non-standardised Release-12 version LTE-U, and the standardised version that will arrive as part of Release-13 LTE-LAA.

There is a lot of unease in the Wi-Fi camp because LTE-LAA may hog the 5GHz spectrum that is available licence-exempt for Wi-Fi and other similar (future) technologies. Even if LAA is more efficient, as some vendors claim, it would reduce the spectrum available to Wi-Fi users.

As a result, some vendors have recently proposed LTE/Wi-Fi Link Aggregation as a new feature in Release-13. Alcatel-Lucent, Ruckus Wireless and Qualcomm have all been promoting this. In fact, Qualcomm released a pre-MWC teaser video on YouTube. The demo video is embedded below:

The Korean operator KT was also involved in demoing this at MWC, along with Samsung and Qualcomm. They have termed this feature LTE-HetNet, or LTE-H.

As can be seen, the data is split and combined at the PDCP layer. While the example above shows a practical implementation using C-RAN, with a Remote Radio Head (RRH) and a BaseBand Unit (BBU), the picture at the top shows the LTE anchor in the eNodeB. An ideal backhaul would be needed to keep latency in the eNodeB to a minimum when combining cellular and Wi-Fi data.
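As a purely illustrative sketch of what a PDCP-layer split might look like: the function name and the fixed 7:3 weighting below are hypothetical, and real schedulers steer traffic based on per-leg queue depth and delay feedback rather than a static ratio.

```python
def pdcp_split(pdus, lte_weight=7, wifi_weight=3):
    """Toy PDCP splitter: route PDCP PDUs to the LTE or Wi-Fi leg using a
    weighted round-robin. PDCP sequence numbers (here just the PDU values)
    let the receiver reorder and recombine the two streams."""
    lte, wifi = [], []
    credit_lte = credit_wifi = 0
    period = lte_weight + wifi_weight
    for pdu in pdus:
        credit_lte += lte_weight
        credit_wifi += wifi_weight
        if credit_lte >= credit_wifi:
            lte.append(pdu)
            credit_lte -= period
        else:
            wifi.append(pdu)
            credit_wifi -= period
    return lte, wifi

# Ten PDUs, roughly 70% steered to the LTE leg.
lte, wifi = pdcp_split(list(range(10)))
print(len(lte), len(wifi))  # 7 3
```

Because the split happens above RLC/MAC, neither air interface needs changes; this is exactly why an ideal (low-latency) backhaul between the anchor and the Wi-Fi access point matters, since the PDCP reordering buffer has to wait for the slower leg.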

The above table compares the three main techniques for increasing data rates through aggregation: CA, LTE-U/LAA and LTE-H/LWA. While CA has been part of 3GPP Release-10 and is available in more or less all new LTE devices, LTE-U and LTE-H are new and would need modifications in the network as well as in the devices. LTE-H would ultimately provide similar benefits to LTE-U, but it is a safer option from a device and spectrum point of view, and would be a solution more agreeable to everyone, including the Wi-Fi community.

A final word: last year we wrote a whitepaper laying out our vision of what 4.5G is. We put it simply: in 4.5G, you can use Wi-Fi and LTE at the same time. I think LTE-H fulfils that vision much better than the other proposals.

Tuesday, 3 February 2015

I had the pleasure of speaking at the CW (Cambridge Wireless) event ‘5G: A Practical Approach’. It was a very interesting event with great speakers. Over the next few weeks, I will hopefully add the presentations from some of the other speakers too.

In fact, before the presentation (below), I had a few discussions on Twitter to validate whether people agreed with my assumptions. For those who use Twitter, you may want to have a look at some of these below:

Friday, 21 November 2014

The new network represents two world-beating achievements for Inmarsat and its partners. It will be the world’s first truly hybrid aviation network, consisting of an S-band satellite (Europasat), constructed by Thales Alenia Space, and a Europe-wide S-band ground network. Over the integrated network, based on state-of-the-art LTE technology and access to sufficient spectrum resources, Inmarsat will be offering airlines the world’s fastest in-flight broadband connectivity service with speeds up to 75Mbps, far in excess of the limited capabilities of North American ATG systems.

Alcatel-Lucent and Inmarsat will work together to develop the ground infrastructure component of the new Europe-wide network. Alcatel-Lucent has proven expertise in the development of 4G LTE-based air-to-ground technology and was the world’s first company to field trial this technology in 2011. The initial contract awarded to Alcatel-Lucent will see the global telecommunications equipment company adapting their existing 4G LTE technology to support the S-band spectrum.

Sunday, 19 October 2014

Before we look at what 4.5G is, let's look at what is not 4.5G. First and foremost, Carrier Aggregation is not 4.5G. It's the foundation of real 4G. I keep showing this picture on Twitter.

I am sure some people must be really bored of this picture of mine that I keep showing. LTE, rightly referred to as 3.9G or pre-4G by the South Korean and Japanese operators, was the foundation of 'real' 4G, a.k.a. LTE-Advanced. So who has been referring to LTE-A as 4.5G (and even 5G)? Here you go:

Back in June, we published a whitepaper in which we referred to 4.5G as LTE and Wi-Fi working together. When we say LTE, that includes LTE-A. The Release-12 standards allow simultaneous use of LTE(-A) and Wi-Fi, with selected streams on Wi-Fi and others on cellular.

Some people don't realise how much spectrum is available at 5GHz; hopefully the above picture gives an idea. This is exactly what has tempted the cellular community to come up with LTE-U (a.k.a. LA-LTE, LAA).

At a recent event in London called the 5G Huddle, Alcatel-Lucent presented their views on what 4.5G would mean. The slide above gives quite a detailed view of what this intermediate step before 5G would be. Some related tweets from the 5G Huddle follow:

So 5G will be evolution of 4.5G using DC & CA - but much better signalling, more energy efficient as new air interface for IoT
— Real Wireless (@real_wireless) September 23, 2014

Finally, in a recent GSMA event, Huawei used the term 4.5G to set out their vision and also propose a time-frame as follows:

While in the Alcatel-Lucent slide I could visualise 4.5G as our vision of LTE(-A) + Wi-Fi + some more stuff, I am finding it difficult to visualise all the changes being proposed by Huawei. How are we going to see a peak rate of 10Gbps, for example?

I should mention that some companies have told me their vision of 5G is M2M and D2D, so Huawei is not very far from reality here.

We should keep in mind that 4G, 4.5G and 5G are terms we use to make end users aware of what new cellular technology can do for them. Most of these people understand simple concepts like speed and latency. We should be careful what we tell them: we do not want to make things confusing or complicated, or make false promises and fail to deliver on them.

xoxoxo Added on 2nd January 2015 oxoxox

Chinese vendor ZTE has said it plans to launch a 'pre-5G' test base station in 2015, with commercial use possible in 2016 following tests and adjustment. Here is what they think pre-5G means:

Tuesday, 29 October 2013

Access Network Discovery and Selection Function (ANDSF) is still evolving, and with the introduction of Hotspot 2.0 (HS 2.0) there is a good possibility of providing seamless roaming from cellular to Wi-Fi, Wi-Fi to Wi-Fi, and Wi-Fi to cellular.

There is a good paper (not very recent) by Alcatel-Lucent and BT that explains these roaming scenarios and other ANDSF policy-related information very well. It's embedded below:

Tuesday, 15 October 2013

Software Defined Networking (SDN) and Network Functions Virtualization (NFV) are the two recent buzzwords taking the telecoms market by storm. Every network vendor now has some kind of strategy for using NFV and SDN to help operators save money. So what exactly is NFV? I found a good, simple video by Spirent that explains it well. Here it is:

To add a description to this, I will borrow an explanation and a very good example from Wendy Zajack, Director of Product Communications at Alcatel-Lucent, on the ALU blog:

Let’s take this virtualization concept to a network environment. For me cloud means I can get my stuff wherever I am and on any device – meaning I can pull out my smart phone, my iPad, my computer – and show my mom the latest pictures of her grandkids. I am not limited to only having one type of photo album I put my photos in – and only that. I can also show her both photos and videos together – and am not just limited to showing her the kids in one format and on one device.

Today in a telecom network there is a lot of equipment that can only do one thing. These machines are focused on what they do and they do it really well – this is why telecom providers are considered so ‘trusted.’ Back in the days of landline phones, even when the power was out you could always make a call. These machines run alone with dedicated resources. These machines are made by various different vendors and speak various languages or ‘protocols’ to exchange information with each other when necessary. Some don’t even talk at all – they are just set up and then left to run. So, every day your operator is running a mini United Nations and corralling it to get you access to all of your stuff. But it is a United Nations with a fixed number of seats, and with only a specific nation allowed to occupy a specific seat, with the seat left unused if there was a no-show. That is a lot of underutilized equipment that is tough and expensive to manage. It also has a shelf life of 15 years… while your average store-bought computer is doubling in speed every 18 months.

Virtualizing the network means the ability to run a variety of applications (or functions) on a standard piece of computing equipment, rather than on dedicated, specialized processors and equipment, to drive lower costs (more value), more re-use of the equipment between applications (more sharing), and a greater ability to change what is using the equipment to meet changing user needs (more responsiveness). This has already started in enterprises as a way to control IT costs, improve performance and, of course, be much greener.

To give this a sports analogy – imagine if in American football instead of having specialists in all the different positions (QB, LB, RB, etc), you had a bunch of generalists who could play any position – you might only need a 22 or 33 man squad (2 or 3 players for every position) rather than the normal squad of 53. The management of your team would be much simpler as ‘one player fits all’ positions. It is easy to see how this would benefit a service provider – simplifying the procurement and management of the network elements (team) and giving them the ability to do more, with less.

Dimitris Mavrakis of Informa wrote an excellent summary of the IIR SDN and NFV conference on the Informa blog here. His article is worth reading in full, but I want to highlight one section that shows how operators think deployment will be done:

The speaker from BT provided a good roadmap for implementing SDN and NFV:

Start with a small part of the network, which may not be critical for the operation of the whole. Perhaps introduce incremental capacity upgrades or improvements in specific and isolated parts of the network.

Integrate with existing OSS/BSS and other parts of the network.

Plan a larger-scale rollout so that it fits with the longer-term network strategy.

Deutsche Telekom is now considered to be in the first phase of deployment, with a small trial called Project TeraStream at Hrvatski Telekom, its Croatian subsidiary. BT, Telefonica, NTT Communications and other operators are at a similar stage, although DT is considered the first to deploy SDN and NFV for commercial network services beyond the data center.

Stage 2 in the roadmap is a far more complicated task. Integrating with existing components that may perform the same function but are not virtualized requires east-west APIs that are not clearly defined, especially when a network is multivendor. This is a very active point of discussion, but it remains to be seen whether Tier-1 vendors will be willing to openly integrate with their peers and even smaller, specialist vendors. OSS/BSS is also a major challenge, where multivendor networks are controlled by multiple systems and introducing a new service may require revising several parameters in many of these OSS/BSS consoles. This is another area that is not likely to change rapidly but rather in small, incremental steps.

The final stage is perhaps the biggest barrier due to the financial commitment and resources required. Long-term strategy may translate to five or even 10 years ahead – when networks are fully virtualized – and the economic environment may not allow such bold investments. Moreover, it is not clear if SDN and NFV guarantee new services and revenues outside the data center or operator cloud. If they do not, both technologies – and similar IT concepts – are likely to be deployed incrementally and replace equipment that reaches end-of-life. Cost savings in the network currently do not justify forklift upgrades or the replacement of adequately functional network components.

There is also a growing realization that bare-metal platforms (i.e., the proprietary hardware-based platforms that power today’s networks) are here to stay for several years. This hardware has been customized and adapted for use in telecom networks, allowing high performance for radio, core, transport, fixed and optical networks. Replacing these high-capacity components with virtualized ones is likely to affect performance significantly and operators are certainly not willing to take the risk of disrupting the operation of their network.

A major theme at the conference was that proprietary platforms (particularly ATCA) will be replaced by common off-the-shelf (COTS) hardware. ATCA is a hardware platform designed specifically for telecoms, but several vendors have adapted the platform to their own cause, creating fragmentation, incompatibility and vendor lock-in. Although ATCA is in theory telecoms-specific COTS, proprietary extensions have forced operators to turn to COTS, which is now driven by IT vendors, including Intel, HP, IBM, Dell and others.

ETSI has just published first specifications on NFV. Their press release here says:

ETSI has published the first five specifications on Network Functions Virtualisation (NFV). This is a major milestone towards the use of NFV to simplify the roll-out of new network services, reduce deployment and operational costs and encourage innovation.

These documents clearly identify an agreed framework and terminology for NFV which will help the industry to channel its efforts towards fully interoperable NFV solutions. This in turn will make it easier for network operators and NFV solutions providers to work together and will facilitate global economies of scale.

The IT and Network industries are collaborating in ETSI's Industry Specification Group for Network Functions Virtualisation (NFV ISG) to achieve a consistent approach and common architecture for the hardware and software infrastructure needed to support virtualised network functions. Early NFV deployments are already underway and are expected to accelerate during 2014-15. These new specifications have been produced in less than 10 months to satisfy the high industry demand – NFV ISG only began work in January 2013.

NFV ISG was initiated by the world's leading telecoms network operators. The work has attracted broad industry support and participation has risen rapidly to over 150 companies of all sizes from all over the world, including network operators, telecommunication equipment vendors, IT vendors and technology providers. Like all ETSI standards, these NFV specifications have been agreed by a consensus of all those involved.

The five published documents (which are publicly available via www.etsi.org/nfv) include four ETSI Group Specifications (GSs) designed to align understanding about NFV across the industry. They cover NFV use cases, requirements, the architectural framework, and terminology. The fifth GS defines a framework for co-ordinating and promoting public demonstrations of Proof of Concept (PoC) platforms illustrating key aspects of NFV. Its objective is to encourage the development of an open ecosystem by integrating components from different players.

Work is continuing in NFV ISG to develop further guidance to industry, and more detailed specifications are scheduled for 2014. In addition, to avoid the duplication of effort and to minimise fragmentation amongst multiple standards development organisations, NFV ISG is undertaking a gap analysis to identify what additional work needs to be done, and which bodies are best placed to do it.

Monday, 23 September 2013

I was talking about push-to-share back in 2007 here. Now, in a recent presentation from ALU, eMBMS has been suggested as a solution for PTT-like services for public safety. Not sure if or when we will see this, but I hope it's sooner rather than later. Anyway, the presentation is embedded below. Feel free to add your comments: