Long-Term Evolution (LTE) is the next step in the GSM evolutionary path beyond 3G technology, and it is strongly positioned to be the dominant global standard for 4G cellular networks. This chapter provides an overview of the LTE radio interface.

In Part I, we discussed the inherent challenges and associated technical solutions in designing a broadband wireless network. From here onward, we describe the technical details of the LTE specifications. As a starting point, in this chapter we provide an overview of the LTE radio interface. The 3rd Generation Partnership Project (3GPP) defines a separable network structure: it divides the whole network into a radio access network (RAN) and a core network (CN), which makes it feasible to evolve each part independently. The Long-Term Evolution (LTE) project in 3GPP focuses on enhancing the UMTS Terrestrial Radio Access (UTRA), the 3G RAN developed within 3GPP, and on optimizing 3GPP's overall radio access architecture. Another parallel project in 3GPP is the Evolved Packet Core (EPC), which focuses on CN evolution with a flatter, all-IP, packet-based architecture. The complete packet system consisting of LTE and EPC is called the Evolved Packet System (EPS). This book focuses on LTE, while the EPC is discussed only when necessary. LTE is also referred to as Evolved UMTS Terrestrial Radio Access (E-UTRA), and the RAN of LTE is also referred to as the Evolved UMTS Terrestrial Radio Access Network (E-UTRAN).

The radio interface of a wireless network is the interface between the mobile terminal and the base station; in the case of LTE, it is located between the E-UTRAN and the user equipment (UE, the name for the mobile terminal in 3GPP). Compared to the UMTS Terrestrial Radio Access Network (UTRAN) for 3G systems, which has two logical entities, the Node-B (the radio base station) and the radio network controller (RNC), the E-UTRAN network architecture is simpler and flatter: it is composed of only one logical node, the evolved Node-B (eNode-B). The RAN architectures of UTRAN and E-UTRAN are shown in Figure 6.1. Compared to the traditional Node-B, the eNode-B supports additional features, such as radio resource control, admission control, and mobility management, which were originally contained in the RNC. This flatter structure simplifies network operation and allows for higher throughput and lower latency over the radio interface.

The LTE radio interface aims for a long-term evolution, so it is designed with a clean slate approach as opposed to High-Speed Packet Access (HSPA), which was designed as an add-on to UMTS in order to increase throughput of packet switched services. HSPA is a collection of High-Speed Downlink Packet Access (HSDPA) and High-Speed Uplink Packet Access (HSUPA). The clean slate approach allows for a completely different air interface, which means that advanced techniques, including Orthogonal Frequency Division Multiplexing (OFDM) and multiantenna transmission and reception (MIMO), could be included from the start of the standardization of LTE. For multiple access, it moves away from Code Division Multiple Access (CDMA) and instead uses Orthogonal Frequency Division Multiple Access (OFDMA) in the downlink and Single-Carrier Frequency Division Multiple Access (SC-FDMA) in the uplink. All these techniques were described in detail in Part I, so in Part II we assume a basic knowledge of a wireless system, antenna diversity, OFDMA, and other topics covered in Part I.

In this chapter, we provide an introduction to the LTE radio interface, and describe its hierarchical channel structure. First, an overview of the LTE standard is provided, including design principles, the network architecture, and radio interface protocols. We then describe the purpose of each channel type defined in LTE and the mapping between channels at various protocol layers. Next, the downlink OFDMA and uplink SC-FDMA aspects of the air interface are described, including frame structures, physical resource blocks, resource allocation, and the supported MIMO modes. This chapter serves as the foundation for understanding the physical layer procedures and higher layer protocols of LTE that are described in the chapters to follow.

6.1 Introduction to LTE

As mentioned previously, LTE is the next step in the evolution of mobile cellular systems and was standardized as part of the 3GPP Release 8 specifications. Unlike 2G and 3G cellular systems1 that were designed mainly with voice services in mind, LTE was designed primarily for high-speed data services, which is why LTE is a packet-switched network from end to end and has no support for circuit-switched services. However, the low latency of LTE and its sophisticated quality of service (QoS) architecture allow a network to emulate a circuit-switched connection on top of the packet-switched framework of LTE.

6.1.1 Design Principles

The LTE standard was designed as a completely new standard, with new numbering and new documentation, and it is not built on the previous versions of 3GPP standards. Earlier elements were brought in only if there was a compelling reason for them to exist in the new standard. The basic design principles that were agreed upon and followed in 3GPP while designing the LTE specifications include:2

Network Architecture: Unlike 3G networks, LTE was designed to support packet-switched traffic with support for various QoS classes of services. Previous generations of networks such as UMTS/HSPA and 1xRTT/EvDO also support packet-switched traffic but this was achieved by subsequent add-ons to the initial version of the standards. For example, HSPA, which is a packet-switched protocol (packet-switched over the air), was built on top of the Release 99 UMTS network and as a result carried some of the unnecessary burdens of a circuit-switched network. LTE is different in the sense that it is a clean slate design and supports packet switching for high data rate services from the start. The LTE radio access network, E-UTRAN, was designed to have the minimum number of interfaces (i.e., the minimum number of network elements) while still being able to provide efficient packet-switched transport for traffic belonging to all the QoS classes such as conversational, streaming, real-time, non-real-time, and background classes.

Data Rate and Latency: The design targets for downlink and uplink peak data rates for LTE are 100 Mbps and 50 Mbps, respectively, when operating with a 20 MHz frequency division duplex (FDD) channel. The user-plane latency is defined as the time it takes to transmit a small IP packet from the UE to the edge node of the radio access network, or vice versa, measured at the IP layer. The target for one-way latency in the user plane is 5 ms in an unloaded network, that is, when only a single UE is present in the cell. For the control-plane latency, the transition time from a camped state to an active state should be less than 100 ms, while the transition time between a dormant state and an active state should be less than 50 ms.
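The peak-rate targets above imply a peak spectral efficiency, which the following arithmetic sketch makes explicit. This is illustrative only; the variable names are ours, not 3GPP terminology.

```python
# Peak spectral efficiency implied by the LTE Release 8 design targets
# quoted above (100/50 Mbps over a 20 MHz FDD channel).
DL_PEAK_MBPS = 100    # downlink peak data rate target
UL_PEAK_MBPS = 50     # uplink peak data rate target
BANDWIDTH_MHZ = 20    # FDD channel bandwidth assumed by the targets

dl_eff = DL_PEAK_MBPS / BANDWIDTH_MHZ   # bps/Hz in the downlink
ul_eff = UL_PEAK_MBPS / BANDWIDTH_MHZ   # bps/Hz in the uplink

print(f"Downlink peak spectral efficiency: {dl_eff} bps/Hz")  # 5.0
print(f"Uplink peak spectral efficiency:   {ul_eff} bps/Hz")  # 2.5
```

The resulting 5 bps/Hz downlink figure is what motivates the combination of OFDMA with MIMO discussed later in this chapter.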

Performance Requirements: The target performance requirements for LTE are specified in terms of spectrum efficiency, mobility, and coverage, and they are in general expressed relative to the 3GPP Release 6 HSPA.

- Spectrum Efficiency The average downlink user data rate and spectrum efficiency target is three to four times that of the baseline HSDPA network. Similarly, in the uplink the average user data rate and spectrum efficiency target is two to three times that of the baseline HSUPA network. The cell edge throughput, measured as the 5th percentile throughput, should be two to three times that of the baseline HSDPA and HSUPA.

- Mobility The mobility requirement for LTE is to be able to support handoff/mobility at different terminal speeds. Maximum performance is expected for the lower terminal speeds of 0 to 15 km/hr, with minor degradation in performance at higher mobile speeds up to 120 km/hr. LTE is also expected to be able to sustain a connection for terminal speeds up to 350 km/hr but with significant degradation in the system performance.
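The speed tiers in the mobility requirement can be summarized as a small lookup; this is a toy sketch whose function name and return strings are illustrative, not normative.

```python
def mobility_performance(speed_kmh: float) -> str:
    """Map a terminal speed to the expected LTE performance tier,
    using the thresholds stated in the mobility requirement above."""
    if speed_kmh <= 15:
        return "maximum performance"
    elif speed_kmh <= 120:
        return "minor degradation"
    elif speed_kmh <= 350:
        return "connection sustained, significant degradation"
    else:
        return "outside requirements"
```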

- Coverage For the cell coverage, the above performance targets should be met up to 5 km. For cell ranges up to 30 km, a slight degradation of the user throughput is tolerated and a more significant degradation for spectrum efficiency is acceptable, but the mobility requirements should be met. Cell ranges up to 100 km should not be precluded by the specifications.

- MBMS Service LTE should also provide enhanced support for the Multimedia Broadcast and Multicast Service (MBMS) compared to UTRA operation.

Radio Resource Management: The radio resource management requirements cover various aspects such as enhanced support for end-to-end QoS, efficient support for transmission of higher layers, and support for load sharing/balancing and policy management/enforcement across different radio access technologies.

Deployment Scenario and Co-existence with 3G: At a high level, LTE shall support the following two deployment scenarios:

- Standalone deployment scenario, where the operator deploys LTE either with no previous network deployed in the area or with no requirement for interworking with the existing UTRAN/GERAN (GSM EDGE radio access network) networks.

- Integrating with existing UTRAN and/or GERAN deployment scenario, where the operator already has either a UTRAN and/or a GERAN network deployed with full or partial coverage in the same geographical area.

Flexibility of Spectrum and Deployment: In order to become a truly global standard, LTE was designed to be operable under a wide variety of spectrum scenarios, including the ability to coexist and share spectrum with existing 3G technologies. Service providers in different geographical regions often have different spectrum allocations in terms of carrier frequency and total available bandwidth, which is why LTE was designed with a scalable bandwidth from 1.4 MHz to 20 MHz. In order to accommodate flexible duplexing options, LTE was designed to operate in both frequency division duplex (FDD) and time division duplex (TDD) modes.
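Each scalable bandwidth corresponds to a fixed number of physical resource blocks (PRBs), a concept introduced later in this chapter. The values below are the standard Release 8 figures; the dictionary and helper function are our own illustrative sketch.

```python
# Channel bandwidths supported by LTE (Release 8) and the number of
# physical resource blocks (PRBs) each provides.
LTE_BANDWIDTH_TO_PRBS = {
    1.4: 6,
    3.0: 15,
    5.0: 25,
    10.0: 50,
    15.0: 75,
    20.0: 100,
}

def prbs_for_bandwidth(mhz: float) -> int:
    """Return the PRB count for a supported LTE channel bandwidth."""
    try:
        return LTE_BANDWIDTH_TO_PRBS[mhz]
    except KeyError:
        raise ValueError(f"{mhz} MHz is not an LTE channel bandwidth")
```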

Interoperability with 3G and 2G Networks: Multimode LTE terminals, which support UTRAN and/or GERAN operation, should be able to support measurement of, and handover from and to, both 3GPP UTRAN and 3GPP GERAN systems with acceptable terminal complexity and network performance.

6.1.2 Network Architecture

Figure 6.2 shows the end-to-end network architecture of LTE and the various components of the network. The entire network is composed of the radio access network (E-UTRAN) and the core network (EPC), both of which were defined as new components of the end-to-end network in Release 8 of the 3GPP specifications. In this sense, LTE is different from UMTS, since UMTS defined a new radio access network but used the same core network as the previous-generation Enhanced GPRS (EDGE) network. This has implications for service providers who are upgrading from a UMTS network to LTE. The main components of the E-UTRAN and EPC are:

eNode-B: The eNode-B (also called the base station) terminates the air interface protocol and is the first point of contact for the UE. As already shown in Figure 6.1, the eNode-B is the only logical node in the E-UTRAN, so it includes some functions previously defined in the RNC of the UTRAN, such as radio bearer management, uplink and downlink dynamic radio resource management and data packet scheduling, and mobility management.

Mobility Management Entity (MME): The MME is similar in function to the control plane of the legacy Serving GPRS Support Node (SGSN). It manages mobility aspects in 3GPP access, such as gateway selection and tracking area list management.

Serving Gateway (Serving GW): The Serving GW terminates the interface toward E-UTRAN, and routes data packets between E-UTRAN and EPC. In addition, it is the local mobility anchor point for inter-eNode-B handovers and also provides an anchor for inter-3GPP mobility. Other responsibilities include lawful intercept, charging, and some policy enforcement. The Serving GW and the MME may be implemented in one physical node or separate physical nodes.

Packet Data Network Gateway (PDN GW): The PDN GW terminates the SGi interface toward the Packet Data Network (PDN). It routes data packets between the EPC and the external PDN, and is the key node for policy enforcement and charging data collection. It also provides the anchor point for mobility with non-3GPP accesses. The external PDN can be any kind of IP network, including the IP Multimedia Subsystem (IMS) domain. The PDN GW and the Serving GW may be implemented in one physical node or separate physical nodes.

S1 Interface: The S1 interface is the interface that separates the E-UTRAN and the EPC. It is split into two parts: the S1-U, which carries traffic data between the eNode-B and the Serving GW, and the S1-MME, which is a signaling-only interface between the eNode-B and the MME.

X2 Interface: The X2 interface is the interface between eNode-Bs, consisting of two parts: the X2-C is the control plane interface between eNode-Bs, while the X2-U is the user plane interface between eNode-Bs. It is assumed that there always exists an X2 interface between eNode-Bs that need to communicate with each other, for example, for support of handover.
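The interfaces above can be summarized as a simple endpoint table. The dictionary below is an illustrative sketch covering only the reference points mentioned in this section; the full set is specified in [1].

```python
# Reference points from this section, mapped to their endpoints.
# An illustrative subset, not the complete EPS interface list.
EPS_INTERFACES = {
    "S1-U":   ("eNode-B", "Serving GW"),    # user-plane traffic
    "S1-MME": ("eNode-B", "MME"),           # control-plane signaling
    "X2-U":   ("eNode-B", "eNode-B"),       # inter-eNode-B user plane
    "X2-C":   ("eNode-B", "eNode-B"),       # inter-eNode-B control plane
    "SGi":    ("PDN GW", "external PDN"),   # toward the packet data network
}
```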

The specific functions supported by each component and the details about reference points (S1-MME, S1-U, S3, etc.) can be found in [1]. For other nodes in Figure 6.2, the Policy and Charging Rules Function (PCRF) is for policy and charging control, the Home Subscriber Server (HSS) is responsible for the service authorization and user authentication, and the Serving GPRS Support Node (SGSN) is for controlling packet sessions and managing the mobility of the UE for GPRS networks. The topics in this book mainly focus on the E-UTRAN and the LTE radio interface.

6.1.3 Radio Interface Protocols

As in other communication standards, the LTE radio interface is designed based on a layered protocol stack, which can be divided into control plane and user plane protocol stacks and is shown in Figure 6.3. The packet flow in the user plane is shown in Figure 6.4. The LTE radio interface protocol is composed of the following layers:

Radio Resource Control (RRC): The RRC layer performs control plane functions including paging, maintenance and release of an RRC connection, security handling, mobility management, and QoS management.

Packet Data Convergence Protocol (PDCP): The main functions of the PDCP sublayer include IP packet header compression and decompression based on the RObust Header Compression (ROHC) protocol, ciphering of data and signaling, and integrity protection for signaling. There is only one PDCP entity at the eNode-B and the UE per bearer.3

Radio Link Control (RLC): The main functions of the RLC sublayer are segmentation and concatenation of data units, error correction through the Automatic Repeat reQuest (ARQ) protocol, and in-sequence delivery of packets to the higher layers. It operates in three modes:

- The Transparent Mode (TM): The TM mode is the simplest one, without RLC header addition, data segmentation, or concatenation, and it is used for specific purposes such as random access.

- The Unacknowledged Mode (UM): The UM mode allows the detection of packet loss and provides packet reordering and reassembly, but does not require retransmission of the missing protocol data units (PDUs).

- The Acknowledged Mode (AM): The AM mode is the most complex one, and it is configured to request retransmission of the missing PDUs in addition to the features supported by the UM mode.

There is only one RLC entity at the eNode-B and the UE per bearer.
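The three RLC modes differ mainly in which features they enable, which a small feature matrix makes clear. The field names below are our own shorthand, not 3GPP terminology.

```python
# A toy feature matrix for the three RLC modes described above.
from dataclasses import dataclass

@dataclass(frozen=True)
class RlcMode:
    name: str
    adds_header: bool      # TM adds no RLC header
    segmentation: bool     # segmentation/concatenation of data units
    reordering: bool       # loss detection, reordering, reassembly
    retransmission: bool   # ARQ retransmission of missing PDUs

TM = RlcMode("Transparent",    False, False, False, False)
UM = RlcMode("Unacknowledged", True,  True,  True,  False)
AM = RlcMode("Acknowledged",   True,  True,  True,  True)
```

Read across a row to see what each mode adds: AM is UM plus ARQ retransmission, while TM passes data through untouched.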

Medium Access Control (MAC): The main functions of the MAC sublayer include error correction through the Hybrid-ARQ (H-ARQ) mechanism, mapping between logical channels and transport channels, multiplexing/demultiplexing of RLC PDUs onto transport blocks, priority handling between logical channels of one UE, and priority handling between UEs by means of dynamic scheduling. The MAC sublayer is also responsible for transport format selection of scheduled UEs, which includes selection of the modulation format, code rate, MIMO rank, and power level. There is only one MAC entity at the eNode-B and one MAC entity at the UE.

Physical Layer (PHY): The main function of the PHY is the actual transmission and reception of data in the form of transport blocks. The PHY is also responsible for various control mechanisms, such as signaling of H-ARQ feedback, signaling of scheduled allocations, and channel measurements.
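The user-plane packet flow of Figure 6.4 can be sketched as a toy model in which an IP packet picks up a PDCP, RLC, and MAC header on its way down to the PHY. The header strings are purely illustrative; the actual header formats and sizes are defined in the 3GPP specifications.

```python
# A toy model of the user-plane flow: each sublayer prepends its header
# before the MAC PDU is handed to the PHY as a transport block.
def send_user_plane(ip_packet: str) -> str:
    pdcp_pdu = "PDCP|" + ip_packet   # header compression, ciphering
    rlc_pdu = "RLC|" + pdcp_pdu      # segmentation, ARQ
    mac_pdu = "MAC|" + rlc_pdu       # multiplexing, H-ARQ, scheduling
    transport_block = mac_pdu        # handed to the PHY for transmission
    return transport_block

print(send_user_plane("IP_PACKET"))  # MAC|RLC|PDCP|IP_PACKET
```

Reception reverses the process: the PHY delivers a transport block to the MAC, and each sublayer strips its header before the IP packet is delivered to the higher layers.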

In Chapters 7 through 9, we focus on the PHY layer, also referred to as layer 1 of the Open Systems Interconnection (OSI) reference model. Higher layer processing is described in Chapter 10.