Abstract

The constantly increasing demand for throughput and quality in wireless communication systems drives continuous research into efficient radio resource management, owing to the scarcity of available frequency bands and the resulting capacity limitations. In addition, technology evolution is moving toward spectrally efficient techniques that can offer higher data rates. This is the case of OFDMA (Orthogonal Frequency-Division Multiple Access), introduced by 3GPP as the technology for the future Long Term Evolution (LTE). However,
given the current penetration of legacy technologies such as UMTS (Universal Mobile Telecommunications System), operators will have to deal with the coexistence of multiple Radio Access Technologies (RATs), making it necessary to exploit the complementarities between technologies through Joint Radio Resource Management (JRRM) mechanisms. In
this paper we propose a novel dynamic JRRM algorithm for LTE-UMTS coexistence scenarios. The proposed mechanism is based on Reinforcement Learning (RL), which is considered a good candidate for achieving cognition in future reconfigurable networks. The solution implements autonomous RL agents in each base station that decide which RAT is most suitable for each user. We give a detailed description of the solution and analyze its behavior under various load conditions. We also demonstrate the capability of the algorithm to adapt in dynamic scenarios.