Entropy region and network information theory

Abstract

This dissertation takes a step toward a general framework for solving network information theory problems by studying the capacity region of networks through the entropy region.
We first show that the capacity of a large class of acyclic memoryless multiuser information theory problems can be formulated as a convex optimization over the region of entropy vectors of the network random variables. This capacity characterization is universal, and has the advantage over previous formulations of being single-letter. Moreover, it is significant in that it reveals the fundamental role of the entropy region in determining the capacity of network information theory problems.
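Schematically, such a formulation maximizes a linear function of the rates over the (closure of the) entropy region intersected with the linear constraints imposed by the network. The notation below is a generic sketch of this kind of program, not the thesis's exact statement:

```latex
\begin{aligned}
\max_{h}\quad & \sum_{i} R_i(h) \\
\text{s.t.}\quad & h \in \bar{\Gamma}^{*}_{N}, \\
& A h = 0, \qquad B h \ge c ,
\end{aligned}
```

where $h$ is the entropy vector of the $N$ network random variables, $\bar{\Gamma}^{*}_{N}$ is the closure of the entropy region, the equalities $Ah = 0$ encode topology and coding constraints (e.g., an edge variable being a function of the node's inputs), and the inequalities $Bh \ge c$ encode source-rate and link-capacity constraints. Since $\bar{\Gamma}^{*}_{N}$ is a convex cone, the program is convex.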
With this viewpoint, the rest of the thesis is dedicated to the study of the entropy region and its consequences for networks. A full characterization of the entropy region has proven to be a very challenging problem, and thus we mostly consider inner-bound constructions. For discrete random variables, our approaches include the characterization of entropy vectors arising from lattice-derived probability distributions, the entropy region of binary random variables, and the linearly representable region. Through these characterizations, and using matroid representability results, we study the linear coding capacity of networks in different scenarios (e.g., binary operations in a network, or networks with two sources).
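For concreteness, the entropy vector of $n$ discrete random variables collects the joint entropies $H(X_S)$ of every nonempty subset $S \subseteq \{1,\dots,n\}$. A minimal sketch of computing one from a joint distribution (function names are illustrative, not from the thesis):

```python
import itertools
import math

def entropy(pmf):
    """Shannon entropy in bits of a pmf given as a dict of probabilities."""
    return -sum(p * math.log2(p) for p in pmf.values() if p > 0)

def marginal(joint, subset):
    """Marginalize a joint pmf over outcome tuples onto the coordinates in `subset`."""
    out = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in subset)
        out[key] = out.get(key, 0.0) + p
    return out

def entropy_vector(joint, n):
    """Joint entropies H(X_S) for every nonempty subset S of {0, ..., n-1}."""
    vec = {}
    for r in range(1, n + 1):
        for subset in itertools.combinations(range(n), r):
            vec[subset] = entropy(marginal(joint, subset))
    return vec

# Example: three binary variables with X3 = X1 XOR X2, where X1, X2 are
# uniform i.i.d.  Every singleton has entropy 1 bit; every pair and the
# triple have entropy 2 bits.
joint = {(a, b, a ^ b): 0.25 for a in (0, 1) for b in (0, 1)}
vec = entropy_vector(joint, 3)
```

The resulting vector of $2^n - 1$ entropies is exactly the object whose achievable region the inner-bound constructions above aim to describe.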
We also consider continuous random variables by studying the entropy region of jointly Gaussian random variables. In particular, we establish the sufficiency of Gaussian random variables for characterizing the entropy region of 3 random variables in general. For more than 3 random variables, we identify a minimal set of necessary and sufficient conditions for a vector to be the entropy vector of jointly Gaussian random variables.
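In the Gaussian case the entropy vector is determined entirely by the covariance matrix, via the standard formula $h(X_S) = \tfrac{1}{2}\log\!\big((2\pi e)^{|S|}\det \Sigma_S\big)$ for each principal submatrix $\Sigma_S$. A self-contained sketch (the example covariance matrix is hypothetical):

```python
import itertools
import math

def det(m):
    """Determinant by Laplace expansion along the first row
    (adequate for the small matrices used here)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += ((-1) ** j) * m[0][j] * det(minor)
    return total

def gaussian_entropy_vector(cov):
    """Differential entropies h(X_S) = 0.5 * log((2*pi*e)^|S| * det(cov_S)),
    in nats, for every nonempty subset S of the variables."""
    n = len(cov)
    vec = {}
    for r in range(1, n + 1):
        for S in itertools.combinations(range(n), r):
            sub = [[cov[i][j] for j in S] for i in S]
            vec[S] = 0.5 * math.log((2 * math.pi * math.e) ** r * det(sub))
    return vec

# Example: an arbitrary valid 3x3 covariance matrix.
cov = [[1.0, 0.5, 0.2],
       [0.5, 1.0, 0.3],
       [0.2, 0.3, 1.0]]
vec = gaussian_entropy_vector(cov)
```

Asking which vectors arise this way as the covariance matrix ranges over all positive semidefinite matrices is precisely the characterization problem studied in this part of the thesis.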
Finally, in the absence of a full analytical characterization of the entropy region, it is desirable to be able to optimize numerically over this space. To this end, we propose a Monte Carlo method that enables one to numerically optimize entropy functions of discrete random variables, as well as the achievable rates in wired networks. The method can further be adapted for decentralized operation of networks. Its promise is demonstrated through simulations of several network problems of interest.
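The thesis's specific Monte Carlo method is not reproduced here; purely as an illustration of the idea of randomized optimization of an entropy functional, the sketch below uses naive randomized hill climbing over the probability simplex to maximize the mutual information of a pair of binary variables (all names and parameters are illustrative):

```python
import math
import random

def entropy(probs):
    """Shannon entropy in bits, ignoring (near-)zero-probability outcomes."""
    return -sum(p * math.log2(p) for p in probs if p > 1e-12)

def mutual_information(joint):
    """I(X;Y) for a 2x2 joint pmf given as a flat list [p00, p01, p10, p11]."""
    px = [joint[0] + joint[1], joint[2] + joint[3]]
    py = [joint[0] + joint[2], joint[1] + joint[3]]
    return entropy(px) + entropy(py) - entropy(joint)

def monte_carlo_maximize(objective, dim, iters=5000, step=0.05, seed=1):
    """Randomized hill climbing over the probability simplex: perturb the
    current pmf, renormalize, and keep the move if it does not decrease
    the objective."""
    rng = random.Random(seed)
    p = [1.0 / dim] * dim
    best = objective(p)
    for _ in range(iters):
        q = [max(x + rng.uniform(-step, step), 1e-9) for x in p]
        s = sum(q)
        q = [x / s for x in q]
        v = objective(q)
        if v >= best:
            best, p = v, q
    return best, p

# Starting from the uniform (independent) joint pmf, the search drives the
# distribution toward a correlated one; I(X;Y) is capped at 1 bit here.
best, pmf = monte_carlo_maximize(mutual_information, 4)
```

Because each candidate move only requires re-evaluating the entropy functional locally, this style of randomized search is also amenable to the kind of decentralized operation mentioned above, with each node perturbing only its own distribution.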