

Abstract

The bipartite graph of an LDPC code is typically constructed at random. The memory required to store the randomly constructed parity-check matrix at the decoder can be prohibitive in some applications, and when a system supports a set of different LDPC codes, each parity-check matrix must be stored at the receiver. Another disadvantage of random construction methods is that they result in prominent error floors. In this thesis, a novel technique is presented for constructing the bipartite graph of LDPC codes using maximum-length linear congruential sequences generated by a simple recursion, instead of random construction techniques. For this class of codes it is proved that regular LDPC codes can be constructed without any cycles of length shorter than 6. Simulation results show that these codes provide almost the same performance as a constrained random construction that explicitly avoids cycles of length less than 6, and better or comparable performance relative to LDPC codes based on finite geometries. These codes are easily constructed for low rates but require longer block lengths at higher rates. A constrained construction technique for irregular LDPC codes is also studied and its performance compared with that of randomly constructed codes.

In the second part of the thesis, the message-passing LDPC decoder is modified for reduced complexity. Updates of the log-likelihood ratio (LLR) values at bit nodes are stopped once they reach a reliability threshold, and the effect of this threshold on complexity reduction and performance degradation is studied. The resulting complexity is far lower than that of systems with no mechanism to stop the iterations. Compared with systems that employ cyclic redundancy checks or parity checks to stop the iterations, this scheme yields only a modest further complexity reduction of 20 to 35 percent, at the cost of a small performance degradation.
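The "simple recursion" behind a maximum-length linear congruential sequence can be sketched as follows. The modulus, multiplier, and increment below are illustrative values chosen to satisfy the Hull-Dobell full-period conditions; they are not parameters taken from the thesis, and the mapping from the sequence to graph edges is likewise only a hypothetical illustration of how such a sequence could replace random edge placement.

```python
def lcg_sequence(m=64, a=13, c=7, x0=0):
    """Generate the full-period sequence x_{n+1} = (a*x_n + c) mod m.

    With m = 64, a = 13, c = 7 the Hull-Dobell conditions hold:
    gcd(c, m) = 1; a - 1 is divisible by every prime factor of m (here 2);
    and since 4 divides m, 4 also divides a - 1. The sequence therefore
    visits every residue 0..m-1 exactly once before repeating.
    """
    xs = []
    x = x0
    for _ in range(m):
        xs.append(x)
        x = (a * x + c) % m
    return xs

seq = lcg_sequence()
# Full period: every residue 0..63 appears exactly once.
assert sorted(seq) == list(range(64))
```

Because the sequence is reproduced at the decoder from just (m, a, c, x0), the full parity-check matrix need not be stored, which is the memory advantage the abstract describes.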

Description

Due to the character of the original source materials and the nature of batch digitization, quality control issues may be present in this document. Please report any quality issues you encounter to digital@library.tamu.edu, referencing the URI of the item.

Includes bibliographical references (leaf 63).

Issued also on microfiche from Lange Micrographics.