In many applications, researchers face the problem of recovering an unknown vector x from an underdetermined linear system. Examples include error correction in coding theory and linear regression in statistics. An underdetermined system has infinitely many solutions, so applications focus on finding the "simplest" solution for x, namely the one with the fewest non-zero entries. In this thesis we analyze different approaches to recovering a sparse vector from an underdetermined linear system. In particular, we present a new condition on the data matrix A that guarantees exact recovery. This result follows from a linear-algebraic approach and shows that a general condition for exact recovery amounts to including a specific vector in the row space of A. In addition, we compare two well-known model selection techniques used to solve underdetermined systems in a statistical context: the Dantzig selector and the Lasso. Through numerical experiments we isolate specific situations in which one method outperforms the other.
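To make the recovery problem concrete, the following is a minimal sketch (not taken from the thesis) of recovering a sparse vector from an underdetermined system y = Ax via the Lasso, solved here with a basic iterative soft-thresholding (ISTA) loop. The dimensions, sparsity level, random Gaussian matrix A, and regularization weight are all illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm: shrink each entry toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iter):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # gradient of the least-squares term
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100)) / np.sqrt(30)   # 30 equations, 100 unknowns
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [1.0, -2.0, 1.5]             # 3-sparse ground truth
y = A @ x_true

x_hat = ista(A, y, lam=0.005, n_iter=2000)
print(np.linalg.norm(x_hat - x_true))              # recovery error should be small
```

Among the infinitely many solutions of the 30-equation system, the l1 penalty steers the iterate toward the sparse one; solving the same system with the minimum-norm least-squares solution (`np.linalg.pinv(A) @ y`) would instead spread energy over all 100 coordinates.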