This thesis describes CAM (Categories, Agreement, and Morphology), a computer model of several aspects of language acquisition. CAM is based on the Semantic Bootstrapping Hypothesis (Pinker, 1984) and respects other widely accepted psychological constraints, such as (1) no negative evidence and (2) no memory of previous inputs. CAM learns in a largely bottom-up manner: it learns parts of categories first, then context-free grammar rules based on those categories, and finally agreement rules on top of the context-free rules. CAM reproduces the partial-order relations observed in children by Brown (1973) for the progressive, the plural, the third-person regular, and the auxiliary verbs. CAM solves the negative-evidence problem for agreement-rule learning: although it receives no negative evidence in the input, it can supply both positive and internally generated negative examples to its built-in Boolean learning algorithm, which creates the agreement rules. CAM learns parts of both English and Cheyenne, a morphologically rich American Indian language. Detailed procedures for syntactic category inference are discussed, as are proposals for integrating semantic bootstrapping and syntax-driven syntactic category inference into one system. This thesis also shows how the form of X-bar Theory is influenced by acquisition, parsability, and syntactic category inference. Finally, the full range of grammars learnable by CAM is described in precise mathematical detail. Three results are shown: (1) correctness, i.e., CAM's ability to identify a target grammar from inputs based on that grammar; (2) order invariance over different input orders; and (3) robustness, the ability to learn a target language correctly from a vastly more complex input language.
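The idea of sidestepping the no-negative-evidence constraint can be illustrated with a minimal sketch. This is not CAM's actual algorithm; it assumes hypothetical feature names (`person`, `number`) and uses simple positive-only conjunction learning (in the style of Valiant, 1984) as a stand-in for the built-in Boolean learner. The learner sees only positive (subject-features, verb-form) pairs, manufactures negative examples internally by pairing each observed subject with the unobserved verb form, and induces the feature conjunction governing third-person-singular agreement.

```python
# Hedged sketch: internally generated negative evidence for agreement learning.
# Feature names and the conjunction learner are illustrative assumptions,
# not CAM's actual representations.

def internally_negate(positives, forms=("base", "s")):
    """For each observed pair, the unobserved alternative form is a negative."""
    negatives = []
    for features, form in positives:
        for other in forms:
            if other != form:
                negatives.append((features, other))
    return negatives

def learn_conjunction(positives):
    """Positive-only conjunction learning: start with every literal seen on
    the first '-s' example, then drop literals contradicted by later ones.
    The result says which subject features require the '-s' verb form."""
    hypothesis = None
    for features, form in positives:
        if form != "s":
            continue  # the conjunction describes when '-s' is required
        literals = set(features.items())
        hypothesis = literals if hypothesis is None else hypothesis & literals
    return hypothesis

positives = [
    ({"person": 3, "number": "sg"}, "s"),     # "she walks"
    ({"person": 3, "number": "pl"}, "base"),  # "they walk"
    ({"person": 1, "number": "sg"}, "base"),  # "I walk"
]
negatives = internally_negate(positives)
rule = learn_conjunction(positives)
# rule → {("person", 3), ("number", "sg")}: '-s' requires third person singular
```

The key point mirrors the abstract's claim: no negative evidence appears in the input list `positives`; the negatives exist only inside the learner, generated from its own hypothesis space of competing forms.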