Learning and Exploiting Relational Structure for Efficient Inference

Abstract

One of the central challenges of statistical relational learning is the tradeoff between expressiveness and computational tractability. Representations such as Markov logic can capture rich joint probabilistic models over a set of related objects. However, inference in Markov logic and similar languages is #P-complete. Most existing tractable statistical relational representations are very limited in expressiveness. This dissertation explores two strategies for dealing with intractability while preserving expressiveness. The first strategy is to exploit the approximate symmetries frequently found in relational domains to perform approximate lifted inference. We provide error bounds for two approaches to approximate lifted belief propagation. We also describe propositional and lifted algorithms for repeated inference in statistical relational models. We describe a general approach for expected utility maximization in relational domains, making use of these algorithms. The second strategy we explore is learning rich relational representations directly from data. First, we propose a method for learning multiple hierarchical relational clusterings, unifying several previous approaches to relational clustering. Second, we describe a tractable high-treewidth statistical relational representation based on Sum-Product Networks, and propose a learning algorithm for this language. Finally, we apply state-of-the-art tractable learning methods to the problem of software fault localization.
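The tractability claim behind Sum-Product Networks can be seen in miniature. The sketch below is illustrative and not taken from the dissertation: all node classes, weights, and variable names are invented for the example. It shows why inference in an SPN is a single bottom-up pass, linear in the number of edges, even when the corresponding graphical model would have high treewidth: leaves return the probability of the evidence (or 1 for marginalized-out variables), product nodes multiply, and sum nodes take weighted mixtures.

```python
# Hypothetical minimal SPN over two binary variables X0, X1.
# Not the dissertation's representation; a sketch of the standard
# sum-product evaluation scheme.
from dataclasses import dataclass


@dataclass
class Leaf:
    var: int          # index of the binary variable
    p_true: float     # P(var = 1)

    def value(self, evidence):
        v = evidence.get(self.var)   # absent = marginalized out
        if v is None:
            return 1.0               # sums over both states
        return self.p_true if v == 1 else 1.0 - self.p_true


@dataclass
class Product:
    children: list

    def value(self, evidence):
        r = 1.0
        for c in self.children:     # one multiplication per child edge
            r *= c.value(evidence)
        return r


@dataclass
class Sum:
    weighted: list  # (weight, child) pairs; weights sum to 1

    def value(self, evidence):
        return sum(w * c.value(evidence) for w, c in self.weighted)


# Mixture of two product distributions over X0 and X1.
spn = Sum([
    (0.6, Product([Leaf(0, 0.9), Leaf(1, 0.2)])),
    (0.4, Product([Leaf(0, 0.1), Leaf(1, 0.8)])),
])

print(spn.value({}))      # partition function; should be 1.0
print(spn.value({0: 1}))  # marginal P(X0=1) = 0.6*0.9 + 0.4*0.1 = 0.58
```

Any marginal query is answered by the same linear-time pass with the queried variables fixed and the rest omitted from the evidence; no summation over joint states is ever enumerated explicitly.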