Abstract

Thanks to their inherent properties, probabilistic graphical models are among the prime candidates for machine learning and decision-making tasks, especially in uncertain domains. Their capabilities, such as representation, inference and learning, if used effectively, can greatly help to build intelligent systems that act appropriately in different problem domains. Bayesian networks are one of the most widely used classes of these models. Some of the inference and learning tasks in Bayesian networks involve complex optimization problems that require the use of meta-heuristic algorithms. Evolutionary algorithms, as successful problem solvers, are promising candidates for this purpose. This paper reviews the application of evolutionary algorithms for solving some NP-hard optimization tasks in Bayesian network inference and learning.

Introduction

Probability theory has provided a sound basis for many scientific and engineering tasks. Artificial intelligence, and more specifically machine learning, is one of the fields that has exploited probability to develop new theorems and algorithms. Bayesian networks, a popular class of probabilistic graphical models (PGMs) first introduced by Pearl [105], combine graph theory and probability theory to obtain a more comprehensible representation of the joint probability distribution. This tool can expose useful modularities in the underlying problem and help to accomplish reasoning and decision-making tasks, especially in uncertain domains. The application of these useful tools has been further improved by different methods proposed for PGM inference [86] and automatic induction [23] from a set of samples.
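To make the factorization idea concrete, the following is a minimal sketch (the network, its variables and all probability values are illustrative inventions, not taken from the paper): a three-node Bayesian network Rain → WetGrass ← Sprinkler, where the joint distribution is represented compactly as a product of one conditional probability table (CPT) per node.

```python
# Illustrative CPTs, one per node of the hypothetical network
# Rain -> WetGrass <- Sprinkler.
p_rain = {True: 0.2, False: 0.8}
p_sprinkler = {True: 0.3, False: 0.7}
# P(WetGrass = True | Rain, Sprinkler)
p_wet_given = {(True, True): 0.99, (True, False): 0.90,
               (False, True): 0.80, (False, False): 0.05}

def joint(rain, sprinkler, wet):
    """P(Rain, Sprinkler, WetGrass) via the network's factorization:
    P(R) * P(S) * P(W | R, S)."""
    p_wet = p_wet_given[(rain, sprinkler)]
    if not wet:
        p_wet = 1.0 - p_wet
    return p_rain[rain] * p_sprinkler[sprinkler] * p_wet

# The factored representation still defines a valid joint distribution:
# summing over all 2^3 assignments yields 1.
total = sum(joint(r, s, w)
            for r in (True, False)
            for s in (True, False)
            for w in (True, False))
```

The saving is in parameters: the full joint over three binary variables needs 7 free numbers, while the factored form here needs only 1 + 1 + 4, and the gap grows exponentially with the number of variables.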
Meanwhile, the difficult and complex problems arising in real-world applications have increased the demand for effective meta-heuristic algorithms that can achieve good (though not necessarily optimal) solutions by performing an intelligent search of the space of possible solutions. Evolutionary computation is one of the most successful families of such algorithms, having achieved very good results across a wide range of problem domains. By applying nature-inspired mechanisms, e.g., survival of the fittest or genetic crossover and mutation, to a population of candidate solutions, evolutionary approaches like genetic algorithms [59] have been able to perform a more effective and diverse search of the vast solution spaces of difficult problems.
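The three mechanisms named above (fitness-based selection, crossover, mutation) can be sketched in a minimal genetic algorithm. The objective, population sizes and rates below are toy choices for illustration, not values from the paper; the fitness function is the standard one-max problem (maximize the number of ones in a bit string).

```python
import random

random.seed(0)

def one_max(bits):
    """Toy fitness function: count of ones in the bit string."""
    return sum(bits)

def evolve(n_bits=20, pop_size=30, generations=60, p_mut=0.05, k=3):
    # Random initial population of bit strings.
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            # Tournament selection: "survival of the fittest".
            return max(random.sample(pop, k), key=one_max)
        children = []
        while len(children) < pop_size:
            a, b = select(), select()
            cut = random.randrange(1, n_bits)   # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g
                     for g in child]            # bit-flip mutation
            children.append(child)
        pop = children
    return max(pop, key=one_max)

best = evolve()
```

Each generation replaces the population with offspring of fitter parents, so the population drifts toward high-fitness regions while mutation maintains diversity.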
Some of the most relevant inference and learning problems in Bayesian networks are formulated as the optimization of a function. These problems usually have an intractable complexity and therefore are a potential domain for the application of meta-heuristic methods. The aim of this paper is to review how evolutionary algorithms have been applied for solving some of the combinatorial problems existing in the inference and learning of Bayesian networks.
The paper is organized as follows. Section 2 introduces Bayesian networks and reviews some of the inference and learning methods proposed for them. Section 3 presents the framework of evolutionary algorithms and discusses how they work. The main review of how evolutionary algorithms are used in Bayesian network learning and inference is given in Section 4. Finally, Section 5 concludes the paper.

Conclusion

Bayesian networks are an important class of probabilistic graphical models, which have proven to be very useful and effective for reasoning in uncertain domains. They have been successfully used in machine learning tasks like classification and clustering. They have been studied at length over the last three decades, and many methods have been proposed to automate their learning and inference. Nevertheless, many of these methods involve difficult combinatorial search problems that directly hinder their widespread use, especially on large problem instances, and thus require advanced search techniques like meta-heuristics.
Evolutionary algorithms are general-purpose stochastic search methods inspired by natural evolution that have been frequently applied to solve many complex real-world problems. Different types of solutions, from bit strings to program trees, can be evolved within this framework in search of better solutions. A relatively new type of these algorithms, estimation of distribution algorithms, uses probabilistic modeling (possibly with Bayesian networks) to capture problem regularities and exploit them when generating new solutions. They have been shown to solve problems that are considered challenging for traditional evolutionary algorithms.
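The simplest instance of this model-building idea can be sketched as a univariate estimation of distribution algorithm in the spirit of UMDA, again on the toy one-max objective (all parameter values below are illustrative assumptions, and a univariate model is the simplest case: richer EDAs replace it with, e.g., a learned Bayesian network over the solution variables).

```python
import random

random.seed(1)

def umda(n_bits=20, pop_size=60, n_select=20, generations=40):
    """Univariate EDA sketch: instead of crossover and mutation, each
    generation fits the marginal probability of a 1 at every position
    from the selected individuals, then samples the next population
    from that probabilistic model."""
    probs = [0.5] * n_bits                       # initial model
    for _ in range(generations):
        # Sample a population from the current model.
        pop = [[1 if random.random() < p else 0 for p in probs]
               for _ in range(pop_size)]
        # Truncation selection on the one-max fitness.
        pop.sort(key=sum, reverse=True)
        elite = pop[:n_select]
        # Re-estimate the marginals from the elite, clamped so no
        # probability collapses to exactly 0 or 1.
        probs = [min(0.95, max(0.05,
                 sum(ind[i] for ind in elite) / n_select))
                 for i in range(n_bits)]
    return probs

model = umda()
```

After a few dozen generations the learned marginals concentrate near 1 at every position, i.e., the model itself encodes the regularity "each bit should be 1" that selection discovered.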
Because of these advantages, different types of evolutionary algorithms have been used in Bayesian network learning and inference tasks. A wide range of tasks, such as triangulation of the moral graph in Bayesian network inference, abductive inference, Bayesian network structure learning in different search spaces, Bayesian classifier learning and learning dynamic Bayesian networks from streaming data, have employed evolutionary algorithms, leading to significant improvements in computational time and performance.
This topic is still an active field of research, and given the intrinsic complexity of Bayesian network tasks, evolutionary algorithms remain a strong contender. In particular, estimation of distribution algorithms, with their ability to account for the interactions between variables, seem to be a promising approach for further study. So far, several works have empirically compared the conventional approaches to Bayesian network tasks (see for example [132] for a comparison between several Bayesian network learning methods). An interesting future work that would complement this review is a similar empirical comparison of the evolutionary approaches presented here on standard datasets.