abstract = "One way of obtaining accurate yet comprehensible
models is to extract rules from opaque predictive
models. When evaluating rule extraction algorithms, one
frequently used criterion is consistency; i.e., the
algorithm must produce similar rules every time it is
applied to the same problem. Rule extraction algorithms
based on evolutionary algorithms are, however,
inherently inconsistent, something that is regarded as
their main drawback. In this paper, we argue that
consistency is an overvalued criterion, and that
inconsistency can even be beneficial in some
situations. The study contains two experiments, both
using publicly available data sets, where rules are
extracted from neural network ensembles. In the first
experiment, it is shown that it is normally possible to
extract several different rule sets from an opaque
model, all having high and similar accuracy. The
implication is that consistency, from that perspective, is
useless; why should one specific rule set be considered
superior? Clearly, it should instead be regarded as an
advantage to obtain several accurate and comprehensible
descriptions of the relationship. In the second
experiment, rule extraction is used for probability
estimation. More specifically, an ensemble of extracted
trees is used to obtain probability estimates.
Here, it is exactly the inconsistency of the rule
extraction algorithm that makes the suggested approach
possible.",