Abstract

The predictive accuracy of a learning algorithm can be split into specificity and sensitivity, amongst other decompositions. Sensitivity, also known as completeness, is the ratio of true positives to the total number of positive examples, while specificity is the ratio of true negatives to the total number of negative examples. In top-down learning methods of inductive logic programming, there is generally a bias towards sensitivity, since learning starts from the most general rule (everything is positive) and specialises by excluding some of the negative examples. While this is often useful, it is not always the best choice: in novelty detection, for example, the negative examples are rare and often varied, so they may well be ignored by the learner. In this paper we introduce a method that attempts to remove the bias towards sensitivity by fortifying the model: computing descriptions of the negative data and including them in the model even if the normal learning algorithm considers them redundant. We demonstrate the method on a set of standard datasets for description logic learning and show that the predictive accuracy increases.
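The two ratios defined above can be sketched in a few lines. This is an illustrative calculation only, not part of the paper's method; the confusion counts below are invented to mimic a novelty-detection split where negatives are rare, showing how a sensitivity-biased learner can look accurate while catching few negatives.

```python
def sensitivity(tp, fn):
    # Sensitivity (completeness): true positives / all positive examples.
    return tp / (tp + fn)

def specificity(tn, fp):
    # Specificity: true negatives / all negative examples.
    return tn / (tn + fp)

# Hypothetical confusion counts: 100 positives, only 10 negatives.
tp, fn = 95, 5   # the learner recovers most positives
tn, fp = 2, 8    # but misclassifies most of the rare negatives

print(sensitivity(tp, fn))  # 0.95
print(specificity(tn, fp))  # 0.2
```

Overall accuracy here is (95 + 2) / 110 ≈ 0.88, so the weak specificity is easy to miss unless the decomposition is inspected separately, which is the imbalance the fortification method targets.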

Related Material

@InProceedings{pmlr-v29-Tran13,
title = {Improving Predictive Specificity of Description Logic Learners by Fortification},
author = {An Tran and Jens Dietrich and Hans Guesgen and Stephen Marsland},
booktitle = {Proceedings of the 5th Asian Conference on Machine Learning},
pages = {419--434},
year = {2013},
editor = {Cheng Soon Ong and Tu Bao Ho},
volume = {29},
series = {Proceedings of Machine Learning Research},
address = {Australian National University, Canberra, Australia},
month = {13--15 Nov},
publisher = {PMLR},
pdf = {http://proceedings.mlr.press/v29/Tran13.pdf},
url = {http://proceedings.mlr.press/v29/Tran13.html},
abstract = {The predictive accuracy of a learning algorithm can be split into specificity and sensitivity, amongst other decompositions. Sensitivity, also known as completeness, is the ratio of true positives to the total number of positive examples, while specificity is the ratio of true negatives to the total number of negative examples. In top-down learning methods of inductive logic programming, there is generally a bias towards sensitivity, since the learning starts from the most general rule (everything is positive) and specialises by excluding some of the negative examples. While this is often useful, it is not always the best choice: for example, in novelty detection, where the negative examples are rare and often varied, they may well be ignored by the learning. In this paper we introduce a method that attempts to remove the bias towards sensitivity by fortifying the model by computing and then including in the model some descriptions of the negative data even if they are considered redundant by the normal learning algorithm. We demonstrate the method on a set of standard datasets for description logic learning and show that the predictive accuracy increases.}
}
