Table 1: Feature subset selected using prediction sensitivity.

"... In PAGE 5: ... According to the high mean/low variance criterion, most prominent for this example are the features 38, 39, 40, 45. Their names and brief descriptions are shown in Table1 . These features are indeed meaningful for the land attack.... ..."

"... In PAGE 5: ... Furthermore, we presented a novel feature subset selection strategy that minimized the feature space without the loss of information. Table2 indicates that through the first pass of our feature subset selection strategy we managed to reduce the feature set from 254 moments to a much smaller feature subset comprising just 47 salient moments, whilst achieving a slight increase in the classification accuracy. The second pass of our feature subset selection strategy, involves the use of a Markov Blanket as a filter model to the 47 features.... ..."

Table 4: Feature Subset Selection Results with Constant Parameters

"... In PAGE 9: ... This gave a total of 1132 feature vectors in the input to the genetic algorithm. To show the general effectiveness of genetic feature selection on this problem, Table4 shows the results of five separate runs of the genetic algorithm with RIPPER with identical parameters used for each run. The number of attributes is significantly reduced while the accuracy is maintained.... ..."

Table 5: Accuracy and number of features for using feature subset selection on the full-text classifier.

"... In PAGE 10: ...4 Comparison to full-text classifier Another obvious question that should be answered is a comparison between the predictive accu- racies of the link-based page classifiers to those of a classifier that uses the words occurring on the page as a feature. Table5 shows a comparison between the link-based classifier using all four... ..."

Table 4: Feature selection strategies. A is a subset of features selected for correspondence estimations; B is a subset of features used for interesting voxel selection. [Column headers: interesting voxel selection | feature selection for correspondence estimation; first row label: random]

Table 4: Error Rate for Different Feature Subsets

"... In PAGE 19: ... The rule de- tected positive patient records 214 and 230 as outliers within the test set. The results are presented in the first row of Table4 . Comparing results obtained by explicit and rule-based outlier detection it can be noted that the later approach selected the same records as the former approach but that it also detected two more records: 165 and 185.... In PAGE 20: ... The limit values between positive and negative classes are based on induced rules in previous experiments. Table4 in its second and third row includes results obtained for positive classes defined by con- ditions a75a23a76 a75a23a77a23a78a23a79 a80a48a81a83a82 a84 a85 a86 a87a15a87 and a88a48a89a23a90a50a79 a80a48a81a83a82 a84 a91 a86 a87a15a87 respec- tively. In the left column are records detected as outliers in the test set by explicit approach while in the right one are detected by the rule-based method.... In PAGE 27: ...9 95.1 Table4 shows the prediction accuracies of the two methods. Re- duced data set A2, from which the multivariate outliers and the examples with the most univariate outlier values were removed, was the worst nearest neighbour classifier.... In PAGE 53: ... From these subsets Decision Master induced decision trees and calculated the error rate based on cross validation. We observed that a better error rate can be reached if the decision tree is only induced from a subset of features, see Table4 and Figure 4. The method used in this paper does not tell us what is the right number of features.... ..."