1. South African Heart Disease

Let’s look at applying these three methods to some empirical data. We’ll start with a data set from Rousseauw et al. (1983) describing a retrospective sample of males in a heart-disease high-risk region of the Western Cape, South Africa. This data set has 462 observations on 9 predictors and 2 classes (coronary heart disease present or absent). We’ll be using the e1071 package for SVM, the klaR package for kernel density classification (KDC), and the class package for k-NN.

1.1 Pre-processing

The data are a mix of categorical and numerical. The numeric features are systolic blood pressure (sbp), cumulative tobacco usage (tobacco), cholesterol (ldl), body adiposity index (adiposity), type-a behavior (typea), body mass index (obesity), alcohol consumption (alcohol), and age (age). The categorical feature is family history (famhist). Below is a listing of the summary statistics.
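As a sketch of the setup, the data can be loaded and summarized as follows. The file name, the use of the first column as row names, and the relabeling of the response chd are assumptions about how the data set was obtained, not something fixed by the text.

```r
# Assumed: the data set has been saved locally as "SAheart.data",
# a comma-separated file whose first column holds row names
heart <- read.csv("SAheart.data", row.names = 1)

# Recode the 0/1 response as a labeled factor (labeling is our choice)
heart$chd <- factor(heart$chd, labels = c("No", "Yes"))

# Summary statistics for all features and the response
summary(heart)
```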

We first want to separate the observations into a training set and a testing set. We’ll randomly select 362 observations as the training set, and the remaining 100 observations will serve as the testing set.
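A minimal sketch of this split, assuming the data frame is named heart as above (the seed value is arbitrary and only fixes the random draw):

```r
set.seed(1)  # for reproducibility; the seed itself is arbitrary

# Randomly choose 362 of the 462 row indices for training
train_idx <- sample(nrow(heart), 362)
train <- heart[train_idx, ]
test  <- heart[-train_idx, ]  # the remaining 100 observations
```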

Since the features span very different ranges, we first need to pre-process the data. The data has no NA values, but we do need to center and scale each feature to mean 0 and standard deviation 1. This matters because KNN and SVM calculate distances between points, so features on different scales would skew the distance calculation, over- or under-weighting the effect of certain features, when instead we want each feature to contribute to the classifier equally.

In order to do this, we first scale the training set by subtracting each feature’s mean and dividing by its standard deviation. Then we subtract and divide the features in the testing set by those same training-set means and standard deviations. It is important to use the values from the training set to scale the testing set because, in a real-world situation, the training set is all we have available when new observations arrive.
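The scaling step can be sketched as below, assuming the training and testing sets are stored in data frames named train and test; only the numeric columns are standardized, leaving the categorical famhist and the response untouched.

```r
# Identify the numeric feature columns
num_cols <- names(train)[sapply(train, is.numeric)]

# Means and standard deviations computed from the TRAINING set only
mu    <- colMeans(train[num_cols])
sigma <- apply(train[num_cols], 2, sd)

# Apply the same training-set centering and scaling to both sets
train[num_cols] <- scale(train[num_cols], center = mu, scale = sigma)
test[num_cols]  <- scale(test[num_cols],  center = mu, scale = sigma)
```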

1.2 K-Nearest Neighbors

Let’s first try applying k-nearest neighbors with k = 5. Since KNN calculates distances between points, we need to convert our categorical variable famhist to a numerical representation; the as.numeric() function handles this.

After converting, we can build a classifier using the knn() function in the class package. This function both trains the classifier on the training set and makes class predictions for the testing set. We’ll also give confusion matrices for the training set and testing set.
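The two steps above might look like this, assuming train and test data frames with a factor response named chd (these object and column names are assumptions):

```r
library(class)

# famhist must be numeric for the distance calculation
train_knn <- train
test_knn  <- test
train_knn$famhist <- as.numeric(train_knn$famhist)
test_knn$famhist  <- as.numeric(test_knn$famhist)

predictors <- setdiff(names(train_knn), "chd")

# knn() trains and predicts in one call; predict the training set
# (for the training confusion matrix) and the testing set
pred_train <- knn(train_knn[predictors], train_knn[predictors],
                  cl = train_knn$chd, k = 5)
pred_test  <- knn(train_knn[predictors], test_knn[predictors],
                  cl = train_knn$chd, k = 5)

table(predicted = pred_train, actual = train_knn$chd)  # training confusion matrix
table(predicted = pred_test,  actual = test_knn$chd)   # testing confusion matrix
```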

So by using a k-nearest neighbor classifier, we obtain a 75% training accuracy rate. More important, however, is the test accuracy rate of 72%. Unfortunately, our false negative rate is rather high at 65%, compared with a false positive rate of 13%. A high false negative rate is not good in this context because it means patients who have heart disease are not being diagnosed, which can be deadly.

1.3 Kernel Density Classification

Now let’s try applying a Naive Bayes classifier built on kernel density estimates. The NaiveBayes() function in the R package klaR makes this easy: its usekernel argument, when set to TRUE, estimates each class-conditional density with density() and then computes the classifier from those estimates.
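A minimal sketch of this fit, again assuming train and test data frames with response chd:

```r
library(klaR)

# usekernel = TRUE replaces the default Gaussian density for each
# numeric feature with a kernel density estimate from density()
nb_fit  <- NaiveBayes(chd ~ ., data = train, usekernel = TRUE)
nb_pred <- predict(nb_fit, newdata = test)

table(predicted = nb_pred$class, actual = test$chd)  # testing confusion matrix
```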

So with a kernel density estimate and a Naive Bayes classifier we achieve a 74% training accuracy rate, slightly below KNN’s, but the test accuracy rate improves to 74%. Unfortunately, our false positive rate has gone up to 21% while the false negative rate went down to 38%. Ideally, a lower false negative rate would come from higher true positive or true negative rates rather than a higher false positive rate. In this context, however, a higher false positive rate could be acceptable: it simply indicates that further tests would need to be performed to diagnose a patient.

1.4 Linear SVM

Let’s start by applying a linear decision boundary on the training data. We’ll perform 10-fold cross-validation over a range of cost parameters and make a prediction for the test data using the best model.
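One way to sketch this tuning step with e1071, assuming the train and test data frames from above; tune() uses 10-fold cross-validation by default, and the particular cost grid here is an assumption:

```r
library(e1071)

# 10-fold cross-validation over a grid of cost values (grid is assumed)
lin_tune <- tune(svm, chd ~ ., data = train, kernel = "linear",
                 ranges = list(cost = 2^(-6:6)))

# Predict the test set with the best cross-validated model
lin_pred <- predict(lin_tune$best.model, newdata = test)
table(predicted = lin_pred, actual = test$chd)  # testing confusion matrix
```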

So as we can see, this achieves a 72% accuracy rate on training data. This is not as good as the training accuracy rates from KNN and KDC, but we get a 79% accuracy rate on test data. This is a significant improvement over the KNN and KDC classifiers, and we might expect polynomial and radial kernels to perform even better. Additionally, our false positive rate has gone down to 14% and our false negative rate remains at 38%. So far, this is our best model.

1.5 Polynomial SVM

The linear decision boundary is an improvement on the KNN and KDC classifiers, so perhaps a polynomial kernel will be even better. As with the linear kernel, we’ll perform 10-fold cross-validation over the cost parameter, but now also over the degree of the kernel and gamma.
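The expanded grid search might be sketched as follows; the particular ranges for cost, degree, and gamma are assumptions (chosen so they contain the best parameters reported below):

```r
library(e1071)

# 10-fold cross-validation over cost, polynomial degree, and gamma
poly_tune <- tune(svm, chd ~ ., data = train, kernel = "polynomial",
                  ranges = list(cost   = 2^(-6:6),
                                degree = 2:4,
                                gamma  = 2^(-2:2)))
poly_tune$best.parameters  # inspect the selected cost, degree, gamma

poly_pred <- predict(poly_tune$best.model, newdata = test)
table(predicted = poly_pred, actual = test$chd)  # testing confusion matrix
```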

So with a polynomial kernel of degree 2, cost 0.015625, and gamma 2, the training accuracy rate actually went up to 78%, but the test accuracy rate slipped to 78%, just below the linear kernel’s. This is interesting, as one would expect the test accuracy rate to go up with a polynomial kernel. On the other hand, since the training and testing rates are equal, the classifier did not overfit the data. Notably, though, our false positive rate went down to 10% while the false negative rate went up to 52%. That is a poor trade-off here, so I would rule this out as a good classifier.

1.6 Radial SVM

Maybe a radial kernel will increase our test data accuracy rate. Again, we’ll perform 10-fold cross validation, but this time the parameters are cost and gamma.
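As before, a sketch of the tuning step; the cost and gamma grids are assumptions:

```r
library(e1071)

# 10-fold cross-validation over cost and gamma for the RBF kernel
rad_tune <- tune(svm, chd ~ ., data = train, kernel = "radial",
                 ranges = list(cost  = 2^(-6:6),
                               gamma = 2^(-2:2)))

rad_pred <- predict(rad_tune$best.model, newdata = test)
table(predicted = rad_pred, actual = test$chd)  # testing confusion matrix
```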

So with a radial kernel, the training data accuracy rate is 80%. Additionally, the test data accuracy rate went up to 81%, the best accuracy rate we have achieved thus far. The false negative rate did go up slightly compared to KDC and linear SVM, to 41%, but the false positive rate stayed at 10%, tying the polynomial kernel for the lowest false positive rate achieved.

2. Analysis

Classification Method    Training Rate    Testing Rate
KNN                      75%              72%
KDC                      74%              74%
Linear SVM               72%              79%
Polynomial SVM           78%              78%
Radial SVM               80%              81%

Classification Method    False Positive Rate    False Negative Rate
KNN                      13%                    65%
KDC                      21%                    38%
Linear SVM               14%                    38%
Polynomial SVM           10%                    52%
Radial SVM               10%                    41%

We see that the highest test accuracy rate we achieve is 81%, with an average of about 80% for the SVMs, versus 72% for KNN and 74% for KDC. Ideally, we would like this to be higher, especially because the false negative rate is quite high across all methods, which could lead to disastrous consequences given the context of the classification.

What this indicates is that there is some class imbalance, and looking at the full data set confirms this: only about 35% of observations belong to the positive class. What happens in this situation for SVMs is that, since the misclassification cost C is the same for both classes, a higher proportion of positive observations gets classified as negative, leading to a higher false negative rate. To rectify this, we can assign weights to each class, with the positive class receiving a higher weight. This doesn’t always work, though; the effect can be checked by refitting our best SVM model with class weights.
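A class-weighted refit might be sketched as below, using the class.weights argument of e1071’s svm(). The 2:1 weighting roughly reflects the inverse class frequencies (about 35% positive), but both the weights and the omission of the tuned cost/gamma values are illustrative assumptions, and the class labels assume the response was coded with levels "No" and "Yes".

```r
library(e1071)

# Up-weight misclassification of the positive ("Yes") class; the
# 2:1 ratio is an illustrative assumption, not a tuned value
wt_fit <- svm(chd ~ ., data = train, kernel = "radial",
              class.weights = c("No" = 1, "Yes" = 2))

wt_pred <- predict(wt_fit, newdata = test)
table(predicted = wt_pred, actual = test$chd)  # testing confusion matrix
```

Raising the positive-class weight trades false negatives for false positives, which is the direction we said is acceptable in this setting.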

