Decision Tree Classifier implementation in R

The decision tree classifier is a supervised learning algorithm that can be used for both classification and regression tasks. We explained the building blocks of the decision tree algorithm in our earlier articles. Now we are going to implement the decision tree classifier in R using the caret machine learning package.

To get more out of this article, it is recommended to learn about the decision tree algorithm first. If you don't have a basic understanding of the decision tree classifier, it's worth spending some time on how the decision tree algorithm works.

Why use the Caret Package

To work on big datasets, we can use existing machine learning packages rather than coding every algorithm from scratch. The R developer community has built the excellent caret package to make this work easier. The beauty of such packages is that they are well optimized and handle most edge cases, which keeps our job simple: we just need to call the right functions with the right parameters.

Car Evaluation Data Set Description

The Car Evaluation data set consists of 7 attributes: 6 feature attributes and 1 target attribute. All the attributes are categorical. We will try to build a classifier for predicting the Class attribute, which is the 7th attribute.

#   Attribute                          Values
1   buying                             vhigh, high, med, low
2   maint                              vhigh, high, med, low
3   doors                              2, 3, 4, 5more
4   persons                            2, 4, more
5   lug_boot                           small, med, big
6   safety                             low, med, high
7   Car Evaluation (target variable)   unacc, acc, good, vgood

The above table shows each attribute and its possible values.

Car Evaluation Problem Statement

To model a classifier that evaluates the acceptability of a car based on its features.

Decision Tree classifier implementation in R with Caret Package

R Library import

For implementing the decision tree in R, we need to import the “caret” and “rpart.plot” packages. As we mentioned above, caret helps us perform various tasks for our machine learning work. The “rpart.plot” package will help us get a visual plot of the decision tree.

Importing the packages

R


library(caret)

library(rpart.plot)

In case you face any error while running the code, first install the rpart.plot package using the command install.packages("rpart.plot").

Data Import

For importing the data and manipulating it, we are going to use data frames. First of all, we need to download the dataset; it is available as the Car Evaluation data set in the UCI Machine Learning Repository. All the data values are separated by commas. After downloading the data file, you need to set your working directory via the console, or else save the data file in your current working directory.

You can get the path of your current working directory by running the getwd() command in the R console. If you wish to change your working directory, then setwd(<path of new working directory>) will do the job.
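For example (a quick sketch; the path below is a placeholder you should replace with your own directory):

R

getwd()                     # print the current working directory
setwd("/path/to/your/data") # placeholder path; point this at your data folder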

For importing data into an R data frame, we can use the read.csv() method, whose parameters include the file name and whether our dataset's 1st row contains a header. If a header row exists, header should be set to TRUE; otherwise, header should be set to FALSE. Our file has no header row, so we pass header = FALSE, as in the sketch below.
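Putting this together, a minimal import sketch (we assume the downloaded file was saved as "car.data", the file name used by the UCI repository; adjust it if yours differs):

R

# read the comma-separated Car Evaluation data; the file has no header row,
# so header = FALSE lets R assign the default column names V1..V7
car_df <- read.csv("car.data", sep = ",", header = FALSE)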

For checking the structure of data frame we can call the function str() over car_df:

R

> str(car_df)
'data.frame': 1728 obs. of 7 variables:
 $ V1: Factor w/ 4 levels "high","low","med",..: 4 4 4 4 4 4 4 4 4 4 ...
 $ V2: Factor w/ 4 levels "high","low","med",..: 4 4 4 4 4 4 4 4 4 4 ...
 $ V3: Factor w/ 4 levels "2","3","4","5more": 1 1 1 1 1 1 1 1 1 1 ...
 $ V4: Factor w/ 3 levels "2","4","more": 1 1 1 1 1 1 1 1 1 2 ...
 $ V5: Factor w/ 3 levels "big","med","small": 3 3 3 2 2 2 1 1 1 3 ...
 $ V6: Factor w/ 3 levels "high","low","med": 2 3 1 2 3 1 2 3 1 2 ...
 $ V7: Factor w/ 4 levels "acc","good","unacc",..: 3 3 3 3 3 3 3 3 3 3 ...

The above output shows us that our dataset consists of 1728 observations each with 7 attributes.

To check the first six rows of the dataset, we can use head().

> head(car_df)
     V1    V2 V3 V4    V5   V6    V7
1 vhigh vhigh  2  2 small  low unacc
2 vhigh vhigh  2  2 small  med unacc
3 vhigh vhigh  2  2 small high unacc
4 vhigh vhigh  2  2   med  low unacc
5 vhigh vhigh  2  2   med  med unacc
6 vhigh vhigh  2  2   med high unacc

All the features are categorical, so normalization of data is not needed.

Data Slicing

Data slicing is the step of splitting the data into a train set and a test set. The training data set is used for building our model; the test data set must not be mixed into model building in any way. Even during standardization, we should not standardize our test set together with the training data.

Splitting the Dataset

R

set.seed(3033)
intrain <- createDataPartition(y = car_df$V7, p = 0.7, list = FALSE)
training <- car_df[intrain,]
testing <- car_df[-intrain,]

The set.seed() method is used to make our work replicable. Since we want our readers to learn these concepts by running the snippets themselves, and the data partitioning is random, we set a seed value: if readers pass the same value to set.seed(), they will get an identical split and therefore identical results.

The caret package provides the createDataPartition() method for partitioning our data into train and test sets. We are passing 3 parameters. The “y” parameter takes the variable according to which the data needs to be partitioned. In our case, the target variable is V7, so we are passing car_df$V7 (the car data frame's V7 column).

The “p” parameter holds a decimal value in the range 0-1 and gives the proportion of the split. We are using p = 0.7, which means the data is split in a 70:30 ratio. The “list” parameter determines whether to return a list or a matrix; we are passing FALSE so that a matrix is returned. The createDataPartition() method returns a matrix “intrain” containing the indices of the training records.

By passing the values of intrain, we split the data into training and testing sets. The line training <- car_df[intrain,] puts the selected rows into the training data frame; the remaining rows are saved in the testing data frame with testing <- car_df[-intrain,]
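As an optional sanity check (not part of the original snippet), we can confirm that the split preserved the class proportions of V7 in both sets:

R

# createDataPartition() samples within each class, so the class
# proportions should be nearly identical in the two sets
prop.table(table(training$V7))
prop.table(table(testing$V7))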

For checking the dimensions of our training data frame and testing data frame, we can use these:

R

# check dimensions of train & test set
dim(training); dim(testing)

Preprocessing & Training

Preprocessing is all about correcting problems in the data before building a machine learning model with it. Problems can be of many types, such as missing values or attributes with very different ranges.

To check whether our data contains missing values or not, we can use the anyNA() method. Here, NA means Not Available.

> anyNA(car_df)
[1] FALSE

Since it’s returning FALSE, it means we don’t have any missing values.

Dataset summarized details

For checking the summarized details of our data, we can use the summary() method. It gives us a basic idea of each attribute's range of values and their frequencies.

> summary(car_df)
     V1          V2          V3          V4          V5          V6          V7
 high :432   high :432   2    :432   2   :576   big  :576   high:576   acc  : 384
 low  :432   low  :432   3    :432   4   :576   med  :576   low :576   good :  69
 med  :432   med  :432   4    :432   more:576   small:576   med :576   unacc:1210
 vhigh:432   vhigh:432   5more:432                                     vgood:  65

Training the Decision Tree classifier with criterion as information gain

The caret package provides the train() method for training our data with various algorithms; we just need to pass different parameter values for different algorithms. Before calling train(), we will first use the trainControl() method, which controls the computational nuances of train().

trctrl <- trainControl(method = "repeatedcv", number = 10, repeats = 3)
set.seed(3333)
dtree_fit <- train(V7 ~ ., data = training, method = "rpart",
                   parms = list(split = "information"),
                   trControl = trctrl,
                   tuneLength = 10)

We are setting 3 parameters of the trainControl() method. The “method” parameter holds the details of the resampling method; it can take values like “boot”, “boot632”, “cv”, “repeatedcv”, “LOOCV”, “LGOCV”, etc. For this tutorial, let's use “repeatedcv”, i.e., repeated cross-validation.

The “number” parameter holds the number of resampling iterations (here, folds). The “repeats” parameter gives the number of complete sets of folds to compute for our repeated cross-validation; we are setting number = 10 and repeats = 3. The trainControl() method returns a list, which we will pass to our train() method.
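For instance, if repeated cross-validation turns out to be too slow on your machine, plain 10-fold cross-validation is a drop-in alternative (a sketch; trctrl_cv is a name we introduce here):

R

# lighter resampling scheme: a single round of 10-fold cross-validation
trctrl_cv <- trainControl(method = "cv", number = 10)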

Before training our decision tree classifier, we call set.seed() again so that the resampling is reproducible.

For training the decision tree classifier, train() should be called with the “method” parameter set to “rpart”. There is a separate package, “rpart”, built specifically for decision tree implementation; caret links its train() function to rpart (and many other packages) to make our work simple.

We are passing our target variable V7. The formula “V7 ~ .” means: use all the other attributes as predictors and V7 as the target variable. The “trControl” parameter should be passed the result of our trainControl() call.

You can check the documentation of rpart by typing ?rpart. We can use different criteria for splitting the nodes of the tree.

To select a specific splitting strategy, we pass the “parms” parameter to our train() method. It should contain a list of parameters for the rpart method. To choose the splitting criterion, we add a “split” entry with the value “information” for information gain or “gini” for the gini index. In the above snippet, we are using information gain as the criterion.

Trained Decision Tree classifier results

We can check the result of our train() method by printing the dtree_fit variable. It shows the accuracy metrics for different values of cp, the complexity parameter of our decision tree.

> dtree_fit
CART

1212 samples
   6 predictor
   4 classes: 'acc', 'good', 'unacc', 'vgood'

No pre-processing
Resampling: Cross-Validated (10 fold, repeated 3 times)
Summary of sample sizes: 1091, 1090, 1091, 1092, 1091, 1091, ...
Resampling results across tuning parameters:

  cp          Accuracy   Kappa
  0.01123596  0.8600447  0.6992474
  0.01404494  0.8487633  0.6710345
  0.01896067  0.8309266  0.6307181
  0.01966292  0.8295492  0.6284956
  0.02247191  0.8130381  0.5930024
  0.02387640  0.8116674  0.5904830
  0.05337079  0.7772599  0.5472383
  0.06179775  0.7745300  0.5470675
  0.07584270  0.7467212  0.3945498
  0.08426966  0.7202717  0.1922830

Accuracy was used to select the optimal model using the largest value.
The final value used for the model was cp = 0.01123596.
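If you want the selected complexity parameter as a value you can reuse, rather than reading it off the printout, caret stores it in the fitted object (a small sketch):

R

# the tuning parameter value(s) caret selected, as a one-row data frame
dtree_fit$bestTune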

Plot Decision Tree

We can visualize our decision tree by using the prp() method.

prp(dtree_fit$finalModel, box.palette = "Reds", tweak = 1.2)

The resulting plot shows the structure of the tree, i.e., the order in which attributes are selected for splitting when the criterion is information gain.
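If you prefer, the same rpart.plot package also exposes the rpart.plot() function, which renders the same model with a few more display options; a minimal sketch:

R

# alternative rendering of the same final rpart model
rpart.plot(dtree_fit$finalModel, box.palette = "Reds", tweak = 1.2)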

Prediction

Now our model is trained with cp = 0.01123596, and we are ready to predict classes for our test set with the predict() method. Let's first try to predict the target variable for the test set's 1st record.

> testing[1,]
     V1    V2 V3 V4    V5  V6    V7
2 vhigh vhigh  2  2 small med unacc
> predict(dtree_fit, newdata = testing[1,])
[1] unacc
Levels: acc good unacc vgood

For the 1st record of the testing data, the classifier predicts the class variable as “unacc”. Now, it's time to predict the target variable for the whole test set.

> test_pred <- predict(dtree_fit, newdata = testing)
> confusionMatrix(test_pred, testing$V7)  # check accuracy
Confusion Matrix and Statistics

          Reference
Prediction acc good unacc vgood
     acc   102   19    36     3
     good    6    4     0     3
     unacc   5    0   318     0
     vgood  11    1     0     8

Overall Statistics

               Accuracy : 0.8372
                 95% CI : (0.8025, 0.868)
    No Information Rate : 0.686
    P-Value [Acc > NIR] : 3.262e-15

                  Kappa : 0.6703
 Mcnemar's Test P-Value : NA

Statistics by Class:

                     Class: acc Class: good Class: unacc Class: vgood
Sensitivity              0.8226    0.166667       0.8983      0.57143
Specificity              0.8520    0.981707       0.9691      0.97610
Pos Pred Value           0.6375    0.307692       0.9845      0.40000
Neg Pred Value           0.9382    0.960239       0.8135      0.98790
Prevalence               0.2403    0.046512       0.6860      0.02713
Detection Rate           0.1977    0.007752       0.6163      0.01550
Detection Prevalence     0.3101    0.025194       0.6260      0.03876
Balanced Accuracy        0.8373    0.574187       0.9337      0.77376

The above results show that the classifier with information gain as the criterion gives 83.72% accuracy on the test set.
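If you want that accuracy as a number you can reuse, for example to compare against the gini model below, the confusionMatrix() result stores it in its overall element (a small sketch; cm_info is a name we introduce here):

R

# store the confusion matrix object and extract the overall accuracy
cm_info <- confusionMatrix(test_pred, testing$V7)
cm_info$overall["Accuracy"]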

Training the Decision Tree classifier with criterion as gini index

Let's now train a decision tree classifier using the gini index as the splitting criterion. Printing the fitted model again shows the accuracy metrics for different values of cp, the complexity parameter of our decision tree.

> set.seed(3333)
> dtree_fit_gini <- train(V7 ~ ., data = training, method = "rpart",
+                         parms = list(split = "gini"),
+                         trControl = trctrl,
+                         tuneLength = 10)
> dtree_fit_gini
CART

1212 samples
   6 predictor
   4 classes: 'acc', 'good', 'unacc', 'vgood'

No pre-processing
Resampling: Cross-Validated (10 fold, repeated 3 times)
Summary of sample sizes: 1091, 1090, 1091, 1092, 1091, 1091, ...
Resampling results across tuning parameters:

  cp          Accuracy   Kappa
  0.01123596  0.8600222  0.6966316
  0.01404494  0.8493028  0.6704178
  0.01896067  0.8055473  0.5650697
  0.01966292  0.8022415  0.5587148
  0.02247191  0.7885257  0.5254510
  0.02387640  0.7874283  0.5242579
  0.05337079  0.7780797  0.5286806
  0.06179775  0.7739632  0.5354177
  0.07584270  0.7467212  0.3945498
  0.08426966  0.7202717  0.1922830

Accuracy was used to select the optimal model using the largest value.
The final value used for the model was cp = 0.01123596.

Plot Decision Tree

We can visualize our decision tree by using the prp() method.

prp(dtree_fit_gini$finalModel, box.palette = "Blues", tweak = 1.2)

Prediction

Now our gini-based model is also trained with cp = 0.01123596, and we are ready to predict the target variable for the whole test set.

> test_pred_gini <- predict(dtree_fit_gini, newdata = testing)
> confusionMatrix(test_pred_gini, testing$V7)  # check accuracy
Confusion Matrix and Statistics

          Reference
Prediction acc good unacc vgood
     acc   109   16    34     6
     good    5    7     0     0
     unacc   7    0   320     0
     vgood   3    1     0     8

Overall Statistics

               Accuracy : 0.8605
                 95% CI : (0.8275, 0.8892)
    No Information Rate : 0.686
    P-Value [Acc > NIR] : < 2.2e-16

                  Kappa : 0.7133
 Mcnemar's Test P-Value : NA

Statistics by Class:

                     Class: acc Class: good Class: unacc Class: vgood
Sensitivity              0.8790     0.29167       0.9040      0.57143
Specificity              0.8571     0.98984       0.9568      0.99203
Pos Pred Value           0.6606     0.58333       0.9786      0.66667
Neg Pred Value           0.9573     0.96627       0.8201      0.98810
Prevalence               0.2403     0.04651       0.6860      0.02713
Detection Rate           0.2112     0.01357       0.6202      0.01550
Detection Prevalence     0.3198     0.02326       0.6337      0.02326
Balanced Accuracy        0.8681     0.64075       0.9304      0.78173

The above results show that the classifier with the gini index as the criterion gives 86.05% accuracy on the test set. In this case, the classifier using the gini index performs better than the one using information gain.
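Beyond comparing single test-set accuracies, caret can also compare the two fits across their shared cross-validation folds with its resamples() helper; a sketch (the labels information and gini are names we choose):

R

# both models were trained with the same seed and trainControl object,
# so their resampling folds match and can be compared directly
models_compare <- resamples(list(information = dtree_fit, gini = dtree_fit_gini))
summary(models_compare)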