Journal of Asian Scientific Research

COMPARISON OF THREE CLASSIFICATION ALGORITHMS FOR PREDICTING PM2.5 IN HONG KONG RURAL AREA

Yin Zhao, School of Mathematical Sciences, Universiti Sains Malaysia, Penang, Malaysia
Yahya Abu Hasan, School of Mathematical Sciences, Universiti Sains Malaysia, Penang, Malaysia

ABSTRACT

Data mining is an approach to discovering knowledge from large data sets. Pollutant forecasting is an important problem in the environmental sciences. This paper uses data mining methods to forecast the fine-particle (PM2.5) concentration level in a new town in a rural area of Hong Kong. Several classification algorithms are available in data mining, such as Artificial Neural Networks (ANN), Boosting, and k-Nearest Neighbours (k-NN). All of them are popular machine learning algorithms used in many data mining areas, including environmental, educational, and financial data mining. This paper builds PM2.5 concentration level predictive models based on ANN, Boosting (i.e., AdaBoostM1), and k-NN using R packages. The data set covers meteorological data and PM2.5 data for the period 2009 to 2011. The PM2.5 concentration is divided into two levels: low and high. The critical point is 25 μg/m³ (24-hour mean). The parameters of the models are selected by multiple cross validation. According to 100 replications of 10-fold cross validation, the testing accuracy of AdaBoostM1 is around 0.846~0.868, which is the best result among the three algorithms in this paper.

Keywords: Artificial Neural Network (ANN), AdaBoostM1, k-Nearest Neighbours (k-NN), PM2.5 prediction, Data Mining, Machine Learning

INTRODUCTION

Air pollution is a major problem in the world, especially in some developing countries and business cities. One of the most important pollutants is particulate matter. Particulate matter (PM) can be defined as a mixture of fine particles and droplets in the air, characterized by their size. PM2.5 refers to particulate matter whose size is 2.5 micrometers or smaller. Due to its effect

on health, it is crucial to prevent the pollution from getting worse in the long run. According to a WHO report, mortality in cities with high levels of pollution exceeds that observed in relatively cleaner cities by 15–20% (WHO, 2011). Data mining provides many methods for building predictive models in various areas, including regression, classification, cluster analysis, association analysis, and so on. Data mining projects are often structured around the specific needs of an industry sector, or even tailored and built for a single organization; a successful data mining project starts from a well-defined question or need. PM2.5 prediction models can be divided into two groups: one builds a regression or related model in order to capture the exact numeric value in the future (i.e., the next day or the next hours); the other uses a classification method to build a model for predicting the level of concentration. We use the latter in this paper, that is, a classification model. The classification result tells people the PM2.5 concentration level on the next day instead of the concrete value, which should be more helpful for understanding the pollution situation than the exact number. Forecasting of air quality is much needed in the short term so that necessary preventive action can be taken during episodes of air pollution. Considering that our target data set is from a rural area of Hong Kong, we look for a strict standard of PM2.5 as the split criterion. WHO's Air Quality Guideline (AQG) says the 24-hour mean of PM2.5 concentration should be less than 25 μg/m³ (WHO, 2005), although Hong Kong's proposed Air Quality Objective (AQO) is currently 75 μg/m³ (AQO, 2012). As a result, we use WHO's critical value as our standard point. The number of particulates at a particular time depends on many environmental factors, especially meteorological ones, such as air pressure, rainfall, humidity, air temperature, and wind speed. In this paper, we build models for predicting the next day's PM2.5 mean concentration level using three popular machine learning algorithms: Artificial Neural Networks (ANN), k-Nearest Neighbours (k-NN), and Boosting (i.e., AdaBoostM1). ANN is inspired by attempts to simulate the biological neural system. It may contain many intermediary layers between its input and output layers and may use various types of activation functions (Rojas, 1996). k-NN is one of the simplest techniques for classification problems: it only stores the training data and then predicts the class of a new value by looking for the k observations in the training set that are closest to the new value. Boosting is based on the decision tree method, which is the basic classifier in many tree-based algorithms (e.g., Random Forest, C5.0, etc.). A simple decision tree is a weak classifier; hence a smart technique is to combine many of them into a stronger one. For instance, suppose the error rate of a weak classifier is 0.49. If we combine 1001 such weak learners into one learner and let them vote by simple majority rule, the ensemble errs only when at least 501 of the independent voters err, so its error rate is

\varepsilon = 1 - \sum_{i=0}^{500} \binom{1001}{i} (0.49)^i (0.51)^{1001-i} \approx 0.26

(i.e., a binomial distribution). Thus, Boosting can be considered as a learner in which a mass of decision trees is combined and votes by weight.
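This majority-vote figure can be checked numerically in R, the analysis tool used throughout this paper; the short sketch below is only an illustration of the binomial calculation, not part of the original experiments:

    # Error rate of a simple-majority vote of 1001 independent weak learners,
    # each with individual error rate 0.49.
    n <- 1001   # number of weak learners
    p <- 0.49   # error rate of each weak learner
    # The ensemble errs when more than half (>= 501) of the voters err.
    ensemble_error <- 1 - pbinom(500, size = n, prob = p)
    print(ensemble_error)   # approximately 0.26

This reproduces the value of roughly 0.26 quoted above.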

An important issue in data mining is not only to analyse data but also to visualize them, so we choose R (Ihaka and Gentleman, 1996) as our analysis tool in this paper. R is an open-source programming language and software environment for statistical computing and graphics, and it is widely used for data analysis and statistical computing projects. In this paper, we use several R packages as our analysis tools, namely the nnet package (Ripley, 2013), the kknn package (Schliep and Hechenbichler, 2013), and the RWeka package (Hornik et al., 2013). Moreover, we use some packages for plotting figures, such as the reshape2 and ggplot2 packages (Wickham, 2013a; 2013b). The remainder of the paper is organized as follows: Section 2 gives brief reviews of the three algorithms. Sections 3 and 4 describe the data and the experiments. Finally, the conclusion and discussion are given in Section 5.

METHODOLOGY

Artificial Neural Network (ANN)

An ANN is formed by a set of computing units (i.e., neurons) linked to each other. Each neuron executes two consecutive calculations: a linear combination of its inputs, followed by a nonlinear transformation of the result to obtain its output value, which is then fed to other neurons in the network. Each of the neuron connections has an associated weight. Constructing an artificial neural network consists of establishing an architecture for the network and then using an algorithm to find the weights of the connections between the neurons. The network may contain many intermediary layers between its input and output layers. Such intermediary layers are called hidden layers, and the nodes embedded in these layers are called hidden nodes. The network may use various types of activation functions, such as linear, sigmoid (logistic), and hyperbolic tangent functions. In the R nnet package, the sigmoid function is the default for classification models. Its expression is

f(x) = \frac{e^x}{1 + e^x}

The back-propagation (BP) algorithm is used in layered feed-forward ANNs (Hagan et al., 1996). The BP algorithm uses supervised learning, which means that we provide the algorithm with examples of the inputs and outputs we want the network to compute, and then the error is calculated. Let D = \{(x_i, y_i) \mid i = 1, 2, \ldots, N\} be the set of training examples. The goal of the ANN learning algorithm is to determine a set of weights w that minimize the total sum of squared errors

E(w) = \frac{1}{2} \sum_{i=1}^{N} (\hat{y}_i - y_i)^2

where \hat{y}_i is the output value obtained by performing a weighted sum on the inputs. The weight update formula used by the gradient descent method is

w_j \leftarrow w_j - \lambda \frac{\partial E(w)}{\partial w_j}

where \lambda is the learning rate.
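To make the update rule concrete, here is a minimal R sketch of one gradient-descent step for a single sigmoid neuron under the squared-error loss above; the inputs, target, weights, and learning rate are invented for illustration and are not from the paper:

    # One gradient-descent step for a single sigmoid neuron (illustrative sketch).
    sigmoid <- function(x) 1 / (1 + exp(-x))

    x <- c(1, 0.5, -0.3)    # inputs (a bias term can be included as a constant 1)
    y <- 1                  # target output
    w <- c(0.1, -0.2, 0.4)  # current weights
    lambda <- 0.05          # learning rate

    y_hat <- sigmoid(sum(w * x))   # forward pass
    # dE/dw_j for E = (1/2)(y_hat - y)^2 with a sigmoid activation:
    grad <- (y_hat - y) * y_hat * (1 - y_hat) * x
    w <- w - lambda * grad         # weight update

A full network repeats this kind of step for every weight in every layer, which is exactly what the two-phase BP iteration described next organizes.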

There are two phases in each iteration of the BP algorithm. In the forward phase, the outputs of the neurons at level k are computed prior to computing the outputs at level k+1. In the backward phase, the weights at level k+1 are updated before the weights at level k. This arrangement allows us to use the errors for neurons at layer k+1 to estimate the errors for neurons at layer k.

k-Nearest Neighbours (k-NN)

k-NN (Cover and Hart, 1967) is one of the simplest data mining algorithms and belongs to the class of so-called lazy learners: k-NN does not actually build a model from the training data but simply stores the data set. Its main work happens at prediction time. Given a new test case, its prediction is obtained by searching for similar cases in the stored training data. The k most similar training cases (i.e., neighbours) are used to obtain the prediction for the given test case. When we talk about neighbours we are implying that there is a distance or proximity measure that we can compute between samples based on the independent variables. There are many distance functions, but a rather frequent choice is the Minkowski distance, defined as

d(x, y) = \left( \sum_{k=1}^{n} |x_k - y_k|^r \right)^{1/r}

where n is the number of dimensions (i.e., attributes) and x_k and y_k are, respectively, the k-th attributes of x and y. There are two important particular cases:

r = 1: Manhattan distance. A common example is the Hamming distance, which is the number of bits that differ between two objects that have only binary attributes; it is the method of calculating the distance among nominal variables in the k-NN algorithm.

r = 2: Euclidean distance. It is used to calculate the distance among numeric variables in the k-NN algorithm.

Once the nearest-neighbour list is obtained, the test example is classified based on the majority class of its nearest neighbours. This approach makes the algorithm sensitive to the choice of k if every neighbour has the same impact. An alternative way to reduce the impact of k is to weight the influence of each nearest neighbour according to its distance, so that training examples located far away from a given test example z have a weaker impact on the classification than those located close to z. Using the distance-weighted voting scheme, the class label can be determined as follows:

y' = \arg\max_{v} \sum_{(x_i, y_i) \in D_z} w_i \times I(v = y_i)

where D_z is the set of the k closest training examples to z, v is a class label, y_i is the class label of one of the nearest neighbours, and I(\cdot) is an indicator function that returns 1 if its argument is true and 0 otherwise.
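The following R sketch implements the Minkowski distance and a distance-weighted vote for a single test point. It is a toy illustration of the scheme above, using the common choice of weights w_i = 1/d² (the paper does not fix a particular weight function), and it is not the kknn package's internal code:

    # Distance-weighted k-NN vote for one test point (illustrative sketch).
    minkowski <- function(x, y, r) sum(abs(x - y)^r)^(1 / r)

    knn_predict <- function(train_x, train_y, z, k = 3, r = 2) {
      d <- apply(train_x, 1, minkowski, y = z, r = r)  # distances to z
      nn <- order(d)[1:k]                              # indices of the k nearest
      w <- 1 / (d[nn]^2 + 1e-12)                       # weights w_i = 1/d^2
      votes <- tapply(w, train_y[nn], sum)             # weighted vote per class
      names(which.max(votes))
    }

    # Tiny example with two numeric attributes and classes "Low"/"High"
    train_x <- rbind(c(1, 1), c(1, 2), c(5, 5), c(6, 5))
    train_y <- c("Low", "Low", "High", "High")
    knn_predict(train_x, train_y, z = c(1.5, 1.5), k = 3)  # expected: "Low"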

Boosting (AdaBoostM1)

Boosting is an ensemble classification method. First, it uses voting to combine the output of individual models. Second, it combines models of the same type, for example decision trees or stumps. Unlike plain voting, however, boosting assigns a weight to each example and adaptively changes the weights at the end of each boosting round. There are many variants of the boosting idea; we describe a widely used method called AdaBoostM1 (Freund and Schapire, 1995), which is designed specifically for classification problems. AdaBoost stands for adaptive boosting: it increases the weights of incorrectly classified examples and decreases those of correctly classified ones. The AdaBoostM1 algorithm is summarized below:

Given: (x_1, y_1), (x_2, y_2), \ldots, (x_m, y_m), where x_i \in X and y_i \in \{-1, +1\}.
Initialize D_1(i) = 1/m.
For t = 1, 2, \ldots, T:
    Train a weak learner using distribution D_t.
    Get a weak hypothesis h_t : X \to \{-1, +1\} with error \varepsilon_t = \Pr_{i \sim D_t}[h_t(x_i) \neq y_i].
    If \varepsilon_t \geq 1/2, the weights are reverted back to their original uniform values 1/m.
    Choose \alpha_t = \frac{1}{2} \ln\left(\frac{1 - \varepsilon_t}{\varepsilon_t}\right).
    Update D_{t+1}(i) = \frac{D_t(i)\, e^{-\alpha_t y_i h_t(x_i)}}{Z_t}, where Z_t is a normalization factor chosen so that D_{t+1} is a probability distribution.
Output the final hypothesis: H(x) = \mathrm{sign}\left(\sum_{t=1}^{T} \alpha_t h_t(x)\right).
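As a toy illustration of the weight update above, the following R sketch performs one AdaBoostM1 round with a fixed decision stump; the data and the stump are invented for illustration and this is not the RWeka model used later in the paper:

    # One AdaBoostM1 round with a fixed decision stump (illustrative sketch).
    x <- c(1, 2, 3, 4, 5, 6)            # one predictor
    y <- c(-1, -1, -1, +1, +1, -1)      # labels in {-1, +1}
    D <- rep(1 / length(x), length(x))  # initial uniform weights D_1(i) = 1/m

    h <- function(x) ifelse(x > 3.5, +1, -1)  # a fixed stump playing the role of h_t
    pred <- h(x)
    eps <- sum(D[pred != y])                  # weighted error epsilon_t
    alpha <- 0.5 * log((1 - eps) / eps)       # alpha_t
    D <- D * exp(-alpha * y * pred)           # up-weight mistakes, down-weight hits
    D <- D / sum(D)                           # normalize (divide by Z_t)
    round(D, 3)  # the one misclassified example now carries much more weight

Running the sketch shows the five correctly classified examples dropping to weight 0.1 each while the misclassified one rises to 0.5, which is exactly the adaptive re-weighting that makes later rounds focus on hard examples.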

DATA PREPARATION

All of the data for the period were obtained from the Hong Kong Environmental Protection Department (HKEPD) and Hong Kong Met-online. The air monitoring station is Tung Chung Air Monitoring Station (Latitude 22°17'19"N, Longitude '35"E), which is in a new town of Hong Kong, and the meteorological monitoring station is Hong Kong International Airport Weather Station (Latitude 22°18'34"N, Longitude '19"E), the nearest weather station to Tung Chung. As mentioned in Section 1, accurately predicting high PM2.5 concentration is of most value from a public health standpoint; thus, the response variable has two classes: Low, indicating that the daily mean concentration of PM2.5 is equal to or below 25 μg/m³, and High, representing concentrations above 25 μg/m³.

Figure 1 shows the number of days at each of the two levels in 2009–2011. The best situation is in 2010, which has the most days at the low level, while the worst is 2011. Figure 2 shows the box plots for these three years. There are many outliers in both 2009 and 2010, which means there were many serious pollution days. This situation has various causes, for instance the influence of pollution from mainland China, or a pollution event occurring at a certain time. On the other hand, there is no obvious outlier in 2011, and it has the largest IQR.

We convert all hourly air data to daily mean values; additionally, we add some other related values: the Open value (i.e., at 0 o'clock), the Close value (i.e., at 23 o'clock), the Low value (i.e., the lowest value of the day), and the High value (i.e., the highest value of the day). The meteorological data are the original daily data, some of them including low, high and mean values. We certainly cannot ignore the effects of seasonal changes and human activities; hence we add two time variables, namely the month (Figure 3) and the day of week (Figure 4). Figure 3 clearly shows that PM2.5 concentration reaches a low level from May to September, which is the summer or rainy season in Hong Kong, while from October to the following April the pollution is serious, especially in October, December and January. Note that rainfall may not be an important factor in the experiment, since the response variable is the next day's PM2.5, and the rainy season involves several correlated meteorological factors. Figure 4 presents the trends of people's activities in some sense. Air pollution is serious on Saturday and Sunday, while the lowest level appears on Tuesday. This is difficult to explain exactly; a plausible reason is that the monitoring station is located in a residential district in a rural area and human activities mostly happen at the weekend. But the Week factor is not entirely satisfactory, as its trend is too smooth; classification tools handle strongly varying variables better than smooth ones.

Finally, after deleting all rows with missing values (NAs), there are 1065 observations with 18 predictor variables (Table 1) and one response variable, which is the next day's PM2.5 concentration level. In summary, the percentages of the Low and High levels are around 45.8% and 54.2%, respectively. Note that the goal of the predictive model is to obtain a prediction accuracy above random guessing, namely 54.2% in this project; otherwise the model is a failure.

EXPERIMENTS

The experiments include four parts: the first three parts show how to train and test each model by multiple repetitions of 10-fold cross validation (10-fold CV), and the best parameters chosen there are used in the last part, which tests the performance and stability of each model by 100 replications of 10-fold CV.

ANN

The nnet package is the R package we use for ANN in this paper. One of the most important parameters of an ANN is the number of nodes in the hidden layer. There is no established theory for calculating the number of nodes, so the best way is to search within a proper range. Generally speaking, the number of nodes should not exceed the number of predictor variables. We use 10 replications of 10-fold CV to calculate the training and testing accuracy as the number of hidden nodes varies from 1 to 20, with the number of iterations set to 500.
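A minimal sketch of this tuning loop is given below. It assumes a data frame df whose factor response is named level (both names are hypothetical), and it shows a single 10-fold pass for brevity where the paper repeats the whole procedure 10 times; it is an outline of the procedure, not the authors' actual script:

    # Sketch: estimate nnet test accuracy for hidden-layer sizes 1..20 via 10-fold CV.
    # Assumes a data frame `df` with factor response `level` (hypothetical names).
    library(nnet)

    set.seed(1)
    folds <- sample(rep(1:10, length.out = nrow(df)))  # random fold assignment
    sizes <- 1:20
    acc <- sapply(sizes, function(s) {
      mean(sapply(1:10, function(f) {
        train <- df[folds != f, ]
        test  <- df[folds == f, ]
        fit <- nnet(level ~ ., data = train, size = s,
                    maxit = 500, trace = FALSE)
        pred <- predict(fit, test, type = "class")
        mean(pred == test$level)                       # accuracy on held-out fold
      }))
    })
    sizes[which.max(acc)]  # hidden-layer size with the best CV accuracy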

The result is shown in Table 2. We learn that the best setting is size = 3, whose testing accuracy is 0.845. Figure 5 shows the trends of accuracy as the number of nodes in the hidden layer varies from 1 to 20, the testing method again being 10 replications of 10-fold CV. We find that neither the training accuracy nor the testing accuracy increases stably: the best accuracy on the testing set appears at size = 3, while for the training set it appears at size = 19. An ANN may over-fit when the number of hidden nodes is large, meaning that the training accuracy keeps increasing while the testing accuracy decreases rapidly. However, over-fitting does not appear in our experiment, or at least the model performs properly within 20 hidden nodes. In summary, we use size = 3 in the last section.

k-NN

We use the kknn package as our k-NN analysis tool in this paper. An important issue in the k-NN algorithm is how to select the proper number of nearest neighbours k, and there is no standard method to calculate it exactly. If k is too small, the result can be sensitive to noise points. On the other hand, if k is too large, the neighbourhood may include too many points from other classes. Unquestionably, an odd number is desirable for k, that is, a number from the set {1, 3, 5, 7, ...}. Empirically, selecting k no larger than the square root of the number of samples is a proper choice. As with ANN, we use 10 replications of 10-fold CV to select the best k from 1 to round(√1065) = 33. The result is shown in Table 3, and the trends of accuracy as the number of nearest neighbours changes are shown in Figure 6. We can see that the accuracies are very close for k = 15, 17, and 19 (i.e., they are equal to 3 decimal places). The testing accuracy line is smoother than that of the ANN in Figure 5. In contrast, the training accuracy decreases rapidly as k becomes larger. Again, there is no over-fitting in k-NN, as mentioned above. We choose k = 17 as the parameter for the last section (it actually has the highest accuracy when more decimal places are retained).

Boosting

We use the RWeka package for building the AdaBoostM1 model in this paper. As with ANN and k-NN, we first try to find the best parameter value. In AdaBoostM1 we have to select a proper number of iterations, which influences the weighting power of the model. We vary it from 1 to 100 and still use 10 replications of 10-fold CV. Some of the results are shown in Table 4. We find that the highest accuracy is at the 80th iteration, whose testing accuracy is 0.862. Figure 7 shows the trends of accuracy as the number of iterations changes in AdaBoostM1. Both the training and testing accuracy are low in the first iterations, that is, the basic classifier (i.e., a stump) is weak. The model becomes more powerful as the number of iterations increases. According to Figure 7, the training accuracy increases more stably than the testing accuracy, which fluctuates considerably. Alternatively, one can choose other basic classifiers in the AdaBoostM1 algorithm, for instance C4.5 (i.e., J48 in the RWeka package). Generally speaking, if the basic classifier is powerful enough, then the boosting model will be better, too.
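For reference, a minimal sketch of fitting the two remaining models with the packages named above follows. The data frames train and test with factor response level are assumed (hypothetical names), and the parameter values mirror those selected in the text; this is an outline, not the authors' script:

    # Sketch: fit k-NN (kknn) and AdaBoostM1 (RWeka) with the chosen parameters.
    # Assumes data frames `train` and `test` with factor response `level`.
    library(kknn)
    library(RWeka)

    # Distance-weighted k-NN with k = 17 (distance = 2 is the Euclidean case
    # of the Minkowski distance)
    knn_fit <- kknn(level ~ ., train = train, test = test,
                    k = 17, distance = 2)
    mean(fitted(knn_fit) == test$level)     # test accuracy

    # AdaBoostM1 with 80 boosting iterations on decision stumps (Weka's default
    # base classifier)
    ada <- AdaBoostM1(level ~ ., data = train,
                      control = Weka_control(I = 80))
    pred <- predict(ada, newdata = test)
    mean(pred == test$level)                # test accuracy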

Comparison

This section tests the performance and the stability of the three models; a good algorithm should not only obtain high accuracy but also perform stably. We compare all the algorithms using 100 replications of 10-fold CV, with the results shown in Table 5. AdaBoostM1 obtains the best result, with an accuracy around 0.846~0.868. More precisely, its median accuracy is even better than the highest accuracy of either ANN or k-NN. Figure 8 shows the violin plot of these results. A violin plot is a combination of a box plot and a kernel density plot. We can see that ANN has a long tail, which means its accuracy fluctuates considerably. AdaBoostM1 and k-NN are much more stable than ANN, and AdaBoostM1 in particular performs much better than the others.

CONCLUSION

In this paper, we build PM2.5 concentration level predictive models using three popular machine learning algorithms: ANN, k-NN and AdaBoostM1. The data set, which is from a rural area of Hong Kong, includes 1065 rows and 19 columns after deleting all missing values. Based on all the experiments, we conclude the following.

Each of the three algorithms needs proper parameters, set via multiple repetitions of 10-fold CV (we used 10 replications in this paper); this reduces the random error in the model as far as possible. For ANN, the number of hidden nodes selected should not exceed the number of variables. For k-NN, one can search for a suitable k among the odd numbers up to the square root of the number of samples. For AdaBoostM1, the number of iterations is an important parameter related to the weighting power, and it should be searched over a wide range (e.g., 1 to 100). In order to avoid over-fitting, the selection criterion should be the testing accuracy, not the training accuracy.

According to 100 replications of 10-fold CV, the best result is from AdaBoostM1, which not only obtains the highest accuracy but also performs more stably than the others. In practice, researchers can substitute other basic classifiers (e.g., C4.5) or add new parameters (e.g., rule-based ones) in the AdaBoostM1 algorithm. ANN performs clearly unstably and may not be a suitable tool for PM2.5 prediction models. k-NN is also a stable model, though its accuracy is lower than AdaBoostM1's. A more suitable distance function, for instance one designed for high-dimensional data, might enhance k-NN's performance; additionally, a more powerful weighting scheme (such as that used in boosting) is also very important.
