Naive Bayesian Model

Introduction to Naive Bayes

Naive Bayes is a classification technique based on Bayes' Theorem with an assumption of independence among predictors. A Naive Bayes model is easy to build and particularly useful for very large datasets. Despite its simplicity, Naive Bayes can outperform even far more sophisticated classification methods. It is quite robust to irrelevant features, which it effectively ignores, it learns and predicts very quickly, and it does not require much storage. So why is it called naive? The "naive" refers to the assumption required for the classifier to work optimally: all features must be independent of each other given the class. In reality this is rarely the case; however, the model still achieves very good accuracy in practice even when the independence assumption does not hold.

4 Applications of Naive Bayes Algorithms

Below are the common applications of Naive Bayes Algorithms:

Real-time Prediction: As Naive Bayes is very fast, it can be used for making predictions in real time.

Multi-class Prediction: This algorithm can predict the posterior probability of multiple classes of the target variable.

Text Classification / Spam Filtering / Sentiment Analysis: Naive Bayes classifiers are widely used in text classification and, thanks to their strong results on multi-class problems under the independence assumption, often achieve a higher success rate than other algorithms. As a result, they are widely used in spam filtering (identifying spam e-mail) and sentiment analysis (for example, in social media analysis, to identify positive and negative customer sentiment).

Recommendation System: A Naive Bayes classifier, together with algorithms like collaborative filtering, can build a recommendation system that uses machine learning and data mining techniques to filter unseen information and predict whether a user would like a given resource.

Mathematical Overview (Probability model):

Naive Bayes methods are a set of supervised learning algorithms based on applying Bayes’ theorem with the “naive” assumption of independence between every pair of features. Given a class variable y and a dependent feature vector x1 through xn, Bayes’ theorem states the following relationship:
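
P(y \mid x_1, \dots, x_n) = \frac{P(y)\, P(x_1, \dots, x_n \mid y)}{P(x_1, \dots, x_n)}

Using the naive independence assumption that, for every i,

P(x_i \mid y, x_1, \dots, x_{i-1}, x_{i+1}, \dots, x_n) = P(x_i \mid y),

the relationship simplifies to

P(y \mid x_1, \dots, x_n) = \frac{P(y) \prod_{i=1}^{n} P(x_i \mid y)}{P(x_1, \dots, x_n)}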

We now have a relationship between the target and the features, using Bayes' Theorem together with the naive assumption that all features are independent.

Constructing the NB Classifier from the Probability model:

So far we have derived the independent feature model, that is, the Naive Bayes probability model. The Naive Bayes classifier combines this model with a decision rule that decides which hypothesis is most probable. Picking the most probable hypothesis is known as the maximum a posteriori (MAP) decision rule. The corresponding classifier, a Bayes classifier, is the function that assigns a class label \hat{y} as follows:
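
\hat{y} = \arg\max_{y} P(y \mid x_1, \dots, x_n)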

Since P(x1, …, xn) is constant given the input, we can use the following classification rule:
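
P(y \mid x_1, \dots, x_n) \propto P(y) \prod_{i=1}^{n} P(x_i \mid y)

\hat{y} = \arg\max_{y} P(y) \prod_{i=1}^{n} P(x_i \mid y)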

We can use Maximum A Posteriori (MAP) estimation to estimate P(y) and P(xi | y); the former is then the relative frequency of class y in the training set.
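
Concretely, the prior for class y is its relative frequency:

\hat{P}(y) = \frac{N_y}{N}

where N_y is the number of training samples belonging to class y and N is the total number of training samples.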

There are different Naive Bayes classifiers that differ mainly in the assumptions they make about the distribution of P(x_i | y). When dealing with continuous data, a typical assumption is that the continuous values associated with each class follow a Gaussian distribution, so we will use Gaussian Naive Bayes.
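
Under this assumption, the class-conditional likelihood of each feature is

P(x_i \mid y) = \frac{1}{\sqrt{2\pi\sigma_y^2}} \exp\left(-\frac{(x_i - \mu_y)^2}{2\sigma_y^2}\right)

where the parameters \mu_y and \sigma_y are estimated from the training data for each class.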

Problem Statement:

To build a simple generative classification model, Naive Bayes, to predict the quality of a car given a few other car attributes.

4. Identify the target variable

# Encode the string class labels as integer codes; keep the original label names
data['class'], class_names = pd.factorize(data['class'])

The target variable is the class column in the data frame. Its values are stored as strings, but the algorithm requires the variable to be encoded as integer codes. We can convert the string categorical values into integer codes using the factorize method of the pandas library.
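
For context, the sketch below shows a minimal end-to-end version of the workflow this step belongs to. The file name car.data, the column names, and the split parameters are illustrative assumptions (based on the UCI car evaluation dataset), not necessarily the exact setup of this tutorial.

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

# Illustrative column names for the car evaluation dataset
cols = ['buying', 'maint', 'doors', 'persons', 'lug_boot', 'safety', 'class']
data = pd.read_csv('car.data', names=cols)

# Encode the target and each string-valued feature as integer codes
data['class'], class_names = pd.factorize(data['class'])
for col in cols[:-1]:
    data[col], _ = pd.factorize(data[col])

X, y = data[cols[:-1]], data['class']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Fit Gaussian Naive Bayes and score on the held-out split
model = GaussianNB()
model.fit(X_train, y_train)
print(accuracy_score(y_test, model.predict(X_test)))

Factorizing every column is the simplest way to satisfy the integer-coding requirement described above, though it does impose an arbitrary ordering on the categories.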

Algorithm Advantages:

It is easy to apply and predicts the class of a test data set quickly. It also performs well in multi-class prediction.

When the assumption of independence holds, a Naive Bayes classifier performs better than other models such as logistic regression, and it needs less training data.

It performs well with categorical input variables compared to numerical variables. For numerical variables, a normal distribution is assumed (a bell curve, which is a strong assumption).

Algorithm Disadvantages:

If a categorical variable has a category in the test data set that was not observed in the training data set, the model will assign it a zero probability and will be unable to make a prediction. This is often known as the "Zero Frequency" problem. To solve it, we can use a smoothing technique; one of the simplest is Laplace estimation (also called Laplace smoothing).
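
As a minimal sketch, assuming scikit-learn is used for categorical features: its CategoricalNB and MultinomialNB estimators expose additive smoothing through the alpha parameter, where alpha=1.0 corresponds to Laplace smoothing.

from sklearn.naive_bayes import CategoricalNB

# alpha=1.0 adds one pseudo-count to each category's count (Laplace smoothing),
# so categories unseen during training get a small non-zero probability
model = CategoricalNB(alpha=1.0)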

On the other hand, Naive Bayes is also known to be a bad estimator, so the probability outputs from predict_proba should not be taken too seriously.

Another limitation of Naive Bayes is the assumption of independent predictors. In real life, it is almost impossible to get a set of predictors which are completely independent.

This model achieves an accuracy score of 71%, since this is a fairly simple dataset with distinctly separable classes. That's how to implement Naive Bayes with scikit-learn. Load your favorite dataset and give it a try! From here on, all you need is practice.

Abhay Kumar, lead Data Scientist for Computer Vision at a startup, is an experienced data scientist specializing in deep learning for computer vision. He has worked with a variety of programming languages, including Python, Java, Pig, Hive, R, Shell, and JavaScript, and with frameworks such as TensorFlow, MXNet, Hadoop, Spark, MapReduce, NumPy, Scikit-learn, and pandas.