
Understanding Naive Bayes algorithm in depth

Shashank Shanu


Naive Bayes Algorithm
Table of Contents
  • What is Bayes Theorem?
  • How the Naive Bayes algorithm works
  • Types of Naive Bayes algorithms
  • Pros and Cons of Naive Bayes
  • Some of the applications of Naive Bayes
  • Implementation of Naive Bayes in Python
Imagine you are a data scientist and your manager asks you to build a classification model. You collect all the required data and start working on it. The dataset is very large, with millions of records. Suddenly your manager tells you that the deadline has been moved up and you have to deliver the model in two days.
What would you do in this situation? You have a very large amount of data but relatively few features in your dataset.
In that case, I would reach for Naive Bayes, as it is considered one of the fastest algorithms for classification tasks.
So, in this article, I will give an in-depth explanation of the Naive Bayes algorithm: its types, how to implement it in Python, and some of its pros and cons.
So, let's start…
Naive Bayes is one of the most widely used machine learning models when we have large volumes of data; even if you are working with millions of records, it remains a recommended approach. It gives very good results in sentiment analysis, and it is a fast and uncomplicated classification algorithm.
This model is mainly used for classification tasks and is based on Bayes' theorem. The algorithm assumes independence among the predictors.
Let me list the assumptions made by the Naive Bayes algorithm:
  • It assumes that the presence of a particular feature in a class is independent of the presence of any other feature in the dataset.
  • Each feature (independent variable) is given the same weight or importance.
Let me give you an example to help you understand this better.
“A fruit can be considered an orange if it is orange in color, round in shape, and about 3-4 inches in diameter.”
Even if these features depend on each other or on other features, each of them independently contributes to the probability that the fruit is an orange. That is why the algorithm is called “naive”: it simply assumes the features are independent, without judging whether that is really true.
The Naive Bayes algorithm is very easy to build and very useful when we have large datasets. Despite its simplicity, Naive Bayes can outperform even highly sophisticated classification models.
Now that we have some idea of the Naive Bayes algorithm, to understand the classifier properly we first need to understand Bayes' theorem. So let's start there.

What is Bayes Theorem?

Bayes' theorem works on conditional probability. Conditional probability is the probability that something will happen, given that something else has already occurred. It lets us calculate the probability of an event using prior knowledge. The formula is:
P(H|E) = P(E|H) * P(H) / P(E)
Let me explain what each symbol in the formula represents:
  • P(E|H): the probability of the evidence given that the hypothesis is true (the likelihood).
  • P(H|E): the probability of the hypothesis given that the evidence is true (the posterior probability).
  • P(H): the probability of the hypothesis H being true; this is known as the prior probability.
  • P(E): the probability of the evidence.
Note: In Naïve Bayes, we assume that all the predictors are independent of each other.
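To make the formula concrete, here is a minimal Python sketch; the function name posterior and the numbers plugged in are purely illustrative.
# A minimal sketch of Bayes' theorem as a plain Python function.
def posterior(p_e_given_h, p_h, p_e):
    """Return P(H|E) = P(E|H) * P(H) / P(E)."""
    return p_e_given_h * p_h / p_e

# Example: P(Yes | Sunny) from the weather problem worked out in the next section.
print(posterior(p_e_given_h=0.33, p_h=0.64, p_e=0.36))  # ~0.59 with these rounded inputs (0.60 with exact fractions)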
Now that you understand Bayes' theorem, let's move forward and see, through an example, how the Naive Bayes algorithm works.

How the Naive Bayes algorithm works

Let's suppose we have a training dataset of weather conditions and a corresponding target variable 'Play' (indicating whether a game was played). Our task is to classify whether players will play or not based on the weather condition.
Steps involved:
Step 1: First, convert the dataset into a frequency table of the weather condition against the target 'Play'.
Step 2: Next, create a likelihood table by computing the probabilities of each condition and class, for example P(Overcast) = 4/14 = 0.29 and P(Yes) = 9/14 = 0.64.
Likelihood table (derived from the probabilities used in this example):
  • Sunny: 5 of 14 records, P(Sunny) = 0.36
  • Overcast: 4 of 14 records, P(Overcast) = 0.29
  • Rainy: 5 of 14 records, P(Rainy) = 0.36
  • Play = Yes: 9 of 14 records, P(Yes) = 0.64
  • Play = No: 5 of 14 records, P(No) = 0.36
Step 3: Finally, use the Naive Bayes equation to compute the posterior probability for each class. The class with the highest posterior probability is the predicted outcome.
Problem: Players will play if the weather is sunny. Is this statement correct?
We can solve this using the posterior probability described above.
P(Yes | Sunny) = P(Sunny | Yes) * P(Yes) / P(Sunny)
Here, P(Sunny | Yes) = 3/9 = 0.33, P(Sunny) = 5/14 = 0.36, and P(Yes) = 9/14 = 0.64.
Now, P(Yes | Sunny) = 0.33 * 0.64 / 0.36 = 0.60, which is higher than P(No | Sunny).
So there is a 60% chance that players will play when the weather is sunny.
Therefore, the predicted class is “Yes”.
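To tie the three steps together, here is a hedged pandas sketch that rebuilds the frequency table and reproduces P(Yes | Sunny) = 0.60. The 14 records are reconstructed from the probabilities quoted above (5 Sunny, 4 Overcast, 5 Rainy; 9 Yes, 5 No); the Yes/No split for Overcast and Rainy follows the standard weather-play example and does not affect this particular calculation.
import pandas as pd

# Reconstructed toy dataset consistent with the probabilities used above.
data = pd.DataFrame({
    "Weather": ["Sunny"] * 5 + ["Overcast"] * 4 + ["Rainy"] * 5,
    "Play":    ["Yes", "Yes", "Yes", "No", "No"]      # Sunny: 3 Yes, 2 No
             + ["Yes", "Yes", "Yes", "Yes"]           # Overcast: 4 Yes
             + ["Yes", "Yes", "No", "No", "No"],      # Rainy: 2 Yes, 3 No
})

# Step 1: frequency table with row/column totals
freq = pd.crosstab(data["Weather"], data["Play"], margins=True)
print(freq)

# Step 2: likelihood and priors
p_sunny_given_yes = freq.loc["Sunny", "Yes"] / freq.loc["All", "Yes"]  # 3/9
p_sunny = freq.loc["Sunny", "All"] / freq.loc["All", "All"]            # 5/14
p_yes = freq.loc["All", "Yes"] / freq.loc["All", "All"]                # 9/14

# Step 3: posterior via Bayes' theorem
p_yes_given_sunny = p_sunny_given_yes * p_yes / p_sunny
print(round(p_yes_given_sunny, 2))  # 0.6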
Naive Bayes uses this same method to predict the probability of each class based on the various attributes. The algorithm is mostly used in text classification and in problems with multiple classes.

Types of Naive Bayes Algorithms

Based on the data type of our variables, features or predictors, we may need to use a particular type of Naive Bayes algorithm. The three common variants are listed below, followed by a short usage sketch.
1)  Bernoulli Naive Bayes
It is used when the features are binary/Boolean, i.e. they take only two values such as 0 or 1, Yes or No, or True or False.
2)  Multinomial Naive Bayes
It is used when the features are discrete counts.
For example, it is used in text analysis, where we take the count of each word in the given documents and try to predict the class/label.
3)  Gaussian Naive Bayes
It is used when the predictors are continuous; Gaussian Naive Bayes assumes that they follow a normal (Gaussian) distribution.
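Here is a minimal sketch of the three scikit-learn variants side by side; the tiny arrays and labels are made up purely for illustration.
import numpy as np
from sklearn.naive_bayes import BernoulliNB, MultinomialNB, GaussianNB

y = np.array([0, 1, 0, 1])

# Bernoulli: binary features (e.g. word present / absent)
X_binary = np.array([[1, 0], [0, 1], [1, 1], [0, 0]])
BernoulliNB().fit(X_binary, y)

# Multinomial: discrete counts (e.g. word counts per document)
X_counts = np.array([[3, 0], [0, 2], [1, 4], [2, 1]])
MultinomialNB().fit(X_counts, y)

# Gaussian: continuous features, assumed normally distributed within each class
X_cont = np.array([[1.2, 0.7], [3.4, 2.1], [0.9, 1.5], [2.8, 3.3]])
GaussianNB().fit(X_cont, y)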

Pros and Cons of Naive Bayes

Pros:
  • Naive Bayes is a simple algorithm that is very fast to train and to predict with.
  • It can be used for both binary and multiclass classification.
  • It comes in three main variants: GaussianNB, MultinomialNB and BernoulliNB.
  • It is a popular algorithm for spam email classification.
  • It can be trained easily on small datasets and scales to large volumes of data as well.
Cons:
  • The main disadvantage of Naive Bayes is that it treats all the features as independent contributors to the probability, which rarely holds in real data.

Some of the applications of Naive Bayes

  • Real-time prediction: being a fast algorithm, it can be used to make predictions in real time.
  • Multiclass classification: it can also be used for multi-class classification problems.
  • Text classification: because it handles multi-class problems well, it is widely and successfully used on text, most notably in sentiment analysis and spam detection (a short sketch follows this list).
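As an illustration, a tiny spam/ham classifier built from scikit-learn's CountVectorizer and MultinomialNB might look like the sketch below; the corpus and labels are invented purely for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "win a free prize now",         # spam
    "limited offer click here",     # spam
    "meeting rescheduled to noon",  # ham
    "see you at lunch tomorrow",    # ham
]
labels = ["spam", "spam", "ham", "ham"]

# Count word occurrences, then fit a multinomial Naive Bayes model on the counts.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)
print(model.predict(["free prize offer"]))  # likely ['spam']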
By now we have a clear picture of what Naive Bayes is, how it works, and its types. Let's see how we can implement it in Python.

Implementation of Naive Bayes in Python

In this example, we will use the Iris dataset, which is available in scikit-learn.
#Importing Libraries 

import pandas as pd

import numpy as np

from sklearn import datasets

from sklearn import metrics

from sklearn.naive_bayes import GaussianNB

from sklearn.metrics import confusion_matrix, classification_report, accuracy_score

#Loading Datasets

dataset = datasets.load_iris()

#Creating Our Naive Bayes Model Using Scikit-learn

gnb = GaussianNB()

gnb.fit(dataset.data, dataset.target)

#Making Predictions (note: here we predict on the same data the model was trained on)

expected = dataset.target

predicted = gnb.predict(dataset.data)
#Getting Accuracy 

acc = accuracy_score(expected,predicted)

print("Accuracy of the model: ", acc)
Output:
Accuracy of the model: 0.96

# Getting confusion matrix

cm = confusion_matrix(expected,predicted)

print('Confusion Matrix is:',cm, sep='\n')
Output:
Confusion Matrix is:
[[50  0  0]
 [ 0 47  3]
 [ 0  3 47]]
#Getting classification report

cr = classification_report(expected, predicted)

print(cr)
Output:
              precision    recall  f1-score   support

           0       1.00      1.00      1.00        50
           1       0.94      0.94      0.94        50
           2       0.94      0.94      0.94        50

    accuracy                           0.96       150
   macro avg       0.96      0.96      0.96       150
weighted avg       0.96      0.96      0.96       150
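Note that the accuracy above is measured on the same data the model was trained on. A more realistic sketch, assuming scikit-learn's train_test_split, would hold out a test set:
# Hold out a test set instead of scoring the model on its own training data.
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

iris = datasets.load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.3, random_state=42
)

gnb = GaussianNB().fit(X_train, y_train)
print("Test accuracy:", accuracy_score(y_test, gnb.predict(X_test)))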
I hope you enjoyed reading this article and now know what the Naive Bayes algorithm is, how it works, and its different types, along with an idea of how to implement it in Python.
For more such blogs/courses on data science, machine learning, artificial intelligence and other emerging technologies, do visit us at InsideAIML.
Thanks for reading…
Happy Learning…
