Data Mining Algorithms In R/Classification/Naïve Bayes


Introduction[edit]

This chapter introduces the Naïve Bayes algorithm for classification. Naïve Bayes (NB) is based on applying Bayes' theorem (from Bayesian statistics) with strong (naive) independence assumptions. It is particularly suited when the dimensionality of the inputs is high. Despite its simplicity, Naïve Bayes can often outperform more sophisticated classification methods.


Naïve Bayes[edit]

Naïve Bayes classifiers can handle an arbitrary number of independent variables, whether continuous or categorical. Given a set of variables, X = {x_1, x_2, ..., x_d}, we want to construct the posterior probability of the event C_j from among the set of possible outcomes C = {c_1, c_2, ..., c_n}. In more familiar language, X is the set of predictors and C is the set of categorical levels present in the dependent variable. Using Bayes' rule:

p(C \vert x_1,\dots,x_d) = \frac{p(C) \ p(x_1,\dots,x_d\vert C)}{p(x_1,\dots,x_d)}. \,

where p(C_j \vert x_1,\dots,x_d) is the posterior probability of class membership, i.e., the probability that X belongs to C_j.

In practice we are only interested in the numerator of that fraction, since the denominator does not depend on C and the values of the features x_i are given, so the denominator is effectively constant. The numerator is equivalent to the joint probability p(C, x_1, \dots, x_d), which can be factored by repeated application of the definition of conditional probability (the chain rule):

p(C, x_1, \dots, x_d) = p(C) \ p(x_1\vert C) \ p(x_2\vert C, x_1) \ p(x_3\vert C, x_1, x_2) \ \dots \ p(x_d\vert C, x_1, x_2, x_3,\dots,x_{d-1}).

Now the "naive" conditional independence assumption comes into play: assume that each feature x_i is conditionally independent of every other feature x_j, for j\neq i, given the class C. This means that

p(x_i \vert C, x_j) = p(x_i \vert C)\,

for i\ne j, and so the joint model can be expressed as

p(C, x_1, \dots, x_d)
= p(C) \ p(x_1\vert C) \ p(x_2\vert C) \ p(x_3\vert C) \ \cdots\,
= p(C) \prod_{i=1}^d p(x_i \vert C).\,

This means that under the above independence assumptions, the conditional distribution over the class variable C can be expressed like this:

p(C \vert x_1,\dots,x_d) = \frac{1}{Z}  p(C) \prod_{i=1}^d p(x_i \vert C)

where Z (the evidence) is a scaling factor dependent only on x_1,\dots,x_d, i.e., a constant if the values of the feature variables are known.
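
Since the class posteriors must sum to one, Z is simply the numerator summed over all possible classes:

Z = p(x_1,\dots,x_d) = \sum_{c} p(C=c) \prod_{i=1}^d p(x_i \vert C=c).\,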

Finally, we can label a new case with observed feature values F_1,\dots,F_d using the class level c that achieves the highest posterior probability:

\mathrm{classify}(F_1,\dots,F_d) = \underset{c}{\operatorname{argmax}} \ p(C=c) \displaystyle\prod_{i=1}^d p(x_i=F_i\vert C=c).
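
To make the decision rule concrete, below is a minimal R sketch of the argmax above for continuous features, assuming Gaussian class-conditional densities estimated from the training data (which is also how e1071's naiveBayes treats numeric predictors). The function names naive_bayes_train and naive_bayes_classify are purely illustrative.

# A minimal, self-contained Naive Bayes for continuous features.
# Class priors p(C=c) and per-feature Gaussian parameters are estimated
# from the training data; classification then applies
#   argmax_c  log p(C=c) + sum_i log p(x_i | C=c)
# (working on the log scale avoids numerical underflow).

naive_bayes_train <- function(X, y) {
  y <- factor(y)
  models <- lapply(levels(y), function(cl) {
    Xc <- X[y == cl, , drop = FALSE]
    list(prior = mean(y == cl),          # p(C = cl)
         mean  = sapply(Xc, mean),       # per-feature means within class cl
         sd    = sapply(Xc, sd))         # per-feature standard deviations
  })
  names(models) <- levels(y)
  models
}

naive_bayes_classify <- function(models, x) {
  x <- unlist(x)
  scores <- sapply(models, function(m)
    log(m$prior) + sum(dnorm(x, m$mean, m$sd, log = TRUE)))
  names(which.max(scores))               # class with the highest posterior score
}

# Usage: train on R's built-in iris data (used later in this chapter)
# and classify its first instance.
models <- naive_bayes_train(iris[, 1:4], iris$Species)
naive_bayes_classify(models, iris[1, 1:4])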


Available Implementations[edit]

There are at least two R implementations of Naïve Bayes classification available on CRAN: the naiveBayes function in the e1071 package and the NaiveBayes function in the klaR package. This chapter uses e1071.

Installing and Running the Naïve Bayes Classifier[edit]

e1071 is a CRAN package, so it can be installed from within R:

> install.packages('e1071', dependencies = TRUE)

Once installed, e1071 can be loaded as a library:

> library(e1071) 

R ships with several well-known datasets. We now load one of them, the famous Iris dataset [1], and learn a Naïve Bayes classifier for it using default parameters. First, let us take a look at the Iris dataset.

Dataset[edit]

The Iris dataset contains 150 instances, corresponding to three equally frequent species of iris plant (Iris setosa, Iris versicolor, and Iris virginica). An Iris versicolor is shown below, courtesy of Wikimedia Commons.

Iris versicolor

Each instance contains four attributes: sepal length in cm, sepal width in cm, petal length in cm, and petal width in cm. The next picture shows each attribute plotted against the others, with the different classes in color.

> pairs(iris[1:4], main = "Iris Data (red=setosa,green=versicolor,blue=virginica)",
+       pch = 21, bg = c("red", "green3", "blue")[unclass(iris$Species)])
Plotting the Iris attributes

Execution and Results[edit]

First of all, we need to load the dataset we are going to use:

> data(iris)
> summary(iris)
  Sepal.Length    Sepal.Width     Petal.Length    Petal.Width   
 Min.   :4.300   Min.   :2.000   Min.   :1.000   Min.   :0.100  
 1st Qu.:5.100   1st Qu.:2.800   1st Qu.:1.600   1st Qu.:0.300  
 Median :5.800   Median :3.000   Median :4.350   Median :1.300  
 Mean   :5.843   Mean   :3.057   Mean   :3.758   Mean   :1.199  
 3rd Qu.:6.400   3rd Qu.:3.300   3rd Qu.:5.100   3rd Qu.:1.800  
 Max.   :7.900   Max.   :4.400   Max.   :6.900   Max.   :2.500  
       Species  
 setosa    :50  
 versicolor:50  
 virginica :50  

After that, we are ready to fit a Naïve Bayes model to the dataset, using the first four columns to predict the fifth. (The target column must be a factor; if it is not, convert it with dataset$col <- factor(dataset$col).)

            
> classifier<-naiveBayes(iris[,1:4], iris[,5]) 
> table(predict(classifier, iris[,-5]), iris[,5])
            
             setosa versicolor virginica
  setosa         50          0         0
  versicolor      0         47         3
  virginica       0          3        47
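
To see what the classifier actually learned, the fitted object can be inspected. In e1071, the naiveBayes object stores the class distribution used for the priors in $apriori and the per-class mean and standard deviation of each numeric predictor in $tables, and predict accepts type = "raw" to return posterior class probabilities instead of hard labels (consult the package documentation if your installed version differs):

> classifier$apriori                                     # class distribution used for the priors p(C)
> classifier$tables$Petal.Length                         # per-class mean and sd for one predictor
> head(predict(classifier, iris[,-5], type = "raw"))     # posterior probabilities per class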

Analysis[edit]

This simple case study shows that a Naïve Bayes classifier makes few mistakes on a dataset that, although simple, is not linearly separable, as shown in the scatterplots. A look at the confusion matrix shows that all misclassifications are between Iris versicolor and Iris virginica instances.
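
Note that the confusion matrix above is computed on the same data used for training. A quick sanity check, sketched below with an illustrative random 70/30 split, is to hold out part of the data and build the confusion matrix on the unseen instances:

> set.seed(1)                                      # illustrative seed for a reproducible split
> idx <- sample(nrow(iris), round(0.7 * nrow(iris)))
> model <- naiveBayes(iris[idx, 1:4], iris[idx, 5])
> table(predict(model, iris[-idx, -5]), iris[-idx, 5])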

References[edit]

  1. Fisher, R.A. (1936). The use of multiple measurements in taxonomic problems. Annals of Eugenics, 7, Part II, 179-188.