Machine Learning - Lecture 4: The Naive Bayes Classifier
Let's say we have some data that lists symptoms and ailments for everybody in a certain group.
  SYMPTOM       AILMENT

  sneezing      flu
  sneezing      hayfever
  headache      concussion
  sneezing      flu
  coughing      flu
  backache      none
  vomiting      concussion
  crying        hayfever
  temperature   flu
  drowsiness    concussion

Prediction

There are 10 cases in all so we can work out the probability of seeing a particular ailment or symptom just by counting and dividing by 10.
  P(hayfever) = 2/10 = 0.2

  P(vomiting) = 1/10 = 0.1
As a simple, statistical model of the data, these (so-called prior) probabilities can be used for prediction.
Let's say we're undecided whether someone has flu or hayfever.
We can use the fact that P(flu) > P(hayfever) to predict it's more likely to be flu.
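As a quick illustration, here's how those prior probabilities could be computed by counting in Python. This is a minimal sketch only; the variable names are mine, not part of the lecture.

  from collections import Counter

  # Each record pairs a symptom with an ailment, as in the table above.
  data = [
      ("sneezing", "flu"), ("sneezing", "hayfever"), ("headache", "concussion"),
      ("sneezing", "flu"), ("coughing", "flu"), ("backache", "none"),
      ("vomiting", "concussion"), ("crying", "hayfever"),
      ("temperature", "flu"), ("drowsiness", "concussion"),
  ]

  # Prior probability of each ailment: count occurrences, divide by the number of cases.
  ailment_counts = Counter(ailment for _, ailment in data)
  priors = {a: n / len(data) for a, n in ailment_counts.items()}

  print(priors["hayfever"])   # 0.2
  print(priors["flu"])        # 0.4
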
Conditional probabilities
This sort of modeling becomes more useful when conditional probabilities are used.
These are values we work out by looking at the probability of seeing one value given we see another, e.g., the probability of vomiting given concussion.
Conditional probabilities are notated using the bar '|' to separate the conditioned value from the conditioning value.
The probability of vomiting given concussion is written
  P(vomiting|concussion)
We can work this value out by seeing what proportion of the cases involving concussion also show vomiting.
  P(vomiting|concussion) = 1/3 = 0.3333

Prediction from conditional probabilities

Conditional probabilities enable conditional predictions.
For example, we could tell someone who's known to have concussion that there's a 1/3 chance of them vomiting.
This can also be a way of generating diagnoses.
If someone reports they've been sneezing a lot, we can say there's a 2/3 chance of them having flu, since
  P(flu|sneezing) = 2/3
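Both kinds of conditional probability can be read off the table by counting. A minimal sketch, reusing the data list from the snippet above (the function names are illustrative):

  def p_symptom_given_ailment(symptom, ailment, data):
      # Proportion of the cases with this ailment that show the symptom.
      cases = [s for s, a in data if a == ailment]
      return cases.count(symptom) / len(cases)

  def p_ailment_given_symptom(ailment, symptom, data):
      # Proportion of the cases with this symptom that have the ailment.
      cases = [a for s, a in data if s == symptom]
      return cases.count(ailment) / len(cases)

  print(p_symptom_given_ailment("vomiting", "concussion", data))  # 1/3
  print(p_ailment_given_symptom("flu", "sneezing", data))         # 2/3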

The problem of multiple attributes

What happens if the data include more than one symptom?
We might have something like this.
  SYMPTOM       OCCUPATION    AILMENT

  sneezing      nurse         flu
  sneezing      farmer        hayfever
  headache      builder       concussion
We'd like to be able to work out probabilities conditional on multiple symptoms, e.g.,
  P(flu|sneezing,builder)
But if a combination doesn't appear in the data, how do we calculate its conditional probability?

Using inversion

There's no way to sample a probability conditional on a combination that doesn't appear.
But we can work it out by looking at probabilities that do appear.
Observable probabilities that contribute to
  P(flu|sneezing,builder)
are
  P(flu)
  P(sneezing|flu)
  P(builder|flu)
All we need is some way of putting these together.

The naive assumption

Probability theory says that if several factors don't depend on each other in any way, the probability of seeing them together is just the product of their probabilities.
So, assuming that (among people who have flu) sneezing has no impact on whether you're a builder, we can say that
  P(sneezing,builder|flu) = P(sneezing|flu)P(builder|flu)
The probability of a sneezing builder having flu must depend on the chances of this combination of attributes indicating flu. So
  P(flu|sneezing,builder)
must be proportional to
  P(flu)P(sneezing,builder|flu)
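In other words, the (unnormalized) score for flu is the prior multiplied by the per-attribute conditionals. Here is a small sketch of that product; the probability values are placeholders, not numbers taken from the three-row table.

  def unnormalized_score(prior, conditionals):
      # prior P(class) times the product of P(attribute value | class)
      score = prior
      for p in conditionals:
          score *= p
      return score

  # e.g. P(flu) * P(sneezing|flu) * P(builder|flu), with made-up values
  score_flu = unnormalized_score(0.4, [0.5, 0.25])   # = 0.05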

Normalization needed

Unfortunately, this value isn't yet a probability. It measures how strongly the evidence points to flu, but not relative to how likely the same combination of attributes would be under the other ailments.
We need to factor in the probability of seeing this combination of attributes at all, whatever the ailment.
We do this by expressing the value in proportion to the probability of seeing the combination of attributes, i.e., dividing by it.
This gives us the value we want.
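Written out in the notation used above, this is just Bayes' rule combined with the naive assumption:
  P(flu|sneezing,builder) = P(flu)P(sneezing|flu)P(builder|flu) / P(sneezing,builder)
Since the denominator is the same for every ailment, it can be obtained by summing the numerator over all the ailments being compared.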

Naive Bayes Classifier

A Naive Bayes Classifier is a program which predicts a class value given a set of attribute values.
For each known class value,
  1. Calculate probabilities for each attribute, conditional on the class value.
  2. Use the product rule to obtain a joint conditional probability for the attributes.
  3. Use Bayes rule to derive conditional probabilities for the class variable.
Once this has been done for all class values, output the class with the highest probability.
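Putting the three steps together, here is a minimal sketch of such a classifier in Python, as described above. It assumes categorical records stored as dictionaries with a "class" key; all names are illustrative, not taken from the lecture.

  from collections import Counter, defaultdict

  def train(records, class_key="class"):
      # Step 1 setup: count class frequencies and, per class, attribute-value frequencies.
      class_counts = Counter(r[class_key] for r in records)
      attr_counts = defaultdict(Counter)   # (class, attribute) -> Counter of values
      for r in records:
          c = r[class_key]
          for attr, value in r.items():
              if attr != class_key:
                  attr_counts[(c, attr)][value] += 1
      return class_counts, attr_counts, len(records)

  def classify(instance, class_counts, attr_counts, n):
      scores = {}
      for c, count in class_counts.items():
          score = count / n                                    # prior P(c)
          for attr, value in instance.items():
              score *= attr_counts[(c, attr)][value] / count   # steps 1-2: product of P(value|c)
          scores[c] = score
      total = sum(scores.values())
      if total > 0:                                            # step 3: normalize (Bayes' rule)
          scores = {c: s / total for c, s in scores.items()}
      return max(scores, key=scores.get), scores

  records = [
      {"symptom": "sneezing", "occupation": "nurse",   "class": "flu"},
      {"symptom": "sneezing", "occupation": "farmer",  "class": "hayfever"},
      {"symptom": "headache", "occupation": "builder", "class": "concussion"},
  ]
  print(classify({"symptom": "sneezing", "occupation": "farmer"}, *train(records)))
  # -> ('hayfever', ...)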

The problem of missing combinations

A niggling problem with the NBC arises when the dataset doesn't provide one or more of the probabilities we need.
We then get a probability of zero factored into the mix.
This may cause us to divide by zero, or simply make the final value itself zero.
The easiest solution is to ignore zero-valued probabilities altogether if we can.
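In code, "ignoring" a zero-valued probability can mean simply skipping that factor rather than letting it wipe out the whole product. A sketch of that idea (the function name is mine):

  def score_ignoring_zeros(prior, conditionals):
      # Skip unobserved combinations instead of multiplying by zero.
      score = prior
      for p in conditionals:
          if p > 0:
              score *= p
      return score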

Summary

  • Clustering and nearest-neighbour methods ideally suited to numeric data.
  • Probabilistic modeling may be more effective with categorical (symbolic) data.
  • Probabilities easily derived from datasets.
  • But for classification, we normally need to invert the conditional probabilities we can sample.
  • The Naive Bayes Classifier uses Bayes Rule to identify the class with the highest probability.
  • On average, the NBC seems to perform better than expected.
