Strategy for Information Markets/Information Cascades


Conditional Probability

Pr(A|B) denotes the probability of A given B, also read as the probability of A conditioned on B. It is defined as

 Pr(A|B) = \frac {Pr(A\ \text{and}\ B)}{Pr(B)}
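
As a quick numerical check of this definition, the following Python sketch uses a made-up example (two fair dice, with A = "the sum is 8" and B = "the first die is even"); the scenario and names are only illustrative.

 from itertools import product

 # Hypothetical example: roll two fair dice.
 # A = "the sum is 8", B = "the first die is even".
 outcomes = list(product(range(1, 7), repeat=2))   # 36 equally likely outcomes

 A = {o for o in outcomes if o[0] + o[1] == 8}
 B = {o for o in outcomes if o[0] % 2 == 0}

 pr_B = len(B) / len(outcomes)              # Pr(B)
 pr_A_and_B = len(A & B) / len(outcomes)    # Pr(A and B)

 # Definition: Pr(A|B) = Pr(A and B) / Pr(B)
 pr_A_given_B = pr_A_and_B / pr_B

 # Counting directly inside B gives the same value (1/6), confirming the definition.
 print(pr_A_given_B, len(A & B) / len(B))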


Bayes' Rule

Bayes' rule reverses the direction of a conditional probability: given Pr(A|B), it yields Pr(B|A).

 Pr(B|A) = \frac { Pr(A|B)Pr(B) } { Pr(A|B)Pr(B)+Pr(A|\text{not}\ B)Pr(\text{not}\ B) }
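
The numbers below are hypothetical and chosen only to illustrate the formula: a test with 90% sensitivity and a 5% false-positive rate for a condition with 1% prevalence.

 # Hypothetical numbers for illustration only.
 # B = "condition present", A = "test is positive".
 pr_B = 0.01               # Pr(B), prevalence
 pr_A_given_B = 0.90       # Pr(A|B), sensitivity
 pr_A_given_notB = 0.05    # Pr(A|not B), false-positive rate

 # Bayes' rule: Pr(B|A) = Pr(A|B)Pr(B) / (Pr(A|B)Pr(B) + Pr(A|not B)Pr(not B))
 numerator = pr_A_given_B * pr_B
 denominator = numerator + pr_A_given_notB * (1 - pr_B)
 print(numerator / denominator)   # about 0.15: most positive results are false positives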

Condorcet Jury Theorem

The Condorcet Jury Theorem states that if each voter independently chooses the correct alternative with probability p > 1/2, then the probability that a majority vote is correct grows with the number of voters and approaches 1. The pieces needed to make this precise are developed below.

Binomial Distribution

If the probability of success on a single trial is p, then

  Pr(k\text{ successes in}\ n\ \text{trials}) = 	\binom{n}{k} p^k (1-p)^{(n-k)}

where

  •  p^k is the probability that a particular set of k trials are all successes
  •  (1-p)^{(n-k)} is the probability that the remaining (n-k) trials are all failures
  •  \binom{n}{k} = \frac {n!}{k!(n-k)!} is the number of ways to choose which k of the n trials are the successes
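
As a rough sketch (the values n = 5, k = 3, p = 0.6 are made up for illustration), the formula can be evaluated directly in Python and checked against a brute-force enumeration of all outcome sequences.

 from math import comb
 from itertools import product

 def binomial_pmf(k, n, p):
     # Pr(k successes in n trials) = C(n, k) * p^k * (1 - p)^(n - k)
     return comb(n, k) * p**k * (1 - p)**(n - k)

 # Hypothetical values for illustration.
 n, k, p = 5, 3, 0.6

 # Brute force: add up the probability of every sequence with exactly k successes.
 brute = sum(
     p**sum(seq) * (1 - p)**(n - sum(seq))
     for seq in product([0, 1], repeat=n)
     if sum(seq) == k
 )

 print(binomial_pmf(k, n, p), brute)   # both about 0.3456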


Group Decision/Voting

For a majority vote to produce the correct group decision, the number of correct votes must be more than half of the n votes cast. The following formula, derived from the binomial distribution, gives the probability that the group decision is correct.

To rule out the possibility of a tie, assume that the number of votes n is odd, so that a strict majority always exists.

Therefore,

 Pr(C) = \sum_{k=\frac{n+1}{2}}^{n} \binom{n}{k} p^k (1-p)^{(n-k)}
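
A small Python sketch of this sum, using a made-up individual accuracy of p = 0.6; printing the result for growing odd group sizes also illustrates the Condorcet Jury Theorem.

 from math import comb

 def majority_correct_probability(n, p):
     # Pr(C) = sum over k = (n+1)/2 .. n of C(n, k) p^k (1-p)^(n-k)
     # n is assumed to be odd so that a strict majority always exists.
     return sum(
         comb(n, k) * p**k * (1 - p)**(n - k)
         for k in range((n + 1) // 2, n + 1)
     )

 # Hypothetical individual accuracy; larger odd groups do better when p > 1/2.
 for n in (1, 3, 11, 51, 101):
     print(n, round(majority_correct_probability(n, 0.6), 4))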

Influence-Dependent Model of Group Decision/Voting

In practice, people rarely vote completely independently; their decisions are often shaped by a common outside influence. The following model gives the probability of a correct group decision when each voter may follow such an influence.

Let

  •  p = the probability that a voter is correct when deciding independently
  •  C = the event that the group makes the correct decision (more than half of the votes are correct)
  •  P_I = the probability that the influence is correct
  •  \alpha = the probability that a voter follows the influence rather than deciding independently
  •  Pr(\text{Voter votes correctly}) = (1- \alpha)p + \alpha P_I
  •  P_T = Pr(\text{Voter Correct}|\text{Influence Correct}) = (1- \alpha)p + \alpha, the probability that a voter is correct when the influence is correct
  •  P_F = Pr(\text{Voter Correct}|\text{Influence Wrong}) = (1- \alpha)p, the probability that a voter is correct when the influence is wrong


Therefore,


 Pr(\text{Group Correct}) = Pr(\text{Influence Correct})Pr(\text{Group Correct}|\text{Influence Correct}) + Pr(\text{Influence Wrong})Pr(\text{Group Correct}|\text{Influence Wrong})


 Pr(C) = P_I\sum_{k=\frac{n+1}{2}}^{n} \binom{n}{k} P_T^k (1-P_T)^{(n-k)}+(1-P_I)\sum_{k=\frac{n+1}{2}}^{n} \binom{n}{k} P_F^k (1-P_F)^{(n-k)}
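
The same kind of sketch works for the influence-dependent formula; the parameter values below (n = 11, p = 0.6, P_I = 0.7, α = 0.5) are hypothetical and chosen only to show how the two binomial sums combine.

 from math import comb

 def majority_prob(n, q):
     # Probability that more than half of n (odd) voters are correct,
     # when each is correct independently with probability q.
     return sum(comb(n, k) * q**k * (1 - q)**(n - k)
                for k in range((n + 1) // 2, n + 1))

 def influence_model(n, p, p_I, alpha):
     p_T = (1 - alpha) * p + alpha    # P_T: voter correct given influence correct
     p_F = (1 - alpha) * p            # P_F: voter correct given influence wrong
     # Condition on whether the influence is correct, then apply the binomial sum.
     return p_I * majority_prob(n, p_T) + (1 - p_I) * majority_prob(n, p_F)

 # Hypothetical parameter values for illustration.
 print(influence_model(n=11, p=0.6, p_I=0.7, alpha=0.5))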

Central Limit Theorem

Let x_1,x_2,...,x_n be independent and identically distributed random variables, each with mean  \mu_x and variance \sigma_x^2.

Let y = \frac {1}{n} (x_1+x_2+...+x_n) .

As n grows, y becomes approximately a normally distributed random variable with mean  \mu_y = \mu_x and variance \sigma_y^2 = \frac{\sigma_x^2}{n}.
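
A quick simulation sketch (the uniform distribution and the sample sizes are arbitrary choices) showing that the mean and variance of y behave as stated.

 import random
 import statistics

 random.seed(0)

 # Hypothetical setup: each x_i is uniform on [0, 1], so mu_x = 0.5 and sigma_x^2 = 1/12.
 n = 30           # number of x_i averaged in each y
 trials = 20000   # number of simulated values of y

 ys = [statistics.fmean(random.random() for _ in range(n)) for _ in range(trials)]

 # The empirical mean of y should be close to mu_x = 0.5,
 # and its variance close to sigma_x^2 / n = (1/12) / 30.
 print(statistics.fmean(ys), statistics.pvariance(ys), (1 / 12) / n)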