# R Programming/Probability Functions/Bernoulli


### The Bernoulli Distribution

A Bernoulli trial is a chance event that can have one of two outcomes, usually called "success" or "failure."

This distribution has one parameter: the probability of success, p. The probability of failure, often designated q, is the complement of p: q = 1 - p.

Some Bernoulli trials:

- Tossing a coin (Heads = success = 1, Tails = failure = 0)
- A single oocyst is detected with probability R (the recovery rate) and not detected with probability 1 - R.
- A marksman using a certain weapon aims at a target of a certain size that is a certain distance away. Each time he fires, he can either hit or miss the target.
- Walking Betty, we encounter a jogger. Betty either lunges, barking viciously, or totally ignores the jogger. (In this case, a lunge would be considered a "failure.")
- Your PARS evaluation results in either "fully successful" or something else. Do you consider "fully successful" a "success" or a "failure"?
- Mike has written a number on a piece of paper. Your computer will generate one realization of a standard uniform random variable; the trial "succeeds" if the random number is less than Mike's. It is best to establish your prior before obtaining any data. Consider:
  - How likely is it that Mike would pick a number less than zero, making the success probability 0?
  - How likely is it that Mike would pick a number greater than one, making the success probability 1?
  - Between 0 and 1, are there special values? Is 1/2 special? 1? 0? Others?
  - A good prior might be Pr{0} = 1/3, Pr{1} = 1/3, with the remaining 1/3 of the weight spread evenly across [0, 1].

#### Probability Mass Function

- f(x) = dbinom(x, 1, P)
  - = P if x = 1
  - = 1 - P if x = 0
  - = 0 otherwise
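The pmf above can be checked directly in R; the value p = 0.3 below is only illustrative.

```r
# Bernoulli pmf via dbinom with size = 1 (p = 0.3 is an illustrative value)
p <- 0.3
dbinom(1, 1, p)  # = p     = 0.3
dbinom(0, 1, p)  # = 1 - p = 0.7
dbinom(2, 1, p)  # = 0: outside the support {0, 1}
```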

#### Distribution Function

- F(x) = pbinom(x, 1, P)
  - = 0 for x < 0
  - = 1 - P for 0 <= x < 1
  - = 1 for x >= 1
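The step shape of the distribution function is easy to verify with pbinom; again p = 0.3 is an illustrative choice.

```r
# Bernoulli cdf via pbinom with size = 1 (p = 0.3 is an illustrative value)
p <- 0.3
pbinom(-1, 1, p)   # 0: no mass below 0
pbinom(0, 1, p)    # 1 - p = 0.7
pbinom(0.5, 1, p)  # still 0.7: F is flat on [0, 1)
pbinom(1, 1, p)    # 1: all mass is at or below 1
```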

#### Common Statistics

- Mean = E(X) = P
- Variance = Var(X) = P*(1-P) = P*Q
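These moments can be checked by simulation; the sample size, seed, and p below are illustrative.

```r
# Empirical check of the Bernoulli mean and variance (illustrative values)
set.seed(1)              # for reproducibility
p <- 0.3
x <- rbinom(1e5, 1, p)   # 100,000 Bernoulli(p) draws
mean(x)                  # close to P = 0.3
var(x)                   # close to P*(1-P) = 0.21
```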

#### Generating Random Variables

1s and 0s:

- rbinom(M, 1, P) generates M Bernoulli(P) draws.
- rbinom(10, 1, 0.3) --> 0 0 1 1 0 0 0 0 1 0 (one possible realization)
- sample(c("success", "failure"), 10, replace = TRUE, prob = c(0.9, 0.1)) -->

  [1] "success" "success" "success" "failure" "success" "success" "success"
  [8] "success" "success" "success"
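An equivalent approach, not shown in the original text, uses the fact that a standard uniform draw falls below p with probability exactly p (the same idea as the "Mike's number" trial above); the values of p and n are illustrative.

```r
# Inverse-transform sampling for Bernoulli variates (illustrative values)
p <- 0.3
u <- runif(10)      # 10 standard uniform draws
as.integer(u < p)   # each is 1 with probability p, else 0
```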

#### Parameter Estimation

- Maximum Likelihood
  - The MLE is P-hat = X for a single trial; with repeated trials it is the sample proportion of successes.
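For repeated trials the maximum likelihood estimate is just the sample mean; the data vector below is assumed for illustration.

```r
# MLE for P is the sample proportion of successes
# (the data vector is assumed for illustration)
x <- c(1, 0, 0, 1, 1, 0, 1, 1, 0, 1)
p_hat <- mean(x)   # 6 successes out of 10
p_hat              # 0.6
```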

- Bayesian
  - If the prior is Beta(alpha, beta), then after observing X the posterior is Beta(alpha + X, beta + 1 - X).
  - Coin tossing is the standard example.
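Applying the Beta update once per observation gives the usual conjugate rule; the uniform Beta(1, 1) prior and the data vector below are assumed for illustration.

```r
# Beta-Bernoulli conjugate update, applied to a batch of observations
# (the Beta(1, 1) uniform prior and the data are assumed for illustration)
alpha <- 1; beta <- 1
x <- c(1, 0, 1, 1)                       # observed Bernoulli outcomes
alpha_post <- alpha + sum(x)             # 1 + 3 = 4
beta_post  <- beta + length(x) - sum(x)  # 1 + 1 = 2
alpha_post / (alpha_post + beta_post)    # posterior mean = 2/3
```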

- Classical
  - What hit rate p is so small that data as extreme as what was observed would occur with probability at most 0.05? If the datum is "success," a hit rate of 0.05 or less produces "success" with probability at most 0.05. If the datum is "failure," a hit rate of 0.95 or greater produces "failure" with probability at most 0.05.

#### Related Distributions

The sum of n independent Bernoulli(P) trials follows a Binomial(n, P) distribution; equivalently, the Bernoulli distribution is the Binomial with size 1.
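This relationship can be seen directly in R: summing Bernoulli draws and drawing once from the Binomial produce realizations of the same distribution. The seed, n, and p are illustrative.

```r
# Sum of n Bernoulli(p) draws vs. a single Binomial(n, p) draw (illustrative values)
set.seed(42)
n <- 20; p <- 0.3
sum(rbinom(n, 1, p))  # one Binomial(n, p) realization, built from Bernoulli draws
rbinom(1, n, p)       # another realization, drawn directly
```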