# IB Mathematics SL/Statistics and Probability

## Probability

### Combined events

This is when either A or B (or both) can occur. The probability of the combined event A ∪ B is given by

P(A ∪ B) = P(A) + P(B) − P(A ∩ B).

A ∪ B means that A occurs, **or** B occurs, **or** both A and B occur; subtracting P(A ∩ B) avoids counting the overlap twice.
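As a quick check, the identity P(A ∪ B) = P(A) + P(B) − P(A ∩ B) can be verified by enumerating the outcomes of a single die roll (a hypothetical example; the events chosen here are just for illustration):

```python
from fractions import Fraction

sample_space = set(range(1, 7))   # one roll of a fair six-sided die
A = {2, 4, 6}                     # event "even number"
B = {4, 5, 6}                     # event "greater than 3"

def P(event):
    # Equally likely outcomes: probability = favourable / total
    return Fraction(len(event), len(sample_space))

lhs = P(A | B)                    # P(A or B), counted directly
rhs = P(A) + P(B) - P(A & B)      # inclusion-exclusion
print(lhs, rhs)                   # both print as 2/3
```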

### Mutually exclusive events

Two events are mutually exclusive if they cannot occur at the same time. For example, a single coin toss cannot come up both heads and tails. In a Venn diagram, mutually exclusive events do not intersect and have no overlapping area. In other words, P(A ∩ B) = 0. Therefore:

- P(A ∪ B) = P(A) + P(B),

since the probability of the intersection of the two events is 0.

### Exhaustive events

Events A and B are exhaustive if together they include *all possible outcomes*: A ∪ B = U, where *U* is the set of all outcomes. In other words:

- P(A ∪ B) = 1.

An event A and its complement A' are always exhaustive, so their probabilities add up to 1:

- P(A) + P(A') = 1.

This is often used in reverse: P(A') = 1 − P(A).

### Conditional probability

Conditional probability is the probability of an event A given that a second event B is known to have occurred, written P(A | B). Note that for mutually exclusive events, the conditional probability is always zero.

It is found by dividing the probability that both events occur by the probability of the given event:

P(A | B) = P(A ∩ B) / P(B).

(For independent events, P(A ∩ B) is simply P(A) × P(B).)
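A minimal Python sketch of this rule, P(A | B) = P(A ∩ B) / P(B), using a hypothetical die-roll example:

```python
from fractions import Fraction

sample_space = set(range(1, 7))   # one roll of a fair six-sided die
A = {6}                           # event "roll a six"
B = {2, 4, 6}                     # event "roll an even number" (the given event)

def P(event):
    return Fraction(len(event), len(sample_space))

# Probability of a six, given that the roll is known to be even
p_A_given_B = P(A & B) / P(B)
print(p_A_given_B)                # 1/3
```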

### Independence

Two events are said to be independent if

P(A ∩ B) = P(A) × P(B),

meaning that

P(A | B) = P(A) and P(B | A) = P(B): knowing that one event occurred does not change the probability of the other.
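The condition P(A ∩ B) = P(A) × P(B) can be checked by enumeration; the two events below (first and second flip of a fair coin landing heads) are a standard example of independence:

```python
from fractions import Fraction
from itertools import product

sample_space = set(product("HT", repeat=2))    # all outcomes of two coin flips
A = {s for s in sample_space if s[0] == "H"}   # first flip is heads
B = {s for s in sample_space if s[1] == "H"}   # second flip is heads

def P(event):
    return Fraction(len(event), len(sample_space))

# Independent: P(A and B) = 1/4 equals P(A) * P(B) = 1/2 * 1/2
print(P(A & B), P(A) * P(B))
```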

## Statistics

### Diagrammatic representation of data

### Mode, median and mean

**MODE** - the value in a set of data that occurs most frequently

Example: given the number set: 1,3,4,4,5,7,8,10,13,13,13

the number 13 occurs most frequently, thus the mode is *13*

**MEDIAN** - the number in the middle of a set of data. To find the median, arrange the numbers in order and take the middle one.

Example: given the number set: ~~1~~,~~3~~,~~4~~,~~4~~,~~5~~,**7**,~~8~~,~~10~~,~~13~~,~~13~~,~~13~~

the number 7 is in the middle of the data, thus the median is *7*

**MEAN** - the average of a set of data. To find the mean, add together all numbers in the data set, then divide the sum by the number of values in the set.

Example: given the number set 1,3,4,4,5,7,8,10,13,13,13 (there are 11 numbers in the data set). Add all numbers together: 1+3+4+4+5+7+8+10+13+13+13 = 79. Divide the sum (79) by the number of values in the data set (11): 79/11 ≈ 7.18

Thus the average of the data set, the mean, is approximately *7.18*

### Measures of dispersion

Measures of dispersion, also known as measures of spread, measure how spread out the data is. They can be divided into parameters that are resistant to outliers and parameters that are not.

Resistant to outliers: the interquartile range (IQR), the distance between the first and third quartiles, which covers the middle 50% of the data.

Not resistant to outliers: the range and the standard deviation. The range is simply the largest value minus the smallest, so a single extreme value dominates it: if the top executive of a company earns upwards of $400,000 and a factory worker only $10,000, the range of salaries is $390,000. The standard deviation measures the typical distance of the data from the mean; for normally distributed data, about 68% of the values fall within one standard deviation of the mean.

The phrase "resistant to outliers" means that the parameter is not strongly affected by the extreme values of the data set.
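A short Python sketch computing these measures for the data set used earlier (the standard `statistics` module is used; note that `statistics.quantiles` defaults to the "exclusive" method, so other textbook quartile conventions may give slightly different values):

```python
import statistics

data = [1, 3, 4, 4, 5, 7, 8, 10, 13, 13, 13]

data_range = max(data) - min(data)            # not resistant to outliers
q1, q2, q3 = statistics.quantiles(data, n=4)  # the three quartile cut points
iqr = q3 - q1                                 # resistant to outliers
stdev = statistics.stdev(data)                # sample standard deviation

print(data_range, iqr, round(stdev, 2))
```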

### Cumulative frequency

### Histograms

In statistics, a histogram is a graphical display of **tabulated frequencies, shown as bars.** It shows what proportion of cases fall into each of several categories: it is a form of data binning. The categories are usually specified as non-overlapping intervals of some variable. The categories (bars) must be adjacent. The intervals (or bands, or bins) are generally of the same size.

**Histograms are used to plot density of data**, and often for density estimation: estimating the probability density function of the underlying variable. The total area of a histogram used for probability density is always normalized to 1. If the lengths of the intervals on the x-axis are all 1, then a histogram is identical to a relative frequency plot.

An alternative to the histogram is kernel density estimation, which uses a kernel to smooth samples. This will construct a smooth probability density function, which will in general more accurately reflect the underlying variable.

The histogram is one of the seven basic tools of quality control, which also include the Pareto chart, check sheet, control chart, cause-and-effect diagram, flowchart, and scatter diagram.

### Random variables

### Expected values

The expected value of a discrete random variable X is given by the equation:

E(X) = Σ x · P(X = x),

where P(X = x) is the probability that X takes the value x. For example, the expected number of heads when flipping a coin twice is:

E(X) = 0 × 1/4 + 1 × 1/2 + 2 × 1/4 = 1.

The probability of getting exactly one head is 1/2 because two of the four equally likely outcomes contain exactly one head (HT and TH).
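The two-coin example can be worked out in a few lines of Python (the `pmf` dictionary lists the distribution of the number of heads in two fair flips):

```python
from fractions import Fraction

# X = number of heads in two fair coin flips: P(0)=1/4, P(1)=1/2, P(2)=1/4
pmf = {0: Fraction(1, 4), 1: Fraction(1, 2), 2: Fraction(1, 4)}

# E(X) = sum of x * P(X = x)
expected = sum(x * p for x, p in pmf.items())
print(expected)  # 1
```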

### The binomial distribution

A distribution is binomial if and only if it fits the following four parameters:

1. The outcomes of the trials are independent.
2. There are only two outcomes: success or failure.
3. The probability of success is constant.
4. There is a fixed number of trials.

For example, a bag contains 10 marbles: 5 red, 3 blue, and 2 green. If 5 marbles are drawn, with replacement, what is the probability of drawing exactly 2 red marbles? This is binomial because the draws are independent, each draw is either red or not red, the probability of success is 0.5, and the number of trials is 5. To solve binomial distribution problems use the following equation:

P(X = k) = C(n, k) p^k (1 − p)^(n − k),

where n is the number of trials, k is the number of successes, and p is the probability of success. For the previous problem the equation would be:

C(5, 2) (0.5)^2 (1 − 0.5)^(5 − 2) = 10 × 0.25 × 0.125 = 0.3125,

which means the probability of drawing exactly 2 reds is 31.25%.
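A small Python helper for the binomial formula (the function name `binomial_pmf` is just illustrative), applied to the marble example:

```python
from math import comb

def binomial_pmf(n, k, p):
    """P(exactly k successes in n independent trials, success probability p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# 5 draws with replacement, P(red) = 0.5, exactly 2 reds
print(binomial_pmf(5, 2, 0.5))  # 0.3125
```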

### The normal distribution

A normal distribution is a continuous distribution defined by two parameters, the mean μ and the variance σ². Because of the symmetric shape of the normal curve, the mean is equal to the mode and the median.

#### The standard normal distribution

The standard normal distribution has a mean of 0 and a variance of 1. The total area under the curve (the total probability) is 1.

In order to find area under the normal curve, students can use **normalcdf()** in their TI calculator. The syntax is:

normalcdf (lower limit, upper limit, mean, standard deviation)
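Outside the calculator, the same area can be computed from the error function; the sketch below mirrors the TI syntax (the Python function name `normalcdf` is chosen to match the calculator command, not a library call):

```python
from math import erf, sqrt

def normalcdf(lower, upper, mean=0.0, sd=1.0):
    """Area under the normal curve between lower and upper,
    computed with the error function."""
    def cdf(x):
        return 0.5 * (1 + erf((x - mean) / (sd * sqrt(2))))
    return cdf(upper) - cdf(lower)

# P(-1 < Z < 1) for the standard normal: about 68% of the data
print(round(normalcdf(-1, 1), 4))  # 0.6827
```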

#### Probabilities for other normal distributions

### Probability mass function

In general, if the random variable *K* follows the binomial distribution with parameters *n* and *p*, we write *K* ~ B(*n*, *p*). The probability of getting exactly *k* successes in *n* trials is given by the probability mass function:

f(k; n, p) = C(n, k) p^k (1 − p)^(n − k)

for *k* = 0, 1, 2, ..., *n*, where

C(n, k) = n! / (k! (n − k)!)

is the binomial coefficient (hence the name of the distribution) "*n* choose *k*", also denoted *C*(*n*, *k*), _{n}*C*_{k}, or ^{n}*C*_{k}. The formula can be understood as follows: we want *k* successes (*p*^{k}) and *n* − *k* failures ((1 − *p*)^{n − k}). However, the *k* successes can occur anywhere among the *n* trials, and there are C(*n*, *k*) different ways of distributing *k* successes in a sequence of *n* trials.

In creating reference tables for binomial distribution probability, usually the table is filled in up to *n*/2 values. This is because for *k* > *n*/2, the probability can be calculated by its complement as

f(k; n, p) = f(n − k; n, 1 − p).

So, one must look to a different *k* and a different *p* (the binomial is not symmetrical in general). However, its behavior is not arbitrary. There is always an integer *m* that satisfies

(n + 1)p − 1 ≤ m < (n + 1)p.

As a function of *k*, the expression f(*k*; *n*, *p*) is monotone increasing for *k* < *m* and monotone decreasing for *k* > *m*, with the exception of the case where (*n* + 1)*p* is an integer. In that case, there are two maximum values, at *m* = (*n* + 1)*p* and *m* − 1. *m* is known as the *most probable* (*most likely*) outcome of the Bernoulli trials. Note that the probability of it occurring can be fairly small.
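Both facts, the complement identity and the most probable outcome, can be spot-checked numerically. The sketch below uses hypothetical parameters n = 10 and p = 0.3, for which (n + 1)p = 3.3 is not an integer, so the single most probable outcome is m = ⌊(n + 1)p⌋:

```python
from math import comb, floor

def f(k, n, p):
    """Binomial pmf: C(n, k) p^k (1 - p)^(n - k)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 10, 0.3

# Complement identity: f(k; n, p) == f(n - k; n, 1 - p)
assert abs(f(7, n, p) - f(3, n, 1 - p)) < 1e-12

# Most probable outcome m: here (n + 1)p = 3.3, so m = 3
m = floor((n + 1) * p)
assert f(m, n, p) == max(f(k, n, p) for k in range(n + 1))
print(m)  # 3
```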