Statistics/Testing Data/Chi-SquaredTest


General idea

Assume you have observed absolute frequencies o_i and expected absolute frequencies e_i under the null hypothesis of your test. Then it holds that

V = \sum_i \frac{(o_i-e_i)^2}{e_i} \approx \chi^2_f .

i might denote a simple index running from 1,...,I or even a multi-index (i_1,...,i_p) running from (1,...,1) to (I_1,...,I_p).

The test statistic V is approximately \chi^2-distributed if

  1. e_i \geq 1 holds for all absolute expected frequencies e_i, and
  2. e_i \geq 5 holds for at least 80% of the absolute expected frequencies e_i.

Note: In different books you might find different approximation conditions; please feel free to add further ones.

The degrees of freedom can be computed as the number of absolute observed frequencies that can be chosen freely. We know that the sum of the absolute observed frequencies is

 \sum_i o_i = n

which means that the maximum number of degrees of freedom is I-1. We might have to subtract from this the number of parameters we need to estimate from the sample, since each estimated parameter implies a further relationship between the observed frequencies.
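
As a sketch of how this test could be carried out in practice (the observed counts below are made up, the null hypothesis is a fair six-sided die, and numpy/scipy are assumed to be available):

 # Sketch: chi-squared goodness-of-fit test for a fair die (made-up counts)
 import numpy as np
 from scipy.stats import chi2

 observed = np.array([18, 24, 16, 14, 12, 16])     # absolute observed frequencies o_i
 n = observed.sum()                                # sample size
 expected = np.full(6, n / 6)                      # absolute expected frequencies e_i under H_0

 # approximation conditions from above: all e_i >= 1, at least 80% with e_i >= 5
 assert np.all(expected >= 1)
 assert np.mean(expected >= 5) >= 0.8

 V = np.sum((observed - expected) ** 2 / expected) # test statistic
 df = len(observed) - 1                            # I - 1, no parameters estimated here
 p_value = chi2.sf(V, df)                          # P(chi^2_df >= V)
 print(V, df, p_value)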

Derivation of the distribution of the test statistic

Following Boero, Smith and Wallis (2002), we need some knowledge of multivariate statistics to understand the derivation.

The random variable O describing the absolute observed frequencies (o_1, ..., o_k) in a sample has a multinomial distribution O \sim M(n;p_1,...,p_k), with n the number of observations in the sample and p_i the unknown true probabilities. Under certain approximation conditions (central limit theorem) it holds that

O \sim M(n; p_1,...,p_k) \approx N_k(\mu;\Sigma)

with N_k the multivariate k dimensional normal distribution, \mu = (np_1,...,np_k) and

\Sigma = (\sigma_{ij})_{i,j=1,...,k} = \begin{cases} -np_ip_j, & \mbox{if } i\neq j \\  np_i(1-p_i) & \mbox{otherwise} \end{cases}.

The covariance matrix \Sigma has only rank k-1, since p_1+...+p_k = 1.
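
A small numerical check may help to see this (the probabilities below are made up for illustration; numpy is assumed to be available):

 # Sketch: the covariance matrix of a multinomial distribution has rank k-1
 import numpy as np

 n = 100
 p = np.array([0.2, 0.3, 0.5])                     # k = 3 cell probabilities
 Sigma = n * (np.diag(p) - np.outer(p, p))         # sigma_ij as defined above

 print(np.linalg.matrix_rank(Sigma))               # 2, i.e. k - 1
 print(Sigma.sum(axis=1))                          # rows sum to (numerically) 0, since p_1+...+p_k = 1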

If we consider a generalized inverse \Sigma^-, then it holds that

(O-\mu)^T \Sigma^- (O-\mu) = \sum_i \frac{(o_i-e_i)^2}{e_i} \sim \chi^2_{k-1}

(for a proof see Pringle and Rayner, 1971).

Since the multinomial distribution is approximately multivariate normally distributed, it follows that

\sum_i \frac{(o_i-e_i)^2}{e_i} \approx \chi^2_{k-1}.

If there are further relations between the observed probabilities, the rank of \Sigma will decrease further.
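
The following sketch illustrates this identity numerically for made-up counts, using the Moore-Penrose pseudoinverse as one possible generalized inverse:

 # Sketch: the quadratic form with a generalized inverse equals the Pearson sum
 import numpy as np

 n = 100
 p = np.array([0.2, 0.3, 0.5])
 o = np.array([25, 27, 48])                        # observed counts, summing to n
 mu = n * p                                        # expected counts e_i = n p_i
 Sigma = n * (np.diag(p) - np.outer(p, p))

 quad_form = (o - mu) @ np.linalg.pinv(Sigma) @ (o - mu)  # (O-mu)^T Sigma^- (O-mu)
 pearson = np.sum((o - mu) ** 2 / mu)                     # sum_i (o_i-e_i)^2 / e_i
 print(quad_form, pearson)                                # numerically equal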

A common situation is that the parameters on which the expected probabilities depend need to be estimated from the observed data. As said above, it is usually stated that the degrees of freedom for the chi-squared distribution are k-1-r, with r the number of estimated parameters. In the case of parameter estimation with the maximum-likelihood method this is only true if the estimator is efficient (Chernoff and Lehmann, 1954). In general it holds that the degrees of freedom lie somewhere between k-1-r and k-1.
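
As a sketch, scipy's chisquare function lets you subtract the r estimated parameters from the degrees of freedom via its ddof argument (the counts below are made up, and the expected frequencies are assumed to come from a fitted one-parameter model, so r = 1):

 # Sketch: goodness-of-fit with one estimated parameter, df = k - 1 - r = 2
 from scipy.stats import chisquare

 observed = [32, 41, 18, 9]                        # k = 4 classes
 expected = [30.5, 39.0, 20.0, 10.5]               # from a model with one estimated parameter
 stat, p_value = chisquare(observed, f_exp=expected, ddof=1)
 print(stat, p_value)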

Examples

The most famous examples will be handled in detail in further sections: the \chi^2 test for independence, the \chi^2 test for homogeneity and the \chi^2 test for distributions.

The \chi^2 test can also be used to construct "quick and dirty" tests, e.g.

H_0: The random variable X is symmetrically distributed versus

H_1: The random variable X is not symmetrically distributed.

We know that in the case of a symmetric distribution the arithmetic mean \bar{x} and the median should be nearly the same. So a simple way to test this hypothesis would be to count how many observations are less than the mean (n_-) and how many observations are larger than the arithmetic mean (n_+). If mean and median are the same, then 50% of the observations should be smaller than the mean and 50% should be larger than the mean. It holds that

V = \frac{(n_- - n/2)^2}{n/2} + \frac{(n_+ - n/2)^2}{n/2} \approx \chi^2_1.
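
A minimal sketch of this symmetry test, using a simulated (clearly non-symmetric) sample and assuming numpy/scipy are available:

 # Sketch: "quick and dirty" test of symmetry around the arithmetic mean
 import numpy as np
 from scipy.stats import chi2

 rng = np.random.default_rng(0)
 x = rng.exponential(size=200)                     # non-symmetric sample for illustration
 n = len(x)

 n_minus = np.sum(x < x.mean())                    # observations below the mean
 n_plus = np.sum(x > x.mean())                     # observations above the mean

 V = (n_minus - n / 2) ** 2 / (n / 2) + (n_plus - n / 2) ** 2 / (n / 2)
 p_value = chi2.sf(V, df=1)                        # compare with chi^2_1
 print(V, p_value)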

References