# Linear Algebra/Topic: Markov Chains/Solutions

## Solutions

Use a computer for these problems. You can, for instance, adapt the Octave script given below.

Problem 1

These questions refer to the coin-flipping game.

1. Check the computations in the table at the end of the first paragraph.
2. Consider the second row of the vector table. Note that this row has alternating $0$'s. Must $p_{1}(j)$ be $0$ when $j$ is odd? Prove that it must be, or produce a counterexample.
3. Perform a computational experiment to estimate the chance that the player ends at five dollars, starting with one dollar, two dollars, and four dollars.
1. With this file coin.m

# Octave function for Markov coin game. p is chance of going down.
function w = coin(p,v)
q = 1-p;
A=[1,p,0,0,0,0;
0,0,p,0,0,0;
0,q,0,p,0,0;
0,0,q,0,p,0;
0,0,0,q,0,0;
0,0,0,0,q,1];
w = A * v;
endfunction

This Octave session produced the output given here.
octave:1> v0=[0;0;0;1;0;0]
v0 =
0
0
0
1
0
0
octave:2> p=.5
p = 0.50000
octave:3> v1=coin(p,v0)
v1 =
0.00000
0.00000
0.50000
0.00000
0.50000
0.00000
octave:4> v2=coin(p,v1)
v2 =
0.00000
0.25000
0.00000
0.50000
0.00000
0.25000

This continued for too many steps to list here.
octave:26> v24=coin(p,v23)
v24 =
0.39600
0.00276
0.00000
0.00447
0.00000
0.59676
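For readers without Octave, the same iteration can be sketched in plain Python (an illustrative translation of coin.m using plain lists, not part of the original solution):

```python
# Markov coin game: states are the player's holdings $0..$5.
# p is the chance of going down a dollar; $0 and $5 are absorbing.
def coin_step(p, v):
    q = 1 - p
    A = [[1, p, 0, 0, 0, 0],
         [0, 0, p, 0, 0, 0],
         [0, q, 0, p, 0, 0],
         [0, 0, q, 0, p, 0],
         [0, 0, 0, q, 0, 0],
         [0, 0, 0, 0, q, 1]]
    # matrix-vector product A*v
    return [sum(A[i][j] * v[j] for j in range(6)) for i in range(6)]

v0 = [0, 0, 0, 1, 0, 0]   # start with three dollars
v1 = coin_step(0.5, v0)   # matches Octave's v1: [0, 0, 0.5, 0, 0.5, 0]
v2 = coin_step(0.5, v1)   # matches Octave's v2: [0, 0.25, 0, 0.5, 0, 0.25]
```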
2. Using these formulas
$\begin{alignat}{2} p_{1}(n+1) &= 0.5\cdot p_{2}(n) &\qquad p_{2}(n+1) &= 0.5\cdot p_{1}(n)+0.5\cdot p_{3}(n) \\ p_{3}(n+1) &= 0.5\cdot p_{2}(n)+0.5\cdot p_{4}(n) &\qquad p_{4}(n+1) &= 0.5\cdot p_{3}(n) \end{alignat}$
and these initial conditions
$\begin{pmatrix} p_{0}(0) \\ p_{1}(0) \\ p_{2}(0) \\ p_{3}(0) \\ p_{4}(0) \\ p_{5}(0) \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \\ 0 \\ 0 \end{pmatrix}$
we will prove by induction that when $n$ is odd then $p_{1}(n)=p_{3}(n)=0$ and when $n$ is even then $p_{2}(n)=p_{4}(n)=0$. Note first that this is true in the $n=0$ base case by the initial conditions. For the inductive step, suppose that it is true in the $n=0$, $n=1$, ..., $n=k$ cases and consider the $n=k+1$ case. If $k+1$ is odd then the two
$\begin{array}{rl} p_{1}(k+1) &= 0.5\cdot p_{2}(k)=0.5\cdot 0=0 \\ p_{3}(k+1) &=0.5\cdot p_{2}(k)+0.5\cdot p_{4}(k) =0.5\cdot 0+0.5\cdot 0 =0 \end{array}$
follow from the inductive hypothesis that $p_{2}(k)=p_{4}(k)=0$ since $k$ is even. The case where $k+1$ is even is similar.
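The induction can also be spot-checked numerically. This short Python loop (an illustrative check of the parity pattern, using the same transition matrix with $p=0.5$) confirms it for the first twenty steps:

```python
# Check: p1 and p3 vanish at odd steps, p2 and p4 at even steps.
p = q = 0.5
A = [[1, p, 0, 0, 0, 0],
     [0, 0, p, 0, 0, 0],
     [0, q, 0, p, 0, 0],
     [0, 0, q, 0, p, 0],
     [0, 0, 0, q, 0, 0],
     [0, 0, 0, 0, q, 1]]
v = [0, 0, 0, 1, 0, 0]            # start at three dollars
for n in range(20):
    if n % 2 == 1:
        assert v[1] == 0 and v[3] == 0   # p1(n) = p3(n) = 0 for odd n
    else:
        assert v[2] == 0 and v[4] == 0   # p2(n) = p4(n) = 0 for even n
    v = [sum(A[i][j] * v[j] for j in range(6)) for i in range(6)]
```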
3. We can use, say, $n=100$. This Octave session
octave:1> B=[1,.5,0,0,0,0;
> 0,0,.5,0,0,0;
> 0,.5,0,.5,0,0;
> 0,0,.5,0,.5,0;
> 0,0,0,.5,0,0;
> 0,0,0,0,.5,1];
octave:2> B100=B**100
B100 =
1.00000 0.80000 0.60000 0.40000 0.20000 0.00000
0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
0.00000 0.20000 0.40000 0.60000 0.80000 1.00000
octave:3> B100*[0;1;0;0;0;0]
octave:4> B100*[0;0;1;0;0;0]
octave:5> B100*[0;0;0;1;0;0]
octave:6> B100*[0;0;0;0;1;0]

yields these outputs.
| starting with: | \$1 | \$2 | \$3 | \$4 |
|---|---|---|---|---|
| $s_0(100)$ | 0.80000 | 0.60000 | 0.40000 | 0.20000 |
| $s_1(100)$ | 0.00000 | 0.00000 | 0.00000 | 0.00000 |
| $s_2(100)$ | 0.00000 | 0.00000 | 0.00000 | 0.00000 |
| $s_3(100)$ | 0.00000 | 0.00000 | 0.00000 | 0.00000 |
| $s_4(100)$ | 0.00000 | 0.00000 | 0.00000 | 0.00000 |
| $s_5(100)$ | 0.20000 | 0.40000 | 0.60000 | 0.80000 |

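The same matrix power can be reproduced without Octave; this Python sketch (an illustrative check, not part of the original) raises the $p=0.5$ matrix to the 100th power by repeated multiplication and reads off the chances of ending at five dollars:

```python
# Coin-game matrix with p = 0.5; raise it to the 100th power.
B = [[1, .5, 0, 0, 0, 0],
     [0, 0, .5, 0, 0, 0],
     [0, .5, 0, .5, 0, 0],
     [0, 0, .5, 0, .5, 0],
     [0, 0, 0, .5, 0, 0],
     [0, 0, 0, 0, .5, 1]]

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

B100 = B
for _ in range(99):
    B100 = matmul(B100, B)

# Column j is the long-run distribution when starting with j dollars;
# row 5 gives the chance of ending at five dollars: about 0.2, 0.4, 0.8
# when starting with one, two, and four dollars.
answers = [B100[5][1], B100[5][2], B100[5][4]]
```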
Problem 2

We consider throws of a die, and say the system is in state $s_i$ if the largest number yet appearing on the die was $i$.

1. Give the transition matrix.
2. Start the system in state $s_1$, and run it for five throws. What is the vector at the end?

(Feller 1968, p. 424)

1. From these equations
$\begin{array}{*{6}{rc}r} (1/6)s_{1}(n) &+ &0s_{2}(n) &+ &0s_{3}(n) &+ &0s_{4}(n) &+ &0s_{5}(n) &+ &0s_{6}(n) &= &s_{1}(n+1) \\ (1/6)s_{1}(n) &+ &(2/6)s_{2}(n) &+ &0s_{3}(n) &+ &0s_{4}(n) &+ &0s_{5}(n) &+ &0s_{6}(n) &= &s_{2}(n+1) \\ (1/6)s_{1}(n) &+ &(1/6)s_{2}(n) &+ &(3/6)s_{3}(n) &+ &0s_{4}(n) &+ &0s_{5}(n) &+ &0s_{6}(n) &= &s_{3}(n+1) \\ (1/6)s_{1}(n) &+ &(1/6)s_{2}(n) &+ &(1/6)s_{3}(n) &+ &(4/6)s_{4}(n) &+ &0s_{5}(n) &+ &0s_{6}(n) &= &s_{4}(n+1) \\ (1/6)s_{1}(n) &+ &(1/6)s_{2}(n) &+ &(1/6)s_{3}(n) &+ &(1/6)s_{4}(n) &+ &(5/6)s_{5}(n) &+ &0s_{6}(n) &= &s_{5}(n+1) \\ (1/6)s_{1}(n) &+ &(1/6)s_{2}(n) &+ &(1/6)s_{3}(n) &+ &(1/6)s_{4}(n) &+ &(1/6)s_{5}(n) &+ &(6/6)s_{6}(n) &= &s_{6}(n+1) \end{array}$
We get this transition matrix.
$\begin{pmatrix} 1/6 &0 &0 &0 &0 &0 \\ 1/6 &2/6 &0 &0 &0 &0 \\ 1/6 &1/6 &3/6 &0 &0 &0 \\ 1/6 &1/6 &1/6 &4/6 &0 &0 \\ 1/6 &1/6 &1/6 &1/6 &5/6 &0 \\ 1/6 &1/6 &1/6 &1/6 &1/6 &6/6 \end{pmatrix}$
2. This is the Octave session, with outputs edited out and condensed into the table at the end.
octave:1> F=[1/6, 0, 0, 0, 0, 0;
> 1/6, 2/6, 0, 0, 0, 0;
> 1/6, 1/6, 3/6, 0, 0, 0;
> 1/6, 1/6, 1/6, 4/6, 0, 0;
> 1/6, 1/6, 1/6, 1/6, 5/6, 0;
> 1/6, 1/6, 1/6, 1/6, 1/6, 6/6];
octave:2> v0=[1;0;0;0;0;0]
octave:3> v1=F*v0
octave:4> v2=F*v1
octave:5> v3=F*v2
octave:6> v4=F*v3
octave:7> v5=F*v4

These are the results.
| | $n=0$ | $n=1$ | $n=2$ | $n=3$ | $n=4$ | $n=5$ |
|---|---|---|---|---|---|---|
| $s_1$ | 1 | 0.16667 | 0.027778 | 0.0046296 | 0.0007716 | 0.0001286 |
| $s_2$ | 0 | 0.16667 | 0.083333 | 0.0324074 | 0.0115741 | 0.00398663 |
| $s_3$ | 0 | 0.16667 | 0.138889 | 0.087963 | 0.0501543 | 0.0271348 |
| $s_4$ | 0 | 0.16667 | 0.194444 | 0.171296 | 0.135031 | 0.100437 |
| $s_5$ | 0 | 0.16667 | 0.25 | 0.282407 | 0.284722 | 0.27019 |
| $s_6$ | 0 | 0.16667 | 0.305556 | 0.421296 | 0.517747 | 0.598122 |

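As a sanity check on the last row, note that $s_6(n)$ is the chance that at least one six has appeared in $n$ throws, so $s_6(5)$ should equal $1-(5/6)^5$. This Python sketch (an independent check, not part of the original) confirms that against the iteration:

```python
# Die chain: state i means the largest number thrown so far is i.
F = [[1/6, 0,   0,   0,   0,   0],
     [1/6, 2/6, 0,   0,   0,   0],
     [1/6, 1/6, 3/6, 0,   0,   0],
     [1/6, 1/6, 1/6, 4/6, 0,   0],
     [1/6, 1/6, 1/6, 1/6, 5/6, 0],
     [1/6, 1/6, 1/6, 1/6, 1/6, 6/6]]
v = [1, 0, 0, 0, 0, 0]            # start in state s_1
for _ in range(5):
    v = [sum(F[i][j] * v[j] for j in range(6)) for i in range(6)]
# s_6(5) is the chance a six has shown in five throws
assert abs(v[5] - (1 - (5/6) ** 5)) < 1e-12
```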
Problem 3

There has been much interest in whether industries in the United States are moving from the Northeast and North Central regions to the South and West, motivated by the warmer climate, by lower wages, and by less unionization. Here is the transition matrix for large firms in Electric and Electronic Equipment (Kelton 1983, p. 43)

| | NE | NC | S | W | Z |
|---|---|---|---|---|---|
| NE | 0.787 | 0 | 0 | 0.111 | 0.102 |
| NC | 0 | 0.966 | 0.034 | 0 | 0 |
| S | 0 | 0.063 | 0.937 | 0 | 0 |
| W | 0 | 0 | 0.074 | 0.612 | 0.314 |
| Z | 0.021 | 0.009 | 0.005 | 0.010 | 0.954 |

For example, a firm in the Northeast region will be in the West region next year with probability $0.111$. (The Z entry is a "birth-death" state. For instance, with probability $0.102$ a large Electric and Electronic Equipment firm from the Northeast will move out of this system next year: go out of business, move abroad, or move to another category of firm. There is a $0.021$ probability that a firm in the National Census of Manufacturers will move into Electronics, or be created, or move in from abroad, into the Northeast. Finally, with probability $0.954$ a firm out of the categories will stay out, according to this research.)

1. Does the Markov model assumption of lack of history seem justified?
2. Assume that the initial distribution is even, except that the value at $Z$ is $0.9$. Compute the vectors for $n=1$ through $n=4$.
3. Suppose that the initial distribution is this.
| NE | NC | S | W | Z |
|---|---|---|---|---|
| 0.0000 | 0.6522 | 0.3478 | 0.0000 | 0.0000 |

Calculate the distributions for $n=1$ through $n=4$.

4. Find the distribution for $n=50$ and $n=51$. Has the system settled down to an equilibrium?
1. It does seem reasonable that, while the firm's present location should strongly influence where it is next time (for instance, whether it stays), any locations in the prior stages should have little influence. That is, while a company may move or stay because of where it is, it is unlikely to move or stay because of where it was.
2. This Octave session has been edited, with the outputs put together in a table at the end.
octave:1> M=[.787,0,0,.111,.102;
> 0,.966,.034,0,0;
> 0,.063,.937,0,0;
> 0,0,.074,.612,.314;
> .021,.009,.005,.010,.954]
M =
0.78700 0.00000 0.00000 0.11100 0.10200
0.00000 0.96600 0.03400 0.00000 0.00000
0.00000 0.06300 0.93700 0.00000 0.00000
0.00000 0.00000 0.07400 0.61200 0.31400
0.02100 0.00900 0.00500 0.01000 0.95400
octave:2> v0=[.025;.025;.025;.025;.900]
octave:3> v1=M*v0
octave:4> v2=M*v1
octave:5> v3=M*v2
octave:6> v4=M*v3

is summarized in this table.

$\begin{array}{c|cccc} \vec{p}_0 &\vec{p}_1 &\vec{p}_2 &\vec{p}_3 &\vec{p}_4 \\ \hline \begin{pmatrix} 0.025000 \\ 0.025000 \\ 0.025000 \\ 0.025000 \\ 0.900000 \end{pmatrix} &\begin{pmatrix} 0.114250 \\ 0.025000 \\ 0.025000 \\ 0.299750 \\ 0.859725 \end{pmatrix} &\begin{pmatrix} 0.210879 \\ 0.025000 \\ 0.025000 \\ 0.455251 \\ 0.825924 \end{pmatrix} &\begin{pmatrix} 0.300739 \\ 0.025000 \\ 0.025000 \\ 0.539804 \\ 0.797263 \end{pmatrix} &\begin{pmatrix} 0.377920 \\ 0.025000 \\ 0.025000 \\ 0.582550 \\ 0.772652 \end{pmatrix} \end{array}$
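Since the session's outputs were edited, the first step is worth re-deriving: for instance the NE entry of $\vec{p}_1$ is $0.787(0.025)+0.111(0.025)+0.102(0.900)=0.114250$. Here is the same step in Python (an illustrative check, not part of the original):

```python
# One step of the firm-migration chain, as in the Octave session.
M = [[.787, 0,    0,    .111, .102],
     [0,    .966, .034, 0,    0],
     [0,    .063, .937, 0,    0],
     [0,    0,    .074, .612, .314],
     [.021, .009, .005, .010, .954]]
v0 = [.025, .025, .025, .025, .900]
v1 = [sum(M[i][j] * v0[j] for j in range(5)) for i in range(5)]
# v1 is approximately [0.114250, 0.025, 0.025, 0.299750, 0.859725]
```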

3. This is a continuation of the Octave session from the prior item.
octave:7> p0=[.0000;.6522;.3478;.0000;.0000]
octave:8> p1=M*p0
octave:9> p2=M*p1
octave:10> p3=M*p2
octave:11> p4=M*p3

This summarizes the output.

$\begin{array}{c|cccc} \vec{p}_0 &\vec{p}_1 &\vec{p}_2 &\vec{p}_3 &\vec{p}_4 \\ \hline \begin{pmatrix} 0.00000 \\ 0.65220 \\ 0.34780 \\ 0.00000 \\ 0.00000 \end{pmatrix} &\begin{pmatrix} 0.00000 \\ 0.64185 \\ 0.36698 \\ 0.02574 \\ 0.00761 \end{pmatrix} &\begin{pmatrix} 0.0036329 \\ 0.6325047 \\ 0.3842942 \\ 0.0452966 \\ 0.0151277 \end{pmatrix} &\begin{pmatrix} 0.0094301 \\ 0.6240656 \\ 0.3999315 \\ 0.0609094 \\ 0.0225751 \end{pmatrix} &\begin{pmatrix} 0.016485 \\ 0.616445 \\ 0.414052 \\ 0.073960 \\ 0.029960 \end{pmatrix} \end{array}$

4. This is more of the same Octave session.

octave:12> M50=M**50
M50 =
0.03992 0.33666 0.20318 0.02198 0.37332
0.00000 0.65162 0.34838 0.00000 0.00000
0.00000 0.64553 0.35447 0.00000 0.00000
0.03384 0.38235 0.22511 0.01864 0.31652
0.04003 0.33316 0.20029 0.02204 0.37437
octave:13> p50=M50*p0
p50 =
0.29024
0.54615
0.54430
0.32766
0.28695
octave:14> p51=M*p50
p51 =
0.29406
0.54609
0.54442
0.33091
0.29076

This is close to a steady state.
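One way to quantify "close" is to compare successive iterates. This Python sketch (an illustrative check; it iterates $M$ fifty times rather than forming M**50, which is equivalent in exact arithmetic) shows that one more step moves every component by less than $0.005$:

```python
# Firm-migration chain: measure how much one extra step changes p50.
M = [[.787, 0,    0,    .111, .102],
     [0,    .966, .034, 0,    0],
     [0,    .063, .937, 0,    0],
     [0,    0,    .074, .612, .314],
     [.021, .009, .005, .010, .954]]

def step(v):
    return [sum(M[i][j] * v[j] for j in range(5)) for i in range(5)]

p = [0, .6522, .3478, 0, 0]
for _ in range(50):
    p = step(p)                   # p is now p50
p51 = step(p)
drift = max(abs(a - b) for a, b in zip(p51, p))
# drift is under 0.005, so the system has nearly settled
```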
Problem 4

This model has been suggested for some kinds of learning (Wickens 1982, p. 41). The learner starts in an undecided state $s_U$. Eventually the learner has to decide to do either response $A$ (that is, end in state $s_A$) or response $B$ (ending in $s_B$). However, the learner doesn't jump right from being undecided to being sure $A$ is the correct thing to do (or $B$). Instead, the learner spends some time in a "tentative-$A$" state, or a "tentative-$B$" state, trying the response out (denoted here $t_A$ and $t_B$). Imagine that once the learner has decided, it is final, so once $s_A$ or $s_B$ is entered it is never left. For the other state changes, imagine a transition is made with probability $p$ in either direction.

1. Construct the transition matrix.
2. Take $p=0.25$ and take the initial vector to be $1$ at $s_U$. Run this for five steps. What is the chance of ending up at $s_A$?
3. Do the same for $p=0.20$.
4. Graph $p$ versus the chance of ending at $s_A$. Is there a threshold value for $p$, above which the learner is almost sure not to take longer than five steps?
1. This is the relevant system of equations.
$\begin{array}{*{5}{rc}r} (1-2p)\cdot s_{U}(n) &+ &p\cdot t_{A}(n) &+ &p\cdot t_{B}(n) & & & & &= &s_{U}(n+1) \\ p\cdot s_{U}(n) &+ &(1-2p)\cdot t_{A}(n) & & & & & & &= &t_{A}(n+1) \\ p\cdot s_{U}(n) & & &+ &(1-2p)\cdot t_{B}(n) & & & & &= &t_{B}(n+1) \\ & &p\cdot t_{A}(n) & & &+ &s_{A}(n) & & &= &s_{A}(n+1) \\ & & & &p\cdot t_{B}(n) & & &+ &s_{B}(n) &= &s_{B}(n+1) \end{array}$
Thus we have this.
$\begin{pmatrix} 1-2p &p &p &0 &0 \\ p &1-2p &0 &0 &0 \\ p &0 &1-2p &0 &0 \\ 0 &p &0 &1 &0 \\ 0 &0 &p &0 &1 \end{pmatrix} \begin{pmatrix} s_{U}(n) \\ t_{A}(n) \\ t_{B}(n) \\ s_{A}(n) \\ s_{B}(n) \end{pmatrix} = \begin{pmatrix} s_{U}(n+1) \\ t_{A}(n+1) \\ t_{B}(n+1) \\ s_{A}(n+1) \\ s_{B}(n+1) \end{pmatrix}$
2. This is the Octave code, with the output removed.
octave:1> T=[.5,.25,.25,0,0;
> .25,.5,0,0,0;
> .25,0,.5,0,0;
> 0,.25,0,1,0;
> 0,0,.25,0,1]
T =
0.50000 0.25000 0.25000 0.00000 0.00000
0.25000 0.50000 0.00000 0.00000 0.00000
0.25000 0.00000 0.50000 0.00000 0.00000
0.00000 0.25000 0.00000 1.00000 0.00000
0.00000 0.00000 0.25000 0.00000 1.00000
octave:2> p0=[1;0;0;0;0]
octave:3> p1=T*p0
octave:4> p2=T*p1
octave:5> p3=T*p2
octave:6> p4=T*p3
octave:7> p5=T*p4

Here is the output. The probability of ending at $s_A$ is about $0.23$.
$\begin{array}{cc|ccccc} &\vec{p}_0 &\vec{p}_1 &\vec{p}_2 &\vec{p}_3 &\vec{p}_4 &\vec{p}_5 \\ \hline \begin{array}{c} s_U \\ t_A \\ t_B \\ s_A \\ s_B \end{array} &\begin{array}{c} 1 \\ 0 \\ 0 \\ 0 \\ 0 \end{array} &\begin{array}{c} 0.50000 \\ 0.25000 \\ 0.25000 \\ 0.00000 \\ 0.00000 \end{array} &\begin{array}{c} 0.375000 \\ 0.250000 \\ 0.250000 \\ 0.062500 \\ 0.062500 \end{array} &\begin{array}{c} 0.31250 \\ 0.21875 \\ 0.21875 \\ 0.12500 \\ 0.12500 \end{array} &\begin{array}{c} 0.26562 \\ 0.18750 \\ 0.18750 \\ 0.17969 \\ 0.17969 \end{array} &\begin{array}{c} 0.22656 \\ 0.16016 \\ 0.16016 \\ 0.22656 \\ 0.22656 \end{array} \end{array}$
3. With this file as learn.m
# Octave script file for learning model.
function w = learn(p)
T = [1-2*p,p, p, 0, 0;
p, 1-2*p,0, 0, 0;
p, 0, 1-2*p,0, 0;
0, p, 0, 1, 0;
0, 0, p, 0, 1];
T5 = T**5;
p5 = T5*[1;0;0;0;0];
w = p5(4);
endfunction

issuing the command octave:1> learn(.20) yields ans = 0.17664.
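As a cross-check, learn.m can be mirrored in plain Python (an illustrative translation, not part of the original solution); it reproduces both figures, about $0.22656$ for $p=0.25$ and $0.17664$ for $p=0.20$:

```python
def learn(p):
    """Chance of having settled on response A within five steps."""
    T = [[1 - 2*p, p,       p,       0, 0],
         [p,       1 - 2*p, 0,       0, 0],
         [p,       0,       1 - 2*p, 0, 0],
         [0,       p,       0,       1, 0],
         [0,       0,       p,       0, 1]]
    v = [1, 0, 0, 0, 0]           # start undecided, in s_U
    for _ in range(5):
        v = [sum(T[i][j] * v[j] for j in range(5)) for i in range(5)]
    return v[3]                    # the s_A component
```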
4. This Octave session
octave:1> x=(.01:.01:.50)';
octave:2> y=(.01:.01:.50)';
octave:3> for i=.01:.01:.50
> y(100*i)=learn(i);
> endfor
octave:4> z=[x, y];
octave:5> gplot z

yields this plot. There is no threshold value — no probability above which the curve rises sharply.

Problem 5

A certain town is in a certain country (this is a hypothetical problem). Each year ten percent of the town dwellers move to other parts of the country. Each year one percent of the people from elsewhere move to the town. Assume that there are two states $s_T$, living in town, and $s_C$, living elsewhere.

1. Construct the transition matrix.
2. Starting with an initial distribution $s_T=0.3$ and $s_C=0.7$, get the results for the first ten years.
3. Do the same for $s_T=0.2$.
4. Are the two outcomes alike or different?
1. From these equations
$\begin{array}{*{2}{rc}r} 0.90\cdot p_{T}(n) &+ &0.01\cdot p_{C}(n) &= &p_{T}(n+1) \\ 0.10\cdot p_{T}(n) &+ &0.99\cdot p_{C}(n) &= &p_{C}(n+1) \end{array}$
we get this matrix.
$\begin{pmatrix} 0.90 &0.01 \\ 0.10 &0.99 \end{pmatrix} \begin{pmatrix} p_{T}(n) \\ p_{C}(n) \end{pmatrix} = \begin{pmatrix} p_{T}(n+1) \\ p_{C}(n+1) \end{pmatrix}$
2. This is the result from Octave.

$\begin{array}{c|ccccc} n=0 &1 &2 &3 &4 &5 \\ \hline \begin{array}{c} 0.30000 \\ 0.70000 \end{array} &\begin{array}{c} 0.27700 \\ 0.72300 \end{array} &\begin{array}{c} 0.25653 \\ 0.74347 \end{array} &\begin{array}{c} 0.23831 \\ 0.76169 \end{array} &\begin{array}{c} 0.22210 \\ 0.77790 \end{array} &\begin{array}{c} 0.20767 \\ 0.79233 \end{array} \end{array}$ $\begin{array}{|ccccc} 6 &7 &8 &9 &10 \\ \hline \begin{array}{c} 0.19482 \\ 0.80518 \end{array} &\begin{array}{c} 0.18339 \\ 0.81661 \end{array} &\begin{array}{c} 0.17322 \\ 0.82678 \end{array} &\begin{array}{c} 0.16417 \\ 0.83583 \end{array} &\begin{array}{c} 0.15611 \\ 0.84389 \end{array} \end{array}$

3. This is the $s_T=0.2$ result.

$\begin{array}{c|ccccc} n=0 &1 &2 &3 &4 &5 \\ \hline \begin{array}{c} 0.20000 \\ 0.80000 \end{array} &\begin{array}{c} 0.18800 \\ 0.81200 \end{array} &\begin{array}{c} 0.17732 \\ 0.82268 \end{array} &\begin{array}{c} 0.16781 \\ 0.83219 \end{array} &\begin{array}{c} 0.15936 \\ 0.84064 \end{array} &\begin{array}{c} 0.15183 \\ 0.84817 \end{array} \end{array}$ $\begin{array}{|ccccc} 6 &7 &8 &9 &10 \\ \hline \begin{array}{c} 0.14513 \\ 0.85487 \end{array} &\begin{array}{c} 0.13916 \\ 0.86084 \end{array} &\begin{array}{c} 0.13385 \\ 0.86615 \end{array} &\begin{array}{c} 0.12913 \\ 0.87087 \end{array} &\begin{array}{c} 0.12493 \\ 0.87507 \end{array} \end{array}$

4. Although the probability vectors start $0.1$ apart, after ten years they are only about $0.031$ apart. So the two outcomes are alike.
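The two runs draw together because the chain has a unique steady state: setting $p_T=0.90\,p_T+0.01\,(1-p_T)$ gives $p_T=1/11\approx 0.0909$. This Python sketch (an independent check, not part of the original) reproduces both trajectories and the common limit:

```python
def town_after(p_t0, years):
    """Iterate the two-state town/country chain from town share p_t0."""
    p_t = p_t0
    for _ in range(years):
        # town share: 90% of town dwellers stay, 1% of outsiders move in
        p_t = 0.90 * p_t + 0.01 * (1 - p_t)
    return p_t

a = town_after(0.3, 10)   # about 0.15611, as in the first table
b = town_after(0.2, 10)   # about 0.12493, as in the second
# both decay toward the fixed point 1/11 = 0.0909...
```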
Problem 6

For the World Series application, use a computer to generate the seven vectors for $p=0.55$ and $p=0.6$.

1. What is the chance of the National League team winning it all, even though they have only a probability of $0.45$ or $0.40$ of winning any one game?
2. Graph the probability $p$ against the chance that the American League team wins it all. Is there a threshold value— a $p$ above which the better team is essentially ensured of winning?

(Some sample code is included below.)

These are the $p=.55$ vectors,

| | $n=0$ | $n=1$ | $n=2$ | $n=3$ | $n=4$ | $n=5$ | $n=6$ | $n=7$ |
|---|---|---|---|---|---|---|---|---|
| 0-0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 1-0 | 0 | 0.55000 | 0 | 0 | 0 | 0 | 0 | 0 |
| 0-1 | 0 | 0.45000 | 0 | 0 | 0 | 0 | 0 | 0 |
| 2-0 | 0 | 0 | 0.30250 | 0 | 0 | 0 | 0 | 0 |
| 1-1 | 0 | 0 | 0.49500 | 0 | 0 | 0 | 0 | 0 |
| 0-2 | 0 | 0 | 0.20250 | 0 | 0 | 0 | 0 | 0 |
| 3-0 | 0 | 0 | 0 | 0.16638 | 0 | 0 | 0 | 0 |
| 2-1 | 0 | 0 | 0 | 0.40837 | 0 | 0 | 0 | 0 |
| 1-2 | 0 | 0 | 0 | 0.33412 | 0 | 0 | 0 | 0 |
| 0-3 | 0 | 0 | 0 | 0.09112 | 0 | 0 | 0 | 0 |
| 4-0 | 0 | 0 | 0 | 0 | 0.09151 | 0.09151 | 0.09151 | 0.09151 |
| 3-1 | 0 | 0 | 0 | 0 | 0.29948 | 0 | 0 | 0 |
| 2-2 | 0 | 0 | 0 | 0 | 0.36754 | 0 | 0 | 0 |
| 1-3 | 0 | 0 | 0 | 0 | 0.20047 | 0 | 0 | 0 |
| 0-4 | 0 | 0 | 0 | 0 | 0.04101 | 0.04101 | 0.04101 | 0.04101 |
| 4-1 | 0 | 0 | 0 | 0 | 0 | 0.16471 | 0.16471 | 0.16471 |
| 3-2 | 0 | 0 | 0 | 0 | 0 | 0.33691 | 0 | 0 |
| 2-3 | 0 | 0 | 0 | 0 | 0 | 0.27565 | 0 | 0 |
| 1-4 | 0 | 0 | 0 | 0 | 0 | 0.09021 | 0.09021 | 0.09021 |
| 4-2 | 0 | 0 | 0 | 0 | 0 | 0 | 0.18530 | 0.18530 |
| 3-3 | 0 | 0 | 0 | 0 | 0 | 0 | 0.30322 | 0 |
| 2-4 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12404 | 0.12404 |
| 4-3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16677 |
| 3-4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.13645 |

and these are the $p=.60$ vectors.

| | $n=0$ | $n=1$ | $n=2$ | $n=3$ | $n=4$ | $n=5$ | $n=6$ | $n=7$ |
|---|---|---|---|---|---|---|---|---|
| 0-0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 1-0 | 0 | 0.60000 | 0 | 0 | 0 | 0 | 0 | 0 |
| 0-1 | 0 | 0.40000 | 0 | 0 | 0 | 0 | 0 | 0 |
| 2-0 | 0 | 0 | 0.36000 | 0 | 0 | 0 | 0 | 0 |
| 1-1 | 0 | 0 | 0.48000 | 0 | 0 | 0 | 0 | 0 |
| 0-2 | 0 | 0 | 0.16000 | 0 | 0 | 0 | 0 | 0 |
| 3-0 | 0 | 0 | 0 | 0.21600 | 0 | 0 | 0 | 0 |
| 2-1 | 0 | 0 | 0 | 0.43200 | 0 | 0 | 0 | 0 |
| 1-2 | 0 | 0 | 0 | 0.28800 | 0 | 0 | 0 | 0 |
| 0-3 | 0 | 0 | 0 | 0.06400 | 0 | 0 | 0 | 0 |
| 4-0 | 0 | 0 | 0 | 0 | 0.12960 | 0.12960 | 0.12960 | 0.12960 |
| 3-1 | 0 | 0 | 0 | 0 | 0.34560 | 0 | 0 | 0 |
| 2-2 | 0 | 0 | 0 | 0 | 0.34560 | 0 | 0 | 0 |
| 1-3 | 0 | 0 | 0 | 0 | 0.15360 | 0 | 0 | 0 |
| 0-4 | 0 | 0 | 0 | 0 | 0.02560 | 0.02560 | 0.02560 | 0.02560 |
| 4-1 | 0 | 0 | 0 | 0 | 0 | 0.20736 | 0.20736 | 0.20736 |
| 3-2 | 0 | 0 | 0 | 0 | 0 | 0.34560 | 0 | 0 |
| 2-3 | 0 | 0 | 0 | 0 | 0 | 0.23040 | 0 | 0 |
| 1-4 | 0 | 0 | 0 | 0 | 0 | 0.06144 | 0.06144 | 0.06144 |
| 4-2 | 0 | 0 | 0 | 0 | 0 | 0 | 0.20736 | 0.20736 |
| 3-3 | 0 | 0 | 0 | 0 | 0 | 0 | 0.27648 | 0 |
| 2-4 | 0 | 0 | 0 | 0 | 0 | 0 | 0.09216 | 0.09216 |
| 4-3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16589 |
| 3-4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.11059 |
1. The script from the computer code section can be easily adapted.
# Octave script file to compute chance of World Series outcomes.
function w = markov(p,v)
q = 1-p;
A=[0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0; # 0-0
p,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0; # 1-0
q,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0; # 0-1_
0,p,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0; # 2-0
0,q,p,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0; # 1-1
0,0,q,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0; # 0-2__
0,0,0,p,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0; # 3-0
0,0,0,q,p,0, 0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0; # 2-1
0,0,0,0,q,p, 0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0; # 1-2_
0,0,0,0,0,q, 0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0; # 0-3
0,0,0,0,0,0, p,0,0,0,1,0, 0,0,0,0,0,0, 0,0,0,0,0,0; # 4-0
0,0,0,0,0,0, q,p,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0; # 3-1__
0,0,0,0,0,0, 0,q,p,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0; # 2-2
0,0,0,0,0,0, 0,0,q,p,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0; # 1-3
0,0,0,0,0,0, 0,0,0,q,0,0, 0,0,1,0,0,0, 0,0,0,0,0,0; # 0-4_
0,0,0,0,0,0, 0,0,0,0,0,p, 0,0,0,1,0,0, 0,0,0,0,0,0; # 4-1
0,0,0,0,0,0, 0,0,0,0,0,q, p,0,0,0,0,0, 0,0,0,0,0,0; # 3-2
0,0,0,0,0,0, 0,0,0,0,0,0, q,p,0,0,0,0, 0,0,0,0,0,0; # 2-3__
0,0,0,0,0,0, 0,0,0,0,0,0, 0,q,0,0,0,0, 1,0,0,0,0,0; # 1-4
0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,p,0, 0,1,0,0,0,0; # 4-2
0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,q,p, 0,0,0,0,0,0; # 3-3_
0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,q, 0,0,0,1,0,0; # 2-4
0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,p,0,1,0; # 4-3
0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,q,0,0,1]; # 3-4
v7 = (A**7) * v;
w = v7(11)+v7(16)+v7(20)+v7(23)
endfunction

Using this script, we get that when the American League has a $p=0.55$ probability of winning each game then their probability of winning the first-to-win-four series is $0.60829$. When their probability of winning any one game is $p=0.6$ then their probability of winning the series is $0.71021$.
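These figures can also be reached in closed form: the better team takes four games before the opponent does with probability $\sum_{k=0}^{3}\binom{3+k}{k}p^4 q^k$ (the team wins the last game played plus three of the preceding $3+k$). This Python sketch (an independent check, not part of the original solution) reproduces both values:

```python
from math import comb

def series_win(p):
    """Chance of winning four games before the opponent does."""
    q = 1 - p
    # k is the number of games the opponent wins before the clincher
    return sum(comb(3 + k, k) * p**4 * q**k for k in range(4))

# series_win(0.55) is about 0.60829; series_win(0.60) is about 0.71021
```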
2. This Octave session
octave:1> v0=[1;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0];
octave:2> x=(.01:.01:.99)';
octave:3> y=(.01:.01:.99)';
octave:4> for i=.01:.01:.99
> y(100*i)=markov(i,v0);
> endfor
octave:5> z=[x, y];
octave:6> gplot z

yields this graph. By eye we judge that if $p>0.7$ then the team is close to assured of winning the series.

Problem 7

A Markov matrix has each entry positive and each column sums to $1$.

1. Check that the three transition matrices shown in this Topic meet these two conditions. Must any transition matrix do so?
2. Observe that if $A\vec{v}_0=\vec{v}_1$ and $A\vec{v}_1=\vec{v}_2$ then $A^2$ is a transition matrix from $\vec{v}_0$ to $\vec{v}_2$. Show that a power of a Markov matrix is also a Markov matrix.
3. Generalize the prior item by proving that the product of two appropriately-sized Markov matrices is a Markov matrix.
1. All three matrices in this Topic have nonnegative entries with each column summing to $1$. Any transition matrix must satisfy the column-sum condition: a column lists the probabilities of moving from one state to each of the states (including back to the same state), and these exhaust the possibilities, so the total probability is $100\%$.
2. See the answer to the third item.
3. We will do the $2 \! \times \! 2$ case; the larger cases differ only in notation. This product
$\begin{pmatrix} a_{1,1} &a_{1,2} \\ a_{2,1} &a_{2,2} \end{pmatrix} \begin{pmatrix} b_{1,1} &b_{1,2} \\ b_{2,1} &b_{2,2} \end{pmatrix} =\begin{pmatrix} a_{1,1}b_{1,1}+a_{1,2}b_{2,1} &a_{1,1}b_{1,2}+a_{1,2}b_{2,2} \\ a_{2,1}b_{1,1}+a_{2,2}b_{2,1} &a_{2,1}b_{1,2}+a_{2,2}b_{2,2} \\ \end{pmatrix}$
has these two column sums
$(a_{1,1}b_{1,1}+a_{1,2}b_{2,1}) +(a_{2,1}b_{1,1}+a_{2,2}b_{2,1}) =(a_{1,1}+a_{2,1})\cdot b_{1,1} +(a_{1,2}+a_{2,2})\cdot b_{2,1} =1\cdot b_{1,1}+1\cdot b_{2,1} =1$
and
$(a_{1,1}b_{1,2}+a_{1,2}b_{2,2}) +(a_{2,1}b_{1,2}+a_{2,2}b_{2,2}) =(a_{1,1}+a_{2,1})\cdot b_{1,2} +(a_{1,2}+a_{2,2})\cdot b_{2,2} =1\cdot b_{1,2}+1\cdot b_{2,2} =1$
as required.
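The column-sum argument extends mechanically to any size. Here is a quick numeric illustration in Python (with two arbitrarily chosen $3 \! \times \! 3$ column-stochastic matrices of my own, not from the text) that the product's columns again sum to $1$:

```python
def col_sums(M):
    n = len(M)
    return [sum(M[i][j] for i in range(n)) for j in range(n)]

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Two Markov matrices: nonnegative entries, each column sums to 1.
A = [[.5, .2, .3],
     [.3, .7, .3],
     [.2, .1, .4]]
B = [[.6, .1, .25],
     [.1, .8, .25],
     [.3, .1, .5]]
# each column of the product A*B again sums to 1
sums = col_sums(matmul(A, B))
```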

Computer Code

This script markov.m for the computer algebra system Octave was used to generate the table of World Series outcomes. (The sharp character # marks the rest of a line as a comment.)

# Octave script file to compute chance of World Series outcomes.
function w = markov(p,v)
q = 1-p;
A=[0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0;  # 0-0
p,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0;  # 1-0
q,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0;  # 0-1_
0,p,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0;  # 2-0
0,q,p,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0;  # 1-1
0,0,q,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0;  # 0-2__
0,0,0,p,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0;  # 3-0
0,0,0,q,p,0, 0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0;  # 2-1
0,0,0,0,q,p, 0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0;  # 1-2_
0,0,0,0,0,q, 0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0;  # 0-3
0,0,0,0,0,0, p,0,0,0,1,0, 0,0,0,0,0,0, 0,0,0,0,0,0;  # 4-0
0,0,0,0,0,0, q,p,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0;  # 3-1__
0,0,0,0,0,0, 0,q,p,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0;  # 2-2
0,0,0,0,0,0, 0,0,q,p,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0;  # 1-3
0,0,0,0,0,0, 0,0,0,q,0,0, 0,0,1,0,0,0, 0,0,0,0,0,0;  # 0-4_
0,0,0,0,0,0, 0,0,0,0,0,p, 0,0,0,1,0,0, 0,0,0,0,0,0;  # 4-1
0,0,0,0,0,0, 0,0,0,0,0,q, p,0,0,0,0,0, 0,0,0,0,0,0;  # 3-2
0,0,0,0,0,0, 0,0,0,0,0,0, q,p,0,0,0,0, 0,0,0,0,0,0;  # 2-3__
0,0,0,0,0,0, 0,0,0,0,0,0, 0,q,0,0,0,0, 1,0,0,0,0,0;  # 1-4
0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,p,0, 0,1,0,0,0,0;  # 4-2
0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,q,p, 0,0,0,0,0,0;  # 3-3_
0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,q, 0,0,0,1,0,0;  # 2-4
0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,p,0,1,0;  # 4-3
0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,q,0,0,1]; # 3-4
w = A * v;
endfunction


> v0=[1;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0]
> p=.5
> v1=markov(p,v0)
> v2=markov(p,v1)
...


Translating to another computer algebra system should be easy; they all have commands similar to these.

## References

• Feller, William (1968), An Introduction to Probability Theory and Its Applications, 1 (3rd ed.), Wiley .
• Kelton, Christina M.L. (1983), Trends on the Relocation of U.S. Manufacturing, Wiley .
• Wickens, Thomas D. (1982), Models for Behavior, W.H. Freeman .