Two of the most basic means of estimating coefficients of generating functions are the Cauchy-Hadamard theorem and Cauchy's inequality.
We also include some background knowledge which will be useful for future chapters.
One key concept in analysis is a sequence of numbers. In our case, the sequence of numbers could be the coefficients of the generating function we are interested in, written $\{a_n\}$.
A point of accumulation of a sequence is a number $t$ such that, given $\epsilon > 0$, there are infinitely many $n$ such that[1] $|a_n - t| < \epsilon$.
For example, the sequence of coefficients of $\frac{1}{1-z}$ ($\{1, 1, 1, \cdots\}$) has point of accumulation $1$.
$e^z$ ($\{1, 1, \frac{1}{2!}, \frac{1}{3!}, \cdots\}$) has point of accumulation $0$.
$\frac{1}{1-z^2}$ ($\{1, 0, 1, 0, \cdots\}$) has two points of accumulation, $0$ and $1$.
One useful property of a sequence of numbers is its limit superior, written $\limsup a_n$. This is the least upper bound of the set of points of accumulation of the sequence $\{a_n\}$[2]. In our above examples, these would be $1$, $0$ and $1$ respectively.
$f(z)$ is said to converge if its series expansion $\sum a_n z^n$ equals a finite value. It may only do so for particular values of $z$. There are various tests for whether or not a series converges and for which values of $z$.
For example, $\frac{1}{1-z}$ has series expansion $\sum z^n$. We can test this series for convergence with d'Alembert's ratio test[3], which states that a series with terms $a_n$ converges if
$$\lim_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right| < 1$$
In our example, the ratio is $\frac{z^{n+1}}{z^n} = z$, whose absolute value is less than $1$ only if $|z| < 1$. Therefore, the series converges for $|z| < 1$.
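As a quick numerical sanity check (a Python sketch of ours, not taken from the cited references; the helper name `partial_sum` and the choice of test points are arbitrary), the partial sums of $\sum z^n$ approach $\frac{1}{1-z}$ for $|z| < 1$ and grow without bound for $|z| > 1$:

```python
def partial_sum(z, terms):
    """Sum of z**n for n = 0 .. terms-1."""
    total = 0.0
    for n in range(terms):
        total += z ** n
    return total

for z in [0.5, 0.9, 1.1]:
    closed_form = 1 / (1 - z) if abs(z) < 1 else "diverges"
    print(z, partial_sum(z, 200), closed_form)
```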
The radius of convergence of $f(z)$ is the value $z_0$ such that the series expansion converges for $|z| < z_0$ (and diverges for $|z| > z_0$). In our example, the radius of convergence of $\frac{1}{1-z}$ is $1$.
It should be noted that the radius of convergence is equal to the distance from the origin to the nearest singularity of the function.[4] We will read about singularities later.
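A brief illustration (a standard example we add here for concreteness; it is not taken from the cited pages): the function $\frac{1}{1-2z}$ expands as
$$\frac{1}{1-2z} = \sum_{n \geq 0} 2^n z^n$$
which converges exactly when $|2z| < 1$, i.e. $|z| < \frac{1}{2}$, and its only singularity is at $z = \frac{1}{2}$, at distance $\frac{1}{2}$ from the origin.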
If $f(z) = \sum a_n z^n$ and $R$ its radius of convergence, then[5]:
$$\frac{1}{R} = \limsup_{n \to \infty} |a_n|^{1/n}$$
One consequence of this theorem is[6] that, for all $\epsilon > 0$,
$$|a_n| \leq \left( \frac{1}{R} + \epsilon \right)^{n}$$
for all sufficiently large $n$, and
$$|a_n| \geq \left( \frac{1}{R} - \epsilon \right)^{n}$$
for infinitely many $n$.
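A small numerical illustration of the theorem (a Python sketch of ours; the particular generating function is an arbitrary choice): for $f(z) = \frac{1}{(1-2z)(1-z)}$ the coefficients are $a_n = 2^{n+1} - 1$, the nearest singularity is at $z = \frac{1}{2}$, so $R = \frac{1}{2}$ and we expect $|a_n|^{1/n} \to 2$.

```python
# Coefficients of 1/((1-2z)(1-z)) are a_n = 2^(n+1) - 1; the Cauchy-Hadamard
# theorem predicts |a_n|^(1/n) -> 1/R = 2.
for n in [10, 50, 200]:
    a_n = 2 ** (n + 1) - 1
    print(n, a_n ** (1.0 / n))
```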
Proof due to Wilf[7] and Lang[8].
The radius of convergence $R$ of a function $f(z)$ means that if $|z| < R$ then the series for $f(z)$ converges to a finite value, i.e. $f(z) \neq \infty$[9].
Take $f(z) = \sum a_n z^n$, $R$ its radius of convergence and $t = \limsup_{n \to \infty} |a_n|^{1/n}$. By definition of $\limsup$[10], for all but a finite number of $n$:
$$|a_n| \leq (t + \epsilon)^{n}$$
If $|z| < \frac{1}{t + \epsilon}$ then, for all but finitely many $n$, $|a_n z^n| \leq ((t+\epsilon)|z|)^n$ with $(t+\epsilon)|z| < 1$, so the series converges by comparison with a geometric series. Therefore $R \geq \frac{1}{t + \epsilon}$[11].
By definition of $\limsup$, there exist infinitely many $n$ with
$$|a_n| \geq (t - \epsilon)^{n}$$
$f(z)$ does not converge if $|z| = \frac{1}{t - \epsilon}$ (because then $|a_n z^n| \geq 1$ for infinitely many $n$, so the terms do not tend to $0$), therefore $R \leq \frac{1}{t - \epsilon}$[12].
Since $R \geq \frac{1}{t + \epsilon}$ and $R \leq \frac{1}{t - \epsilon}$ for every $\epsilon > 0$, we conclude that $R = \frac{1}{t}$, that is
$$\frac{1}{R} = \limsup_{n \to \infty} |a_n|^{1/n}$$
[13].
Now, we prove the consequence of the theorem. Since $\frac{1}{R} = \limsup_{n \to \infty} |a_n|^{1/n}$, the definition of $\limsup$ gives that, for all but a finite number of $n$,
$$|a_n| \leq \left( \frac{1}{R} + \epsilon \right)^{n}$$
and that there exist infinitely many $n$ with
$$|a_n| \geq \left( \frac{1}{R} - \epsilon \right)^{n}$$
A complex number is a number $z = x + iy$ where $x$ and $y$ are both real numbers and $i$ is the imaginary unit where $i^2 = -1$. $x$ is called the real component and $y$ the imaginary component (even though $y$ is itself a real number).
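For readers following along in code, Python has complex numbers built in (a small aside of ours, not taken from the cited texts): the literal `1j` plays the role of $i$.

```python
z = 3 + 4j             # the complex number 3 + 4i
print(z.real, z.imag)  # real and imaginary components: 3.0 4.0
print((1j) ** 2)       # (-1+0j), i.e. i^2 = -1
print(abs(z))          # the modulus |z| = 5.0
```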
Because a complex number has two components, real and imaginary, complex integration involves integrating around a curve in the two-dimensional plane. This is called contour integration. We denote this:
$$\int_{C} f(z) \, dz$$
where $C$ denotes the contour.
It is not necessary to know how to compute contour integrals in order to understand the later material in this book.
A function $f(z)$ is analytic at a point $z_0$ if it is defined, single-valued and has a derivative at every point at and around $z_0$[14]. We say a function $f(z)$ is analytic on a set of points $U$ if it is analytic at every point of $U$[15].
One property of an analytic function is that when performing contour integration on a closed contour $C$ we can continuously deform the contour $C$ into another closed contour $C'$ without changing the value of the integral (as long as in deforming the contour we do not pass through any singularities)[16].
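To see this deformation property numerically (a Python sketch of ours; the function $1/z$, the radii and the step count are arbitrary choices), we can integrate $1/z$, which is analytic everywhere except at $0$, around circles of radius $1$ and radius $3$; both give approximately $2\pi i$, because one circle can be deformed into the other without crossing the singularity at $0$.

```python
import cmath

def circle_integral(f, radius, steps=2000):
    """Approximate the contour integral of f around the circle |z| = radius."""
    total = 0 + 0j
    for k in range(steps):
        theta = 2 * cmath.pi * k / steps
        z = radius * cmath.exp(1j * theta)
        dz = 1j * z * (2 * cmath.pi / steps)  # dz = i z dtheta
        total += f(z) * dz
    return total

print(circle_integral(lambda z: 1 / z, 1.0))  # approximately 2*pi*i
print(circle_integral(lambda z: 1 / z, 3.0))  # approximately 2*pi*i
```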
Cauchy's integral formula states:[17]
$$f(z_0) = \frac{1}{2\pi i} \int_{C} f(z) \frac{dz}{z - z_0}$$
where $C$ is a closed contour, $z_0$ is a point inside $C$ and $f(z)$ is analytic on and inside the contour.
Proof: Because $f(z)$ is analytic, the integrand $\frac{f(z)}{z - z_0}$ is analytic everywhere between $C$ and any small circle around $z_0$, so we can replace the integral around $C$ with the integral around a circle $\gamma$ with centre $z_0$ and radius $\rho$:
$$\frac{1}{2\pi i} \int_{C} f(z) \frac{dz}{z - z_0} = \frac{1}{2\pi i} \int_{\gamma} f(z) \frac{dz}{z - z_0}$$
As $f(z)$ is analytic it is also continuous. This means for any $\epsilon > 0$ there exists a $\delta > 0$ such that $|z - z_0| < \delta \implies |f(z) - f(z_0)| < \epsilon$. We can ensure this holds on $\gamma$ by choosing $\rho \leq \delta$.
$$\int_{\gamma} f(z) \frac{dz}{z - z_0} = f(z_0) \int_{\gamma} \frac{dz}{z - z_0} + \int_{\gamma} \frac{f(z) - f(z_0)}{z - z_0} \, dz$$
$$f(z_0) \int_{\gamma} \frac{dz}{z - z_0} = 2\pi i f(z_0)$$
(parametrising $\gamma$ as $z = z_0 + \rho e^{i\theta}$, the integral $\int_{\gamma} \frac{dz}{z - z_0}$ becomes $\int_0^{2\pi} i \, d\theta = 2\pi i$)
$$\left| \int_{\gamma} \frac{f(z) - f(z_0)}{z - z_0} \, dz \right| \leq \frac{\epsilon}{\rho} \, 2\pi \rho = 2\pi \epsilon$$
Finally,
$$\frac{1}{2\pi i} \int_{C} f(z) \frac{dz}{z - z_0} = \frac{1}{2\pi i} \int_{\gamma} f(z) \frac{dz}{z - z_0} = \frac{1}{2\pi i} \left( 2\pi i f(z_0) + E \right)$$
where $|E| \leq 2\pi \epsilon$. The left-hand side does not depend on $\epsilon$ and differs from $f(z_0)$ by at most $\epsilon$, for every $\epsilon > 0$; therefore it equals $f(z_0)$.
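We can check the formula numerically (a Python sketch of ours; the function $e^z$, the point $z_0$ and the contour, here the unit circle, are arbitrary choices):

```python
import cmath

def cauchy_integral(f, z0, radius=1.0, steps=4000):
    """Approximate (1/(2 pi i)) * integral of f(z)/(z - z0) around |z| = radius."""
    total = 0 + 0j
    for k in range(steps):
        theta = 2 * cmath.pi * k / steps
        z = radius * cmath.exp(1j * theta)
        dz = 1j * z * (2 * cmath.pi / steps)
        total += f(z) / (z - z0) * dz
    return total / (2j * cmath.pi)

z0 = 0.3 + 0.1j
print(cauchy_integral(cmath.exp, z0))  # should be close to exp(z0)
print(cmath.exp(z0))
```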
If $f(z)$ is analytic inside and on a contour $C$, the Taylor series expansion of $f(z)$ around a point $z_0$ inside $C$ is:
$$f(z) = f(z_0) + f'(z_0)(z - z_0) + \frac{f''(z_0)(z - z_0)^2}{2!} + \cdots$$
Cauchy's coefficient formula states that:
$$a_n = \frac{1}{2\pi i} \int_{C} f(z) \frac{dz}{z^{n+1}}$$
Proof: Cauchy's integral formula states:
$$f(z_0) = \frac{1}{2\pi i} \int_{C} f(z) \frac{dz}{z - z_0}$$
Differentiating both sides $n$ times with respect to $z_0$ and dividing by $n!$ gives:
$$\frac{f^{(n)}(z_0)}{n!} = \frac{1}{2\pi i} \int_{C} f(z) \frac{dz}{(z - z_0)^{n+1}}$$
The Taylor series expansion of $f(z)$ around $0$ is:
$$f(z) = f(0) + f'(0)z + \frac{f''(0)z^2}{2!} + \cdots$$
Therefore:
$$a_n = \frac{f^{(n)}(0)}{n!} = \frac{1}{2\pi i} \int_{C} f(z) \frac{dz}{z^{n+1}}$$
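Cauchy's coefficient formula can also be used numerically (a Python sketch of ours; the choices of $f(z) = e^z$, the unit circle and the step count are arbitrary): integrating $f(z)/z^{n+1}$ around a circle enclosing the origin recovers $a_n = \frac{1}{n!}$.

```python
import cmath
import math

def coefficient(f, n, radius=1.0, steps=4000):
    """Approximate a_n = (1/(2 pi i)) * integral of f(z)/z^(n+1) around |z| = radius."""
    total = 0 + 0j
    for k in range(steps):
        theta = 2 * cmath.pi * k / steps
        z = radius * cmath.exp(1j * theta)
        dz = 1j * z * (2 * cmath.pi / steps)
        total += f(z) / z ** (n + 1) * dz
    return total / (2j * cmath.pi)

for n in range(5):
    print(n, coefficient(cmath.exp, n).real, 1 / math.factorial(n))
```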
Theorem due to Titchmarsh[18].
If $R$ is the radius of convergence of $f(z)$, then for all $n \geq 0$ and $0 < r < R$:
$$|a_n| \leq \frac{\max_{|z|=r} |f(z)|}{r^n}$$
Proof due to Titchmarsh[19].
By Cauchy's coefficient formula:
$$a_n = \frac{1}{2\pi i} \int_{|z|=r} f(z) \frac{dz}{z^{n+1}}$$
We have[20]:
$$|a_n| = \left| \frac{1}{2\pi i} \int_{|z|=r} f(z) \frac{dz}{z^{n+1}} \right| \leq \frac{1}{2\pi} \int_{|z|=r} \frac{|f(z)|}{r^{n+1}} \, |dz|$$
and
$$\int_{|z|=r} |f(z)| \, |dz| \leq \max_{|z|=r} |f(z)| \cdot 2\pi r$$
Therefore:
$$|a_n| \leq \frac{1}{2\pi} \int_{|z|=r} \frac{|f(z)|}{r^{n+1}} \, |dz| \leq \frac{1}{2\pi} \max_{|z|=r} |f(z)| \frac{2\pi r}{r^{n+1}} = \frac{\max_{|z|=r} |f(z)|}{r^n}$$
Pictorially, we are estimating the contour integral by taking $|f(z)|$ at its maximum value around the entire contour.
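As a concrete check of the inequality (a Python sketch of ours; the function $e^z$ and the sample radii are arbitrary choices): for $f(z) = e^z$ we have $a_n = \frac{1}{n!}$ and $\max_{|z|=r} |e^z| = e^r$, so the bound reads $\frac{1}{n!} \leq \frac{e^r}{r^n}$ for every $r > 0$.

```python
import math

n = 10
for r in [1.0, 5.0, 10.0]:
    bound = math.exp(r) / r ** n      # max |e^z| on |z| = r, divided by r^n
    actual = 1 / math.factorial(n)    # |a_n| = 1/n!
    print(r, actual, bound, actual <= bound)
```

The bound is valid for every $0 < r < R$ (here $R = \infty$), and the choice of $r$ matters: for this $f$ the quantity $e^r / r^n$ is smallest at $r = n$, which gives the tightest estimate of this form.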
1. Lang 1999, pp. 53–54.
2. Lang 1999, p. 54.
3. Stroud 2001, p. 765.
4. Stroud 2003, p. 916. Wilf 2006, pp. 50–51.
5. Lang 1999, p. 55.
6. Wilf 2006, p. 52.
7. Wilf 2006, pp. 48–52.
8. Lang 1999, pp. 55–56.
9. Stroud 2003, p. 914.
10. See Wilf 2006, p. 49.
11. Lang 1999, p. 55.
12. Lang 1999, p. 56.
13. This does not prove the case when $t = 0, \infty$.
14. Stroud 2003, p. 863.
15. Lang 1999, p. 69.
16. Lang 1999, pp. 116–117.
17. Titchmarsh 1939, pp. 80–81.
18. Titchmarsh 1939, p. 84.
19. Titchmarsh 1939, p. 84.
20. Titchmarsh 1939, p. 74.
Lang, Serge (1999). Complex Analysis (4th ed.). Springer Science+Business Media, LLC.
Stroud, K. A. (2003). Advanced Engineering Mathematics (4th ed.). Palgrave Macmillan.
Stroud, K. A. (2001). Engineering Mathematics (5th ed.). Palgrave Macmillan.
Titchmarsh, E. C. (1939). The Theory of Functions (2nd ed.). Oxford University Press.
Wilf, Herbert S. (2006). Generatingfunctionology (PDF) (3rd ed.). A K Peters, Ltd.