# Analytic Combinatorics/Darboux Method

## Introduction

Darboux's method is one way of estimating the coefficients of generating functions involving roots.

It is easier than Singularity Analysis, but it applies to a smaller set of functions.

## Big O

We will make use of the "Big O" notation.

We write $f(z)=O(g(z))$ as $z\to \zeta$, meaning there exist positive real numbers $M,\delta$ such that if $0<|z-\zeta |<\delta$:

$|f(z)|\leq Mg(z)$

Alternatively, we can write

$f(n)=O(g(n))$ for positive integers $n$, meaning there exist a positive real number $M$ and a positive integer $N$ such that:

$|f(n)|\leq Mg(n)$ for $n\geq N$. For example, $3n^{2}+n=O(n^{2})$ with $M=4$ and $N=1$.

## Theorem

Theorem due to Wilf.

If we have a function $(1-z)^{\beta }f(z)$, where $\beta \notin \{0,1,2,\ldots \}$, $f(z)$ has a radius of convergence greater than $1$ and $f(z)$ has an expansion near $1$ of $\sum _{j\geq 0}f_{j}(1-z)^{j}$, then:

$[z^{n}](1-z)^{\beta }f(z)=\sum _{j=0}^{m}f_{j}{\frac {n^{-\beta -j-1}}{\Gamma (-\beta -j)}}+O(n^{-m-\beta -2})$

## Example

The theorem is a bit abstract, so we will show an example of how to use it before going into the proof.

Taking an example from Wilf:

${\frac {e^{-z/2-z^{2}/4}}{\sqrt {1-z}}}=(1-z)^{-{\frac {1}{2}}}e^{-z/2-z^{2}/4}$

$e^{-z/2-z^{2}/4}$ is an entire function, so its radius of convergence is infinite and, in particular, greater than $1$.

Near 1 it can be expanded using the Taylor series:

$e^{-z/2-z^{2}/4}=e^{-3/4}+e^{-3/4}(1-z)+\cdots$

Therefore, for $m=0$:

$[z^{n}]{\frac {e^{-z/2-z^{2}/4}}{\sqrt {1-z}}}=e^{-3/4}{\frac {n^{-{\frac {1}{2}}}}{\Gamma ({\frac {1}{2}})}}+O(n^{-{\frac {3}{2}}})$

Or, if we want more precision, we can set $m=1$:

$[z^{n}]{\frac {e^{-z/2-z^{2}/4}}{\sqrt {1-z}}}=e^{-3/4}{\frac {n^{-{\frac {1}{2}}}}{\Gamma ({\frac {1}{2}})}}+e^{-3/4}{\frac {n^{-{\frac {3}{2}}}}{\Gamma (-{\frac {1}{2}})}}+O(n^{-{\frac {5}{2}}})$

and so on.
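As a sanity check (a numerical sketch of our own, not from Wilf), we can compute the exact coefficients of this series from simple recurrences and compare them with the $m=0$ and $m=1$ estimates:

```python
import math

N = 1000

# Taylor coefficients of e^{-z/2 - z^2/4} via the ODE E' = (-1/2 - z/2) E,
# which gives the recurrence n*e_n = -(e_{n-1} + e_{n-2}) / 2
e = [0.0] * (N + 1)
e[0] = 1.0
for n in range(1, N + 1):
    e[n] = -(e[n - 1] + (e[n - 2] if n >= 2 else 0.0)) / (2 * n)

# Taylor coefficients of (1-z)^{-1/2}: c_n = c_{n-1} * (2n-1) / (2n)
c = [1.0] * (N + 1)
for n in range(1, N + 1):
    c[n] = c[n - 1] * (2 * n - 1) / (2 * n)

# Exact [z^N] of the product, by Cauchy convolution
exact = sum(e[k] * c[N - k] for k in range(N + 1))

# Darboux estimates: f_0 = f_1 = e^{-3/4}
f0 = math.exp(-0.75)
approx_m0 = f0 * N ** -0.5 / math.gamma(0.5)
approx_m1 = approx_m0 + f0 * N ** -1.5 / math.gamma(-0.5)

print(exact, approx_m0, approx_m1)
```

At $N=1000$ both estimates agree with the exact coefficient to within a fraction of a percent, with the $m=1$ estimate noticeably closer.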

## Proof

Proof due to Wilf.

We have:

$(1-z)^{\beta }f(z)=\sum _{j\geq 0}f_{j}(1-z)^{\beta +j}$

and:

$\sum _{j\geq 0}f_{j}(1-z)^{\beta +j}=\sum _{j=0}^{m}f_{j}(1-z)^{\beta +j}+\sum _{j>m}f_{j}(1-z)^{\beta +j}$

By factoring out $(1-z)^{\beta +m+1}$ from the last sum:

$\sum _{j>m}f_{j}(1-z)^{\beta +j}=(1-z)^{\beta +m+1}g(z)$

Therefore:

$(1-z)^{\beta }f(z)=\sum _{j=0}^{m}f_{j}(1-z)^{\beta +j}+(1-z)^{\beta +m+1}g(z)$

We have to prove that:

1. $[z^{n}]\sum _{j=0}^{m}f_{j}(1-z)^{\beta +j}=\sum _{j=0}^{m}f_{j}{\frac {n^{-\beta -j-1}}{\Gamma (-\beta -j)}}$
2. $[z^{n}](1-z)^{\beta +m+1}g(z)=O(n^{-m-\beta -2})$

### Proof of 1

By applying Lemma 1 (below):

$[z^{n}]\sum _{j=0}^{m}f_{j}(1-z)^{\beta +j}=\sum _{j=0}^{m}f_{j}[z^{n}](1-z)^{\beta +j}\sim \sum _{j=0}^{m}f_{j}{\frac {n^{-\beta -j-1}}{\Gamma (-\beta -j)}}$

### Proof of 2

Write $(1-z)^{\beta +m+1}=\sum _{k\geq 0}h_{k}z^{k}$ and $g(z)=\sum _{k\geq 0}g_{k}z^{k}$, so that $[z^{n}](1-z)^{\beta +m+1}g(z)=\sum _{k=0}^{n}h_{k}g_{n-k}$. Then:

1. $h_{n}=[z^{n}](1-z)^{\beta +m+1}=O(n^{-m-\beta -2})$ (by Lemma 1).
2. $g_{n}=[z^{n}]g(z)=O(\theta ^{n})\quad (0<\theta <1)$ (because, by assumption in the theorem, the radius of convergence $R$ of $g(z)$ is greater than $1$, so Cauchy's inequality gives $[z^{n}]g(z)=O\left({\frac {1}{R^{n}}}\right)$, and we may choose any $\theta$ with ${\frac {1}{R}}<\theta <1$).
3. $\left|\sum _{0\leq k\leq {\frac {n}{2}}}h_{k}g_{n-k}\right|\leq \left\{\max _{0\leq k\leq {\frac {n}{2}}}|h_{k}|\right\}\left\{\sum _{0\leq k\leq {\frac {n}{2}}}A\theta ^{n-k}\right\}\leq B\cdot C\theta ^{\frac {n}{2}}=O(\theta ^{\frac {n}{2}})$ for constants $A,B,C$: the $h_{k}$ are bounded by some $B$ since $h_{k}=O(k^{-m-\beta -2})$ and $-m-\beta -2<0$, and $\sum _{0\leq k\leq {\frac {n}{2}}}A\theta ^{n-k}\leq A\theta ^{\frac {n}{2}}\sum _{j\geq 0}\theta ^{j}=C\theta ^{\frac {n}{2}}$.
4. $\left|\sum _{{\frac {n}{2}}\leq k\leq n}h_{k}g_{n-k}\right|\leq \left\{\max _{{\frac {n}{2}}\leq k\leq n}|h_{k}|\right\}\left\{\sum _{{\frac {n}{2}}\leq k\leq n}A\theta ^{n-k}\right\}\leq Bn^{-m-\beta -2}\cdot {\frac {A}{1-\theta }}=O(n^{-m-\beta -2})$ because $\max _{{\frac {n}{2}}\leq k\leq n}|h_{k}|=O(n^{-m-\beta -2})$ and $\sum _{{\frac {n}{2}}\leq k\leq n}A\theta ^{n-k}\leq A\sum _{j\geq 0}\theta ^{j}={\frac {A}{1-\theta }}$.

Putting it all together:

$[z^{n}](1-z)^{\beta +m+1}g(z)=O(\theta ^{\frac {n}{2}})+O(n^{-m-\beta -2})=O(n^{-m-\beta -2})$

since $\theta <1$ implies $\theta ^{\frac {n}{2}}=o(n^{-m-\beta -2})$.
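For the running example $f(z)=e^{-z/2-z^{2}/4}$ with $m=0$, the factoring step asserts that $g(z)=(f(z)-f_{0})/(1-z)$ stays analytic (in particular bounded) near $z=1$, with limiting value $f_{1}=e^{-3/4}$. A quick numerical check of this (our own illustration, not from Wilf):

```python
import math

def f(z):
    """The analytic factor from the worked example."""
    return math.exp(-z / 2 - z * z / 4)

f0 = f(1.0)           # f_0 = e^{-3/4}
f1 = math.exp(-0.75)  # expected limit of g(z) as z -> 1

# g(z) = (f(z) - f_0) / (1 - z): the remainder factored out for m = 0.
# It should approach f_1 as z -> 1 rather than blow up.
g_vals = {z: (f(z) - f0) / (1 - z) for z in (0.9, 0.99, 0.999, 0.9999)}
for z, g in g_vals.items():
    print(z, g)
```

The printed values settle toward $e^{-3/4}\approx 0.472$ as $z$ approaches $1$.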

### Lemma 1

$[z^{n}](1-z)^{\beta }\sim {\frac {n^{-\beta -1}}{\Gamma (-\beta )}}$

Proof:

$[z^{n}](1-z)^{\beta }={\binom {\beta }{n}}(-1)^{n}={\binom {n-\beta -1}{n}}={\frac {\Gamma (n-\beta )}{\Gamma (-\beta )\Gamma (n+1)}}={\frac {1}{\Gamma (-\beta )(n-\beta )^{(\beta +1)}}}\sim {\frac {n^{-\beta -1}}{\Gamma (-\beta )}}$ where $x^{(n)}$ is the rising factorial.
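The lemma is easy to check numerically (a sketch of our own, with the sample values of $\beta$ chosen arbitrarily); building the exact coefficient as a running product avoids overflowing the gamma functions:

```python
import math

def coeff(beta, n):
    # [z^n](1-z)^beta = binom(beta, n)(-1)^n = binom(n - beta - 1, n),
    # accumulated as a running product
    c = 1.0
    for k in range(1, n + 1):
        c *= (k - 1 - beta) / k
    return c

n = 10 ** 5
results = {}
for beta in (-0.5, 0.5, 1.5):
    exact = coeff(beta, n)
    asym = n ** (-beta - 1) / math.gamma(-beta)
    results[beta] = (exact, asym)
    print(beta, exact, asym)
```

For each $\beta$ the two values agree to roughly $O(1/n)$ relative error, consistent with the $\sim$ in the lemma.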

## Szegő's theorem

We can apply a similar theorem to functions with multiple singularities. From Wilf and Szegő.

If $f(z)$ is analytic in $|z|<1$ , has a finite number of singularities $e^{i\phi _{1}},e^{i\phi _{2}},\cdots ,e^{i\phi _{r}}$ on the unit circle $|z|=1$ and in the neighbourhood of each singularity has the expansion

$f(z)=\sum _{v\geq 0}c_{v}^{(k)}(1-ze^{-i\phi _{k}})^{\alpha _{k}+v\beta _{k}}\quad (k=1,2,\cdots ,r{\text{ and }}\beta _{k}>0)$

Then we have the asymptotic series

$[z^{n}]f(z)=\sum _{v\geq 0}\sum _{k=1}^{r}c_{v}^{(k)}{\binom {\alpha _{k}+v\beta _{k}}{n}}(-e^{-i\phi _{k}})^{n}$
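As an illustration (our own example, not from the sources), take $f(z)={\frac {1}{\sqrt {1-z^{2}}}}$: it has two singularities, at $z=1$ ($\phi _{1}=0$) and $z=-1$ ($\phi _{2}=\pi$), each with $c_{0}^{(k)}=2^{-1/2}$ and $\alpha _{k}=-{\frac {1}{2}}$. Keeping only the $v=0$ terms of the asymptotic series:

```python
import math

def binom_neg_half(n):
    # binom(-1/2, n) * (-1)^n = binom(n - 1/2, n), as a running product
    c = 1.0
    for k in range(1, n + 1):
        c *= (k - 0.5) / k
    return c

def exact_coeff(n):
    # [z^n] 1/sqrt(1 - z^2) = binom(2m, m)/4^m if n = 2m, else 0
    if n % 2:
        return 0.0
    c = 1.0
    for k in range(1, n // 2 + 1):
        c *= (2 * k - 1) / (2 * k)
    return c

def szego_leading(n):
    # v = 0 contributions from z = 1 and z = -1 combine to
    # 2^{-1/2} * binom(-1/2, n) * (-1)^n * [1 + (-1)^n]
    return binom_neg_half(n) / math.sqrt(2) * (1 + (-1) ** n)

print(exact_coeff(1000), szego_leading(1000))
print(exact_coeff(1001), szego_leading(1001))
```

The two singularities interfere: their contributions cancel for odd $n$, where the true coefficient is zero, and reinforce for even $n$.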