# Analytic Combinatorics/Tauberian Theorem

## Introduction

'Tauberian' refers to a class of theorems which, under an additional ("Tauberian") side condition, deduce the asymptotic behaviour of a sequence from the asymptotic behaviour of its generating function; they are partial converses of Abelian theorems. The full scope of Tauberian theory is too large to cover here.

We prove here only a particular Tauberian theorem due to Hardy, Littlewood and Karamata, which is useful in analytic combinatorics.

## Theorem

The statement below follows Hardy[1].

If ${\displaystyle f(y)=\sum _{n\geq 0}a_{n}e^{-ny}}$ with ${\displaystyle a_{n}\geq 0}$, and ${\displaystyle f(y)\sim y^{-\sigma }L(y^{-1})}$ as ${\displaystyle y\to 0}$, where ${\displaystyle \sigma >0}$ and ${\displaystyle L}$ is a slowly varying function, then ${\displaystyle a_{n}\sim {\frac {n^{\sigma -1}}{\Gamma (\sigma )}}L(n)}$ as ${\displaystyle n\to \infty }$.

Bear in mind that if ${\displaystyle f(x)=\sum a_{n}x^{n}={\frac {1}{(1-x)^{\sigma }}}L({\frac {1}{1-x}})}$ we can convert it to a General Dirichlet series (without changing the value of the coefficients we are interested in) by the substitution ${\displaystyle x=e^{-y}}$ such that

${\displaystyle f(y)=\sum a_{n}(e^{-y})^{n}={\frac {1}{(1-e^{-y})^{\sigma }}}L({\frac {1}{1-e^{-y}}})\sim y^{-\sigma }L(y^{-1})}$

as ${\displaystyle y\to 0}$[2].
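As a quick numerical illustration of this substitution (not part of the original argument): taking ${\displaystyle a_{n}\equiv 1}$, so that ${\displaystyle \sigma =1}$ and ${\displaystyle L\equiv 1}$, we get ${\displaystyle f(y)=1/(1-e^{-y})}$, and its ratio against ${\displaystyle y^{-1}}$ should tend to ${\displaystyle 1}$:

```python
import math

# With a_n = 1 for all n: f(y) = sum_{n>=0} e^{-ny} = 1/(1 - e^{-y}).
# The hypothesis f(y) ~ y^{-sigma} L(1/y) then holds with sigma = 1, L = 1.
for y in [1e-1, 1e-3, 1e-6]:
    f = 1.0 / (1.0 - math.exp(-y))
    print(y, f * y)  # f(y) divided by y^{-1}; tends to 1 as y -> 0
```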

This gives us

${\displaystyle [x^{n}]{\frac {1}{(1-x)^{\sigma }}}L({\frac {1}{1-x}})\sim {\frac {n^{\sigma -1}}{\Gamma (\sigma )}}L(n)}$

which is the form given in Flajolet and Sedgewick 2009, p. 435.
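For a concrete check of this coefficient form (illustrative values, not from the text): with ${\displaystyle \sigma =1/2}$ and ${\displaystyle L\equiv 1}$, the coefficients ${\displaystyle c_{n}=[x^{n}](1-x)^{-1/2}}$ satisfy ${\displaystyle c_{0}=1}$, ${\displaystyle c_{n}=c_{n-1}(n-{\tfrac {1}{2}})/n}$, and should behave like ${\displaystyle n^{-1/2}/\Gamma (1/2)}$:

```python
import math

# Coefficients of (1-x)^(-1/2) via the generalized binomial series:
# c_0 = 1, c_n = c_{n-1} * (n - 1/2) / n.
n_max = 100_000
c = 1.0
for n in range(1, n_max + 1):
    c *= (n - 0.5) / n

predicted = n_max ** (-0.5) / math.gamma(0.5)  # n^(sigma-1) / Gamma(sigma)
print(c / predicted)  # ratio tends to 1 as n grows
```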

## Proof

Proof due to Hardy[3].

If ${\displaystyle f(x)={\frac {1}{(1-x)^{\sigma }}}L({\frac {1}{1-x}})=\sum _{0}^{\infty }a_{n}x^{n}}$, the method below actually finds the asymptotic estimate of ${\displaystyle s_{n}=\sum _{i=0}^{n}a_{i}}$, after which we can find ${\displaystyle a_{n}}$ as ${\displaystyle s_{n}-s_{n-1}}$, or by noting that ${\displaystyle a_{n}}$ is the ${\displaystyle n}$-th partial sum of the coefficients of ${\displaystyle (1-x)f(x)}$.

Let ${\displaystyle a(t)}$ be a step function where ${\displaystyle a(n)-a(n-1)=a_{n}}$, ${\displaystyle a(n)=\sum _{i=0}^{n}a_{i}}$ and ${\displaystyle a(0)=0}$[4].

${\displaystyle a(y^{-1})=\int _{0}^{y^{-1}}da(t)}$

Using integration by parts[5]:

${\displaystyle \int _{0}^{y^{-1}}da(t)=a(y^{-1})-a(0)-\int _{0}^{y^{-1}}a(t)d1=a(y^{-1})}$

Because ${\displaystyle a(0)=0}$ and, the integrator ${\displaystyle 1}$ being constant, ${\displaystyle \int _{0}^{y^{-1}}a(t)\,d1=0}$.

If ${\displaystyle g(x)=x^{-1}\quad (e^{-1}\leq x\leq 1),\quad g(x)=0\quad (0\leq x<e^{-1})}$ then

${\displaystyle \int _{0}^{y^{-1}}da(t)=\int _{0}^{\infty }e^{-yt}g(e^{-yt})da(t)}$[6]

Because when ${\displaystyle 0\leq t\leq y^{-1}}$ we have ${\displaystyle e^{-1}\leq e^{-yt}\leq 1}$, so ${\displaystyle g(e^{-yt})=e^{yt}}$ and ${\displaystyle e^{-yt}g(e^{-yt})=e^{-yt}e^{yt}=1}$; and when ${\displaystyle t>y^{-1}}$ we have ${\displaystyle e^{-yt}<e^{-1}}$, so ${\displaystyle g(e^{-yt})=0}$ and ${\displaystyle e^{-yt}g(e^{-yt})=0}$.
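In other words, ${\displaystyle e^{-yt}g(e^{-yt})}$ is exactly the indicator function of ${\displaystyle [0,y^{-1}]}$. A small sanity check (illustrative Python, not from the source):

```python
import math

def g(x):
    # g(x) = 1/x for e^{-1} <= x <= 1, and 0 for 0 <= x < e^{-1}
    return 1.0 / x if math.exp(-1) <= x <= 1 else 0.0

def kernel(t, y):
    return math.exp(-y * t) * g(math.exp(-y * t))

y = 0.1  # so y^{-1} = 10
assert abs(kernel(3.0, y) - 1.0) < 1e-12   # inside [0, 1/y]: kernel = 1
assert abs(kernel(10.0, y) - 1.0) < 1e-12  # boundary t = 1/y
assert kernel(10.5, y) == 0.0              # beyond 1/y: kernel = 0
print("ok")
```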

### Lemma 1

${\displaystyle \int _{0}^{\infty }e^{-yt}g(e^{-yt})da(t)\sim {\frac {1}{\Gamma (\sigma )}}y^{-\sigma }L(y^{-1})\int _{0}^{\infty }e^{-t}t^{\sigma -1}g(e^{-t})dt}$[7]

as ${\displaystyle y\to 0}$.

To prove this we need two further lemmas.

### Lemma 2

${\displaystyle \int _{0}^{\infty }e^{-yt}Q(e^{-yt})da(t)\sim {\frac {1}{\Gamma (\sigma )}}y^{-\sigma }L(y^{-1})\int _{0}^{\infty }e^{-t}t^{\sigma -1}Q(e^{-t})dt}$[8]

where ${\displaystyle Q}$ is any polynomial, as ${\displaystyle y\to 0}$.

${\displaystyle \int _{0}^{\infty }e^{-yt}Q(e^{-yt})da(t)=\int _{0}^{\infty }e^{-yt}(\cdots +e^{-ytn}+\cdots )da(t)=\int _{0}^{\infty }\cdots +\int _{0}^{\infty }e^{-yt}e^{-ytn}da(t)+\int _{0}^{\infty }\cdots }$[9]
${\displaystyle \int _{0}^{\infty }e^{-yt}e^{-ytn}da(t)=\int _{0}^{\infty }e^{-(n+1)yt}da(t)\sim (n+1)^{-\sigma }y^{-\sigma }L({\frac {1}{(n+1)y}})}$

as ${\displaystyle y\to 0}$ by assumption in the theorem.

${\displaystyle (n+1)^{-\sigma }y^{-\sigma }L({\frac {1}{(n+1)y}})\sim (n+1)^{-\sigma }y^{-\sigma }L({\frac {1}{y}})}$

by definition of slowly varying function.

${\displaystyle (n+1)^{-\sigma }y^{-\sigma }L({\frac {1}{y}})=y^{-\sigma }L({\frac {1}{y}}){\frac {1}{\Gamma (\sigma )}}\int _{0}^{\infty }e^{-t}e^{-tn}t^{\sigma -1}dt}$
${\displaystyle \cdots +y^{-\sigma }L({\frac {1}{y}}){\frac {1}{\Gamma (\sigma )}}\int _{0}^{\infty }e^{-t}e^{-tm}t^{\sigma -1}dt+y^{-\sigma }L({\frac {1}{y}}){\frac {1}{\Gamma (\sigma )}}\int _{0}^{\infty }e^{-t}e^{-tn}t^{\sigma -1}dt+\cdots =y^{-\sigma }L({\frac {1}{y}}){\frac {1}{\Gamma (\sigma )}}\int _{0}^{\infty }e^{-t}Q(e^{-t})t^{\sigma -1}dt}$[9]
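The identity used for each monomial above is the Gamma integral ${\displaystyle (n+1)^{-\sigma }={\frac {1}{\Gamma (\sigma )}}\int _{0}^{\infty }e^{-(n+1)t}t^{\sigma -1}dt}$, obtained by substituting ${\displaystyle u=(n+1)t}$ in ${\displaystyle \Gamma (\sigma )=\int _{0}^{\infty }e^{-u}u^{\sigma -1}du}$. A numerical sketch with illustrative values ${\displaystyle \sigma =2.5}$, ${\displaystyle n=2}$ (not from the text):

```python
import math

sigma, n = 2.5, 2

# Midpoint Riemann sum for (1/Gamma(sigma)) * int_0^inf e^{-(n+1)t} t^{sigma-1} dt;
# the integrand is negligible beyond t = 40 for these parameters.
dt = 1e-4
total = 0.0
t = dt / 2
while t < 40.0:
    total += math.exp(-(n + 1) * t) * t ** (sigma - 1) * dt
    t += dt

lhs = total / math.gamma(sigma)
rhs = (n + 1) ** (-sigma)
print(lhs, rhs)  # the two agree to several decimal places
```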

### Lemma 3

If ${\displaystyle g(x)}$ is real-valued and Riemann integrable in the open interval ${\displaystyle (0,1)}$ and ${\displaystyle \sigma >0}$, then there exist polynomials ${\displaystyle p(x)}$ and ${\displaystyle P(x)}$ such that ${\displaystyle p(x)\leq g(x)\leq P(x)}$ and

${\displaystyle \int _{0}^{1}(P(x)-p(x))(\log {\frac {1}{x}})^{\sigma -1}dx=\int _{0}^{\infty }e^{-t}t^{\sigma -1}(P(e^{-t})-p(e^{-t}))dt<\epsilon \Gamma (\sigma )}$[10]

Construct continuous functions ${\displaystyle h}$ and ${\displaystyle H}$[11] such that

${\displaystyle h\leq g\leq H,\quad \int _{0}^{1}(H-g)dx<\epsilon ,\quad \int _{0}^{1}(g-h)dx<\epsilon }$

Then

${\displaystyle \int _{0}^{1}(H-g)(\log {\frac {1}{x}})^{\sigma -1}dx<\epsilon \int _{0}^{1}(\log {\frac {1}{x}})^{\sigma -1}dx=\epsilon \Gamma (\sigma )}$[12]

and

${\displaystyle \int _{0}^{1}(g-h)(\log {\frac {1}{x}})^{\sigma -1}dx<\epsilon \int _{0}^{1}(\log {\frac {1}{x}})^{\sigma -1}dx=\epsilon \Gamma (\sigma )}$

By the Weierstrass approximation theorem there are polynomials ${\displaystyle q}$ and ${\displaystyle Q}$ such that ${\displaystyle |h-q|<\epsilon }$ and ${\displaystyle |H-Q|<\epsilon }$. If ${\displaystyle p(x)=q(x)-\epsilon }$ and ${\displaystyle P(x)=Q(x)+\epsilon }$ then ${\displaystyle p(x)\leq h(x)\leq g(x)\leq H(x)\leq P(x)}$ as required by the lemma and

${\displaystyle \int _{0}^{1}(P(x)-g(x))(\log {\frac {1}{x}})^{\sigma -1}dx\leq \int _{0}^{1}(P(x)-Q(x))(\log {\frac {1}{x}})^{\sigma -1}dx+\int _{0}^{1}|Q(x)-H(x)|(\log {\frac {1}{x}})^{\sigma -1}dx+\int _{0}^{1}(H(x)-g(x))(\log {\frac {1}{x}})^{\sigma -1}dx<3\epsilon \Gamma (\sigma )}$

By virtue of ${\displaystyle g(x)}$ being Riemann-integrable, we can find finite step functions ${\displaystyle g_{1}}$ and ${\displaystyle g_{2}}$ such that ${\displaystyle g_{1}(x)\leq g(x)\leq g_{2}(x)}$ and

${\displaystyle \int _{0}^{1}(g_{2}(x)-g_{1}(x))(\log {\frac {1}{x}})^{\sigma -1}dx<3\epsilon \Gamma (\sigma )}$

Then, as we have proven above, there are polynomials ${\displaystyle p(x)}$ and ${\displaystyle P(x)}$ such that ${\displaystyle p(x)\leq g_{1}(x)\leq g(x)\leq g_{2}(x)\leq P(x)}$ and

${\displaystyle \int _{0}^{1}(P(x)-g_{2}(x))(\log {\frac {1}{x}})^{\sigma -1}dx<\epsilon \Gamma (\sigma ),\quad \int _{0}^{1}(g_{1}(x)-p(x))(\log {\frac {1}{x}})^{\sigma -1}dx<\epsilon \Gamma (\sigma )}$

Combining these we can complete the proof of lemma 3:

${\displaystyle \int _{0}^{1}(P(x)-p(x))(\log {\frac {1}{x}})^{\sigma -1}dx<5\epsilon \Gamma (\sigma )}$

Returning to the proof of lemma 1:

${\displaystyle \limsup _{y\to 0}{\frac {\int _{0}^{\infty }e^{-yt}g(e^{-yt})da(t)}{y^{-\sigma }L(y^{-1})}}\leq \lim _{y\to 0}{\frac {\int _{0}^{\infty }e^{-yt}P(e^{-yt})da(t)}{y^{-\sigma }L(y^{-1})}}={\frac {1}{\Gamma (\sigma )}}\int _{0}^{\infty }e^{-t}t^{\sigma -1}P(e^{-t})dt}$

by lemma 2. The first inequality uses ${\displaystyle g\leq P}$ together with ${\displaystyle da(t)\geq 0}$ (that is, ${\displaystyle a_{n}\geq 0}$); this is where the Tauberian condition enters.

Lemma 3 implies that

${\displaystyle \int _{0}^{\infty }e^{-t}t^{\sigma -1}(P(e^{-t})-g(e^{-t}))dt<\epsilon \Gamma (\sigma )}$

so that

${\displaystyle {\frac {1}{\Gamma (\sigma )}}\int _{0}^{\infty }e^{-t}t^{\sigma -1}P(e^{-t})dt-{\frac {1}{\Gamma (\sigma )}}\int _{0}^{\infty }e^{-t}t^{\sigma -1}g(e^{-t})dt<\epsilon }$

and

${\displaystyle {\frac {1}{\Gamma (\sigma )}}\int _{0}^{\infty }e^{-t}t^{\sigma -1}P(e^{-t})dt<{\frac {1}{\Gamma (\sigma )}}\int _{0}^{\infty }e^{-t}t^{\sigma -1}g(e^{-t})dt+\epsilon }$

and finally

${\displaystyle \limsup _{y\to 0}{\frac {\int _{0}^{\infty }e^{-yt}g(e^{-yt})da(t)}{y^{-\sigma }L(y^{-1})}}<{\frac {1}{\Gamma (\sigma )}}\int _{0}^{\infty }e^{-t}t^{\sigma -1}g(e^{-t})dt+\epsilon }$

By a similar argument, one can prove

${\displaystyle \liminf _{y\to 0}{\frac {\int _{0}^{\infty }e^{-yt}g(e^{-yt})da(t)}{y^{-\sigma }L(y^{-1})}}>{\frac {1}{\Gamma (\sigma )}}\int _{0}^{\infty }e^{-t}t^{\sigma -1}g(e^{-t})dt-\epsilon }$

Since ${\displaystyle \epsilon }$ is arbitrary, combining both these results concludes the proof of lemma 1:

${\displaystyle \int _{0}^{\infty }e^{-yt}g(e^{-yt})da(t)\sim {\frac {1}{\Gamma (\sigma )}}y^{-\sigma }L(y^{-1})\int _{0}^{\infty }e^{-t}t^{\sigma -1}g(e^{-t})dt}$

Putting it all together, and using that ${\displaystyle e^{-t}g(e^{-t})=1}$ for ${\displaystyle 0\leq t\leq 1}$ and ${\displaystyle 0}$ for ${\displaystyle t>1}$:

${\displaystyle a(y^{-1})=\int _{0}^{y^{-1}}da(t)=\int _{0}^{\infty }e^{-yt}g(e^{-yt})da(t)\sim {\frac {1}{\Gamma (\sigma )}}y^{-\sigma }L(y^{-1})\int _{0}^{\infty }e^{-t}t^{\sigma -1}g(e^{-t})dt={\frac {1}{\Gamma (\sigma )}}y^{-\sigma }L(y^{-1})\int _{0}^{1}t^{\sigma -1}dt={\frac {1}{\Gamma (\sigma +1)}}y^{-\sigma }L(y^{-1})}$

or

${\displaystyle a(y)\sim {\frac {y^{\sigma }}{\Gamma (\sigma +1)}}L(y)}$

as ${\displaystyle y\to \infty }$. Then, if ${\displaystyle f(x)={\frac {1}{(1-x)^{\sigma }}}L({\frac {1}{1-x}})}$, note that ${\displaystyle (1-x)f(x)={\frac {1}{(1-x)^{\sigma -1}}}L({\frac {1}{1-x}})}$ and that ${\displaystyle a_{n}}$ is the ${\displaystyle n}$-th partial sum of the coefficients of ${\displaystyle (1-x)f(x)}$. Applying the result above to ${\displaystyle (1-x)f(x)}$, with ${\displaystyle \sigma }$ replaced by ${\displaystyle \sigma -1}$ (which requires ${\displaystyle \sigma >1}$ and that these coefficients are nonnegative, i.e. that ${\displaystyle a_{n}}$ is nondecreasing):

${\displaystyle a_{n}=\sum _{i=0}^{n}[x^{i}]{\frac {1}{(1-x)^{\sigma -1}}}L({\frac {1}{1-x}})\sim {\frac {n^{\sigma -1}}{\Gamma (\sigma )}}L(n)}$
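The partial-sum estimate proven above, ${\displaystyle s_{n}\sim n^{\sigma }L(n)/\Gamma (\sigma +1)}$, can also be checked numerically (illustrative Python with ${\displaystyle \sigma =1/2}$ and ${\displaystyle L\equiv 1}$, so ${\displaystyle a_{n}=[x^{n}](1-x)^{-1/2}}$; these values are not from the text):

```python
import math

# a_n = [x^n](1-x)^{-1/2}: a_0 = 1, a_n = a_{n-1} * (n - 1/2) / n.
# The proof gives s_n = a_0 + ... + a_n ~ n^{1/2} / Gamma(3/2).
n_max = 100_000
a = 1.0
s = 1.0
for n in range(1, n_max + 1):
    a *= (n - 0.5) / n
    s += a

predicted = n_max ** 0.5 / math.gamma(1.5)
print(s / predicted)  # ratio tends to 1 as n grows
```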

## Notes

1. Hardy 1949, p. 166.
2. Because ${\displaystyle e^{-y}=1-y+o(y)}$ as ${\displaystyle y\to 0}$, which is equivalent to ${\displaystyle e^{-y}\sim 1-y}$ as ${\displaystyle y\to 0}$. See De Bruijn 1981, p. 10 and Hardy 1949, p. 155.
3. Hardy 1949, pp. 166-168.
4. Hardy 1949, p. 158.
5. w:Riemann–Stieltjes_integral#Properties
6. Hardy 1949, p. 158.
7. Hardy 1949, p. 166.
8. Hardy 1949, p. 168.
9. By linearity of the Riemann–Stieltjes integral.
10. Hardy 1949, p. 166.
11. For example, you could construct continuous, piecewise linear functions by partitioning ${\displaystyle [0,1]}$ into ${\displaystyle \{x_{i}\}}$, putting ${\displaystyle h(x_{i})=\inf\{g(x)|x\in [x_{i},x_{i+1}]\}}$ and ${\displaystyle H(x_{i})=\sup\{g(x)|x\in [x_{i},x_{i+1}]\}}$ and then "joining the dots" between these values. Refine the partition until the conditions are met.
12. w:Gamma_function#Integral_representations

## References

• Hardy, G.H. (1949). Divergent Series (1st ed.). Oxford University Press.
• Flajolet, Philippe; Sedgewick, Robert (2009). Analytic Combinatorics (PDF). Cambridge University Press.
• De Bruijn, N.G. (1981). Asymptotic Methods in Analysis. Dover Publications.