# Chapter 6

## 7

### a

By page 121 we know that ${\displaystyle f}$ must be bounded, say by ${\displaystyle M}$. We need to show that given ${\displaystyle \epsilon >0}$ we can find some ${\displaystyle c}$ such that ${\displaystyle \int _{c}^{1}f(x)dx\in B_{\epsilon }\left(\int _{0}^{1}f(x)dx\right).}$ By Theorem 6.12 (c) we have ${\displaystyle \int _{0}^{1}f(x)dx=\int _{0}^{c}f(x)dx+\int _{c}^{1}f(x)dx}$ and ${\displaystyle \left|\int _{0}^{c}f(x)dx\right|\leq M\cdot c}$.

Hence ${\displaystyle \left|\int _{0}^{1}f(x)dx-\int _{c}^{1}f(x)dx\right|\leq M\cdot c}$, and since ${\displaystyle M}$ is fixed while we may choose any ${\displaystyle c>0}$, taking ${\displaystyle c={\frac {\epsilon }{2M}}}$ yields ${\displaystyle \left|\int _{0}^{1}f(x)dx-\int _{c}^{1}f(x)dx\right|\leq {\frac {\epsilon }{2}}<\epsilon }$. So, given ${\displaystyle \epsilon }$ we can always choose a ${\displaystyle c}$ such that ${\displaystyle \int _{c}^{1}f(x)dx\in B_{\epsilon }\left(\int _{0}^{1}f(x)dx\right)}$, as desired.
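As a numeric sanity check (not part of the proof), the sketch below approximates ${\displaystyle \int _{c}^{1}f}$ for a sample bounded function, here the illustrative choice ${\displaystyle f(x)=\sin(1/x)}$ with bound ${\displaystyle M=1}$, and verifies that successive values differ by no more than ${\displaystyle M\cdot c}$:

```python
import math

def riemann(f, a, b, n=100_000):
    """Midpoint Riemann sum approximating the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Sample bounded integrand: |f| <= M = 1, and f is integrable on every [c, 1].
f = lambda x: math.sin(1.0 / x)

cs = [0.2, 0.1, 0.05, 0.02]
vals = [riemann(f, c, 1.0) for c in cs]
# |int_c^1 f - int_{c'}^1 f| = |int_{c'}^c f| <= M*c for c' < c, so the values settle.
for c, v, w in zip(cs, vals, vals[1:]):
    assert abs(v - w) <= 1.0 * c + 1e-9
```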

### b

Consider the function defined to be ${\displaystyle n(-1)^{n}}$ on each subinterval of ${\displaystyle [0,1]}$ of length ${\displaystyle 6/(n^{2}\pi ^{2})}$ described below, and zero at the endpoints of those subintervals. This function is well defined, since we know that ${\displaystyle \sum _{n=1}^{\infty }6/(n^{2}\pi ^{2})=1}$.

More specifically, the function has value ${\displaystyle n(-1)^{n}}$ on the open interval ${\displaystyle (p_{n+1},p_{n})}$, where ${\displaystyle p_{n}=1-\sum _{m=1}^{n-1}6/(m^{2}\pi ^{2})}$; thus ${\displaystyle p_{1}=1}$, the ${\displaystyle p_{n}}$ decrease to ${\displaystyle 0}$, and each interval has length ${\displaystyle p_{n}-p_{n+1}=6/(n^{2}\pi ^{2})}$.

First we evaluate the integral of the function itself. Consider a partition of the interval ${\displaystyle [0,1]}$ with points at each ${\displaystyle p_{n}\pm \epsilon }$ for some small ${\displaystyle \epsilon >0}$.

Then the lower and upper sums corresponding to the interval of the partition from ${\displaystyle p_{n+1}+\epsilon }$ to ${\displaystyle p_{n}-\epsilon }$ are equal, since the function is constant on this interval. Moreover, as ${\displaystyle \epsilon \to 0}$ the upper and lower sums over it both approach ${\displaystyle n(-1)^{n}(p_{n}-p_{n+1})}$.

Thus we can express the value of the integral as the sum of the series ${\displaystyle \sum _{n=1}^{\infty }\left({\frac {6}{n^{2}\pi ^{2}}}\right)n(-1)^{n}=\sum _{n=1}^{\infty }\left({\frac {(-1)^{n}6}{n\pi ^{2}}}\right)={\frac {6}{\pi ^{2}}}\sum _{n=1}^{\infty }\left({\frac {(-1)^{n}}{n}}\right)}$, but we recognize this as just a constant multiple of the alternating harmonic series, which converges by the alternating series test. Hence, the integral converges.

Now we examine the integral of the absolute value of the function. We argue similarly to the above, again partitioning the function at ${\displaystyle p_{n}\pm \epsilon }$ as defined above. The difference is that now, as we let ${\displaystyle \epsilon \to 0}$ the upper and lower sums both go to ${\displaystyle \sum _{n=1}^{\infty }\left({\frac {6}{n^{2}\pi ^{2}}}\right)n=\sum _{n=1}^{\infty }\left({\frac {6}{n\pi ^{2}}}\right)}$ ${\displaystyle ={\frac {6}{\pi ^{2}}}\sum _{n=1}^{\infty }{\frac {1}{n}}}$ and so the integral does not exist, as this is the harmonic series, which does not converge.

In the above proof of divergence the important point is that the lower sums diverge. The fact that the upper sums diverge is an immediate consequence of this.

So, we have demonstrated a function whose integral converges, but does not converge absolutely as desired.
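As an illustration (assuming nothing beyond the series derived above), the following sketch compares partial sums of the signed series, which settle near ${\displaystyle -{\frac {6}{\pi ^{2}}}\ln 2}$, with partial sums of the absolute series, which grow without bound like the harmonic series:

```python
import math

scale = 6 / math.pi**2  # common factor of both series
N = 100_000
signed = sum(scale * (-1) ** n / n for n in range(1, N + 1))
absolute = sum(scale / n for n in range(1, N + 1))

# The alternating harmonic series sums to -ln 2 (with the leading sign here).
assert abs(signed - (-scale * math.log(2))) < 1e-4
# The absolute series is the harmonic series: partial sums exceed (6/pi^2) ln N.
assert absolute > scale * math.log(N)
```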

## 8

We begin by showing (${\displaystyle \Rightarrow }$) that ${\displaystyle \int _{1}^{\infty }f(x)dx}$ converges if ${\displaystyle \sum _{n=1}^{\infty }f(n)}$ converges.

So, we assume to start that ${\displaystyle \sum _{n=1}^{\infty }f(n)}$ converges. Now consider the partition ${\displaystyle P=\{p_{n}\ |\ p_{n}=n,n\in \mathbb {N} \}}$. Since ${\displaystyle f(x)}$ decreases monotonically it must be that ${\displaystyle \inf f([p_{n},p_{n+1}])=f(p_{n+1})}$ and similarly that ${\displaystyle \sup f([p_{n},p_{n+1}])=f(p_{n})}$. Thus, the integral which we are trying to evaluate is bounded above by ${\displaystyle \sum _{n=1}^{\infty }f(n)}$ and below by ${\displaystyle \sum _{n=2}^{\infty }f(n)}$.

Now we observe that ${\displaystyle \int _{1}^{\infty }f(x)dx}$ may be written as a sum over the domain as ${\displaystyle \sum _{n=1}^{\infty }\left(\int _{p_{n}}^{p_{n+1}}f(x)dx\right)}$. We know moreover that each of these integrals exists, by Theorem 6.9. Also, since ${\displaystyle f(x)}$ is always nonnegative, each such integral must be nonnegative. Therefore, the integral may be expressed as the sum of a nonnegative series which is bounded above. Hence, by Theorem 3.24 the integral exists.

Now we prove (${\displaystyle \Leftarrow }$) that if ${\displaystyle \int _{1}^{\infty }f(x)dx}$ converges then ${\displaystyle \sum _{n=1}^{\infty }f(n)}$ converges.

So assume now that ${\displaystyle \int _{1}^{\infty }f(x)dx}$ converges. Then we can prove that the summation ${\displaystyle \sum _{n=1}^{\infty }f(n)}$ satisfies the Cauchy criterion. We established above that ${\displaystyle \int _{k}^{\infty }f(x)dx}$ is bounded above by ${\displaystyle \sum _{n=k}^{\infty }f(n)}$ and below by ${\displaystyle \sum _{n=k+1}^{\infty }f(n)}$. In particular, the tail ${\displaystyle \sum _{n=k+1}^{\infty }f(n)}$ is bounded above by the integral ${\displaystyle \int _{k}^{\infty }f(x)dx}$. Moreover, since the integral ${\displaystyle \int _{1}^{\infty }f(x)dx}$ exists and ${\displaystyle f}$ is nonnegative, given ${\displaystyle \epsilon >0}$ there exists ${\displaystyle M}$ such that ${\displaystyle \int _{M}^{\infty }f(x)dx<\epsilon }$; for otherwise the integral would not exist and would instead tend to infinity.

So now we can apply the Cauchy criterion for series: given ${\displaystyle \epsilon >0}$ there exists ${\displaystyle M}$ such that ${\displaystyle \sum _{n=M+1}^{\infty }f(n)\leq \int _{M}^{\infty }f(x)dx<\epsilon }$, so the tails of the series tend to zero.

Thus, the sum converges as desired.
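The sandwich ${\displaystyle \sum _{n=2}^{\infty }f(n)\leq \int _{1}^{\infty }f(x)dx\leq \sum _{n=1}^{\infty }f(n)}$ underlying both directions can be checked numerically; a minimal sketch for the sample decreasing function ${\displaystyle f(x)=1/x^{2}}$ (my choice, not from the problem):

```python
# Sandwich for decreasing f(x) = 1/x^2 on [1, N]:
#   sum_{n=2}^{N} f(n)  <=  integral_1^N f(x) dx  <=  sum_{n=1}^{N-1} f(n)
N = 1000
f = lambda x: 1.0 / x**2
integral = 1.0 - 1.0 / N  # integral_1^N x^{-2} dx = 1 - 1/N by the FTC
lower = sum(f(n) for n in range(2, N + 1))
upper = sum(f(n) for n in range(1, N))
assert lower <= integral <= upper
```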

## 10

### a

We will prove that if ${\displaystyle u\geq 0}$ and ${\displaystyle v\geq 0}$ then ${\displaystyle uv\leq {\frac {u^{p}}{p}}+{\frac {v^{q}}{q}}}$, and that equality holds if and only if ${\displaystyle u^{p}=v^{q}}$. \begin{proof} We begin by proving the special case of equality.

Assume that ${\displaystyle u^{p}=v^{q}}$. Then ${\displaystyle u=v^{q/p}}$, so ${\displaystyle vu=v^{q/p+1}=v^{q(1/p+1/q)}=v^{q}}$, using ${\displaystyle 1/p+1/q=1}$; each step is reversible, so ${\displaystyle vu=v^{q}\Leftrightarrow u^{p}=v^{q}}$. (Similarly we can show that ${\displaystyle vu=u^{p}\Leftrightarrow u^{p}=v^{q}}$.) In this case we have equality, since ${\displaystyle {\frac {u^{p}}{p}}+{\frac {v^{q}}{q}}=v^{q}\left({\frac {1}{p}}+{\frac {1}{q}}\right)=v^{q}=uv}$. That the inequality is strict when ${\displaystyle u^{p}\neq v^{q}}$ follows from the variational argument below.

Now we show that as we vary ${\displaystyle u}$ we must always have ${\displaystyle uv\leq {\frac {u^{p}}{p}}+{\frac {v^{q}}{q}}}$. Compute the derivative of ${\displaystyle uv}$ with respect to ${\displaystyle u}$ and the derivative of ${\displaystyle {\frac {u^{p}}{p}}+{\frac {v^{q}}{q}}}$ with respect to ${\displaystyle u}$: we get ${\displaystyle v}$ and ${\displaystyle u^{p-1}}$ respectively. If ${\displaystyle u^{p}=v^{q}}$ these are equal, since we showed above that ${\displaystyle uv=u^{p}}$ in that case, i.e. ${\displaystyle v=u^{p-1}}$. If ${\displaystyle u}$ is larger than this value then ${\displaystyle u^{p-1}>v}$, and if ${\displaystyle u}$ is smaller then ${\displaystyle u^{p-1}<v}$. Hence the difference ${\displaystyle {\frac {u^{p}}{p}}+{\frac {v^{q}}{q}}-uv}$ is decreasing for ${\displaystyle u<v^{q/p}}$ and increasing for ${\displaystyle u>v^{q/p}}$, so it attains its minimum, namely ${\displaystyle 0}$, exactly at ${\displaystyle u=v^{q/p}}$.

This argument can be repeated in an analogous manner for variations in ${\displaystyle v}$, and given any ${\displaystyle p}$ and ${\displaystyle q}$ we can find values for which ${\displaystyle u^{p}=v^{q}}$.

Thus, we observe that ${\displaystyle uv\leq {\frac {u^{p}}{p}}+{\frac {v^{q}}{q}}}$ as desired. \end{proof}
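A quick numeric check of the inequality just proved (the grid of test values is arbitrary):

```python
import itertools, math

p = 3.0
q = p / (p - 1)  # conjugate exponent, so 1/p + 1/q = 1

# uv <= u^p/p + v^q/q on an arbitrary grid of nonnegative test values
for u, v in itertools.product([0.0, 0.5, 1.0, 2.0, 5.0], repeat=2):
    assert u * v <= u**p / p + v**q / q + 1e-12

# equality exactly when u^p = v^q: take v = u^(p-1), so that v^q = u^p
u = 1.7
v = u ** (p - 1)
assert math.isclose(u * v, u**p / p + v**q / q)
```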

### b

If ${\displaystyle f\in {\mathcal {R}}(\alpha )}$, ${\displaystyle g\in {\mathcal {R}}(\alpha )}$, ${\displaystyle f\geq 0}$, ${\displaystyle g\geq 0}$, and ${\displaystyle \int _{a}^{b}f^{p}d\alpha =1=\int _{a}^{b}g^{q}d\alpha ,}$ then ${\displaystyle \int _{a}^{b}fgd\alpha \leq 1}$ \begin{proof}

If ${\displaystyle 0\leq f\in {\mathcal {R}}(\alpha )}$ and ${\displaystyle 0\leq g\in {\mathcal {R}}(\alpha )}$ then ${\displaystyle f^{p}}$ and ${\displaystyle g^{q}}$ are in ${\displaystyle {\mathcal {R}}(\alpha )}$ by Theorem 6.11. Also, we have ${\displaystyle fg\in {\mathcal {R}}(\alpha )}$. Applying part (a) pointwise with ${\displaystyle u=f(x)}$ and ${\displaystyle v=g(x)}$ and integrating, we get ${\displaystyle \int _{a}^{b}fgd\alpha \leq {\frac {1}{p}}\int _{a}^{b}f^{p}d\alpha +{\frac {1}{q}}\int _{a}^{b}g^{q}d\alpha =1}$ as desired.\end{proof}

### c

We prove Hölder's inequality. \begin{proof} If ${\displaystyle f}$ and ${\displaystyle g}$ are complex valued then we get ${\displaystyle \left|\int _{a}^{b}fgd\alpha \right|\leq \int _{a}^{b}|f||g|d\alpha .}$

If ${\displaystyle \int _{a}^{b}|f|^{p}d\alpha \neq 0}$ and ${\displaystyle \int _{a}^{b}|g|^{q}d\alpha \neq 0}$ then applying the previous part to the functions ${\displaystyle |f|/c}$ and ${\displaystyle |g|/d}$, where ${\displaystyle c^{p}=\int _{a}^{b}|f|^{p}d\alpha }$ and ${\displaystyle d^{q}=\int _{a}^{b}|g|^{q}d\alpha }$, gives what we wanted to show.

${\displaystyle \left|\int _{a}^{b}fgd\alpha \right|\leq \left(\int _{a}^{b}|f|^{p}d\alpha \right)^{1/p}\left(\int _{a}^{b}|g|^{q}d\alpha \right)^{1/q}}$

However, if one of the above is zero (say, without loss of generality, ${\displaystyle \int _{a}^{b}|f|^{p}d\alpha =0}$), then applying part (a) pointwise to ${\displaystyle |f|}$ and ${\displaystyle c|g|}$ gives ${\displaystyle \int _{a}^{b}|f|(c|g|)d\alpha \leq {\frac {1}{p}}\int _{a}^{b}|f|^{p}d\alpha +{\frac {c^{q}}{q}}\int _{a}^{b}|g|^{q}d\alpha ={\frac {c^{q}}{q}}\int _{a}^{b}|g|^{q}d\alpha }$ for every ${\displaystyle c>0}$. Dividing by ${\displaystyle c}$ and letting ${\displaystyle c\to 0}$ (note ${\displaystyle q>1}$) we conclude that

${\displaystyle \int _{a}^{b}|f||g|d\alpha =0,}$

so the inequality holds in this case as well.

\end{proof}
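Hölder's inequality can be sanity-checked with Riemann sums for ${\displaystyle \alpha (x)=x}$ on ${\displaystyle [0,1]}$; the integrands below are arbitrary sample choices, and ${\displaystyle p=q=2}$ gives the Cauchy-Schwarz case:

```python
p, q = 2.0, 2.0  # conjugate pair; p = q = 2 is the Cauchy-Schwarz case
n = 10_000
h = 1.0 / n
xs = [(i + 0.5) * h for i in range(n)]  # midpoints for [0, 1] with alpha(x) = x
f = lambda x: x - 0.5        # arbitrary sample integrands
g = lambda x: 3 * x * x + 1

lhs = abs(sum(f(x) * g(x) for x in xs) * h)
rhs = ((sum(abs(f(x)) ** p for x in xs) * h) ** (1 / p)
       * (sum(abs(g(x)) ** q for x in xs) * h) ** (1 / q))
assert lhs <= rhs
```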

## 16


### a

We take the expression ${\displaystyle s\int _{1}^{\infty }{\frac {[x]}{x^{s+1}}}dx}$ and express it as a sum of integrals over the intervals ${\displaystyle (n,n+1)}$ to get ${\displaystyle s\left(\int _{1}^{2}{\frac {[x]}{x^{s+1}}}dx+\int _{2}^{3}{\frac {[x]}{x^{s+1}}}dx+\dots \right)}$; but since ${\displaystyle [x]}$ is constant on each such interval, we just write ${\displaystyle s\left(\int _{1}^{2}{\frac {1}{x^{s+1}}}dx+\int _{2}^{3}{\frac {2}{x^{s+1}}}dx+\dots \right)}$ (1)

Now we exploit the Fundamental Theorem of Calculus, computing ${\displaystyle \int _{n}^{n+1}{\frac {n}{x^{s+1}}}dx=n\left[-{\frac {x^{-s}}{s}}\right]_{n}^{n+1}=n\left(-{\frac {(n+1)^{-s}}{s}}+{\frac {n^{-s}}{s}}\right).}$ So, the summation in Equation 1 can, more explicitly, be written as ${\displaystyle s\sum _{n=1}^{\infty }n\left(-{\frac {(n+1)^{-s}}{s}}+{\frac {n^{-s}}{s}}\right)=\sum _{n=1}^{\infty }\left({\frac {n}{n^{s}}}-{\frac {n}{(n+1)^{s}}}\right).}$ However, grouping common denominators, we observe that the sum partially telescopes to yield more simply ${\displaystyle \sum _{n=1}^{\infty }{\frac {1}{n^{s}}}=\zeta (s).}$
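The telescoping computation can be verified numerically for, say, ${\displaystyle s=2}$, where ${\displaystyle \zeta (2)=\pi ^{2}/6}$; a sketch:

```python
import math

s = 2.0
N = 10_000
# s * integral_n^{n+1} n / x^{s+1} dx = n * (n^{-s} - (n+1)^{-s}); sum and compare
series = sum(n * (n**-s - (n + 1) ** -s) for n in range(1, N + 1))
zeta_partial = sum(n**-s for n in range(1, N + 1))

# after telescoping, the partial sums agree up to O(1/N) boundary terms
assert abs(series - zeta_partial) < 1e-3
assert abs(zeta_partial - math.pi**2 / 6) < 1e-3
```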

### b

Having now proved Part a it suffices to show that ${\displaystyle s\int _{1}^{\infty }{\frac {[x]}{x^{s+1}}}dx={\frac {s}{s-1}}-s\int _{1}^{\infty }{\frac {x-[x]}{x^{s+1}}}dx.}$

By the Fundamental Theorem of Calculus we have ${\displaystyle \int _{1}^{\infty }{\frac {1}{x^{s}}}dx={\frac {1}{s-1}}.}$ So \begin{eqnarray*} \int_1^\infty \frac{x}{x^{s+1}} dx&=&\frac{1}{s-1}\\ \Rightarrow s \int_1^\infty \frac{x}{x^{s+1}} dx&=&\frac{s}{s-1}\\ \Rightarrow s \int_1^\infty \left( \frac{x-[x]}{x^{s+1}} + \frac{[x]}{x^{s+1}} \right) dx&=&\frac{s}{s-1}\\ \Rightarrow s \int_1^\infty \frac{[x]}{x^{s+1}} dx &=&\frac{s}{s-1} - s \int_1^\infty \frac{x-[x]}{x^{s+1}} dx \end{eqnarray*} as desired.


It remains now to show that the integral in Part b converges.

Since for ${\displaystyle x\in (1,\infty )}$ we have ${\displaystyle 0\leq {\frac {x-[x]}{x^{s+1}}}\leq {\frac {1}{x^{s+1}}}}$, we know by comparison that ${\displaystyle \int _{1}^{\infty }{\frac {x-[x]}{x^{s+1}}}dx}$ converges if ${\displaystyle \int _{1}^{\infty }{\frac {1}{x^{s+1}}}dx}$ converges.

However, ${\displaystyle \int _{1}^{\infty }{\frac {1}{x^{s+1}}}dx}$ converges by the integral test (Problem 8), since we have already shown that the series ${\displaystyle \sum _{n=1}^{\infty }{\frac {1}{n^{s+1}}}}$ converges for ${\displaystyle s>1}$.
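Finally, the identity of Part b can be checked numerically for ${\displaystyle s=2}$ by truncating the integral at a large cutoff (the cutoff and grid size below are arbitrary):

```python
import math

s = 2.0
n, X = 1_000_000, 10_000.0  # grid size and integral cutoff (arbitrary)
h = (X - 1.0) / n
integral = 0.0
for i in range(n):
    x = 1.0 + (i + 0.5) * h  # midpoint rule on [1, X]
    integral += (x - math.floor(x)) / x ** (s + 1)
integral *= h

approx = s / (s - 1) - s * integral
assert abs(approx - math.pi**2 / 6) < 1e-2  # zeta(2) = pi^2/6
```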