# User:TakuyaMurata/Calculus

## Module and linear space

An additive group ${\displaystyle G}$ is said to be a module over ${\displaystyle R}$, or an R-module for short, if the scalars (the members of a ring ${\displaystyle R}$) act on ${\displaystyle G}$ so that the following properties hold for all ${\displaystyle x,y\in G}$ and ${\displaystyle \alpha ,\beta \in R}$:

• (i) Both ${\displaystyle \alpha x}$ and ${\displaystyle x+y}$ are in ${\displaystyle G}$
• (ii) ${\displaystyle (\alpha \beta )x=\alpha (\beta x)}$ (associativity)
• (iii) ${\displaystyle \alpha (x+y)=\alpha x+\alpha y}$ and ${\displaystyle (\alpha +\beta )x=\alpha x+\beta x}$ (distribution law)
• (iv) ${\displaystyle 1_{R}x=x}$

By definition, every abelian group is a module over ${\displaystyle \mathbb {Z} }$, since ${\displaystyle x+x+...+x=nx}$ makes every integer ${\displaystyle n}$ act as a scalar. Finally, a linear space is a module over a field. Defining the notion of dimension is a bit tricky. However, we can safely say a ${\displaystyle {\mathcal {K}}}$-vector space ${\displaystyle {\mathcal {V}}}$ is finite-dimensional if it has a finite basis; that is, we can find linearly independent vectors ${\displaystyle e_{1},e_{2},...,e_{n}}$ so that ${\displaystyle {\mathcal {V}}=\{a_{1}e_{1}+a_{2}e_{2}+...+a_{n}e_{n};a_{j}\in {\mathcal {K}}\}}$. Such a basis need not be unique.
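As a concrete sketch (Python; the `scale` helper and the modulus 6 are our own illustrative choices), the abelian group ${\displaystyle \mathbb {Z} /6\mathbb {Z} }$ satisfies the module axioms under the ${\displaystyle \mathbb {Z} }$-action by repeated addition:

```python
# Z acts on any abelian group by repeated addition: n*x = x + x + ... + x.
# Here the group is Z/6Z; `scale` is an illustrative helper, not standard API.

def scale(n, x, mod=6):
    """The Z-module action n*x in Z/6Z, i.e. repeated addition reduced mod 6."""
    return (n * x) % mod

x, y = 2, 5
a, b = 4, 3

# (ii) associativity of the action: (ab)x = a(bx)
assert scale(a * b, x) == scale(a, scale(b, x))
# (iii) distributive laws
assert scale(a, (x + y) % 6) == (scale(a, x) + scale(a, y)) % 6
assert scale(a + b, x) == (scale(a, x) + scale(b, x)) % 6
# (iv) the unit of Z acts as the identity
assert scale(1, x) == x
```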

3 Theorem Let ${\displaystyle {\mathcal {V}}}$ be a finite-dimensional ${\displaystyle {\mathcal {K}}}$-vector space. Then ${\displaystyle {\mathcal {V}}^{*}}$ has the same dimension as ${\displaystyle {\mathcal {V}}}$ does; that is, every basis for ${\displaystyle {\mathcal {V}}}$ has the same cardinality as every basis for ${\displaystyle {\mathcal {V}}^{*}}$ does.

It can be shown that the map ${\displaystyle {\mathcal {V}}\to {\mathcal {V}}^{*}}$ cannot be defined constructively.[1] (TODO: need to detail this matter)

1 Theorem If ${\displaystyle {\mathcal {X}}}$ is a TVS and every finite subset of ${\displaystyle {\mathcal {X}}}$ is closed, it then follows that ${\displaystyle {\mathcal {X}}}$ is a Hausdorff space.
Proof: Let ${\displaystyle x,y\in X}$ with ${\displaystyle x\neq y}$ be given, and let ${\displaystyle \Omega }$ be the complement of the singleton ${\displaystyle \{y\}}$, which is open by hypothesis. Since the function ${\displaystyle f(z)=x+z}$ is continuous at ${\displaystyle 0}$ and ${\displaystyle f(0)=x}$ is in ${\displaystyle \Omega }$, we can find an open ${\displaystyle \omega }$ containing ${\displaystyle 0}$ such that ${\displaystyle \{x\}+\omega \subset \Omega }$. Here, and henceforward, we use the notation ${\displaystyle A+B}$ for the set of all sums ${\displaystyle x+y}$ with ${\displaystyle x\in A}$ and ${\displaystyle y\in B}$. Since the function ${\displaystyle g(x)=-x}$ is continuous and is its own inverse, we may assume ${\displaystyle \omega =-\omega }$, replacing ${\displaystyle \omega }$ by the intersection of ${\displaystyle \omega }$ and ${\displaystyle -\omega }$ if necessary. Applying the continuity of addition once more, we may shrink ${\displaystyle \omega }$ so that ${\displaystyle \{x\}+\omega +\omega \subset \Omega }$. It then follows that ${\displaystyle \{x\}+\omega }$ and ${\displaystyle \{y\}+\omega }$ are disjoint. Indeed, if ${\displaystyle x+z=y+w}$ for some ${\displaystyle z,w\in \omega }$, then ${\displaystyle y=x+z-w\in \{x\}+\omega +\omega \subset \Omega }$, a contradiction. ${\displaystyle \square }$

## Normed spaces

A vector space is said to be normed if it is a metric space and its metric ${\displaystyle d}$ has the form:

${\displaystyle d(x,y)=\|x-y\|}$

Here, the function ${\displaystyle \|\cdot \|}$, called a norm, has the property (in addition to that it induces the metric) that ${\displaystyle \|\lambda x\|=|\lambda |\|x\|}$ for any scalar ${\displaystyle \lambda }$. We note that:

${\displaystyle \|x+y\|=d(x,-y)\leq d(x,0)+d(0,-y)=\|x\|+\|y\|}$

and

${\displaystyle d(x+z,y+z)=\|x-y\|=d(x,y)}$ for any ${\displaystyle x,y,z}$.
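These identities are easy to check numerically; a minimal Python sketch for the Euclidean norm on ${\displaystyle \mathbb {R} ^{2}}$ (the helper names and sample vectors are our own choices):

```python
import math

def norm(v):
    """Euclidean norm on R^2 (or R^n)."""
    return math.sqrt(sum(t * t for t in v))

def d(x, y):
    """The metric induced by the norm: d(x, y) = ||x - y||."""
    return norm(tuple(a - b for a, b in zip(x, y)))

x, y, z = (1.0, 2.0), (-3.0, 0.5), (0.25, -4.0)

# triangle inequality via the metric: ||x+y|| = d(x,-y) <= ||x|| + ||y||
xpy = tuple(a + b for a, b in zip(x, y))
assert norm(xpy) <= norm(x) + norm(y) + 1e-12
# translation invariance: d(x+z, y+z) = d(x, y)
xz = tuple(a + b for a, b in zip(x, z))
yz = tuple(a + b for a, b in zip(y, z))
assert abs(d(xz, yz) - d(x, y)) < 1e-12
# homogeneity: ||lambda x|| = |lambda| ||x||
lam = -2.5
assert abs(norm(tuple(lam * t for t in x)) - abs(lam) * norm(x)) < 1e-12
```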

It may go without saying, but a vector space is infinite-dimensional if it is not finite-dimensional.

3 Theorem Let ${\displaystyle {\mathcal {X}}}$, ${\displaystyle {\mathcal {Y}}}$ be normed spaces. If ${\displaystyle {\mathcal {X}}}$ is infinite-dimensional and ${\displaystyle {\mathcal {Y}}}$ is nonzero, there exists a linear operator ${\displaystyle f:{\mathcal {X}}\to {\mathcal {Y}}}$ that is not continuous.

## Baire's theorem

A normed space is said to be complete when every Cauchy sequence in it converges in it.

3 Theorem Let ${\displaystyle E}$ be a subspace of a Banach space ${\displaystyle G}$ carrying the same norm. Then the following are equivalent:

(a) ${\displaystyle E}$ is complete.
(b) ${\displaystyle E}$ is closed in ${\displaystyle G}$.
(c) ${\displaystyle \sum \|x_{k}\|<\infty }$ implies that ${\displaystyle \sum x_{k}}$ converges in ${\displaystyle E}$.

Proof: (i) Show (a) ${\displaystyle \iff }$ (b). If ${\displaystyle E}$ is complete, then every Cauchy sequence in ${\displaystyle E}$ has its limit in ${\displaystyle E}$; thus, ${\displaystyle E}$ is closed. Conversely, if ${\displaystyle E}$ is closed, then every Cauchy sequence in ${\displaystyle E}$ converges in ${\displaystyle G}$ by the completeness of ${\displaystyle G}$, and the limit lies in ${\displaystyle E}$. Hence, ${\displaystyle E}$ is complete. (ii) Show (a) ${\displaystyle \iff }$ (c). Suppose (a) holds and ${\displaystyle \sum \|x_{k}\|<\infty }$ with ${\displaystyle x_{k}\in E}$. Then

${\displaystyle \left\|\sum _{0}^{n}x_{k}-\sum _{0}^{m}x_{k}\right\|=\left\|\sum _{m+1}^{n}x_{k}\right\|\leq \sum _{m+1}^{n}\|x_{k}\|\to 0}$ as ${\displaystyle n,m\to \infty }$.

Thus, the partial sums of ${\displaystyle \sum x_{k}}$ form a Cauchy sequence, which converges in ${\displaystyle E}$ by completeness. Conversely, suppose (c) holds and let ${\displaystyle x_{j}\in E}$ be a Cauchy sequence. We can find a subsequence ${\displaystyle x_{k}}$ such that ${\displaystyle \|x_{k+1}-x_{k}\|<2^{-k}}$. Then

${\displaystyle \sum \|x_{k+1}-x_{k}\|<\infty }$.

By (c), it follows that ${\displaystyle \sum (x_{k+1}-x_{k})}$ converges in ${\displaystyle E}$; that is, the subsequence ${\displaystyle x_{k}}$ converges in ${\displaystyle E}$. A Cauchy sequence with a convergent subsequence converges; hence, ${\displaystyle x_{j}}$ converges in ${\displaystyle E}$ as well. ${\displaystyle \square }$
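A concrete instance of (c) in the Banach space ${\displaystyle \mathbb {R} }$ (a Python sketch; the series ${\displaystyle \sum 2^{-k}}$ is our illustrative choice):

```python
# x_k = 2^{-k} in the Banach space R: the sum of norms is finite (= 2),
# so the partial sums form a Cauchy sequence converging in R.
partial_sums = []
s = 0.0
for k in range(60):
    s += 2.0 ** (-k)
    partial_sums.append(s)

# the tail ||sum_{m+1}^n x_k|| <= sum_{m+1}^n ||x_k|| shrinks to 0,
# and the partial sums converge to 2
assert abs(partial_sums[-1] - 2.0) < 1e-12
assert abs(partial_sums[-1] - partial_sums[-2]) < 1e-15
```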

3 Corollary ${\displaystyle \mathbb {Q} }$ is incomplete but dense in ${\displaystyle \mathbb {R} }$.
Proof: ${\displaystyle \mathbb {Q} }$ is not closed in ${\displaystyle \mathbb {R} }$; thus, it is not complete. Since ${\displaystyle \mathbb {R} \backslash \mathbb {Q} }$ has empty interior, ${\displaystyle {\overline {\mathbb {Q} }}=\mathbb {R} }$. ${\displaystyle \square }$

We say a set has dense complement if its closure has empty interior.

The importance of the next theorem lies not in its literal statement but in its consequences. Though the theorem can be proved more generally, e.g., for pseudometric spaces such as F-spaces, this classical formulation suffices for the remainder of the book.

3 Theorem A complete normed space ${\displaystyle G}$ which is nonempty is never the union of a sequence of subsets of ${\displaystyle G}$ with dense complement.
Proof: Let ${\displaystyle E_{n}\subset G}$ be a sequence of subsets of ${\displaystyle G}$ with dense complement. Since ${\displaystyle {\overline {E_{1}}}}$ has empty interior and ${\displaystyle G}$ has nonempty interior, there exists a nonempty open ball ${\displaystyle S_{1}\subset (G\backslash {\overline {E_{1}}})}$ with radius ${\displaystyle \leq 2^{-1}}$. Since ${\displaystyle {\overline {E_{2}}}}$ has empty interior and ${\displaystyle S_{1}}$ has nonempty interior, there again exists a nonempty open ball ${\displaystyle S_{2}\subset (S_{1}\backslash {\overline {E_{2}}})}$ with radius ${\displaystyle \leq 2^{-2}}$. Iterating the construction ad infinitum we get a decreasing sequence of balls ${\displaystyle S_{n}}$. Now let ${\displaystyle x_{n}}$ be the sequence of the centers of ${\displaystyle S_{n}}$. Then ${\displaystyle x_{n}}$ is Cauchy since: for ${\displaystyle n,m\geq N}$

${\displaystyle \|x_{n}-x_{m}\|<2^{-N}+2^{-N}\to 0}$ as ${\displaystyle N\to \infty }$.

It then follows from the completeness of ${\displaystyle G}$ that ${\displaystyle x_{n}}$ converges to a point of ${\displaystyle G\backslash \bigcup ^{\infty }E_{n}}$; in particular, the union cannot be all of ${\displaystyle G}$. ${\displaystyle \square }$

3 Corollary (open mapping theorem) If ${\displaystyle A}$ and ${\displaystyle B}$ are Banach spaces, then a continuous linear surjection ${\displaystyle f:A\to B}$ maps an open set in ${\displaystyle A}$ to an open set in ${\displaystyle B}$.
Proof: Left as an exercise.

The following gives a nice example of the consequences of Baire's theorem.

3 Corollary (Lipschitz continuity) Let ${\displaystyle S_{n}}$ = the set of functions ${\displaystyle u\in {\mathcal {C}}^{0}([0,1])}$ such that there exists some ${\displaystyle x\in [0,1]}$ such that:

${\displaystyle |u(x+h)-u(x)|\leq n|h|}$ for all ${\displaystyle x+h\in [0,1]}$.

Then (i) ${\displaystyle {\mathcal {C}}^{0}([0,1])}$ is complete, (ii) ${\displaystyle S_{n}}$ is closed and has dense complement, and (iii) there exists a ${\displaystyle u\in {\mathcal {C}}^{0}([0,1])}$ that is not in any ${\displaystyle S_{n}}$; i.e., one that is differentiable nowhere.
Proof: (i) ${\displaystyle [0,1]}$ is complete; thus, ${\displaystyle {\mathcal {C}}^{0}}$ is a Banach space by some early theorem. (ii) Let ${\displaystyle u_{j}\in S_{n}}$ be a sequence, and suppose ${\displaystyle u_{j}\to u}$. Then we have:

${\displaystyle |u(x+h)-u(x)|\leq |u(x+h)-u_{j}(x+h)|+|u_{j}(x+h)-u_{j}(x)|+|u_{j}(x)-u(x)|\leq 2\|u-u_{j}\|+n|h|\to n|h|}$ as ${\displaystyle j\to \infty }$

Thus, ${\displaystyle u\in S_{n}}$; i.e., ${\displaystyle S_{n}}$ is closed. The Stone-Weierstrass theorem says that every continuous function can be uniformly approximated by infinitely differentiable functions; thus, given ${\displaystyle \epsilon >0}$, we find a ${\displaystyle g\in {\mathcal {C}}^{\infty }([0,1])}$ such that:

${\displaystyle \|u-g\|<{\epsilon \over 2}}$.

If we let ${\displaystyle v=g+{\epsilon \over 2}\sin Nx}$ with ${\displaystyle N}$ sufficiently large, then ${\displaystyle \|u-v\|<\epsilon }$ and

${\displaystyle v\in {\mathcal {C}}^{0}([0,1])\backslash S_{n}}$

Hence, ${\displaystyle S_{n}}$ has dense complement. Finally, (iii) follows from Baire's theorem in view of (i) and (ii). ${\displaystyle \square }$
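The role of the perturbation ${\displaystyle {\epsilon \over 2}\sin Nx}$ can be seen numerically: its difference quotients grow like ${\displaystyle \epsilon N/2}$, eventually exceeding any fixed ${\displaystyle n}$. A Python sketch (the particular values of ${\displaystyle \epsilon ,N,n,h}$ and the helper name are our own choices):

```python
import math

eps, N, n = 0.1, 1000, 10

def wiggle(x):
    # the perturbation (eps/2) * sin(N x) used in the proof sketch
    return 0.5 * eps * math.sin(N * x)

x, h = 0.0, 1e-6
quotient = abs(wiggle(x + h) - wiggle(x)) / h
# the slope near x = 0 is about eps*N/2 = 50, far above the Lipschitz bound n = 10
assert quotient > n
```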

More concisely, the theorem says that, because of Baire's theorem, not every continuous function is Lipschitz continuous at even one point.

3 Lemma In a topological space ${\displaystyle X}$, the following are equivalent:

• (i) Every countable union of closed sets with empty interior has empty interior.
• (ii) Every countable intersection of open dense sets is dense.

Proof: The lemma holds since an open set is dense if and only if its complement has empty interior. ${\displaystyle \square }$

When the above equivalent conditions are true, we say ${\displaystyle X}$ is a Baire space.

3 Theorem If a Banach space ${\displaystyle G}$ has a Schauder basis, that is, a sequence ${\displaystyle x_{k}}$ such that each ${\displaystyle x\in G}$ admits a unique sequence of scalars ${\displaystyle \alpha _{k}}$ with

${\displaystyle \|x-\sum _{1}^{n}\alpha _{k}x_{k}\|\to 0}$ as ${\displaystyle n\to \infty }$,

then ${\displaystyle G}$ is separable.
Proof: The set of finite linear combinations ${\displaystyle \sum _{1}^{n}\beta _{k}x_{k}}$ with rational (or complex rational) coefficients is countable, and by the defining property of the basis it is dense in ${\displaystyle G}$. ${\displaystyle \square }$

The validity of the converse was known as the basis problem for a long time. It was, however, proven to be false by Per Enflo in 1973.

## Duality

The kernel of a linear operator ${\displaystyle f}$, denoted by ${\displaystyle \ker(f)}$, is the set of all vectors that ${\displaystyle f}$ maps to zero. The kernel of a linear operator is a linear space, since ${\displaystyle f(x)=0}$ implies that ${\displaystyle f(\alpha x)=0}$, and ${\displaystyle f(x)=0=f(y)}$ implies ${\displaystyle 0=f(x+y)}$. Moreover, a linear operator has zero kernel if and only if it is injective.

3 Theorem Let ${\displaystyle f}$ be a linear functional. Then ${\displaystyle f}$ is continuous if and only if ${\displaystyle \ker(f)}$ is closed.
Proof: If ${\displaystyle f}$ is continuous, then ${\displaystyle \ker(f)=f^{-1}(\{0\})}$ is closed, being the preimage of the closed set ${\displaystyle \{0\}}$. Conversely, suppose ${\displaystyle f}$ is not continuous (and so not identically zero). Then ${\displaystyle f}$ is unbounded: there exists a sequence ${\displaystyle u_{j}}$ with ${\displaystyle \|u_{j}\|\leq 1}$ and ${\displaystyle |f(u_{j})|\to \infty }$. Pick ${\displaystyle x}$ with ${\displaystyle f(x)=1}$ and let

${\displaystyle y_{j}=x-{u_{j} \over f(u_{j})}\in \ker(f)}$.

Then ${\displaystyle y_{j}\to x}$ while ${\displaystyle x\notin \ker(f)}$. In other words, ${\displaystyle \ker(f)}$ is not closed. ${\displaystyle \square }$

3 Theorem If ${\displaystyle f}$ is a continuous linear functional on ${\displaystyle l^{p}}$ (${\displaystyle 1\leq p<\infty }$), then there is a sequence ${\displaystyle y_{k}}$ such that

${\displaystyle f(x_{1},x_{2},x_{3},...)=\sum _{1}^{\infty }x_{k}y_{k}}$

Proof: Let ${\displaystyle y_{k}=f(\delta (1,k),\delta (2,k),\delta (3,k),...)}$ where ${\displaystyle \delta (j,k)=1}$ if ${\displaystyle j=k}$ and ${\displaystyle 0}$ otherwise. By linearity the formula holds on finitely supported sequences, and by continuity it extends to all of ${\displaystyle l^{p}}$, since such sequences are dense there. ${\displaystyle \square }$
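The recovery of ${\displaystyle y_{k}}$ from ${\displaystyle f}$ can be illustrated on finitely supported sequences (a Python sketch; the particular ${\displaystyle y}$ and the helper names are hypothetical):

```python
# A (hypothetical) linear functional on finitely supported sequences,
# given by pairing with a fixed y; we recover y_k = f(e_k).
y = [3.0, -1.0, 0.5, 2.0]

def f(x):
    # f(x) = sum_k x_k y_k  (finite sums only, for illustration)
    return sum(xk * yk for xk, yk in zip(x, y))

def delta(k, length):
    # the k-th standard unit sequence (delta(j, k) in the proof)
    return [1.0 if j == k else 0.0 for j in range(length)]

recovered = [f(delta(k, len(y))) for k in range(len(y))]
assert recovered == y  # y_k = f(delta(1,k), delta(2,k), ...)
```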

The dual of a linear space ${\displaystyle G}$, denoted by ${\displaystyle G^{*}}$, is the set of all linear operators from ${\displaystyle G}$ to ${\displaystyle \mathbb {F} }$ (i.e., either ${\displaystyle \mathbb {C} }$ or ${\displaystyle \mathbb {R} }$). The dual of a linear space is again a linear space over the same field, with addition and scalar multiplication of functionals defined pointwise.

Theorem Let G be a normed linear space. Then, for ${\displaystyle x\in G}$ and ${\displaystyle y\in G^{*}}$,

${\displaystyle \|x\|=\sup _{\|y\|=1}|\langle x,y\rangle |}$ and ${\displaystyle \|y\|=\sup _{\|x\|=1}|\langle x,y\rangle |}$, where ${\displaystyle \langle x,y\rangle }$ denotes ${\displaystyle y(x)}$.

The pairing ${\displaystyle \langle x,y\rangle =y(x)}$ expresses the duality between a Banach space and its dual.

Example: For ${\displaystyle p}$ finite, the dual of ${\displaystyle l^{p}}$ is ${\displaystyle l^{q}}$ where ${\displaystyle 1/p+1/q=1}$.

3 Theorem (Krein-Milman) The unit ball of the dual of a real normed linear space has an extreme point.
Proof: (TODO: to be written)

The theorem is equivalent to the Axiom of Choice. [2]

## The Hahn-Banach theorem

3 Theorem (Hahn-Banach) Let ${\displaystyle {\mathcal {X}},{\mathcal {Y}}}$ be normed vector spaces over real numbers. Then the following are equivalent.

• (i) Every collection of mutually intersecting closed balls of ${\displaystyle {\mathcal {Y}}}$ has nonempty intersection. (binary intersection property)
• (ii) If ${\displaystyle {\mathcal {M}}\subset {\mathcal {X}}}$ is a subspace and ${\displaystyle f:{\mathcal {M}}\to {\mathcal {Y}}}$ is a continuous linear operator, then ${\displaystyle f}$ can be extended to a continuous linear operator ${\displaystyle F}$ on ${\displaystyle {\mathcal {X}}}$ such that ${\displaystyle \|f\|=\|F\|}$. (dominated version)
• (iii) If the linear variety ${\displaystyle x+{\mathcal {M}}}$ does not meet a non-empty open convex subset ${\displaystyle G}$ of ${\displaystyle {\mathcal {X}}}$, then there exists a closed hyperplane ${\displaystyle H}$ containing ${\displaystyle x+{\mathcal {M}}}$ that does not meet ${\displaystyle G}$ either. (geometric form)

3 Corollary If the equivalent conditions hold in the theorem, ${\displaystyle {\mathcal {Y}}}$ is complete.
Proof: Consider the identity map extended to the completion of ${\displaystyle {\mathcal {Y}}}$. ${\displaystyle \square }$

3 Corollary Let ${\displaystyle f}$ be a linear operator from a Banach space ${\displaystyle {\mathcal {X}}}$ to a Banach space ${\displaystyle {\mathcal {Y}}}$. If there exists a set ${\displaystyle \Gamma }$ and operators ${\displaystyle f_{1}:{\mathcal {X}}\to l^{\infty }(\Gamma )}$ and ${\displaystyle f_{2}:l^{\infty }(\Gamma )\to {\mathcal {X}}}$ such that ${\displaystyle f_{2}\circ f_{1}}$ is the identity on ${\displaystyle {\mathcal {X}}}$ and ${\displaystyle \|f_{2}\|=\|f_{1}\|}$, then ${\displaystyle f}$ can be extended to a Banach space containing ${\displaystyle {\mathcal {X}}}$ without increase in norm.

## Hilbert spaces

A linear space ${\displaystyle {\mathcal {X}}}$ is called a pre-Hilbert space if for each ordered pair ${\displaystyle (x,y)}$ of elements there is a unique complex number, called the inner product of ${\displaystyle x}$ and ${\displaystyle y}$ and denoted by ${\displaystyle \langle x,y\rangle _{\mathcal {X}}}$, satisfying the following properties:

• (i) ${\displaystyle \langle x,y\rangle _{\mathcal {X}}}$ is linear in ${\displaystyle x}$ when ${\displaystyle y}$ is fixed.
• (ii) ${\displaystyle \langle x,y\rangle _{\mathcal {X}}={\overline {\langle y,x\rangle _{\mathcal {X}}}}}$ (where the bar means the complex conjugation).
• (iii) ${\displaystyle \langle x,x\rangle \geq 0}$ with equality only when ${\displaystyle x=0}$.

When only one pre-Hilbert space is being considered we usually omit the subscript ${\displaystyle {\mathcal {X}}}$.

We define ${\displaystyle \|x\|=\langle x,x\rangle ^{1/2}}$, and this is indeed a norm. It is clear that ${\displaystyle \|\alpha x\|=|\alpha |\|x\|}$, and (iii) is the reason that ${\displaystyle \|x\|=0}$ implies ${\displaystyle x=0}$. Finally, the triangle inequality follows from the next lemma.

3 Lemma (Schwarz's inequality) ${\displaystyle |\langle x,y\rangle |\leq \|x\|\|y\|}$, where the equality holds if and only if ${\displaystyle x}$ and ${\displaystyle y}$ are linearly dependent.

If we assume the lemma, then since ${\displaystyle \operatorname {Re} (\alpha )\leq |\alpha |}$ for any complex number ${\displaystyle \alpha }$ it follows:

${\displaystyle \|x+y\|^{2}=\|x\|^{2}+2\operatorname {Re} \langle x,y\rangle +\|y\|^{2}\leq \|x\|^{2}+2|\langle x,y\rangle |+\|y\|^{2}\leq (\|x\|+\|y\|)^{2}}$

Proof of Lemma: The lemma is just a special case of the next theorem:

3 Theorem Let ${\displaystyle {\mathcal {H}}}$ be a pre-Hilbert space and ${\displaystyle S\subset {\mathcal {H}}}$ be an orthonormal set (i.e., for ${\displaystyle u,v\in S}$, ${\displaystyle \langle u,v\rangle =1}$ if ${\displaystyle u=v}$ and ${\displaystyle \langle u,v\rangle =0}$ otherwise.) Then:

• (i) ${\displaystyle \sum _{u\in S}|\langle x,u\rangle |^{2}\leq \|x\|^{2}}$ for any ${\displaystyle x\in {\mathcal {H}}}$.
• (ii) The equality holds in (i) if and only if ${\displaystyle S}$ is maximal in the collection of all orthonormal subsets of ${\displaystyle {\mathcal {H}}}$ ordered by ${\displaystyle \subset }$.

Proof: (TODO)

3 Theorem Let ${\displaystyle u_{j}}$ be a sequence in a pre-Hilbert space with ${\displaystyle \|u_{j}\|=1}$. If ${\displaystyle \Gamma =\left(\sum _{j\neq k}|\langle u_{j},u_{k}\rangle |^{2}\right)^{1/2}<\infty }$, then

${\displaystyle (1-\Gamma )\sum _{j=m}^{n}|\alpha _{j}|^{2}\leq \|\sum _{j=m}^{n}\alpha _{j}u_{j}\|^{2}\leq (1+\Gamma )\sum _{j=m}^{n}|\alpha _{j}|^{2}}$ for any sequence ${\displaystyle \alpha _{j}}$ of scalars.

Proof: Let ${\displaystyle I}$ be the set of all pairs ${\displaystyle (i,j)}$ such that ${\displaystyle m\leq i\leq n}$, ${\displaystyle m\leq j\leq n}$ and ${\displaystyle i\neq j}$. By the Cauchy-Schwarz inequality we get:

${\displaystyle \sum _{(j,k)\in I}|\langle \alpha _{j}u_{j},\alpha _{k}u_{k}\rangle |\leq \sum _{j=m}^{n}|\alpha _{j}|^{2}\Gamma }$.

Since

${\displaystyle \left\|\sum _{j=m}^{n}\alpha _{j}u_{j}\right\|^{2}\leq \sum _{j=m}^{n}|\alpha _{j}|^{2}+\sum _{(j,k)\in I}|\langle \alpha _{j}u_{j},\alpha _{k}u_{k}\rangle |}$,

we get the second inequality. Moreover,

${\displaystyle \left\|\sum _{j=m}^{n}\alpha _{j}u_{j}\right\|^{2}\geq \sum _{j=m}^{n}|\alpha _{j}|^{2}-\sum _{(j,k)\in I}|\langle \alpha _{j}u_{j},\alpha _{k}u_{k}\rangle |}$

and this gives the first inequality. ${\displaystyle \square }$

3 Theorem (Bessel's inequality) Let ${\displaystyle U}$ be an orthonormal subset of a pre-Hilbert space. Then for each ${\displaystyle x}$ in the space,

${\displaystyle \sum _{u\in U}|\langle x,u\rangle |^{2}\leq \|x\|^{2}}$

where only countably many terms of the sum are nonzero, and the equality holds if and only if ${\displaystyle U}$ is maximal; i.e., ${\displaystyle U}$ is contained in no strictly larger orthonormal set.
Proof: First suppose ${\displaystyle U}$ is finite; i.e., ${\displaystyle U=\{u_{1},u_{2},...u_{n}\}}$. Let ${\displaystyle \alpha _{j}=\langle x,u_{j}\rangle }$. Since for each ${\displaystyle k}$, ${\displaystyle \langle x-\sum _{j=1}^{n}\alpha _{j}u_{j},u_{k}\rangle =\langle x,u_{k}\rangle -\alpha _{k}\langle u_{k},u_{k}\rangle =0}$, by the preceding theorem or by direct computation,

${\displaystyle \|x\|^{2}=\left\|x-\sum _{j=1}^{n}\alpha _{j}u_{j}\right\|^{2}+\left\|\sum _{j=1}^{n}\alpha _{j}u_{j}\right\|^{2}\geq \left\|\sum _{j=1}^{n}\alpha _{j}u_{j}\right\|^{2}=\sum _{j=1}^{n}|\alpha _{j}|^{2}}$

Now suppose that ${\displaystyle U}$ is maximal. Let ${\displaystyle y=\sum _{j=1}^{n}\langle x,u_{j}\rangle u_{j}}$. Then by the same reasoning as above, ${\displaystyle x-y}$ is orthogonal to every ${\displaystyle u_{j}}$. By the assumed maximality, ${\displaystyle x=y}$; otherwise ${\displaystyle (x-y)/\|x-y\|}$ could be adjoined to ${\displaystyle U}$. Hence,

${\displaystyle \sum _{j=1}^{n}|\langle x,u_{j}\rangle |^{2}=\left\|\sum _{j=1}^{n}\langle x,u_{j}\rangle u_{j}\right\|^{2}=\|y\|^{2}=\|x\|^{2}}$. Conversely, suppose that ${\displaystyle U}$ is not maximal. Then there exists some nonzero ${\displaystyle x}$ such that ${\displaystyle \langle x,u\rangle =0}$ for every ${\displaystyle u\in U}$. Thus,
${\displaystyle \sum _{j=1}^{n}|\langle x,u_{j}\rangle |^{2}=0<\|x\|^{2}}$.

The general case follows since, by the finite case, ${\displaystyle \langle x,u\rangle \neq 0}$ for at most countably many ${\displaystyle u\in U}$, and we may pass to the limit along finite subsets. ${\displaystyle \square }$
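The finite case can be checked directly by hand; a small numeric illustration in ${\displaystyle \mathbb {R} ^{3}}$ (a Python sketch with our own helper names):

```python
# Orthonormal vectors in R^3 and a test vector x.
u1, u2, u3 = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)
x = (1.0, 2.0, 3.0)

def inner(a, b):
    """Real inner product on R^3."""
    return sum(s * t for s, t in zip(a, b))

norm_sq = inner(x, x)                            # ||x||^2 = 14
bessel2 = inner(x, u1) ** 2 + inner(x, u2) ** 2  # U = {u1, u2}, not maximal
bessel3 = bessel2 + inner(x, u3) ** 2            # U = {u1, u2, u3}, maximal

assert bessel2 <= norm_sq   # strict inequality here: 5 < 14
assert bessel3 == norm_sq   # equality for a maximal orthonormal set
```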

3 Corollary In view of Zorn's Lemma, it can be shown that a maximal orthonormal set as in (ii) exists. (TODO: need elaboration)

3 Lemma For each fixed ${\displaystyle y}$, the function ${\displaystyle f(x)=\langle x,y\rangle }$ is continuous.
Proof: From Schwarz's inequality it follows:

${\displaystyle |f(z)-f(x)|=|\langle z-x,y\rangle |\leq \|z-x\|\|y\|\to 0}$ as ${\displaystyle z\to x}$. ${\displaystyle \square }$

Given a linear subspace ${\displaystyle {\mathcal {M}}}$ of ${\displaystyle {\mathcal {H}}}$, we define: ${\displaystyle {\mathcal {M}}^{\bot }=\{y\in {\mathcal {H}};\langle x,y\rangle =0{\text{ for all }}x\in {\mathcal {M}}\}}$. In other words, ${\displaystyle {\mathcal {M}}^{\bot }}$ is the intersection of the kernels of the continuous functionals ${\displaystyle y\mapsto \langle y,x\rangle }$ over ${\displaystyle x\in {\mathcal {M}}}$, which are closed; hence, ${\displaystyle {\mathcal {M}}^{\bot }}$ is closed. (TODO: we can also show that ${\displaystyle {\mathcal {M}}^{\bot }={\overline {\mathcal {M}}}^{\bot }}$)

3 Lemma Let ${\displaystyle {\mathcal {M}}}$ be a linear subspace of a pre-Hilbert space. Then ${\displaystyle z\in {\mathcal {M}}^{\bot }}$ if and only if ${\displaystyle \|z\|=\inf\{\|z+w\|;w\in {\mathcal {M}}\}}$.
Proof: The Schwarz inequality says the inequality

${\displaystyle |\langle z,z+w\rangle |\leq \|z\|\|z+w\|}$

is actually an equality if and only if ${\displaystyle z}$ and ${\displaystyle z+w}$ are linearly dependent. ${\displaystyle \square }$

3 Theorem (Riesz) Let ${\displaystyle {\mathcal {X}}}$ be a pre-Hilbert space and ${\displaystyle {\mathcal {M}}}$ be its subspace. The following are equivalent:

• (i) ${\displaystyle {\mathcal {X}}}$ is complete.
• (ii) ${\displaystyle {\mathcal {M}}}$ is dense if and only if ${\displaystyle {\mathcal {M}}^{\bot }=\{0\}}$.
• (iii) Every continuous linear functional ${\displaystyle f\in {\mathcal {X}}^{*}}$ has the form ${\displaystyle f(x)=\langle x,y\rangle }$ where ${\displaystyle y}$ is uniquely determined by ${\displaystyle f}$.

Proof: If ${\displaystyle {\overline {\mathcal {M}}}={\mathcal {X}}}$ and ${\displaystyle z\in {\mathcal {M}}^{\bot }}$, then ${\displaystyle z\in {\overline {\mathcal {M}}}\cap {\mathcal {M}}^{\bot }=\{0\}}$. (Note: completeness was not needed.) Conversely, if ${\displaystyle {\mathcal {M}}}$ is not dense, pick ${\displaystyle x\notin {\overline {\mathcal {M}}}}$; then it can be shown (TODO: using completeness) that there is ${\displaystyle y\in {\overline {\mathcal {M}}}}$ such that

${\displaystyle \|x-y\|=\inf\{\|x-w\|;w\in {\mathcal {M}}\}}$.

That is, ${\displaystyle 0\neq x-y\in {\overline {\mathcal {M}}}^{\bot }\subset {\mathcal {M}}^{\bot }}$. In sum, (i) implies (ii). To show (iii), we may suppose that ${\displaystyle f}$ is not identically zero; in view of (ii), there exists a ${\displaystyle z\in \ker(f)^{\bot }}$ with ${\displaystyle \|z\|=1}$. Since ${\displaystyle f(xf(z)-f(x)z)=0}$,

${\displaystyle 0=\langle xf(z)-f(x)z,z\rangle =\langle x,{\overline {f(z)}}z\rangle -f(x)}$.

The uniqueness holds since ${\displaystyle \langle x,y\rangle =\langle x,y_{2}\rangle }$ for all ${\displaystyle x}$ implies that ${\displaystyle y=y_{2}}$. Finally, (iii) implies reflexivity, which implies (i). ${\displaystyle \square }$

A complete pre-Hilbert space is called a Hilbert space.

3 Corollary Let ${\displaystyle {\mathcal {M}}}$ be a closed linear subspace of a Hilbert space ${\displaystyle {\mathcal {H}}}$. Then:

• (i) For any ${\displaystyle x\in {\mathcal {H}}}$ we can write ${\displaystyle x=y+z}$ where ${\displaystyle y\in {\mathcal {M}}}$ and ${\displaystyle z\in {\mathcal {M}}^{\bot }}$ and ${\displaystyle y,z}$ are uniquely determined by ${\displaystyle x}$.
• (ii) ${\displaystyle {\mathcal {M}}^{\bot \bot }={\overline {\mathcal {M}}}}$.

Proof: (i) Let ${\displaystyle x\in {\mathcal {H}}}$ be given. Define ${\displaystyle f(w)=\langle w,x\rangle }$ for each ${\displaystyle w\in {\mathcal {M}}}$. Since ${\displaystyle f}$ is continuous and linear on ${\displaystyle {\mathcal {M}}}$, which is itself a Hilbert space, there is ${\displaystyle y\in {\mathcal {M}}}$ such that ${\displaystyle f(w)=\langle w,y\rangle }$. It follows that ${\displaystyle \langle w,x-y\rangle =0}$ for any ${\displaystyle w\in {\mathcal {M}}}$; that is, ${\displaystyle z=x-y\in {\mathcal {M}}^{\bot }}$. For the uniqueness, if also ${\displaystyle x=y_{2}+z_{2}}$ with ${\displaystyle y_{2}\in {\mathcal {M}}}$ and ${\displaystyle z_{2}\in {\mathcal {M}}^{\bot }}$, then ${\displaystyle y-y_{2}=z_{2}-z\in {\mathcal {M}}\cap {\mathcal {M}}^{\bot }=\{0\}}$. (ii) If ${\displaystyle x\in {\mathcal {M}}}$, then ${\displaystyle x}$ is orthogonal to every element of ${\displaystyle {\mathcal {M}}^{\bot }}$; thus, ${\displaystyle {\mathcal {M}}\subset {\mathcal {M}}^{\bot \bot }}$, and since ${\displaystyle {\mathcal {M}}^{\bot \bot }}$ is closed, ${\displaystyle {\overline {\mathcal {M}}}\subset {\mathcal {M}}^{\bot \bot }}$. Conversely, if ${\displaystyle x\in {\mathcal {M}}^{\bot \bot }}$, write ${\displaystyle x=y+z}$ by (i), where ${\displaystyle y\in {\overline {\mathcal {M}}}}$ and ${\displaystyle z\in {\overline {\mathcal {M}}}^{\bot }={\mathcal {M}}^{\bot }}$. Then ${\displaystyle \|z\|^{2}=\langle x-y,z\rangle =\langle x,z\rangle -\langle y,z\rangle =0}$, since ${\displaystyle x\in {\mathcal {M}}^{\bot \bot }}$ and ${\displaystyle z\in {\mathcal {M}}^{\bot }}$. Thus, ${\displaystyle x=y\in {\overline {\mathcal {M}}}}$. ${\displaystyle \square }$
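For a concrete instance of the decomposition, here is a Python sketch in ${\displaystyle \mathbb {R} ^{3}}$ with ${\displaystyle {\mathcal {M}}}$ the span of the first two coordinate vectors (helper names are ours):

```python
# Decompose x in R^3 against the closed subspace M = span{e1, e2}:
# y is the orthogonal projection onto M, and z = x - y lies in M-perp.
x = (1.0, 2.0, 3.0)
y = (x[0], x[1], 0.0)   # component in M
z = (0.0, 0.0, x[2])    # component in M-perp

def inner(a, b):
    """Real inner product on R^3."""
    return sum(s * t for s, t in zip(a, b))

# x = y + z
assert tuple(a + b for a, b in zip(y, z)) == x
# z is orthogonal to every w in M (enough to check a spanning set)
assert inner(z, (1.0, 0.0, 0.0)) == 0.0
assert inner(z, (0.0, 1.0, 0.0)) == 0.0
# Pythagoras: ||x||^2 = ||y||^2 + ||z||^2
assert inner(x, x) == inner(y, y) + inner(z, z)
```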

## Integration

3 Theorem (Fundamental Theorem of Calculus) The following are equivalent.

• (i) The derivative of ${\displaystyle \int _{a}^{x}f(t)dt}$ at ${\displaystyle x}$ is ${\displaystyle f(x)}$.
• (ii) ${\displaystyle f}$ is absolutely continuous.

Proof: Suppose (ii). Since we have:

${\displaystyle \inf _{x\leq t\leq y}f(t)\leq (y-x)^{-1}\int _{x}^{y}f(t)\,dt\leq \sup _{x\leq t\leq y}f(t)}$,

it follows that, for any ${\displaystyle a}$,

${\displaystyle \lim _{y\to x}(y-x)^{-1}\left(\int _{a}^{y}f(t)\,dt-\int _{a}^{x}f(t)\,dt\right)=f(x)}$. ${\displaystyle \square }$
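As a numerical sanity check of the differentiation step, a Python sketch (the midpoint-rule integrator, the test function ${\displaystyle f(t)=t^{2}}$, and the step sizes are our own choices):

```python
# F(x) = integral_0^x f(t) dt for f(t) = t^2, approximated by midpoint sums;
# the difference quotient of F at x recovers f(x).
def f(t):
    return t * t

def F(x, steps=100000):
    """Midpoint-rule approximation of the integral of f over [0, x]."""
    h = x / steps
    return sum(f((k + 0.5) * h) for k in range(steps)) * h

x, h = 1.0, 1e-4
quotient = (F(x + h) - F(x)) / h
assert abs(quotient - f(x)) < 1e-3   # difference quotient is close to f(x) = 1
```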

## Differentiation

To differentiate ${\displaystyle f}$ at ${\displaystyle x}$ is to take the limit, as ${\displaystyle h\rightarrow 0}$, of the quotient:

${\displaystyle {f(x+h)-f(x) \over h}}$.

When the limit of the quotient indeed exists, we say ${\displaystyle f}$ is differentiable at ${\displaystyle x}$. The derivative of ${\displaystyle f}$, denoted by ${\displaystyle {\dot {f}}}$, is defined by ${\displaystyle {\dot {f}}(x)}$ = the limit of the quotient at ${\displaystyle x}$.

3.8. Theorem The power series:

${\displaystyle u=\sum _{0}^{\infty }a_{j}z^{j}}$

is analytic inside the radius of convergence.
Proof: The normal convergence of ${\displaystyle u}$ implies the theorem.

To show that every analytic function can be represented by a power series, we will wait for Cauchy's integral formula (though this is not strictly necessary).

We define the norm in ${\displaystyle \mathbb {R} ^{n}}$, thereby inducing a topology:

${\displaystyle \|x\|=\left(\sum _{1}^{n}x_{j}^{2}\right)^{1/2}}$.

The topology obtained in this way is often called the natural topology of ${\displaystyle \mathbb {R} ^{n}}$, since, so to speak, it is not imposed artificially.

3. Theorem (Euler's formula)

${\displaystyle z=|z|e^{i\theta }=|z|(\cos \theta +i\sin \theta )}$ for ${\displaystyle z\in \mathbb {C} }$, where ${\displaystyle \theta }$ is the argument of ${\displaystyle z}$.

Proof:
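Euler's formula can be checked numerically with the standard library (a Python sketch; the sample point ${\displaystyle z=3+4i}$ is an arbitrary choice):

```python
import cmath
import math

# Euler's formula: z = |z| e^{i theta} = |z| (cos theta + i sin theta).
z = 3.0 + 4.0j
r, theta = abs(z), cmath.phase(z)

assert abs(r * cmath.exp(1j * theta) - z) < 1e-12
assert abs(r * (math.cos(theta) + 1j * math.sin(theta)) - z) < 1e-12
```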

3. Theorem (Cauchy-Riemann equations) Suppose ${\displaystyle u\in {\mathcal {C}}^{1}(\Omega )}$. We have:

${\displaystyle {\partial u \over \partial {\bar {z}}}=0}$ on ${\displaystyle \Omega }$ if and only if ${\displaystyle {\partial u \over \partial x}-{1 \over i}{\partial u \over \partial y}=0}$ on ${\displaystyle \Omega }$.

Proof:

3. Corollary Let ${\displaystyle u,v}$ be analytic in ${\displaystyle \Omega }$. If ${\displaystyle {\mbox{Re }}u={\mbox{Re }}v}$, then ${\displaystyle u-v}$ is constant.
Proof: Let ${\displaystyle g=u-v}$. Then ${\displaystyle {\mbox{Re }}g=0}$; that is, ${\displaystyle g}$ maps ${\displaystyle \Omega }$ into the imaginary axis, which has empty interior. By the Cauchy-Riemann equations (or the open mapping theorem), ${\displaystyle g}$ is constant. ${\displaystyle \square }$

This furnishes examples of functions that are not analytic. For example, ${\displaystyle u(x+iy)=x+iy}$ is analytic everywhere, and that means ${\displaystyle v(x+iy)=x+icy}$ cannot be analytic unless ${\displaystyle c=1}$.

An operator ${\displaystyle f}$ is bounded if there exists a constant ${\displaystyle C>0}$ such that for every ${\displaystyle x}$:

${\displaystyle \|f(x)\|\leq C\|x\|}$.

3.1 Theorem Given a bounded operator ${\displaystyle f}$, if

${\displaystyle \alpha =\inf\{C:\|f(x)\|\leq C\|x\|{\text{ for all }}x\}}$, ${\displaystyle \beta =\sup _{\|x\|\leq 1}\|f(x)\|}$ and ${\displaystyle \gamma =\sup _{\|x\|=1}\|f(x)\|}$,

then ${\displaystyle \alpha =\beta =\gamma }$.
Proof: Since ${\displaystyle \{x:\|x\|=1\}\subset \{x:\|x\|\leq 1\}}$, we have ${\displaystyle \gamma \leq \beta }$. If ${\displaystyle \|x\|\leq 1}$ and ${\displaystyle C}$ is any admissible constant, then ${\displaystyle \|f(x)\|\leq C\|x\|\leq C}$; taking the supremum over ${\displaystyle x}$ and then the infimum over ${\displaystyle C}$ gives ${\displaystyle \beta \leq \alpha }$. Finally, for ${\displaystyle x\neq 0}$,

${\displaystyle \|f(x)\|=\left\|f\left({x \over \|x\|}\right)\right\|\|x\|\leq \gamma \|x\|}$,

so ${\displaystyle \gamma }$ is itself an admissible constant, and hence ${\displaystyle \alpha \leq \gamma }$. Combining, ${\displaystyle \alpha =\beta =\gamma }$. ${\displaystyle \square }$

We denote by ${\displaystyle \|f\|}$ the common value above, and call it the norm of ${\displaystyle f}$.
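The equality of the three expressions can be observed numerically for a concrete operator (a Python sketch; the matrix and the sampling scheme are our own choices, and sampling only approximates the suprema):

```python
import math

# A fixed 2x2 matrix as a bounded operator on R^2 (illustrative example).
A = ((2.0, 1.0), (0.0, 3.0))

def apply(A, v):
    """Matrix-vector product A v."""
    return tuple(sum(row[j] * v[j] for j in range(2)) for row in A)

def norm(v):
    return math.sqrt(sum(t * t for t in v))

# gamma: sup of ||Ax|| over the unit sphere, sampled over angles
angles = [2 * math.pi * k / 10000 for k in range(10000)]
gamma = max(norm(apply(A, (math.cos(t), math.sin(t)))) for t in angles)
# beta: sup over the closed unit ball, sampled as r * (unit vector), r <= 1
beta = max(norm(apply(A, (r * math.cos(t), r * math.sin(t))))
           for t in angles[::100] for r in (0.25, 0.5, 0.75, 1.0))
# the sup over the ball is attained on the sphere, so beta never exceeds gamma
assert beta <= gamma + 1e-9
```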

3.2 Corollary An operator ${\displaystyle f}$ is bounded if and only if it is continuous.
Proof: If ${\displaystyle f}$ is bounded, then, since for every ${\displaystyle x}$ and ${\displaystyle h}$

${\displaystyle \|f(x+h)-f(x)\|\leq \|f\|\|h\|}$,

${\displaystyle f}$ is continuous everywhere. Conversely, every continuous operator maps the open ball of radius 1 centered at 0 to some bounded set; thus, the norm ${\displaystyle \|f\|}$ of ${\displaystyle f}$ exists, and the corollary follows from the preceding theorem. ${\displaystyle \square }$

3. Theorem If F is a linear space of dimension ${\displaystyle n}$, then it has a chain of exactly ${\displaystyle n}$ subspaces including F and excluding {0}.
Proof: F has a basis of ${\displaystyle n}$ elements ${\displaystyle e_{1},...,e_{n}}$; take the subspaces spanned by ${\displaystyle e_{1},...,e_{k}}$ for ${\displaystyle k=1,...,n}$. ${\displaystyle \square }$

Theorem If ${\displaystyle E}$ is complete, then ${\displaystyle E^{n}=\{\sum _{1}^{n}x_{j}e_{j}:x_{j}\in E\}}$ (i.e., the cartesian product of ${\displaystyle n}$ copies of ${\displaystyle E}$, with the norm ${\displaystyle \|\sum _{1}^{n}x_{j}e_{j}\|=\sum _{1}^{n}\|x_{j}\|}$) is complete.
Proof: Let ${\displaystyle z_{m}=\sum _{1}^{n}x_{j,m}e_{j}}$ be a Cauchy sequence in ${\displaystyle E^{n}}$. Since

${\displaystyle \|x_{j,m}-x_{j,l}\|\leq \sum _{1}^{n}\|x_{j,m}-x_{j,l}\|=\|z_{m}-z_{l}\|\to 0}$ as ${\displaystyle m,l\to \infty }$,

each coordinate sequence ${\displaystyle x_{j,m}}$ is a Cauchy sequence in ${\displaystyle E}$. By completeness, the respective limits ${\displaystyle x_{j}}$ are in ${\displaystyle E}$; thus, the limit ${\displaystyle z=\sum _{1}^{n}x_{j}e_{j}}$ is in ${\displaystyle E^{n}}$. ${\displaystyle \square }$

The theorem shows in particular that ${\displaystyle \mathbb {R} ,\mathbb {R^{n}} ,\mathbb {C} ,\mathbb {C^{n}} }$ are complete.

3. Theorem (Hamel basis) The Axiom of Choice implies that every linear space has a basis.
Proof: We may suppose the space is infinite-dimensional; otherwise the theorem holds trivially. Order the linearly independent subsets of the space by inclusion. The union of a chain of linearly independent sets is again linearly independent, so Zorn's Lemma gives a maximal linearly independent set, which is a basis. ${\displaystyle \square }$

FIXME: Adopt [3].

3. Theorem (Fixed Point Theorem) Suppose a function ${\displaystyle f}$ maps a closed subset ${\displaystyle F}$ of a Banach space to itself, and further suppose that there exists some ${\displaystyle c<1}$ such that ${\displaystyle \|f(x)-f(y)\|\leq c\|x-y\|}$ for any ${\displaystyle x}$ and ${\displaystyle y}$. Then ${\displaystyle f}$ has a unique fixed point.
Proof: For some ${\displaystyle x\in F}$, let ${\displaystyle s_{n}}$ be the sequence ${\displaystyle x,f(x),f(f(x)),f(f(f(x))),...}$. For any ${\displaystyle n}$ we have:

${\displaystyle \|s_{n+1}-s_{n}\|=\|f(s_{n})-f(s_{n-1})\|\leq c\|s_{n}-s_{n-1}\|}$.

By induction it follows:

${\displaystyle \|s_{n+1}-s_{n}\|\leq c^{n}\|s_{1}-s_{0}\|}$.

Thus, ${\displaystyle s_{n}}$ is a Cauchy sequence since:

${\displaystyle \|s_{n+k}-s_{n}\|\leq \sum _{j=0}^{k-1}\|s_{n+j+1}-s_{n+j}\|\leq \|s_{1}-s_{0}\|c^{n}\sum _{j=0}^{k-1}c^{j}\leq \|s_{1}-s_{0}\|{c^{n} \over 1-c}}$.

That ${\displaystyle F}$ is closed puts the limit of ${\displaystyle s_{n}}$ in ${\displaystyle F}$, and since ${\displaystyle f}$ is continuous, ${\displaystyle f(\lim s_{n})=\lim f(s_{n})=\lim s_{n+1}=\lim s_{n}}$; that is, the limit is a fixed point. Finally, the uniqueness follows since if ${\displaystyle f(x)=x}$ and ${\displaystyle f(y)=y}$, then

${\displaystyle \|x-y\|=\|f(x)-f(y)\|\leq c\|x-y\|}$, which forces ${\displaystyle 1\leq c}$ unless ${\displaystyle x=y}$. ${\displaystyle \square }$
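
The iteration in the proof can be run directly; a minimal sketch, assuming the illustrative contraction ${\displaystyle f(x)=\cos x}$ on ${\displaystyle F=[0,1]}$ (where ${\displaystyle c=\sin 1<1}$), which is my choice of example rather than anything from the text:

```python
# The contraction iteration from the proof: s_0 = x, s_{n+1} = f(s_n),
# for f(x) = cos(x), a contraction on the closed set F = [0, 1].
import math

def f(x):
    return math.cos(x)

s = 1.0                  # s_0 = x, any starting point in F
for _ in range(100):     # s_{n+1} = f(s_n)
    s = f(s)

# s approximates the unique fixed point: f(s) = s
assert abs(f(s) - s) < 1e-9
```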

3. Corollary (mean value inequality) Let ${\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} ^{m}}$ be differentiable. Then for any ${\displaystyle x,y}$ there exists some ${\displaystyle z=(1-t)x+ty}$ with ${\displaystyle t\in [0,1]}$ such that

${\displaystyle \|f(x)-f(y)\|\leq \|f'(z)\|\|x-y\|}$

where the equality holds if ${\displaystyle n=m=1}$ (mean value theorem).
Proof:

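For ${\displaystyle n=m=1}$ the statement is the classical mean value theorem and can be checked concretely; a sketch with the illustrative choice ${\displaystyle f=\sin }$ on ${\displaystyle [0,1]}$, where ${\displaystyle z=\pi /2-1}$ realizes the equality since ${\displaystyle \cos(\pi /2-1)=\sin 1}$:

```python
# Mean value theorem for f = sin on [x, y] = [0, 1]: there is an
# intermediate point z with f(y) - f(x) = f'(z)(y - x).
import math

x, y = 0.0, 1.0
z = math.pi / 2 - 1                    # intermediate point, z = (1-t)x + ty
assert 0 <= z <= 1
lhs = abs(math.sin(y) - math.sin(x))
rhs = abs(math.cos(z)) * abs(y - x)    # |f'(z)| |x - y|
assert abs(lhs - rhs) < 1e-12          # equality in the one-dimensional case
```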
Theorem Let ${\displaystyle f:E\to \mathbb {R} }$ where ${\displaystyle E\subset \mathbb {R} ^{n}}$ is open. If ${\displaystyle D_{1}f,D_{2}f,...,D_{n}f}$ are bounded in ${\displaystyle E}$, then ${\displaystyle f}$ is continuous.
Proof: Let ${\displaystyle \epsilon >0}$ and ${\displaystyle x\in E}$ be given. Using the assumption, we find a constant ${\displaystyle M}$ so that:

${\displaystyle \sup _{E}|D_{i}f|<M}$ for ${\displaystyle i=1,2,...,n}$.

Let ${\displaystyle \delta =\epsilon (nM)^{-1}}$. Suppose ${\displaystyle |h|<\delta }$ and ${\displaystyle x+h\in E}$. Let

${\displaystyle \phi _{k}(t)=f\left(x+\sum _{j=1}^{k-1}(h\cdot e_{j})e_{j}+t(h\cdot e_{k})e_{k}\right)}$ for ${\displaystyle k=1,2,...,n}$.

Then by the mean value theorem, we have: for some ${\displaystyle c\in (0,1)}$,

${\displaystyle |\phi _{k}(1)-\phi _{k}(0)|=|h\cdot e_{k}|\left|D_{k}f\left(x+\sum _{j=1}^{k-1}(h\cdot e_{j})e_{j}+c(h\cdot e_{k})e_{k}\right)\right|\leq |h|M}$.

It thus follows: since ${\displaystyle \phi _{k}(0)=\phi _{k-1}(1)}$,

${\displaystyle |f(x+h)-f(x)|=|\phi _{n}(1)-\phi _{1}(0)|=\left|\sum _{k=1}^{n}(\phi _{k}(1)-\phi _{k}(0))\right|\leq |h|nM<\delta nM=\epsilon }$. ${\displaystyle \square }$
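
The estimate ${\displaystyle |f(x+h)-f(x)|\leq nM|h|}$ can be sampled numerically; a sketch with the illustrative choice ${\displaystyle f(x,y)=\sin x+\cos y}$, whose partials are bounded by ${\displaystyle M=1}$ on all of ${\displaystyle \mathbb {R} ^{2}}$:

```python
# Sampling the bound |f(x+h) - f(x)| <= n M |h| for
# f(x, y) = sin(x) + cos(y), with n = 2 and M = 1.
import math, random

def f(p):
    return math.sin(p[0]) + math.cos(p[1])

M, n = 1.0, 2
random.seed(1)
for _ in range(1000):
    x = [random.uniform(-10, 10) for _ in range(2)]
    h = [random.uniform(-1, 1) for _ in range(2)]
    xh = [x[0] + h[0], x[1] + h[1]]
    hn = math.sqrt(h[0] ** 2 + h[1] ** 2)       # |h|
    assert abs(f(xh) - f(x)) <= n * M * hn + 1e-12
```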

Theorem (differentiation rules) Given ${\displaystyle f,g:\mathbb {R} \to \mathbb {R} }$ differentiable,

• (a) (Chain Rule) ${\displaystyle D(g\circ f)=(D(g)\circ f)D(f)}$.
• (b) (Product Rule) ${\displaystyle D(fg)=D(f)g+fD(g)}$.
• (c) (Quotient Rule) ${\displaystyle D(f/g)=g^{-2}(D(f)g-fD(g))}$.

Proof: (b) and (c) follow after we apply (a) to them with ${\displaystyle \log }$, ${\displaystyle h(x)=x^{-1}}$ and the implicit function theorem. ${\displaystyle \square }$
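
The rules can be spot-checked with central differences standing in for ${\displaystyle D}$; the sample functions below are illustrative choices:

```python
# Verifying the product and quotient rules at a point, with a
# central-difference approximation playing the role of D.
def D(f, x, eps=1e-6):
    return (f(x + eps) - f(x - eps)) / (2 * eps)

f = lambda x: x ** 3 + 1          # sample differentiable functions
g = lambda x: 2 * x + 5

x0 = 1.3
prod = lambda x: f(x) * g(x)
quot = lambda x: f(x) / g(x)

# (b) Product Rule: D(fg) = D(f)g + f D(g)
assert abs(D(prod, x0) - (D(f, x0) * g(x0) + f(x0) * D(g, x0))) < 1e-4
# (c) Quotient Rule: D(f/g) = g^{-2}(D(f)g - f D(g))
assert abs(D(quot, x0) - (D(f, x0) * g(x0) - f(x0) * D(g, x0)) / g(x0) ** 2) < 1e-4
```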

Theorem (Cauchy-Riemann equations) Let ${\displaystyle \Omega \subset \mathbb {C} }$ be open and ${\displaystyle u:\Omega \to \mathbb {C} }$. Then ${\displaystyle u}$ is differentiable if and only if ${\displaystyle {\partial \over \partial x}u}$ and ${\displaystyle {\partial \over \partial y}u}$ are continuous on ${\displaystyle \Omega }$ and ${\displaystyle {\partial \over \partial {\bar {z}}}u=0}$ on ${\displaystyle \Omega }$.
Proof: Suppose ${\displaystyle u}$ is differentiable. Let ${\displaystyle z\in \Omega }$ and ${\displaystyle x={\mbox{Re}}z}$ and ${\displaystyle y={\mbox{Im}}z}$.

 ${\displaystyle u'(z)}$ ${\displaystyle =\lim _{h\in \mathbb {R} \to 0}{u(x+h,y)-u(x,y) \over h}={\partial \over \partial x}u(z)}$ ${\displaystyle =\lim _{h\in \mathbb {R} \to 0}{u(x,y+h)-u(x,y) \over ih}={1 \over i}{\partial \over \partial y}u(z)}$

Since ${\displaystyle x={z+{\overline {z}} \over 2}}$ and ${\displaystyle y={z-{\bar {z}} \over 2i}}$, the Chain Rule gives:

 ${\displaystyle {\partial \over \partial {\bar {z}}}u}$ ${\displaystyle =\left({\partial x \over \partial {\bar {z}}}{\partial \over \partial x}+{\partial y \over \partial {\bar {z}}}{\partial \over \partial y}\right)u}$ ${\displaystyle ={1 \over 2}\left({\partial \over \partial x}-{1 \over i}{\partial \over \partial y}\right)u}$ ${\displaystyle =0}$.

Conversely, let ${\displaystyle z\in \Omega }$. It suffices to show that ${\displaystyle u'(z)}$ exists and equals ${\displaystyle {\partial \over \partial x}u(z)}$. Let ${\displaystyle \epsilon >0}$ be given and ${\displaystyle x=\Re z}$ and ${\displaystyle y=\Im z}$. By the continuity of the partial derivatives and the openness of ${\displaystyle \Omega }$, we can find a ${\displaystyle \delta >0}$ so that ${\displaystyle B(\delta ,z)\subset \Omega }$ and for ${\displaystyle s\in B(\delta ,z)}$ it holds:

${\displaystyle \left|{\partial \over \partial x}(u(s)-u(z))\right|<\epsilon /2}$ and ${\displaystyle \left|{\partial \over \partial y}(u(s)-u(z))\right|<\epsilon /2}$.

Let ${\displaystyle h\in B(\delta ,0)}$ be given and ${\displaystyle h_{1}=\Re h}$ and ${\displaystyle h_{2}=\Im h}$. Using the mean value theorem we have: for some ${\displaystyle s_{1},s_{2}\in B(\delta ,z)}$,

${\displaystyle u(x+h_{1},y+h_{2})-u(x,y)=u(x+h_{1},y+h_{2})-u(x,y+h_{2})+u(x,y+h_{2})-u(x,y)=h_{1}{\partial \over \partial x}u(s_{1})+h_{2}{\partial \over \partial y}u(s_{2})}$

where ${\displaystyle {\partial \over \partial y}u=i{\partial \over \partial x}u}$ by assumption. Finally it now follows:

${\displaystyle \left|{u(z+h)-u(z) \over h}-{\partial \over \partial x}u(z)\right|\leq \left|{h_{1} \over h}\right|\left|{\partial \over \partial x}(u(s_{1})-u(z))\right|+\left|{h_{2} \over h}\right|\left|{\partial \over \partial x}(u(s_{2})-u(z))\right|<\epsilon }$ ${\displaystyle \square }$
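
The criterion ${\displaystyle {\partial \over \partial {\bar {z}}}u={1 \over 2}\left({\partial \over \partial x}+i{\partial \over \partial y}\right)u=0}$ can be tested numerically; a sketch with the illustrative examples ${\displaystyle u(z)=z^{2}}$ (differentiable) and ${\displaystyle u(z)={\bar {z}}}$ (not):

```python
# Numerical Wirtinger derivative d/dz-bar u = (1/2)(du/dx + i du/dy):
# it vanishes for the analytic u(z) = z^2 but not for u(z) = conj(z).
def dbar(u, z, eps=1e-6):
    ux = (u(z + eps) - u(z - eps)) / (2 * eps)            # du/dx
    uy = (u(z + 1j * eps) - u(z - 1j * eps)) / (2 * eps)  # du/dy
    return (ux + 1j * uy) / 2

z0 = 1.0 + 2.0j
assert abs(dbar(lambda z: z ** 2, z0)) < 1e-6                  # analytic
assert abs(dbar(lambda z: z.conjugate(), z0) - 1) < 1e-6       # dbar(conj z) = 1
```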

3 Corollary Let ${\displaystyle u\in {\mathcal {A}}(\Omega )}$ and suppose ${\displaystyle \Omega }$ is connected. Then the following are equivalent:

• (a) ${\displaystyle u}$ is constant.
• (b) ${\displaystyle {\mbox{Re}}u}$ is constant.
• (c) ${\displaystyle |u|}$ is constant.

Proof: That (a) ${\displaystyle \Rightarrow }$ (b) is obvious. Suppose (b); then ${\displaystyle \Re u=M}$ for some constant ${\displaystyle M}$, so for all ${\displaystyle z\in \Omega }$,

${\displaystyle |e^{u}|=|e^{\Re u}e^{i\Im u}|=|e^{\Re u}|=e^{M}}$,

i.e., ${\displaystyle e^{u}}$ has constant modulus. Applying (c) ${\displaystyle \Rightarrow }$ (a), proved next, to ${\displaystyle e^{u}}$ shows that ${\displaystyle e^{u}}$ is constant; hence ${\displaystyle u'e^{u}=(e^{u})'=0}$, so ${\displaystyle u}$ is constant and in particular ${\displaystyle |u|}$ is constant. Thus, (b) ${\displaystyle \Rightarrow }$ (c). Suppose (c). Then ${\displaystyle M^{2}=|u|^{2}=u{\overline {u}}}$ for some constant ${\displaystyle M}$. Differentiating both sides we get:

${\displaystyle 0={\partial \over \partial z}u{\overline {u}}=u{\partial \over \partial z}{\overline {u}}+{\overline {u}}{\partial \over \partial z}u}$.

Since ${\displaystyle u\in {\mathcal {A}}(\Omega )}$, we have ${\displaystyle {\partial \over \partial z}{\overline {u}}={\overline {{\partial \over \partial {\bar {z}}}u}}=0}$, and so ${\displaystyle {\overline {u}}{\partial \over \partial z}u=0}$. If ${\displaystyle {\overline {u}}=0}$, then ${\displaystyle u=0}$. If ${\displaystyle {\partial \over \partial z}u=0}$, then ${\displaystyle u}$ is constant since ${\displaystyle \Omega }$ is connected. Thus, (c) ${\displaystyle \Rightarrow }$ (a). ${\displaystyle \square }$

We say a function has the open mapping property if it maps open sets to open sets. The maximum principle can then be stated equivalently as:

• if a function has a local maximum in modulus, then the function is constant.

3 Theorem Let ${\displaystyle u:\Omega \to \mathbb {C} }$. The following are equivalent:

• (a) ${\displaystyle u}$ is harmonic.
• (b) ${\displaystyle u}$ has the mean value property.

3 Theorem Let ${\displaystyle u:\Omega \to \mathbb {C} }$. If ${\displaystyle u}$ has the open mapping property, then the maximum principle holds.
Proof: Suppose ${\displaystyle u\in {\mathcal {A}}(\Omega )}$ and ${\displaystyle \Omega }$ is open and connected. Let ${\displaystyle \omega =\{z\in \Omega :|u(z)|=\sup _{\Omega }|u|\}}$. If ${\displaystyle |u|}$ attains its maximum, then ${\displaystyle \omega }$ is nonempty. Also, ${\displaystyle \omega }$ is closed in ${\displaystyle \Omega }$ since ${\displaystyle \omega =|u|^{-1}(\{\sup _{\Omega }|u|\})}$ and ${\displaystyle |u|}$ is continuous. Let ${\displaystyle a\in \omega }$. Since ${\displaystyle \Omega }$ is open, we can find an ${\displaystyle r>0}$ so that ${\displaystyle B=B(r,a)\subset \Omega }$. If ${\displaystyle u}$ is not constant on ${\displaystyle B}$, then ${\displaystyle u(B)}$ is open by the open mapping property, so we can find an ${\displaystyle \epsilon >0}$ so that ${\displaystyle B(\epsilon ,u(a))\subset u(B)}$. This is to say that ${\displaystyle |u(z)|>|u(a)|}$ for some ${\displaystyle z\in B}$, which is absurd since ${\displaystyle a\in \omega }$ and ${\displaystyle |u(z)|\leq |u(a)|}$ for all ${\displaystyle z\in \Omega }$. Thus, ${\displaystyle u}$ is constant on ${\displaystyle B}$; in particular, ${\displaystyle B\subset \omega }$ and ${\displaystyle \omega }$ is open in ${\displaystyle \Omega }$. Since ${\displaystyle \Omega }$ is connected and ${\displaystyle \omega }$ is nonempty, open, and closed, ${\displaystyle \omega =\Omega }$; that is, ${\displaystyle |u|=\sup _{\Omega }|u|}$ on ${\displaystyle \Omega }$, and ${\displaystyle u}$ is constant by the preceding corollary. ${\displaystyle \square }$
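
The conclusion can be observed numerically; a sketch sampling ${\displaystyle |u|}$ for the illustrative choice ${\displaystyle u(z)=z^{2}+1}$ on the closed unit disk, where the maximum modulus is attained only on the boundary ${\displaystyle |z|=1}$:

```python
# Sampling |u| for u(z) = z^2 + 1 on a grid over the closed unit disk:
# the largest sampled value occurs at a point on the boundary.
import math

best_r, best_val = 0.0, 0.0
for i in range(60):
    for j in range(60):
        x, y = -1 + i / 30, -1 + j / 30
        if x * x + y * y <= 1:               # stay inside the closed disk
            v = abs(complex(x, y) ** 2 + 1)
            if v > best_val:
                best_val, best_r = v, math.hypot(x, y)

assert best_r > 0.95    # the maximizer lies (numerically) on the boundary
```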

Exercise Let ${\displaystyle f\in {\mathcal {A}}(\mathbb {C} )}$. Then ${\displaystyle f}$ is a polynomial of degree ${\displaystyle \leq n}$ if and only if there are constants ${\displaystyle A}$ and ${\displaystyle B}$ such that ${\displaystyle |f(z)|\leq A+B|z|^{n}}$ for all ${\displaystyle z\in \mathbb {C} }$.
Exercise 2 Let ${\displaystyle f:A\to A}$ be linear. Further suppose ${\displaystyle A}$ has dimension ${\displaystyle n<\infty }$. Then the following are equivalent:
1. ${\displaystyle f^{-1}}$ exists
2. ${\displaystyle \det(f)\neq 0}$ where ${\displaystyle \det(f)=\sum _{\sigma \in S_{n}}\operatorname {sgn}(\sigma )\prod _{j=1}^{n}x_{\sigma (j)j}}$ and ${\displaystyle (x_{ij})}$ is the matrix of ${\displaystyle f}$ with respect to the standard basis
3. ${\displaystyle \left\{f{\begin{bmatrix}1\\0\\\vdots \\0\end{bmatrix}},f{\begin{bmatrix}0\\1\\\vdots \\0\end{bmatrix}},...,f{\begin{bmatrix}0\\0\\\vdots \\1\end{bmatrix}}\right\}}$ spans a space of dimension ${\displaystyle n}$.
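
The permutation-sum (Leibniz) formula for ${\displaystyle \det }$ in (2) can be computed directly for small ${\displaystyle n}$; a minimal sketch, where the sample matrices are illustrative:

```python
# det(f) = sum over permutations sigma of sgn(sigma) * prod_j x_{sigma(j), j},
# compared against invertibility of the matrix.
from itertools import permutations

def sgn(p):                      # sign of a permutation, by counting inversions
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def det(x):                      # x is an n x n matrix given as a list of rows
    n = len(x)
    total = 0
    for p in permutations(range(n)):
        term = sgn(p)
        for j in range(n):
            term *= x[p[j]][j]   # x_{sigma(j), j}
        total += term
    return total

A = [[1, 2], [3, 4]]
assert det(A) == -2              # nonzero: A is invertible
B = [[1, 2], [2, 4]]
assert det(B) == 0               # zero: the columns of B are dependent
```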