User:TakuyaMurata/Differential forms

In particular, the chapter covers subharmonic functions.

Implicit function theorem

4 Theorem A linear operator ${\displaystyle T}$ from a finite-dimensional vector space ${\displaystyle {\mathcal {X}}}$ into itself is injective if and only if it is surjective.
Proof: Let ${\displaystyle e_{1},\dots ,e_{n}}$ be a basis for ${\displaystyle {\mathcal {X}}}$. The following are equivalent: (i) ${\displaystyle T}$ has zero kernel. (ii) ${\displaystyle 0=T(\sum _{j=1}^{n}a_{j}e_{j})=\sum _{j=1}^{n}a_{j}T(e_{j})}$ implies that all the ${\displaystyle a_{j}}$ are zero. (iii) ${\displaystyle T(e_{1}),\dots ,T(e_{n})}$ is a basis for ${\displaystyle {\mathcal {X}}}$. Since the range of ${\displaystyle T}$ is the span of the set ${\displaystyle \{T(e_{1}),\dots ,T(e_{n})\}}$, the theorem now follows. ${\displaystyle \square }$
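The theorem can be checked numerically via rank–nullity; a sketch using NumPy (the matrices are my own examples, not from the text): a square matrix is injective exactly when its kernel is trivial, i.e. its rank equals the dimension, which is also the condition for surjectivity.

```python
import numpy as np

def is_injective(T):
    # trivial kernel <=> nullity 0 <=> rank equals number of columns
    return np.linalg.matrix_rank(T) == T.shape[1]

def is_surjective(T):
    # range spans the whole space <=> rank equals number of rows
    return np.linalg.matrix_rank(T) == T.shape[0]

A = np.array([[1.0, 2.0], [3.0, 4.0]])   # invertible, rank 2
B = np.array([[1.0, 2.0], [2.0, 4.0]])   # rank 1

inj_iff_surj = (is_injective(A) == is_surjective(A)) and \
               (is_injective(B) == is_surjective(B))
```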

4 Theorem Let ${\displaystyle \Omega }$ be a neighborhood of a point ${\displaystyle (a,b)\in \mathbb {R} ^{n}\times \mathbb {R} ^{m}}$. If ${\displaystyle f_{j}(a,b)=0}$ and ${\displaystyle f_{j}\in {\mathcal {C}}^{1}(\Omega )}$ for ${\displaystyle j=1,\dots ,n}$, and if the matrix

${\displaystyle {\begin{bmatrix}{\partial f_{1} \over \partial x_{1}}&\cdots &{\partial f_{1} \over \partial x_{n}}\\\vdots &\ddots &\vdots \\{\partial f_{n} \over \partial x_{1}}&\cdots &{\partial f_{n} \over \partial x_{n}}\end{bmatrix}}}$

is invertible at ${\displaystyle (a,b)}$, then the equations ${\displaystyle f_{j}(x,y)=0,\ j=1,\dots ,n}$, have a unique solution ${\displaystyle x=x(y)}$ such that ${\displaystyle x(b)=a}$, and ${\displaystyle x}$ is ${\displaystyle {\mathcal {C}}^{1}}$ in some neighborhood of ${\displaystyle b}$.
Proof (from [1]):

We need

4 Lemma If a ${\displaystyle {\mathcal {C}}^{1}}$ map ${\displaystyle T}$ is injective with invertible derivative in ${\displaystyle \Omega }$, then ${\displaystyle T^{-1}}$ is defined and continuously differentiable in ${\displaystyle T(\Omega )}$.

Let ${\displaystyle F(x,y)=(f(x,y),y)}$ for ${\displaystyle (x,y)\in \Omega }$.
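A concrete illustration of the theorem (a numerical sketch; the function and point are my own choice, not from the text): take ${\displaystyle n=m=1}$ and ${\displaystyle f(x,y)=x^{2}+y^{2}-1}$ near ${\displaystyle (a,b)=(0.6,0.8)}$. Since ${\displaystyle \partial f/\partial x=2x\neq 0}$ there, the theorem yields ${\displaystyle x(y)={\sqrt {1-y^{2}}}}$ with ${\displaystyle x(b)=a}$, and implicit differentiation gives ${\displaystyle dx/dy=-(\partial f/\partial y)/(\partial f/\partial x)=-y/x}$.

```python
import math

def f(x, y):
    return x**2 + y**2 - 1.0

a, b = 0.6, 0.8

def x_of_y(y):
    # the implicit solution near (a, b)
    return math.sqrt(1.0 - y**2)

# implicit-derivative formula dx/dy = -f_y / f_x = -y/x at (a, b)
implicit_slope = -b / a

# compare against a centered finite difference of x(y) at y = b
h = 1e-6
fd_slope = (x_of_y(b + h) - x_of_y(b - h)) / (2 * h)
```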

Connected spaces

The space A at top is connected; the shaded space B at bottom is not.

A set ${\displaystyle E}$ is connected if it cannot be covered by two disjoint open sets each of which meets ${\displaystyle E}$.

A connected component of a set ${\displaystyle E}$ in ${\displaystyle G}$ is the maximal connected subset of ${\displaystyle G}$ containing ${\displaystyle E}$; that is, the component is the union of all connected subsets of ${\displaystyle G}$ containing ${\displaystyle E}$. Every topological space, in other words, is the disjoint union of its components, which are necessarily closed. That a topological space consists of exactly one component is equivalent to the space being connected.

To give an example, equip an arbitrary set ${\displaystyle G}$ with the topology consisting of all subsets of ${\displaystyle G}$ (i.e., the finest, or discrete, topology). Then every subset of ${\displaystyle G}$ is both open and closed. The components of ${\displaystyle G}$ are precisely the singletons, since any subset with more than one point is covered by two disjoint nonempty open sets.
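For subsets of ${\displaystyle \mathbb {R} }$ built from finitely many closed intervals, components can be computed by merging intervals that overlap; a small sketch (the example intervals are my own choice):

```python
def components(intervals):
    """Connected components of a finite union of closed intervals in R,
    obtained by merging intervals that overlap or touch."""
    merged = []
    for lo, hi in sorted(intervals):
        if merged and lo <= merged[-1][1]:   # overlaps/touches the last component
            merged[-1] = (merged[-1][0], max(merged[-1][1], hi))
        else:
            merged.append((lo, hi))
    return merged

# [0,1] and [2,3] U [2.5,4] give two components: [0,1] and [2,4]
comps = components([(2.0, 3.0), (0.0, 1.0), (2.5, 4.0)])
```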

4.3 Theorem Let ${\displaystyle G}$ be a topological space. The following are equivalent.

1. ${\displaystyle G}$ is connected.
2. If ${\displaystyle G=A\cup B}$ for nonempty sets ${\displaystyle A}$ and ${\displaystyle B}$, then ${\displaystyle {\overline {A}}\cap B}$ and ${\displaystyle A\cap {\overline {B}}}$ are not both empty.
3. Only ${\displaystyle \varnothing }$ and ${\displaystyle G}$ have empty boundary.

Proof: Suppose ${\displaystyle G=A\cup B}$ for some nonempty sets ${\displaystyle A}$ and ${\displaystyle B}$ with ${\displaystyle {\overline {A}}\cap B}$ and ${\displaystyle A\cap {\overline {B}}}$ both empty. Since ${\displaystyle {\overline {A}}\subset G=A\cup B}$ and ${\displaystyle {\overline {A}}\cap B=\varnothing }$, we get ${\displaystyle {\overline {A}}\subset A}$; that is, ${\displaystyle A}$ is closed, and similarly ${\displaystyle B}$ is closed. Since ${\displaystyle A\cap B\subset A\cap {\overline {B}}=\varnothing }$, the sets are disjoint, so each is the complement of a closed set; hence ${\displaystyle G}$ is the union of two disjoint nonempty open sets and (1) is false. This shows that (1) implies (2). Now suppose ${\displaystyle E}$ is a nonempty, open, closed subset of ${\displaystyle G}$ that is not ${\displaystyle G}$; that is, (3) fails. Then ${\displaystyle G\backslash E}$ is also nonempty, open, and closed. Thus, ${\displaystyle G=E\cup (G\backslash E)}$ with ${\displaystyle {\overline {E}}\cap (G\backslash E)=\varnothing =E\cap {\overline {G\backslash E}}}$, contradicting (2). Hence, (2) implies (3). Finally, suppose (1) is false; that is, ${\displaystyle G=A\cup B}$ for disjoint nonempty open sets ${\displaystyle A}$ and ${\displaystyle B}$. Then ${\displaystyle A}$ is also closed, so its boundary is empty, while ${\displaystyle A}$ is neither ${\displaystyle \varnothing }$ nor ${\displaystyle G}$; thus (3) fails. Hence, (3) implies (1). ${\displaystyle \square }$

A path is a continuous function from [0, 1] to some space; e.g., the straight line from ${\displaystyle a}$ to ${\displaystyle b}$ represented by ${\displaystyle f(t)=(1-t)a+tb}$. A path is a loop if ${\displaystyle f(0)=f(1)}$; e.g., the unit circle represented by ${\displaystyle f(t)=e^{2\pi it}}$.
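These definitions can be checked numerically; a sketch with a straight-line path and the unit-circle loop (the endpoints are my own choice):

```python
import cmath

a, b = 0.0 + 0.0j, 1.0 + 2.0j

def line(t):
    # straight-line path from a to b
    return (1 - t) * a + t * b

def circle(t):
    # the unit circle traversed once, as a loop
    return cmath.exp(2j * cmath.pi * t)

is_loop = abs(circle(0) - circle(1)) < 1e-12
```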

Two points ${\displaystyle a}$ and ${\displaystyle b}$ are said to be joined by a path ${\displaystyle f}$ if ${\displaystyle f(0)=a}$ and ${\displaystyle f(1)=b}$. We say a space is path-connected if every pair of its points is joined by a path; the importance of this notion is the following.

5.1 Theorem An open set ${\displaystyle E\subset \mathbb {R} ^{n}}$ is path-connected if and only if it is connected.

Two paths that are homotopic.

Two paths ${\displaystyle f}$ and ${\displaystyle g}$ with the same endpoints are said to be homotopic if one can be continuously deformed into the other; that is, if there exists a continuous ${\displaystyle H:[0,1]\times [0,1]\to X}$ with ${\displaystyle H(0,t)=f(t)}$ and ${\displaystyle H(1,t)=g(t)}$.

We say a space is simply connected if every loop in the space is homotopic to a point. For example, in the plane ${\displaystyle \mathbb {R} ^{2}}$, every circle centered at the origin is homotopic to a point. But in ${\displaystyle \mathbb {R} ^{2}\backslash \{0\}}$ the circle fails to be homotopic to a point. Hence, the former is simply connected while the latter is not. We also see, in light of Theorem 5.1, that every simply-connected space is connected.

5.1 Theorem Let ${\displaystyle E\subset \mathbb {R} ^{n}}$ be open. The following are equivalent.

• (i) ${\displaystyle df=0}$ implies that ${\displaystyle f}$ is constant for any ${\displaystyle f\in {\mathcal {C}}^{1}(E)}$
• (ii) ${\displaystyle E}$ is connected.

Partitions of unity

4 Lemma (Urysohn) A topological space ${\displaystyle X}$ is normal if and only if for any disjoint closed sets ${\displaystyle A}$ and ${\displaystyle B}$ there exists a continuous function ${\displaystyle f}$ such that ${\displaystyle 0\leq f\leq 1}$, ${\displaystyle f=0}$ on ${\displaystyle A}$ and ${\displaystyle f=1}$ on ${\displaystyle B}$. Proof (from Urysohn's lemma):

4 Corollary A topological space ${\displaystyle X}$ is completely regular if and only if there exists a continuous injection from ${\displaystyle X}$ to a compact Hausdorff space with continuous inverse.

4 Theorem A Hausdorff space is paracompact if and only if every open cover admits a subordinate partition of unity.

to be merged

In this chapter, we shall prove (after some preliminary work) Cauchy's integral formula, first by Stokes' theorem and then again by the notion of the winding number.

6.1 Theorem There exists a partition of unity ${\displaystyle \phi _{j}}$ subordinate to the cover ${\displaystyle \{G_{j}\}}$; that is:

• (a) ${\displaystyle \phi _{j}}$ is infinitely differentiable.
• (b) ${\displaystyle {\mbox{supp }}\phi _{j}}$ is in ${\displaystyle G_{j}}$.
• (c) If ${\displaystyle x}$ is in some ${\displaystyle G_{j}}$, then ${\displaystyle \sum _{1}^{N}\phi _{j}=1}$ near ${\displaystyle x}$ for some ${\displaystyle N}$. (local finiteness)

Proof: Let ${\displaystyle G}$ = the union of all ${\displaystyle G_{j}}$. Choose ${\displaystyle g_{j}}$ in ${\displaystyle C^{\infty }(G)}$ so that {all ${\displaystyle {\mbox{supp }}g_{j}}$} covers ${\displaystyle G}$ and ${\displaystyle 0\leq g_{j}\leq 1}$. (See the lemma for why this is possible.)
Let ${\displaystyle \phi _{1}=g_{1}}$, ${\displaystyle \phi _{2}=(1-g_{1})g_{2}}$, ${\displaystyle \phi _{3}=(1-g_{1})(1-g_{2})g_{3}}$ and so forth. If ${\displaystyle \sum _{1}^{m}\phi _{j}=1-\prod _{1}^{m}(1-g_{j})}$ for some ${\displaystyle m}$, then the computation gives: ${\displaystyle \sum _{1}^{m+1}\phi _{j}=1-\prod _{1}^{m+1}(1-g_{j})}$. Since ${\displaystyle \phi _{1}=1-(1-g_{1})}$, by induction,

${\displaystyle \sum _{1}^{\infty }\phi _{j}=1-\prod _{1}^{\infty }(1-g_{j})}$, which is locally finite.

For ${\displaystyle x}$ in ${\displaystyle G}$, ${\displaystyle g_{j}(x)=1}$ for some ${\displaystyle j}$, so every term after the ${\displaystyle j}$-th vanishes at ${\displaystyle x}$ and the sum there is finite. Thus, (c) holds, and (a) and (b) are also true by construction. ${\displaystyle \square }$
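The telescoping identity in the proof can be verified numerically; a sketch evaluating the construction at a single point, with sample values ${\displaystyle g_{j}\in [0,1]}$ of my own choosing (one of them equal to 1, as in the last step of the proof):

```python
g = [0.3, 0.0, 0.8, 0.5, 1.0, 0.2]   # sample values g_j(x) at a fixed point x

# phi_1 = g_1, phi_2 = (1 - g_1) g_2, phi_3 = (1 - g_1)(1 - g_2) g_3, ...
phi = []
prod = 1.0
for gj in g:
    phi.append(prod * gj)
    prod *= (1.0 - gj)

partial_sum = sum(phi)     # should equal 1 - prod_j (1 - g_j)
tail_product = prod
```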

Given a partition of unity ${\displaystyle \phi _{j}}$ subordinate to a locally finite cover ${\displaystyle \{E_{j}\}}$ of ${\displaystyle E}$, we define the integral of a form ${\displaystyle \theta }$ over ${\displaystyle E}$ by

${\displaystyle \int _{E}\theta =\sum \int _{E_{j}}\phi _{j}\theta }$.

6.1 Theorem If ${\displaystyle f}$ is analytic in ${\displaystyle \Omega }$, then:

${\displaystyle f(z)={\frac {1}{2\pi i}}\int _{\partial \Omega }{\frac {f(\zeta )d\zeta }{\zeta -z}}\ \ \ \forall z\in \Omega }$.

We say a function f satisfies the mean value property when:

${\displaystyle f(z)={1 \over 2\pi }\int _{0}^{2\pi }f(z+\epsilon e^{i\theta })d\theta }$.

An analytic function is an archetypical example, for the property is an immediate consequence of Cauchy's integral formula. If ${\displaystyle f}$ has the mean value property, then, for one, ${\displaystyle f}$ is harmonic, and for another, the maximum principle becomes applicable to it.
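A numerical sketch of the mean value property for the analytic function ${\displaystyle f(z)=z^{2}}$ (my choice of function, center, and radius): averaging over equally spaced points on a circle recovers the value at the center.

```python
import cmath

def f(z):
    return z * z   # analytic, hence satisfies the mean value property

z0, eps, N = 0.3 + 0.2j, 0.5, 1000

# discrete average of f over the circle |z - z0| = eps
avg = sum(f(z0 + eps * cmath.exp(1j * 2 * cmath.pi * k / N))
          for k in range(N)) / N
```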

6.1 Theorem
If ${\displaystyle u}$ is analytic in a connected open set ${\displaystyle \Omega }$, then the following are equivalent:

• (a) ${\displaystyle u^{(k)}(z_{0})=0}$ for all ${\displaystyle k}$, at some point ${\displaystyle z_{0}\in \Omega }$.
• (b) ${\displaystyle u\mid _{\omega }}$ = 0 for some ${\displaystyle \omega \subset \Omega }$ open.
• (c) ${\displaystyle u}$ has a non-isolated zero.

and if any of the above is true, then

• (d) ${\displaystyle u\mid _{\Omega }}$ = 0.

Proof: Suppose (b): ${\displaystyle u=0}$ on some open ${\displaystyle \omega \subset \Omega }$. For ${\displaystyle x\in \omega }$, since ${\displaystyle \omega }$ consists of interior points, we may suppose ${\displaystyle x+h\in \omega }$, and so the derivative:

${\displaystyle {\dot {u}}(x)=\lim _{h\rightarrow 0}h^{-1}(u(x+h)-u(x))}$

is 0 on ${\displaystyle \omega }$; repeating the argument, every derivative of ${\displaystyle u}$ vanishes there. Thus, from (b), (a) follows. That (b) implies (c) is obvious since an interior point is non-isolated. To show (d), let ${\displaystyle Z=\{z\in \Omega :u^{(k)}(z)=0{\mbox{ for all }}k\}}$. Then ${\displaystyle Z}$ is closed in ${\displaystyle \Omega }$, being an intersection of preimages of the closed set ${\displaystyle \{0\}}$ under the continuous functions ${\displaystyle u^{(k)}}$. ${\displaystyle Z}$ is also open, which we can see by considering a power series expansion. Since ${\displaystyle \Omega }$ is connected and ${\displaystyle Z}$ is nonempty by (a), ${\displaystyle Z=\Omega }$, and (d) follows. ${\displaystyle \square }$ (FIXME: this is still a partial proof)

6 Theorem (Runge) Let ${\displaystyle K\subset \mathbb {C} }$ be compact, and ${\displaystyle \omega }$ be an arbitrary open subset of ${\displaystyle \mathbb {C} }$ containing ${\displaystyle K}$. Then the following are equivalent:

(a) For any ${\displaystyle f\in {\mathcal {A}}(K)}$ and any integer ${\displaystyle j}$, we can find a ${\displaystyle u\in {\mathcal {A}}(\omega )}$ so that:
${\displaystyle \sup _{K}|f-u|<2^{-j}}$
(b) K is holomorphically convex.

Proof: The theorem is a consequence of the Hahn-Banach theorem.

A compact subset ${\displaystyle K}$ of the complex plane is said to have the Runge property if ${\displaystyle K}$ satisfies any of the statements in the theorem.

6.2 Theorem (Weierstrass) Let ${\displaystyle \Omega \subset \mathbb {C} }$ be open. Let the sequence ${\displaystyle z_{j}\subset \Omega }$ be discrete, and ${\displaystyle n_{j}}$ be a sequence of arbitrary integers. Then there exists a nonzero ${\displaystyle f\in {\mathcal {A}}(\Omega \backslash \{z_{1},z_{2},...\})}$ such that for each ${\displaystyle j}$, ${\displaystyle (z-z_{j})^{-n_{j}}f}$ is nonzero and analytic in some open set containing ${\displaystyle z_{j}}$.
Proof: Let ${\displaystyle K_{j}}$ be an exhaustion by compact sets of ${\displaystyle \Omega }$ with the Runge property. By the Runge property, for each ${\displaystyle j}$, we find a ${\displaystyle u_{j}\in {\mathcal {A}}(\Omega )}$ so that:

${\displaystyle \sup _{K_{j}}|n_{j}(z-z_{j})^{-1}+u_{j}|<2^{-j}}$

where since the sequence ${\displaystyle z_{j}}$ is discrete, we may suppose ${\displaystyle z_{k}\not \in K_{j}}$ for any ${\displaystyle k\leq j}$. Let

${\displaystyle g=\sum _{1}^{\infty }\left(n_{j}(z-z_{j})^{-1}+u_{j}\right)}$, and ${\displaystyle f(z)=e^{\int _{0}^{z}g(s)ds}}$.

Then ${\displaystyle f}$ is analytic in ${\displaystyle \Omega }$ except for all ${\displaystyle z_{j}}$. Also, let ${\displaystyle j}$ be fixed and ${\displaystyle \omega }$ be an open set containing ${\displaystyle z_{j}}$ and no other terms in the sequence. Then ${\displaystyle {{\dot {f}} \over f}=g}$ in ${\displaystyle \omega }$. Thus, by Cauchy's integral formula,

${\displaystyle 2\pi in_{j}=\int _{\partial \omega }g(s)ds=\int _{\partial \omega }{{\dot {f}}(s) \over f(s)}ds}$

By the argument principle, it now follows that ${\displaystyle f}$ has a zero of order ${\displaystyle n_{j}}$ at ${\displaystyle z_{j}}$ (if the order is negative, it is actually a pole). ${\displaystyle \square }$

The following formulation is probably more illustrative, though it is a weaker statement.

6.2 Corollary Every discrete subset of ${\displaystyle \Omega \subset \mathbb {C} }$ is the zero and pole set of some analytic function.
Proof: Every discrete set is countable.

6 Theorem Let ${\displaystyle \Omega \subset \mathbb {R} ^{n}}$ be open and connected and ${\displaystyle \eta }$ be a continuous one-form. Then the following are equivalent:

(1) ${\displaystyle \eta }$ is exact on ${\displaystyle \Omega }$.
(2) ${\displaystyle \int _{\gamma }\eta =0}$ for every closed path ${\displaystyle \gamma }$ in ${\displaystyle \Omega }$.
(3) ${\displaystyle \int _{a}^{b}\eta }$ is independent of path.

Proof: On ${\displaystyle \Omega }$, if ${\displaystyle \eta }$ is exact, then ${\displaystyle \eta =df}$ for some zero-form ${\displaystyle f}$. It thus follows that for a path ${\displaystyle \gamma }$ from ${\displaystyle a}$ to ${\displaystyle b}$:

${\displaystyle \int _{a}^{b}\eta =\int _{\gamma }df=f(\gamma (1))-f(\gamma (0))}$.

If ${\displaystyle \gamma }$ is a closed path, then ${\displaystyle \gamma (1)=\gamma (0)}$ by definition, and hence, (2) is true. Let ${\displaystyle \gamma _{1}}$ and ${\displaystyle \gamma _{2}}$ be arbitrary paths from ${\displaystyle a}$ to ${\displaystyle b}$. Then

${\displaystyle \int _{\gamma _{1}-\gamma _{2}}\eta =0}$ if (2) is true.

Thus, (2) implies (3). Finally, we show (3) implies (1). Write ${\displaystyle \eta =\sum _{1}^{n}\eta _{i}dx_{i}}$ and let ${\displaystyle f(x)=\int _{0}^{x}\eta }$, which is well defined since, by (3), the integral is independent of path. For each ${\displaystyle i}$, since ${\displaystyle \int _{0}^{x+h_{i}e_{i}}-\int _{0}^{x}=\int _{x}^{x+h_{i}e_{i}}}$,

${\displaystyle {\partial f \over \partial x_{i}}(x)=\lim _{h_{i}\to 0}{1 \over h_{i}}\left(f(x+h_{i}e_{i})-f(x)\right)=\lim _{h_{i}\to 0}{1 \over h_{i}}\int _{x}^{x+h_{i}e_{i}}\eta =\eta _{i}(x).}$

Here the derivative of ${\displaystyle f}$ does exist since the integral is independent of path. We conclude that ${\displaystyle df=\sum _{1}^{n}{\partial f \over \partial x_{i}}dx_{i}=\eta }$. ${\displaystyle \square }$
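The vanishing of closed-path integrals of an exact form can be checked numerically; a sketch with ${\displaystyle \eta =df}$ for ${\displaystyle f(x,y)=x^{2}y+y}$ (my choice), integrated over the unit circle by an equally spaced Riemann sum:

```python
import math

# eta = df = 2xy dx + (x^2 + 1) dy  for  f(x, y) = x^2 y + y
def eta_dot_velocity(t):
    # pull-back of eta along the unit circle gamma(t) = (cos 2*pi*t, sin 2*pi*t)
    x, y = math.cos(2 * math.pi * t), math.sin(2 * math.pi * t)
    dx = -2 * math.pi * math.sin(2 * math.pi * t)
    dy = 2 * math.pi * math.cos(2 * math.pi * t)
    return 2 * x * y * dx + (x**2 + 1) * dy

N = 200
loop_integral = sum(eta_dot_velocity(k / N) for k in range(N)) / N
```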

Stokes' formula

4 Theorem (Stokes) If ${\displaystyle \omega \subset \mathbb {C} }$ has a boundary consisting of finitely many Jordan curves and ${\displaystyle \eta }$ is a ${\displaystyle {\mathcal {C}}^{1}}$ one-form on ${\displaystyle {\overline {\omega }}}$, then:

${\displaystyle \int _{\partial \omega }\eta =\int _{\omega }d\eta }$

Proof: (FIXME: To be written)

4 Corollary (Green) If ${\displaystyle \omega \subset \mathbb {C} }$ has a boundary consisting of finitely many Jordan curves, then we have:

${\displaystyle \int _{\omega }(f\Delta g-g\Delta f)dx\wedge dy=\int _{\partial \omega }\left(f{\partial \over \partial x}g-g{\partial \over \partial x}f\right)dy-\left(f{\partial \over \partial y}g-g{\partial \over \partial y}f\right)dx}$.

Proof: ${\displaystyle d\left(f{\partial g \over \partial x}-g{\partial f \over \partial x}\right)\wedge dy=\left(f{\partial ^{2}g \over \partial x^{2}}-g{\partial ^{2}f \over \partial x^{2}}\right)dx\wedge dy}$ since the cross terms cancel; treating the ${\displaystyle dx}$ term similarly and applying Stokes' formula gives the identity. ${\displaystyle \square }$

Harmonicity

Let ${\displaystyle \Omega \subset \mathbb {R} ^{n}}$ be open. A function ${\displaystyle u\in {\mathcal {C}}^{2}(\Omega )}$ is said to be harmonic if

${\displaystyle \sum _{1}^{n}{\partial ^{2}u \over \partial x_{j}^{2}}=0}$ (the Laplace equation)

We also define the Poisson kernel

${\displaystyle P(x/R,y)={C_{n}}^{-1}(1-|x/R|^{2})|y-x/R|^{-n}}$

where ${\displaystyle C_{n}}$ is the surface area of the unit sphere in ${\displaystyle \mathbb {R} ^{n}}$.
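In the plane (${\displaystyle n=2}$, so the relevant constant is ${\displaystyle C_{2}=2\pi }$, the length of the unit circle), the kernel integrates to 1 over the circle; a numerical sketch with an interior point of my own choosing:

```python
import cmath

def poisson(x, y):
    # P(x, y) = (1/(2*pi)) * (1 - |x|^2) / |y - x|^2  for |x| < 1, |y| = 1
    return (1 - abs(x)**2) / abs(y - x)**2 / (2 * cmath.pi)

x, N = 0.5 + 0.2j, 400

# Riemann sum of P(x, .) over the unit circle; should equal 1
total = sum(poisson(x, cmath.exp(1j * 2 * cmath.pi * k / N))
            for k in range(N)) * (2 * cmath.pi / N)
```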

4. Theorem Let ${\displaystyle \Omega ={\mbox{Ball}}_{R}}$. Then ${\displaystyle u}$ is harmonic on ${\displaystyle \Omega }$ and continuous on ${\displaystyle {\overline {\Omega }}}$ if and only if

${\displaystyle u(x)=\int P(x/R,y)u(Ry)d\omega (y)}$.

Proof: Suppose ${\displaystyle u}$ is harmonic on ${\displaystyle \Omega }$. Then using the Green's function

${\displaystyle u(x)=\int P(x/r,y)u(ry)d\omega (y)}$ for ${\displaystyle |x|<r<R}$.

Letting ${\displaystyle r\to R}$ gives the direct part. Conversely, if ${\displaystyle x\in \Omega }$, then ${\displaystyle \Delta u(x)=0}$ since ${\displaystyle P(\cdot /R,y)}$ is harmonic on ${\displaystyle \Omega }$. ${\displaystyle \square }$

4. Corollary (mean value property) Let ${\displaystyle \Omega ={\mbox{Ball}}_{R}}$ and ${\displaystyle u}$ be harmonic on ${\displaystyle \Omega }$ and continuous on ${\displaystyle {\overline {\Omega }}}$. Then

${\displaystyle u(0)=\int u(Ry){d\omega (y) \over C_{n}}}$.

Proof: Let ${\displaystyle x=0}$ in the theorem. Then ${\displaystyle P(0,y)={C_{n}}^{-1}}$.

4. Corollary (maximum principle) If ${\displaystyle \Omega \subset \mathbb {R} ^{n}}$ is bounded and ${\displaystyle u:\Omega \to \mathbb {R} }$ is harmonic on ${\displaystyle \Omega }$ and continuous on ${\displaystyle {\overline {\Omega }}}$, then for ${\displaystyle x\in \Omega }$,

${\displaystyle \min _{{\mbox{b}}\Omega }u\leq u(x)\leq \max _{{\mbox{b}}\Omega }u}$

where if the equality holds at some ${\displaystyle x\in \Omega }$, then ${\displaystyle u}$ is constant in the component of ${\displaystyle x}$.
Proof: (i) Suppose ${\displaystyle \Omega ={\mbox{Ball}}_{R}}$. Then for ${\displaystyle x\in \Omega }$

${\displaystyle u(x)=\int _{|y|=1}P(x/R,y)u(Ry)d\omega (y)\leq \sup _{{\mbox{b}}\Omega }u\int _{|y|=1}P(x/R,y)d\omega (y)=\sup _{{\mbox{b}}\Omega }u}$

since

${\displaystyle \int _{|y|=1}P(x/R,y)d\omega (y)=1}$ when ${\displaystyle |x|<R}$.

Likewise, ${\displaystyle -u(x)\leq \sup _{{\mbox{b}}\Omega }-u}$. Thus,

${\displaystyle \inf _{{\mbox{b}}\Omega }u\leq u(x)\leq \sup _{{\mbox{b}}\Omega }u}$

where ${\displaystyle \inf }$ and ${\displaystyle \sup }$ are actually ${\displaystyle \min }$ and ${\displaystyle \max }$, respectively, by the continuity of ${\displaystyle u}$ and the compactness of a closed ball. (ii) Suppose ${\displaystyle \Omega }$ is arbitrary and the equality holds at some ${\displaystyle x\in \Omega }$. From (i) it follows that ${\displaystyle u}$ is constant on every open ball containing ${\displaystyle x}$ and contained in ${\displaystyle \Omega }$. Since ${\displaystyle \Omega }$ is open, every component of ${\displaystyle \Omega }$ is open, and an open connected set is a union of overlapping open balls; hence ${\displaystyle u}$ is constant on the component of ${\displaystyle x}$. ${\displaystyle \square }$

4. Theorem Let ${\displaystyle u}$ be continuous on ${\displaystyle \Omega \subset \mathbb {R} ^{n}}$. Then the following are equivalent:

• (i) ${\displaystyle u}$ is harmonic.
• (ii) If ${\displaystyle \delta >0}$ is given,
${\displaystyle \int u(x+ry){d\omega (y) \over C_{n}}\wedge d\mu (r)=u(x)\int d\mu (r)}$
• whenever ${\displaystyle d(x,{\mbox{b}}\Omega )\geq \delta }$ and ${\displaystyle {\mbox{supp}}(d\mu )\subset [0,\delta ]}$.
• (iii) (ii) holds for some ${\displaystyle \delta >0}$.

Proof: The mean value property says:

${\displaystyle u(x)=\int u(x+ry){d\omega (y) \over C_{n}}}$

By integrating both sides we get:

${\displaystyle u(x)\int d\mu (r)=\int u(x+ry){d\omega (y) \over C_{n}}\wedge d\mu (r)}$

Hence, (i) implies (ii). Clearly, (ii) implies (iii). Suppose (iii), and let ${\displaystyle B}$ be an open ball with ${\displaystyle {\overline {B}}\subset \Omega }$. Let ${\displaystyle h}$ be harmonic on ${\displaystyle B}$ and continuous on ${\displaystyle {\overline {B}}}$ such that ${\displaystyle u=h}$ on ${\displaystyle {\mbox{b}}B}$. If ${\displaystyle {\mbox{Ball}}_{\delta ,x}\subset \Omega }$, then using (iii)

${\displaystyle \int (h-u)(x+ry){d\omega (y) \over C_{n}}\wedge d\mu (r)=(h-u)(x)\int d\mu (r)}$

where ${\displaystyle h-u=0}$ on the boundary of ${\displaystyle B}$. Since ${\displaystyle \mu }$ is not the zero measure, ${\displaystyle (h-u)(x)}$ is an average of values of ${\displaystyle h-u}$; a maximum-principle argument then gives ${\displaystyle u=h}$ on ${\displaystyle B}$, so ${\displaystyle u}$ is harmonic on ${\displaystyle B}$. Thus, (iii) implies (i). ${\displaystyle \square }$

Cauchy's integral formula

4 Theorem Let ${\displaystyle G}$ be a bounded open subset of ${\displaystyle \mathbb {C} }$ whose boundary is smooth enough that Stokes' formula is applicable. If ${\displaystyle u\in {\mathcal {C}}^{1}({\overline {G}})}$, we have:

${\displaystyle u(w)={1 \over 2\pi i}\int _{\partial G}{u(z) \over z-w}dz-{1 \over \pi }\int _{G}{\partial u \over \partial {\bar {z}}}(z){1 \over z-w}dx\wedge dy}$ for ${\displaystyle w\in G}$

4 Theorem Let ${\displaystyle \mu }$ be a complex-valued measure with compact support in ${\displaystyle \mathbb {C} }$ and define

${\displaystyle u(w)=\int {1 \over z-w}d\mu }$

Schwarz lemma

4 Lemma (Schwarz) If ${\displaystyle f}$ is analytic and ${\displaystyle |f(z)|\leq 1}$ for all ${\displaystyle |z|<1}$ and ${\displaystyle f(0)=0}$, then we have:

${\displaystyle |f(z)|\leq |z|}$ for all ${\displaystyle |z|<1}$

Moreover, if the equality in the above holds at some point ${\displaystyle w\neq 0}$, then ${\displaystyle f}$ is proportional to ${\displaystyle z}$.
Proof: Since ${\displaystyle f(0)=0}$, we can write ${\displaystyle f(z)=zg(z)}$ with ${\displaystyle g}$ analytic. Furthermore, if ${\displaystyle 0<r<1}$, the maximum principle says

${\displaystyle \sup _{|z|\leq r}|g(z)|=\sup _{|z|=r}|g(z)|\leq {1 \over r}}$.

and ${\displaystyle g}$ is constant if ${\displaystyle |g|}$ attains the value ${\displaystyle 1}$ at some interior point. Letting ${\displaystyle r\to 1}$ completes the proof. ${\displaystyle \square }$
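A numerical spot-check of the lemma with ${\displaystyle f(z)=z^{2}}$, which is analytic, bounded by 1 on the disk, and vanishes at 0 (the sample points are my own choice):

```python
samples = [0.9, -0.5, 0.3 + 0.4j, -0.1 - 0.7j, 0.0]

def f(z):
    # satisfies the hypotheses of the Schwarz lemma on the unit disk
    return z * z

# |f(z)| <= |z| at each sample point (small slack for floating point)
bound_holds = all(abs(f(z)) <= abs(z) + 1e-15 for z in samples)
```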

A Lie algebra is an algebra whose multiplication, denoted by ${\displaystyle [,]}$, satisfies

• (i) ${\displaystyle [x,x]=0}$, and
• (ii) ${\displaystyle [[x,y],z]+[[y,z],x]+[[z,x],y]=0}$

for all ${\displaystyle x,y,z}$. By bilinearity, (i) implies the anti-symmetry

${\displaystyle 0=[x+y,x+y]=[x,y]+[y,x]}$,

and conversely anti-symmetry gives ${\displaystyle 2[x,x]=0}$, hence (i) when 2 is invertible in the base field.

When a given algebra is associative, i.e., ${\displaystyle (xy)z=x(yz)}$, we can turn the algebra into a Lie algebra by defining ${\displaystyle [x,y]=xy-yx}$, called a commutator. Indeed, it is clear that ${\displaystyle [x,y]}$ distributes over scalars and addition and that the condition (i) holds. It then follows that ${\displaystyle [[x,y],z]=[x,[y,z]]-[y,[x,z]]}$, which is equivalent to (ii).
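These identities can be verified numerically for the commutator on matrices (the integer matrices below are my own examples; integer arithmetic keeps the check exact):

```python
import numpy as np

def bracket(x, y):
    # the commutator [x, y] = xy - yx in the associative algebra of matrices
    return x @ y - y @ x

x = np.array([[0, 1], [2, 3]])
y = np.array([[1, -1], [0, 2]])
z = np.array([[2, 0], [1, 1]])

alternating = not bracket(x, x).any()    # (i): [x, x] = 0

# (ii): the Jacobi identity, and its equivalent derivation form
jacobi = bracket(bracket(x, y), z) + bracket(bracket(y, z), x) \
         + bracket(bracket(z, x), y)
derivation_form = bracket(bracket(x, y), z) \
                  - (bracket(x, bracket(y, z)) - bracket(y, bracket(x, z)))
```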
