# Complex Analysis/Trigonometry

## The complex exponential, sine and cosine

Leonhard Euler closely studied many mathematical structures and was able to create theories that captured their content. Many identities are named after him; the most prominent, Euler's identity ${\displaystyle e^{i\pi }=-1}$, was voted the most beautiful formula in mathematics in a poll among mathematicians.

As Leonhard Euler observed, the exponential function can assume a central role in trigonometry. In this exposition, we will first formally define the exponential function as a power series, then define sine and cosine by Euler's formula (a slightly more general formula than the identity above, containing it as a special case), and argue why sine and cosine thus defined have the geometric meaning one learns in school.

Definition 5.1:

The complex exponential function is the function

${\displaystyle \exp :\mathbb {C} \to \mathbb {C} ,\exp(z):=\sum _{n=0}^{\infty }{\frac {z^{n}}{n!}}}$.

According to the ratio test, the radius of convergence of this series is ${\displaystyle \infty }$. Thus, ${\displaystyle \exp }$ is an entire function by the results of chapter 3.
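The infinite radius of convergence can be illustrated numerically. The following sketch (the helper name `exp_series` is our own) compares partial sums of the series against the standard library's `cmath.exp` at several points, including ones far from the origin:

```python
import cmath

def exp_series(z, terms=40):
    """Partial sum of the exponential series: sum_{n=0}^{terms-1} z^n / n!."""
    total, term = 0.0 + 0.0j, 1.0 + 0.0j
    for n in range(terms):
        total += term
        term *= z / (n + 1)  # turns z^n/n! into z^(n+1)/(n+1)!
    return total

# The series converges for every complex z, so the partial sums
# agree with cmath.exp even away from the origin.
for z in [1, 1j, 3 - 2j, -5]:
    assert abs(exp_series(z) - cmath.exp(z)) < 1e-9
```

Note that the terms are built up incrementally rather than via factorials, which avoids overflow for moderately large `terms`.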

We now consider what happens if we insert a number of the form ${\displaystyle i\varphi }$ into ${\displaystyle \exp }$, where ${\displaystyle \varphi }$ is a real number, which we may choose to think of as an angle (on the level of interpretation); this will be made precise only later. Indeed, in this case, we get

${\displaystyle \exp(i\varphi )=\sum _{n=0}^{\infty }{\frac {(i\varphi )^{n}}{n!}}}$.

We now would like to split the above into real and imaginary parts. To do so, we use commutativity of complex multiplication, i.e. ${\displaystyle (i\varphi )^{n}=i^{n}\varphi ^{n}}$, and then we observe that ${\displaystyle i^{n}}$ cycles periodically through the values ${\displaystyle i,-1,-i,1}$, as is seen by induction using ${\displaystyle i^{4}=1}$. In particular, in the series above, the terms with odd ${\displaystyle n}$ contribute to the imaginary part of ${\displaystyle \exp(i\varphi )}$, whereas the terms with even ${\displaystyle n}$ contribute to the real part. We thus get (the rearrangement being justified by absolute convergence):

{\displaystyle {\begin{aligned}\exp(i\varphi )&=\sum _{n=0}^{\infty }{\frac {(i\varphi )^{n}}{n!}}\\&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}\varphi ^{2n}}{(2n)!}}+i\sum _{n=0}^{\infty }{\frac {(-1)^{n}\varphi ^{2n+1}}{(2n+1)!}}\end{aligned}}}

If we then define

${\displaystyle \cos(\varphi ):=\sum _{n=0}^{\infty }{\frac {(-1)^{n}\varphi ^{2n}}{(2n)!}}}$ and ${\displaystyle \sin(\varphi ):=\sum _{n=0}^{\infty }{\frac {(-1)^{n}\varphi ^{2n+1}}{(2n+1)!}}}$,

we immediately get what's called Euler's formula:

Theorem 5.2 (Euler's formula):

${\displaystyle \exp(i\varphi )=\cos(\varphi )+i\sin(\varphi )}$

A way to remember this formula is to realize that the letter i is contained in ${\displaystyle \sin }$, and thus it's ${\displaystyle \cos +i\sin }$ (instead of, say, ${\displaystyle \sin +i\cos }$).
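Euler's formula can be checked numerically against the standard library implementations (which, under the hood, compute exactly the functions defined by these series):

```python
import cmath
import math

# Euler's formula: exp(i*phi) = cos(phi) + i*sin(phi), checked at a few angles.
for phi in [0.0, 0.5, math.pi / 3, 2.0, -1.25]:
    lhs = cmath.exp(1j * phi)
    rhs = complex(math.cos(phi), math.sin(phi))
    assert abs(lhs - rhs) < 1e-12
```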

## Algebraic properties of exp, sin and cos

We now prove a few algebraic properties of the functions exp, sin and cos. First of all, we prove what's called the functional equation of the exponential function (it bears this name because the exponential function is, up to a constant factor, precisely the solution of the equation ${\displaystyle f(x+y)=f(x)f(y)}$; we normalize by requiring ${\displaystyle f(0)=1}$):

Theorem 5.3 (functional equation):

For ${\displaystyle z,w\in \mathbb {C} }$, ${\displaystyle \exp(z+w)=\exp(z)\exp(w)}$.

Proof: Since both series converge absolutely, their product may be computed as a Cauchy product:

{\displaystyle {\begin{aligned}\exp(z)\exp(w)&=\sum _{n=0}^{\infty }\sum _{k=0}^{n}{\frac {1}{k!(n-k)!}}z^{k}w^{n-k}\\&=\sum _{n=0}^{\infty }\sum _{k=0}^{n}{\frac {1}{n!}}{\binom {n}{k}}z^{k}w^{n-k}\\&=\sum _{n=0}^{\infty }{\frac {1}{n!}}\left(z+w\right)^{n}=\exp(z+w)\end{aligned}}}

by the binomial theorem.${\displaystyle \Box }$
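The functional equation is easy to verify numerically for sample complex arguments:

```python
import cmath

# exp(z + w) = exp(z) * exp(w) for a few complex pairs.
pairs = [(1 + 2j, -0.5j), (3.0, 4.0), (-1 - 1j, 2 + 0.25j)]
for z, w in pairs:
    assert abs(cmath.exp(z + w) - cmath.exp(z) * cmath.exp(w)) < 1e-9
```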

From this theorem we can immediately deduce what's called the addition theorems for sine and cosine. Indeed, we have for ${\displaystyle x,y\in \mathbb {R} }$ that

${\displaystyle \exp(i(x+y))=\cos(x+y)+i\sin(x+y)}$.

On the other hand, by the functional equation,

${\displaystyle \exp(i(x+y))=\exp(ix)\exp(iy)=(\cos(x)+i\sin(x))(\cos(y)+i\sin(y))=\cos(x)\cos(y)-\sin(x)\sin(y)+i[\sin(x)\cos(y)+\cos(x)\sin(y)]}$.

Comparing real and imaginary parts,

${\displaystyle \cos(x+y)=\cos(x)\cos(y)-\sin(x)\sin(y)}$ and ${\displaystyle \sin(x+y)=\sin(x)\cos(y)+\cos(x)\sin(y)}$.

Let's put a theorem box around this, since it's really important:

Theorem 5.4 (addition theorems):

If ${\displaystyle x,y\in \mathbb {R} }$, then

${\displaystyle \cos(x+y)=\cos(x)\cos(y)-\sin(x)\sin(y)}$
${\displaystyle \sin(x+y)=\sin(x)\cos(y)+\cos(x)\sin(y)}$
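Both addition theorems can be spot-checked numerically at a few argument pairs:

```python
import math

# cos(x+y) = cos(x)cos(y) - sin(x)sin(y)
# sin(x+y) = sin(x)cos(y) + cos(x)sin(y)
for x, y in [(0.3, 1.1), (math.pi / 4, -2.0), (5.0, 0.7)]:
    assert abs(math.cos(x + y)
               - (math.cos(x) * math.cos(y) - math.sin(x) * math.sin(y))) < 1e-12
    assert abs(math.sin(x + y)
               - (math.sin(x) * math.cos(y) + math.cos(x) * math.sin(y))) < 1e-12
```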

Another important consequence is the following formula, which is used fairly often in many fields of analysis:

Theorem 5.5:

For ${\displaystyle x\in \mathbb {R} }$,

${\displaystyle \sin(x)^{2}+\cos(x)^{2}=1}$.

Proof: For this proof, we use the "trick" ${\displaystyle 0=x-x}$. Indeed, we have

${\displaystyle \cos(0)=1}$

from the series definition. Combining this with the addition theorem for cosine, we obtain:

{\displaystyle {\begin{aligned}1&=\cos(0)=\cos(x-x)\\&=\cos(x)\cos(-x)-\sin(x)\sin(-x)\end{aligned}}}

Now we observe that ${\displaystyle \sin(-x)=-\sin(x)}$ and ${\displaystyle \cos(-x)=\cos(x)}$; again this follows from the power series definition. Hence,

${\displaystyle 1=\cos(x)\cos(x)+\sin(x)\sin(x)}$

as desired.${\displaystyle \Box }$
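A quick numerical sanity check of the identity at assorted real arguments:

```python
import math

# sin(x)^2 + cos(x)^2 = 1 for real x.
for x in [0.0, 0.1, 1.0, math.pi, -7.3]:
    assert abs(math.sin(x) ** 2 + math.cos(x) ** 2 - 1) < 1e-12
```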

## Analytic properties of sine and cosine

Sine and cosine have several obvious analytical properties. First of all, by differentiating the series term by term, we get

${\displaystyle \sin '(z)=\cos(z),\cos '(z)=-\sin(z)}$.

From this and by induction, we see that the derivatives of ${\displaystyle \sin }$ and ${\displaystyle \cos }$ cycle periodically through ${\displaystyle \sin }$, ${\displaystyle \cos }$, ${\displaystyle -\sin }$ and ${\displaystyle -\cos }$, for example

{\displaystyle {\begin{aligned}\sin '&=\cos \\\sin ^{(2)}&=-\sin \\\sin ^{(3)}&=-\cos \\\sin ^{(4)}&=\sin \\\sin ^{(5)}&=\cos \\&\vdots \end{aligned}}}

or

{\displaystyle {\begin{aligned}\cos '&=-\sin \\\cos ^{(2)}&=-\cos \\\cos ^{(3)}&=\sin \end{aligned}}}

and so on.
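The four-fold cycling of derivatives can be observed numerically. The following sketch uses a crude repeated central-difference approximation (the helper name `nth_diff` is our own; the step size and tolerance are ad-hoc choices suitable for these smooth functions):

```python
import math

def nth_diff(f, x, n, h=0.02):
    """Approximate the n-th derivative of f at x by repeated central differences."""
    if n == 0:
        return f(x)
    return (nth_diff(f, x + h, n - 1, h) - nth_diff(f, x - h, n - 1, h)) / (2 * h)

x = 1.0
# sin', sin'', sin''', sin'''' cycle through cos, -sin, -cos, sin:
assert abs(nth_diff(math.sin, x, 1) - math.cos(x)) < 1e-2
assert abs(nth_diff(math.sin, x, 2) + math.sin(x)) < 1e-2
assert abs(nth_diff(math.sin, x, 3) + math.cos(x)) < 1e-2
assert abs(nth_diff(math.sin, x, 4) - math.sin(x)) < 1e-2
```

The tolerance is loose because the finite-difference error grows with the order of the derivative; this is only meant as an illustration, not a proof.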

Sine and cosine, being entire functions, are in particular continuous. Further, by the identity

${\displaystyle \sin(x)^{2}+\cos(x)^{2}=1}$ (${\displaystyle x\in \mathbb {R} }$),

they are actually bounded by ${\displaystyle 1}$ in absolute value on ${\displaystyle \mathbb {R} }$. (We will later see that this identity holds, in fact, for all ${\displaystyle z\in \mathbb {C} }$, but this does NOT mean that e.g. ${\displaystyle \sin }$ is bounded by ${\displaystyle 1}$ on all of ${\displaystyle \mathbb {C} }$: for complex ${\displaystyle z}$, the squares ${\displaystyle \sin(z)^{2}}$ and ${\displaystyle \cos(z)^{2}}$ need not be nonnegative real numbers.)
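Both halves of that remark can be demonstrated numerically: the identity survives for complex arguments, yet sine grows without bound along the imaginary axis (indeed ${\displaystyle |\sin(iy)|=\sinh(y)}$):

```python
import cmath
import math

# The identity sin(z)^2 + cos(z)^2 = 1 also holds for complex z...
z = 2 + 3j
assert abs(cmath.sin(z) ** 2 + cmath.cos(z) ** 2 - 1) < 1e-9

# ...yet sin is unbounded off the real axis: |sin(iy)| = sinh(y).
for y in [1.0, 5.0, 10.0]:
    assert abs(abs(cmath.sin(1j * y)) - math.sinh(y)) < 1e-6 * math.sinh(y)
```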

For reasons which will become clear later, we have to insert a theorem from convex analysis at this point.

Theorem 5.6:

Let ${\displaystyle f:V\to \mathbb {R} }$ be a concave function, where ${\displaystyle V}$ is a real vector space. Consider the line in ${\displaystyle V}$ spanned by some ${\displaystyle v\in V\setminus \{0\}}$, and set ${\displaystyle g(t):=f(tv)}$. Take ${\displaystyle x_{0}<x_{1}}$ in ${\displaystyle \mathbb {R} }$. Then for all ${\displaystyle x>x_{1}}$, we have

${\displaystyle g(x)\leq g(x_{1})+(x-x_{1}){\frac {g(x_{1})-g(x_{0})}{x_{1}-x_{0}}}}$.

Proof:

Write ${\displaystyle x_{1}=\lambda x_{0}+(1-\lambda )x}$ with ${\displaystyle \lambda ={\frac {x-x_{1}}{x-x_{0}}}\in (0,1)}$. Concavity of ${\displaystyle g}$ (inherited from concavity of ${\displaystyle f}$) gives ${\displaystyle g(x_{1})\geq \lambda g(x_{0})+(1-\lambda )g(x)}$; solving this inequality for ${\displaystyle g(x)}$ yields the claim.${\displaystyle \Box }$
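This standard fact about concave functions (the graph eventually lies below any extended secant line: for ${\displaystyle x_{0}<x_{1}<x}$, ${\displaystyle f(x)\leq f(x_{1})+(x-x_{1}){\tfrac {f(x_{1})-f(x_{0})}{x_{1}-x_{0}}}}$) can be checked numerically for a sample concave function; ${\displaystyle f(t)=-t^{2}}$ is our own choice of example:

```python
# Concave example function (one variable, i.e. along a line through v):
def f(t):
    return -t * t

x0, x1 = 0.5, 1.5
slope = (f(x1) - f(x0)) / (x1 - x0)  # slope of the secant through x0, x1

# For every x beyond x1, f(x) lies below the extended secant line.
for x in [2.0, 3.0, 10.0]:
    assert f(x) <= f(x1) + (x - x1) * slope
```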

## Connection of the above to trigonometry

At school, one learns the usual geometrical meanings of sine and cosine. Namely, in a right triangle, the sine of an angle is the length of the opposite leg divided by the length of the hypotenuse, and the cosine is the length of the adjacent leg divided by the length of the hypotenuse.

Now actually, the ${\displaystyle \sin }$ and ${\displaystyle \cos }$ that we defined above by series are precisely these: given the angle ${\displaystyle \varphi }$ between the adjacent leg and the hypotenuse of an arbitrary right triangle, the value ${\displaystyle \sin(\varphi )}$ (resp. ${\displaystyle \cos(\varphi )}$) equals the length of the opposite leg (resp. adjacent leg) divided by the length of the hypotenuse.

We shall now rigorously prove that.

First, we consider the real numbers ${\displaystyle \mathbb {R} }$. Together with addition, they form a group, and if we endow ${\displaystyle \mathbb {R} }$ with the Euclidean (i.e. standard) topology, this is even a topological group, by which we mean that the map

${\displaystyle +:\mathbb {R} \times \mathbb {R} \to \mathbb {R} }$

is continuous, where ${\displaystyle \mathbb {R} \times \mathbb {R} }$ carries the product topology induced by the topology we put on ${\displaystyle \mathbb {R} }$ (in this case, the standard, i.e. Euclidean, topology).

Furthermore, the set

${\displaystyle 2\pi \mathbb {Z} :=\{2\pi n|n\in \mathbb {Z} \}}$

forms an additive subgroup of ${\displaystyle \mathbb {R} }$. Since ${\displaystyle (\mathbb {R} ,+)}$ is abelian, ${\displaystyle 2\pi \mathbb {Z} }$ is a normal subgroup of ${\displaystyle \mathbb {R} }$ (additively); this fact may be written ${\displaystyle 2\pi \mathbb {Z} \trianglelefteq \mathbb {R} }$ (where again both groups have addition as composition law). Thus, we may form the quotient group

${\displaystyle \mathbb {R} /2\pi \mathbb {Z} }$

and endow it with the quotient topology. It takes a little thought to see that this is actually a topological group together with that topology; more generally, we have
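The quotient ${\displaystyle \mathbb {R} /2\pi \mathbb {Z} }$ is exactly the domain on which ${\displaystyle \exp(i\varphi )}$ is naturally defined: shifting ${\displaystyle \varphi }$ by a multiple of ${\displaystyle 2\pi }$ leaves ${\displaystyle \exp(i\varphi )}$ unchanged. A small sketch (the helper `angle_class`, picking the canonical representative in ${\displaystyle [0,2\pi )}$, is our own) illustrates this:

```python
import cmath
import math

def angle_class(phi):
    """Canonical representative of phi in the quotient R / 2*pi*Z."""
    return phi % (2 * math.pi)  # value in [0, 2*pi)

# exp(i*phi) depends only on the class of phi modulo 2*pi.
for phi in [0.3, 2.0, -1.0]:
    for n in [-2, 1, 7]:
        shifted = phi + 2 * math.pi * n
        assert abs(cmath.exp(1j * shifted) - cmath.exp(1j * phi)) < 1e-9
        assert abs(angle_class(shifted) - angle_class(phi)) < 1e-9
```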

Theorem 5.7:

Let ${\displaystyle G}$ be a topological group and ${\displaystyle H\trianglelefteq G}$. Then ${\displaystyle G/H}$ is a topological group with respect to the quotient topology.