# Real Analysis/Exponential Function

Our aim in this chapter is to formally define the very interesting exponential and logarithmic functions for all real numbers. This will also highlight how the field of mathematics operates, given the curious history of how and why mathematicians defined exponentiation and logarithms in the first place. Simply put, these functions serve to make otherwise insurmountably complex hurdles easier. "What kind of hurdles?" one may ask. The ones where one might want to switch between addition and multiplication. Mathematically, the goal was to create functions ƒ and g such that

${\displaystyle f(x+y)=f(x)\cdot f(y)}$ and ${\displaystyle g(x\cdot y)=g(x)+g(y)}$

[To summarize significant portions of this section, the function ƒ is exponentiation and the function g is logarithms]

As you can see, in some algebraic problems such a function would be very desirable for some, and absolutely necessary for others. However, the guiding philosophy of mathematics dictates that there ought to be a definable statement composing these intriguing functions; it can't be arbitrary, or else there may be some hidden contradiction somewhere! That drive is what this section will satiate.

## Construction

We will begin constructing the functions ƒ and g through two streams. First, we will identify how we expect these functions to behave. Second, we will build definitions of ƒ and g that happen to match those behaviors perfectly. How so? By definition, of course.

### Behaviors

We will first identify something extremely important about how day-to-day usage of the function ƒ will work. What if we take some input x and set ${\textstyle y=-x}$? Then

${\displaystyle f(x+y)=f(x-x)=f(0)}$

Well, we can first observe that, by the defining property, the same expression can alternatively be written as

${\displaystyle f(x+y)=f(x)\cdot f(y)=f(x)\cdot f(-x)}$

and this must still equal ${\textstyle f(0)}$ from the earlier statement. Well, we can't make ${\textstyle f(0)=0}$ (the additive identity), since then ${\textstyle f(x)=f(x+0)=f(x)\cdot f(0)=0}$ for every input, which makes the entire function ƒ worthless; every output would be 0. We could instead make ${\textstyle f(0)=1}$ (the multiplicative identity). This new definition avoids the pitfall of ${\textstyle f(0)=0}$ making the entire function worthless. However, it then necessitates that ${\textstyle f(x)}$ and ${\textstyle f(-x)}$ be multiplicative inverses of each other. Since ${\textstyle f(x)}$ for positive x is traditionally left alone (and also since, spoilers, it represents exponentiation for positive integers), we will express the relationship through ${\textstyle f(-x)}$. Thus,

${\displaystyle f(-x)={\frac {1}{f(x)}}}$

If you noticed, by introducing negation we have inadvertently assumed that the variables x and y can at least range over the integers. Oops. Well, we can take it one step further by imagining them to be rational numbers. If we suppose that the function's input can be rational too, we open up another kind of property to fulfill, namely

${\displaystyle \underbrace {f\left({\frac {1}{q}}\right)\times \cdots \times f\left({\frac {1}{q}}\right)} _{q{\text{ times}}}=f\underbrace {\left({\frac {1}{q}}+\cdots +{\frac {1}{q}}\right)} _{q{\text{ times}}}=f\left({\frac {q}{q}}\right)=f(1)}$

and if we're multiplying p terms together instead of q terms,

${\displaystyle \underbrace {f\left({\frac {1}{q}}\right)\times \cdots \times f\left({\frac {1}{q}}\right)} _{p{\text{ times}}}=f\underbrace {\left({\frac {1}{q}}+\cdots +{\frac {1}{q}}\right)} _{p{\text{ times}}}=f\left({\frac {p}{q}}\right)}$

which adds more to our plate. In particular, the first identity shows that ${\textstyle f\left({\frac {1}{q}}\right)}$ is a qth root of ${\textstyle f(1)}$, so the value of ƒ at every rational number is already determined by ${\textstyle f(1)}$. Luckily, this new requirement does not break anything we have assumed before. Luckily.
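As a concrete illustration, the behaviors collected so far can be checked numerically against a candidate function such as ${\textstyle f(x)=2^{x}}$. This is only a sketch: the base 2 is an arbitrary stand-in (nothing so far pins down a particular base), and the function name and tolerances are illustrative choices.

```python
# Sanity-checking the derived behaviors against the illustrative
# candidate f(x) = 2**x (the base 2 is an arbitrary stand-in).

def f(x):
    return 2.0 ** x

# f(0) must be the multiplicative identity.
assert abs(f(0) - 1.0) < 1e-12

# f(-x) must be the multiplicative inverse of f(x).
for x in [0.5, 1.0, 3.2]:
    assert abs(f(-x) - 1.0 / f(x)) < 1e-12

# Multiplying f(1/q) by itself q times must reproduce f(1).
q = 7
product = 1.0
for _ in range(q):
    product *= f(1.0 / q)
assert abs(product - f(1)) < 1e-12
print("all behaviors hold for f(x) = 2**x")
```

Any positive base passes these checks; singling out one particular base is exactly what the next sub-heading accomplishes.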

### Differentiation

We will first assume that ƒ is differentiable. When we do, we can write out the definition of its derivative.

${\displaystyle f'=\lim _{h\rightarrow 0}{\frac {f(x+h)-f(x)}{h}}}$

which, given the special relationship between addition and multiplication, can be applied here to give a special answer.

${\displaystyle {\begin{aligned}f'&=\lim _{h\rightarrow 0}{\frac {f(x+h)-f(x)}{h}}\\&=\lim _{h\rightarrow 0}{\frac {f(x)\cdot f(h)-f(x)}{h}}\\&=\lim _{h\rightarrow 0}{\left(f(x)\cdot {\frac {f(h)-1}{h}}\right)}\\&=f(x)\cdot \lim _{h\rightarrow 0}{\frac {f(h)-1}{h}}\end{aligned}}}$

For now, let's do something irrational to our conceptions of mathematics: suppose that this remaining limit is equal to 1. If we do that (and hold on, we will eventually show how even this egregious disregard keeps mathematics consistent), we have ourselves a new property for the function ƒ: the derivative of ${\displaystyle f}$ is ${\displaystyle f}$ itself. All in all, this exercise has led us to create a function immune to differentiation, so long as the addition of inputs for the function is equivalent to the multiplication of its outputs at those inputs!
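To make the supposition less mysterious, one can watch the difference quotient at 0 numerically. The sketch below assumes the base-e function `math.exp`, whose connection to ƒ is only justified later in the chapter; the helper name and sample step sizes are illustrative.

```python
import math

# Watching lim_{h -> 0} (f(h) - 1)/h for f = exp: the quotient
# creeps toward 1 as h shrinks, which is exactly the supposition
# made in the text (and what singles out the base e).

def difference_quotient(h):
    return (math.exp(h) - 1.0) / h

for h in [1e-1, 1e-2, 1e-3, 1e-4]:
    print(f"h = {h:g}: quotient = {difference_quotient(h):.8f}")
```

For any other base the same quotient tends to a constant other than 1, which is why supposing the limit equals 1 quietly selects one particular function.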

### The Logarithm

We have made a lot of assumptions in that last sub-heading. A lot. We will, for all intents and purposes, spend the rest of this heading justifying (i.e. ensuring a lack of contradiction in) these claims. In lieu of directly defending the earlier claims about the function ƒ, this will bring to form a function with properties so intriguing, and a relationship to this exercise so destined, that it is given a special name and notation in mathematics: the logarithm.

There's a big issue with the function ƒ, besides the claims we made earlier. Even with those claims, there's no obvious point to it: the function ƒ has yet to be defined, and is thus susceptible to possible contradictions later on. Heck, with more assumptions laid on it, it has even less of a chance to survive analysis. However, we still have a trick left to give the function form: the inverse function and its properties. If we forget about the function ƒ and focus on its inverse, we can do some cool things with it. Using the locally named Reciprocal Definition for inverse functions, we can give a definition for the derivative of the inverse function

${\displaystyle {\begin{aligned}(f^{-1})'&={\frac {1}{f'\circ f^{-1}}}\\&={\frac {1}{f\circ f^{-1}}}\\&={\frac {1}{x}}\end{aligned}}}$

That is one easy definition. With this, we will make another outrageous claim, albeit a less intense one. We can say that this derivative ${\textstyle {\frac {1}{x}}}$ has a primitive given by a special integral, one whose properties will give this inverse function some teeth. We will suppose (with a little more merit than the previous supposition) that

${\displaystyle f^{-1}=\int _{1}^{x}{{\frac {1}{t}}\operatorname {d} \!t}}$

We're going to simply drop the ƒ−1 notation now. This inverse function is the definition of the logarithm. This special logarithm, unlike the ones used in elementary mathematics, has no base. It is the mathematician's favorite version of the logarithm, and is notated either as simply "log" without any base, or as "ln", which has a special significance (a defined base) that will be described later on. To summarize a key point for this section,

Definition of a Logarithm
${\displaystyle \log x=\ln x=\int _{1}^{x}{{\frac {1}{t}}\operatorname {d} \!t}}$

Note that in mathematics, you may see either ${\displaystyle \log }$ or ${\displaystyle \ln }$ (pronounced "lawn") used to refer to this special function. In fields where logarithms with bases (which will be covered in the next heading) are common, ${\displaystyle \ln }$ is preferred, as it is clearly different from ${\displaystyle \log }$, which might otherwise appear to be a mistake. In pure mathematics, logarithms with bases are often not used, so it would not be an issue in this discipline which one you use.
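Since the logarithm is now literally an integral, it can be approximated by any Riemann-sum scheme. A minimal sketch, assuming the midpoint rule and a hypothetical helper name `log_by_integral`; `math.log` is used only as an independent check.

```python
import math

# Approximate log x = ∫_1^x (1/t) dt with a midpoint Riemann sum.

def log_by_integral(x, n=100_000):
    """Midpoint rule on [1, x] for the integrand 1/t (requires x > 0)."""
    h = (x - 1.0) / n          # h < 0 when x < 1, which flips the sign
    total = 0.0
    for k in range(n):
        t = 1.0 + (k + 0.5) * h
        total += 1.0 / t
    return total * h

for x in [0.5, 2.0, 10.0]:
    print(x, log_by_integral(x), math.log(x))
```

Notice that no base appears anywhere: the integral itself is the definition, and inputs below 1 automatically produce negative values because the integration runs backwards.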

### The Exponential

The exponential function is, unlike the logarithmic function, simply a single supposition. Essentially, the exponential function is the inverse of the logarithmic function. The purpose of this will be explored in the second heading of this page. All in all, the construction of the logarithmic and exponential functions is complete.

Definition of the Exponential
${\displaystyle \exp x=\log ^{-1}x=\ln ^{-1}x}$

However, it should be emphasized that although the exponential function shares its name with exponentiation as learned in elementary mathematics (${\displaystyle 10^{x}}$, for example), there is a small but significant difference between them that should not be glossed over in this heading.
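Because exp is defined purely as the inverse of log, it can be computed by inverting the logarithm numerically. A sketch under that definition: the helper name `exp_by_inversion` is hypothetical, `math.log` stands in for the integral-defined logarithm, and bisection works because log is strictly increasing.

```python
import math

# Compute exp(y) by solving log(x) = y with bisection.

def exp_by_inversion(y, tol=1e-12):
    lo, hi = 1e-12, 1.0
    while math.log(hi) < y:    # grow the bracket upward if needed
        hi *= 2.0
    while math.log(lo) > y:    # grow the bracket downward if needed
        lo /= 2.0
    while hi - lo > tol * hi:  # log is strictly increasing: unique root
        mid = (lo + hi) / 2.0
        if math.log(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

for y in [-1.0, 0.0, 2.5]:
    print(y, exp_by_inversion(y), math.exp(y))
```

This is exactly the "single supposition" in action: every value of exp is recovered from the logarithm alone, with no appeal to repeated multiplication.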

## Appendix

### Alternate Construction of Exponentiation

The usual approach to constructing exponentiation is to define the logarithm as an integral and the exponential as its inverse (as done above). In this appendix, we will follow the reverse approach, constructing (in the loose sense of the word) an exponential function defined for all real numbers by extending it from the rationals. (Unfortunately, it involves some tedious computations!)

We can now use what we know about continuity to construct rational powers of positive real numbers.

### Continuity of x^n

We've already defined integer powers as repeated multiplication, but we haven't yet shown that they're continuous. Let's do that first.

• ${\displaystyle f(x)=x^{0}=1}$ is continuous.

Given ${\displaystyle \epsilon >0}$, ${\displaystyle |f(x)-f(c)|=|1-1|=0<\epsilon }$. So, ${\displaystyle \forall \delta >0:|x-c|<\delta \implies |f(x)-f(c)|<\epsilon }$.

• ${\displaystyle f(x)=x}$ is continuous.

Given ${\displaystyle \epsilon >0}$, let ${\displaystyle \delta =\epsilon }$. Then ${\displaystyle |x-c|<\delta \implies |x-c|<\epsilon \implies |f(x)-f(c)|<\epsilon }$ .

• ${\displaystyle f(x)=x^{n}}$ is continuous for all ${\displaystyle n\in \mathbb {N} }$ and all ${\displaystyle x\in \mathbb {R} }$.

We proceed by induction. We have already seen that ${\displaystyle f(x)=x^{1}}$ is continuous. Assuming ${\displaystyle f(x)=x^{n-1}}$ is continuous, we use the fact that continuity is preserved under algebraic operations to see that ${\displaystyle xf(x)=x^{n}}$ is continuous.

• ${\displaystyle f(x)=x^{-n}}$ is continuous for all ${\displaystyle n\in \mathbb {N} }$ and all ${\displaystyle x\in \mathbb {R} \setminus \{0\}}$.

Since ${\displaystyle x^{n}}$ is continuous and nonzero on the set in question, ${\displaystyle {\frac {1}{x^{n}}}=x^{-n}}$ is continuous since continuity is preserved under division by a nonzero function.

We can now use the continuity of ${\displaystyle x^{n}}$ together with the intermediate value theorem to construct positive nth roots. As promised, this is much nicer than the construction of square roots in the first chapter:

### Construction of nth roots

We begin with construction of rational powers of arbitrary positive reals.

Given ${\displaystyle c>0}$, consider the function ${\displaystyle f(x)=x^{n}-c}$ (it is clear that 0 has a unique nth root, so we do not consider that case). Then ${\displaystyle f(0)=-c<0}$, and since ${\displaystyle c<1+c}$ and ${\displaystyle 1+c>1}$, ${\displaystyle f(1+c)=(1+c)^{n}-c>(1+c)^{n}-(1+c)\geq 0}$. By the Intermediate Value Theorem, ${\displaystyle \exists x\in (0,1+c):f(x)=x^{n}-c=0}$. Thus c has a positive nth root.

To prove uniqueness, let x and y be two nth roots of c. If ${\displaystyle x>y>0}$, then ${\displaystyle x^{n}>y^{n}>0}$. But then it would follow that ${\displaystyle c>c}$, a contradiction. Similarly we cannot have ${\displaystyle x<y}$, so it follows that ${\displaystyle x=y}$.
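The Intermediate Value Theorem argument above is constructive in spirit: ${\textstyle f(x)=x^{n}-c}$ changes sign on ${\textstyle (0,1+c)}$, so repeatedly halving that interval homes in on the root. A sketch of that idea (the helper name `nth_root` and the tolerance are illustrative choices):

```python
# Bisection on (0, 1+c), the same interval used in the IVT argument.

def nth_root(c, n, tol=1e-12):
    """The unique positive nth root of c > 0."""
    lo, hi = 0.0, 1.0 + c      # f(lo) < 0 and f(hi) > 0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if mid ** n < c:
            lo = mid           # root lies to the right of mid
        else:
            hi = mid           # root lies to the left of (or at) mid
    return (lo + hi) / 2.0

print(nth_root(2.0, 2))        # approximately 1.4142135...
print(nth_root(27.0, 3))       # approximately 3
```

The uniqueness proof is what guarantees bisection converges to *the* nth root rather than merely *an* nth root.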

### Definition and Properties of Rational Powers

Given ${\displaystyle n\in \mathbb {N} }$ we define ${\displaystyle x^{\frac {1}{n}}={\sqrt[{n}]{x}}}$ to be the unique nonnegative nth root of x. We then define all rational powers as follows:

If ${\displaystyle r={\frac {p}{q}}}$ is in lowest terms (i.e. p and q have no common factors and ${\displaystyle q>0}$), we define ${\displaystyle x^{r}=({\sqrt[{q}]{x}})^{p}}$.

Our definition would work just as well if ${\displaystyle {\frac {p}{q}}}$ were not in lowest terms, as we'll see in a minute. First we must prove some basic facts:

• ${\displaystyle {\sqrt[{m}]{\sqrt[{n}]{x}}}={\sqrt[{mn}]{x}}}$

Note that ${\displaystyle ({\sqrt[{m}]{\sqrt[{n}]{x}}})^{mn}=(({\sqrt[{m}]{\sqrt[{n}]{x}}})^{m})^{n}={\sqrt[{n}]{x}}^{n}=x}$. Thus ${\displaystyle {\sqrt[{m}]{\sqrt[{n}]{x}}}}$ is an mn-th root of x. The result follows immediately from uniqueness of positive roots.

• ${\displaystyle x^{\frac {p}{q}}={\sqrt[{q}]{x^{p}}}={\sqrt[{q}]{x}}^{p}}$

Using what we know about integer powers, we see that ${\displaystyle ({\sqrt[{q}]{x}})^{p}={\sqrt[{q}]{(({\sqrt[{q}]{x}})^{p})^{q}}}={\sqrt[{q}]{(({\sqrt[{q}]{x}})^{q})^{p}}}={\sqrt[{q}]{x^{p}}}}$

As promised, our definition does not depend on the fraction representing r:

• If ${\displaystyle {\frac {p}{q}}={\frac {m}{n}}}$, then ${\displaystyle x^{\frac {p}{q}}={\sqrt[{n}]{x^{m}}}}$.

If ${\displaystyle {\frac {p}{q}}={\frac {m}{n}}}$, then ${\displaystyle m=cp}$ and ${\displaystyle n=cq}$ for some ${\displaystyle c\in \mathbb {N} }$. Thus ${\displaystyle {\sqrt[{n}]{x^{m}}}={\sqrt[{cq}]{x^{cp}}}=(({\sqrt[{c}]{\sqrt[{q}]{x}}})^{c})^{p}=({\sqrt[{q}]{x}})^{p}=x^{\frac {p}{q}}}$.

Now we'll prove the standard algebraic facts about rational powers:

• If ${\displaystyle r,s\in \mathbb {Q} }$ and ${\displaystyle x>0}$, then ${\displaystyle x^{r}x^{s}=x^{r+s}}$ and ${\displaystyle (x^{r})^{s}=x^{rs}}$

Proof: Let ${\displaystyle r={\frac {a}{b}}}$ and ${\displaystyle s={\frac {c}{d}}}$. Then ${\displaystyle x^{r}x^{s}=x^{\frac {a}{b}}x^{\frac {c}{d}}=x^{\frac {ad}{bd}}x^{\frac {bc}{bd}}=(x^{\frac {1}{bd}})^{ad}(x^{\frac {1}{bd}})^{bc}=(x^{\frac {1}{bd}})^{ad+bc}=x^{\frac {ad+bc}{bd}}=x^{r+s}}$

Also, ${\displaystyle x^{rs}=x^{{\frac {a}{b}}{\frac {c}{d}}}=x^{\frac {ac}{bd}}=(x^{ac})^{\frac {1}{bd}}=(((x^{a})^{c})^{\frac {1}{b}})^{\frac {1}{d}}=(((x^{a})^{\frac {1}{b}})^{c})^{\frac {1}{d}}=(x^{\frac {a}{b}})^{\frac {c}{d}}=(x^{r})^{s}}$
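These identities are easy to spot-check numerically. A sketch with exponents held as exact `Fraction`s; the helper `rpow`, which mirrors the definition ${\textstyle x^{p/q}=({\sqrt[{q}]{x}})^{p}}$, is an illustrative name, and floating-point roots stand in for the exact nth roots constructed earlier.

```python
from fractions import Fraction

# Spot-check x^r x^s = x^(r+s) and (x^r)^s = x^(rs) for rational r, s.

def rpow(x, r):
    """x**(p/q) computed as (x**(1/q))**p, mirroring the definition."""
    p, q = r.numerator, r.denominator   # q > 0 by Fraction's invariant
    return (x ** (1.0 / q)) ** p

x = 5.0
r, s = Fraction(2, 3), Fraction(-1, 4)

assert abs(rpow(x, r) * rpow(x, s) - rpow(x, r + s)) < 1e-9
assert abs(rpow(x, r) ** float(s) - rpow(x, r * s)) < 1e-9
print("both identities hold numerically for r = 2/3, s = -1/4")
```

Because `Fraction` automatically reduces to lowest terms with a positive denominator, the code also quietly exercises the representation-independence fact proved above.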

• If ${\displaystyle r={\frac {a}{b}}\in \mathbb {Q} }$ with ${\displaystyle r>0}$, and ${\displaystyle y>x>0}$, then ${\displaystyle y^{r}>x^{r}>0}$.

Proof: If ${\displaystyle y^{\frac {1}{b}}\leq x^{\frac {1}{b}}}$, then ${\displaystyle (y^{\frac {1}{b}})^{b}\leq (x^{\frac {1}{b}})^{b}}$, and ${\displaystyle y\leq x}$, contradicting the assumption ${\displaystyle y>x>0}$. So ${\displaystyle y^{\frac {1}{b}}>x^{\frac {1}{b}}>0}$. Since a > 0, ${\displaystyle y^{\frac {a}{b}}>x^{\frac {a}{b}}>0}$. Thus ${\displaystyle y^{r}>x^{r}>0}$

### Continuity of rational powers

Now we'll use the preceding algebraic properties to prove continuity of all rational powers:

• ${\displaystyle f(x)=x^{\frac {1}{n}}}$ is continuous for all ${\displaystyle n\in \mathbb {N} }$ and ${\displaystyle x\geq 0}$.

Proof: Given ${\displaystyle \epsilon >0}$, let ${\displaystyle \delta =|c^{\frac {n-1}{n}}|\epsilon }$. Then ${\displaystyle |x-c|<\delta \implies }$

${\displaystyle |x^{\frac {1}{n}}-c^{\frac {1}{n}}|={\frac {|x^{\frac {1}{n}}-c^{\frac {1}{n}}||x^{\frac {n-1}{n}}+x^{\frac {n-2}{n}}c^{\frac {1}{n}}+\cdots +x^{\frac {1}{n}}c^{\frac {n-2}{n}}+c^{\frac {n-1}{n}}|}{|x^{\frac {n-1}{n}}+x^{\frac {n-2}{n}}c^{\frac {1}{n}}+\cdots +x^{\frac {1}{n}}c^{\frac {n-2}{n}}+c^{\frac {n-1}{n}}|}}={\frac {|x-c|}{|x^{\frac {n-1}{n}}+x^{\frac {n-2}{n}}c^{\frac {1}{n}}+\cdots +x^{\frac {1}{n}}c^{\frac {n-2}{n}}+c^{\frac {n-1}{n}}|}}\leq {\frac {|x-c|}{|c^{\frac {n-1}{n}}|}}<{\frac {|c^{\frac {n-1}{n}}|\epsilon }{|c^{\frac {n-1}{n}}|}}=\epsilon }$.

The preceding argument works for ${\displaystyle c\not =0}$. If ${\displaystyle c=0}$, then let ${\displaystyle \delta =\epsilon ^{n}}$. Then:

${\displaystyle |x-0|<\delta \implies |x|<\epsilon ^{n}\implies |x|^{\frac {1}{n}}<\epsilon \implies |x^{\frac {1}{n}}-0|<\epsilon }$

So, ${\displaystyle x^{\frac {1}{n}}}$ is continuous for all ${\displaystyle x\geq 0}$.

• ${\displaystyle x^{q}}$ is continuous for all ${\displaystyle q\in \mathbb {Q} }$ and all ${\displaystyle x>0}$.

Proof: If ${\displaystyle q={\frac {a}{b}}}$, where a and b are integers and ${\displaystyle b>0}$, then ${\displaystyle x^{q}=(x^{a})^{\frac {1}{b}}}$. Thus ${\displaystyle x^{q}}$ is the composition of continuous functions, and therefore is continuous itself.

## Real Powers

We will define an arbitrary real power ${\textstyle a^{x}}$ as the supremum of ${\textstyle a^{q}}$ taken over the rational numbers q in the Dedekind cut corresponding to the real exponent x. But first, we need to establish that this operation indeed produces a unique real number.

### Theorem

Let ${\displaystyle a,x\in \mathbb {R} }$ and let ${\displaystyle a>1}$. (For ${\displaystyle 0<a<1}$ one may instead define ${\displaystyle a^{x}=(1/a)^{-x}}$, and ${\displaystyle 1^{x}=1}$; the restriction ${\displaystyle a>1}$ keeps the sets below bounded as required.)

Let ${\displaystyle A=\{a^{q}|q\in \mathbb {Q} ;q\leq x\}}$
Let ${\displaystyle B=\{a^{q}|q\in \mathbb {Q} ;q\geq x\}}$

Let ${\displaystyle \alpha =\sup A}$ and ${\displaystyle \beta =\inf B}$

Then, ${\displaystyle \alpha =\beta }$
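A numerical sketch of this definition: pick a rational ${\textstyle q\leq x}$ with a large denominator and compute ${\textstyle a^{q}}$, which approximates the common value ${\textstyle \alpha =\beta }$ from below. The helper name `real_power` is illustrative, and Python's float power stands in for the rational powers constructed in the appendix.

```python
from fractions import Fraction
import math

# Approximate a**x (a > 1) via a rational exponent q <= x.

def real_power(a, x, max_denominator=10**6):
    q = Fraction(x).limit_denominator(max_denominator)
    if q > Fraction(x):                   # force q <= x, as in the set A
        q -= Fraction(1, max_denominator)
    return a ** float(q)                  # a^q for rational q

a, x = 3.0, math.sqrt(2)
print(real_power(a, x), a ** x)  # the rational approximation agrees closely
```

Increasing `max_denominator` refines the rational approximation of x, and the theorem's conclusion ${\textstyle \alpha =\beta }$ is what guarantees these approximations from below and from above squeeze onto a single value.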