# Real Analysis/Limits


The challenge in understanding limits is not in their definition, but rather in their execution. Successfully completing a limit proof, using the epsilon-delta definition, involves learning many different concepts at once—most of which will be unfamiliar coming from earlier mathematics. This chapter will serve as a guide in navigating these proofs, as the skills here will serve you well in higher mathematics.

## Definition

The definition of a limit, in ordinary real analysis, is notated as:

$\lim_{x \rightarrow c}f(x) = L$

One way to conceptualize the definition of a limit, and one which you may have been taught, is this: $\lim_{x \rightarrow c}f(x) = L$ means that we can make f(x) as close as we like to L by making x close to c. However, in real analysis, you will need to be rigorous with your definition—and we have a standard definition for a limit.

The notation of a limit is actually a shorthand for this expression:

Definition of "ƒ approaches the limit L near c"
Given a function ƒ, a limit L, and an approaching value c: for all ε > 0 there exists δ > 0 such that for all x, $0 < |x - c| < \delta \implies |f(x) - L| < \epsilon$.
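As a concrete first instance (the function and the choice of δ here are illustrative, not part of the original definition), consider proving that $\lim_{x \rightarrow 2}(3x + 1) = 7$. Given any ε > 0, choosing $\delta = \frac{\epsilon}{3}$ satisfies the definition:

\begin{align} 0 < |x - 2| < \delta = \frac{\epsilon}{3} \implies |(3x + 1) - 7| = 3|x - 2| < 3 \cdot \frac{\epsilon}{3} = \epsilon \end{align}

Notice the workflow: the choice of δ was reverse-engineered from the target inequality $|f(x) - L| < \epsilon$, which is how most such proofs are found.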

This definition gives many people a great deal of trouble, but since it is so fundamental to higher mathematics, there are many ways to help solidify it. This chapter will serve as a guide to the behavior of this definition and provide the insight necessary for working with it, while the Exercises will help unravel the puzzle, solidify the concept, and enable you to execute the definition properly.

## Corollaries of Limits

A graphical example of a function converging to a limit as it approaches infinity

It is very common, given limits, to work with the concept of infinity. However, the concept of infinity has yet to be well defined. Intuitively, we know that infinity represents endlessness, and it is represented as $\infty$. Yet, infinity itself is not a number. The current limit definition will fail if we use infinity like a number. If you suppose some limit where c = ∞ and apply our original definition, it would mean that

$\lim_{x \rightarrow \infty}f(x) = L$  means that  $\forall \epsilon>0: \exists \delta : 0 < |x - \infty| < \delta \implies |f(x) - L| < \epsilon$

Which is clearly nonsense!

1. You cannot "subtract by infinity" - infinity isn't a number nor is it really a variable.
2. Infinity cannot be bounded, yet by putting infinity in a $|a - b| < x$ format, it implies boundedness.

So, the definition needs to be rewritten, which is done in the following chart. The definitions for the limit as x approaches positive or negative infinity, and for the limit as ƒ(x) converges to positive or negative infinity, are as follows:

Note
Yes, the approaching and converging distinction is important. You can look at it as either referencing the delta or the epsilon, respectively.
Variations of the Epsilon-Delta Definition

| Notation | Formulation |
| --- | --- |
| $\lim_{x \rightarrow c}f(x) = \infty$ | $\forall N>0: \exists \delta: 0<\vert x-c\vert<\delta \implies f(x) > N$ |
| $\lim_{x \rightarrow c}f(x) = -\infty$ | $\forall N>0: \exists \delta: 0<\vert x-c\vert<\delta \implies f(x) < -N$ |
| $\lim_{x \rightarrow \infty}f(x) = L$ | $\forall \epsilon > 0: \exists M: x > M \implies \vert f(x)-L\vert < \epsilon$ |
| $\lim_{x \rightarrow -\infty}f(x) = L$ | $\forall \epsilon > 0: \exists M: x < -M \implies \vert f(x)-L\vert < \epsilon$ |
| $\lim_{x \rightarrow \infty}f(x) = \infty$ | $\forall N > 0: \exists M: x > M \implies f(x) > N$ |
| $\lim_{x \rightarrow \infty}f(x) = -\infty$ | $\forall N > 0: \exists M: x > M \implies f(x) < -N$ |
| $\lim_{x \rightarrow -\infty}f(x) = \infty$ | $\forall N > 0: \exists M: x < -M \implies f(x) > N$ |
| $\lim_{x \rightarrow -\infty}f(x) = -\infty$ | $\forall N > 0: \exists M: x < -M \implies f(x) < -N$ |
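As a worked instance of the third variation (the function here is an illustrative choice, not from the original text), consider proving $\lim_{x \rightarrow \infty}\frac{1}{x} = 0$. Given ε > 0, take $M = \frac{1}{\epsilon}$:

\begin{align} x > M = \frac{1}{\epsilon} \implies \left|\frac{1}{x} - 0\right| = \frac{1}{x} < \frac{1}{M} = \epsilon \end{align}

Just as δ was derived from ε before, here M is derived from ε.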

Take a note of the following variables:

1. N denotes a bound on the output ƒ(x) in a limit involving infinity and is analogous to ε.
2. M denotes a bound on the input x in a limit involving infinity and is analogous to δ.

We only use big N and M because the connotation associated with ε and δ is that they are small numbers; big N and M have the opposite connotation.

## Conceptualization

The concept of a limit: Whenever a point x is within δ units of c, f(x) is within ε units of L.
1. For each ε, the ε variable alone is used to derive a δ.

This powerful statement says that δ is determined by ε. To excuse mathematically rigorous language for a moment, δ can be imagined as the output of a function whose input is ε. This is actually important, as neither δ nor ε is allowed to have variables, such as x, be part of its formulation.
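To see this dependence concretely, here is a small numerical sanity check (a sketch, not part of the formal theory; the function f(x) = 3x + 1 and the rule δ = ε/3 are illustrative assumptions) that a δ derived purely from ε keeps f(x) within ε of L:

```python
# Numerical sanity check of an epsilon-delta relationship.
# Illustrative assumptions: f(x) = 3x + 1, c = 2, L = 7, and delta = epsilon / 3.

def within_epsilon(f, c, L, epsilon, delta, samples=1000):
    """Check |f(x) - L| < epsilon for sampled x with 0 < |x - c| < delta."""
    for i in range(1, samples + 1):
        offset = delta * i / (samples + 1)  # offsets lie strictly inside (0, delta)
        for x in (c - offset, c + offset):  # test both sides of c
            if abs(f(x) - L) >= epsilon:
                return False
    return True

f = lambda x: 3 * x + 1
for eps in (1.0, 0.1, 0.001):
    assert within_epsilon(f, c=2, L=7, epsilon=eps, delta=eps / 3)
```

No finite sampling can prove a limit, of course; the check only illustrates how δ shrinks in response to ε.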

2. ε and δ are supposed to represent bounds.

Hence the absolute value signs. They are mathematically equivalent to writing $-\delta < x - c < \delta$ and $-\epsilon< f(x) - L < \epsilon$, which exemplifies their bound-like nature a lot more.

3. This limit definition is designed to ignore the value of f(c) and whether or not c is even in the domain of ƒ.

The requirement $0 < |x-c|$ provides the appeal in studying calculus, by removing the technicality of having to analyze the behavior at the point itself (which is usually undefined to begin with). It is the mathematical implementation of the idea that the behavior of a function near a point should not be affected by its behavior at the point. Thus, ƒ need not be defined at c to have a limit there.
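A standard illustration (not in the original text) is $f(x) = \frac{x^2 - 1}{x - 1}$, which is undefined at x = 1 yet satisfies $\lim_{x \rightarrow 1}f(x) = 2$: for $x \not= 1$ we have $f(x) = x + 1$, so choosing $\delta = \epsilon$ gives

\begin{align} 0 < |x - 1| < \delta = \epsilon \implies |f(x) - 2| = |(x + 1) - 2| = |x - 1| < \epsilon \end{align}

The condition $0 < |x - 1|$ is exactly what lets the proof ignore the hole at x = 1.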

## Properties

Given that limits are such a fundamental concept of calculus, it is reasonable to expect that limits have some intriguing properties, enough both to warrant analysis and to be a staple topic in elementary, applied, and higher mathematics alike.

### Uniqueness

A limit is unique, in that there is one and only one answer for the same input. This is commonly rephrased as "a function cannot approach two different limits at c". Uniqueness is very important: if limits were not unique, working with them would grow so complex that they would simply become unusable.

Theorem

Suppose a function ƒ such that the limit of ƒ as x approaches c converges to L. If the limit of ƒ as x approaches c also converges to M, then L = M.
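A proof sketch (filling in an argument the text leaves implicit): suppose $L \not= M$ and take $\epsilon = \frac{|L - M|}{2} > 0$. The two limit statements give $\delta_1, \delta_2$; for any x with $0 < |x - c| < \min(\delta_1, \delta_2)$, the triangle inequality yields

\begin{align} |L - M| &= |(L - f(x)) + (f(x) - M)| \\ &\le |f(x) - L| + |f(x) - M| \\ &< \frac{|L - M|}{2} + \frac{|L - M|}{2} = |L - M| \end{align}

a contradiction, so L = M. (This requires that at least one such x exists, i.e. that c can be approached from within the domain of ƒ.)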

### Algebraic Operations

If $\lim_{x \rightarrow c}f(x) = L$, and $\lim_{x \rightarrow c}g(x) = M$, then:

List of Algebraic Operations for Limits

| Name | Meaning |
| --- | --- |
| Addition | $\lim_{x \rightarrow c}{(f(x) + g(x))} = L + M$ |
| Subtraction | $\lim_{x \rightarrow c}{(f(x) - g(x))} = L - M$ |
| Multiple | $\lim_{x \rightarrow c} af(x) = a \cdot \lim_{x \rightarrow c} f(x) = aL$ |
| Multiplication | $\lim_{x \rightarrow c}{f(x) \cdot g(x)} = L \cdot M$ |
| Reciprocal | $\lim_{x \rightarrow c}\frac{1}{g(x)} = \frac{1}{M}$, assuming $M \not= 0$ |
| Division | $\lim_{x \rightarrow c}\frac{f(x)}{g(x)} = \frac{L}{M}$, assuming $M \not= 0$ |
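Chaining these rules lets us evaluate limits of polynomials without ever touching ε or δ. For example (an illustration, not from the original text), using the limits of the constant and linear functions (proved later in this chapter) together with the addition and multiplication rules:

\begin{align} \lim_{x \rightarrow 2}(x^2 + 3x) &= \lim_{x \rightarrow 2}x \cdot \lim_{x \rightarrow 2}x + \lim_{x \rightarrow 2}3 \cdot \lim_{x \rightarrow 2}x \\ &= 2 \cdot 2 + 3 \cdot 2 = 10 \end{align}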

By applying the corresponding theorems for sequential limits, we find that functional limits are unique, preserve algebraic operations and ordering, and satisfy a corresponding "Squeeze Theorem".

• If there exists $\delta > 0$ such that $f(x) \geq g(x)$ for all x with $0 < |x - c| < \delta$, then $L \geq M$. (Note that even the strict hypothesis $f(x) > g(x)$ only yields $L \geq M$.)
• If $L = M$ and $f(x) \leq h(x) \leq g(x)$ near c, then $\lim_{x \rightarrow c}h(x) = L$ (the Squeeze Theorem).
• If L = 0 and h(x) is bounded, then $\lim_{x \rightarrow c}f(x)h(x) = 0$.
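A classic application of the Squeeze Theorem (an illustrative example, not from the original text): $\lim_{x \rightarrow 0} x^2 \sin\left(\frac{1}{x}\right) = 0$. Since $-1 \leq \sin\left(\frac{1}{x}\right) \leq 1$ for all $x \not= 0$,

\begin{align} -x^2 \leq x^2 \sin\left(\frac{1}{x}\right) \leq x^2 \end{align}

and both bounding functions tend to 0 as $x \rightarrow 0$, so the squeezed function must as well. This is also an instance of the third bullet, with $f(x) = x^2$ and the bounded factor $h(x) = \sin(1/x)$; note the limit exists even though the function is undefined at 0.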

#### Proof

Depending on the operation, the following proofs require more or less knowledge of algebraic manipulation of inequalities.

Of the operations, the proof for addition is the simplest, as it relies on the least amount of inequality algebra.

Suppose $\lim_{x \rightarrow c}f(x) = L$ and $\lim_{x \rightarrow c}g(x) = M$. By definition, for any $\epsilon_1, \epsilon_2 > 0$ there exist $\delta_1, \delta_2 > 0$ such that

$0 < |x - c| < \delta_1 \implies |f(x) - L| < \epsilon_1$

$0 < |x - c| < \delta_2 \implies |g(x) - M| < \epsilon_2$

Since both epsilon-delta descriptions reference the same quantity |x - c|, we can satisfy both implications at once by using the smaller bound (its range is valid for both):

$\delta = \min(\delta_1, \delta_2) \implies \left(0 < |x - c| < \delta \implies |f(x) - L| < \epsilon_1 \text{ and } |g(x) - M| < \epsilon_2\right)$

Now, given ε > 0, apply the two limit definitions with an epsilon half as small as the target, $\epsilon_1 = \epsilon_2 = \frac{\epsilon}{2}$:

$0 < |x - c| < \delta \implies |f(x) - L| < \frac{\epsilon}{2} \text{ and } |g(x) - M| < \frac{\epsilon}{2}$

We cannot merge the two epsilon inequalities the way we merged the delta inequalities, because the expressions inside the absolute value signs are not the same. However, we know that a < b and c < d imply a + c < b + d (Problem 1.II) and that |x + y| ≤ |x| + |y| (Problem 1.I), so we can combine the two epsilon inequalities to form

\begin{align} |f(x) - L| + |g(x) - M| &< \frac{\epsilon}{2} + \frac{\epsilon}{2} \\ |(f(x) - L) + (g(x) - M)| \le |f(x) - L| + |g(x) - M| &< \epsilon \\ |(f(x) + g(x)) - (L + M)| &< \epsilon \end{align}

The final statement can be expressed, using limit notation, as

$0 < |x - c| < \delta \implies |(f(x) + g(x)) - (L + M)| < \epsilon \text{, which means } \lim_{x \rightarrow c}{(f(x) + g(x))} = L + M$ $\blacksquare$
##### Subtraction

Subtraction follows from the addition proof by taking a function h that is the negation of g. In other words, apply the addition proof with the negated function −g (whose limit is −M, by the Multiple rule with a = −1) in place of g.

$\blacksquare$

##### Multiplication

Of the operations, the proof for multiplication is the most complex, as it relies on the greatest amount of inequality algebra. It also requires a seemingly contrived lemma to operate. We will start by proving the lemma, which is simply an algebraic relationship between inequalities, much as the binomial theorem relates a sum of terms to a product.

The lemma states:

$\text{If } |a - c| < \min\left(1, \frac{x}{2(|d| + 1)}\right) \text{ and } |b - d| < \frac{x}{2(|c| + 1)}$, $\text{then } |ab - cd| < x$

for any numbers a, b, c, and d, and any x > 0.

The first part is to break the min function into its two consequences:

$|a| < 1 + |c| \quad \text{and} \quad |a - c| < \frac{x}{2(|d| + 1)}$

Then, rewrite the main expression so that it allows substitution:

\begin{align}|ab - cd| &= |a(b - d) + d(a - c)| \\ &\le |a||b - d| + |d||a - c| \\ &< (1 + |c|) \cdot \frac{x}{2(|c| + 1)} + |d| \cdot \frac{x}{2(|d| + 1)} \\ &= \frac{x}{2} + \frac{|d|}{|d| + 1} \cdot \frac{x}{2} \\ &< \frac{x}{2} + \frac{x}{2} = x \end{align} $\blacksquare$

As you can see, the lemma describes a simple-to-prove and valid, yet very contrived and unnatural-looking, relationship between numbers. But this relationship is very attractive to apply to limits, because any values of a, b, c, and d (even 0) work, and the condition x > 0 matches the ε variable.

As you will see below, we will apply this lemma for multiplication.

Suppose $\lim_{x \rightarrow c}f(x) = L$ and $\lim_{x \rightarrow c}g(x) = M$, so that for any $\epsilon_1, \epsilon_2 > 0$ there exist $\delta_1, \delta_2 > 0$ with

$0 < |x - c| < \delta_1 \implies |f(x) - L| < \epsilon_1$

$0 < |x - c| < \delta_2 \implies |g(x) - M| < \epsilon_2$

As in the addition proof, both delta conditions constrain the same quantity |x - c|, so taking $\delta = \min(\delta_1, \delta_2)$ satisfies both implications simultaneously.

Now, given ε > 0, choose the individual epsilons to match the hypotheses of the lemma proved above (the epsilon values for ƒ and g individually need not equal ε; only the end result must). These bounds are purely numerical, involving no variable such as x, which is exactly what a valid choice requires:

$\epsilon_1 = \min\left(1, \frac{\epsilon}{2(|M| + 1)}\right) \quad \text{and} \quad \epsilon_2 = \frac{\epsilon}{2(|L| + 1)}$

Then

$0 < |x - c| < \delta = \min(\delta_1, \delta_2) \implies |f(x) - L| < \min\left(1, \frac{\epsilon}{2(|M| + 1)}\right) \text{ and } |g(x) - M| < \frac{\epsilon}{2(|L| + 1)}$

Applying the lemma with a = ƒ(x), b = g(x), c = L, d = M, and x = ε (the letters of the lemma, not of the limit) yields

$|f(x)g(x) - LM| < \epsilon \text{, which means } \lim_{x \rightarrow c}{f(x)g(x)} = LM$ $\blacksquare$
##### Multiple

The proof for multiples of some function ƒ follows from the proof on multiplication. It also relies on the proof of the limit of a constant function. Because this proof relies on two previous proofs, and those proofs are robust (they account for cases like 0), this proof is just as robust, even working when a = 0.

Given a function ƒ and a constant a, the limit can be simplified first using the multiplication rule, then using the limit of a constant function. \begin{align} \lim_{x \rightarrow c}{(a \cdot f(x))} &= \lim_{x \rightarrow c}{a} \cdot \lim_{x \rightarrow c}{f(x)} \\ &= a \cdot \lim_{x \rightarrow c}{f(x)} = aL \end{align} $\blacksquare$
##### Reciprocal

Of the operations, the proof for the reciprocal is similar to that of multiplication. It too requires a seemingly contrived lemma in order to function, and it relies on choosing the bound fed to the limit definition so that the boundedness demanded of ε and δ is maintained. Anyways, let us begin with the "contrived relationship".

The lemma states:

$\text{If } b \ne 0 \text{ and } |a - b| < \min\left(\frac{|b|}{2}, \frac{x|b|^2}{2}\right)$, $\text{then } a \ne 0 \text{ and } \left|\frac{1}{a} - \frac{1}{b}\right| < x$

for any numbers a and b, and any x > 0.

The first part is to break the min function into two cases. The first case can be simplified in this manner:

\begin{align} |a - b| &< \frac{|b|}{2} \\ |b| - |a| \le |b - a| = |a - b| &< \frac{|b|}{2} \\ \frac{|b|}{2} &< |a| \end{align}

Since $|a| > \frac{|b|}{2} > 0$, we get a ≠ 0 for free, and taking reciprocals of the inequality gives

\begin{align} \frac{|b|}{2} &< |a| \\ \frac{|b|}{2|a|} &< 1 \\ \frac{1}{|a|} &< \frac{2}{|b|} \end{align}

Focusing back on the end-goal inequality, we now apply both the second case and the fact just derived to reach our conclusion:

\begin{align} \left|\frac{1}{a} - \frac{1}{b}\right| &= \left|\frac{b - a}{ab}\right| \\ &= \frac{|b - a|}{|ab|} \\ &= \frac{1}{|a|} \cdot \frac{|b - a|}{|b|} \\ &< \frac{2}{|b|} \cdot \frac{x|b|^2}{2|b|} = x \end{align} $\blacksquare$

As you can see, the lemma describes a simple-to-prove and valid, yet very contrived and unnatural-looking, relationship between numbers. But this relationship is very attractive to apply to limits, because any values of a and b (excluding 0) work, and the condition x > 0 matches the ε variable.

As you will see below, we will apply this lemma for the reciprocal. Note that the proof is essentially a single substitution into the lemma.

Given $\lim_{x \rightarrow c}g(x) = M$ with $M \ne 0$, and given ε > 0, apply the limit definition with the bound $\min\left(\frac{|M|}{2}, \frac{\epsilon|M|^2}{2}\right)$ in place of ε. This gives some δ with

$0 < |x - c| < \delta \implies |g(x) - M| < \min\left(\frac{|M|}{2}, \frac{\epsilon|M|^2}{2}\right)$

Applying the lemma with a = g(x), b = M, and x = ε then yields $g(x) \ne 0$ and

$\left|\frac{1}{g(x)} - \frac{1}{M}\right| < \epsilon \text{, which means } \lim_{x \rightarrow c}\frac{1}{g(x)} = \frac{1}{M}$ $\blacksquare$
##### Division

The proof for division of the function ƒ by g is a corollary based on the proof done for limits of multiplication and limits of reciprocals.

Given functions ƒ and g, the limit can be expressed as a multiplication of the numerator and the reciprocal of the denominator. From there, the result is as depicted. \begin{align} \lim_{x \rightarrow c}{\frac{f(x)}{g(x)}} &= \lim_{x \rightarrow c}{f(x)} \cdot \lim_{x \rightarrow c}{\frac{1}{g(x)}} \\ &= L \cdot \frac{1}{M} \\ &= \frac{L}{M} \end{align} $\blacksquare$

As always, this proof has the obvious restriction that M cannot be 0.

### Limits of Functions

Here, we will prove the limits of many of the functions you will commonly see. As always, the following chart is provided for quick recall.

List of Limits of Common Functions

| Name | Meaning |
| --- | --- |
| Constant | $\lim_{x \rightarrow c}{a} = a$ |
| Linear | $\lim_{x \rightarrow c}{z} = c$ |

Note that for the linear function, we used z instead of the usual x because the variable name x is already defined and is being used by the limit notation.
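Both rows are quick to verify from the definition (proofs sketched here for completeness). For the constant function, any δ works:

$0 < |x - c| < \delta \implies |a - a| = 0 < \epsilon$

For the linear (identity) function, take $\delta = \epsilon$:

$0 < |z - c| < \delta = \epsilon \implies |z - c| < \epsilon$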

### Limits of Sequences

Note
The following heading requires knowledge of Sequences.

Consider the function $f(x) = \begin{cases} 1 & \mbox{if }x > 0 \\ 0 & \mbox{if }x \leq 0 \\ \end{cases}$ and the sequences $(x_n) = \left(\frac{1}{n}\right)$ and $(y_n) = \left(\frac{-1}{n}\right)$. Each converges to zero, but $(f(x_n)) = 1$ and $(f(y_n)) = 0$ for every n, so the image sequences have different limits as $n \rightarrow \infty$. Thus $\lim_{x \rightarrow 0} f(x)$ does not exist.

### Limits of Discontinuity

Note
The following heading requires knowledge of Continuity.

We'll be giving many more examples in the section on continuity. Although discontinuity is more naturally discussed alongside continuity (which is covered in the next chapter), discontinuity is actually defined in terms of limits.

#### Point Discontinuity

An example of point discontinuity would be the functions

Example 1
$f(x) = \begin{cases} 1 & \mbox{if }x = 0 \\ 0 & \mbox{if }x \not= 0 \\ \end{cases}$
Example 2
$g(x) = \frac{x(x-1)}{x-1}$

For these functions, the limits at the points in question are:

Example 1
$\lim_{x \rightarrow 0} f(x) = 0$
Example 2
$\lim_{x \rightarrow 1} g(x) = 1$

Both follow from the definition: the requirement $0 < |x - c|$ means the value (or absence of a value) at the point itself is never consulted. For Example 1, $f(x) = 0$ for every $x \not= 0$, so any δ works; for Example 2, $g(x) = x$ for every $x \not= 1$, so the proof for the linear function applies.

#### Jump Discontinuity

• Let $f(x) = \begin{cases} 0 & \mbox{if }x\leq 0 \\ 1 & \mbox{if }x > 0 \\ \end{cases}$. Then $\lim_{x \rightarrow 0} f(x)$ does not exist.
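A proof sketch (filling in the argument): suppose toward contradiction that $\lim_{x \rightarrow 0}f(x) = L$ for some L, and take $\epsilon = \frac{1}{2}$. For any $\delta > 0$, the points $x_1 = \frac{\delta}{2}$ and $x_2 = -\frac{\delta}{2}$ both satisfy $0 < |x - 0| < \delta$, yet $f(x_1) = 1$ and $f(x_2) = 0$. Then

\begin{align} 1 = |f(x_1) - f(x_2)| \le |f(x_1) - L| + |f(x_2) - L| < \frac{1}{2} + \frac{1}{2} = 1 \end{align}

a contradiction. Hence no such L exists.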

### Limits of Unusual Functions

Many of the examples here may seem a bit contrived and appear quite nasty, with even nastier proofs, but if done correctly, these examples (and the associated exercises) will solidify not only the methodology of a limit proof, but also how mathematics can, using verified theorems and behaviors, solve some seemingly unsolvable problems.

Our first example, often given as a demonstration of just how nasty functions can get (and how far a definition can take you), is

$f(x) = \begin{cases} 1/q & \mbox{if }x \text{ is rational, written in lowest terms as } x = p/q \\ 0 & \mbox{else} \\ \end{cases}, \forall x \in (0, 1)$

For the function ƒ, $\lim_{x \rightarrow c} f(x) = 0$ for all numbers in the domain. Yes, really.

The first step in understanding the proof of this statement is to stop imagining limits and continuity as the same thing; that is, to resist making the first step of this problem imagining the graph of this function and, in a sense, zooming in until an answer can be deduced graphically. Do not be saddened if this is how you thought about working out this problem; this method is a simplified explanation of limits commonly taught in elementary mathematics and would thus be ingrained in you anyway.

This proof demonstrates a method of mathematical proof that manipulates theorems, rather than numbers or variables, to arrive at the epsilon-delta model, which in turn implies the limit's validity: the existence of the limit. It also shows how a limit proof is really an exercise in relating two easily malleable inequalities together using valid theorems.

Assert the definition of a limit by deriving each of its components. First, assume ε > 0, as the limit definition does, and fix the approaching value c. Choose a natural number n in relation to epsilon:

$n > \frac{1}{\epsilon} \implies \epsilon > \frac{1}{n}$

Now, consider the set S consisting of every rational number strictly between 0 and 1 whose denominator (in lowest terms) does not exceed n. Additionally, if c is rational, we exclude it from the set; the explanation of this clause will come shortly.

$S = \left\{\frac{1}{2}, \frac{1}{3}, \frac{2}{3}, \frac{1}{4}, \frac{3}{4}, \frac{1}{5}, \frac{2}{5}, \frac{3}{5}, \frac{4}{5}, \ldots, \frac{1}{n}, \ldots, \frac{n - 1}{n}\right\} \setminus \{c\}$

From the enumeration it is apparent that S is finite (both the numerator and the denominator are bounded by n). A finite set of numbers has an element closest to c, so we can find

$\exists k \in S : |k - c| \text{ is minimized}$

We define δ as this smallest distance:

$\delta = |k - c| \implies \left(0 < |x - c| < \delta \implies x \notin S\right)$

Here you can see why c cannot be in S: if it were, the minimized distance could be 0, breaking the requirement that δ > 0. By removing it, we always obtain some non-zero δ.

It remains to check the epsilon condition. If x is irrational, then f(x) = 0, and

$0 < |x - c| < \delta \implies |f(x) - 0| = 0 < \epsilon$

Likewise, if x is rational, then x cannot be a member of S, so writing x in lowest terms as p/q forces q > n, and therefore

$0 < |x - c| < \delta \implies |f(x) - 0| = \frac{1}{q} < \frac{1}{n} < \epsilon$

In both cases the epsilon-delta implication holds, so $\lim_{x \rightarrow c} f(x) = 0$. $\blacksquare$

The next example, of a similar vein, is

Note
This function is also not continuous at any point of its domain.
$g(x) = \begin{cases} 1 & \mbox{if }x \in \mathbb{Q} \\ 0 & \mbox{else} \\ \end{cases}$

For the function g, $\lim_{x \rightarrow c} g(x)$ does not exist for any $c \in \mathbb{R}$.

Given $c \in \mathbb{R}$, for each n let $x_n$ be a rational number and $y_n$ an irrational number in the interval $(c - \frac{1}{n}, c + \frac{1}{n})$ with $x_n, y_n \not= c$ ($x_n$ and $y_n$ are guaranteed to exist by the density of the rationals and the irrationals). Given any $\epsilon > 0$, for $n > \frac{1}{\epsilon}$ we have $|x_n - c| < \frac{1}{n} < \epsilon$ and $|y_n - c| < \frac{1}{n} < \epsilon$, so $(x_n), (y_n) \rightarrow c$. However, $(g(x_n)) = 1$ and $(g(y_n)) = 0$ for every n, so their limits are 1 and 0. Since these are not equal, $\lim_{x \rightarrow c} g(x)$ does not exist.

## Appendix

Here, we will expose more topics in regards to limits. First, we will give a review on the nature of functions. Recall that a function from a set X to a set Y is a mapping $f: X \rightarrow Y$ such that f(x) is a unique element of Y for every $x\in X$. In analysis, we tend to talk about functions from subsets $A \subseteq \mathbb{R}$ to $\mathbb{R}$.

The definition for the limit of a function is much the same as the definition for a sequence. In fact, as we will see later, it is possible to define functional limits in terms of sequential limits. For the moment, however, let us restate the definition of a limit for a function ƒ on a general domain:

Given a subset $A \subset \mathbb{R}$ and a function $f:A\rightarrow \mathbb{R}$, we say $\lim_{x \rightarrow c}f(x) = L$ if $\forall \epsilon > 0: \exists \delta: 0<|x-c|<\delta \implies |f(x)-L|<\epsilon$

### Sequential Limits

One curious result of thinking about real numbers as built upon natural numbers and the like (as we have structured our section on numbers in this wikibook) is that the definition of a limit, for which we have used the real-number version for real functions all this time, can be characterized using sequential limits instead of being taken axiomatically, as follows:

Given a subset $A \subset \mathbb{R}$ and a function $f:A\rightarrow \mathbb{R}$, we say $\lim_{x \rightarrow c}f(x) = L$ if, for every sequence $(x_n)_{n=1}^{\infty}$ in A such that $x_n \not= c$ and $\lim_{n \rightarrow \infty}(x_n) = c$, we have $\lim_{n \rightarrow \infty}(f(x_n)) = L$

Note that the requirement $x_n \not= c$ corresponds with the requirement $|x - c| > 0$.

As an exercise to test your understanding, prove that these two definitions are equivalent. Note that taking the contrapositive gives a good criterion for determining whether or not a function diverges:

If $\exists (x_n), (y_n): (x_n)\rightarrow c, (y_n)\rightarrow c$, and $\lim_{n \rightarrow \infty}(f(x_n)) \not= \lim_{n \rightarrow \infty}(f(y_n))$, then $\lim_{x \rightarrow c}f(x)$ does not exist.

### Definition on an Arbitrary Metric Space

Let $(X,d_1)$, and $(Y,d_2)$ be metric spaces. And let $f:X \to Y$

The limit as $x \in X$ approaches $a \in X$ of $f$ is equal to $L \in Y$ if $\forall \epsilon > 0 \text{ } \exists \delta > 0 \text{ such that } 0 < d_1(x, a) < \delta \implies d_2(f(x), L) < \epsilon$

This is denoted $\lim_{x \to a} f = L$