Real Analysis/Limits


The challenge in understanding limits lies not in the definition itself, but in its execution. Successfully completing a limit proof using the epsilon-delta definition involves learning many different concepts at once, most of which will be unfamiliar coming from earlier mathematics. This chapter will serve as a guide to navigating these proofs, and the skills developed here will serve you well in higher mathematics.

One way to conceptualize the definition of a limit, and one which you may have been taught, is this: $\lim_{x \to c} f(x) = L$ means that we can make $f(x)$ as close as we like to $L$ by making $x$ close to $c$. However, in real analysis you will need to be rigorous with your definition, and we have a standard definition for a limit.

This definition gives many people trouble, but since it is so fundamental to higher mathematics, there are many ways to help pin it down. This chapter will be a guide to the behavior of this definition and will provide the insight necessary to work with it, while the Exercises will help unravel the puzzle, solidify the concept, and enable you to apply the definition properly.

A graphical example of a function converging to a limit as it approaches infinity

It is very common, when working with limits, to encounter the concept of infinity. However, infinity has yet to be well defined. Intuitively, we know that infinity represents endlessness, and it is written ∞. Yet infinity itself is not a number, and the current limit definition fails if we use infinity like a number. If you suppose some limit where c = ∞ and apply our original definition, two problems arise:

You cannot "subtract by infinity": infinity is not a number, nor is it really a variable.

Infinity cannot be bounded, yet putting infinity into the format $|a - b| < x$ implies boundedness.

So the definition needs to be rewritten, which is done in the following chart. The definitions for the limit as x approaches positive or negative infinity, and for the limit as ƒ(x) converges to positive or negative infinity, are as follows:
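As a sketch of what such a chart contains (the chart itself did not survive formatting, so these are the standard formulations): the case $x \to \infty$ replaces the δ-bound with a threshold $M$, and the case $f(x) \to \infty$ replaces the ε-bound with a threshold $N$.

```latex
\begin{aligned}
\lim_{x \to \infty} f(x) = L &\iff \forall \varepsilon > 0,\ \exists M \in \mathbb{R} \text{ such that } x > M \implies |f(x) - L| < \varepsilon \\
\lim_{x \to c} f(x) = \infty &\iff \forall N \in \mathbb{R},\ \exists \delta > 0 \text{ such that } 0 < |x - c| < \delta \implies f(x) > N
\end{aligned}
```

The negative-infinity cases are mirror images: $x < M$ for $x \to -\infty$, and $f(x) < N$ for $f(x) \to -\infty$.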

Note

Yes, the distinction between approaching and converging is important. You can view them as referring to the delta and the epsilon, respectively.

The concept of a limit: Whenever a point x is within δ units of c, f(x) is within ε units of L.

For every ε, it alone, the ε variable, is used to derive a δ.

This powerful statement says that δ depends on ε. To excuse mathematically rigorous language for a moment, δ can be thought of as the output of a function whose input is ε. This is actually important, as neither δ nor ε is allowed to have variables, such as x, as part of its formulation.
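As a hypothetical worked example (the function and numbers here are not from the text), the claim $\lim_{x \to 3}(2x + 1) = 7$ can be spot-checked numerically: since $|f(x) - 7| = 2|x - 3|$, the choice δ = ε/2 works, and crucially it depends on ε alone, never on x.

```python
# Spot-check the epsilon-delta claim lim_{x->3} (2x + 1) = 7
# with the candidate delta = eps / 2.

def f(x):
    return 2 * x + 1

c, L = 3.0, 7.0

def delta_for(eps):
    # Candidate delta derived purely from eps (no x may appear here).
    return eps / 2

for eps in (1.0, 0.25, 1e-3):
    d = delta_for(eps)
    # Probe points with 0 < |x - c| < delta (the definition never tests x = c).
    for t in (-0.999, -0.5, -1e-6, 1e-6, 0.5, 0.999):
        x = c + t * d
        assert 0 < abs(x - c) < d
        assert abs(f(x) - L) < eps

print("delta = eps/2 passed every spot check")
```

A finite check like this proves nothing on its own, but it is a useful sanity test of a candidate δ-formula before attempting the algebraic proof.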

This limit definition is designed to ignore the value of f(c) and whether or not c is even in the domain of ƒ.

The requirement $|x - c| > 0$ provides the appeal of studying calculus, by removing the technicality of having to analyze the behavior at the point itself (which is usually undefined to begin with). It is the mathematical implementation of the idea that the behavior of a function near a point should not be affected by its behavior at the point. Thus, $f(x)$ need not be defined at $c$ to have a limit there.

Given that limits are such a fundamental concept of calculus, it is reasonable to expect them to have intriguing properties, enough both to warrant analysis and to make them a staple topic in elementary, applied, and higher mathematics.

A limit is unique, in that there is always one and only one answer for the same input. This is commonly rephrased as "a function cannot approach two different limits at c". Uniqueness is very important, since without it the use of limits would grow so complex as to become unusable.

Theorem

Suppose a function ƒ is such that its limit as x approaches c converges to L. If the limit of ƒ as x approaches c also converges to M, then L = M.
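A sketch of the standard argument (not spelled out in the table above): assume $L \neq M$, run both ε-δ statements with $\varepsilon = |L - M| / 2$, and pick an x within both resulting deltas. Then

```latex
\begin{aligned}
|L - M| &= |(L - f(x)) + (f(x) - M)| \\
        &\le |f(x) - L| + |f(x) - M| \\
        &< \tfrac{|L - M|}{2} + \tfrac{|L - M|}{2} = |L - M|,
\end{aligned}
```

a contradiction, so $L = M$.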

By applying the corresponding theorems for sequential limits, we find that functional limits are unique, that they preserve algebraic operations and ordering, and that a corresponding "Squeeze Theorem" holds.

Since both epsilon-delta statements constrain the same quantity |x − c|, we can combine the two by taking the smaller of the two bounds, δ = min(δ₁, δ₂), since any x within the smaller bound satisfies both inequalities.

Due to the constraints placed on δ, the previous expression must, by our statement that each is a limit, imply the corresponding epsilon expression. And since the previous expression encompasses both delta conditions, it now implies both epsilon expressions simultaneously.

It should also be the case that, since the entire delta expression is shared, the bound on both functions ƒ and g can be chosen using the same epsilon; specifically, an epsilon half as large as the target, ε/2 for each.

We cannot force the epsilon inequalities together as we did the delta inequalities, because the expressions inside the absolute value signs are not the same and thus do not represent the same quantity. However, we know that a + c < b + d whenever a < b and c < d (Problem 1.II) and that |x + y| ≤ |x| + |y| (Problem 1.I), so we can combine the two epsilon inequalities to form
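Concretely, applying the triangle inequality and then the two ε/2 bounds gives the combined inequality:

```latex
|(f(x) + g(x)) - (L + M)| \le |f(x) - L| + |g(x) - M| < \tfrac{\varepsilon}{2} + \tfrac{\varepsilon}{2} = \varepsilon.
```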

Subtraction follows from the addition proof by introducing a function h that is the negation of g; in other words, wherever g appears in the addition proof, read it as a stand-in for the negated function.

Of the operations, the proof for multiplication is the most complex, as it relies on the greatest amount of inequality algebra. It also requires a seemingly contrived lemma to operate. We will start by proving the lemma, which is simply an algebraic relationship between inequalities, much as the binomial theorem relates a sum of terms to a product.

As you can see, the lemma describes a relationship between numbers that is valid and simple to prove, yet very contrived and unnatural-looking. But this relationship is very attractive to apply blindly to limits, because any values of a, b, c, and d (even 0) work, and the condition x > 0 matches the ε variable.
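The lemma's statement did not survive formatting; a standard lemma matching this description (any a, b, c, d, including zeros, with a threshold x > 0) is the following, so it is presumably of this form:

```latex
\text{If } |a - c| < \min\!\left(1, \frac{x}{2(|d| + 1)}\right) \text{ and } |b - d| < \frac{x}{2(|c| + 1)}, \text{ then } |ab - cd| < x.
```

The +1 terms in the denominators are exactly what lets the bound survive when $c = 0$ or $d = 0$.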

As you will see below, we will apply this lemma for multiplication.

Proof of Limit Multiplication

Suppose that two functions ƒ and g each have a limit as x approaches c. The equations will be as follows.

As before, since both epsilon-delta statements constrain the same quantity |x − c|, we take δ = min(δ₁, δ₂) so that both inequalities hold simultaneously.

Now, supposing that the ε's equal the following formulas, you can see that they are numerical in nature, so the original functions bear no relationship to them (which is good), and that they now have the same form and conditions as the lemma proved above. (As a reminder, the epsilon values for the limits of ƒ and g individually need not equal ε; only the end result must.)

The proof for multiples of a function ƒ follows from the proof of multiplication. It also relies on the limit-of-a-constant proof. Because this proof relies on two previous proofs, and because those proofs are robust (they account for cases such as 0), this proof is just as robust, working even when a = 0.

Proof of Limit Multiple

Given a function ƒ and a constant a, the limit can be simplified by treating a as a constant function: first apply the multiplication rule for limits, then the limit of a constant function.
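In symbols, using those two previously proved results:

```latex
\lim_{x \to c} a f(x) = \left(\lim_{x \to c} a\right)\left(\lim_{x \to c} f(x)\right) = a \lim_{x \to c} f(x).
```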

Of the operations, the proof for the reciprocal is similar to that of multiplication. It too requires a seemingly contrived relationship between mathematical statements in order to function, and it relies on showing that the bounds on epsilon and delta are maintained, which is what defines a valid limit. Let us begin with the "contrived relationship".

As before, the lemma describes a relationship between numbers that is valid and simple to prove, yet very contrived and unnatural-looking. But it is very attractive to apply blindly to limits, because any values of a and b (excluding 0) work, and the condition x > 0 matches the ε variable.
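This lemma's statement also did not survive formatting; a standard lemma matching the description (nonzero a and b, threshold x > 0) is presumably:

```latex
\text{If } b \neq 0 \text{ and } |a - b| < \min\!\left(\frac{|b|}{2}, \frac{x |b|^2}{2}\right), \text{ then } a \neq 0 \text{ and } \left|\frac{1}{a} - \frac{1}{b}\right| < x.
```

The $|b|/2$ bound keeps $a$ away from zero, and the $x|b|^2/2$ bound then controls $|b - a| / (|a||b|)$.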

As you will see below, we will apply this lemma to the reciprocal. Note that the proof is a simple assertion.

Proof of Limit for Reciprocals

Given the limit of the reciprocal of a function ƒ, we can assert that epsilon's relationship is as in the lemma proven in the previous table. From there, you can draw the same conclusion.

Consider the sequences $(x_n) = \left(\frac{1}{n}\right)$ and $(y_n) = \left(\frac{-1}{n}\right)$. Each converges to zero, but $(f(x_n)) = 1$ and $(f(y_n)) = 0$, and these have different limits as $n \to \infty$. Thus the limit does not exist.

We'll be giving many more examples in the section on continuity. Although discontinuity is more naturally discussed in terms of continuity (which is covered in the next chapter), discontinuity is actually defined in terms of limits.

Many of the examples here may seem a bit contrived and appear quite nasty, with even nastier proofs, but if worked through correctly, these examples (and the associated exercises) will solidify not only the methodology of a limit proof, but also how mathematics can, using verified theorems and behaviors, solve seemingly unsolvable problems.

Our first example, often given as a demonstration of just how nasty functions can get (and how far a definition can take you), is
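The function's definition did not survive formatting; from the proof below (rational numbers p/q with bounded denominators, limit 0 everywhere), it is evidently Thomae's function:

```latex
f(x) =
\begin{cases}
\dfrac{1}{q} & \text{if } x = \dfrac{p}{q} \text{ in lowest terms, with } q > 0, \\[1ex]
0 & \text{if } x \text{ is irrational.}
\end{cases}
```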

For the function ƒ, $\lim_{x \to c} f(x) = 0$ for all numbers $c$ in the domain. Yes, really.
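The proof below can be made concrete numerically. This hypothetical illustration (the specific numbers are not from the text) assumes the function is Thomae's function $f(p/q) = 1/q$: for ε = 1/10 there are only finitely many "bad" points in (0, 1) with $f(x) \ge \varepsilon$, so δ can be their least distance from c.

```python
# Numeric illustration of the finite-set argument for Thomae's function.
from fractions import Fraction

def thomae(x: Fraction) -> Fraction:
    # On rationals, Thomae's function returns 1/q for x = p/q in lowest terms.
    # (Fraction normalizes to lowest terms automatically; irrationals, which
    # Fraction cannot represent, would map to 0.)
    return Fraction(1, x.denominator)

c = Fraction(1, 2)       # the point the limit approaches
eps = Fraction(1, 10)    # target epsilon
n = int(1 / eps)         # f(p/q) >= eps forces q <= n

# S: rationals in (0, 1) with denominator <= n, with c itself removed
S = {Fraction(p, q) for q in range(1, n + 1) for p in range(1, q)} - {c}
delta = min(abs(s - c) for s in S)  # least distance from c to a "bad" point

# Any rational x with 0 < |x - c| < delta lies outside S, so its denominator
# exceeds n and f(x) = 1/q < eps, matching the claimed limit of 0.
for x in (c + Fraction(1, 1000), c - Fraction(3, 997)):
    assert 0 < abs(x - c) < delta
    assert thomae(x) < eps
```

Exact rational arithmetic via `Fraction` avoids any floating-point fuzz in the comparisons, which matters when the quantities involved are this small.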

The first step in understanding the proof of this statement is to stop imagining limits and continuity as the same thing; that is, to resist making your first step imagining the graph of this function and, in a sense, zooming in until an answer can be deduced graphically. Do not be discouraged if this is how you thought about the problem; this method is a simplified explanation of limits commonly taught in elementary mathematics, so it would be ingrained in you anyway.

This proof demonstrates a method of mathematical proof that manipulates theorems, rather than numbers or variables, to form the epsilon-delta model, which in turn implies the limit's validity, i.e. the existence of the limit. It also shows how a limit proof is really an exercise in relating two easily malleable inequalities using valid theorems.

Proof that the limit equals 0

We assert that the definition of a limit holds by deriving each of its components. First, we may assume ε > 0, as the limit definition does. We also have the approaching number c, the limit L, and the function ƒ. From here, we will introduce another variable n related to epsilon, as depicted in the adjacent column.

$n > 1/\epsilon \implies \epsilon > 1/n$

Now, consider the set S consisting of every rational number greater than 0 and less than 1 whose denominator does not exceed n. These requirements are commonly depicted as the set shown to the left (although this particular depiction is more often used as an example of a set whose elements cannot equal one another).

In addition, we add the clause that if c is a rational number, it is not in this set either. The reason for this clause will come later.

From here, the fact that this set is finite is apparent from S's enumerated definition (the numerator, the denominator, and their combinations are all bounded). As a set, it also contains a list of distinct numbers. From here, you can find some number k with the following property; in other words, you can find the smallest distance from c to the set.

We will define the variable δ as the following. This implies the following relationship.
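The formulas here did not survive formatting; the standard choices are the minimum distance over the finite set and the resulting delta:

```latex
k = \min_{s \in S} |s - c| > 0, \qquad \delta = k, \qquad \text{so that } 0 < |x - c| < \delta \implies x \notin S.
```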

From here, you can see why the set cannot contain c. If it did, the delta relationship would be broken, and with it our proof. So we removed c from play, and as a result we will always have some non-zero δ.

The consequence of this definition is not just that |x − c| must be less than k. If x is irrational, then the method of deriving a delta from an epsilon value through a proxy variable n is valid, and the following limit interpretation is also valid.

Likewise, if x is rational, then x cannot be a member of the set S, by definition, and therefore the method of deriving a delta from an epsilon value through a proxy variable n is valid, and the following limit interpretation is also valid.

For the function g, $\lim_{x \to c} g(x)$ does not exist for any $c \in \mathbb{R}$.

Given $x \in \mathbb{R}$, let $x_n$ be any rational number in the interval $\left(x - \frac{1}{n}, x + \frac{1}{n}\right)$, and let $y_n$ be any irrational number in the same interval ($x_n$ and $y_n$ are guaranteed to exist by the density of the rationals and the irrationals). Then $|x_n - x| < \frac{1}{n}$ and $|y_n - x| < \frac{1}{n}$, so $(x_n), (y_n) \rightarrow x$. However, $(g(x_n)) = 1$ and $(g(y_n)) = 0$, so their limits are 1 and 0. Since these are not equal, $\lim_{y \rightarrow x} g(y)$ does not exist.

Here, we will cover more topics regarding limits. First, we will give a review of the nature of functions. Recall that a function from a set X to a set Y is a mapping $f : X \to Y$ such that $f(x)$ is a unique element of Y for every $x \in X$. In analysis, we tend to talk about functions from subsets $A \subseteq \mathbb{R}$ to $\mathbb{R}$.

The definition of the limit of a function is much the same as that for a sequence. In fact, as we will see later, it is possible to define functional limits in terms of sequential limits. For the moment, however, let us restate the definition of a limit for a function ƒ on a general domain:
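The definition itself did not survive formatting; for $f : A \to \mathbb{R}$ with $A \subseteq \mathbb{R}$, the standard statement is:

```latex
\lim_{x \to c} f(x) = L \iff \forall \varepsilon > 0,\ \exists \delta > 0 \text{ such that } \forall x \in A,\ 0 < |x - c| < \delta \implies |f(x) - L| < \varepsilon.
```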

One curious result of thinking of the real numbers as built upon the natural numbers and the like (as our section on numbers in this wikibook is structured) is that the definition of a limit, for which we have used the real-number version for real functions all this time, can be given in terms of sequential limits instead of axiomatically, as so:
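The statement itself did not survive formatting; the standard sequential characterization, consistent with the remark about $x_n \neq c$ below, is:

```latex
\lim_{x \to c} f(x) = L \iff \text{for every sequence } (x_n) \text{ with } x_n \neq c \text{ and } x_n \to c, \text{ we have } f(x_n) \to L.
```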

Note that the requirement $x_n \neq c$ corresponds to the requirement $|x - c| > 0$.

As an exercise to test your understanding, prove that these two definitions are equivalent. Note that taking the contrapositive gives a good criterion for determining whether or not a function diverges:
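The criterion itself did not survive formatting; its standard form is:

```latex
\text{If there exist sequences } (x_n), (y_n) \text{ with } x_n, y_n \neq c \text{ and } x_n, y_n \to c, \text{ but } \lim_{n \to \infty} f(x_n) \neq \lim_{n \to \infty} f(y_n), \text{ then } \lim_{x \to c} f(x) \text{ does not exist.}
```

This is exactly the criterion applied in the two sequence examples above.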