(last updated: 1:01:46 PM, October 05, 2020)

\(\large \S\) 2.3 - Infinite limits

You have now seen a variety of sequence theorems and are familiar with sequences converging to finite real numbers. Now we will look at some specific ways that sequences can diverge: in particular, sequences that go off to plus or minus infinity.

Diverging to \(\pm\infty\)

Definition 2.3.1 A sequence \(a_n\) diverges to \(+\infty\) (tends to \(+\infty\)) if and only if for any \(M > 0\), there exists \(n^* \in \mathbb N\) such that \(a_n > M\) for all \(n \geq n^*\). If this is the case, we say that the limit exists and we write \(\lim_{n\rightarrow\infty}a_n= +\infty\).

Note that one can generally write \(\infty\) in place of \(+\infty\).

Definition (not explicitly in text) A sequence \(a_n\) diverges to \(-\infty\) if and only if for any \(K > 0\), there exists \(n^* \in \mathbb N\) such that \(a_n < -K\) for all \(n \geq n^*\). If this is the case, we say that the limit exists and we write \(\lim_{n\rightarrow\infty}a_n= -\infty\).
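To make these definitions concrete, here is a small numerical sketch (not in the text). The helper function and the sample sequence \(a_n=n^2\) are my own illustrative choices; note that searching for the first index past \(M\) only yields a valid cut-off \(n^*\) for eventually increasing sequences like this one.

```python
def cutoff_for_divergence(a, M):
    """Find the first n with a(n) > M.  For an eventually increasing
    sequence this serves as the cut-off n* in Definition 2.3.1."""
    n = 1
    while a(n) <= M:
        n += 1
    return n

# Hypothetical example: a_n = n^2 diverges to +infinity.
a = lambda n: n ** 2
n_star = cutoff_for_divergence(a, M=10_000)
print(n_star)  # 101, since 101^2 = 10201 > 10000 while 100^2 = 10000 is not
```

For a genuine proof one exhibits \(n^*\) as a formula in \(M\) (here \(n^*=\lfloor\sqrt M\rfloor+1\)); the search above is only a sanity check.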

A note on existence of infinite limits: When \(\lim_{n\rightarrow\infty}a_n=\infty\), the limit doesn’t actually exist. “Existence” for us means that there is a real number that the expression refers to. You should think of “\(\lim_{n\rightarrow\infty}a_n=\infty\)” simply as a notation convention to mean, literally, the definition above, that \(a_n\) can be made eventually larger than any real number. I prefer to say that these infinite limits do not exist, but this is just a semantic issue, an issue with language, symbols, terminology, and meaning. You can think of it as “improper” use of the equals symbol.

We can always enlarge our mathematical universe to include objects that are not real numbers, though! This way, we can define relations and operations for these new objects. As an example you are probably familiar with, consider \(\sqrt{-1}=i\). Note that \(i\not\in\mathbb R\), so this is not a standard use of the “\(=\)” symbol. We are simply defining a new mathematical object that is not a real number, then defining its properties under the operations of addition, multiplication, etc., and working with it. In this way we create the complex number system \(\mathbb C\), of which \(\mathbb R\) is a subset.

Extended real numbers

Extended real numbers (not in text)
We can create an extension of the real number system called \(\overline{\mathbb R}=[-\infty,\infty]=\mathbb R\cup\{-\infty\}\cup\{\infty\}\). We say that the symbols \(-\infty\) and \(\infty\) are less than and greater than, respectively, every real number. If \(\lim_{n\rightarrow\infty}a_n=\infty\), then we can say that the limit exists in the extended real number system. We can even say that \(a_n\) “converges” to \(\infty\), since this symbol now refers to an actual mathematical object in our extended number system; convergence to the extended real numbers \(\pm\infty\) is defined separately as above, though, and not using the \(\epsilon\) definition. Now the limit truly is properly equal to this extended, infinite, mathematical object.

All of our theorems generalize to the extended real number system except in cases where indeterminate forms result, which are \(\infty-\infty\), \(\frac\infty\infty\), \(\frac00\), \(0\cdot\infty\), \(1^\infty\), \(0^0\), and \(\infty^0\) (of course we can create more complicated indeterminate forms too). Note that \(\infty+\infty=\infty\); \(x\cdot(\pm\infty)=\pm\infty\) for all \(x\in{\mathbb R}\) with \(x>0\); \(x\cdot(\pm\infty)=\mp\infty\) for all \(x\in{\mathbb R}\) with \(x<0\); in particular, \(\infty\cdot(\pm\infty)=\pm\infty\); \(\infty^r=\infty\) for any \(r\in\mathbb R\) with \(r>0\); \((-\infty)^n=\infty\) if \(n\in\mathbb N\) is even and \((-\infty)^n=-\infty\) if \(n\in\mathbb N\) is odd; and \(\infty^r=0\) for any \(r\in\mathbb R\) with \(r<0\).

In this way, you can do most arithmetic operations you are accustomed to with extended real numbers. This also makes taking limits easier. Consider that we now have the fact that \(\lim_{n\rightarrow\infty}n=\infty\in\overline{\mathbb R}\) so we can evaluate directly \(\lim_{n\rightarrow\infty}\frac1n=\frac1{\lim_{n\rightarrow\infty}n}=\frac1\infty=\infty^{-1}=0\). We now no longer have to even perform any kind of \(\epsilon\) argument whatsoever!
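Incidentally, floating-point arithmetic (IEEE 754, which Python's floats follow) implements essentially this extended-real arithmetic: the determinate operations above work out as expected, while indeterminate forms return nan ("not a number"). A quick demonstration:

```python
import math

inf = math.inf

# Determinate extended-real operations:
assert inf + inf == inf        # ∞ + ∞ = ∞
assert 3 * inf == inf          # x·(±∞) = ±∞ for x > 0
assert -2 * inf == -inf        # x·(±∞) = ∓∞ for x < 0
assert 1 / inf == 0.0          # ∞^(−1) = 0, as in the limit computed above
assert inf ** -2 == 0.0        # ∞^r = 0 for r < 0

# Indeterminate forms come back as nan:
assert math.isnan(inf - inf)   # ∞ − ∞
assert math.isnan(inf / inf)   # ∞/∞
assert math.isnan(0 * inf)     # 0·∞
```

This is only an analogy between number systems, of course, not a substitute for the limit theorems.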

There are many “exotic” number systems besides the extended real numbers with operations such as addition or multiplication defined in their own way. Abstract Algebra deals with some of these objects in the form of groups, rings, and fields. They don’t always involve real numbers either! You might also be interested in reading about the hyperreals, surreals, nonstandard arithmetic, or nonstandard analysis. Just be forewarned though, that can be a “rabbit hole!”

Sequence behavior

Note on Sequence behavior:

A sequence of real numbers is either convergent (to a finite real number) or divergent.

A divergent sequence either diverges to \(+\infty\) or \(-\infty\) or is oscillatory.

An oscillatory sequence can be bounded or unbounded.

Examples

\(a_n=(-1)^n\) diverges but does not diverge to \(\pm\infty\). It is an oscillatory sequence.

\(a_n=(-n)^n\) also diverges and oscillates. Its oscillations become increasingly large, though, so it does not diverge to \(\pm\infty\). It is an oscillatory sequence.

\(a_n=n+(-1)^n\) diverges to \(+\infty\). This sequence exhibits oscillatory-like behavior, but it is not technically oscillatory by the terminology in the textbook, which defines a divergent sequence as oscillatory only when it does not diverge to \(\pm\infty\).\(\square\)

Example (dense sequence of all rationals)

We discussed that the rational numbers are countable and dense in \(\mathbb R\). Again, they are dense because for any real number \(x\in\mathbb R\) and any \(\epsilon>0\), there are infinitely many rational numbers in the interval \((x-\epsilon,x+\epsilon)\). Since the rational numbers are countable, we can arrange them in an ordered list: \(\mathbb Q=\{q_1,q_2,q_3,\ldots\}\). This defines a sequence \(\{a_n\}\) by letting \(a_n=q_n\) for all \(n\). This sequence is oscillatory, and no matter what cut-off \(n^*\) you choose, for any \(r\in\mathbb R\) and any \(\epsilon>0\) there are infinitely many \(a_n\in(r-\epsilon,r+\epsilon)\) with \(n\geq n^*\). \(\square\)

Example (dyadic rationals) Here is an example of another dense sequence that gets arbitrarily close to every real number infinitely-many times! It is sort of complicated, but not too bad. I don’t give a proof for this example here, but an online search for the “dyadic rationals” will give you more information.


Consider the set of dyadic rationals \(D=\left\{ \frac{2m+1}{2^n} \mid n\in\mathbb N, m\in\mathbb Z\right\}\). This is the set of rational numbers whose denominator, in lowest terms, is a power of \(2\). This set is dense in \(\mathbb R\), which means that given any real number \(r\in\mathbb R\) and any \(\epsilon>0\) there are infinitely many \(x\in D\) such that \(|x-r|<\epsilon\). We’ll define an interesting oscillatory sequence.

Here is a grid with countably many copies of \(\mathbb Z\) laid out vertically, indexed by \(n\in\mathbb N\) (the entry \(m\) in column \(n\) represents the dyadic rational \(\frac{2m+1}{2^n}\)): \[ \begin{matrix} n\in\mathbb N & 1 & 2 & 3 & \cdots \\\hline & \vdots & \vdots & \vdots & \! .^{\Large.^{\LARGE .}} \\ & -2 & -2 & -2 & \cdots \\ & -1 & -1 & -1 & \cdots \\ m\in\mathbb Z & 0 & 0 & 0 & \cdots \\ & 1 & 1 & 1 & \cdots \\ & 2 & 2 & 2 & \cdots \\ & 3 & 3 & 3 & \cdots \\ & \vdots & \vdots & \vdots & \ddots \\ \end{matrix} \] We can create a sequence \(a_n\) by starting on the left and tracing out a path through the entire grid in a sort of half-spiraling-out path: \[ \begin{matrix} n\in\mathbb N & 1 & 2 & 3 & \cdots \\\hline & \vdots & \vdots & \vdots & \! .^{\Large.^{\LARGE .}} \\ & -2^{\swarrow^{a_{18}}} & -2^{\swarrow^{a_{17}}} & -2^{\swarrow^{a_{16}}} & \cdots \\ & -1^{\swarrow^{a_3}} & -1^{\swarrow^{a_4}} & -1^{\swarrow^{a_{15}}} & \cdots \\ m\in\mathbb Z & 0^{\swarrow^{a_2}} & 0^{\swarrow^{a_5}} & 0^{\swarrow^{a_{14}}} & \cdots \\ & 1^{\swarrow^{a_1}} & 1^{\swarrow^{a_6}} & 1^{\swarrow^{a_{13}}} & \cdots \\ & 2^{\swarrow^{a_8}} & 2^{\swarrow^{a_7}} & 2^{\swarrow^{a_{12}}} & \cdots \\ & 3^{\swarrow^{a_9}} & 3^{\swarrow^{a_{10}}} & 3^{\swarrow^{a_{11}}} & \cdots \\ & \vdots & \vdots & \vdots & \ddots \\ \end{matrix} \] There are countless other options for tracing out the entire set of dyadic rationals. Since every cut-off \(n^*\) eliminates only finitely many terms, we know there are infinitely many terms in the future that come arbitrarily close to every real number. This sequence definitely does not converge, nor does it diverge to \(\pm\infty\). It is oscillatory. \(\square\)
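Although we skip the proof of density, it is easy to experiment with: given any \(r\) and \(\epsilon>0\), the sketch below (my own construction, not from the text) produces a dyadic rational \(\frac{2m+1}{2^n}\) within \(\epsilon\) of \(r\), by taking the odd integer nearest \(r\cdot2^n\) for a large enough \(n\).

```python
import math

def dyadic_near(r, eps):
    """Construct a dyadic rational (2m+1)/2^n with |x - r| < eps.
    The nearest odd integer to r*2^n is within 1 of it, so choosing
    n with 2^n > 1/eps makes the error at most 1/2^n < eps."""
    n = max(1, math.ceil(math.log2(1 / eps)) + 1)
    m = round((r * 2 ** n - 1) / 2)   # makes 2m+1 the odd integer nearest r*2^n
    return (2 * m + 1) / 2 ** n

x = dyadic_near(math.pi, 1e-6)
assert abs(x - math.pi) < 1e-6
```

Running the construction for different \(n\) produces infinitely many distinct dyadics near \(r\), which is exactly the density claim.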

Comparison

Theorem 2.3.2 (Comparison Theorem) If a sequence \(a_n\) diverges to \(+\infty\) and [there exists an \(n_1\in\mathbb N\) such that] \(a_n \leq b_n\) for all \(n \geq n_1\), then the sequence \(b_n\) must also diverge to \(+\infty\).


Scratchwork: We want \(b_n>M\) for an arbitrary \(M>0\). We know we can make \(a_n>M\) for sufficiently large \(n\) (find a cut-off for this). We know that \(b_n\geq a_n\) for sufficiently large \(n\) (cut-off given in theorem statement, \(n_1\)). Then make sure \(n\) is beyond both of these cut-off values, and we will have \(M<a_n\) and \(a_n\leq b_n\). This gives us \(b_n>M\) as desired.

Now we write up the official proof.

Proof. Let \(M>0\). Choose an \(n_2\) such that \(a_n>M\) for all \(n\geq n_2\). We also know that \(a_n\leq b_n\) for all \(n\geq n_1\). Now let \(n^*=\max\{n_1,n_2\}\). This gives us that for any \(n\geq n^*\) we have \(M<a_n\leq b_n\).

Thus for any arbitrary \(M>0\), we can find an \(n^*\) such that \(b_n>M\) for all \(n\geq n^*\). Thus \(b_n\) diverges to \(+\infty\). \(\blacksquare\)

The above theorem can also be rewritten for divergence to \(-\infty\) and the proof is nearly identical.

Example: Examine the sequence: \(a_n=\sqrt{n+\sqrt{n+\sqrt{n+\sqrt{n}}}}\)

Convince yourself that \(a_n>\sqrt n\) for all \(n\) and that \(\sqrt n\rightarrow\infty\), thus by a comparison we have that \(a_n\rightarrow\infty\). \(\square\)
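A quick numerical look at this comparison (not in the text; the function a below just encodes the nested radical):

```python
import math

def a(n):
    # a_n = sqrt(n + sqrt(n + sqrt(n + sqrt(n))))
    return math.sqrt(n + math.sqrt(n + math.sqrt(n + math.sqrt(n))))

# a_n > sqrt(n) for every n, and sqrt(n) -> infinity,
# so by the Comparison Theorem a_n -> infinity as well.
for n in [1, 10, 100, 10_000]:
    assert a(n) > math.sqrt(n)
```

Of course the comparison \(a_n>\sqrt n\) holds because the innermost \(\sqrt n\) only adds a positive quantity under the outer radical; the check above merely confirms it at a few indices.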

Combining sequences when one diverges to \(+\infty\)

Theorem 2.3.3 If a sequence \(a_n\) diverges to \(\infty\) and a sequence \(b_n\) is bounded below by \(K\), then
  1. \(a_n+b_n\) diverges to \(\infty\)
  2. \(a_nb_n\) diverges to \(\infty\), provided that \(0<K\)
  3. \(ca_n\) diverges to \(\infty\) for all positive constants \(c\)
  4. \(ca_n\) diverges to \(-\infty\) for all negative constants \(c\)


Proof.
  1. Let \(M>0\). We want \(a_n+b_n>M\). We know that \(b_n\geq K\), so it suffices to make \(a_n>M+|K|\). Pick \(n_1\) such that \(n\geq n_1\) implies \(a_n>M+|K|\). Then \(a_n+b_n>M+|K|+K\). Note that \(K\) is not specified to be positive or negative, so \(|K|+K\) equals \(0\) if \(K<0\) and \(2K\) if \(K\geq0\); either way, \(|K|+K\geq0\), thus \(M+|K|+K\geq M\).
    So we have made \(a_n+b_n>M\) for an arbitrary \(M>0\), as desired. Thus \(a_n+b_n\rightarrow\infty\).
  2. Let \(M>0\). We want \(a_nb_n>M\). Given that \(0<K\), we know that \(\frac MK>0\), so we can choose an \(n^*\) such that for all \(n\geq n^*\) we have \(a_n>\frac MK\). Furthermore, \(b_n\geq K>0\) for all \(n\). Thus for all \(n\geq n^*\) we have \(a_n b_n>\frac MK\cdot K=M\), thus \(a_nb_n\rightarrow\infty\). \(\blacksquare\)

Proofs of (3) and (4) are left for you to try!

Theorem 2.3.6 Consider a sequence \(a_n\), where \(a_n>0\), for all \(n\). Then \(a_n\) diverges to \(\infty\) if and only if the sequence \(\frac1{a_n}\) converges to zero.


Proof. Let \(\epsilon>0\) and assume that \(a_n\rightarrow\infty\). Then we can find \(n^*\) such that \(n\geq n^*\) implies that \(a_n>\frac1\epsilon\). Note that this implies that \(\frac1{a_n}<\epsilon\), hence \(\frac1{a_n}\) converges to zero.

Now assume that \(\frac1{a_n}\) converges to zero and let \(M>0\). Then we can find an \(n^*\) such that for all \(n\geq n^*\) we have that \(\frac1{a_n}<\frac1M\). This gives us that \(a_n>M\) showing that \(a_n\) diverges to \(+\infty\). \(\blacksquare\)
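Here is Theorem 2.3.6 in action for a sample sequence (a sketch, not in the text; the choice \(a_n=n^2\) is arbitrary). Since \(a_n\rightarrow\infty\), the proof's cut-off for \(\frac1{a_n}<\epsilon\) is any \(n^*\) with \(a_n>\frac1\epsilon\), and for \(a_n=n^2\) we can take \(n^*=\lfloor\sqrt{1/\epsilon}\rfloor+1\):

```python
import math

# For a_n = n^2 and a given eps > 0, any n* with (n*)^2 > 1/eps works:
eps = 1e-4
n_star = math.floor(math.sqrt(1 / eps)) + 1
for n in range(n_star, n_star + 1000):
    assert 1 / n ** 2 < eps     # |1/a_n - 0| < eps for all n >= n*
print(n_star)  # 101
```

The finite loop only spot-checks a window of indices; the inequality \(n\geq n^*\Rightarrow n^2>\frac1\epsilon\) is what the proof actually establishes.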

Rates of growth

Now that we have discussed sequences diverging to infinity, we can see a few general points about rates of growth. This is something you should have seen in Calculus II when discussing infinite series.

Definition Suppose sequences \(a_n\) and \(b_n\) both diverge to \(+\infty\). We say that \(a_n\) is of higher order than \(b_n\) if \(\frac{a_n}{b_n}\) diverges to \(+\infty\) (or equivalently that \(b_n\) is of lower order than \(a_n\)). We say that \(a_n\) and \(b_n\) are of the same order if \(\frac{a_n}{b_n}\) converges to a nonzero (finite) real number.

Note that \(\frac{a_n}{b_n}\rightarrow0\) means that \(b_n\) is of higher order than \(a_n\).

The above definition tells us that \(a_n\) diverges to infinity much faster than \(b_n\) when \(a_n\) is of higher order. We will also say this like “\(a_n\) grows faster than \(b_n\).” When two sequences are of the same order, they diverge to infinity equally fast in some sense. This doesn’t mean they are equal though! Note that the above definition is a much stronger statement than \(a_n>b_n\).

Consider the reasoning that \(\frac{a_n}{b_n}\rightarrow L\) means that, eventually, \(a_n\approx L\cdot b_n\), so that the long-term behavior of the sequences is that they differ only by a multiplicative factor (approximately). In fact, given any \(\epsilon>0\), we could choose an \(n^*\) such that \((L-\epsilon)\cdot b_n \leq a_n\leq (L+\epsilon)\cdot b_n\) is true for all \(n\geq n^*\). This is really what we mean by \(a_n\approx L\cdot b_n\), technically.

Technically, \(\frac{a_n}{b_n}\rightarrow \infty\) means that for any \(M>0\), we can make \(a_n>M\cdot b_n\) true for all sufficiently large \(n\).

Examples:

Here are some examples given without proof:

  1. \(a_n=n^2\) grows faster than \(b_n=n\).

  2. \(a_n=\sqrt n\) grows faster than \(b_n=\sqrt[k]n\) for any \(k>2\). Recall that “larger roots” are closer to one.

  3. \(a_n=a^n\) grows faster than \(b_n=n^r\) for any \(a>1,r>0\). Exponential growth always beats out any power function.

  4. Here is one with logarithms, but note that we haven’t technically covered logarithms yet, so we don’t know what they mean. I know you have seen them in other classes, so I provide them anyways: given \(b>1\) and \(r>0\), \(a_n=n^r\) grows faster than \(b_n=\log_b(n)\).

  5. Clearly, \(a_n=n^n\) grows faster than any of the other examples given here.

  6. Factorial growth, \(a_n=n!=n(n-1)(n-2)\cdots1\) is very fast, but is also beaten by \(n^n\).
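These orderings can be spot-checked numerically (a rough illustration, not a proof; the index \(n=50\) is an arbitrary choice of mine):

```python
import math

n = 50  # a single large-ish index for a spot check

# Each ratio below is huge, reflecting the order hierarchy
# log n  <<  n^r  <<  a^n  <<  n!  <<  n^n   (for a > 1, r > 0):
assert n ** 2 / math.log(n) > 100
assert 2 ** n / n ** 2 > 1e9
assert math.factorial(n) / 2 ** n > 1e9
assert n ** n / math.factorial(n) > 1e9
```

A single index proves nothing about limits, of course, but computing these ratios for growing \(n\) is a good way to build intuition for which sequence is of higher order.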

Ratio test

Theorem 2.3.7 (Ratio test) Consider a sequence \(a_n\) of nonzero terms such that \(\displaystyle\lim_{n\rightarrow\infty} \left|\frac{a_{n+1}}{a_n}\right|=\alpha\), for \(\alpha\) a constant.
  1. If \(\alpha <1\), then \(\displaystyle\lim_{n\rightarrow\infty} a_n = 0\).
  2. If \(\alpha > 1\), then \(\displaystyle\lim_{n\rightarrow\infty} |a_n| = +\infty\). (The sequence \(|a_n|\) diverges to infinity, but \(a_n\) itself may diverge to \(+\infty\), to \(-\infty\), or oscillate unboundedly.)
  3. If \(\alpha = 1\), then \(a_n\) may converge, diverge to plus or minus infinity, or oscillate.

The proof of Theorem 2.3.7 (a) and (b) is in the text. You are advised to read it carefully! There is an exercise in the text for proving (c). This theorem will be useful later when we discuss convergence of infinite series/summations.
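As a sanity check (not in the text), consider the sample sequence \(a_n=\frac{5^n}{n!}\): here \(\left|\frac{a_{n+1}}{a_n}\right|=\frac5{n+1}\rightarrow0<1\), so part (1) of the Ratio Test forces \(a_n\rightarrow0\), and we can watch this happen numerically:

```python
import math

def a(n):
    # Sample sequence a_n = 5^n / n!
    return 5 ** n / math.factorial(n)

# The ratio a_{n+1}/a_n equals 5/(n+1), which tends to 0 < 1:
assert abs(a(11) / a(10) - 5 / 11) < 1e-12

# ...so the ratio test predicts a_n -> 0, and indeed the terms collapse:
assert a(100) < 1e-40
```

The early terms of this sequence actually grow (until \(n\approx5\)) before the factorial takes over, which is a good reminder that the ratio test is a statement about the tail of the sequence.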

Big O notation

Definition 2.3.9 Suppose that \(a_n\) converges to \(A\), \(b_n\) converges to \(0\), with \(a_n\neq0\) and \(b_n\neq0\) for any \(n\in\mathbb N\). Then \(a_n\) converges to \(A\) with a rate of convergence \(O(b_n)\) if and only if \(|a_n-A|<K\cdot |b_n|\) for sufficiently large values of \(n\) and some positive constant \(K\). We write \(a_n=A+O(b_n)\) and say that \(a_n\) converges to \(A\) at approximately the same rate as \(b_n\) converges to its limit; in short, \(a_n\) is “big oh” of \(b_n\).

Big oh notation, \(a_n=O(b_n)\), with both sequences converging to zero essentially tells us that these two sequences are converging to zero at the same rate. We can also use this notation for sequences diverging to infinity though: \(7-10n+3n^2=O(n^2)\) since \(|7-10n+3n^2|<n^2+n^2+3n^2=5n^2\) for all \(n\) sufficiently large.

Note that if \(f(n)=O(g(n))\), then \(f(n)=O(c\cdot g(n))\) for any nonzero constant \(c\in\mathbb R\).

Cutting off terms from a sum with big O (not in text) If we have a summation of terms \(\displaystyle\sum_{k=1}^\infty f_k(n)=f_1(n)+f_2(n)+f_3(n)+\cdots\) such that \(f_k(n)\rightarrow0\) for each \(k\), arranged by order, i.e. so that \(f_{k+1}(n)\) decays to zero faster than \(f_k(n)\), then we can cut off the summation at any point (say at \(k=m-1\)) and write \[\displaystyle\sum_{k=1}^\infty f_k(n)=\displaystyle\sum_{k=1}^{m-1} f_k(n)+O(f_m(n)).\] Note that this is a “true equality” and not an approximation. The “chopped-off terms” satisfy \[f_m(n)+f_{m+1}(n)+\cdots=O(f_m(n))\] because \(f_m(n)\) decays to zero the slowest of them all.

Example Prove that \(2+\frac5n-\frac3{\sqrt n} = 2+O(\frac1{\sqrt n})\).

Proof. Note that \(n>\sqrt n\) for \(n>1\), thus \(\frac1n<\frac1{\sqrt n}\). So we have \(\left|\frac5n-\frac3{\sqrt n}\right|\leq \frac5n+\frac3{\sqrt n}\leq \frac5{\sqrt n}+\frac3{\sqrt n}=8\cdot\frac1{\sqrt n}\). So \(\frac5n-\frac3{\sqrt n} = O(\frac1{\sqrt n})\), and therefore \(2+\frac5n-\frac3{\sqrt n} = 2+O(\frac1{\sqrt n})\). \(\blacksquare\)
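The constant \(K=8\) from this proof can be checked numerically (an illustration of the bound, not a substitute for the proof):

```python
import math

# Check |5/n - 3/sqrt(n)| <= 8/sqrt(n) over a window of indices n > 1:
for n in range(2, 10_000):
    assert abs(5 / n - 3 / math.sqrt(n)) <= 8 / math.sqrt(n)
```

Any constant \(K\geq8\) works equally well here; big-O statements never care about the particular constant, only that some positive constant exists.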

Examples

  1. \(a_n=\frac3n+5=5+O(\frac1n)\).

  2. \(1+\frac2n+\frac3{n^2}+\frac4{n^3} = 1+ O(\frac1{n})\): because \(\frac3{n^2}\) and \(\frac4{n^3}\) decay much faster than \(\frac2n\), we only keep the dominant term in the long run. It is also true that \(1+\frac2n+\frac3{n^2}+\frac4{n^3} = 1+ \frac2n+O(\frac1{n^2})\). So we have some flexibility in where we cut things off.

  3. \(1+\frac1{n}+\frac1{n^2} +\frac1{n^3} +\cdots +\frac1{n^k}= 1+O(\frac1{n})\) as long as \(k\geq1\) because all terms decay faster than \(\frac1n\).

  4. Note that we can also say that \(1+\frac1n+\frac1{n^2} +\frac1{n^3} +\cdots +\frac1{n^k}= 1+\frac1n+O(\frac1{n^2})\) so long as \(k\geq2\). Sometimes you may wish to keep other terms and only apply big O to the rest.

  5. \(1+\frac1n+\frac1{\sqrt n} = 1+O(\frac1{\sqrt n})\) because \(\frac1{\sqrt n}\) is the slowest decaying term.

\[ \diamond \S \diamond \]