(last updated: 3:41:39 PM, October 06, 2020)

\(\large \S\) 2.4 - Monotone sequences

You have now seen a variety of sequence theorems and are familiar with sequences converging to finite real numbers. Now we will look at some specific ways that sequences can diverge. In particular, we will consider sequences that go off to plus or minus infinity.

Increasing and decreasing sequences

Definition 2.4.1 A sequence \(a_n\) is said to be

  1. increasing (or nondecreasing) if and only if for all \(n,m\in\mathbb N\) with \(n<m\), we have \(a_n \leq a_m\).
  2. eventually increasing if and only if there exists \(n^*\in\mathbb N\) such that for all \(n,m\in\mathbb N\) with \(n^*\leq n<m\), we have \(a_n \leq a_m\).
  3. strictly increasing if and only if for all \(n,m\in\mathbb N\) with \(n<m\), we have \(a_n<a_m\).
  4. eventually strictly increasing if and only if there exists \(n^*\in\mathbb N\) such that for all \(n,m\in\mathbb N\) with \(n^*\leq n<m\), we have \(a_n < a_m\).

Note that we define the same terms for decreasing sequences with inequalities appropriately flipped.

The statements in the following remark should be readily understood, but they are stated in order to call attention to them as different ways you can show that a sequence is increasing (a short worked example follows the remark). Of course, you can work out similar statements for decreasing sequences too.

Remark 2.4.3 Observe that the following are all equivalent statements for a sequence \(a_n\), where \(a_n>0\) for all \(n\).
  1. \(a_n\) is increasing.
  2. \(a_n\leq a_{n+1}\) for all \(n\).
  3. \(\frac{a_{n+1}}{a_n}\geq1\) for all \(n\). (Note that this is the only case that actually requires \(a_n>0\) for all \(n\).)
  4. \(a_{n+1}-a_n \geq 0\) for all \(n\).
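For instance (a small illustration not in the text), the sequence \(a_n=\frac{n}{n+1}\) is increasing by criterion 4, since

\[ a_{n+1}-a_n=\frac{n+1}{n+2}-\frac{n}{n+1}=\frac{(n+1)^2-n(n+2)}{(n+1)(n+2)}=\frac{1}{(n+1)(n+2)}>0 \quad\text{for all } n. \]

Criterion 3 works just as well here: \(\frac{a_{n+1}}{a_n}=\frac{(n+1)^2}{n(n+2)}=\frac{n^2+2n+1}{n^2+2n}>1\).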

A note on “taking limits of formulas”

Assume that \(a_n\rightarrow A\). We have already discussed that removing a finite number of terms does not affect the long term behavior. Thus \(a_{n+k}\rightarrow A\) as \(n\rightarrow\infty\) for any fixed constant \(k\in\mathbb N\) as well. This \(a_{n+k}\) is identical to \(a_n\) except that the first \(k\) terms have been discarded. It is like a “shifted” version of \(a_n\).

Furthermore we know that \(a_n\rightarrow A\in\mathbb R\) also implies that \(a_n^k\rightarrow A^k\) for a fixed exponent \(k\in\mathbb N\), and that \(a_n^{\frac1k}\rightarrow A^{\frac1k}\) (provided the roots are defined, e.g. \(a_n\geq0\) and \(A\geq0\) when \(k\) is even). Of course we can also multiply by constants or add constants, and we know how the limiting behavior is determined.

Example:

Assume \(a_n\rightarrow A\in\mathbb R\) with \(A\neq0\) and \(A>-1\). Find the limit of \(b_n=\sqrt{a_n+1}+a_{n+2}-\frac1{(a_{n+5})^2}\). We know that \(b_n\rightarrow\sqrt{A+1}+A-\frac1{A^2}\).
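As a quick numerical illustration (a minimal sketch not in the text, using the concrete choice \(a_n=1+\frac1n\), so \(A=1\) and the predicted limit is \(\sqrt{A+1}+A-\frac1{A^2}=\sqrt2\)):

```python
import math

def a(n):
    # a concrete sequence with a_n -> A = 1 (chosen only for illustration)
    return 1 + 1 / n

def b(n):
    # b_n = sqrt(a_n + 1) + a_{n+2} - 1/(a_{n+5})^2 from the example above
    return math.sqrt(a(n) + 1) + a(n + 2) - 1 / a(n + 5) ** 2

for n in (10, 1_000, 100_000):
    print(n, b(n))

print("predicted limit sqrt(2):", math.sqrt(2))
```

The printed values approach \(\sqrt2\approx1.4142\), matching the formula above.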

Passing the limit through operations in a formula is related to the idea of that operation being continuous (which we will rigorously develop in the next chapter). For now you can do this with polynomials and ratios of polynomials (as long as you do not generate indeterminate forms or zeros in a denominator). You can actually do this with rational and irrational exponents now as well, since I have defined what that means in my supplementary notes on exponents. Again, you need to watch out for things like getting a negative underneath an even root, or flipping an inequality when multiplying or dividing by a negative number.

Recursive formulas

You will see recursively defined sequences often: \(a_{n+1}=f(a_n)\) is a common type. For example, consider \(a_{n+1}=(a_n-2)^2\) with \(a_1\) unknown. If this sequence is to converge to a real number \(A\), then we know that \(A=(A-2)^2\). Solving this gives \(A=1\) or \(A=4\). This does not guarantee convergence by any means, though!

Consider the case \(a_1=0\): then \(a_2=4\) and \(a_3=4\), so we can see that \(a_n=4\) for all \(n\geq2\). So for this choice of \(a_1\), the sequence does indeed converge to \(A=4\).

Try \(a_1=1\): then \(a_n=1\) for all \(n\), so the sequence converges to \(A=1\).

Try \(a_1=5\): then \(a_2=9\), \(a_3=49\), \(a_4=47^2\), etc. This sequence seems to grow without bound! That would need to be proved, though. What we do know is that the sequence can only converge to \(A=1\) or \(A=4\); no other limits are possible. It could still diverge to \(+\infty\) or even oscillate, and we cannot rule out any possibility without careful proof.
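Here is a minimal numerical sketch (not in the text) that simply iterates the recursion for the three starting values discussed above:

```python
def step(a):
    # one step of the recursion a_{n+1} = (a_n - 2)^2
    return (a - 2) ** 2

for a1 in (0, 1, 5):
    terms = [a1]
    for _ in range(4):          # compute a_1 through a_5
        terms.append(step(terms[-1]))
    print(f"a_1 = {a1}: {terms}")
```

The output is consistent with the discussion: the first two runs settle at \(4\) and \(1\), while the third grows rapidly.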

The next theorem gives us a condition for knowing that convergence (or divergence) is certain.

Monotone convergence theorem

Theorem 2.4.4 (Monotone convergence theorem) For an increasing sequence \(a_n\), there are two possibilities:
  1. \(a_n\) is bounded above by a constant \(M\), in which case there exists \(L\leq M\) such that \(a_n\rightarrow L\);
  2. \(a_n\) is unbounded, in which case \(a_n\) diverges to \(\infty\).


Proof of part 1. Let \(S=\{a_n\mid n\in\mathbb N\}\), which is a set of real numbers bounded above by \(M\). Recall that by Definition 1.7.6, Completeness of \(\mathbb R\), we have that \(\sup S\in\mathbb R\). Let \(A=\sup S\). Now by Theorem 1.7.7, the approximation property of suprema, for any \(\epsilon>0\) we can find an \(s\in S\) such that \(A-\epsilon<s\). However, we know that \(s=a_m\) for some \(m\in\mathbb N\). We also know that for any \(n>m\) we have \(a_n\geq a_m\) since \(a_n\) is increasing. Now let \(n^*=m\). Then for any \(n\geq n^*\) we have \(a_n\geq a_{n^*}\). Furthermore \(s=a_{n^*}\), thus \(A-\epsilon<a_{n^*}\). This gives \(A-a_n\leq A-a_{n^*}<\epsilon\) for all \(n\geq n^*\). Finally, since \(A=\sup S\) guarantees \(a_n\leq A\), this tells us that \(|a_n-A|=A-a_n<\epsilon\). Thus \(a_n\rightarrow A\), and \(L=A\leq M\) since \(M\) is an upper bound for \(S\).

Proof of part 2. We are given that \(a_n\) is unbounded, which means that there is no real number \(M\) such that \(|a_n|\leq M\) for all \(n\). In fact, since \(a_n\) is increasing, we know that \(a_1\) is a lower bound. We also know that \(a_n\) is eventually positive, since otherwise zero would be an upper bound (making the sequence bounded). So given any \(M>0\) there exists an \(m\in\mathbb N\) such that \(a_m > M\); otherwise the sequence would be bounded below by \(a_1\) and bounded above by \(M>0\), and hence bounded. Now let \(n^*=m\). Since the sequence is increasing, we know that \(M<a_{n^*}=a_m\leq a_n\) for all \(n\geq n^*\). Hence \(a_n\) diverges to \(+\infty\). \(\blacksquare\)

Remark (not in text) A sequence that is increasing and bounded from above converges. A sequence that is decreasing and bounded from below converges.

Here is the general process for monotone sequences, assuming the sequence is increasing; the same steps work for a decreasing sequence with inequalities appropriately reversed. (A short numerical sketch follows the list.)

  1. Show it is increasing using induction. Show that \(a_1\leq a_2\), and then show that \(a_{n-1}\leq a_n\) implies that \(a_{n}\leq a_{n+1}\).

  2. Show it is bounded from above. Show that \(a_1\leq M\) for some \(M\), and then show that \(a_n\leq M\) implies that \(a_{n+1}\leq M\). Thus by induction the entire sequence is bounded above by \(M\).

  3. Since it is increasing and bounded from above we know it converges by the monotone convergence theorem. Assume \(a_n\rightarrow A\) and “take the limit” of the recursive formula and solve for \(A\).
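Before carrying out these steps, it often helps to iterate the recursion numerically in order to guess a plausible bound and limit. Here is a minimal sketch (not in the text); the lambda below encodes the recursion from the example that follows:

```python
import math

def iterate(f, a1, n_terms=10):
    """Return the first n_terms of the sequence a_1 = a1, a_{n+1} = f(a_n)."""
    terms = [a1]
    for _ in range(n_terms - 1):
        terms.append(f(terms[-1]))
    return terms

# the recursion from the next example: a_1 = 0, a_{n+1} = sqrt(a_n + 2) + 3
terms = iterate(lambda a: math.sqrt(a + 2) + 3, 0.0)
print([round(t, 4) for t in terms])
```

The printed terms increase and appear to level off just below \(5.8\), which suggests looking for an upper bound somewhere above that value.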

Example Consider the sequence defined recursively by \(a_1=0\) and \(a_{n+1}=\sqrt{a_n+2}+3\) for \(n\geq1\). Prove convergence, and find the limit.


Proof.

Show it is increasing:

We have that \(a_2=\sqrt{2}+3>0=a_1\). Now assume that \(a_n>a_{n-1}\). Therefore \(a_{n+1}=\sqrt{a_n+2}+3>\sqrt{a_{n-1}+2}+3=a_n\). We have shown that \(a_n>a_{n-1}\) implies \(a_{n+1}>a_n\). Thus, by induction, \(a_{n+1}>a_n\) for all \(n\), so this sequence is in fact strictly increasing.

Show it is bounded from above:

We need to find a good bound. We have \(a_1=0\) and \(a_2=\sqrt{2}+3<5\). Let’s try \(a_n\leq 5\): if \(a_n\leq5\) then \(a_{n+1}=\sqrt{a_n+2}+3 \leq\sqrt7+3\approx 5.65\), so that doesn’t work. Maybe we can make the square root come out nicely. Try \(a_n\leq7\), because \(7+2=9\) has a nice square root and \(\sqrt9+3=6<7\). Indeed, \(a_n\leq7\) implies that \(a_{n+1}=\sqrt{a_n+2}+3 \leq\sqrt9+3=6 \leq 7\). Since \(a_1=0\leq7\), induction shows that \(a_n\) is bounded from above by 7 for all \(n\).

Now since this sequence is increasing and bounded from above, we know it converges by the monotone convergence theorem.

Find the limit:

Since \(a_n\rightarrow A\in\mathbb R\), we can take the limit of both sides of the recursive formula to get \(A=\sqrt{A+2}+3\). Now solve for \(A\): \((A-3)^2=A+2\) so \(A^2-6A+9=A+2\) and \(A^2-7A+7=0\). Thus \(A=\frac{7\pm\sqrt{49-4\cdot7}}{2}=\frac72\pm\frac{\sqrt{21}}{2}\approx1.208712,5.791288\).

We know that \(a_2>3\) and the sequence is increasing, so \(A\geq a_2>3\); hence \(A\neq\frac72-\frac{\sqrt{21}}{2}\approx 1.208712\). Thus \(A=\frac72+\frac{\sqrt{21}}{2}\).

So we have proved that \(a_n\rightarrow \frac72+\frac{\sqrt{21}}{2}\). \(\square\)

Example Consider the sequence defined recursively by \(a_1\in(0,1)\) and \(a_{n+1}=1-\sqrt{1-a_n}\) for \(n\geq1\). Consider all cases, prove convergence, and find the limit.

I present two different proofs. The first uses the monotone convergence theorem and no \(\epsilon\)-based arguments; the second is a direct proof using an \(\epsilon\)-argument.


Proof 1. Note that \(x\in(0,1)\) implies that \(1-x\in(0,1)\). Also recall that \(x\in(0,1)\) implies that \(0<x<\sqrt x<1\). Rearrange the recursive formula to see that \(1-a_{n+1}=\sqrt{1-a_n}\). Since \(a_1\in(0,1)\), an easy induction shows that \(a_n\in(0,1)\), and hence \(1-a_n\in(0,1)\), for all \(n\). This gives us \(1-a_n< \sqrt{1-a_n}=1-a_{n+1}\). In this way we can see that \(1-a_n\) is an increasing sequence, which tells us that \(a_n\) is a decreasing sequence. We also have \(a_n\in(0,1)\) for all \(n\), so it is a bounded sequence. Thus \(a_n\) converges by the monotone convergence theorem. Let \(a_n\rightarrow A\).

Note that the recursive definition \(a_{n+1}=1-\sqrt{1-a_n}\) is a relationship between the sequence \(b_n=a_{n+1}\) for \(n\geq1\) and \(a_n\). The sequence \(b_n\) is identical to \(a_n\) except that it starts one term in “the future.” Since \(a_n\rightarrow A\), we thus know that \(b_n\rightarrow A\); we discussed in the past that discarding a finite number of terms of a sequence does not impact its convergence behavior. Taking the limit of both sides of the equation \(b_n=1-\sqrt{1-a_n}\) then gives \(A=1-\sqrt{1-A}\). So the limit \(A\) must satisfy this equation! Solving for \(A\) gives \(1-A=\sqrt{1-A}\). Note that we already know that \(A\in[0,1]\); since \(a_n\in(0,1)\) for all \(n\) it cannot possibly converge to a negative number or to something beyond \(1\). Thus we get \((1-A)^2=1-A \ \Rightarrow \ (1-A)^2-(1-A)=0 \ \Rightarrow \ (1-A)((1-A)-1)=0 \ \Rightarrow \ (1-A)(-A)=0,\) so \(A=0\) and \(A=1\) are the only possibilities. Note that you still need to be extremely careful even when solving simple algebra equations like this! We know that the sequence is decreasing with \(a_1<1\), so \(A\leq a_1<1\) and it cannot possibly converge to \(1\). Hence \(a_n\rightarrow0\). \(\square\)


Proof 2. Let \(b_n=1-a_n\) for all \(n\). We will examine the sequence \(b_n\) instead.

Also note that \(b_n\in(0,1)\) for all \(n\), and it is an increasing sequence (by the same argument used in proof 1 to show that \(a_n\) is decreasing). Finally, let \(c_n=\frac1{b_n}\); since \(b_n\) is positive and increasing, \(c_n\) is a decreasing sequence. Note that \(c_n>1\) for all \(n\) because \(b_n<1\). So \(c_n\) is a decreasing sequence bounded below by \(1\), and hence it converges by the monotone convergence theorem. Of course, we have that \(c_n=\frac{1}{1-a_n}\), and hence \(c_n\rightarrow\frac1{1-A}\) where \(a_n\rightarrow A\). We will show that \(c_n\rightarrow1\).

Let \(c_1=1+\delta>1\). Note that \(c_1=\frac1{1-a_1}\), so that \(\delta=\frac1{1-a_1}-1>0\). Since \(b_{n+1}=1-a_{n+1}=\sqrt{1-a_n}=\sqrt{b_n}\), we get \(c_{n+1}=\frac1{b_{n+1}}=\frac1{\sqrt{b_n}}=\sqrt{c_n}\), and an easy induction then shows that \(c_n=(c_1)^\frac{1}{2^{n-1}}=(1+\delta)^\frac{1}{2^{n-1}}\). Now we will argue that \((1+\delta)^\frac{1}{2^{n-1}}\rightarrow1\).

We want to show that \((1+\delta)^\frac{1}{2^{n-1}}-1<\epsilon\) (note that no absolute value bars are necessary since this quantity is positive). This is equivalent to \(1+\delta<(1+\epsilon)^{2^{n-1}}\). By the Bernoulli inequality, \((1+\epsilon)^{2^{n-1}}\geq 1+2^{n-1}\cdot \epsilon\), so we only need \(1+\delta<1+2^{n-1}\cdot \epsilon\), which will occur if we can find an \(n\) so large that \(\frac{\delta}{\epsilon}<2^{n-1}\). Normally, I would take a logarithm here, but I wish to avoid the use of logarithms since we have not formally introduced them.

Note that \(2^k=(1+1)^k=1+k+\cdots+k+1\geq 1+k\) by the binomial expansion. So we have that \(2^{n-1}\geq 1+(n-1)=n\) for all \(n\). So we just need to choose an \(n^*>\frac{\delta}{\epsilon}\); then for all \(n\geq n^*\) we have \(2^{n-1}\geq n\geq n^*>\frac{\delta}{\epsilon}\), which in turn implies that \((1+\delta)^\frac{1}{2^{n-1}}-1<\epsilon\) for all \(n\geq n^*\). We conclude that \(c_n=(1+\delta)^\frac{1}{2^{n-1}}\) converges to \(1\).
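Putting the estimates together in one chain (a compact restatement of the argument above): for any \(n\geq n^*>\frac{\delta}{\epsilon}\),

\[ (1+\epsilon)^{2^{n-1}} \;\geq\; 1+2^{n-1}\epsilon \;\geq\; 1+n\epsilon \;\geq\; 1+n^*\epsilon \;>\; 1+\delta, \]

so \((1+\delta)^\frac{1}{2^{n-1}}<1+\epsilon\), which is exactly \(c_n-1<\epsilon\).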

We have shown that \(c_n\rightarrow1\) for \(c_n=\frac{1}{1-a_n}\). We already know that \(a_n\rightarrow A\) by monotone convergence (see proof 1). Thus we have that \(1=\frac1{1-A}\) showing us that \(A=0\). \(\square\)
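As an informal numerical check (a minimal sketch, not part of either proof), iterating the recursion from a few starting values in \((0,1)\) is consistent with \(a_n\rightarrow0\):

```python
import math

def step(a):
    # one step of the recursion a_{n+1} = 1 - sqrt(1 - a_n)
    return 1 - math.sqrt(1 - a)

for a1 in (0.9, 0.5, 0.1):       # arbitrary starting values in (0, 1)
    a = a1
    for _ in range(30):
        a = step(a)
    print(f"a_1 = {a1}: a_31 = {a:.3e}")
```

After thirty steps the printed terms are roughly of order \(10^{-9}\) or smaller.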

Theorem 2.4.8 (Archimedean Order Property of \(\mathbb R\)) Let \(x\) be any real number. Then there exists a positive integer \(n^*\) greater than \(x\).


Proof. Let \(x\in\mathbb R\) be an arbitrary real number. Assume, toward a contradiction, that the sequence \(a_n=n\) is bounded above by \(x\), so that \(a_n\leq x\) for all \(n\in\mathbb N\). Since \(a_n\) is a strictly increasing sequence that is bounded from above by \(x\), we know it converges by the monotone convergence theorem. Assume \(a_n\rightarrow A\). However, we know recursively that \(a_{n+1}=a_n+1\); taking the limit of this recursive formula gives \(A=A+1\), i.e. \(0=1\). This is a contradiction, so \(a_n\) cannot be bounded above by \(x\), and hence there is a positive integer \(n^*>x\). \(\blacksquare\)

We already proved this (Theorem 1.7.8), but it is presented here again since it is an important property of the real number system.

Here is an example that is kind of strange, but I thought it was interesting; plus, the proof uses the Archimedean property.

Example Consider the sequence defined recursively by \(a_1\geq0\) and \(a_{n+1}=(1+a_n)^{1+a_n}\) for \(n\geq1\). Prove that this sequence diverges to infinity.


Proof. Clearly \(a_n\geq0\) for all \(n\), because \((1+a_n)^{1+a_n}\geq 1+a_n\geq1\) when \(a_n\geq0\). Also, \(a_{n+1}=(1+a_n)^{1+a_n}\geq 1+a_n>a_n\) for any \(a_n\geq0\), thus this sequence is strictly increasing. Now we just need to show it is not bounded from above. Notice that \(a_n>1\) for \(n\geq3\) no matter what we choose for \(a_1\). This means that \(a_{n+1}=(1+a_n)^{1+a_n}>1+a_n\) for \(n\) sufficiently large. So, in summary, we know that there is an \(m\in\mathbb N\) such that \(a_m>1\). This implies that \(a_{m+1}>1+a_m>1+1=2\). Going one step further shows \(a_{m+2}>1+a_{m+1}>1+2=3\). Using induction we can show that \(a_{m+k}>1+k\) for all \(k\in\mathbb N\). Given any \(M>0\), there exists a natural number \(k\) such that \(M<1+k\) (Archimedean property); therefore there is an index \(m+k\) such that \(a_{m+k}>1+k>M\). We conclude that \(M\) cannot be an upper bound on the sequence \(a_n\). Therefore, since \(a_n\) is increasing and not bounded from above, we know it diverges to \(+\infty\) by Theorem 2.4.4. \(\blacksquare\)
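As a quick illustration of how fast this sequence grows (a minimal numerical sketch, not part of the proof, taking \(a_1=0\) so every term is an integer):

```python
a = 0                                  # a_1 = 0; any a_1 >= 0 would do
for n in range(2, 6):                  # compute a_2 through a_5 exactly
    a = (1 + a) ** (1 + a)
    digits = len(str(a))
    print(f"a_{n} has {digits} digit(s):", a if digits <= 6 else "(too large to print)")
```

Already \(a_5\) has more than ten thousand digits.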

\[ \diamond \S \diamond \]