
### Section 1-9 : Comparison Test for Improper Integrals

Now that we’ve seen how to actually compute improper integrals we need to address one more topic about them. Often we aren’t concerned with the actual value of these integrals. Instead we might only be interested in whether the integral is convergent or divergent. Also, there will be some integrals that we simply won’t be able to integrate and yet we would still like to know if they converge or diverge.

To deal with this we’ve got a test for convergence or divergence that we can use to help us answer the question of convergence for an improper integral.

We will give this test only for a sub-case of the infinite interval integral, however versions of the test exist for the other sub-cases of the infinite interval integrals as well as integrals with discontinuous integrands.

#### Comparison Test

If \(f\left( x \right) \ge g\left( x \right) \ge 0\) on the interval \(\left[ {a,\infty } \right)\) then,

- If \(\displaystyle \int_{{\,a}}^{{\,\infty }}{{f\left( x \right)\,dx}}\) converges then so does \(\displaystyle \int_{{\,a}}^{{\,\infty }}{{g\left( x \right)\,dx}}\).
- If \(\displaystyle \int_{{\,a}}^{{\,\infty }}{{g\left( x \right)\,dx}}\) diverges then so does \(\displaystyle \int_{{\,a}}^{{\,\infty }}{{f\left( x \right)\,dx}}\).

Note that if you think in terms of area the Comparison Test makes a lot of sense. If \(f\left( x \right)\) is larger than \(g\left( x \right)\) then the area under \(f\left( x \right)\) must also be larger than the area under \(g\left( x \right)\).

So, if the area under the larger function is finite (*i.e.* \(\int_{{\,a}}^{{\,\infty }}{{f\left( x \right)\,dx}}\) converges) then the area under the smaller function must also be finite (*i.e.* \(\int_{{\,a}}^{{\,\infty }}{{g\left( x \right)\,dx}}\) converges). Likewise, if the area under the smaller function is infinite (*i.e.* \(\int_{{\,a}}^{{\,\infty }}{{g\left( x \right)\,dx}}\) diverges) then the area under the larger function must also be infinite (*i.e.* \(\int_{{\,a}}^{{\,\infty }}{{f\left( x \right)\,dx}}\) diverges).

Be careful not to misuse this test. If the smaller function converges there is no reason to believe that the larger will also converge (after all infinity is larger than a finite number…) and if the larger function diverges there is no reason to believe that the smaller function will also diverge.

Let’s work a couple of examples using the comparison test. Note that all we’ll be able to do is determine the convergence of the integral. We won’t be able to determine the value of the integrals and so won’t even bother with that.

**Example 1** Determine whether the following integral is convergent or divergent.

\[\int_{{\,2}}^{{\,\infty }}{{\frac{{{{\cos }^2}x}}{{{x^2}}}\,dx}}\]

Let’s take a second and think about how the Comparison Test works. If this integral is convergent then we’ll need to find a larger function that also converges on the same interval. Likewise, if this integral is divergent then we’ll need to find a smaller function that also diverges.

So, it seems like it would be nice to have some idea as to whether the integral converges or diverges ahead of time so we will know whether we will need to look for a larger (and convergent) function or a smaller (and divergent) function.

To get the guess for this function let’s notice that the numerator is nice and bounded because we know that,

\[0 \le {\cos ^2}x \le 1\]

Therefore, the numerator simply won’t get too large.

So, it seems likely that the denominator will determine the convergence/divergence of this integral and we know that

\[\int_{{\,2}}^{{\,\infty }}{{\frac{1}{{{x^2}}}\,dx}}\]converges since \(p = 2 > 1\) by the fact in the previous section. So, let’s guess that this integral will converge.

So we now know that we need to find a function that is larger than

\[\frac{{{{\cos }^2}x}}{{{x^2}}}\]and also converges. Making a fraction larger is actually a fairly simple process. We can either make the numerator larger or we can make the denominator smaller. In this case we can’t do a lot about the denominator in a way that will help. However, we can use the fact that \(0 \le {\cos ^2}x \le 1\) to make the numerator larger (*i.e.* we’ll replace the cosine with something we know to be larger, namely 1). So,

\[\frac{{{{\cos }^2}x}}{{{x^2}}} \le \frac{1}{{{x^2}}}\]

Now, as we’ve already noted

\[\int_{{\,2}}^{{\,\infty }}{{\frac{1}{{{x^2}}}\,dx}}\]converges and so by the Comparison Test we know that

\[\int_{{\,2}}^{{\,\infty }}{{\frac{{{{\cos }^2}x}}{{{x^2}}}\,dx}}\]must also converge.
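As a numerical sanity check (not part of the argument above), we can approximate the partial integrals with a simple midpoint rule and watch the comparison at work; the rule and the cutoff values here are our illustrative choices:

```python
import math

def midpoint_integral(f, a, b, n=100_000):
    # Midpoint-rule approximation of the integral of f over [a, b].
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

f = lambda x: math.cos(x) ** 2 / x ** 2   # the integrand in question
g = lambda x: 1.0 / x ** 2                # the larger comparison function

for b in (10.0, 100.0, 1000.0):
    # The partial integral of f stays below that of g, which tends to 1/2.
    print(b, midpoint_integral(f, 2.0, b), midpoint_integral(g, 2.0, b))
```

The partial integrals of the comparison function approach \(\int_{2}^{\infty} x^{-2}\,dx = \frac{1}{2}\), and the partial integrals of the original integrand are trapped below them, consistent with convergence.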

**Example 2** Determine whether the following integral is convergent or divergent.

\[\int_{{\,3}}^{{\,\infty }}{{\frac{1}{{x + {{\bf{e}}^x}}}\,dx}}\]

Let’s first take a guess about the convergence of this integral. As noted after the fact in the last section about

\[\int_{{\,a}}^{{\,\infty }}{{\frac{1}{{{x^p}}}\,dx}}\]if the integrand goes to zero faster than \(\frac{1}{x}\) then the integral will probably converge. Now, we’ve got an exponential in the denominator which is approaching infinity much faster than the \(x\) and so it looks like this integral should probably converge.

So, we need a larger function that will also converge. In this case we can’t really make the numerator larger and so we’ll need to make the denominator smaller in order to make the function larger as a whole. We will need to be careful however. There are two ways to do this and, in this case, only one of them will work for us.

First, notice that since the lower limit of integration is 3 we can say that \(x \ge 3 > 0\) and we know that exponentials are always positive. So, the denominator is the sum of two positive terms and if we were to drop one of them the denominator would get smaller. This would in turn make the function larger.

The question then is which one to drop? Let’s first drop the exponential. Doing this gives,

\[\frac{1}{{x + {{\bf{e}}^x}}} < \frac{1}{x}\]This is a problem however, since

\[\int_{{\,3}}^{{\,\infty }}{{\frac{1}{x}\,dx}}\]diverges by the fact. We’ve got a larger function that is divergent. This doesn’t say anything about the smaller function. Therefore, we chose the wrong one to drop.

Let’s try it again and this time let’s drop the \(x\).

\[\frac{1}{{x + {{\bf{e}}^x}}} < \frac{1}{{{{\bf{e}}^x}}} = {{\bf{e}}^{ - x}}\]Also,

\[\begin{align*}\int_{{\,3}}^{{\,\infty }}{{{{\bf{e}}^{ - x}}\,dx}} & = \mathop {\lim }\limits_{t \to \infty } \int_{{\,3}}^{{\,t}}{{{{\bf{e}}^{ - x}}\,dx}}\\ & = \mathop {\lim }\limits_{t \to \infty } \left( { - {{\bf{e}}^{ - t}} + {{\bf{e}}^{ - 3}}} \right)\\ & = {{\bf{e}}^{ - 3}}\end{align*}\]So, \(\int_{{\,3}}^{{\,\infty }}{{{{\bf{e}}^{ - x}}\,dx}}\) is convergent. Therefore, by the Comparison test

\[\int_{{\,3}}^{{\,\infty }}{{\frac{1}{{x + {{\bf{e}}^x}}}\,dx}}\]is also convergent.
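An informal numerical check (our illustration, not part of the proof): truncating both integrals at \(x = 60\), beyond which both integrands are negligible, the given integral sits below the bound, which is close to \({\bf{e}}^{-3}\).

```python
import math

def midpoint_integral(f, a, b, n=100_000):
    # Midpoint-rule approximation of the integral of f over [a, b].
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# Truncate at x = 60; both tails are negligible by then.
given = midpoint_integral(lambda x: 1.0 / (x + math.exp(x)), 3.0, 60.0)
bound = midpoint_integral(lambda x: math.exp(-x), 3.0, 60.0)  # limit e^{-3}

print(given, bound)
```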

**Example 3** Determine whether the following integral is convergent or divergent.

\[\int_{{\,3}}^{{\,\infty }}{{\frac{1}{{x - {{\bf{e}}^{ - x}}}}\,dx}}\]

This is very similar to the previous example with a couple of very important differences. First, notice that the exponential now goes to zero as \(x\) increases instead of growing larger as it did in the previous example (because of the negative in the exponent). Also note that the exponential is now subtracted off the \(x\) instead of added onto it.

The fact that the exponential goes to zero means that this time the \(x\) in the denominator will probably dominate the term and that means that the integral probably diverges. We will therefore need to find a smaller function that also diverges.

Making fractions smaller is pretty much the same as making fractions larger. In this case we’ll need to either make the numerator smaller or the denominator larger.

This is where the second change will come into play. As before we know that both \(x\) and the exponential are positive. However, this time since we are subtracting the exponential from the \(x\) if we were to drop the exponential the denominator will become larger (we will no longer be subtracting a positive number off the \(x\)) and so the fraction will become smaller. In other words,

\[\frac{1}{{x - {{\bf{e}}^{ - x}}}} > \frac{1}{x}\]and we know that

\[\int_{{\,3}}^{{\,\infty }}{{\frac{1}{x}\,dx}}\]diverges and so by the Comparison Test we know that

\[\int_{{\,3}}^{{\,\infty }}{{\frac{1}{{x - {{\bf{e}}^{ - x}}}}\,dx}}\]must also diverge.
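We can watch the divergence numerically as well (an illustrative sketch, not part of the argument): the partial integrals keep growing roughly like \(\ln b\), with no finite limit in sight, and always exceed the partial integrals of the smaller comparison function.

```python
import math

def midpoint_integral(f, a, b, n=100_000):
    # Midpoint-rule approximation of the integral of f over [a, b].
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

f = lambda x: 1.0 / (x - math.exp(-x))  # the (larger) integrand in question
g = lambda x: 1.0 / x                   # the smaller, divergent comparison

for b in (10.0, 100.0, 1000.0, 10000.0):
    # Both partial integrals keep growing by about ln(10) per decade of b.
    print(b, midpoint_integral(g, 3.0, b), midpoint_integral(f, 3.0, b))
```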

**Example 4** Determine whether the following integral is convergent or divergent.

\[\int_{{\,1}}^{{\,\infty }}{{\frac{{1 + 3{{\sin }^4}\left( {2x} \right)}}{{\sqrt x }}\,dx}}\]

First notice that, as with the first example, the numerator in this function is going to be bounded since the sine is never larger than 1. Therefore, since the exponent on the \(x\) in the denominator is less than 1 we can guess that the integral will probably diverge. We will need a smaller function that also diverges.

We know that \(0 \le 3{\sin ^4}\left( {2x} \right) \le 3\). In particular, this term is positive and so if we drop it from the numerator the numerator will get smaller. This gives,

\[\frac{{1 + 3{{\sin }^4}\left( {2x} \right)}}{{\sqrt x }} > \frac{1}{{\sqrt x }}\]and

\[\int_{{\,1}}^{{\,\infty }}{{\frac{1}{{\sqrt x }}\,dx}}\]diverges so by the Comparison Test

\[\int_{{\,1}}^{{\,\infty }}{{\frac{{1 + 3{{\sin }^4}\left( {2x} \right)}}{{\sqrt x }}\,dx}}\]also diverges.

Up to this point all the examples used one manipulation of either the numerator or the denominator in order to use the Comparison Test. Don’t get so locked into that idea that you decide that is all you will ever have to do. Sometimes you will need to manipulate both the numerator and the denominator.

Let’s do an example like that.

**Example 5** Determine whether the following integral is convergent or divergent.

\[\int_{{\,2}}^{{\,\infty }}{{\frac{{1 + {{\cos }^2}\left( x \right)}}{{\sqrt x \left[ {2 - {{\sin }^4}\left( x \right)} \right]}}\,dx}}\]

In this case we can notice that because the cosine in the numerator is bounded the numerator will never get too large. Likewise, the sine in the denominator is bounded and so again that term will not get too large or too small.

That leaves only the square root in the denominator and because the exponent is less than one we can guess that the integral will probably diverge. Therefore, we will need a smaller function that also diverges.

We know that \(0 \le {\cos ^2}\left( x \right) \le 1\). In particular, this term is positive and so if we drop it from the numerator the numerator will get smaller. This gives,

\[\frac{{1 + {{\cos }^2}\left( x \right)}}{{\sqrt x \left[ {2 - {{\sin }^4}\left( x \right)} \right]}} > \frac{1}{{\sqrt x \left[ {2 - {{\sin }^4}\left( x \right)} \right]}}\]Next, we also know that \(0 \le {\sin ^4}\left( x \right) \le 1\). Again, this is a positive term and so if we no longer subtract this off from the 2 the term in the brackets will get larger and so the rational expression will get smaller. This gives,

\[\frac{{1 + {{\cos }^2}\left( x \right)}}{{\sqrt x \left[ {2 - {{\sin }^4}\left( x \right)} \right]}} > \frac{1}{{\sqrt x \left[ {2 - {{\sin }^4}\left( x \right)} \right]}} > \frac{1}{{2\sqrt x }}\]

Finally, we know that

\[\int_{{\,2}}^{{\,\infty }}{{\frac{1}{{2\sqrt x }}\,dx}}\]diverges (the 2 in the denominator will not affect this) so by the Comparison Test

\[\int_{{\,2}}^{{\,\infty }}{{\frac{{1 + {{\cos }^2}\left( x \right)}}{{\sqrt x \left[ {2 - {{\sin }^4}\left( x \right)} \right]}}\,dx}}\]also diverges.

Okay, we’ve seen a few examples of the Comparison Test now. However, most of them worked pretty much the same way. All the functions were rational and all we did for most of them was add or subtract something from the numerator and/or the denominator to get what we want.

Let’s take a look at an example that works a little differently so we don’t get too locked into these ideas.

**Example 6** Determine whether the following integral is convergent or divergent.

\[\int_{{\,1}}^{{\,\infty }}{{\frac{{{{\bf{e}}^{ - x}}}}{x}\,dx}}\]

Normally, the presence of just an \(x\) in the denominator would lead us to guess divergent for this integral. However, the exponential in the numerator will approach zero so fast that instead we’ll need to guess that this integral converges.

To get a larger function we’ll use the fact that we know from the limits of integration that \(x > 1\). This means that if we just replace the \(x\) in the denominator with 1 (which is always smaller than \(x\)) we will make the denominator smaller and so the function will get larger.

\[\frac{{{{\bf{e}}^{ - x}}}}{x} < \frac{{{{\bf{e}}^{ - x}}}}{1} = {{\bf{e}}^{ - x}}\]and we can show that

\[\int_{{\,1}}^{{\,\infty }}{{{{\bf{e}}^{ - x}}\,dx}}\]converges. In fact, we’ve already done this for a lower limit of 3 and changing that to a 1 won’t change the convergence of the integral. Therefore, by the Comparison Test

\[\int_{{\,1}}^{{\,\infty }}{{\frac{{{{\bf{e}}^{ - x}}}}{x}\,dx}}\]also converges.
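A quick numerical sketch (illustrative only; we truncate both integrals at \(x = 60\), past which the integrands are negligible) shows the given integral sitting below the bound, which is close to \({\bf{e}}^{-1}\):

```python
import math

def midpoint_integral(f, a, b, n=100_000):
    # Midpoint-rule approximation of the integral of f over [a, b].
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# Truncate at x = 60; the integrands are negligible past that point.
given = midpoint_integral(lambda x: math.exp(-x) / x, 1.0, 60.0)
bound = midpoint_integral(lambda x: math.exp(-x), 1.0, 60.0)  # limit e^{-1}

print(given, bound)
```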

We should also really work an example that doesn’t involve a rational function since there is no reason to assume that we’ll always be working with rational functions.

**Example 7** Determine whether the following integral is convergent or divergent.

\[\int_{{\,1}}^{{\,\infty }}{{{{\bf{e}}^{ - {x^2}}}\,dx}}\]

We know that exponentials with negative exponents die down to zero very fast so it makes sense to guess that this integral will be convergent. We need a larger function, but this time we don’t have a fraction to work with so we’ll need to do something different.

We’ll take advantage of the fact that \({{\bf{e}}^{ - x}}\) is a decreasing function. This means that

\[{x_1} > {x_2}\hspace{0.25in}\hspace{0.25in} \Rightarrow \hspace{0.25in}\hspace{0.25in}{{\bf{e}}^{ - {x_1}}} < {{\bf{e}}^{ - {x_2}}}\]In other words, plug in a larger number and the function gets smaller.

From the limits of integration we know that \(x > 1\) and this means that if we square \(x\) it will get larger. Or,

\[{x^2} > x\hspace{0.25in}\hspace{0.25in}{\mbox{provided }}x > 1\]Note that we can only say this since \(x > 1\). This won’t be true if \(x \le 1\)! We can now use the fact that \({{\bf{e}}^{ - x}}\) is a decreasing function to get,

\[{{\bf{e}}^{ - {x^2}}} < {{\bf{e}}^{ - x}}\]So, \({{\bf{e}}^{ - x}}\) is a larger function than \({{\bf{e}}^{ - {x^2}}}\) and we know that

\[\int_{{\,1}}^{{\,\infty }}{{{{\bf{e}}^{ - x}}\,dx}}\]converges so by the Comparison Test we also know that

\[\int_{{\,1}}^{{\,\infty }}{{{{\bf{e}}^{ - {x^2}}}\,dx}}\]is convergent.
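Numerically the picture is the same (an illustrative check; truncating at \(x = 50\), where both tails are utterly negligible): the smaller area sits below the larger one, which is close to \({\bf{e}}^{-1}\).

```python
import math

def midpoint_integral(f, a, b, n=100_000):
    # Midpoint-rule approximation of the integral of f over [a, b].
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# Truncate at x = 50; both integrands are negligible beyond that.
small = midpoint_integral(lambda x: math.exp(-x ** 2), 1.0, 50.0)
large = midpoint_integral(lambda x: math.exp(-x), 1.0, 50.0)  # limit is e^{-1}

print(small, large)
```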

The last two examples made use of the fact that \(x > 1\). Let’s take a look at an example to see how we would have to go about these if the lower limit had been smaller than 1.

**Example 8** Determine whether the following integral is convergent or divergent.

\[\int_{{\,\frac{1}{2}}}^{{\,\infty }}{{{{\bf{e}}^{ - {x^2}}}\,dx}}\]

First, we need to note that \({{\bf{e}}^{ - {x^2}}} \le {{\bf{e}}^{ - x}}\) is only true on the interval \(\left[ {1,\infty } \right)\); for \(x < 1\) the inequality goes the other way.

So, we can’t just proceed as we did in the previous example with the Comparison Test on the interval \(\left[ {\frac{1}{2},\infty } \right)\). However, this isn’t the problem it might at first appear to be. We can always write the integral as follows,

\[\begin{align*}\int_{{\,\frac{1}{2}}}^{{\,\infty }}{{{{\bf{e}}^{ - {x^2}}}\,dx}} & = \int_{{\,\frac{1}{2}}}^{{\,1}}{{{{\bf{e}}^{ - {x^2}}}\,dx}} + \int_{{\,1}}^{{\,\infty }}{{{{\bf{e}}^{ - {x^2}}}\,dx}}\\ & = 0.28554 + \int_{{\,1}}^{{\,\infty }}{{{{\bf{e}}^{ - {x^2}}}\,dx}}\end{align*}\]We used Mathematica to get the value of the first integral. Now, if the second integral converges it will have a finite value and so the sum of two finite values will also be finite and so the original integral will converge. Likewise, if the second integral diverges it will either be infinite or not have a value at all and adding a finite number onto this will not all of a sudden make it finite or exist and so the original integral will diverge. Therefore, this integral will converge or diverge depending only on the convergence of the second integral.

As we saw in Example 7 the second integral does converge and so the whole integral must also converge.
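Incidentally, the value 0.28554 for the head piece can be reproduced without Mathematica: since \(\int {{\bf{e}}^{-x^2}}\,dx = \frac{\sqrt\pi}{2}\operatorname{erf}(x)\), Python’s standard-library error function gives it directly (our alternative computation, not the original author’s).

```python
import math

# ∫_{1/2}^{1} e^{-x²} dx via the error function: (√π/2)(erf(1) - erf(1/2)).
head = math.sqrt(math.pi) / 2 * (math.erf(1.0) - math.erf(0.5))
print(round(head, 5))  # → 0.28554, the value quoted above
```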

As we saw in this example, if we need to, we can split the integral up into one that doesn’t involve any problems and can be computed and one that may contain a problem that we can use the Comparison Test on to determine its convergence.

## Comparison theorem for improper integrals

It states:

If \(f(x)\geq g(x)\geq 0\) on \([a,\infty)\), then

If \(\int_a^\infty f(x)\ dx\) converges then so does \(\int_a^\infty g(x)\ dx\)

If \(\int_a^\infty g(x)\ dx\) diverges then so does \(\int_a^\infty f(x)\ dx\)

Notice that it **does not state**:

If \(f(x)\geq g(x)\geq 0\) on \([a,\infty)\), then

If \(\int_a^\infty g(x)\ dx\) converges then so does \(\int_a^\infty f(x)\ dx\)

If \(\int_a^\infty f(x)\ dx\) diverges then so does \(\int_a^\infty g(x)\ dx\)

The comparison theorem will allow you to draw the first two conclusions, but **not** the second pair. The reason is that we’re assuming \(f(x)\geq g(x)\).

Thinking about \(f(x)\):

If \(f(x)\) is greater than (above) \(g(x)\), then if the integral of \(f(x)\) converges, it forces the integral of \(g(x)\) to converge as well. But if the integral of \(f(x)\) diverges, we can’t draw any conclusion about \(g(x)\), because its integral could diverge or converge below it.

Thinking about \(g(x)\):

If \(g(x)\) is less than (below) \(f(x)\), then if the integral of \(g(x)\) diverges, it forces the integral of \(f(x)\) to diverge as well. But if the integral of \(g(x)\) converges, we can’t draw any conclusion about \(f(x)\), because its integral could diverge or converge above it.

Given an improper integral and asked to use the comparison theorem to say whether it converges or diverges, our goal will be to find a comparison function that we know will either always be greater than the given function, or always be less than the given function.

Since \(f(x)\geq g(x)\) in the comparison theorem:

If we find a comparison function that is always greater than the given function, then the given function will be \(g(x)\) and the comparison function will be \(f(x)\).

In this case, in order to use the comparison theorem to draw a conclusion, we’d have to show that the integral of the comparison function \(f(x)\) converges. If we can show that, then we’ve proven that the integral of the given function \(g(x)\) also converges.

If we find a comparison function that is always less than the given function, then the given function will be \(f(x)\) and the comparison function will be \(g(x)\).

In this case, in order to use the comparison theorem to draw a conclusion, we’d have to show that the integral of the comparison function \(g(x)\) diverges. If we can show that, then we’ve proven that the integral of the given function \(f(x)\) also diverges.


## 8.3: Integral and Comparison Tests


Knowing whether or not a series converges is very important, especially when we discuss Power Series. Theorems 60 and 61 give criteria for when Geometric and \(p\)-series converge, and Theorem 63 gives a quick test to determine if a series diverges. There are many important series whose convergence cannot be determined by these theorems, though, so we introduce a set of tests that allow us to handle a broad range of series. We start with the Integral Test.

### Integral Test

We stated in Section 8.1 that a sequence \(\{a_n\}\) is a function \(a(n)\) whose domain is \(\mathbb{N}\), the set of natural numbers. If we can extend \(a(n)\) to \(\mathbb{R}\), the real numbers, and it is both positive and decreasing on \([1,\infty)\), then the convergence of \( \sum\limits_{n=1}^\infty a_n\) is the same as \(\int\limits_1^\infty a(x)dx\).

Theorem \(\PageIndex{1}\): Integral Test

Let a sequence \(\{a_n\}\) be defined by \(a_n=a(n)\), where \(a(n)\) is continuous, positive and decreasing on \([1,\infty)\). Then \( \sum\limits_{n=1}^\infty a_n\) converges, if, and only if, \(\int\limits_1^\infty a(x) dx\) converges.

We can demonstrate the truth of the Integral Test with two simple graphs. In Figure \(\PageIndex{1a}\), the height of each rectangle is \(a(n)=a_n\) for \(n=1,2,\ldots\), and clearly the rectangles enclose more area than the area under \(y=a(x)\). Therefore we can conclude that

\[\int\limits_1^\infty a(x) dx < \sum\limits_{n=1}^\infty a_n.\label{eq:integral_testa}\]

In Figure \(\PageIndex{1b}\), we draw rectangles under \(y=a(x)\) with the Right-Hand rule, starting with \(n=2\). This time, the area of the rectangles is less than the area under \(y=a(x)\), so \(\sum\limits_{n=2}^\infty a_n < \int\limits_1^\infty a(x) dx\). Note how this summation starts with \(n=2\); adding \(a_1\) to both sides lets us rewrite the summation starting with \(n=1\):

\[\sum\limits_{n=1}^\infty a_n < a_1 +\int\limits_1^\infty a(x) dx.\label{eq:integral_testb}\]

Combining Equations \ref{eq:integral_testa} and \ref{eq:integral_testb}, we have

\[\sum\limits_{n=1}^\infty a_n< a_1 +\int\limits_1^\infty a(x) dx < a_1 + \sum\limits_{n=1}^\infty a_n.\label{eq:integral_testc}\]


From Equation \ref{eq:integral_testc} we can make the following two statements:

- If \( \sum\limits_{n=1}^\infty a_n\) diverges, so does \(\int\limits_1^\infty a(x) dx\) (because \( \sum\limits_{n=1}^\infty a_n < a_1 +\int\limits_1^\infty a(x) dx)\)
- If \( \sum\limits_{n=1}^\infty a_n\) converges, so does \(\int\limits_1^\infty a(x) dx\) (because \( \int\limits_1^\infty a(x) dx < \sum\limits_{n=1}^\infty a_n.)\)

Therefore the series and integral either both converge or both diverge.
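For a concrete illustration of the sandwich inequality (our choice of \(a_n = 1/n^2\), not the text's): here \(\int_1^\infty x^{-2}\,dx = 1\), \(\sum 1/n^2 = \pi^2/6 \approx 1.645\), and \(a_1 + 1 = 2\), so the sum is indeed trapped between the integral and \(a_1\) plus the integral.

```python
import math

# Check the sandwich  integral < sum < a_1 + integral  for a(x) = 1/x^2.
integral = 1.0  # ∫_1^∞ x^(-2) dx, evaluated by hand
a1 = 1.0
partial_sum = sum(1.0 / n ** 2 for n in range(1, 1_000_000))  # ≈ π²/6

print(integral < partial_sum < a1 + integral)  # prints True
```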

This theorem also extends to series where \(a(n)\) is positive and decreasing on \([b,\infty)\) for some \(b>1\).

Example \(\PageIndex{1}\): Using the Integral Test

Determine the convergence of \(\sum\limits_{n=1}^\infty \dfrac{\ln n}{n^2}\). (The terms of the sequence \(\{a_n\} = \{\ln n/n^2\}\) and the n\(^{\text{th}}\) partial sums are given in Figure \(\PageIndex{2}\).)

**Solution**

Figure \(\PageIndex{2}\) implies that \(a(n) = (\ln n)/n^2\) is positive and decreasing on \([2,\infty)\). We can determine this analytically, too. We know \(a(n)\) is positive as both \(\ln n\) and \(n^2\) are positive on \([2,\infty)\). To determine that \(a(n)\) is decreasing, consider \(a^\prime(n) = (1-2\ln n)/n^3\), which is negative for \(n\geq 2\). Since \(a^\prime(n)\) is negative, \(a(n)\) is decreasing.

Applying the Integral Test, we test the convergence of \( \int\limits_1^\infty \dfrac{\ln x}{x^2} dx\). Integrating this improper integral requires the use of Integration by Parts, with \(u = \ln x\) and \(dv = 1/x^2 dx\).

\[\begin{align*}\int\limits_1^\infty \dfrac{\ln x}{x^2} dx &=\lim\limits_{b\to\infty} \int\limits_1^b \dfrac{\ln x}{x^2} dx\\ &=\lim\limits_{b\to\infty} -\dfrac1x\ln x\Big|_1^b + \int\limits_1^b\dfrac1{x^2} dx \\ &=\lim\limits_{b\to\infty} -\dfrac1x\ln x -\dfrac 1x\Big|_1^b\\ &=\lim\limits_{b\to\infty}1-\dfrac1b-\dfrac{\ln b}{b}.\quad \text{Apply L'H\(\hat o\)pital's Rule:}\\ &= 1. \end{align*}\]

Since \( \int\limits_1^\infty \dfrac{\ln x}{x^2} dx\) converges, so does \( \sum\limits_{n=1}^\infty \dfrac{\ln n}{n^2}\).
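As an informal numerical check (not in the original), a simple midpoint rule reproduces the antiderivative values \(1 - \frac1b - \frac{\ln b}{b}\) from the computation above, and both creep up toward the limit 1:

```python
import math

def midpoint_integral(f, a, b, n=100_000):
    # Midpoint-rule approximation of ∫_a^b f(x) dx.
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

for b in (10.0, 100.0, 1000.0):
    numeric = midpoint_integral(lambda x: math.log(x) / x ** 2, 1.0, b)
    exact = 1 - 1 / b - math.log(b) / b  # antiderivative evaluated at the endpoints
    print(b, numeric, exact)
```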

Theorem 61 was given without justification, stating that the general \(p\)-series \( \sum\limits_{n=1}^\infty \dfrac 1{(an+b)^p}\) converges if, and only if, \(p>1\). In the following example, we prove this to be true by applying the Integral Test.

Example \(\PageIndex{2}\): Using the Integral Test to establish Theorem 61

Use the Integral Test to prove that \( \sum\limits_{n=1}^\infty \dfrac1{(an+b)^p}\) converges if, and only if, \(p>1\).

**Solution**

Consider the integral \(\int\limits_1^\infty \dfrac1{(ax+b)^p} dx\); assuming \(p\neq 1\),

\[\begin{align*}

\int\limits_1^\infty \dfrac1{(ax+b)^p} dx &=\lim\limits_{c\to\infty} \int\limits_1^c \dfrac1{(ax+b)^p} dx \\

&=\lim\limits_{c\to\infty} \dfrac{1}{a(1-p)}(ax+b)^{1-p}\Big|_1^c\\

&=\lim\limits_{c\to\infty} \dfrac{1}{a(1-p)}\big((ac+b)^{1-p}-(a+b)^{1-p}\big).

\end{align*}\]

This limit converges if, and only if, \(p>1\). It is easy to show that the integral also diverges in the case of \(p=1\). (This result is similar to the work preceding Key Idea 21.)

Therefore \( \sum\limits_{n=1}^\infty \dfrac 1{(an+b)^p}\) converges if, and only if, \(p>1\).

We consider two more convergence tests in this section, both *comparison* tests. That is, we determine the convergence of one series by comparing it to another series with known convergence.

### Direct Comparison Test

Theorem \(\PageIndex{1}\): Direct Comparison Test

Let \(\{a_n\}\) and \(\{b_n\}\) be positive sequences where \(a_n\leq b_n\) for all \(n\geq N\), for some \(N\geq 1\).

- If \( \sum\limits_{n=1}^\infty b_n\) converges, then \( \sum\limits_{n=1}^\infty a_n\) converges.
- If \( \sum\limits_{n=1}^\infty a_n\) diverges, then \( \sum\limits_{n=1}^\infty b_n\) diverges.

**Note**: A sequence \(\{a_n\}\) is a **positive sequence** if \(a_n>0\) for all \(n\).

Because of Theorem 64, any theorem that relies on a positive sequence still holds true when \(a_n>0\) for all but a finite number of values of \(n\).

Example \(\PageIndex{3}\): Applying the Direct Comparison Test

Determine the convergence of \(\sum\limits_{n=1}^\infty \dfrac1{3^n+n^2}\).

**Solution**

This series is neither a geometric nor a \(p\)-series, but it seems related. We predict it will converge, so we look for a series with larger terms that converges. (Note too that the Integral Test seems difficult to apply here.)

Since \(3^n < 3^n+n^2\), \( \dfrac1{3^n}> \dfrac1{3^n+n^2}\) for all \(n\geq1\). The series \(\sum\limits_{n=1}^\infty \dfrac{1}{3^n}\) is a convergent geometric series; by Theorem 66, \( \sum\limits_{n=1}^\infty \dfrac1{3^n+n^2}\) converges.
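A few partial sums (an informal check, not part of the proof) confirm that the series settles quickly and stays under the geometric bound \(\sum_{n\ge1} 3^{-n} = \frac12\):

```python
# Partial sums of 1/(3^n + n^2) settle quickly and stay below 1/2.
partial = sum(1.0 / (3 ** n + n ** 2) for n in range(1, 60))
geometric_bound = 0.5  # sum_{n>=1} (1/3)^n = (1/3)/(1 - 1/3)

print(partial, partial < geometric_bound)
```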

Example \(\PageIndex{4}\): Applying the Direct Comparison Test

Determine the convergence of \(\sum\limits_{n=1}^\infty \dfrac{1}{n-\ln n}\).

**Solution**

We know the Harmonic Series \(\sum\limits_{n=1}^\infty \dfrac1n\) diverges, and it seems that the given series is closely related to it, hence we predict it will diverge.

Since \(n\geq n-\ln n\) for all \(n\geq 1\), \( \dfrac1n \leq \dfrac1{n-\ln n}\) for all \(n\geq 1\).

The Harmonic Series diverges, so we conclude that \(\sum\limits_{n=1}^\infty \dfrac{1}{n-\ln n}\) diverges as well.

The concept of direct comparison is powerful and often relatively easy to apply. Practice helps one develop the necessary intuition to quickly pick a proper series with which to compare. However, it is easy to construct a series for which it is difficult to apply the Direct Comparison Test.

Consider \(\sum\limits_{n=1}^\infty \dfrac1{n+\ln n}\). It is very similar to the divergent series given in Example 8.3.5. We suspect that it also diverges, as \( \dfrac 1n \approx \dfrac1{n+\ln n}\) for large \(n\). However, the inequality that we naturally want to use "goes the wrong way'': since \(n\leq n+\ln n\) for all \(n\geq 1\), \(\dfrac1n \geq \dfrac{1}{n+\ln n}\) for all \(n\geq 1\). The given series has terms *less than* the terms of a divergent series, and we cannot conclude anything from this.

Fortunately, we can apply another test to the given series to determine its convergence.

### Limit Comparison Test

Theorem 67: limit comparison test

Let \(\{a_n\}\) and \(\{b_n\}\) be positive sequences.

- If \(\lim_{n\to\infty} \dfrac{a_n}{b_n} = L\), where \(L\) is a positive real number, then \( \sum\limits_{n=1}^\infty a_n\) and \( \sum\limits_{n=1}^\infty b_n\) either both converge or both diverge.
- If \(\lim_{n\to\infty} \dfrac{a_n}{b_n} = 0\), then if \( \sum\limits_{n=1}^\infty b_n\) converges, then so does \( \sum\limits_{n=1}^\infty a_n\).
- If \(\lim_{n\to\infty} \dfrac{a_n}{b_n} = \infty\), then if \( \sum\limits_{n=1}^\infty b_n\) diverges, then so does \( \sum\limits_{n=1}^\infty a_n\).

Theorem 67 is most useful when the convergence of the series from \(\{b_n\}\) is known and we are trying to determine the convergence of the series from \(\{a_n\}\).

We use the Limit Comparison Test in the next example to examine the series \(\sum\limits_{n=1}^\infty \dfrac1{n+\ln n}\) which motivated this new test.

Example \(\PageIndex{5}\): Applying the Limit Comparison Test

Determine the convergence of \(\sum\limits_{n=1}^\infty \dfrac1{n+\ln n}\) using the Limit Comparison Test.

**Solution**

We compare the terms of \(\sum\limits_{n=1}^\infty \dfrac1{n+\ln n}\) to the terms of the Harmonic Series \(\sum\limits_{n=1}^\infty \dfrac1{n}\):

\[\begin{align*}

\lim_{n\to\infty}\dfrac{1/(n+\ln n)}{1/n} &=\lim\limits_{n\to\infty} \dfrac{n}{n+\ln n} \\

&= 1\quad \text{(after applying L'H\(\hat o\)pital's Rule)}.

\end{align*}\]

Since the Harmonic Series diverges, we conclude that \(\sum\limits_{n=1}^\infty \dfrac1{n+\ln n}\) diverges as well.
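The limit can also be eyeballed numerically (a sanity check, not a proof): the ratio \(\frac{n}{n+\ln n}\) marches toward 1.

```python
import math

# The ratio a_n/b_n = n/(n + ln n) tends to 1 as n grows.
for n in (10, 1_000, 1_000_000):
    print(n, n / (n + math.log(n)))
```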

Example \(\PageIndex{6}\): Applying the Limit Comparison Test

Determine the convergence of \(\sum\limits_{n=1}^\infty \dfrac1{3^n-n^2}\).

**Solution**

This series is similar to the one in Example 8.3.3, but now we are considering "\(3^n-n^2\)'' instead of "\(3^n+n^2\).'' This difference makes applying the Direct Comparison Test difficult.

Instead, we use the Limit Comparison Test and compare with the series \(\sum\limits_{n=1}^\infty \dfrac1{3^n}\):

\[\begin{align*}

\lim_{n\to\infty}\dfrac{1/(3^n-n^2)}{1/3^n} &=\lim\limits_{n\to\infty}\dfrac{3^n}{3^n-n^2} \\

&= 1 \quad \text{(after applying L'H\(\hat o\)pital's Rule twice)}.

\end{align*}\]

We know \(\sum\limits_{n=1}^\infty \dfrac1{3^n}\) is a convergent geometric series, hence \(\sum\limits_{n=1}^\infty \dfrac1{3^n-n^2}\) converges as well.

As mentioned before, practice helps one develop the intuition to quickly choose a series with which to compare. A general rule of thumb is to pick a series based on the dominant term in the expression of \(\{a_n\}\). It is also helpful to note that factorials dominate exponentials, which dominate algebraic functions (e.g., polynomials), which dominate logarithms. In the previous example, the dominant term of \(\dfrac{1}{3^n-n^2}\) was \(3^n\), so we compared the series to \( \sum\limits_{n=1}^\infty \dfrac1{3^n}\). It is hard to apply the Limit Comparison Test to series containing factorials, though, as we have not learned how to apply L'H\(\hat o\)pital's Rule to \(n!\).

Example \(\PageIndex{7}\): Applying the Limit Comparison Test

Determine the convergence of \(\sum\limits_{n=1}^\infty \dfrac{\sqrt{n}+3}{n^2-n+1}\).

**Solution**

We naively attempt to apply the rule of thumb given above and note that the dominant term in the expression of the series is \(1/n^2\). Knowing that \( \sum\limits_{n=1}^\infty \dfrac1{n^2}\) converges, we attempt to apply the Limit Comparison Test:

\[\begin{align*}

\lim_{n\to\infty}\dfrac{(\sqrt{n}+3)/(n^2-n+1)}{1/n^2} &=\lim\limits_{n\to\infty}\dfrac{n^2(\sqrt n+3)}{n^2-n+1}\\

&= \infty \quad \text{(Apply L'H\(\hat o\)pital's Rule)}.

\end{align*}\]

Theorem 67 part (3) only applies when \(\sum\limits_{n=1}^\infty b_n\) diverges; in our case, it converges. Ultimately, our test has not revealed anything about the convergence of our series.

The problem is that we chose a poor series with which to compare. Since the numerator and denominator of the terms of the series are both algebraic functions, we should have compared our series to the dominant term of the numerator divided by the dominant term of the denominator.

The dominant term of the numerator is \(n^{1/2}\) and the dominant term of the denominator is \(n^2\). Thus we should compare the terms of the given series to \(n^{1/2}/n^2 = 1/n^{3/2}\):

\[\begin{align*}

\lim_{n\to\infty}\dfrac{(\sqrt{n}+3)/(n^2-n+1)}{1/n^{3/2}} &=\lim\limits_{n\to \infty} \dfrac{n^{3/2}(\sqrt n+3)}{n^2-n+1} \\

&= 1\quad \text{(Apply L'H\(\hat o\)pital's Rule)}.

\end{align*}\]

Since the \(p\)-series \(\sum\limits_{n=1}^\infty \dfrac1{n^{3/2}}\) converges, we conclude that \(\sum\limits_{n=1}^\infty \dfrac{\sqrt{n}+3}{n^2-n+1}\) converges as well.
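As before, the good choice of comparison can be confirmed numerically (an illustrative check): the ratio of the terms to \(1/n^{3/2}\) settles toward 1.

```python
import math

def ratio(n):
    # a_n / b_n with a_n = (√n + 3)/(n² - n + 1) and b_n = 1/n^(3/2).
    a_n = (math.sqrt(n) + 3) / (n ** 2 - n + 1)
    b_n = 1 / n ** 1.5
    return a_n / b_n

for n in (10, 1_000, 1_000_000):
    print(n, ratio(n))
```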

We mentioned earlier that the Integral Test did not work well with series containing factorial terms. The next section introduces the **Ratio Test**, which does handle such series well. We also introduce the **Root Test**, which is good for series where each term is raised to a power.

### Contributors and Attributions

Gregory Hartman (Virginia Military Institute). Contributions were made by Troy Siemers and Dimplekumar Chalishajar of VMI and Brian Heinold of Mount Saint Mary's University. This content is copyrighted by a Creative Commons Attribution - Noncommercial (BY-NC) License. http://www.apexcalculus.com/


## Convergence tests for improper integrals

Quite often we do not really care about the precise value of an integral; we just need to know whether it converges or not. Since many integrals are difficult to evaluate, it is usually easier to compare the integrated function to another, simpler function, and then use this comparison to reach a conclusion. In this section we will only consider basic improper integrals - that is, integrals with one "problem". More general integrals are always handled by splitting them into several basic integrals (each with one problem), to each of which we then apply convergence tests. Because of the symmetry of the situation, we will state our comparison theorems for the case when the problem appears at the right endpoint.

First we will assume that the functions involved are positive. This greatly simplifies our situation. As we observed before, there are only two alternatives in this case: Either the area under the graph is finite or it is infinite.

Theorem (Comparison test).

Let \(b\) be a real number or \(b = \infty\), let \(a < b\). Let \(f\) and \(g\) be functions that are continuous and non-negative on \([a,b)\) and \(f \le g\) on \([a,b)\).

- If \(\int_a^b g(x)\,dx\) converges, then \(\int_a^b f(x)\,dx\) also converges.
- If \(\int_a^b f(x)\,dx\) diverges, then \(\int_a^b g(x)\,dx\) also diverges.

The idea of this test should be clear if you picture the two areas involved:

If the area under the graph of *g* is finite, then so should be the smaller area under the graph of *f*. Conversely, if the region under the graph of *f* has infinite area, then the larger region under the graph of *g* should have it, too.

The picture also makes it clear that the comparison can only work in one direction and the above two implications cannot be true as equivalences. For instance, assume that the area under the graph of *f* is finite. Since the region under the graph of *g* is larger, no conclusion is possible: its area may be finite but also infinite. This is the main disadvantage of the Comparison test.

**Example:**

Decide whether the following integral converges:

This integral can actually be evaluated using partial fractions, but it is easier to answer this question using the Comparison test. We note that for positive *x*, . Since the integral converges (this we remember, see Properties and Examples), by the Comparison test, the integral in question also converges.
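A concrete instance of this kind of comparison can be checked numerically. The integrand below, \(f(x) = 1/(x^2 + x)\), is our own assumed example (chosen because it invites partial fractions and is dominated by \(1/x^2\) for \(x \ge 1\)); the truncated integrals of \(f\) should grow with the upper limit \(T\) while staying below \(\int_1^\infty x^{-2}\,dx = 1\):

```python
# Sketch with an assumed integrand: f(x) = 1/(x^2 + x) satisfies
# 0 <= f(x) <= 1/x^2 for x >= 1, and the area under 1/x^2 on [1, infinity) is 1.
# So the truncated integrals of f must increase with T yet stay below 1.

def trapezoid(f, a, b, n=100_000):
    """Composite trapezoid rule for f on [a, b] with n subintervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

f = lambda x: 1.0 / (x * x + x)

values = [trapezoid(f, 1.0, T) for T in (10.0, 100.0, 1000.0)]
print(values)
```

The values increase toward \(\ln 2 \approx 0.693\) (the exact value one gets via partial fractions) and never exceed the bound 1.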

We just saw the main advantage of the Comparison test: very often it is very easy to use. This example was typical. Given a (complicated) function, we find a comparison function, typically a power, because we know the behaviour of powers well. Then we try to establish some inequality. If we are lucky, we get the conclusion quite easily. If we are not lucky, we get to see the main disadvantage of this test in action:

**Example:**

Decide whether the following integral converges:

Again, this integral can actually be evaluated using partial fractions, but we try it using the Comparison test. We note that for positive *x*, . The test integral converges (as before), but this time the comparison inequality goes the wrong way and no conclusion can be made. The Comparison test failed (as used). It should be noted that this problem can in fact be solved using the Comparison test, but it requires a more delicate choice of the test function *g*. The justification of the inequality involved requires some work, so in the end it is easier to use another test instead.

This example shows that it is not enough to make just some comparison. Given a function *f*, we try to find a suitable (simple) test function. If there is a natural candidate *h* that is smaller than *f*, it will be useful only if its integral diverges; we would look for such *h* if we suspected that the integral of *f* diverges and wanted to prove it. If there is a natural candidate *g* that is larger than *f*, it will be useful only if its integral converges; we would try to find such *g* if we suspected that the integral of *f* converges.

The Comparison test can also be thought of as a generalization of the following fact (*cf.* Properties of Riemann integral):

If \(f\) and \(g\) are Riemann integrable on \([a,b]\) and \(f \le g\) on \([a,b]\), then

\[\int_a^b f(x)\,dx \le \int_a^b g(x)\,dx.\]

The Comparison test essentially says that the same is also true for non-negative functions and improper integrals. However, then the inequality between integrals is not strictly true (as they might not exist), rather, it has the following meaning: If the "smaller integral" is infinite, then the "larger" must also naturally be infinite, since only infinity satisfies the inequality ∞ ≤ *A*.

On the other hand, if the "larger integral" is finite, then so should be the "smaller" one, and its value must be less than or equal. This "inequality approach" also nicely illustrates why the Comparison test works one way only. We will show it by returning to our first two examples.

In the first one, we can imagine that by integrating the inequality we get the inequality

Now it seems natural that the given integral is convergent, we even get an upper bound for its value.

In the second example, we can imagine that by integrating the inequality we get the inequality

In this inequality, the given integral can be equal to a finite number but also to infinity (as surely ∞ ≥ 1/3). Thus no conclusion is possible.

Now it should also be clearer why we require that the function *f* be non-negative. If we allowed it to drop below the *x*-axis, we would have no control over how much area it accumulates there. Thus a comparison test for functions that change sign a lot should involve two test functions, one that prevents *f* from being too large and another that prevents it from gaining too much area below the *x*-axis. Such a complicated test is usually not needed, and we can do with an easier (but less powerful) comparison test which keeps track of *f* by using the absolute value:

Theorem (Comparison test - an absolute value version).

Let \(b\) be a real number or \(b = \infty\), let \(a < b\). Let \(f\) and \(g\) be functions that are continuous on \([a,b)\) and let \(|f| \le g\) on \([a,b)\). If \(\int_a^b g(x)\,dx\) converges, then \(\int_a^b f(x)\,dx\) also converges.
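To see the absolute-value version at work, here is a small numerical sketch with a sign-changing integrand of our own choosing, \(f(x) = \cos(x)/x^2\), which satisfies \(|f(x)| \le 1/x^2\) on \([1,\infty)\):

```python
import math

# Sketch: f(x) = cos(x)/x^2 changes sign, but |f(x)| <= 1/x^2, and the integral
# of 1/x^2 on [1, infinity) converges.  The absolute-value version of the test
# then guarantees that the truncated integrals of f settle near a finite limit.

def trapezoid(f, a, b, n):
    """Composite trapezoid rule for f on [a, b] with n subintervals."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

f = lambda x: math.cos(x) / (x * x)

for T in (50.0, 100.0, 200.0):
    print(T, trapezoid(f, 1.0, T, int(1000 * T)))
```

The tail beyond \(T\) is bounded in size by \(\int_T^\infty x^{-2}\,dx = 1/T\), so successive printed values can differ by at most a few hundredths.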

The next test is a much stronger tool than the Comparison test. In particular, its conclusion is stated as an equivalence, so it does not share the main disadvantage of the Comparison test. Its main disadvantage is that its correct application requires more work.

Theorem (Limit Comparison test).

Let \(b\) be a real number or \(b = \infty\), let \(a < b\). Let \(f\) and \(g\) be functions that are continuous on \([a,b)\), let \(f \ge 0\) and \(g > 0\) there. Assume that the limit

\[\lim_{x\to b^-} \frac{f(x)}{g(x)}\]

exists, is finite, and is not equal to zero. Then the integral \(\int_a^b f(x)\,dx\) converges if and only if the integral \(\int_a^b g(x)\,dx\) converges.

This test works in a somewhat different way. Given a function *f*, we find a test function *g* which does not necessarily have to be greater or smaller than *f*; in fact, it may sometimes go above and sometimes drop below *f*. The important thing is that as *x* gets close to *b* (from the appropriate side, that is, from within the interval), these two functions should be basically equal (up to a multiple). This is verified using "limit comparison at *b*", which is exactly the limit assumption in the Limit Comparison test. In this way we justify that our guess of a test function was correct. Typically (for the "right" guess) the limit should be equal to 1. That would mean that when *x* gets close to *b*, the ratio \(f(x)/g(x)\) is about one. Multiplying out, we get that if *x* is close to *b*, then \(f(x) \sim g(x)\) (meaning that they are about the same). Then it seems natural that also

\[\int_a^b f(x)\,dx \sim \int_a^b g(x)\,dx.\]

So these integrals should come out about the same, and the conclusion of the theorem seems clear. If one of the integrals is finite, so should be the other. If one of them diverges, so should the other.

The Limit Comparison test is also true for functions that are always negative. In fact, it can be applied even to functions that do not necessarily keep their signs. But in order to work, the changes should not happen "too often". Since exact specification of this condition is not worth the trouble, people usually ignore this more general view and just use this test with non-negative functions.

**Example:**

Decide whether the following integral converges:

If *x* is close to infinity, that is, if it is a really big number, then in the denominator the square will prevail and we can ignore the rest. This motivates our choice of the test function: \(g(x) = 1/x^{2}\). Now we have to justify that our choice is correct:

The limit exists and is not equal to zero, so the given function and the chosen test function are indeed very similar around infinity. Since we know that the corresponding integral of \(1/x^{2}\) converges, it follows that the given integral also converges.
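A numerical version of this check is easy to run. The integrand below, \(f(x) = 1/(x^2 - x + 2)\), is an assumed stand-in of our own choosing whose denominator is dominated by the square, compared against the test function \(g(x) = 1/x^2\) from the example:

```python
# Illustrative sketch: f is an assumed integrand (a function whose denominator
# is dominated by x^2), and g(x) = 1/x^2 is the test function.  The ratio f/g
# should approach a finite non-zero limit (here 1) as x -> infinity.

f = lambda x: 1.0 / (x * x - x + 2)
g = lambda x: 1.0 / (x * x)

for x in (10.0, 1000.0, 100000.0):
    print(x, f(x) / g(x))
```

A non-zero finite limit of the printed ratios is precisely the hypothesis the Limit Comparison test needs.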

This was a typical Limit Comparison test problem. First we found a test function. Then we used a limit to justify that our choice was correct. Then we looked at the corresponding integral with the test function, investigated its convergence, and finally carried this conclusion over to the given integral. For a summary of the strategy for choosing the right test function and some important examples we refer to the Methods Survey - Improper Integrals and Solved Problems - Improper Integrals.

It should be noted that the Limit Comparison test is not better (in the sense of more general) than the Comparison test. There are problems where the comparison via inequality can be achieved while the limit comparison is basically impossible. One such problem is in Solved Problems - Improper Integrals.

All three tests stated above have companion versions that handle the case when there is a problem at the left endpoint of the interval. Since the modifications are obvious, we prefer to show the application in one problem. We also use this opportunity to show how it works when the "problem" is not infinity but a vertical asymptote.

**Example:** Decide whether the following integral converges:

There is a problem at *x* = 2. We claim that if *x* is close to 2 (from the right), then the given function behaves almost exactly like . This claim has to be justified:

We see that our guess was correct as the limit yielded a non-zero number. Now we have to investigate the corresponding improper integral for our test function:

This integral is divergent, since it is one of the powers we investigated in Properties and Examples and we remember it. By the Limit Comparison test, the given integral also diverges.

While the above solution followed the Limit Comparison test correctly, it may be worth looking at the meaning of the whole procedure. The limit result in the above comparison means that if *x* is really close to 2 (from the right), then . This also implies similarity between the integrals taken from 2 to the right. The square root of 2 is a multiplicative constant, so we can factor it out and get the comparison

Since the test integral on the right is divergent and multiplication by a non-zero number cannot fix it, the integral on the left should also be divergent. We also see that the multiplicative constant we obtain during the limit comparison can be ignored in our considerations, because it cannot influence the convergence of our integrals (but it needs to be non-zero for this).
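The power behaviour near the problem point drives everything here. As a hedged illustration (using generic powers of \(x - 2\) rather than the example's exact functions), the truncated integrals over \([2+\varepsilon, 3]\) show the divergent and convergent cases side by side:

```python
import math

# Sketch of the p-test near x = 2 with generic powers (an illustration, not the
# example's exact integrand): the integral of (x-2)^(-1) over [2+eps, 3] is
# ln(1/eps), which blows up as eps -> 0, while the integral of (x-2)^(-1/2) is
# 2*(1 - sqrt(eps)), which approaches the finite value 2.

def tail_p1(eps):
    return math.log(1.0 / eps)           # exact value of the p = 1 integral

def tail_phalf(eps):
    return 2.0 * (1.0 - math.sqrt(eps))  # exact value of the p = 1/2 integral

for eps in (1e-2, 1e-4, 1e-6):
    print(eps, tail_p1(eps), tail_phalf(eps))
```

This is exactly the dichotomy for powers \((x-2)^{-p}\) near the endpoint: divergence for \(p \ge 1\), convergence for \(p < 1\).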

The reader now probably wonders how we came up with the test function. Indeed, this is quite difficult and requires considerable experience; even then it can be tricky. This is the main reason why the comparison tests are mostly used for problems at infinity or negative infinity. There we can imagine that *x* is a really big number, and our intuition can help us determine which parts of the given function become unimportant.

