Improper Integrals

I have some questions on the topic of improper integrals. I'm using Thomas' calculus 11th edition for reference, but I have a handful of other books providing me with the same information.

When they are defining improper integrals, they work with the hypothesis that f(x) must be continuous on [itex][a,\infty)[/itex].

From there they define
[tex]\int_a^{\infty} f(x)\,dx = \lim_{b \to \infty} \int_a^b f(x)\, dx,[/tex]
which is just fine. However, why do we need the hypothesis that f is continuous? I can't see any real need for such a strong condition. I am thinking that what they really want is boundedness on every finite subinterval of the real numbers, and forcing the function to be continuous is one way of ensuring this.
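To make the limit definition concrete, here is a small numerical sketch of my own (not from the book): for f(x) = e^{-x} the partial integral from 0 to b has the closed form 1 - e^{-b}, and watching it as b grows shows the partial integrals settling on a single value, which is exactly what the definition asks for.

```python
import math

def partial_integral(b):
    """Closed form of the partial integral of e^(-x) on [0, b]:
    the antiderivative gives 1 - e^(-b)."""
    return 1.0 - math.exp(-b)

# As b -> infinity the partial integrals approach 1, so under the
# limit definition the improper integral converges and equals 1.
for b in (1, 5, 10, 50):
    print(b, partial_integral(b))
```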

In the same way when they state the direct comparison test and the limit comparison test they require this condition of continuity. Is this for the same reason or a different reason?

Anyhow, I would greatly appreciate if there is anyone who could shine some light on this issue for me. It is a great frustration of mine that introductory calculus textbooks tend to refer to "more advanced texts" to justify and show certain results, yet those more advanced texts remain elusive to me.

In order to show you that you must, indeed, impose stricter conditions on f than mere boundedness for the improper integral to have meaning, try to integrate the following f from 0 to infinity:
[tex]f(x)=1,\quad 2n\leq x<2n+1,\ n\in\mathbb{N}; \qquad f(x)=-1,\quad 2n-1\leq x<2n,\ n\in\mathbb{N}[/tex]
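To see the behaviour of this example concretely, here is a sketch of my own (taking ℕ to include 0, so f = 1 on [0,1)): the partial integral from 0 to b is piecewise linear, and it bounces between 0 and 1 forever instead of approaching a limit.

```python
def step_partial(b):
    """Exact partial integral from 0 to b of the alternating step
    function: f = +1 on [2n, 2n+1), f = -1 on [2n-1, 2n)."""
    n = int(b)                      # number of whole unit intervals
    base = float(n % 2)             # value at x = n: 0 if n even, 1 if n odd
    sign = 1.0 if n % 2 == 0 else -1.0
    return base + sign * (b - n)    # linear piece on the current interval

# The partial integrals oscillate: 1 at odd integers, 0 at even ones,
# so the limit as b -> infinity does not exist.
for b in (0.5, 1, 1.5, 2, 3, 4):
    print(b, step_partial(b))
```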

I think I probably need to be a little bit more clear with the question I am getting at.

Your example function jumps between 1 and -1 at every natural number, so the net area we pick up can never exceed 1; yet the improper integral doesn't exist, because the partial integrals never settle down to a single value. However, we could do exactly the same thing with a continuous function, cos(x). Its improper integral doesn't exist either, since the oscillation never settles down to a given value, even though on the whole it can never pick up more than a small amount of area. Nevertheless, in both of these cases it is perfectly reasonable to look at the improper integral; the conclusion we reach is simply that it doesn't have a well-defined value.
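For the cos(x) case the failure to settle down can be made explicit (again a sketch of my own): the partial integral from 0 to b has the closed form sin(b), which stays bounded by 1 in absolute value but keeps oscillating.

```python
import math

def cos_partial(b):
    # Partial integral of cos on [0, b]: the antiderivative is sin,
    # so the value is sin(b) - sin(0) = sin(b).
    return math.sin(b)

# Sampling at b = pi/2, 3*pi/2, 5*pi/2, ... alternates +1, -1, +1, ...
# so the partial integrals are bounded but have no limit.
samples = [cos_partial((2 * k + 1) * math.pi / 2) for k in range(4)]
print(samples)
```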

My problem is that in the textbook, in order to even define the improper integral of a function f, they require that f be continuous. This seems absolutely silly, because there are countless discontinuous functions for which the improper integral makes perfect sense. What I'm trying to figure out is what the real condition is that they want on f in order to define its improper integral. Looking through the rest of the text, they go to great lengths to avoid talking about boundedness in their other theorems about integration, and instead just require continuity, which automatically gives boundedness on a compact interval (or closed, bounded subinterval of the real numbers if you prefer).

So here I wonder: is the real condition they are after boundedness? If we have boundedness, is it then reasonable enough to define the improper integral, even though that gives us no guarantee it will converge? I don't think we need anything stronger, but I just wanted to make sure I wasn't overlooking some other condition.

Later on they define both the direct comparison test and the limit comparison test for determining convergence of improper integrals. There they also include this extra hypothesis of continuity (which clearly isn't required). I think that mere boundedness of the functions should be a strong enough hypothesis here, but once again, I just wanted to make sure I wasn't overlooking something else.