So, how did you manage to convince yourself that taking square roots of fractions repeatedly has the same behavior? (Just kidding, you replace > with < of course).
–
Guess who it is. Aug 25 '10 at 16:24

You were a pretty smart kid! Up to the statement that any decreasing sequence bounded below has a limit, that is a perfectly good argument. And that statement more or less comes down to how you define the real numbers in the first place...
–
David Speyer Aug 25 '10 at 18:18

@David: Thanks! Unfortunately, it has been all downhill since then :-( :-) You are right, it can be made rigorous. I suppose it comes from the completeness axiom: every subset of R that is bounded above has a supremum (equivalently, every subset bounded below has an infimum).
–
Aryabhata Aug 25 '10 at 18:38

Added later: to make the picture it is better to use the cosine (then you do not even have to make the picture yourself, because that is exactly what is on the Wikipedia page), mainly because the iteration is seen more clearly: for the square root, the sequence converges too quickly and too tamely to be interesting.

PS. You can draw this kind of picture, assuming you have access to Mathematica, with the following code:
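The original Mathematica snippet is not preserved in this extract. As a stand-in (my own sketch, not the author's code), here is a Python version that computes the points of a cobweb diagram for an iteration; the resulting point list can be fed to any plotting tool to draw the picture against the curves y = f(x) and y = x.

```python
import math

# Sketch (assumed stand-in for the missing Mathematica code):
# compute the vertices of the cobweb diagram for iterating f from x0.
def cobweb_points(f, x0, steps):
    pts = [(x0, 0.0)]
    x = x0
    for _ in range(steps):
        y = f(x)
        pts.append((x, y))   # vertical segment to the curve y = f(x)
        pts.append((y, y))   # horizontal segment over to the line y = x
        x = y
    return pts

# Cosine iterates spiral in slowly toward the Dottie number (~0.739),
# which makes an interesting picture...
cos_pts = cobweb_points(math.cos, 1.0, 60)
assert abs(cos_pts[-1][0] - 0.7390851332151607) < 1e-9

# ...while square-root iterates rush monotonically to 1.
sqrt_pts = cobweb_points(math.sqrt, 2.0, 60)
assert abs(sqrt_pts[-1][0] - 1.0) < 1e-9
```

Plotting the cosine cobweb shows the staircase spiraling into the fixed point, whereas the square-root cobweb collapses onto 1 almost immediately, which is the answer's point about the cosine making the better picture.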

A nice series of solutions has already been given. It's hard to improve on any of them. But because this is an elementary question, it deserves an elementary answer. (Although "elementary" is subjective, being able to punch buttons on a calculator requires little mathematical sophistication, so we should attempt to minimize the formality and maximize the intuition in the explanation.)

When you begin with a value x on the calculator with x > 1, note that its square root lies between 1 and the midpoint of the interval [1, x].

(Proofs: The midpoint is (x+1)/2. Because ((x+1)/2)^2 - x = ((x-1)/2)^2 >= 0, the midpoint is no less than the square root of x. Non-algebraic version: draw the graphs of the relations x = y^2 and y = (1+x)/2 on the same plot and note that the latter never falls below the former.)
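The algebraic identity behind that bound is easy to spot-check numerically; here is a minimal sketch (my own check, not part of the answer) verifying that 1 < sqrt(x) <= (x+1)/2 for x > 1:

```python
import math
import random

# Sketch: spot-check that for x > 1 the square root lies between 1
# and the midpoint (x + 1) / 2 of the interval [1, x].
random.seed(0)
for _ in range(1000):
    x = 1 + random.random() * 99          # random x in (1, 100)
    mid = (x + 1) / 2
    root = math.sqrt(x)
    assert 1 < root <= mid
    # the identity used in the proof: ((x+1)/2)^2 - x = ((x-1)/2)^2
    assert abs(mid ** 2 - x - ((x - 1) / 2) ** 2) < 1e-9
```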

It follows immediately (by induction, if you want to be formal) that the iterated root, although never falling below 1, always stays below the corresponding iterated average with 1, and the latter obviously converges to 1 because each averaging step halves its distance from 1 (picture it arithmetically or geometrically).

The case 0 < x < 1 reduces to the case 1/x > 1, because Sqrt(1/x) = 1/Sqrt(x).
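The squeeze argument above can be sketched in a few lines of Python (a check I am adding for illustration, with an assumed 20 iterations and starting value 10):

```python
import math

# Sketch: iterate the square root and the "average with 1" side by
# side; the root iterates stay trapped between 1 and the averages.
x = 10.0
root, avg = x, x
for _ in range(20):
    root = math.sqrt(root)
    avg = (avg + 1) / 2
    assert 1 <= root <= avg          # the squeeze at every step

# the average's distance from 1 halves each step, so both converge to 1
assert abs(avg - 1) < 1e-5
assert abs(root - 1) < 1e-5

# the 0 < x < 1 case reduces to 1/x > 1, since sqrt(1/x) = 1/sqrt(x)
y = 0.1
assert abs(math.sqrt(1 / y) - 1 / math.sqrt(y)) < 1e-12
```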

I think a better question is to analyze how the iterated roots of a number x approach 1, depending on x: after n square roots you have x^a with a = 1/2^n, and x^a ≈ 1 + a·ln(x) for small a. Not only does this introduce the log, it gives a natural derivation of e which is easy to remember. I never understood what was "natural" about the natural log until I was an undergraduate, and I think the real value of this sequence is the way it leads so memorably and naturally to the derivation of e.

It's easy to see why you arrive at one from repeatedly taking square roots (as moron answered), since the sequence is strictly decreasing and bounded below by 1. I think a motivated person of almost any age can understand that, as long as it's explained slowly and clearly.

There's a logical gap here. It's consistent with all your assertions, for example, for iterated square roots of x > 2 to converge down to 2 and for iterated roots of x, 1 < x <= 2, to converge down to 1. At a minimum you need to invoke continuity.
–
whuber Aug 26 '10 at 1:01

If you add something about how far towards 1 each step moves, that would close the gap, but I'm too lazy to figure out how to add that.
–
BCS Aug 26 '10 at 1:09