Recursion

A. Recursion works by taking a computation over a large example
and expressing its solution in terms of the same computation over one or
more smaller examples.

B. Any iteration can be implemented as recursion. In fact, in ML, you have
no choice, and in Scheme people will look down their noses if you make any
other choice.

Unfortunately, these statements are mutually contradictory, since iterations
often do not progress from large to small; they often go from
small to large, or between examples of equal size.
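To illustrate, here is a small Python sketch (my own; the notes give none) of an ordinary counting loop written recursively. The recursive argument grows from small toward large; nothing gets "smaller" in any structural sense:

```python
def count_up(i, n):
    """Return the list [i, i+1, ..., n].  The recursion goes from
    small toward large: the argument i grows, it does not shrink."""
    if i > n:
        return []
    return [i] + count_up(i + 1, n)

def count_up_iter(i, n):
    """The same computation as an ordinary loop."""
    result = []
    while i <= n:
        result.append(i)
        i += 1
    return result
```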

Example: Newton's method for solving equations recurs from one value of X to
another value of X. But the new value is "smaller" only in the sense that
it is closer to the solution. There is nothing structurally simpler about it;
it is just another floating-point number.

(If your measure of the "size" of a problem is "How many more iterations
until the loop exits?" then of course all iterations go from large to small.
But that's circular.)
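The Newton example can be sketched in Python (an illustrative version of my own; newton2 itself is not reproduced here), here specialized to square roots. Each call recurs on a new guess that is closer to the answer but in no way a structurally smaller value:

```python
def newton_sqrt(x, guess=1.0, tol=1e-12):
    """Newton's method for sqrt(x).  The recursion passes a new
    floating-point guess; the 'problem' never shrinks structurally,
    the guess just converges toward the solution."""
    better = (guess + x / guess) / 2.0   # one Newton step
    if abs(better - guess) < tol:
        return better
    return newton_sqrt(x, better, tol)
```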

Tail recursion optimization

A further drawback of the recursive encoding of "interactive", besides its
clumsiness, is that the chain of calls gets longer and longer; hence,
the stack gets larger and larger and will eventually exhaust memory.

This can be fixed with tail-recursion optimization:

If the last step of function f is to call itself recursively, and if the
value returned by the call to f is the value returned by the recursive call
(as in newton2 and interactive2), then the activation record
for the first call of f can be replaced by the activation record
for the second call. (Note that their lexical context must be the same.)

The same is true of a call from f to g if the lexical context of f and g is
the same.

So a compiler with tail-recursion optimization can execute interactive2 without
ever blowing the stack. The only disadvantage is a slight overhead.
Functional programming enthusiasts are very proud of this.

Sometimes a recursive program has to be twiddled a bit to make it fit the
tail-recursion condition.
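For instance (an illustrative Python sketch of my own; Python itself does not perform tail-call optimization, so this only shows the shape of the condition), factorial must be twiddled with an accumulator argument before its recursive call is in tail position:

```python
def fact(n):
    """NOT a tail call: after fact(n - 1) returns, we still multiply
    by n, so the caller's activation record must survive the call."""
    if n == 0:
        return 1
    return n * fact(n - 1)

def fact_tail(n, acc=1):
    """A tail call: the value of fact_tail(n - 1, n * acc) IS the value
    returned, so a TCO compiler could reuse the caller's frame."""
    if n == 0:
        return acc
    return fact_tail(n - 1, n * acc)
```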

My own opinion: If the only recursive call in a function is one that satisfies
the tail-recursion condition, then generally it's easier to read if it's
written iteratively.
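As a sketch of that opinion (my own Python example): a tail-recursive function converts mechanically into a loop by turning the parameters into variables and the tail call into an assignment. Here with Euclid's algorithm:

```python
def gcd_rec(a, b):
    """Euclid's algorithm; the recursive call is in tail position."""
    if b == 0:
        return a
    return gcd_rec(b, a % b)

def gcd_loop(a, b):
    """The same function written iteratively: the tail call
    gcd_rec(b, a % b) becomes the assignment a, b = b, a % b."""
    while b != 0:
        a, b = b, a % b
    return a
```

The iterative form says the same thing with one less layer of machinery, which is the sense in which it is easier to read.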

When should you actually use recursion? (E. Davis' opinion)

(Assuming that the PL or the exam question gives you any choice in the matter.)

A. When the algorithm actually works by reducing a large problem
to a smaller
one of the same type, or (especially) when it works by reducing a large
problem to more than one smaller problem of the same type
(divide and conquer).

function f(...)
{ set up a lot of local state;
  recursive call to f;
  more computation involving the state set up before the recursive call;
}

The advantage of recursion in this pattern is that the saving and restoring
of the local state is automatic. If you want to do this without recursion,
you have to set up your own stack, push the state onto it, and later pop it.
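A concrete instance of the pattern (my own illustrative Python): converting an integer to a binary string. The local state (the low digit) is computed before the recursive call and used after it, and the language's call stack saves and restores it for us, one digit per pending call:

```python
def to_binary(n):
    """Binary representation of n >= 0.  The digit n % 2 is local
    state set up before the recursive call and appended after the
    call returns; the call stack holds one digit per pending call."""
    if n < 2:
        return str(n)
    digit = n % 2                           # state set up before the call
    return to_binary(n // 2) + str(digit)   # state used after the call
```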

function f(...)
{ loop
    { if (base condition of recursion) then exitloop;
      set up local state;
      push local state onto stack;
      reduce to the smaller subproblem;
    } /* end first loop */
  loop
    { do end computation;
      if stack is empty then exitloop;
      state := pop stack;
    }
}

This is very rarely worth doing. Perhaps, if the first part of the algorithm
creates a lot of state and only a little of it is used in the
end computation, then by using your own stack you can save only the part
you will need, whereas the recursive encoding saves the whole state. But
you can usually accomplish the same saving recursively with a little
cleverness in the coding.
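The explicit-stack rewrite can be made concrete with the same sort of problem (an illustrative Python sketch of my own): extracting the binary digits of an integer without recursion, pushing the per-level state in a first loop and consuming it in a second.

```python
def to_binary_stack(n):
    """Binary representation of n >= 0 with an explicit stack instead
    of recursion: the first loop pushes the local state (one digit per
    level) while shrinking the problem; the second loop pops the state
    to do the 'end computation'."""
    stack = []
    while n >= 2:              # descend, saving state by hand
        stack.append(n % 2)
        n //= 2
    result = str(n)            # base case of the recursion
    while stack:               # unwind, doing the post-call work
        result += str(stack.pop())
    return result
```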