Helpful ideas and solutions for the Oracle application enthusiast.

Tuesday, June 27, 2006

Recursion vs Iteration

When writing code to do repetitive tasks, the two primary approaches are iteration and recursion. Generally you can use either one interchangeably, but potentially with different performance and complexity.

A recursive function calls itself (possibly more than once), with different parameters, and defines an exit clause that is guaranteed to be reached.
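As a minimal sketch of the two approaches, using Fibonacci (the example discussed in the comments below) — Python is assumed here rather than the post's original language, and the function names are mine:

```python
def fib_recursive(n):
    # Exit clause ("base case"): reached when counting down from any
    # non-negative n. Note a negative argument never hits it, so the
    # function would recurse forever.
    if n == 0 or n == 1:
        return n
    # The function calls itself twice, with smaller parameters.
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n):
    # The same task with a loop: the exit condition is the loop bound,
    # and each value is computed exactly once.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

Both return the same results; the difference in how much work they do is at the heart of the comment thread below.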

The advantages and disadvantages of the two are not always obvious, and you should really take it on a case-by-case basis. When in doubt: test it. But generally:

1. Recursion may be slower, and use greater resources, because of the extra function calls.

2. Recursion may lead to simpler, shorter, easier-to-understand functions, especially for mathematicians who are comfortable with induction formulae.

Either way, one of the most critical aspects of writing a recursive or an iterative function is to have an exit clause (or "base case" for recursion) that is checked at every recursion/iteration and is guaranteed to be reached at some finite point, given any input. Otherwise you will get either:

Infinite Recursion: More and more functions being spawned, never closing, using up resources until there are none left, possibly crashing the system.

Infinite loop: A loop that cycles forever, burning CPU and never completing.

I can illustrate this point with my recursive example above. Do NOT try this, but if you inserted a negative number, what do you think would happen? Infinite recursion.

Now you have not only the basics on iteration and recursion, but, based on the topic I chose for the sample code, knowledge that I saw "Da Vinci Code" recently.

You are presenting two versions of Fibonacci and you are claiming that the performance difference is due to "the extra function calls". That's very far from being a complete explanation, and it can be very confusing for people reading this article.

The main difference is algorithmic. The recursion uses a trivial exponential algorithm (because the recursive calls are overlapping), whereas the iterative one uses a dynamic programming algorithm with linear execution time.

Btw: Fibonacci is the first example of pure algorithmic gain in a dynamic programming course.

In order to build a correct benchmark you must:
- either choose a case where the recursive and iterative versions have the same time complexity (say linear) — for example, the sum of the first n integers;
- or explain that the poor performance of the recursive function from your example comes from the huge algorithmic difference, and not from the lack of performance of the recursive calls.
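The commenter's fair-comparison suggestion, the sum of the first n integers, could be sketched like this (Python assumed, names mine; both versions do one step per integer, so both are linear):

```python
def sum_recursive(n):
    # One recursive call per step: linear time, same as the loop below.
    if n == 0:
        return 0
    return n + sum_recursive(n - 1)

def sum_iterative(n):
    # One loop pass per step: also linear time.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total
```

With the algorithms matched like this, any remaining timing gap really would reflect function-call overhead.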

You should really update this post, because as it stands it amounts to pure disinformation.

I make many points, you feel one of my suggestions is correct but unclear, and you jump to "pure disinformation"? :)

Anyway, thanks for that clarification. The examples were provided to show how the same problem is solved using the different methods, not as benchmarks. But the better performance of the iterative function is a direct result of using iteration. You could never find as good an "algorithmic" solution using recursion: every time you'd get more recursive function calls than iterative loops.

I think this just goes to show that: "you should really take it on a case-by-case basis. When in doubt: test it."

> You could never find as good an "algorithmic" solution using recursion:
> every time you'd get more recursive function calls than iterative loops

Yes, you can do it: use a lookup in an array (or hash table) before doing the recursive call. The size of the array is the largest Fibonacci number we want to compute. In this case, you have n-1 calls instead of 2^n calls (quite a difference, right? ;-)).

Of course, in this case we have the same number of function calls as iterative loops, but there's another overhead: the extra lookups. Still, we get quite simple code, similar to the recursive one.
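That lookup-before-recursing idea could be sketched like this (Python assumed, name mine; a dict stands in for the array or hash table):

```python
def fib_memo(n, cache=None):
    # Look up before recursing: each value is computed at most once,
    # so there are O(n) calls instead of O(2^n).
    if cache is None:
        cache = {}
    if n in cache:
        return cache[n]
    if n == 0 or n == 1:
        cache[n] = n
    else:
        cache[n] = fib_memo(n - 1, cache) + fib_memo(n - 2, cache)
    return cache[n]
```

The code shape stays close to the plain recursive version; only the cache lookup and store are new.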

This can be used as a general technique to obtain the optimal tradeoff between performance and code simplicity.

Do you believe that your provided example is true recursion? Even if so, you seem to be suggesting that you must use programming techniques to make recursion almost as fast as iteration. Which, in the end, is still an agreement with my initial assertions.

> you seem to be suggesting that you must use programming techniques to make recursion almost as fast as iteration

I'm only suggesting that:

1. When you want to compare recursive and iterative algorithms, the only reliable way to get a fair comparison is to look at the *best* recursive and the *best* iterative algorithms. Unfortunately you made a comparison between the worst possible recursive algorithm and a good iterative one, so your comparison cannot lead to good conclusions regarding the "function call overhead".

2. When implementing recursion, one should take into account that there are many ways to do it.

> Which, in the end, is still an agreement with my initial assertions.

in your dreams :-)

> P.S. Testing is more reliable than thinking.

That's not always true. For instance, in your example, one can see immediately that the recursion performs in time 2^n (exponential) whereas the iteration performs in time n (linear). Once you see that, testing is useless (it takes longer and has no added value; it only confirms an obvious difference). When there is no strong evidence of an algorithmic difference, testing becomes more reliable.

And please note that I am not speaking of unit or functional testing, but of performance testing. Unit and functional testing are obviously more reliable than thinking. But when doing performance testing, algorithmic thinking can be a valuable tool.
