I was studying tail call recursion and came across some documentation mentioning that Sun Java doesn't implement tail call optimization.
I wrote the following code to calculate the Fibonacci number in three different ways:
1. Iterative
2. Head Recursive
3. Tail Recursive

The head recursive method does not finish for n > 50; the program looked as if it had hung. Any idea why this could happen?

The tail recursive method took significantly less time than head recursion, and sometimes took even less time than the iterative method. Does that mean Java does some tail call optimization internally?
And if it does, why did it give a StackOverflowError at n > 5000?
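For reference, here is a minimal sketch of the three variants described above (the method names and exact code are my own reconstruction, not the asker's original program):

```java
public class Fib {
    // 1. Iterative: O(n) time, O(1) space.
    static long fibIterative(int n) {
        long a = 0, b = 1;
        for (int i = 0; i < n; i++) {
            long next = a + b;
            a = b;
            b = next;
        }
        return a;
    }

    // 2. "Head" (tree) recursive: calls itself twice per level,
    //    so the number of calls grows exponentially in n.
    static long fibNaive(int n) {
        if (n < 2) return n;
        return fibNaive(n - 1) + fibNaive(n - 2);
    }

    // 3. Tail recursive: the recursive call is the last action,
    //    carrying the running pair (a, b) as accumulators. O(n) calls,
    //    but each call still consumes a stack frame on the JVM.
    static long fibTail(int n) {
        return fibTail(n, 0, 1);
    }

    private static long fibTail(int n, long a, long b) {
        if (n == 0) return a;
        return fibTail(n - 1, b, a + b);
    }
}
```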

You should also observe that fibonacciRecursive is not head recursive, and you've coded it to have a different complexity than the other two. (No, it didn't hang. It just takes roughly twice as long to calculate 51 as 50; ergo, roughly 1024 times as long to calculate 60 as 50.)
–
Mooing Duck Oct 25 '11 at 19:13

4 Answers

No, it does not. The HotSpot JIT compilers do not implement tail-call optimization.

The results you are observing are typical of the anomalies that you see in a Java benchmark that doesn't take account of JVM warmup. For instance, the "first few" times a method is called, it will be executed by the interpreter. Then the JIT compiler will compile the method ... and it will get faster.

To get meaningful results, put a loop around the whole lot and run it a number of times until the timings stabilize. Then discard the results from the early iterations.
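A rough sketch of that kind of measurement loop (the workload and names here are placeholders, not the original benchmark):

```java
public class Warmup {
    // Placeholder workload; substitute the method being benchmarked.
    static long fibIterative(int n) {
        long a = 0, b = 1;
        for (int i = 0; i < n; i++) { long t = a + b; a = b; b = t; }
        return a;
    }

    public static void main(String[] args) {
        // Run the same workload many rounds: early rounds are executed
        // by the interpreter, later ones by JIT-compiled code, so only
        // the later, stabilized timings are meaningful.
        for (int round = 0; round < 10; round++) {
            long start = System.nanoTime();
            long result = 0;
            for (int i = 0; i < 100_000; i++) {
                result += fibIterative(50);
            }
            long elapsed = System.nanoTime() - start;
            System.out.println("round " + round + ": " + elapsed
                    + " ns (result " + result + ")");
        }
    }
}
```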

... why did it give a StackOverflowError at n > 5000?

That's just evidence that there isn't any tail-call optimization happening.

For the first question, what is 2^50 (or something close)? A call for a number N in the recursive Fib function calls it twice (for the prior two numbers). Each of those makes two more calls, etc., so the call count grows on the order of 2^(N-k) (k is probably 2 or 3).
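This exponential growth is easy to observe directly. The sketch below (my own, not from the answer) instruments the naive recursion with a call counter; the count for n works out to 2*fib(n+1) - 1, which grows exponentially as described:

```java
public class CallCount {
    static long calls; // incremented on every invocation

    static long fibNaive(int n) {
        calls++;
        if (n < 2) return n;
        return fibNaive(n - 1) + fibNaive(n - 2);
    }

    static long countCalls(int n) {
        calls = 0;
        fibNaive(n);
        return calls;
    }

    public static void main(String[] args) {
        // Each +10 in n multiplies the number of calls by over 100.
        for (int n = 10; n <= 40; n += 10) {
            System.out.println("n=" + n + " calls=" + countCalls(n));
        }
    }
}
```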

The second question is answered by noting that the second method is a straight, linear recursion. Instead of branching into (N-1) and (N-2), it simply builds up from M=1, M=2, ... M=N. Each step of the way, the previous value is retained for adding. Since it is an O(N) operation, it is comparable to the iterative method; the only difference is how the JIT compiler optimizes it. The problem with recursion, though, is that it consumes stack space for each level of nesting, so you run out of stack space at some limit. It should still generally be slower than the iterative method.

Regarding point 1: computing Fibonacci numbers recursively without memoization leads to a running time that is exponential in n. This goes for any programming language that does not automatically memoize function results (such as most mainstream non-functional languages, e.g. Java, C#, C++, ...). The reason is that the same functions get called over and over again - e.g. f(8) will call f(7) and f(6); f(7) will call f(6) and f(5), so that f(6) gets called twice. This effect propagates and causes an exponential growth in the number of function calls. Here's a visualization of which functions get called:

Thanks Aasmund, you are correct. So the recursive method has a running time of O(c^n), which increases exponentially as n increases.
–
TTP Mar 28 '11 at 0:33


@TTP: Yes. However, this effect isn't really due to the head recursion, but to the fact that the recursive method calls itself twice, and that it ends up getting called many times for the same argument value, and that it "forgets" which values it has already calculated. If you use memoization, you get linear runtime.
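A sketch of the memoized approach the comment refers to (my own illustration; using long, which stays in range up to about n = 92):

```java
public class FibMemo {
    // Cache previously computed values; 0 marks "not yet computed",
    // which is safe here because fib(n) == 0 only for n == 0, and
    // that case is handled by the n < 2 base case before the lookup.
    static long fib(int n, long[] memo) {
        if (n < 2) return n;
        if (memo[n] != 0) return memo[n];
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo);
        return memo[n];
    }

    // Each fib(k) is computed at most once, so the runtime is linear.
    static long fib(int n) {
        return fib(n, new long[n + 1]);
    }
}
```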
–
Aasmund Eldhuset Mar 28 '11 at 0:38