I have no idea what language that is, but that doesn't matter. I do know that the tree-like recursion pattern you've chosen (shown in the C-like code in the comments) is rather wasteful. It recalculates lower values multiple times, which is unnecessary, and the higher n is, the more redundant calculations it performs. The Fibonacci sequence is linear; you don't need tree-like recursion to compute it.

It's best to count up to the desired index of the sequence, rather than working your way down and then collapsing dozens of method calls. A loop is sufficient.

Joel B Fant"An element of conflict in any discussion is a very good thing. Shows everybody's taking part, nobody left out. I like that."

I'm sorry that I can't be of any help with the assembly. I just hope that this is just a school assignment and (if so) that they teach you afterwards why recursion is not a good solution in the context of the Fibonacci sequence.


I don't see how using recursion in such a situation can possibly be a bad route, unless of course you could prove this to me somehow?

Take a look at this LISP code which is a recursive version of the Fibonacci sequence.

You'll notice that the recursive version is far more expressive and much more easily read and understood if one knows at least the basics of recursion. Neither version is faster than the other; they are for the most part equal. And if this were done in C, or some other language, the recursive version would still look nicer.

Because it results in a lot of unnecessary recursive calls. Draw it out in a diagram, see how many times fib(1) gets called:

[code]fib(5)
|-- fib(4)
|   |-- fib(3)
|   |   |-- fib(2)
|   |   |   |-- fib(1)
|   |   |   `-- fib(0)
|   |   `-- fib(1)
|   `-- fib(2)
|       |-- fib(1)
|       `-- fib(0)
`-- fib(3)
    |-- fib(2)
    |   |-- fib(1)
    |   `-- fib(0)
    `-- fib(1)[/code]
The funny thing is, whatever nth number of the sequence you want, fib(n) will end up calling fib(1) exactly as many times as that number itself. The eighth number in the sequence is 21, so that's how many times fib(1) will get called, not to mention the repeated (and therefore useless) calls to fib(2), fib(3), and so on. And even if you write the condition so that fib(1) never gets called (as in your Lisp version), you can't do the same for the other calls.

(edit: Oops, it doesn't cull fib(1) calls; I misread it.)

One's not faster than the other? Have you benchmarked the iterative version against the tree-recursive form with large values of n? I was also under the impression that the Lisp compiler can often optimize recursive functions into iterations (is it just the compiler, or does the interpreter do that too? I forget).

Mind you, I'm not saying that recursion shouldn't be used for the Fibonacci sequence. Just not this form of it. You could make a recursive function that starts at the beginning of the sequence (rather than 'starting' at the unknown value you want and working your way down, and not having an answer until every path of the tree has been travelled).

It's like painting lines on a road, but leaving the bucket of paint at the beginning instead of taking it with you. With every line, you have to keep going farther and farther away from the bucket.

The elegance of the algorithm is more important than the readability of the code, in my opinion.

Last edited by LyonHaert; April 8th, 2006 at 03:07 PM.
Reason: spelling and correction


Yegg, I understand that readability and elegance of the code are tenets of Lisp, and that good practice in these usually leads to good algorithms in Lisp, too. I also understand that recursion is used a lot in Lisp programs, but I recall that it is usually tail recursion, which is linear. However, Lisp is a high-level language. I would argue that there is no such readability in assembly, so the point is moot.

On the point of speed: the main thing low-level languages are good for is speed, so you want the fastest algorithm, even in C, even if it's less readable. And this particular algorithm isn't even good for Lisp.

I tried the two you gave me (fibI for iterative, fibR for recursive). fibR(20) was well under a second, but from there it just starts climbing (fibR(30) took 11-12 seconds), while fibI(100000) takes under 5 seconds.

The one advantage Lisp has in these calculations is the lack of a limit on the size of an integer. Even a 64-bit unsigned integer is only good up to the 93rd number in the Fibonacci sequence (I did some C# benchmarks, fibR(49): 739590ms, while fibI(49): 0ms). No such limitation in Lisp, it calculates fibI(1000000) just fine, even if it takes a few minutes.

Unfortunately, we've kind of hijacked this thread, and nobody seems to know assembly 'round here. But the reason I made my original comments on the recursive algorithm is that, unless that particular algorithm is required, ok_good might be able to make a working program with a different algorithm... if the bug is in that part of his program.

Comments on this post

Yegg` agrees
: DIdn't think about that earlier.


Yeah, if I'd used bignums in the C# benchmarks, the numbers might have been a bit higher.

Even in Lisp/Haskell/nice-syntax-language, would you sort an array by checking all N! permutations of the elements just because it "looks good"? A line has to be drawn between speed and elegance. The simplest solution might not always be the best one.

Want a fib(n) with bloody damn SPEED? If you complain about the "magic numbers" in the code, saying how they reduce "readability" and "elegance", go ahead, use your "superior" recursive version.

For the Fibonacci problem in Haskell, it is possible to use tree-recursion-style code without the combinatorial overhead, because lazy evaluation makes memoization nearly free: define the results as a lazily evaluated data structure, and any request for a previously calculated value becomes a table lookup. Of course, that technically is changing the algorithm, but it does give rather good performance without losing expressiveness. Sure, you could memoize explicitly in other languages (SICP gives this specific example of it in Scheme, Section 3.3.3, exercise 3.27), but it wouldn't be quite so elegant anymore...

In any case, you're correct that the equation version is fastest, and in a case like this, magic numbers aren't really an issue: the values are part of the identity and don't really have a separate conceptual meaning. However, the Fibonacci function is primarily used to demonstrate tree recursion (how it works, where and when you would use it, and why you wouldn't in most cases) and as an example of exponential O(phi^n) performance vs. the O(n) linear recursive version vs. the O(log n) successive-squaring version (again, this is exactly what is done in section 1.2 of SICP). The Fibonacci sequence itself doesn't come up very often in general programming, but it is related to things that do, such as tree problem-space searches.

[EDIT: I should have read the comment more carefully. Still, the point about memoization is worth mentioning.]

This thread is now in deep hijack, but I thought I'd point out that the (phi^n - (1-phi)^n)/sqrt(5) formula is not great for computation because it requires real number operations and results in a real number, unlike the other algorithms, which can produce a result in essentially any form. If you actually want an integer result or want to know a Fibonacci number mod some integer, then that formula is not suitable for large values of n. (For small values of n, it will work, though.) A better way is the following recursive algorithm. In the interest of being vaguely on-topic at least for this forum, the following is implemented in the "Other Programming Language" Ada.

For comparison, compiling with alternative 1 (tree recursion) with argument 40 tends to report around 43,000,000 clock ticks, taking about 43 seconds. Compiling with alternative 2 (linear iterative) tends to report 7 clock ticks for argument 30. The highest possible input argument, 2^31-1, is feasible, but it takes time, reporting around 16,000,000 clock ticks (16 seconds). With alternative 3, every allowable input (0 up to 2^31-1) is handled instantaneously (11-12 clock ticks). In terms of big-O notation, alternative 1 is O(phi^n) time (not O(n^phi), which would be considerably better) and O(n) space; alternative 2 is O(n) time and O(1) space (in practice O(log(n)) space, though); alternative 3 is O(log(n)) time and O(log(n)) space. Basically, each successive implementation is astronomically better than the previous one.

Due to technical difficulties (I'm still learning the language), I didn't include the round(phi^n/sqrt(5)) algorithm, but as I said, it is in a way fundamentally different from the other algorithms and is less flexible.