Of the implementations of Fibonacci

February 6, 2013

Fibonacci’s sequence

My first post generated more interest than expected, but even
more rewarding, some people pointed out some of my mistakes,
corrected me, and actually taught me a couple of things. I’ll try to keep up the rhythm and publish at least once a week.

I was once asked in an interview about efficient ways to compute Fibonacci. Below is what I tried to explain during the interview, but it probably sounded like some kind of incoherent babbling about matrices to my interviewer.

I was talking about a little trick my computer science teacher taught my class, and which does not seem to be very well known.

If you feel like you already know everything about the Fibonacci sequence’s implementations, jump to the last implementation. If you don’t feel comfortable with Fibonacci, tag along as we make our way through different implementations.

What’s Fibonacci anyway?

So the Fibonacci sequence Fn is defined as follows:

    F_0 = 0
    F_1 = 1
    F_n = F_{n-1} + F_{n-2}    for n ≥ 2

If you’re not familiar with such notation, it basically says
that the sequence starts 0, 1 and each next value is defined as the sum of the two preceding numbers.

The first numbers are therefore:

0, 1, 1, 2, 3, 5, 8, 13, ...

Recursive Implementation

The simplest way to implement the Fibonacci sequence is to
just write down its definition.
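In Python, that direct transcription might look like this (a minimal sketch):

```python
def fibo(n):
    # Direct transcription of the definition:
    # F(0) = 0, F(1) = 1, F(n) = F(n-1) + F(n-2).
    if n == 0:
        return 0
    elif n == 1:
        return 1
    return fibo(n - 1) + fibo(n - 2)
```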

Plain and simple, right? Yes, but as is often the case with recursive
algorithms, the complexity of this algorithm is huge.
Fun fact: the number of calls Cn required
to compute fibo(n) actually behaves like the Fibonacci
sequence itself. It is very easy to show that

    C_0 = C_1 = 1
    C_n = C_{n-1} + C_{n-2} + 1    for n ≥ 2
Bad news: Fn is exponential. We will see an exact analytical formula for the n-th term of the Fibonacci sequence in “Analytical formula”. For the moment, we can convince ourselves that Fibonacci is exponential by noticing that

    F_n = F_{n-1} + F_{n-2} ≥ 2 F_{n-2}

Hence, by induction,

    F_n ≥ 2^{n/2 - 1}    for n ≥ 2

so Fn grows at least exponentially, and so does the number of calls Cn.
Let’s see how we can work out a better algorithm.

Memoization

Most of the time, the no-brainer in optimization is to cache results.
Fibonacci is an obvious candidate for memoization. Mentally visualize the recursive calls of Fibonacci, and you’ll notice that the function is called many times with the same arguments. If you find recursion confusing, just notice that computing fibo(n) requires an exponential number of calls whose arguments take values in [0, n]. There must be some calls with the same arguments, right?

Memoization is a technique which consists in keeping the results of your calls in a cache. Subsequent calls with the same arguments return the cached result.
In Python, as long as the arguments are good candidates to be used as the keys of a hash table, we can do that transparently with a decorator.

```python
def memoize(f):
    cache = {}
    def aux(*args, **kargs):
        # dictionaries are not hashable,
        # we transform kargs in a tuple
        # of (k,v) items.
        k = (args, tuple(sorted(kargs.items())))
        if k not in cache:
            cache[k] = f(*args, **kargs)
        return cache[k]
    return aux

@memoize
def fibo(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return fibo(n - 1) + fibo(n - 2)
```

Dynamic Programming

What is nice with memoization is that results are cached on the fly, without requiring any change to the function. What is not so nice, however, is the memory required: in order to compute fibo(n), we have been storing (n+1) (key, value) pairs.

What we do naturally when computing the first numbers of the sequence by hand is actually much more efficient. You iteratively look at the two preceding numbers, sum them up, and write down the new number. To compute the next number, you only ever have to look at the two preceding numbers.
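That iterative procedure might be sketched like this (the post refers to it later as fibo_dynamic):

```python
def fibo_dynamic(n):
    # Keep only the last two values of the sequence:
    # linear time, constant memory.
    a, b = 0, 1  # (F_0, F_1)
    for _ in range(n):
        a, b = b, a + b  # slide the window by one
    return a
```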

We now have a solution which is linear in computational time and constant in memory.

Analytical Formula and the golden ratio

Let’s now try to find a closed-form expression for Fibonacci.
To do so, we’ll recycle an idea from fibo_dynamic. Let’s introduce Xn, a sequence of vectors containing two consecutive Fibonacci numbers Fn, Fn+1:

    X_n = (F_n, F_{n+1})

The recurrence then becomes

    X_{n+1} = A X_n,    with A = [[0, 1], [1, 1]]

so that X_n = A^n X_0, with X_0 = (0, 1). Diagonalizing A yields the eigenvalues φ = (1 + √5)/2 (the golden ratio) and ψ = (1 − √5)/2, which leads to Binet’s formula:

    F_n = (φ^n − ψ^n) / √5
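Binet’s closed-form formula, F_n = (φ^n − ψ^n)/√5 with φ = (1 + √5)/2 and ψ = (1 − √5)/2, can be evaluated directly with floats. A sketch (the name fibo_binet is mine):

```python
import math

def fibo_binet(n):
    # Evaluate Binet's formula with floating-point arithmetic.
    # Accurate only for small n: rounding errors grow with n.
    sqrt5 = math.sqrt(5.0)
    phi = (1.0 + sqrt5) / 2.0  # golden ratio
    psi = (1.0 - sqrt5) / 2.0
    return round((phi ** n - psi ** n) / sqrt5)
```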

Smarter, better, stronger

We found a solution with logarithmic complexity, but it is based on floating-point multiplication. Our algorithm gives us back
floating-point results that we then have to round or truncate.
Floating-point numbers have limitations, both in precision and in range.

Also, I didn’t enjoy going through the diagonalization of the matrix at all. If I were to write an algorithm to compute the n-th term of such a sequence, I wouldn’t want to go through the trouble of
writing an algorithm to diagonalize the matrix.

Finally, this is a bit crazy: why do I have to use floats, when everything in my problem is screaming integer?

Let’s get back to the matrix exponentiation:

    X_n = A^n X_0,    with A = [[0, 1], [1, 1]]

The trick here is to apply the fast exponentiation algorithm directly to the matrix, without diagonalizing it. To do so, we need
to define matrices and matrix multiplication, but this is way less cumbersome than what we did before, isn’t it?

Here is now the Fibonacci implementation based on
integer matrix fast exponentiation.
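A sketch of such an implementation, using exponentiation by squaring on 2×2 integer matrices (the helper names mat_mult, mat_pow, and fibo_matrix are mine). It relies on the identity A^n = [[F(n-1), F(n)], [F(n), F(n+1)]] for A = [[0, 1], [1, 1]]:

```python
def mat_mult(a, b):
    # Product of two 2x2 integer matrices: 8 integer multiplications.
    return [[a[0][0] * b[0][0] + a[0][1] * b[1][0],
             a[0][0] * b[0][1] + a[0][1] * b[1][1]],
            [a[1][0] * b[0][0] + a[1][1] * b[1][0],
             a[1][0] * b[0][1] + a[1][1] * b[1][1]]]

def mat_pow(a, n):
    # Fast exponentiation by squaring: O(log n) matrix products.
    result = [[1, 0], [0, 1]]  # identity matrix
    while n > 0:
        if n % 2 == 1:
            result = mat_mult(result, a)
        a = mat_mult(a, a)
        n //= 2
    return result

def fibo_matrix(n):
    # F(n) sits in the top-right corner of A^n.
    return mat_pow([[0, 1], [1, 1]], n)[0][1]
```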

We are doing 8 times more multiplications in this version, but these are
integer multiplications.

Moreover, Python handles big integers like a champ.
You will now be able to compute fibonacci(100000) efficiently.

Where I lied

Actually, everything here is a lie. The truth is that big integer multiplication shouldn’t be considered constant time, so the complexities described here do not reflect actual computational time.

Thanks to Tordek for pointing out an error on reddit. Thanks to Frank and Hameer for telling me about the error in the closed formula of Fibonacci. Thanks to Mogo for telling me about the error in Xn and for clicking on an advertisement.