The Nth Fibonacci Number in O(log N)

This is O(polylog N) in arithmetic operations, but not on an actual computer. On an actual computer it's O(multiplication of N-bit numbers), which in Python is N^1.6-ish. That doesn't make it less cool though.
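For concreteness, here's a minimal Python sketch of one standard O(log N)-arithmetic-operations version, fast doubling (equivalent to repeatedly squaring the 2x2 matrix); whether this matches the article's exact code is an assumption, but the complexity argument is the same:

```python
# Fast doubling: F(2k) = F(k) * (2*F(k+1) - F(k)),
#                F(2k+1) = F(k)^2 + F(k+1)^2.
def fib(n):
    """Return F(n), with F(0) = 0 and F(1) = 1."""
    def pair(k):
        # Returns (F(k), F(k+1)).
        if k == 0:
            return (0, 1)
        a, b = pair(k // 2)
        c = a * (2 * b - a)    # F(2m), where m = k // 2
        d = a * a + b * b      # F(2m + 1)
        return (c, d) if k % 2 == 0 else (d, c + d)
    return pair(n)[0]

print(fib(10))   # 55
print(fib(100))  # 354224848179261915075
```

Each level halves n and does a constant number of big-integer multiplications, hence O(log N) arithmetic operations, with the bit cost dominated by the last few multiplications of N-bit numbers.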

Taking the modulus could be O(1) if the base is convenient, but I get your point.

Not sure why you'd think the constant would be O(z), though; if you'd store all the results it would be O(1). But even if you don't store them, you can (very roughly) bound the length of the period by z^2 (you can actually bound it by 6z, but that's not even that important), so the highest Fibonacci number you'll need to calculate is F(z^2), which can be done in O(log(z^2)) = O(log(z)) using the algorithm OP posted.
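A rough sketch of the store-all-the-results idea (helper names are mine, assuming z >= 2): build one full Pisano period of F mod z by scanning until the pair (0, 1) reappears, after which every query is an O(1) lookup:

```python
def pisano_table(z):
    """One full period of F(n) mod z; assumes z >= 2."""
    table = [0, 1]
    while True:
        table.append((table[-1] + table[-2]) % z)
        if table[-2] == 0 and table[-1] == 1:
            return table[:-2]  # drop the repeated (0, 1)

def fib_mod_lookup(n, table):
    # F(n) mod z is periodic in n with period len(table).
    return table[n % len(table)]

table = pisano_table(10)           # period 60 for z = 10
print(fib_mod_lookup(100, table))  # last digit of F(100): 5
```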

Can someone in the know explain to me why we can't just find the closed form solution? The Fibonacci sequence is a linear recurrence with constant coefficients, so such a closed form exists. Is the issue one of numerical stability?

Right, I thought I remembered seeing that at some point. I should be more clear: given that we can find a constant-time solution to the problem posed in the article, is there any advantage to using the author's approach? Is the solution simply not well known?

You can take the matrix Q and diagonalize it, i.e. find a diagonal matrix D and another matrix P such that Q = PDP^-1. This is done by calculating Q's eigenvalues and eigenvectors, which you learn about in Linear Algebra. Now it's easy to calculate Q^n, since Q^n = PD^nP^-1, and to find D^n you just raise each diagonal entry to the nth power. This will give you the closed-form expression mentioned. See, for example, Wikipedia.
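Here's a short sympy sketch of that, if you want to see it concretely (sympy's diagonalize returns P and D with Q = PDP^-1, and everything stays exact, so no rounding):

```python
from sympy import Matrix, simplify

Q = Matrix([[1, 1], [1, 0]])
P, D = Q.diagonalize()  # D's diagonal holds the eigenvalues (1 +- sqrt(5))/2

n = 10
Qn = (P * D**n * P.inv()).applyfunc(simplify)  # the sqrt(5)'s cancel
print(Qn)  # Matrix([[89, 55], [55, 34]]) = [[F(11), F(10)], [F(10), F(9)]]
```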

Contiguous-block arrays have constant access time. AFAIK many programs (and co-processors) "compute" functions like sin and cos this way. It would of course be limited by actual physical memory, so we couldn't actually store an unbounded Fibonacci series, but in theory, if we had a lookup table, we'd be able to "compute" in constant time.

Here's another way to think about it: the problem with the explicit formula is that you're raising the irrational numbers (1+sqrt(5)) and (1-sqrt(5)) to the power n. As you've said, when done smartly this is O(log(n)). But now you're stuck dipping into the real numbers to get the final integer answer, which can complicate things when making a practical implementation for very large n. We can be smarter than this: multiplication of numbers of the form a + b*sqrt(5), where a, b are integers, is closed:

(a + b sqrt(5))(c + d sqrt(5)) = (ac + 5 bd) + sqrt(5)(ad + bc)

Using this fact you can come up with ways of computing (1+sqrt(5))^n and (1-sqrt(5))^n using only integer arithmetic, manipulating the two components a, b. The most compact way of expressing it will use 2 by 2 matrices (think of (a, b) as a vector) and start to look quite similar to the method discussed in the linked article.
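A sketch of that idea in Python (helper names are mine), representing a + b*sqrt(5) as the integer pair (a, b) and exponentiating by repeated squaring:

```python
def mul_sqrt5(x, y):
    """(a + b*sqrt(5)) * (c + d*sqrt(5)) as integer pairs."""
    a, b = x
    c, d = y
    return (a * c + 5 * b * d, a * d + b * c)

def pow_sqrt5(x, n):
    """x**n by repeated squaring of (a, b) pairs."""
    result = (1, 0)
    while n:
        if n & 1:
            result = mul_sqrt5(result, x)
        x = mul_sqrt5(x, x)
        n >>= 1
    return result

def fib(n):
    _, pb = pow_sqrt5((1, 1), n)   # (1 + sqrt(5))^n = pa + pb*sqrt(5)
    _, qb = pow_sqrt5((1, -1), n)  # (1 - sqrt(5))^n = qa + qb*sqrt(5)
    # F(n) = (phi^n - psi^n)/sqrt(5) with phi, psi = (1 +- sqrt(5))/2;
    # comparing sqrt(5) parts gives F(n) = (pb - qb) / 2^n, an exact integer.
    return (pb - qb) >> n

print(fib(10))  # 55
```

(By symmetry qb = -pb, so one exponentiation would actually do; both are kept here to mirror the description above.)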

Not that you would ever really need F(N) for extremely large N, unless it's for a programming exercise, in which case you'd probably have a modulus. Finding, say, F(N) mod 10^10 (the last ten digits) using this method only takes O(log N) time.
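For instance, here's a sketch of the mod-10^10 variant using fast doubling (one common O(log N) method; whether it's exactly the article's is an assumption), with every product reduced mod m so the numbers never outgrow ten digits:

```python
def fib_mod(n, m):
    """F(n) mod m via fast doubling, reducing at every step."""
    def pair(k):
        # Returns (F(k) mod m, F(k+1) mod m).
        if k == 0:
            return (0, 1)
        a, b = pair(k // 2)
        c = a * (2 * b - a) % m    # F(2j) mod m, where j = k // 2
        d = (a * a + b * b) % m    # F(2j + 1) mod m
        return (c, d) if k % 2 == 0 else (d, (c + d) % m)
    return pair(n)[0]

print(fib_mod(10**18, 10**10))  # last ten digits of F(10^18)
```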

As others have said, simply use the closed form solution. That can be done symbolically, so there will be no rounding errors.

If r is one of the eigenvalues of the matrix, then the other is 1-r. Let f denote the square root of 5, so f can be set equal to 2r-1. Then the nth term is

(r^n - (1-r)^n)/f

Do this on a CAS where r is defined as the solution of r^2 - r - 1 = 0. So what happens is the above expression is evaluated as a polynomial, except r^2 is constantly being replaced with r+1. Also, large exponents are done by successive squaring, which takes log(n) time.

Writing r^n as a polynomial a + br by successive squaring and constantly replacing r^2 with r+1 is a good idea. However, actually plugging r into the polynomial is a bad idea, since the coefficient b grows extremely large. If b is ~10^100 you will need ~100 digits of r.

Edit: note we don't have O(log(n)) because we still need to multiply huge numbers.

In fact, by applying the Cayley-Hamilton theorem, it's pretty easy to see that b=F(n) is the nth Fibonacci number. So the polynomial representation is still a good idea since it's less computationally expensive than matrix multiplication. For deep linear recurrences like Lagged Fibonacci, this is a much needed speedup.
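A quick sketch of that representation (names are mine): multiply in Z[r]/(r^2 - r - 1) by rewriting r^2 as r + 1, exponentiate by squaring, and read off F(n) as the coefficient of r, since r^n = F(n-1) + F(n)*r:

```python
def mul_r(x, y):
    """(a + b*r)(c + d*r) reduced with r^2 = r + 1."""
    a, b = x
    c, d = y
    # ac + (ad + bc) r + bd r^2 = (ac + bd) + (ad + bc + bd) r
    return (a * c + b * d, a * d + b * c + b * d)

def fib(n):
    result, base = (1, 0), (0, 1)  # the elements 1 and r
    while n:
        if n & 1:
            result = mul_r(result, base)
        base = mul_r(base, base)
        n >>= 1
    return result[1]               # the coefficient of r is F(n)

print(fib(10))  # 55
```

Each mul_r costs 4 integer multiplications (3 with a Karatsuba-style trick) versus 8 for a naive 2x2 matrix product, which is the saving mentioned above.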

Or you could just diagonalize the matrix before exponentiating it (meaning you only have to exponentiate the eigenvalues): M^(q-1) = P * A^(q-1) * P^-1, where P is a matrix of the eigenvectors and A is a diagonal matrix of the eigenvalues.

However, you might run into problems with arithmetic precision since the eigenvalues are irrational: (1+sqrt(5))/2 and (1-sqrt(5))/2

> However, you might run into problems with arithmetic precision since the eigenvalues are irrational

Which is exactly why this isn't done. What you're suggesting trades 8 integer multiplications (7 if you use Strassen's algorithm for multiplying 2x2 matrices) for 2 floating-point multiplications. Seeing as this algorithm is only particularly useful when N is very large anyway, this is a terrible thing to do.

You know, it's kind of absurd that I had to prove this in an algebra test in second year at uni. And I had no idea that the final result was related to Fibonacci numbers. Mind = blown (though we had to diagonalize).

The closed form doesn't really help you that much. If you want the actual integer value of F(n) (decimal representation), then you absolutely cannot calculate it in constant time. Nor can you even calculate it in logarithmic time.

The best I think anyone can do is O(n log^2 n) -- at least that's the best I can come up with.
