No idea, I never use Python. But the "python" command here uses python2. After a while I was wondering why it was not finishing...

Now you know why I put days or months on it. I thought it could be a good approach because it is only one calculation, but each variable contains a lot of significant digits and that takes time. It is by far slower than the integer version.

For fibo(478496) you need to calculate with a floating-point variable holding 1 million digits, plus some extra digits to prevent truncation.

I would have thought, to be absolutely certain the rounding error doesn't contaminate the integer part of the result, that

extra digits >= 2 log(n)/log(2)

You appear to get away with fewer, which might mean my bound is overly pessimistic. It would be interesting to test more values to see.

Damn right. I'm not a Numerics guy by any stretch, but that really makes my head spin.

Me too.

As far as I can tell the ratio between successive Fibonacci numbers converges on the Golden Ratio, give or take a factor of root 5 somewhere.

That turns a series of summations into some kind of geometric series of multiplications: the ratio raised to the power n.

Not sure why the correct integer result pops out of this, rather than an "approximately about" real number kind of result.
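One way to see why the integer pops out: Binet's formula gives F(n) = φ^n/√5 plus a correction term (−1/φ)^n/√5 that shrinks towards zero, so rounding the real-number result to the nearest integer recovers the exact value, provided the working precision is high enough. A minimal sketch with Python's decimal module (the function name and the guard-digit count are my own choices, not code from the thread):

```python
from decimal import Decimal, getcontext
from math import log10, sqrt

def fib_binet(n, guard=20):
    # F(n) has roughly n*log10(phi) digits; add guard digits so
    # accumulated rounding noise stays out of the integer part.
    getcontext().prec = int(n * log10((1 + sqrt(5)) / 2)) + 1 + guard
    sqrt5 = Decimal(5).sqrt()
    phi = (1 + sqrt5) / 2
    # Binet: F(n) = round(phi**n / sqrt5); the (-1/phi)**n term
    # vanishes, which is why rounding yields the exact integer.
    return int((phi ** n / sqrt5).to_integral_value())

print(fib_binet(30))   # 832040
print(fib_binet(100))  # 354224848179261915075
```

The guard digits here play the role of the "some digits to prevent truncation" mentioned above.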

Well yes, there's determination and there's "where's the padded armless jacket and the big strong folk that know how to dress you in it safely".

You are doing that "You're holding it wrong" thing again.

Evidently we were talking at cross purposes for a long time as I struggled to get a result out of Smalltalk.

I just wanted to do a simple thing. Run a program that took a single integer parameter in and put a single integer result out. In fact the input parameter was optional. Such a simple thing that we have twenty other languages/dialects/run times that do that with no fuss, even if one has to install them from source code.

In order to do that simple thing in the Smalltalk language you wanted me to buy into a whole graphical user interface, a whole different environment. Let's face it, a whole different operating system, as that's what Smalltalk was/is.

I would not say that is an example of using the correct powertool. It's more like bringing in a huge excavator to scratch an itch!

By the way, I never did get an explanation as to why the Smalltalk I wrote that runs under GNU Smalltalk does not run under Squeak Smalltalk.

I just wanted to do a simple thing. Run a program that took a single integer parameter in and put a single integer result out. In fact the input parameter was optional. Such a simple thing that we have twenty other languages/dialects/run times that do that with no fuss, even if one has to install them from source code.

Now *I* see that latter part as at least as much work as downloading Squeak, maybe more. I think we're (duh!) coming from very different daily lives, where we each see our normal daily experience as 'normal'. Which is hilarious if you compare it to most of the humans around...

I see a situation where someone asks "I want to draw a square and save a bitmap of it", which I answered by suggesting a Paint tool, downloading it, using a couple of menu options and done - whereas you wanted something more like "here's an OpenSCAD file, run it with `openscad -file ./thisFile.opn`". Which would mean downloading that, maybe connecting to a github thing, compiling etc. Which seems most sensible depends on a lot of personal baggage and needs. In this case the Paint tool is programmable enough that it can be changed to do what you wanted as well as its normal job.

By the way, I never did get an explanation as to why the Smalltalk I wrote that runs under GNU Smalltalk does not run under Squeak Smalltalk.

Really? Oh, well it's not really any different to the Python thing of 2.7 & 3.whatever. Time, preferences of the people developing the system, perceived needs, all that. Different libraries get built/chosen and sometimes they don't work identically or include exactly the same things.

GNU Smalltalk was put together by some people that wanted something scripty and they emphasised using just textfiles etc. They chose some 'interesting' syntactical sugar to handle reading in code from those files; no idea why, or what their criteria might have been.
Most other Smalltalk systems have stuck with using the original 'chunk file' format that my old boss Glenn Krasner came up with, because it worked well enough for what it was meant to do. They also stick with the saveable image idea - I'm sure I mentioned that "Smalltalk is saved but not born again".

Other code sharing tools have been developed to work different ways - the common Monticello tool uses chunk file stuff wrapped up in a zip file and an agreed convention for the structure of files within that. Another tool that aims to make use of GitHub storage has been using other strategies in the hope of using git versioning for artefacts as well as plain code. One system uses a server Smalltalk image to serve up code in precompiled lumps so that the client system can be tiny and not need the compiler or many other tools; it does some very cool stuff with talking to html or javascript or Cairo to produce the UI when needed.

I'm not a Numerics guy by any stretch, but that really makes my head spin.

As far as I can tell the ratio between successive Fibonacci numbers converges on the Golden Ratio, give or take a factor of root 5 somewhere.

Rather than numerical, here is an algebraic approach using the golden ratio along with exact big-number arithmetic. By simple algebra, we have

(a + b√5)(c + d√5) = (ac + 5bd) + (ad + bc)√5

so it is easy to compute powers of the golden ratio just by keeping track of the rational part and the coefficient of √5. A convenient simplification comes from the value of the golden ratio itself. To see this, take powers of ϕ to obtain ϕ^n = (L(n) + F(n)√5)/2, where L(n) is the n-th Lucas number, so the Fibonacci number can be read off directly as the coefficient of √5.

Unfortunately, the code is not very fast because I left out the Karatsuba algorithm and didn't bother with any micro-optimizations similar to the ones in the FreeBasic fibo.bas program.

Since each divide-and-conquer step when calculating ϕ^n uses 4 or 8 big-number multiplications, even in the best case this algebraic method based on the golden ratio will be intrinsically 2 to 4 times slower than the doubling formula.
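As a sketch of that bookkeeping, here is the square-and-multiply idea in exact integer arithmetic, tracking ϕ^n = (a + b√5)/2 as the pair (a, b). The names are mine and this is not the posted Pascal code:

```python
def mul_half(x, y):
    # ((a+b√5)/2) * ((c+d√5)/2) = ((ac+5bd) + (ad+bc)√5)/4.
    # Both numerators are even (a and b always share parity here),
    # so halving them keeps the result in the same (.. )/2 form.
    a, b = x
    c, d = y
    return ((a * c + 5 * b * d) // 2, (a * d + b * c) // 2)

def fib_phi(n):
    # Square-and-multiply on phi = (1 + √5)/2, stored as the pair (1, 1).
    # phi**n = (L(n) + F(n)√5)/2, so F(n) is the √5 coefficient.
    result = (2, 0)          # the constant 1, i.e. (2 + 0√5)/2
    base = (1, 1)            # phi itself
    while n:
        if n & 1:
            result = mul_half(result, base)
        base = mul_half(base, base)
        n >>= 1
    return result[1]

print(fib_phi(10))  # 55
```

Each call to mul_half costs four big-integer multiplications, which is consistent with this method trailing the doubling formula.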

No idea, I never use Python. But the "python" command here uses python2. After a while I was wondering why it was not finishing...

Now you know why I put days or months on it. I thought it could be a good approach because it is only one calculation, but each variable contains a lot of significant digits and that takes time. It is by far slower than the integer version.

For fibo(478496) you need to calculate with a floating-point variable holding 1 million digits, plus some extra digits to prevent truncation.

I removed the print function to measure the speed of the algorithm. It's about 80 times slower for fibo(478496), although the decimal module is compiled C code. Using Python3:

I'm not sure; they both end by printing a big integer. It seems the conversion from the big-decimal representation to a printable integer is a lot faster than the conversion from whatever internal big-integer representation is used.

This calls for an adaptation of the original Python algorithm from regular integers to big decimals....
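For comparison, the integer-only doubling method looks roughly like this (a generic fast-doubling sketch, not the actual fibo.py from the thread):

```python
def fib_pair(n):
    # Returns (F(n), F(n+1)) using the doubling identities
    #   F(2k)   = F(k) * (2*F(k+1) - F(k))
    #   F(2k+1) = F(k)**2 + F(k+1)**2
    if n == 0:
        return (0, 1)
    a, b = fib_pair(n >> 1)
    c = a * (2 * b - a)      # F(2k)
    d = a * a + b * b        # F(2k+1)
    if n & 1:
        return (d, c + d)
    return (c, d)

def fib(n):
    return fib_pair(n)[0]

print(fib(30))  # 832040
```

Porting this to decimals would mean replacing the integer operands with Decimal values at sufficient precision, as discussed above.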

You have v declared as unsigned long long, but you are multiplying two unsigned long values. That produces an unsigned long result (just the lower half of the full product), which is then promoted and stored in v; the upper half of the result is thrown away.

The return type of a multiply is the same type as its operands, so to get an unsigned long long answer you need to promote one of the operands (C will automatically promote the other to match) to unsigned long long. With optimisations on, gcc can detect that the upper halves of the operands are always zero and just emit a single 64b = 32b x 32b instruction.
Note: on ARM the smallest mul operates on 32 bits (not counting NEON), so any smaller type (like short) will be promoted to 32 bits for multiplication. You can get away with int = short × short there, but it isn't guaranteed to work on other architectures.

I'm not sure; they both end by printing a big integer. It seems the conversion from the big-decimal representation to a printable integer is a lot faster than the conversion from whatever internal big-integer representation is used.

This calls for an adaptation of the original Python algorithm from regular integers to big decimals....

ad 1) Perhaps one of the rare cases where things run much faster on a 64-bit OS.
ad 2) Faster with printing than without? Looks strange.

I'm not sure; they both end by printing a big integer. It seems the conversion from the big-decimal representation to a printable integer is a lot faster than the conversion from whatever internal big-integer representation is used.

From the name one might imagine the bigdecimal subroutine library uses base 10^k internally. The underlying C code was almost surely optimised for 64-bit integers, which would explain why it runs so slowly on Raspbian, where 64-bit division is emulated in software. It seems likely the intended use was for financial data such as the national debt.

The difficulty with floating point is that all the multiplications needed to produce the quantity φ^n must be performed with more than million-digit precision for the final result to retain at least that many digits. The divide-and-conquer way of computing an integer power is demonstrated by the goldpow function of the Pascal code. As is evident, either one or two multiplications are performed per recursion. Assuming the worst case yields an upper bound of

2 log(n)/log(2)

on the total number of bigdecimal multiplications needed to compute the n-th Fibonacci number.

Since the lower bound also scales as log(n), the runtime T(n) needed to compute F(n) ≈ φ^n satisfies

T(n) = O(n^α log(n))

assuming an O(n^α) multiplication algorithm is used.

There is no logarithmic factor when using the doubling formulas: the operand sizes double at each step, so the costs form a geometric series dominated by the final multiplication, giving O(n^α) overall. Therefore, provided α for the underlying multiplication routines is the same, it is theoretically guaranteed the floating-point method will run slower for n large enough.

When the computer has enough memory and is fast enough that computing with n large is a natural activity, then the use of efficient algorithms becomes more important. Stay tuned for the billion-digit Raspberry-Pi-4 Fibonacci challenge where the value α=log(3)/log(2) of Karatsuba multiplication is much too slow.

But, but... in the end both fibo.py and fibo_phi.py are printing the final result from the same Python integer data type.

I think fibo_phi is printing the final result as a Decimal type after rounding the fractional part to the nearest integer. Although the result quacks like a duck, the giveaway is that it prints much faster. If you don't mind the colour pink, see also

realloc(fibs, sizeof(Fibs_struct)*fibs_size);
Do you really want to realloc fibs and throw away its new location if it got moved?

Then what do I do if one fibo request is only 1 and the other is 10000000?
