I am trying some examples from Rosetta Code and encountered an issue with the provided Ackermann example. When running it "unmodified" (I only replaced the UTF-8 variable names with Latin-1 ones), I get (similar output, but now copyable):

$ perl6 t/ackermann.p6
65533
19729 digits starting with 20035299304068464649790723515602557504478254755697...
Cannot unbox 65536 bit wide bigint into native integer
in sub A at t/ackermann.p6 line 3
in sub A at t/ackermann.p6 line 11
in sub A at t/ackermann.p6 line 3
in block <unit> at t/ackermann.p6 line 17

After removing the proto declaration in line 3 (by commenting it out):

$ perl6 t/ackermann.p6
65533
19729 digits starting with 20035299304068464649790723515602557504478254755697...
Numeric overflow
in sub A at t/ackermann.p6 line 8
in sub A at t/ackermann.p6 line 11
in block <unit> at t/ackermann.p6 line 17

What went wrong? The program doesn't allocate much memory. Is the natural integer kind of limited?

In the code from the Ackermann function task I replaced 𝑚 with m and 𝑛 with n for better terminal interaction when copying errors, and tried commenting out the proto declaration. I also asked Liz ;)

2 Answers

Please read JJ's answer first. It's breezy and led to this answer which is effectively an elaboration of it.

TL;DR: A(4,3) is a very big number, one that cannot be computed in this universe. But Rakudo will try. As it does, you will blow past reasonable limits related to memory allocation and indexing if you use the caching version, and limits related to numeric calculations if you don't.

A(4,3) is not computable in practice

Even for small inputs (4,3, say) the values of the Ackermann function become so large that they cannot be feasibly computed, and in fact their decimal expansions cannot even be stored in the entire physical universe.

So computing A(4,3).say is impossible (in this universe).
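To see how fast the blow-up happens, here is a memoized Ackermann sketch (in Python for illustration; the Rosetta Code task uses the same recurrence), together with the closed forms that show where A(4,3) ends up. The closed form A(4,n) = 2↑↑(n+3) − 3 (a tower of n+3 twos, minus 3) is a standard identity, not something taken from the question:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def ack(m, n):
    """Classic two-argument Ackermann-Peter recurrence."""
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

print(ack(2, 3))               # 9  -- still tame
print(ack(3, 3))               # 61 -- still tame
# Closed form: A(4,n) = 2^2^...^2 (a tower of n+3 twos) - 3, so:
print(2**16 - 3)               # A(4,1) = 65533, the program's first output line
print(len(str(2**65536 - 3)))  # A(4,2): the 19729-digit number in the output
# A(4,3) = 2**(2**65536) - 3: its *exponent* alone has 19729 digits.
```

A(4,1) and A(4,2) match the two lines the program did manage to print before dying.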

It must inevitably lead to an overflow of even arbitrary precision integer arithmetic. It's just a matter of when and how.

While computing A(4,2) the indexes (m and n) remain small enough that the computation completes without overflowing the default array's indexing limit.

This limit is a "native" integer (note: not a "natural" integer). A "native" integer is what P6 calls the fixed width integers supported by the hardware it's running on, typically a long long, which in turn is typically 64 bits.
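The hard ceiling of a signed 64-bit native integer can be illustrated (in Python, for want of a runnable P6 snippet) with struct, which packs values into fixed-width machine types and refuses anything that doesn't fit, much like the failed unboxing in the error message:

```python
import struct

# A signed 64-bit integer tops out at 2**63 - 1 = 9223372036854775807.
packed = struct.pack('<q', 2**63 - 1)   # fits: exactly 8 bytes
print(len(packed))                      # 8

try:
    struct.pack('<q', 2**63)            # one past the limit
except struct.error:
    print("does not fit in a 64-bit native integer")
```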

But in trying to compute A(4,3) the algorithm generates a 65536-bit (8192-byte) wide integer index. Such an integer could be as big as 2^65536, a 19,729 decimal digit number. But the biggest index allowed is a 64 bit native integer. So unless you comment out the caching line that uses an array, for A(4,3) the program ends up throwing the exception:
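The numbers line up exactly with the error message and the program's own output; checking in Python (for illustration):

```python
# While computing A(4,3) = A(3, A(4,2)), the inner result
# A(4,2) = 2**65536 - 3 becomes the next index n.
n = 2**65536 - 3
print(n.bit_length())   # 65536 -- the width named in the exception
print(len(str(n)))      # 19729 -- the digit count the program printed
print(str(n)[:10])      # 2003529930 -- the prefix the program printed
```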

Cannot unbox 65536 bit wide bigint into native integer

Limits to allocations and indexing of the default array type

As already explained, there is no array that could be big enough to help fully compute A(4,3). In addition, a 64 bit integer is already a pretty big index (9,223,372,036,854,775,807). That said, P6 can accommodate larger arrays so I'll discuss that briefly below because the theoretical possibilities might be of interest for other problems.

What if someone wanted to do larger indexing?

P6 lets you plug in custom array implementations, so it's possible that someone could write one that supports larger integer indexing than the default array type does (presumably by layering logic on top of the underlying platform's instructions).

If such an alternative were created and called BigArray, then the caching line could be rewritten to use a BigArray in place of the default array type.

Again, this still wouldn't be enough to store interim results for fully computing A(4,3) but my point was to show use of custom array types.
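BigArray is hypothetical, but the idea behind it can be sketched with a hash-based cache (shown in Python for illustration): a hash table accepts arbitrary-precision keys because it hashes them rather than using them as memory offsets, so the 65536-bit index that broke the array cache is no problem as a key:

```python
# A dict keyed on (m, n) has no native-int index-width limit.
cache = {}
huge_n = 2**65536 - 3        # the very index that broke the array cache
cache[(3, huge_n)] = 0       # placeholder value; stores without complaint
print((3, huge_n) in cache)  # True
```

As the answer says, though, no amount of indexing flexibility helps here: it's the cached values themselves that become unstorable.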

Numeric overflow

When you comment out the caching you get:

Numeric overflow

P6/Rakudo do arbitrary precision arithmetic. While this is sometimes called infinite precision it isn't (can't be) actually infinite but is instead, well, "arbitrary", which in practice in computing means "sane" for some definition of "sane".

This classically means running out of memory to store a number. But in Rakudo's case I think there's an attempt to keep things sane by switching from a truly vast Int to a Num (a floating point number) before completely running out of RAM. But then computing A(4,3) eventually overflows even a double float.
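A Num is an IEEE 754 double, whose range ends near 1.8 × 10^308; any value Ackermann reaches on the way to A(4,3) sails far past that. The double's hard ceiling can be seen in Python (for illustration):

```python
import sys

print(sys.float_info.max)   # 1.7976931348623157e+308, the largest double
print(float(2**1023) > 0)   # True: still representable as a double

try:
    float(2**1024)          # one doubling past the maximum exponent
except OverflowError:
    print("too large for a double")
```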

So while the caching version blows up sooner, the code is bound to blow up eventually anyway, and the failure then manifests either as an out-of-memory error or, as in this case, a numeric overflow error.

I waited some time to get an opportunity. Now I have to use CPU time and memory to simulate stress in different ways, and P6/Rakudo helps a lot. But Ackermann dies much too fast ;)
– Sno, Feb 6 at 7:49

"But Ackermann dies much too fast" :) Couple ideas... 1 Find or implement an algo or three that match algos and data structures from Big O Cheat Sheet. 2 Visit freenode IRC chat channel for MoarVM by clicking here and then START button, and typing .ask timotimo Have you got any suggestions for P6 code that would stress cpu and memory in interesting ways? or something like that.
– raiph, Feb 6 at 10:41

Thanks for the offer - for the moment I have some algos which do the job ;) Some pure CPU, some memory & CPU, some I/O & CPU. The examples provided by the Perl6 community are manifold. I just wanted to know why Ackermann dies in such a (for me) incomprehensible way - to learn something about P6 and not make the same mistake twice ...
– Sno, Feb 6 at 15:10

So creating a BigArray might not be helpful anyway. You'll have to create your own ** (exponentiation) too, one that always works with Int, but you seem to have hit the (not so infinite) limit of the infinite-precision Ints.