I'm struggling with how to calculate an expression like $n^{915937897123891}$, where $n$ could be any number between 1 and the exponent itself.

I'm trying to program this in C#, and therefore performance is key.

I tried

BigInteger value = 1;
for (long i = 0; i < 915937897123891; i++)
    value = n * value;

But that is obviously very, very slow.

I wonder if it is possible to achieve this using bitwise operators.
I would also like to know what steps I could take to improve the way I calculate this.
Additionally, I would like to know how this could be solved without a calculator, just with (a lot of?) paper and pen.

I want to say use exponentiation by squaring, but first, are you sure you can even represent a number that big in whatever C# data type you are using??
– Rahul, Jul 28 '12 at 18:46

@Rahul Narain: I use the BigInteger class (framework 4.0), but it supports powers only with an Int32 as exponent, and I need support for a number bigger than Int32. BigInteger itself can hold numbers limited only by your system's memory.
– Mixxiphoid, Jul 28 '12 at 18:48

Okay, but remember that $915937897123891^{915937897123891}$ has about $915937897123891 \cdot \log_{10} 915937897123891 \approx 1.4 \times 10^{16}$ decimal digits. You'll need a few petabytes just to hold that number in memory.
– Rahul, Jul 28 '12 at 18:51

2 Answers

In some situations, the time analysis is settled simply by observing that the direct algorithm uses $\Theta(N)$ multiplies while the square-and-multiply algorithm uses only $\Theta(\log N)$, where $N$ is the exponent. Here, however, the numbers grow large during the calculation, so the cost of each multiplication becomes significant and must be accounted for.
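
For reference, here is a minimal square-and-multiply sketch in C# (my own illustration, assuming System.Numerics.BigInteger; it also shows the bitwise operators the question asks about):

using System.Numerics;

static BigInteger PowBySquaring(BigInteger n, long exponent)
{
    BigInteger result = 1;
    while (exponent > 0)
    {
        if ((exponent & 1) == 1)   // low bit set: fold the current power of n into the result
            result *= n;
        n *= n;                    // square for the next bit
        exponent >>= 1;            // move to the next bit of the exponent
    }
    return result;
}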

Let $N$ be the exponent and $M$ be the bit size of $n$. The direct algorithm has to, in its $k$-th multiply, compute an $M \times kM$ product (that is, the product of an $M$-bit number by a $kM$-bit number). Using high school multiplication, this costs $kM^2$, and the overall cost is

$$ \sum_{k=1}^{N-1} k M^2 \approx \frac{1}{2} N^2 M^2 $$

With $N$ as large as it is, that's a lot of work! Also, because one of the factors is only $M$ bits, high school multiplication is essentially the only algorithm available for these products -- the faster multiplication algorithms mentioned below can't be used to speed things up.
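
To get a feel for the scale (illustrative numbers of my own, not from the answer): $N \approx 9.2 \times 10^{14}$, and since $n$ is at most $915937897123891$, $M \le 50$ bits, giving

$$ \frac{1}{2} N^2 M^2 \approx \frac{1}{2} \left( 9.2 \times 10^{14} \right)^2 \cdot 50^2 \approx 10^{33} $$

bit operations for the direct algorithm.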

In the repeated squaring approach, the largest squaring is $NM/2 \times NM/2$. The next largest is $NM/4 \times NM/4$, and so forth. The total cost of the squarings is

$$ \sum_{i=1}^{\infty} \left( \frac{NM}{2^i} \right)^2 = \frac{1}{3} N^2 M^2 $$

Similarly, the worst case for the multiplications is one $M \times NM$ multiply, one $M \times NM/2$ multiply, and so forth, which add up to

$$ \sum_{i=0}^{\infty} M (M N) 2^{-i} = 2 M^2 N $$

The total cost is then in the same ballpark either way, and there are several other factors at work that would decide which is faster.

However, when both factors are large, there are better multiplication algorithms than the high school multiplication algorithm. For the best ones, multiplication can be done in essentially linear time (really, in $\Theta(n \log n \log \log n)$ time), in which case the cost of the squarings would add up to something closer to $MN$ time (really, $MN \log(MN) \log \log(MN)$), which is much, much, much faster.

I don't know if BigInteger uses the best algorithms for large numbers. But it certainly uses something at least as good as Karatsuba, in which case the cost of the squarings adds up to something in the vicinity of $(NM)^{1.585}$.

The link you posted really helped a lot. It is still slow, so I will keep working on it (probably my implementation is wrong).
– Mixxiphoid, Jul 29 '12 at 16:46

This is a very misleading analysis, because the actual cost of doing a multiplication is the most significant factor of the runtime.
– Hurkyl, Jan 20 '13 at 20:53

@Hurkyl: You have a fair point. I didn't want to make any extraneous assumptions regarding the time complexity of the multiplication algorithm. If you suggest a better analysis, I can edit it into the answer (or I can mark it community wiki and you can add it yourself).
– Rahul, Jan 20 '13 at 21:17

I think if you assume that multiplication is linear time for OP's algorithm (unrealistically fast), and that repeated squaring merely uses Karatsuba, you can still see the improvement. IIRC, actually adding up the costs isn't too painful if you assume either that multiplication costs $n^e$ or that it costs $n \log n$. (IIRC, if $A \gg B$, then a product whose factors are of size $A$ and size $B$ works out to cost something like $A \, M(B)/B$, where $M(x)$ is the cost of multiplying things of size $x$.)
– Hurkyl, Jan 20 '13 at 21:34

And it's interesting to note that if you use high school multiplication, both algorithms actually take the same amount of time: repeated squaring does fewer operations, but they are much more expensive and that exactly compensates. (repeated squaring benefits much, much more from good multiplication algorithms)
– Hurkyl, Jan 20 '13 at 21:51

As Rahul Narain notes in the comments, your computer almost certainly doesn't have enough memory to store $2^{915937897123891}$, let alone $915937897123891^{915937897123891}$. In practice, the one context where exact calculations with such large exponents occur is modular exponentiation. This can be done either using the naïve repeated-multiplication method in your question, or more efficiently using the repeated squaring method described by Hurkyl.

In any case, the crucial trick is to reduce the intermediate values by the modulus after every iteration, like this:

BigInteger value = 1;
for (long i = 0; i < exponent; i++)
    value = (n * value) % modulus;

This keeps your intermediate values from continually increasing. For more details, see e.g. the Wikipedia article I linked to above.
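
As a sketch of both options (my own addition with illustrative names, not code from the original answer): .NET's built-in BigInteger.ModPow does exactly this, and a hand-rolled square-and-multiply version that reduces at every step could look like:

using System.Numerics;

static BigInteger ModPowBySquaring(BigInteger n, BigInteger exponent, BigInteger modulus)
{
    BigInteger result = 1;
    n %= modulus;                      // reduce the base up front
    while (exponent > 0)
    {
        if (!exponent.IsEven)          // low bit of the exponent is set
            result = (result * n) % modulus;
        n = (n * n) % modulus;         // square and reduce
        exponent >>= 1;
    }
    return result;
}

For the exponent in this question, BigInteger.ModPow(n, 915937897123891, modulus) gives the same result in one call.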