That's because every iteration divides 1 by n!, and that divisor grows extremely fast.

In "Surely You're Joking, Mr. Feynman!" (great book, you should read it if you didn't already) R. P. Feynman said:

"One day at Princeton I was sitting in the lounge and overheard some mathematicians talking about the series for e^x, which is 1 + x + x^2/2! + x^3/3! ... Each term you get by multiplying the preceding term by x and dividing by the next number. For example, to get the next term after x^4/4! you multiply that term by x and divide by 5. It's very simple."

Yes, it's very simple indeed. So why hasn't it been applied in such a great, modern, and popular language? Is it because people just forget about simple solutions these days?

The same simple rule can be applied to BigMath.exp(), whose current implementation uses the straightforward term-by-term evaluation of the power series.
Here is a Feynman-style pure-Ruby version of BigMath.exp (the C ext version seems pointless now anyway):
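A minimal sketch of what such an implementation can look like (an illustration, not the actual patch; fast_exp, its stopping rule, and the precision handling are my own):

```ruby
require "bigdecimal"

# Sketch of exp(x) via Feynman's rule: each new term is the previous
# term multiplied by x and divided by the next integer, so no power
# or factorial is ever computed from scratch.
def fast_exp(x, prec)
  x    = BigDecimal(x.to_s)
  eps  = BigDecimal("1e-#{prec}") # stop once terms are negligible
  term = BigDecimal(1)            # first term: x^0 / 0! = 1
  sum  = BigDecimal(1)
  n    = 0
  until term.zero? || term.abs < eps
    n   += 1
    term = term.mult(x, prec).div(n, prec) # term * x / n
    sum  = sum.add(term, prec)
  end
  sum
end
```

Each iteration costs one multiplication and one division at the working precision, instead of rebuilding x**n and n! every time.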

Having a fast exp() also allows us to speed up BigMath.log(), especially for high-precision calculations.

The area hyperbolic tangent power series used for the logarithm converges faster when the argument x is close to 1.
Moreover, for x > 10 performance degrades significantly, roughly in proportion to x.

So the first step is to narrow the "no decimal shift" domain to just 0.1 <= x <= 10.
The current implementation of BigMath.log uses the range 0.1 <= x < 100.
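The decimal-shift reduction behind this range limit can be sketched as follows (decimal_shift is a hypothetical helper name, not part of BigMath):

```ruby
require "bigdecimal"

# Write x as m * 10**k with 0.1 <= m <= 10, so that only log(m) needs
# the (slow) power series; k * log(10) is a cheap multiplication with
# a precomputable constant.
def decimal_shift(x)
  x = BigDecimal(x.to_s)
  k = 0
  while x > 10
    x = x / 10
    k += 1
  end
  while x < BigDecimal("0.1")
    x = x * 10
    k -= 1
  end
  [x, k] # log(original) == log(x) + k * log(10)
end
```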

But this is just a prerequisite.

The real performance boost comes from the following rule:

Suppose y ~ log(x), where y is calculated at much lower precision than we actually need.
We can then find such an A:

A = x / exp(y)

which is very close to 1.

Now we can use it to calculate the logarithm to the full precision from log(x) = y + log(A). Since A is close to 1, the series for log(A) converges in just a few terms.
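Putting the rule together, here is a sketch of the refinement on top of the stock BigMath (refined_log and the choice of rough precision are my own illustration):

```ruby
require "bigdecimal"
require "bigdecimal/math"

# Rough-then-refine logarithm: compute y ~ log(x) cheaply at low
# precision, then A = x / exp(y) lands very close to 1, where the
# log series converges in only a few terms.
def refined_log(x, prec)
  x     = BigDecimal(x.to_s)
  rough = prec / 4 + 2                            # much lower working precision
  y     = BigMath.log(x, rough)                   # cheap rough estimate
  a     = x.div(BigMath.exp(y, prec + 5), prec + 5) # A = x / exp(y), A ~ 1
  y + BigMath.log(a, prec)                        # log(x) = y + log(A)
end
```

Note that y itself may be quite inaccurate; the identity log(x) = y + log(x / exp(y)) holds for any y, so all the error is absorbed into the fast-converging log(A) term.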