First of all, don't expect this approximate series to give reasonable results for x > 1.0 (or less than -1, of course); there are theorems that prohibit that. Also, you need to compile at the highest optimization level for this kind of application.

I actually tested your code, along with some improvements of my own, to see whether it actually outperforms the built-in log() function, and it does, by 40% (with optimization at max). With no optimizations during compilation, your method outperforms the built-in function by about 25%, but I made a slightly better version which outperforms the built-in function by about 75%. That version is this (it simply reduces the number of multiplications to a minimum):
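Something along these lines — a sketch of the minimal-multiplication idea, assuming the series in question is the Maclaurin series for ln(1 + x) (treat it as an illustration, not necessarily character-for-character what I benchmarked):

```cpp
#include <cassert>
#include <cmath>

// Sketch: the Maclaurin series for ln(1 + x), valid for -1 < x <= 1,
// evaluated in Horner form so each term costs a single multiplication
// and there are no calls to pow():
//   ln(1+x) = x - x^2/2 + x^3/3 - ...
//           = x * (1 - x*(1/2 - x*(1/3 - ...)))
double log1p_series(double x, int order = 80)
{
    double acc = 0.0;
    for (int k = order; k >= 1; --k)
        acc = 1.0 / k - x * acc;
    return x * acc;
}
```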

With no optimization, I get:
built-in function: 10 million evaluations in 386 milliseconds.
your versions: 10 million evaluations in 280 milliseconds.
my improved version: 10 million evaluations in 100 milliseconds.

However, with optimizations turned on to the max (-O3), I get:
built-in function: 10 million evaluations in 10 milliseconds.
your versions: 10 million evaluations in 6 milliseconds.
my improved version: 10 million evaluations in 6 milliseconds.

Which goes to show that the compiler can do a lot more optimizations than you can. Trust your compiler, don't waste your time doing micro-optimizations like this, unless you have solid evidence that you need to do so.

BTW, I also turned my version of the function into a tail recursion of arbitrary order of approximation. The result is that, with highest optimization and with the order known as a compile-time constant, it can compute the series to any order 40% faster than the built-in log function, with the same precision (at order 80). But, of course, it only works between -1 and 1; beyond that it diverges, as expected from the math theorems that prohibit accurate approximation of the log() function beyond a small radius around 1. This is probably why the built-in log is slower: it has to perform range checks and special transformations to be able to compute the result for any value. From that perspective, I would use the built-in function unless you are absolutely sure the argument is very close to 1 (or x = 0, in your function), because the higher the order of approximation, the more violent the divergence beyond x = 1.0.
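A sketch of how such a compile-time-order version can look (an illustration of the technique, not my exact benchmarked code; it needs C++17 for if constexpr):

```cpp
#include <cassert>
#include <cmath>

// The order is a template parameter, so the compiler sees it as a
// compile-time constant and can fully unroll the tail recursion.
// Computes ln(1 + x) for -1 < x <= 1. Requires C++17.
template <int Order>
double log1p_fixed(double x, double acc = 0.0)
{
    if constexpr (Order == 0)
        return x * acc;   // base case: apply the final Horner step
    else
        return log1p_fixed<Order - 1>(x, 1.0 / Order - x * acc);
}
```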

Could the reason he's "wasting his time" be to learn to program, not to outdo the compiler? There are many reasons to reprogram a built-in function -- the first of which is to learn, not to replace. Even in K&R they "wasted their time" by programming built-in functions -- in multiple ways.

I imagine educational purposes fit into the "unless you have solid evidence that you need to do so" clause.

Maybe it is to learn that trying to outdo the compiler is more often than not a waste of time. I think that is a very valuable lesson; it makes you focus on the important things. I agree that if you are learning the basics of writing functions, rewriting built-in math functions is as good an exercise as any (especially since you have a "correct" function to compare your results against). But citing "for the purpose of learning" to justify anything is a bit lame, because everything can be justified by its learning value, so it goes without saying, as far as I'm concerned. And learning value is very subjective, so I prefer to exclude it from my judgement of the usefulness of doing something in programming. So, when I say "waste of time", it's implied that I mean "waste of time (except if it has learning value to you)"; you could attach that caveat to every one of my statements about the usefulness of anything in programming.

Reimplementing a built-in math function will teach you, in order:
0) how to compute a math expression.
1) how to write a function.
2) how to call a function.
3) how to test a function for correctness.
3a) basic issues of numerical analysis (round-off errors, numerical stability, etc.)
4) that built-in functions are very efficient compared to hand-rolled code.
5) that beating the performance of a built-in function is very difficult.
6) that the compiler is far better than you are at optimizing code.
7) that micro-optimizations (like writing special, more efficient versions of built-in functions) are better left to the compiler, unless there is strong evidence that the compiler's best efforts are not good enough and that performance is critical with these functions.
8) don't optimize prematurely.

Considering the OP is only at step 0, I may have been carried away up the learning ladder by tackling steps 4 to 8.

What you are looking for is a method for calculating the inverse of
the exponential function. If you don't want to use the library exponential,
you can implement the Taylor expansion, which is rapidly convergent
for all values of x, or even try to accelerate it by getting a good
value for e (= 2.7182818...) and multiplying up the integer part of x,
while using the Taylor expansion for the fractional part of x.
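A sketch of that scheme (the 20-term cutoff and the stored constant for e are my choices, not part of the original suggestion):

```cpp
#include <cassert>
#include <cmath>

// Split x into integer and fractional parts, handle the integer part by
// repeated multiplication with a stored value of e, and use the Taylor
// series (which converges very fast on [0, 1)) for the fractional part.
double exp_taylor(double x)
{
    const double e = 2.718281828459045;
    int n = static_cast<int>(std::floor(x));
    double f = x - n;                      // fractional part, in [0, 1)

    // Taylor series for exp(f): sum of f^k / k!
    double term = 1.0, sum = 1.0;
    for (int k = 1; k <= 20; ++k) {
        term *= f / k;
        sum += term;
    }

    // Multiply in e^n (divide when n is negative).
    double result = sum;
    for (int i = 0; i < n; ++i) result *= e;
    for (int i = 0; i > n; --i) result /= e;
    return result;
}
```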

Then you need to invert it. There are lots of methods; a simple
bisection method should work fine, and there are more clever methods that
you can read about in, e.g., Numerical Recipes (yes, I know it has lots
of bugs, but I've used several of NR's 1-dim root solvers, and they
work fine). They generally make some assumption about the nature of the
function (e.g. polynomial, or inverse polynomial) and interpolate from there.
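For illustration, a minimal bisection inversion of the exponential — solving exp(x) = y on a bracket [lo, hi] that you supply (the iteration count is my choice):

```cpp
#include <cassert>
#include <cmath>

// Solve exp(x) = y for x, given a bracket [lo, hi] with
// exp(lo) <= y <= exp(hi). Each iteration halves the bracket, so
// ~60 iterations pin the root down to full double precision.
double log_by_bisection(double y, double lo, double hi)
{
    for (int i = 0; i < 60; ++i) {
        double mid = 0.5 * (lo + hi);
        if (std::exp(mid) < y)
            lo = mid;
        else
            hi = mid;
    }
    return 0.5 * (lo + hi);
}
```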

You will need to bracket the root for most of them. A good bracket
for the root can be had by e.g. finding the pair of integers x_+ and x_-
such that exp(x_+) > exp(x) > exp(x_-). Or, you can use log base 2, whose
integer part can be calculated directly by counting the number of right-shifts
until you get to 1, and then using the fact that log_2(y) = log_e(y)/log_e(2),
i.e. log_e(y) = log_2(y) * log_e(2). log_e(2) = 0.693..., or you can find this
one by inversion also...
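A sketch of that shift-counting trick, for y >= 1 (the function name is mine):

```cpp
#include <cassert>

// For y >= 1, count how many right-shifts bring the integer part of y
// down to 1. The count n satisfies 2^n <= y < 2^(n+1), which is the
// integer part of log_2(y).
int floor_log2(double y)
{
    unsigned long long m = static_cast<unsigned long long>(y);
    int n = 0;
    while (m > 1) {
        m >>= 1;
        ++n;
    }
    return n;
}
```

With n = floor_log2(y), the pair n * 0.693... and (n + 1) * 0.693... brackets log_e(y).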

BTW, Newton-Raphson would work fine for inverting the
exponential here: starting below the root it always overshoots,
while starting above the root it always undershoots
(due to convexity), so after at most one iteration you are
always on the high side, edging toward the solution with quadratic
convergence.
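In code, the Newton update is particularly tidy because exp is its own derivative (a sketch; the iteration cap and tolerance are my choices):

```cpp
#include <cassert>
#include <cmath>

// Newton-Raphson for exp(x) = y. With f(x) = exp(x) - y the update
//   x <- x - (exp(x) - y) / exp(x)
// simplifies to
//   x <- x - 1 + y * exp(-x)
// Convexity of exp guarantees monotone convergence from above after at
// most one overshoot.
double log_by_newton(double y, double x0 = 0.0)
{
    double x = x0;
    for (int i = 0; i < 50; ++i) {
        double step = 1.0 - y * std::exp(-x);
        x -= step;
        if (std::fabs(step) < 1e-15)
            break;
    }
    return x;
}
```

Note that a starting guess far below the root makes the first overshoot large, and the iteration then walks back roughly one unit per step; the bracketing trick above gives a much better x0.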

One more thing you can do, btw, is extend the Taylor expansion
solution that you have to the full positive real line, by
dividing the argument by two, successively, until the number is
between 1 and 2, and then adding back n ln 2, where n is the number
of divides you did (implement it by multiplying by 0.5, since multiplies
are cheaper than divides). Since you might still have to go to
very high order in the Taylor expansion when the reduced argument is
close to 2, you can then divide by smaller numbers, like 1.2, to get it
closer to the basin of relatively rapid convergence. You just need to
calculate ln 2 = 0.693... and the log of any other numbers you divide
out by...
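Putting that together, a sketch of the range-reduced version (the series order of 200 is my choice; as noted, arguments reduced to just below 2 still converge slowly, which is where the extra divide by 1.2 would help):

```cpp
#include <cassert>
#include <cmath>

// Halve x (by multiplying with 0.5) until it lies in [1, 2), run the
// ln(1 + u) series on the remainder, and add back n * ln(2). The
// doublings handle x < 1, so this works for any x > 0.
double log_reduced(double x)
{
    const double ln2 = 0.6931471805599453;
    int n = 0;
    while (x >= 2.0) { x *= 0.5; ++n; }   // multiply: cheaper than divide
    while (x < 1.0)  { x *= 2.0; --n; }

    // ln(1 + u) series on u = x - 1, now in [0, 1), in Horner form.
    // Convergence is slow when u is close to 1 (i.e. x close to 2).
    double u = x - 1.0, acc = 0.0;
    for (int k = 200; k >= 1; --k)
        acc = 1.0 / k - u * acc;

    return n * ln2 + u * acc;
}
```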