What you said was "Why does lnX! = XlnX - X?". Now you tell us not only that you did NOT mean "ln X! = X ln X - X", but also that you already KNOW the answer to your (unstated) question. What was your purpose in posting that?

I am referring to the Stirling approximation (sorry, I forgot to add that at the end of my question)...
I saw that equation in the "Advanced Engineering Mathematics" book by Kreyszig as part of the solution to a problem...
But what I wonder is: where does the Stirling approximation come from?


[tex]\log(x!)=\sum_{n=1}^x \log(n) \sim \int_0^x \log(t) dt=x\log(x)-x[/tex]
where ~ here means "is asymptotic to" for large x,
that is, the integral becomes a good approximation of the sum as x becomes large.
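A quick numerical sketch of that claim (my own check, not from the thread) compares the exact sum [itex]\sum_{n=1}^x \ln n[/itex] with the integral approximation x ln x - x:

```python
import math

# Compare log(x!) = sum of log(n) for n = 1..x with the approximation
# x*log(x) - x, and watch the relative error shrink as x grows.
for x in (10, 100, 1000):
    exact = sum(math.log(n) for n in range(1, x + 1))  # log(x!)
    approx = x * math.log(x) - x                       # integral of log(t) on (0, x)
    rel_err = abs(approx - exact) / exact
    print(x, round(exact, 3), round(approx, 3), rel_err)
```

The relative error drops from about 14% at x = 10 to well under 0.1% at x = 1000, which is what "asymptotic" means here.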

You can approximate the sum by an integral. If you draw a graph, the sum [itex]\sum_{n=1}^{x}\ln n[/itex] is equal to the area of x rectangles, each of width 1, with heights ln(1), ln(2), ..., ln(x).
So you can approximate this area by the integral [itex]\int_1^x \ln t \, dt[/itex]. Drawing a picture may help.

It is a Riemann sum: we partition (0, x) (we assume here x is a natural number) into
[0,1], [1,2], [2,3], ..., [x-2, x-1], [x-1, x]
and choose as the point of evaluation for each interval its right boundary.
We can consider one term in the Riemann sum as an approximation to the integral over that interval:
[tex]\log(n) \sim \int_{n-1}^n \log(x)dx=\log(e^{-1}(1+\frac{1}{n-1})^{n-1}n)[/tex]
Clearly this will be a good approximation if n is large and not so good if n is small. Thus the approximation over (0, x) cannot make up for its poor start, but the relative error gets better and better, so we have asymptotic convergence. The absolute error will never be small, but the relative error will. Since x! grows rapidly, we often do not mind the absolute error being high (or moderate) so long as the relative error is low.
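The per-interval identity above can be checked numerically (a sketch of my own, not from the thread): the integral of log over [n-1, n] matches the closed form, and approaches log(n) as n grows, since (1 + 1/(n-1))^(n-1) tends to e.

```python
import math

# Verify: integral of log(x) over [n-1, n] equals
# log(e^(-1) * (1 + 1/(n-1))^(n-1) * n), using the antiderivative x*log(x) - x.
for n in (2, 10, 1000):
    integral = (n * math.log(n) - n) - ((n - 1) * math.log(n - 1) - (n - 1))
    closed_form = math.log(math.exp(-1) * (1 + 1 / (n - 1)) ** (n - 1) * n)
    # For large n the integral is close to log(n) itself.
    print(n, integral, closed_form, math.log(n))
```

At n = 2 the integral (about 0.386) is a poor substitute for log(2) (about 0.693); at n = 1000 the two agree to about three decimal places, matching the "poor start, good tail" description above.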

Suppose in measuring a distance of 100 meters, I make an error of 10 cm.

The absolute error is 10 cm. The relative error is that "relative" to the entire measurement: 10 cm/100m = 0.1 m/100m= 0.001 (and, of course, has no units).

There is an Engineering rule of thumb: when you add measurements, the absolute errors add. When you multiply measurements, the relative errors add.

That is, if I measure distance y with absolute error at most Δy and distance x with absolute error at most Δx, then the true values of x and y might be as low as x - Δx and y - Δy. The true value of x + y might be as low as (x - Δx) + (y - Δy) = (x + y) - (Δx + Δy). The true values of x and y might be as large as x + Δx and y + Δy. The true value of x + y might be as large as (x + Δx) + (y + Δy) = (x + y) + (Δx + Δy). That is, the error in x + y might be as large as Δx + Δy.

On the other hand, if I multiply instead of adding, the true value of xy might be as low as (x - Δx)(y - Δy) = xy - (xΔy + yΔx) + ΔxΔy which, ignoring the "second order" term ΔxΔy (that's why this is a "rule of thumb" rather than an exact formula), is xy - (xΔy + yΔx). The true value of xy might be as large as xy + (xΔy + yΔx). The absolute error might be as large as xΔy + yΔx, which depends on x and y as well as the absolute errors Δx and Δy. However, the "relative" error in xy is (xΔy + yΔx)/xy = Δy/y + Δx/x, the sum of the two relative errors in x and y.
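A small numeric sketch of that rule of thumb (the measurements here are made-up illustrative values, not from the thread):

```python
# Illustrative measurements: x = 2.0 with error dx = 0.02, y = 5.0 with dy = 0.1.
x, dx = 2.0, 0.02
y, dy = 5.0, 0.1

# Addition: absolute errors add.
sum_abs_err = dx + dy                    # 0.02 + 0.1 = 0.12
# Multiplication: relative errors add (first-order rule of thumb).
prod_rel_err = dx / x + dy / y           # 0.01 + 0.02 = 0.03

# Worst-case product vs. the first-order estimate; they differ only by the
# "second order" term dx*dy that the rule of thumb ignores.
worst = (x + dx) * (y + dy)              # (2.02)(5.1) = xy + x*dy + y*dx + dx*dy
first_order = x * y * (1 + prod_rel_err) # xy + x*dy + y*dx
print(sum_abs_err, prod_rel_err, worst - first_order)  # gap is dx*dy = 0.002
```

The gap between the exact worst case and the first-order estimate is exactly ΔxΔy, which is why dropping it makes this a rule of thumb rather than an exact formula.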

Thanks! It makes a lot more sense now...
But there's still one thing I don't get:
what's the difference between the absolute and relative error?

Like HallsofIvy said:
absolute error = |approximate - exact|
relative error = |approximate - exact| / |exact|
Think about approximating (x+1)^2 with x^2 for large x:
the relative error becomes small,
but the absolute error grows.
The approximation
log(x!) ~ x log(x) - x
does the same.
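That (x+1)^2 vs x^2 example is easy to check directly (a sketch of my own, not from the thread):

```python
# Approximate (x+1)^2 by x^2: the absolute error is (x+1)^2 - x^2 = 2x + 1,
# which grows with x, while the relative error (2x+1)/(x+1)^2 shrinks toward 0.
for x in (10, 100, 1000):
    exact = (x + 1) ** 2
    approx = x ** 2
    abs_err = exact - approx        # always 2x + 1
    rel_err = abs_err / exact
    print(x, abs_err, rel_err)
```

So the absolute error runs 21, 201, 2001 while the relative error falls, which is the same behavior the thread attributes to log(x!) ~ x log(x) - x.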