(02-08-2015 05:39 PM)Bit Wrote: Awesome. I can't work on this right now but I'm glad the real experts have taken on the issue.

Real experts would have included a perfect solution for arguments at or close to the Gamma minimum. ;-) There was a similar problem when Lambert's W was implemented in XROM, but back then I could provide a solution. Let's see if the community can come up with something similar here.

I'm curious - and I'll be up-front here, I really know little about this topic, other than the barest familiarity with gamma, inverse gamma, digamma, etc. - why is accuracy for such values important? I am seriously not trying to minimize it; since so many people are busily testing and pushing the envelope, it clearly must have some practical application. I (and I suspect other lurkers) would like to know how it applies, and why such precision can have practical use or impact.

(02-08-2015 08:56 PM)rprosperi Wrote: I (and I suspect other lurkers) would like to know how it applies, and why such precision can have practical use or impact.

From a mathematical perspective the "algorithm designer" does not know how the algorithm will be used, or how stringent the accuracy requirements of its application will be. So he strives to achieve the "Maximum Possible" accuracy given the limits of the floating point number system used in the implementation. Many algorithms exhibit strange behaviors when their results approach minimums, maximums, zero, or infinity, so it is a challenge to keep errors to a minimum over the algorithm's full operational range. Dieter was illustrating how his algorithm performed at the bottom end of this operational range (where the gamma function reaches its minimum value in the positive domain). He demonstrated how the algorithm gracefully gave an error when it reached this lower limit. This is why he needed to show so many significant digits.

(02-08-2015 08:56 PM)rprosperi Wrote: I (and I suspect other lurkers) would like to know how it applies, and why such precision can have practical use or impact.

From a mathematical perspective the "algorithm designer" does not know how the algorithm will be used, or how stringent the accuracy requirements of its application will be. So he strives to achieve the "Maximum Possible" accuracy given the limits of the floating point number system used in the implementation.

Sure, this is certainly correct and admirable when the balance of effort and resources is justified by the need. I'm just asking about the need, as I don't see the practical applications driving this need.

As truly impressive as the 34S is in its functional breadth, depth and accuracy, there are probably thousands of areas which, if examined with close enough scrutiny, could also need similar tweaking. I am curious why it's warranted here. Note my comments are due to simply not knowing how a function's inaccuracy in the 34th digit matters to any real-world application.

Maybe the answer is to simply fix it because we now know it's broken - which is a perfectly fine explanation; it just seems like there is more to it. Thanks for your patience with my curiosity.

(02-08-2015 09:44 PM)rprosperi Wrote: Sure, this is certainly correct and admirable when the balance of effort and resources is justified by the need. I'm just asking about the need, as I don't see the practical applications driving this need.

Not everyone "Needs" to put the calculator into Double Precision mode, but when one does s/he expects the results to be more accurate. It is very rare indeed that one NEEDS 34 digits of accuracy, but when you perform an operation in Double Precision mode, wouldn't you expect the 34-digit result of any function to be reasonably correct? My point is: why ask a person why he NEEDS something? Isn't it better to just assume that he has a valid need, and deliver the results he expects? The alternative would be to inject arbitrary errors, assuming that his need for accuracy is not valid. (In this case one would usually add a disclaimer explaining the accuracy limits of the algorithm so that people don't rely on it.) There are rarely "PERFECT" algorithms, so researchers are always STRIVING to improve them, balancing factors such as program size, computation speed, and accuracy. Some find these challenges interesting, competitive, and fun, and others may not.

Thanks for explaining that viewpoint Barry, it helps. I think I asked my question the wrong way. It seems it came out sounding like "why bother doing this?", which was not my intent. I guess I meant to ask something closer to "what application benefits from these functions having such precise capabilities?" I realize that this thread is more about building accurate and precise tools to be used by others for who knows what purpose; I'm just fishing for some of those purposes, if anyone here knows. Maybe not. Even if that's the case, it is rewarding to see such interesting collaboration and contributions that continually sharpen the tools.

To be honest I don't know what one would use a high accuracy inverse gamma function for. I have never needed one. I am sure that in some obscure corner of science people use this function and need it to be accurate, but I couldn't tell you off the top of my head how it would relate to a real-world application or need. As a nuclear physicist Walter probably uses more of the functions in the WP-34S for his everyday computations than just about anyone in this forum. Perhaps he or someone else in this forum can answer your question. Even though I don't personally know how/why the function is needed, I can appreciate the elegance of its implementation, like looking at a fine painting or sculpture.

I am relatively new to "Algorithm Design" myself. I did help fix the WP34S's complex hyperbolic tangent function and Polar to Rectangular function, and helped Torsten Manz improve several algorithms in his HP-15C simulator, including the Gamma function and most of the complex trig and hyperbolic trig functions, but I am by no means an expert. I could not have written the algorithms that Bit and Dieter submitted. I know just enough to begin to appreciate some of the techniques used to keep the errors small, but not enough to optimize them for program size or speed. There is a real science and art to "Algorithm Design". I own several books on the subject and it gets deep pretty fast.

(02-08-2015 11:06 PM)BarryMead Wrote: To be honest I don't know what one would use a high accuracy inverse gamma function for. I have never needed one.

Squeezing out the best possible, most accurate result is the fun of it. Who needs a real world application? ;-)

(02-08-2015 11:06 PM)BarryMead Wrote: I could not have written the algorithms that Bit and Dieter submitted. I know just enough to begin to appreciate some of the techniques used to keep the errors small,

That's about the level I'm working on, as far as the Gamma/Digamma function is concerned. ;-)

The tricky part still remains unsolved: getting accurate results for arguments extremely close to the Gamma minimum at 1.46163... etc. The current algorithm could do this, provided the Digamma function is sufficiently accurate and the working precision is about 53 (!) digits (required for Gamma of a 34-digit argument). So there still is a lot to do.
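To illustrate why this neighborhood is so demanding, here is a small sketch of my own (not from the thread), using Python's double-precision math.gamma and the commonly quoted location of the minimum, 1.4616321449683623...: Gamma is flat at its minimum, so arguments differing in the 9th significant digit produce function values that agree to roughly 16 digits, and any computation there cancels away about half of the working digits.

```python
import math

# Location of the Gamma minimum on the positive axis, to double precision.
x0 = 1.4616321449683623
g0 = math.gamma(x0)            # minimum value, approx. 0.8856031944108887

# Perturb the argument in its 9th significant digit:
g1 = math.gamma(x0 + 1e-8)

print(g0)
print(g1)
print(g1 - g0)                 # difference sits at the double-precision noise floor
```

Since an argument shift of 1e-8 moves Gamma by only ~1e-16, recovering the argument from a Gamma value near the minimum yields only about half the digits, which hints at why far more than 34 working digits are needed in this region.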

(02-08-2015 07:02 AM)Paul Dale Wrote: If the digamma route is interesting and custom images are acceptable, I've implemented this function both in xrom (trunk/xrom/digamma.wp34s) and native C (trunk/unused/digamma.c).

I would support this suggestion of a high-precision digamma function. Looking at the current code that comes with the emulator (both in the lib and the source in digamma.wp34s), the implementation seems to be "based on Jean-Marc Baillard's HP-41 digamma code". Here the chosen method (use an asymptotic series expansion up to x^8 for all x>8) is fine because it returns approx. 10 valid digits, i.e. the 41's working precision. On a 34s we should expect more. ;-)

So an accurate digamma function is still missing. On the other hand, one or another 34s function is, hmmm... "optional". Consider the inverse of Lambert's W: a simple e^x RCL× L does the trick just as well.
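To spell out those keystrokes (my paraphrase, not code from the thread): since W(x)·e^(W(x)) = x, the inverse of W is simply f(y) = y·e^y, and on the 34s the e^x keystroke leaves exp(y) in X while saving y in last-x (L), so RCL× L finishes the job:

```python
import math

def w_inverse(y):
    # Inverse of Lambert's W: if w = W(x), then x = w * exp(w).
    # This mirrors the e^x RCL* L sequence: e^x puts exp(y) in X
    # and the original y in last-x, and RCL* L multiplies them.
    return y * math.exp(y)
```

For example, w_inverse(1.0) returns e, consistent with W(e) = 1.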

(02-16-2015 07:07 AM)Dieter Wrote: So an accurate digamma function is still missing. On the other hand, one or another 34s function is, hmmm... "optional". Consider the inverse of Lambert's W: a simple e^x RCL× L does the trick just as well.

The built-in digamma function that Pauli mentioned, and which isn't available by default, is more accurate. Perhaps up to 33 digits in double precision mode if I see it correctly. If you don't have a build environment but would like to play with the built-in function, I can send you some binaries that have it enabled.

(02-16-2015 05:46 PM)Bit Wrote: The built-in digamma function that Pauli mentioned, and which isn't available by default, is more accurate. Perhaps up to 33 digits in double precision mode if I see it correctly.

That's what should be expected with the internal 39-digit precision of C-coded functions. ;-)

Back to the current solution: The Ψ code in the user code library was designed for 10-digit accuracy on a 41 series calculator. It uses the asymptotic method given in Abramowitz & Stegun (1972) 6.3.18 for x > 8 and four terms of the series. Smaller arguments are evaluated via Ψ(x+1) = Ψ(x)+1/x.

For the 34s I suggest using x > 16 (in SP) and six terms, which – when evaluated with sufficient precision – should provide an absolute error of order 10^-18. The results are even better with a slight tweak: instead of B12 = –691/2730 I simply use –1/4. ;-)

Here is some experimental code that should work for positive arguments. It is not yet complete and cannot replace the current Ψ routine, but I think it points in the right direction:
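As a rough illustration of the method just described (raise the argument above the threshold 16 via the recurrence Ψ(x) = Ψ(x+1) − 1/x, then evaluate six terms of the A&S 6.3.18 asymptotic series), here is a Python sketch; this is not Dieter's 34s listing, and it uses the exact B12 rather than the –1/4 shortcut:

```python
import math

# Coefficients B(2n)/(2n) of the asymptotic series (A&S 6.3.18):
# psi(x) ~ ln x - 1/(2x) - sum_{n>=1} B(2n) / (2n * x^(2n)).
# Six terms, up to x^-12; the last one is B12/12 = -691/32760.
COEFFS = (1/12, -1/120, 1/252, -1/240, 1/132, -691/32760)

def digamma(x):
    """Digamma for x > 0 via recurrence plus asymptotic series (sketch only)."""
    if x <= 0:
        raise ValueError("positive arguments only")
    acc = 0.0
    while x < 16:                  # push the argument above the threshold
        acc -= 1.0 / x             # psi(x) = psi(x+1) - 1/x
        x += 1.0
    r = 1.0 / (x * x)
    s = 0.0
    for c in reversed(COEFFS):     # Horner evaluation, smallest terms first
        s = (s + c) * r
    return acc + math.log(x) - 0.5 / x - s
```

With double-precision floats this returns e.g. digamma(1) ≈ −0.5772156649... (minus the Euler–Mascheroni constant); the structure, not 34-digit precision, is the point here.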

(02-16-2015 05:46 PM)Bit Wrote: If you don't have a build environment but would like to play with the built-in function, I can send you some binaries that have it enabled.

Thank you very much. But let me first try what can be done with user code. The mentioned results might give a first impression; now let's see what happens if the results move very close to zero, i.e. near the Gamma minimum at 1.4616...

I wonder what the internal digamma function might return for Ψ(1.4616321449768362) and ...363? Can you post the results with all 34 digits?

(02-16-2015 05:46 PM)Bit Wrote: The built-in digamma function that Pauli mentioned, and which isn't available by default, is more accurate. Perhaps up to 33 digits in double precision mode if I see it correctly.

That's what should be expected with the internal 39-digit precision of C-coded functions. ;-)

I should've clarified that I tried the XROM code (trunk/xrom/digamma.wp34s), not the C version.

(02-16-2015 07:09 PM)Dieter Wrote: I wonder what the internal digamma function might return for Ψ(1.4616321449768362) and ...363? Can you post the results with all 34 digits?

I don't think I ever tuned up the C version of the digamma routine. The series expansion there is relatively short and aimed for single precision. The XROM version uses significantly more terms (if you are in double precision).

I also don't remember why I stored the series constants as reciprocals in XROM -- probably saving a few bytes but slower.

(02-16-2015 08:46 PM)Paul Dale Wrote: I don't think I ever tuned up the C version of the digamma routine. The series expansion there is relatively short and aimed for single precision. The XROM version uses significantly more terms (if you are in double precision).

Yes, I just looked at the code in digamma.wp34s.

Generally there is a tradeoff between the number of terms used and the threshold, i.e. the minimum x for which the series is used. The current version has a constant threshold of 10 (SP) resp. 20 (DP) and a fixed number of terms in the series. This number of terms can be substantially reduced if the threshold is increased. I am using 16 in SP resp. 256 (!) in DP, combined with merely six terms (up to x^12), with good results and a similar accuracy level (cf. my reply to Bit's post).

IMHO, fine-tuning this relation (threshold vs. number of terms) is the key to adequate accuracy. The major problem that still exists is accuracy for results very close to zero. Some additional digits can be squeezed out by carefully rearranging the order in which intermediate results are added together. But much is lost due to digit cancellation, so a certain absolute error remains, which reduces the number of valid digits in results close to zero.
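A quick sanity check on those two threshold choices (my own back-of-the-envelope sketch, not from the thread): the truncation error of the asymptotic series is roughly the size of the first omitted term, which for six terms (up to x^-12) is B14/(14·x^14) with B14 = 7/6:

```python
from fractions import Fraction

# First omitted term after six series terms: B14 / (14 * x^14), B14 = 7/6.
first_omitted = Fraction(7, 6) / 14

for threshold in (16, 256):
    err = float(first_omitted) / threshold ** 14
    print(f"x > {threshold}: truncation error ~ {err:.2e}")
```

This gives roughly 1e-18 for the x > 16 (SP) threshold, matching the absolute error order mentioned earlier, and far below 34-digit resolution for x > 256 (DP).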

(02-16-2015 08:46 PM)Paul Dale Wrote: I also don't remember why I stored the series constants as reciprocals in XROM -- probably saving a few bytes but slower.

I wonder where these constants (recalled by CNST→J) are stored. Is it possible to generate custom constants in XROM code?

As in constants from the CNST function? Then yes. There are a number of such constants in compile_consts.c -- look for most of the SYSCONST entries. Due to some thoughtful code by Marcus, these are stored in either single or double precision -- the smaller format is used if possible.