Re: libm tests vs. alpha

On Wed, Apr 09, 2014 at 03:28:33PM +0200, Martin Husemann wrote:
> Currently many libm tests are failing on alpha.
>
> This is mostly due to constructs like:
>
> const double x = -1.0f / 0.0;
>
> and then some using the resulting -inf as input to some libm function.
>
...
>
> However, it seems on alpha you can not disable the "div by zero" traps,
> so we always die with a SIGFPE. Do I understand this correctly?
> What should we do about it?
I've not looked into these tests much - except to realise that the
'test-to-noise' ratio is very low.
For the exp2() tests I just used compile-time initialisers for
infinity and nan - seemed to work fine.
An associated problem is generating signalling underflow/overflow
within libm itself.
Some MD helper routines would help.
> While looking at it, it occurred to me that setregs() maybe should
> init fpcr of a new fpu context with all exception traps disabled (in
> the "not inheriting" case) - like:
>
> 	if (__predict_true((l->l_md.md_flags & IEEE_INHERIT) == 0)) {
> 		l->l_md.md_flags &= ~MDLWP_FP_C;
> 		pcb->pcb_fp.fpr_cr = FPCR_DYN(FP_RN);
> 		// or in FPCR_INED, FPCR_UNFD, FPCR_UNDZ, FPCR_OVFD, FPCR_DZED....
> 	}
>
>
> or should this be controlled by the compilers -mfp-trap-mode= settings?
I've not seen anything equivalent in i386.
fork() just copies the parent's state, exec sets up the fixed defaults
('no traps enabled' and standard rounding modes).
> And a last question: should we compile the libm tests on alpha
> with some flags like -mieee-with-inexact and maybe -mtrap-precision=i ?
NFI.
I've not written any code that did a significant amount of FP since the
1970s!
joerg will have a strong opinion about FP.
David
--
David Laight: david%l8s.co.uk@localhost