I think you’re facing the same concerns everyone who has ever used any software for any important purpose has faced since computers began. Without knowing and being certain of the underlying code, we’re all just hoping.

The extent to which Math.NET results have been ‘accepted’ by anyone is somewhat irrelevant, IMO, since acceptance is no guarantee that anyone has tested and verified the specific features required in YOUR applications, or that their tests and verifications were correct.

My only suggestion is that you do what I and other engineers do: take known inputs and see whether the software reproduces the known outputs, specific to your applications. It’s the same process most people use when developing their own software.
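As a minimal sketch of that known-inputs/known-outputs process (in Python for brevity, since the thread is about Math.NET; the function and data here are illustrative, not from any library):

```python
import math

def sample_std(xs):
    # Implementation under test: sample standard deviation (n - 1 denominator).
    n = len(xs)
    mean = sum(xs) / n
    return math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))

# Known input with a hand-computed known output:
# mean of [2, 4, 4, 4, 5, 5, 7, 9] is 5; squared deviations sum to
# 9 + 1 + 1 + 1 + 0 + 0 + 4 + 16 = 32, so the sample variance is 32/7
# and the sample standard deviation is sqrt(32/7).
data = [2, 4, 4, 4, 5, 5, 7, 9]
expected = math.sqrt(32 / 7)
assert abs(sample_std(data) - expected) < 1e-12
```

The same idea applies to any routine you rely on: pick inputs whose correct outputs you can derive independently (by hand, or from a trusted reference implementation), and assert agreement within a tolerance.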

I totally agree with you: we engineers must verify our software ourselves in any case. And this is what I already do to some extent: verifying many of the results coming from Math.NET with acceptance tests in BDD style, which are validated against R. But this is just “validation by example”, let’s say, and still no proof of correctness.

Our customers do scientific research based on the results our software provides. For them, it is quite important to have a solid basis for the values they use. Publishing papers with incorrect data would not be good for their reputation.

For me, the question remains whether the community here plans to validate the framework against well-accepted math libraries used in scientific research, such as R or SPSS (maybe also MATLAB).
Are there plans to do something like this?

Some of the statistical algorithms are verified against published NIST data sets, but we do not yet have data sets to cover all algorithms this way. Some unit tests also verify against expected values from R, MATLAB and Mathematica, but these are hardly as thorough as what I expect you are looking for.
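To illustrate what a check against a NIST data set looks like (in Python rather than .NET, purely as a sketch; the data and certified values are from NIST’s Statistical Reference Datasets, assuming I recall the NumAcc1 univariate dataset correctly):

```python
import statistics

# NIST StRD "NumAcc1" univariate dataset: three large, closely spaced
# values designed to expose precision loss in naive one-pass algorithms.
data = [10000001.0, 10000003.0, 10000002.0]

# Certified values published by NIST for this dataset.
certified_mean = 10000002.0
certified_stdev = 1.0  # sample standard deviation (n - 1 denominator)

assert abs(statistics.mean(data) - certified_mean) < 1e-9
assert abs(statistics.stdev(data) - certified_stdev) < 1e-9
```

The StRD suite publishes certified values to many digits precisely so that library implementations can be graded this way; covering more Math.NET algorithms would mostly be a matter of wiring such data sets into the existing unit tests.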

I do not have specific plans, but I’d happily help integrate more data sets to verify our implementations against.