Daniel Fischer wrote:
===================================
> > Is it only my machine, or can you confirm that for
> > the Ackermann benchmark, it's very good for C that they
> > chose 9 and not a larger value? For 10, we are
> > significantly faster, and for 11, 12, 13, we can run
> > rings around the C programme:
Sebastian Sylvan wrote:
===================================
> This is interesting. Hopefully it's not intentional,
> but it's quite obvious that for benchmarks where the fastest
> time is only a few fractions of a second, languages with more
> complex runtime systems will be unfairly slow due to the
> startup cost.
[...]
> In other words, I'd prefer it if all benchmarks were
> reconfigured to target an execution time of at least a few
> seconds for the fastest entries.
I can confirm that it was not intentional, though we have
been aware of the problem. The original shootout used even
smaller values of N. About a year ago, we increased the
values to the levels you see now.

As hardware (and implementations) have improved, it is
probably time to bump the values yet again.
Part of the problem was that some languages
(*cough*-ruby-*cough*) have extremely poor support for
recursive calls, and will encounter stack overflow or other
problems when N is above 7 or 8. We've changed things a bit
to supply higher stack depths to avoid this, but at some
point we just have to bow to reality and mark Python and
Ruby as failures in the Ackermann test (let the hate-mail
begin, yet again!).
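For anyone unfamiliar with why N above 7 or 8 is the breaking point: the
benchmark (assuming it uses the usual two-argument Ackermann definition,
evaluated as Ack(3, N)) needs a call stack roughly as deep as the result
itself, and Ack(3, N) = 2^(N+3) - 3. A minimal Python sketch:

```python
import sys

def ack(m, n):
    """Two-argument Ackermann function; Ack(3, n) == 2**(n+3) - 3."""
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

# Ack(3, 7) = 1021, which already exceeds CPython's default
# recursion limit of about 1000 -- hence the failures around N = 7.
# Raising the limit is the "higher stack depth" workaround, but it
# only postpones the problem (and can still exhaust the C stack).
sys.setrecursionlimit(10**6)

print(ack(3, 4))  # 2**7 - 3 = 125
```

So each +1 on N doubles the required stack depth, which is why an
interpreter that survives N=8 can still fall over at N=10 or 11.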
We've increased the timeouts and the stack depth a bit to
help, so I'll rerun the Ackermann benchmarks with 9 as the
lowest value, extending to 10 and 11 at the higher end.
Thanks,
-Brent