I agree that something in this direction should be done.
Garbage collection details can affect performance a lot.

Exposing one or two variables isn't difficult. I'm wondering
why this hasn't been done yet. One guess is that it would not
be at all portable to other implementations (JRuby, ...).

No reason MRI should not consider making its GC tunable, IMHO.

I was only trying to guess reasons why this hasn't happened yet,
and that was the only one I was able to come up with (but not a very
good one, I agree).

Regards, Martin.

I think GC tunables, whether set at startup or at runtime, should be considered implementation dependent. Java has many different GCs, and certainly not all GCs have the same tunable parameters. Even if they did, would you really expect a tunable to behave as expected across different GC implementations?

If this is doable at runtime, then it would be nice if the GC API allowed a name to be associated with the parameter being tweaked, so the setting could be ignored when running under a different GC. The name is just a random idea, but everyone should get the point...
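
To make that idea concrete, here is a purely hypothetical sketch: GC.tune and the "mri." prefix below are invented for illustration, and no Ruby implementation provides such an API.

  # Hypothetical API: each tunable name is prefixed with the GC it
  # targets, so other implementations can recognize and silently skip
  # settings meant for a different collector.
  GC.tune("mri.heap_min_slots" => 100_000,
          "mri.malloc_limit"   => 16_000_000)
  # Under JRuby, the "mri.*" keys would simply be ignored.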

I think GC tunables, whether set at startup or at runtime, should be
considered implementation dependent. Java has many different GCs, and
certainly not all GCs have the same tunable parameters. Even if they
did, would you really expect a tunable to behave as expected across
different GC implementations?

I'd expand this to point out that all of Java's GCs have many, many
possible settings as well, but almost all are configured at startup.
There are very few settings that can be tweaked at runtime, for a
couple of reasons:

First, HotSpot makes some decisions about how to allocate and
reallocate heap space immediately at startup.

Second, HotSpot adjusts GC settings (heap ratios, tenuring rates, etc.)
based on runtime information, so tweaking things at runtime would force
those profiled settings to be thrown out.

But in general, having some set of command-line configurable settings
would be a great idea, if it could be done without a performance impact.
JRuby users are becoming accustomed to having many config settings
available when they need them.

=begin
I've attached a patch adding getters and setters for HEAP_MIN_SLOTS and GC_MALLOC_LIMIT. All tests still pass, and the ruby-benchmark-suite shows no slowdown.

It works by simply adding two extra static variables that are initialized to the currently-existing constants, and using those instead of the compile-time constants throughout the code. I then added getters and setters to retrieve and modify those variables (see the usage sketch below).
=end
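
If the patch behaves as described, Ruby-level usage would presumably look something like this. GC.malloc_limit is the name mentioned later in this thread; GC.heap_min_slots is my guess at the other accessor, so treat both as assumptions rather than the patch's actual API:

  # Assumed accessor names, inferred from the patch description above.
  GC.heap_min_slots = 100_000    # was the compile-time HEAP_MIN_SLOTS
  GC.malloc_limit   = 16_000_000 # was the compile-time GC_MALLOC_LIMIT
  p GC.malloc_limit              # getter reads the new runtime variable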

Though I think a few rough "modes" are acceptable (e.g., -server and
-client in Java), such specific parameter tuning is not the Ruby way.
But it may be the only viable way to provide GC options independent of
the Ruby implementation. HEAP_MIN_SLOTS is also an implementation detail
(I think).

Another way would be to mimic --server and --client on MRI in a way that
optimizes the GC as expected (--client optimizes for startup speed and
code running once, --server optimizes for long-running and background
processes).

=begin
Attaching a demo file.
It's still a little contrived, but on my box, the change that helps it improve is (believe it or not):

-#define HEAP_MIN_SLOTS 10000
+#define HEAP_MIN_SLOTS 100000

which takes it from 12.3s with the normal gc.c down to 10.4s.

Perhaps allocating more space up front keeps it from doing O(n^2) garbage traversals as the heap grows? It appears that's useful when you know you're going to need the space eventually anyway... in reality you could also make it the default and it wouldn't hurt "small scripts" too much either, I'd imagine.
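
The attached demo isn't reproduced here, but as a purely illustrative stand-in (not the actual attachment), any script that grows the heap quickly from a cold start shows the effect:

  # Illustrative only: allocate enough objects that the heap must be
  # grown many times from its initial size. Starting with more slots
  # (a larger HEAP_MIN_SLOTS) means fewer grow-and-collect cycles
  # while the script ramps up.
  arr = []
  1_000_000.times { |i| arr << "string #{i}" }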

Changing MALLOC_LIMIT has been shown to be effective in Rails apps, though I don't have any numbers handy [1].

As a side note, with 1.9.1 (default) it takes 14.8s, so there's some improvement already--thanks guys!

If we do eventually go to a --server/--client model, --server could include some other optimizations too, like lookup caching.

While we're on the subject of GC optimization, you might see some speedup by making gc_stress settings available only if GC_DEBUG is defined--I highly doubt gc_stress is used much in the wild, though I could be proven wrong.
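
For reference, gc_stress is what backs the Ruby-level GC.stress flag in 1.9, which forces a collection at nearly every allocation; it's a debugging aid rather than something production code would enable:

  # GC.stress makes the interpreter run a GC cycle at every allocation
  # opportunity; invaluable for flushing out GC bugs, but far too slow
  # for real workloads.
  GC.stress = true
  10.times { Object.new }   # each allocation may now trigger a full GC
  GC.stress = false
=end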

It's still a little contrived, but on my box, the change that helps it improve is (believe it or not):

-#define HEAP_MIN_SLOTS 10000
+#define HEAP_MIN_SLOTS 100000

which takes it from 12.3s with the normal gc.c down to 10.4s.
Thanks. 24.2s -> 16.5s in my environment.

I actually tried the ruby-benchmark-suite with Michael's patch, but I
couldn't confirm any improvement from setting GC.malloc_limit. So in
this case, the performance improvement seems to come from changing
HEAP_MIN_SLOTS rather than MALLOC_LIMIT.

If we do eventually go to a --server/--client model, --server could include some other optimizations too, like lookup caching.

Note that the model is just my opinion :-)

I know some committers even dislike this kind of performance configuration.
I don't know matz's opinion, but I expect him to dislike it too.