> There is an interesting and maybe related feature of many current
> systems. Many now do not actually allocate pages when requested, but
> wait until the allocated memory is modified.

That reminds me of sparse files, which are likewise stored only to the
extent that they are actually used.
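As an illustration of that analogy, here is a small sketch that creates a file with a large hole in it. Whether the hole really occupies no disk blocks depends on the filesystem (ext4, XFS and most modern Unix filesystems support sparse files; some others do not), so the numbers printed are only indicative:

```python
import os
import tempfile

def make_sparse(path, size=64 * 1024 * 1024):
    """Create a file whose logical size is `size`, with only one
    real byte written at the very end; the rest is a hole."""
    with open(path, "wb") as f:
        f.seek(size - 1)   # jump past a 64 MiB hole
        f.write(b"\0")     # a single real byte at the end
    st = os.stat(path)
    # st_size is the logical length; st_blocks counts the 512-byte
    # blocks actually allocated on disk.
    return st.st_size, st.st_blocks * 512

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        logical, physical = make_sparse(os.path.join(d, "hole"))
        print(f"logical size: {logical} bytes, allocated: {physical} bytes")
```

On a filesystem with sparse-file support, the allocated size stays far below the logical size, just as lazily allocated pages stay uncommitted until touched.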

> As I understand it, the problem with this method comes when the system
> is actually short on memory, but there is no way to indicate that to
> the program.

What is "short on memory"?

Being short of *physical* memory is the usual state of a general-purpose
(non-realtime) multi-tasking system, reached when the memory requirements
of the system itself and of all running processes together exceed the
physical memory capacity. But that is not a critical state with paged
RAM, because only a fraction of the total virtual memory has to reside
in physical memory at any given time.
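That gap between reserved virtual memory and resident physical memory can be observed directly. The following sketch (Linux-specific: `ru_maxrss` is reported in kilobytes there; on other systems the units differ) reserves a large anonymous mapping and shows that the peak resident set size only grows once the pages are actually written to:

```python
import mmap
import resource

SIZE = 128 * 1024 * 1024  # 128 MiB of virtual address space

def peak_rss_kb():
    # Peak resident set size of this process, in KiB on Linux.
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

before = peak_rss_kb()
buf = mmap.mmap(-1, SIZE)          # address space reserved, no pages committed yet
after_reserve = peak_rss_kb()

for off in range(0, SIZE, 4096):   # touch every page -> pages get committed
    buf[off] = 1
after_touch = peak_rss_kb()

print(before, after_reserve, after_touch)
```

Typically `after_reserve` is close to `before`, while `after_touch` is roughly `SIZE / 1024` kilobytes larger: the mapping costs almost nothing until it is used.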

Every process can run with a handful of RAM pages, containing the
currently executing instructions and the related data. Whether more
pages are available depends on the overall load and the scheduling
strategies of the system.

In such an environment I don't see a need for any indication of a
temporary shortage of resources, in particular because the processes
have almost no chance to react in a timely and meaningful way.

IMO the only situation where some information about the machine state
would really be useful is in determining the "best" size of buffers
(caches) that are meant to reduce the number of system I/O calls. When
these buffers are oversized, the reduced rate of system calls can be
more than offset by an increased swapping rate of the system. Then
every process could resize its buffers according to the current
*performance* of the overall system. But what is a *useful* formula for
determining the actually "best" buffer size? Except for extremely
oversized buffers, approaching the free virtual address space of a
process, I doubt that the system or a process can *measure* the runtime
impact of shrinking or enlarging some buffer.
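One crude signal a process *can* sample cheaply is its own major page-fault counter (faults that required disk I/O). Whether that is precise enough to steer buffer sizes is exactly the doubt raised above; the following is only a hypothetical sketch of such a heuristic, with the pool, the threshold and the halving policy all invented for illustration:

```python
import resource

def major_faults():
    # Cumulative count of page faults that required I/O (swapping or
    # demand-loading from disk) for this process.
    return resource.getrusage(resource.RUSAGE_SELF).ru_majflt

class BufferPool:
    """Toy pool that halves its target size when major faults spike
    between two calls to maybe_resize()."""
    def __init__(self, size=1 << 20, fault_threshold=100):
        self.size = size
        self.fault_threshold = fault_threshold
        self._last_faults = major_faults()

    def maybe_resize(self):
        now = major_faults()
        delta = now - self._last_faults
        self._last_faults = now
        if delta > self.fault_threshold:
            # Rising fault rate: assume our buffers contribute to
            # swapping pressure and shrink them (down to one page).
            self.size = max(self.size // 2, 4096)
        return self.size

pool = BufferPool()
print(pool.maybe_resize())
```

The weakness is just the one named above: a fault spike tells the process *that* the system is thrashing, not *which* buffer (or which other process) is responsible, so the feedback loop is noisy at best.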

Even when a process implements a pool of buffers that can be flushed
when a performance degradation suggests doing so, the effect IMO would
not be noticeable. Either the cached information is required in actual
processing, in which case it doesn't matter whether it is read back into
RAM programmatically or by the system's swapping code; or the currently
unneeded information will be swapped out of RAM as soon as the system
feels a need to do so.

Did I miss something?

> [This is getting a bit far afield from compilers. Some years ago I spun
> off a separate mailing list for GC discussions, which isn't very active
> but has over 700 subscribers.
>
> To subscribe: send a message containing subscribe to gclist-request@lists.iecc.com.
> -John]

You're right, memory de/allocation strategies are more an OS topic than
a compiler or language one. I only wanted to add some possibly more
important considerations about memory usage, apart from supporting the
laziness of coders by offering them a GC system ;-)