> Should it be agreed that caching is worthwhile, I would propose a very
> simple implementation. We only really need to cache a small handful of
> array data pointers for the fast allocate/deallocate cycles that appear
> in common numpy usage.
> For example, a small list of maybe 4 pointers storing the 4 largest
> recent deallocations. New allocations just pick the first memory block
> of sufficient size.
> The cache would only be active on systems that support MADV_FREE (which
> means Linux 4.5+ and probably BSD too).
>
> So what do you think of this idea?
>
This is an interesting thought, and potentially a nontrivial speedup with
zero user effort. But coming up with an appropriate caching policy is going
to be tricky. The thing is, for each array, numpy grabs a block of "the
right size", and that size can easily vary by orders of magnitude, even
among the temporaries of a single expression, as a result of broadcasting.
So simply giving each new array the smallest cached block that fits could
easily leave small arrays sitting in giant allocated blocks, wasting
non-reclaimable memory. What you really want is to recycle blocks of the
same size, or nearly so, which argues for a fairly large cache with smart
indexing of some kind.
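For concreteness, here is a minimal sketch of what I mean by size-indexed
recycling. This is purely illustrative Python, not numpy code: the
`BlockCache` class, the `max_blocks` limit, and all the names are invented,
and a real version would live at the C level and call madvise() on cached
blocks. It keys the cache by exact block size, so a small array never ends
up parked in a giant block, and it keeps the hit/miss counters a trial
implementation would want to report:

```python
# Illustrative sketch only -- not NumPy's actual allocator.
# A free-block cache keyed by exact block size, with statistics.

class BlockCache:
    def __init__(self, max_blocks=16):
        self.max_blocks = max_blocks  # total cached blocks, all sizes
        self.free = {}                # size -> list of cached blocks
        self.count = 0
        self.hits = 0                 # stats for evaluating the policy
        self.misses = 0

    def allocate(self, size):
        blocks = self.free.get(size)
        if blocks:
            # Recycle a block of exactly this size -- no wasted slack.
            self.hits += 1
            self.count -= 1
            return blocks.pop()
        self.misses += 1
        return self._fresh_block(size)

    def deallocate(self, size, block):
        if self.count < self.max_blocks:
            self.free.setdefault(size, []).append(block)
            self.count += 1
            # A real implementation would madvise(MADV_FREE) here, so
            # the kernel could reclaim the pages under memory pressure.
        # else: return the block to the system allocator for real

    def _fresh_block(self, size):
        return bytearray(size)        # stand-in for malloc
```

A fast allocate/deallocate cycle of same-sized temporaries then hits the
cache every time after the first pass, while differently sized requests
never collide with each other.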
How much difference is this likely to make? Note that numpy is now in some
cases able to eliminate allocation of temporary arrays.
I think the only way to answer these questions is to set up a trial
implementation, with user-switchable behaviour (which should include the
ability for users to switch it on even when MADV_FREE is not available) and
sensible statistics reporting. Then volunteers can run various numpy
workloads past it.
Anne