This patch builds on the suggestion previously given by Christoph, with one major difference: it still keeps the cache dispatcher and the cache duplicates. But its internals are completely different.

I no longer mess with the cache cores when pages are allocated (except for destruction, which happens a bit later, but that's quite simple). All of that is done by the page allocator, by recognizing the __GFP_SLABMEMCG flag.
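To make the split concrete, here is roughly how I picture the cache side feeding that flag down to the page allocator. This is only a sketch for the sake of discussion: the SLAB_MEMCG cache flag and the slab_alloc_pages() helper are made-up names for illustration, not part of the patch.

/*
 * Sketch only: a per-memcg cache duplicate tags its page allocations
 * with __GFP_SLABMEMCG, so that only these allocations take the
 * accounting path inside the page allocator.
 */
static struct page *slab_alloc_pages(struct kmem_cache *cachep,
				     gfp_t gfp_mask, int order)
{
	/* hypothetical flag marking a cache duplicate owned by a memcg */
	if (cachep->flags & SLAB_MEMCG)
		gfp_mask |= __GFP_SLABMEMCG;

	return alloc_pages(gfp_mask, order);
}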

The catch here is that 99% of the time, the task doing the dispatch will be the same one allocating the page. The only case where this doesn't hold is when tasks are moving around between cgroups. But that's an acceptable price to pay, at least for me. Moving around won't break anything; at most it will put us in a state where a cache has a page that is accounted to a different cgroup, or, if that cgroup is destroyed, not accounted to anyone. If that ever hurts anyone, this is solvable by a reaper, or by a full cache scan when the task moves.

+	/*
+	 * Will only have any effect when __GFP_SLABMEMCG is set.
+	 * This is verified in the (always inline) callee
+	 */
+	if (!mem_cgroup_new_kmem_page(gfp_mask, &handle, order))
+		return NULL;
+
 	/*
 	 * Check the zones suitable for the gfp_mask contain at least one
 	 * valid zone. It's possible to have an empty zonelist as a result
 	 * of GFP_THISNODE and a memoryless node
@@ -2474,6 +2482,8 @@ out:
 	if (unlikely(!put_mems_allowed(cpuset_mems_cookie) && !page))
 		goto retry_cpuset;
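For reference, this is the sort of shape I have in mind for the always-inline callee mentioned in the comment above. Treat it as an illustration only: the out-of-line __mem_cgroup_new_kmem_page() charger and the exact handle type are assumptions on my part, not code from the patch.

static inline bool
mem_cgroup_new_kmem_page(gfp_t gfp_mask, struct mem_cgroup **handle, int order)
{
	*handle = NULL;

	/* Fast path: allocations without __GFP_SLABMEMCG are never touched. */
	if (!(gfp_mask & __GFP_SLABMEMCG))
		return true;

	/* Slab-memcg page: try to charge it to the current task's memcg. */
	return __mem_cgroup_new_kmem_page(gfp_mask, handle, order);
}

If the charge fails, the allocator bails out and returns NULL, as the hunk above shows.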