Re: Slab: Node rotor for freeing alien caches and remote per cpu pages.

From: Ravikiran G Thirumalai
Date: Thu Feb 23 2006 - 20:25:41 EST


On Thu, Feb 23, 2006 at 11:41:51AM -0800, Christoph Lameter wrote:
> On Thu, 23 Feb 2006, Andrew Morton wrote:
>
> > Christoph Lameter <clameter@xxxxxxxxxxxx> wrote:
> > >
> > > The cache reaper currently tries to free all alien caches and all remote
> > > per cpu pages in each pass of cache_reap.
> >
> > umm, why? We have a reap timer per cpu - why doesn't each CPU drain its
> > own stuff and its own node's stuff and leave the other nodes&cpus alone?
>
> Each cpu has per cpu pages on remote nodes and also has alien caches
> on remote nodes. These are only accessible from the processor using them.

Actually, all cpus on a node share the alien_cache, and there is one
alien_cache per remote node (for the cachep). So currently every cpu on the
node drains the same alien_caches onto all the remote nodes from its per-cpu
eventd.
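To illustrate the layout, here is a simplified user-space model (not the
actual slab.c code -- array_cache and kmem_list3 are the slab.c names, the
rest is made up for the example):

/*
 * Simplified model of the 2.6.16-era alien cache layout: each cache has
 * one kmem_list3 per node, and that per-node structure holds one alien
 * array_cache per *remote* node.  All cpus of a node therefore share the
 * same alien cache for a given remote node.
 */
#include <stdio.h>
#include <stdlib.h>

#define MAX_NUMNODES 4          /* assumption: small fixed node count */

struct array_cache {
	unsigned int avail;      /* objects currently queued */
	unsigned int limit;
	void *entry[16];         /* object pointers to free remotely */
};

struct kmem_list3 {
	/* alien[n] caches objects that belong to remote node n;
	 * alien[home node] stays NULL. */
	struct array_cache *alien[MAX_NUMNODES];
};

struct kmem_cache_model {
	struct kmem_list3 *nodelists[MAX_NUMNODES];
};

int main(void)
{
	struct kmem_cache_model cachep = { { 0 } };
	int home, remote;

	for (home = 0; home < MAX_NUMNODES; home++) {
		cachep.nodelists[home] = calloc(1, sizeof(struct kmem_list3));
		for (remote = 0; remote < MAX_NUMNODES; remote++)
			if (remote != home)
				cachep.nodelists[home]->alien[remote] =
					calloc(1, sizeof(struct array_cache));
	}

	/* Every cpu on node 0 that frees an object owned by node 2 queues
	 * it into the *same* shared alien cache: */
	printf("node 0 -> node 2 alien cache: %p\n",
	       (void *)cachep.nodelists[0]->alien[2]);
	return 0;
}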

What is probably very expensive here in drain_alien_cache is free_block
being called from the foreign node and freeing remote pages.
We have a patch-set here that drops the alien objects from the current node
into the respective alien node's drop box, and that drop box is then cleared
locally (so the freeing happens locally). This would happen off cache_reap.
(I was holding off posting it because akpm complained about -mm being full
of slab.c patches. Maybe I should post it now...).
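Roughly, the idea is something like this (a hypothetical, single-threaded
sketch without the locking; drop_box, deposit_alien_object and
flush_drop_boxes are names I'm making up here, not the ones in the actual
patch-set):

#include <stdio.h>

#define MAX_NUMNODES 4
#define BOX_SIZE     32

/* One drop box per (owner node, depositing node) pair. */
struct drop_box {
	unsigned int count;
	void *entry[BOX_SIZE];
};

static struct drop_box boxes[MAX_NUMNODES][MAX_NUMNODES];

/* Called by a cpu on 'from_node' when it frees an object that belongs
 * to 'home_node': just park the pointer, no remote page touching. */
static void deposit_alien_object(int home_node, int from_node, void *obj)
{
	struct drop_box *box = &boxes[home_node][from_node];

	if (box->count < BOX_SIZE)
		box->entry[box->count++] = obj;
	/* a real implementation would fall back to a direct free here */
}

/* Called off cache_reap() on 'home_node': empty all of this node's
 * boxes so the underlying pages are freed by a local cpu. */
static void flush_drop_boxes(int home_node)
{
	int from;

	for (from = 0; from < MAX_NUMNODES; from++) {
		struct drop_box *box = &boxes[home_node][from];

		while (box->count)
			printf("node %d locally frees %p (queued by node %d)\n",
			       home_node, box->entry[--box->count], from);
	}
}

int main(void)
{
	int dummy1, dummy2;

	deposit_alien_object(2, 0, &dummy1);   /* node 0 frees node 2's object */
	deposit_alien_object(2, 1, &dummy2);   /* node 1 frees node 2's object */
	flush_drop_boxes(2);                   /* node 2 reaps them locally    */
	return 0;
}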

Round robin might still be useful for drain_alien_cache with that approach,
but maybe init_reap_node should initialize the per-cpu reap_node with a skew
for cpus on the same node, so that all cpus of a node do not drain to the
same foreign node when the eventd runs?
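Something like this for the skew (again just a sketch, not code we have;
cpu_index_on_node and init_reap_node_skewed are made-up names, and a real
version would also need to skip the home node on wrap-around):

#include <stdio.h>

#define NR_CPUS       8
#define MAX_NUMNODES  4
#define CPUS_PER_NODE (NR_CPUS / MAX_NUMNODES)

static int cpu_to_node(int cpu)       { return cpu / CPUS_PER_NODE; }

/* Position of the cpu among the cpus of its own node. */
static int cpu_index_on_node(int cpu) { return cpu % CPUS_PER_NODE; }

static int init_reap_node_skewed(int cpu)
{
	int node = cpu_to_node(cpu);
	int skew = cpu_index_on_node(cpu);

	/* Start each cpu of a node at a different remote node, one past
	 * the home node plus the cpu's index within the node. */
	return (node + 1 + skew) % MAX_NUMNODES;
}

int main(void)
{
	int cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++)
		printf("cpu %d (node %d) initial reap_node = %d\n",
		       cpu, cpu_to_node(cpu), init_reap_node_skewed(cpu));
	return 0;
}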

Round robin for drain_remote_pages is going to be useful for us too, I think.

Thanks,
Kiran