Re: [PATCH v2] mm/slub: Run free_partial() outside of the kmem_cache_node->list_lock

From: Vladimir Davydov
Date: Tue Aug 09 2016 - 23:20:24 EST


On Tue, Aug 09, 2016 at 04:27:46PM +0100, Chris Wilson wrote:
...
> diff --git a/mm/slub.c b/mm/slub.c
> index 825ff45..58f0eb6 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3479,6 +3479,7 @@ static void list_slab_objects(struct kmem_cache *s, struct page *page,
>   */
>  static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
>  {
> +	LIST_HEAD(partial_list);

nit: slabs added to this list are not partially used - they are free, so
let's call it 'free_slabs' or 'discard_list' or just 'discard', please
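
i.e. the declaration would become something like:

	LIST_HEAD(discard);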

>  	struct page *page, *h;
> 
>  	BUG_ON(irqs_disabled());
> @@ -3486,13 +3487,16 @@ static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
>  	list_for_each_entry_safe(page, h, &n->partial, lru) {
>  		if (!page->inuse) {
>  			remove_partial(n, page);
> -			discard_slab(s, page);
> +			list_add(&page->lru, &partial_list);

If there are objects left in the cache on destruction, the cache won't
be destroyed. Instead it stays on the slab_caches list and can get
reused later. So we should use list_move() here so that n->partial is
always left in a consistent state, even in the case of a leak.
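
Something like this, perhaps (completely untested, just to sketch the
shape, using the 'discard' declaration from the nit above; note that
remove_partial() already does a list_del() internally, so with
list_move() the nr_partial accounting would need to be open-coded):

	spin_lock_irq(&n->list_lock);
	list_for_each_entry_safe(page, h, &n->partial, lru) {
		if (!page->inuse) {
			/* what remove_partial() does, minus the list_del() */
			n->nr_partial--;
			/*
			 * Unlink from n->partial and put the page on the
			 * local list in one step.
			 */
			list_move(&page->lru, &discard);
		} else {
			list_slab_objects(s, page,
			"Objects remaining in %s on __kmem_cache_shutdown()");
		}
	}
	spin_unlock_irq(&n->list_lock);

	/* discard_slab() ends up in the page allocator, so keep it
	 * outside the lock */
	list_for_each_entry_safe(page, h, &discard, lru)
		discard_slab(s, page);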

>  		} else {
>  			list_slab_objects(s, page,
>  			"Objects remaining in %s on __kmem_cache_shutdown()");
>  		}
>  	}
>  	spin_unlock_irq(&n->list_lock);
> +
> +	list_for_each_entry_safe(page, h, &partial_list, lru)
> +		discard_slab(s, page);
>  }
> 
>  /*