Re: [PATCH] slub: fix __kmem_cache_empty for !CONFIG_SLUB_DEBUG

From: Jason A. Donenfeld
Date: Tue Jun 19 2018 - 17:54:20 EST


On Tue, Jun 19, 2018 at 11:34 PM Shakeel Butt <shakeelb@xxxxxxxxxx> wrote:
>
> For !CONFIG_SLUB_DEBUG, SLUB does not maintain the number of slabs
> allocated per node for a kmem_cache. Thus, slabs_node() in
> __kmem_cache_empty() will always return 0. So, in that situation, the
> per-cpu slabs must also be checked to determine whether a kmem_cache is
> empty.
>
> Please note that __kmem_cache_shutdown() and __kmem_cache_shrink() are
> not affected by !CONFIG_SLUB_DEBUG as they call flush_all() to clear
> per-cpu slabs.
>
> Fixes: f9e13c0a5a33 ("slab, slub: skip unnecessary kasan_cache_shutdown()")
> Signed-off-by: Shakeel Butt <shakeelb@xxxxxxxxxx>
> Reported-by: Jason A. Donenfeld <Jason@xxxxxxxxx>
> Cc: Christoph Lameter <cl@xxxxxxxxx>
> Cc: Pekka Enberg <penberg@xxxxxxxxxx>
> Cc: David Rientjes <rientjes@xxxxxxxxxx>
> Cc: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
> Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> Cc: <stable@xxxxxxxxxxxxxxx>
> ---
> mm/slub.c | 16 +++++++++++++++-
> 1 file changed, 15 insertions(+), 1 deletion(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index a3b8467c14af..731c02b371ae 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3673,9 +3673,23 @@ static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
>
> bool __kmem_cache_empty(struct kmem_cache *s)
> {
> - int node;
> + int cpu, node;
> struct kmem_cache_node *n;
>
> + /*
> + * slabs_node will always be 0 for !CONFIG_SLUB_DEBUG. So, manually
> + * check slabs for all cpus.
> + */
> + if (!IS_ENABLED(CONFIG_SLUB_DEBUG)) {
> + for_each_online_cpu(cpu) {
> + struct kmem_cache_cpu *c;
> +
> + c = per_cpu_ptr(s->cpu_slab, cpu);
> + if (c->page || slub_percpu_partial(c))
> + return false;
> + }
> + }
> +
> for_each_kmem_cache_node(s, node, n)
> if (n->nr_partial || slabs_node(s, node))
> return false;
> --
> 2.18.0.rc1.244.gcf134e6275-goog
>
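For readers wondering why slabs_node() is always 0 in this configuration:
without CONFIG_SLUB_DEBUG the per-node slab counters are compiled out of
mm/slub.c, so the accessors degenerate to stubs. Roughly (paraphrased from
memory of the 4.18-era source, not a verbatim quote):

    /* mm/slub.c, !CONFIG_SLUB_DEBUG stubs (approximate) */
    static inline unsigned long slabs_node(struct kmem_cache *s, int node)
                                                            { return 0; }
    static inline unsigned long node_nr_slabs(struct kmem_cache_node *n)
                                                            { return 0; }
    static inline void inc_slabs_node(struct kmem_cache *s, int node,
                                                            int objects) {}
    static inline void dec_slabs_node(struct kmem_cache *s, int node,
                                                            int objects) {}

Hence the existing for_each_kmem_cache_node() loop can only rely on
n->nr_partial, and the added per-cpu walk is what catches slabs that are
currently parked in a CPU's c->page or on its percpu partial list.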

I can confirm that this fixes the test case on build.wireguard.com.
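
For context on why a false "empty" answer bites here: the commit in the
Fixes: tag made kasan_cache_shutdown() skip purging the KASAN quarantine
for caches it believes are empty, along the lines of (approximate, from
memory of that commit):

    /* mm/kasan/kasan.c after f9e13c0a5a33 (approximate) */
    void kasan_cache_shutdown(struct kmem_cache *cache)
    {
            if (!__kmem_cache_empty(cache))
                    quarantine_remove_cache(cache);
    }

With !CONFIG_SLUB_DEBUG every cache looked empty, so the quarantine was
never drained on kmem_cache_destroy() and could keep referencing a cache
that had just been torn down, which is presumably what the test case on
build.wireguard.com was tripping over.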