Re: [Slub allocator] There are chances that kmem_cache_cpu->freelist gets lost if the process happens to be rescheduled to a different cpu before the local_irq_save() completes in __slab_alloc()

From: Eric Dumazet
Date: Mon Dec 12 2011 - 22:57:21 EST


On Monday, December 12, 2011 at 18:39 +0100, Eric Dumazet wrote:
> On Monday, December 12, 2011 at 09:50 -0600, Christoph Lameter wrote:
>
> > Correct. Issue was introduced in 2.6.39.
> >
> > Acked-by: Christoph Lameter <cl@xxxxxxxxx>
> >
>
> Indeed, I reproduced the leak with hackbench and a lot of threads.
>
> Thanks
>
> [PATCH] slub: fix a possible memleak in __slab_alloc()
>
> Zhihua Che reported a possible memleak in the slub allocator on
> CONFIG_PREEMPT=y builds.
>
> It is possible the current thread migrates right before disabling irqs in
> __slab_alloc(). We must check c->freelist again and perform a normal
> allocation instead of overwriting it (and leaking the objects it holds).
>
> Many thanks to Zhihua Che for spotting this bug, introduced in 2.6.39.
>
> Reported-by: zhihua che <zhihua.che@xxxxxxxxx>
> Signed-off-by: Eric Dumazet <eric.dumazet@xxxxxxxxx>
> Acked-by: Christoph Lameter <cl@xxxxxxxxx>
> CC: Pekka Enberg <penberg@xxxxxxxxxxxxxx>
> CC: stable@xxxxxxxxxxxxxxx
> ---
> mm/slub.c | 6 ++++++
> 1 file changed, 6 insertions(+)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index ed3334d..923d238 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2166,6 +2166,12 @@ redo:
>  		goto new_slab;
>  	}
>
> +#ifdef CONFIG_PREEMPT
> +	object = c->freelist;
> +	if (object)
> +		goto load_freelist;
> +#endif
> +
>  	stat(s, ALLOC_SLOWPATH);
>
>  	do {
>

Thinking again about the issue, I believe it's not a CONFIG_PREEMPT-only
one.

We can be interrupted, and the IRQ handler can free an object and populate
the freelist too. So the check must always be done.
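
To visualize the window, here is a minimal userspace sketch (not kernel
code; the names cpu_cache, simulated_irq and slow_path_alloc are made up
for the illustration). The simulated interrupt fires at the point where
the real __slab_alloc() has not yet done local_irq_save(), and the recheck
reuses the object it queued instead of leaking it:

/* Minimal userspace sketch of the race, NOT kernel code. */
#include <stdio.h>
#include <stdlib.h>

struct object { struct object *next; };

struct cpu_cache {
	struct object *freelist;	/* stands in for kmem_cache_cpu->freelist */
};

static struct cpu_cache cache;

/* Stands in for an IRQ handler (or another cpu after migration)
 * putting a freed object back on the per-cpu freelist. */
static void simulated_irq(void)
{
	struct object *obj = malloc(sizeof(*obj));

	obj->next = cache.freelist;
	cache.freelist = obj;
}

static struct object *slow_path_alloc(void)
{
	struct object *object;

	/* We entered the slow path because the freelist looked empty... */
	simulated_irq();	/* ...but an "IRQ" refills it before the
				 * point where irqs would be disabled. */

	/* The recheck added by the patch: reuse what is already there. */
	object = cache.freelist;
	if (object) {
		cache.freelist = object->next;
		return object;
	}

	/*
	 * Without the recheck we would fetch a fresh slab here and assign
	 * cache.freelist to its objects, silently dropping (leaking)
	 * whatever the IRQ handler just queued.
	 */
	return malloc(sizeof(*object));
}

int main(void)
{
	struct object *obj = slow_path_alloc();

	printf("got %p, freelist now %p (object reused, nothing leaked)\n",
	       (void *)obj, (void *)cache.freelist);
	free(obj);
	return 0;
}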

Thanks

[PATCH v2] slub: fix a possible memleak in __slab_alloc()

Zhihua Che reported a possible memleak in the slub allocator on
CONFIG_PREEMPT=y builds.

It is possible the current thread migrates right before disabling irqs in
__slab_alloc(). We must check c->freelist again and perform a normal
allocation instead of overwriting it (and leaking the objects it holds).

Many thanks to Zhihua Che for spotting this bug, introduced in 2.6.39.

V2: It's also possible an IRQ freed one (or several) object(s) and
populated c->freelist, so it's not a CONFIG_PREEMPT-only problem.

Reported-by: zhihua che <zhihua.che@xxxxxxxxx>
Signed-off-by: Eric Dumazet <eric.dumazet@xxxxxxxxx>
CC: Christoph Lameter <cl@xxxxxxxxx>
CC: Pekka Enberg <penberg@xxxxxxxxxxxxxx>
---
mm/slub.c | 5 +++++
1 file changed, 5 insertions(+)

diff --git a/mm/slub.c b/mm/slub.c
index ed3334d..1a919f0 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2166,6 +2166,11 @@ redo:
 		goto new_slab;
 	}

+	/* must check again c->freelist in case of cpu migration or IRQ */
+	object = c->freelist;
+	if (object)
+		goto load_freelist;
+
 	stat(s, ALLOC_SLOWPATH);

 	do {

