Re: [PATCH] slub: missing test for partial pages flush work in flush_all

From: Jeff Layton
Date: Wed May 16 2012 - 07:05:48 EST


On Sun, 13 May 2012 09:53:15 +0300
Gilad Ben-Yossef <gilad@xxxxxxxxxxxxx> wrote:

> On Fri, May 11, 2012 at 7:14 PM, Christoph Lameter <cl@xxxxxxxxx> wrote:
> > Didn't I already ack this before?
> >
> > Acked-by: Christoph Lameter <cl@xxxxxxxxx>
> >
>
> Yes, you did, but the patch description and title were lacking and
> Majianpeng kindly fixed them, hence the re-send, I guess.
>
> I've added Andrew, since he took my original commit that introduced
> the bug that this patch by Majianpeng fixes (and I've also CC'd LKML).
>
> This fix really needs to get into 3.4, otherwise we'll be breaking
> slub. What's the best way to go about that?
>
> Thanks!
> Gilad
>
> > On Fri, 11 May 2012, majianpeng wrote:
> >
> >> Subject: [PATCH] slub: missing test for partial pages flush work in flush_all
> >>
> >> I found kernel messages like this:
> >> SLUB raid5-md127: kmem_cache_destroy called for cache that still has objects.
> >> Pid: 6143, comm: mdadm Tainted: G           O 3.4.0-rc6+       #75
> >> Call Trace:
> >> [<ffffffff811227f8>] kmem_cache_destroy+0x328/0x400
> >> [<ffffffffa005ff1d>] free_conf+0x2d/0xf0 [raid456]
> >> [<ffffffffa0060791>] stop+0x41/0x60 [raid456]
> >> [<ffffffffa000276a>] md_stop+0x1a/0x60 [md_mod]
> >> [<ffffffffa000c974>] do_md_stop+0x74/0x470 [md_mod]
> >> [<ffffffffa000d0ff>] md_ioctl+0xff/0x11f0 [md_mod]
> >> [<ffffffff8127c958>] blkdev_ioctl+0xd8/0x7a0
> >> [<ffffffff8115ef6b>] block_ioctl+0x3b/0x40
> >> [<ffffffff8113b9c6>] do_vfs_ioctl+0x96/0x560
> >> [<ffffffff8113bf21>] sys_ioctl+0x91/0xa0
> >> [<ffffffff816e9d22>] system_call_fastpath+0x16/0x1b
> >>
> >> Using kmemleak, I then found these messages:
> >> unreferenced object 0xffff8800b6db7380 (size 112):
> >>   comm "mdadm", pid 5783, jiffies 4294810749 (age 90.589s)
> >>   hex dump (first 32 bytes):
> >>     01 01 db b6 ad 4e ad de ff ff ff ff ff ff ff ff  .....N..........
> >>     ff ff ff ff ff ff ff ff 98 40 4a 82 ff ff ff ff  .........@xxxxxx
> >>   backtrace:
> >>     [<ffffffff816b52c1>] kmemleak_alloc+0x21/0x50
> >>     [<ffffffff8111a11b>] kmem_cache_alloc+0xeb/0x1b0
> >>     [<ffffffff8111c431>] kmem_cache_open+0x2f1/0x430
> >>     [<ffffffff8111c6c8>] kmem_cache_create+0x158/0x320
> >>     [<ffffffffa008f979>] setup_conf+0x649/0x770 [raid456]
> >>     [<ffffffffa009044b>] run+0x68b/0x840 [raid456]
> >>     [<ffffffffa000bde9>] md_run+0x529/0x940 [md_mod]
> >>     [<ffffffffa000c218>] do_md_run+0x18/0xc0 [md_mod]
> >>     [<ffffffffa000dba8>] md_ioctl+0xba8/0x11f0 [md_mod]
> >>     [<ffffffff81272b28>] blkdev_ioctl+0xd8/0x7a0
> >>     [<ffffffff81155bfb>] block_ioctl+0x3b/0x40
> >>     [<ffffffff811326d6>] do_vfs_ioctl+0x96/0x560
> >>     [<ffffffff81132c31>] sys_ioctl+0x91/0xa0
> >>     [<ffffffff816dd3a2>] system_call_fastpath+0x16/0x1b
> >>     [<ffffffffffffffff>] 0xffffffffffffffff
> >>
> >> This bug was introduced by commit a8364d5555b2030d093cde0f0795. That
> >> commit did not include a check for per-cpu partial pages being present
> >> on a CPU.
> >>
> >> Signed-off-by: majianpeng <majianpeng@xxxxxxxxx>
> >> ---
> >>  mm/slub.c |    2 +-
> >>  1 files changed, 1 insertions(+), 1 deletions(-)
> >>
> >> diff --git a/mm/slub.c b/mm/slub.c
> >> index ffe13fd..6fce08f 100644
> >> --- a/mm/slub.c
> >> +++ b/mm/slub.c
> >> @@ -2040,7 +2040,7 @@ static bool has_cpu_slab(int cpu, void *info)
> >>  	struct kmem_cache *s = info;
> >>  	struct kmem_cache_cpu *c = per_cpu_ptr(s->cpu_slab, cpu);
> >>
> >> -	return !!(c->page);
> >> +	return c->page || c->partial;
> >>  }
> >>
> >>  static void flush_all(struct kmem_cache *s)
> >>
>
>
>

FWIW, this patch fixed a similar warning that I was seeing on module
unload with cifs.ko. I agree it would be good to get it in for 3.4...
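
For anyone following along, the check being fixed is the cond_func that
flush_all() hands to on_each_cpu_cond(), i.e. it decides which CPUs get a
flush IPI at all. A rough sketch of the surrounding code as I read it in
the 3.4-era mm/slub.c (paraphrased, not verbatim), with the fix applied:

	/*
	 * on_each_cpu_cond() only IPIs CPUs for which has_cpu_slab()
	 * returns true. Before this patch the predicate only looked at
	 * c->page, so a CPU whose cached slabs all sat on the per-cpu
	 * partial list (c->partial) was never flushed, and
	 * kmem_cache_destroy() later found objects still in use.
	 */
	static bool has_cpu_slab(int cpu, void *info)
	{
		struct kmem_cache *s = info;
		struct kmem_cache_cpu *c = per_cpu_ptr(s->cpu_slab, cpu);

		return c->page || c->partial;
	}

	static void flush_all(struct kmem_cache *s)
	{
		on_each_cpu_cond(has_cpu_slab, flush_cpu_slab, s, 1, GFP_ATOMIC);
	}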

Tested-by: Jeff Layton <jlayton@xxxxxxxxxx>