Re: [PATCH v6 1/4] mm/slub: enable debugging memory wasting of kmalloc

From: John Thomson
Date: Tue Nov 01 2022 - 15:40:17 EST

On Tue, 1 Nov 2022, at 13:55, Feng Tang wrote:
> On Tue, Nov 01, 2022 at 06:42:23PM +0800, Hyeonggon Yoo wrote:
>> setup_arch() is too early to use slab allocators.
>> I think slab received a NULL pointer because kmalloc is not initialized yet.
>>
>> It seems arch/mips/ralink/mt7621.c is using slab too early.
>
> Cool! It is finally root-caused :) Thanks!
>
> The following patch should solve it and emit a warning message, though
> I'm not sure whether there are other holes.
>
> Thanks,
> Feng
>
> ---
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index 33b1886b06eb..429c21b7ecbc 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -1043,7 +1043,14 @@ size_t __ksize(const void *object)
>  #ifdef CONFIG_TRACING
>  void *kmalloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
>  {
> -	void *ret = __kmem_cache_alloc_node(s, gfpflags, NUMA_NO_NODE,
> +	void *ret;
> +
> +	if (unlikely(ZERO_OR_NULL_PTR(s))) {
> +		WARN_ON_ONCE(1);
> +		return s;
> +	}
> +
> +	ret = __kmem_cache_alloc_node(s, gfpflags, NUMA_NO_NODE,
>  					    size, _RET_IP_);
>
>  	trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags, NUMA_NO_NODE);
> diff --git a/mm/slub.c b/mm/slub.c
> index 157527d7101b..85d24bb6eda7 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3410,8 +3410,14 @@ static __always_inline
>  void *__kmem_cache_alloc_lru(struct kmem_cache *s, struct list_lru *lru,
>  			     gfp_t gfpflags)
>  {
> -	void *ret = slab_alloc(s, lru, gfpflags, _RET_IP_, s->object_size);
> +	void *ret;
>
> +	if (unlikely(ZERO_OR_NULL_PTR(s))) {
> +		WARN_ON_ONCE(1);
> +		return s;
> +	}
> +
> +	ret = slab_alloc(s, lru, gfpflags, _RET_IP_, s->object_size);
>  	trace_kmem_cache_alloc(_RET_IP_, ret, s, gfpflags, NUMA_NO_NODE);
>
>  	return ret;

Yes, thank you: that patch applied on top of v6.1-rc3 lets me boot, and it shows the warning and stack dump below.
Will you submit it, or how do we want to proceed?

transfer started ......................................... transfer ok, time=2.11s
setting up elf image... OK
jumping to kernel code
zimage at: 80B842A0 810B4BC0

Uncompressing Linux at load address 80001000

Copy device tree to address 80B80EE0

Now, booting the kernel...

[ 0.000000] Linux version 6.1.0-rc3+ (john@john) (mipsel-buildroot-linux-gnu-gcc.br_real (Buildroot 2021.11-4428-g6b6741b) 12.2.0, GNU ld (GNU Binutils) 2.39) #73 SMP Wed Nov 2 05:10:01 AEST 2022
[ 0.000000] ------------[ cut here ]------------
[ 0.000000] WARNING: CPU: 0 PID: 0 at mm/slub.c:3416 kmem_cache_alloc+0x5a4/0x5e8
[ 0.000000] Modules linked in:
[ 0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 6.1.0-rc3+ #73
[ 0.000000] Stack : 810fff78 80084d98 00000000 00000004 00000000 00000000 80889d04 80c90000
[ 0.000000] 80920000 807bd328 8089d368 80923bd3 00000000 00000001 80889cb0 00000000
[ 0.000000] 00000000 00000000 807bd328 8084bcb1 00000002 00000002 00000001 6d6f4320
[ 0.000000] 00000000 80c97d3d 80c97d68 fffffffc 807bd328 00000000 00000000 00000000
[ 0.000000] 00000000 a0000000 80910000 8110a0b4 00000000 00000020 80010000 80010000
[ 0.000000] ...
[ 0.000000] Call Trace:
[ 0.000000] [<80008260>] show_stack+0x28/0xf0
[ 0.000000] [<8070c958>] dump_stack_lvl+0x60/0x80
[ 0.000000] [<8002e184>] __warn+0xc4/0xf8
[ 0.000000] [<8002e210>] warn_slowpath_fmt+0x58/0xa4
[ 0.000000] [<801c0fac>] kmem_cache_alloc+0x5a4/0x5e8
[ 0.000000] [<8092856c>] prom_soc_init+0x1fc/0x2b4
[ 0.000000] [<80928060>] prom_init+0x44/0xf0
[ 0.000000] [<80929214>] setup_arch+0x4c/0x6a8
[ 0.000000] [<809257e0>] start_kernel+0x88/0x7c0
[ 0.000000]
[ 0.000000] ---[ end trace 0000000000000000 ]---
[ 0.000000] SoC Type: MediaTek MT7621 ver:1 eco:3
[ 0.000000] printk: bootconsole [early0] enabled

Thank you for working through this with me.
I will try to address the root cause in mt7621.c.
It looks like the other soc_device_register() users under arch/** register from postcore_initcall, device_initcall,
or the ARM DT_MACHINE_START .init_machine hook, rather than from setup_arch(). A quick hack moving the mt7621
registration to a postcore_initcall avoided this zero/NULL kmem_cache being passed to kmem_cache_alloc_lru(); a rough sketch is below.
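For reference, the hack was along these lines. This is only a sketch, assuming the SoC identification strings can be
stashed by prom_soc_init() and consumed later; the helper name mt7621_soc_dev_init() and the mt7621_soc_id variable
are made up for illustration and are not the real mt7621.c layout:

/*
 * Sketch only: defer the soc_device registration out of
 * prom_soc_init()/setup_arch() into a postcore_initcall, which runs
 * after mm_init() has brought up the slab allocator.
 */
#include <linux/init.h>
#include <linux/slab.h>
#include <linux/sys_soc.h>

/* Illustrative: assume prom_soc_init() saves the id string here early. */
static const char *mt7621_soc_id;

static int __init mt7621_soc_dev_init(void)
{
	struct soc_device_attribute *soc_dev_attr;

	/* kzalloc() is safe here: initcalls run long after slab is up. */
	soc_dev_attr = kzalloc(sizeof(*soc_dev_attr), GFP_KERNEL);
	if (!soc_dev_attr)
		return -ENOMEM;

	soc_dev_attr->family = "Ralink";
	soc_dev_attr->soc_id = mt7621_soc_id;

	if (IS_ERR(soc_device_register(soc_dev_attr))) {
		kfree(soc_dev_attr);
		return -ENODEV;
	}

	return 0;
}
postcore_initcall(mt7621_soc_dev_init);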


Thanks,

--
John Thomson