Re: [PATCH] slub: fix unreclaimable slab stat for bulk free

From: Kefeng Wang
Date: Tue Aug 03 2021 - 10:24:46 EST



On 2021/7/29 22:03, Shakeel Butt wrote:
> On Wed, Jul 28, 2021 at 11:52 PM Kefeng Wang <wangkefeng.wang@xxxxxxxxxx> wrote:
>>
>> On 2021/7/28 23:53, Shakeel Butt wrote:
>>> SLUB uses the page allocator for higher-order allocations and updates
>>> the unreclaimable slab stat for such allocations. At the moment, the
>>> bulk free path in SLUB does not share code with the normal free path
>>> for this type of allocation and has missed the stat update. So, fix
>>> the stat update by using common code. The user-visible impact of the
>>> bug is a potentially inconsistent unreclaimable slab stat as seen
>>> through meminfo and vmstat.
>>>
>>> Fixes: 6a486c0ad4dc ("mm, sl[ou]b: improve memory accounting")
>>> Signed-off-by: Shakeel Butt <shakeelb@xxxxxxxxxx>
>>> ---
>>>  mm/slub.c | 22 ++++++++++++----------
>>>  1 file changed, 12 insertions(+), 10 deletions(-)
>>>
>>> diff --git a/mm/slub.c b/mm/slub.c
>>> index 6dad2b6fda6f..03770291aa6b 100644
>>> --- a/mm/slub.c
>>> +++ b/mm/slub.c
>>> @@ -3238,6 +3238,16 @@ struct detached_freelist {
>>>  	struct kmem_cache *s;
>>>  };
>>>
>>> +static inline void free_nonslab_page(struct page *page)
>>> +{
>>> +	unsigned int order = compound_order(page);
>>> +
>>> +	VM_BUG_ON_PAGE(!PageCompound(page), page);
>> Could we add a WARN_ON here? Otherwise we get nothing when
>> CONFIG_DEBUG_VM is disabled.
> I don't have a strong opinion on this. Please send a patch with
> reasoning if you want WARN_ON_ONCE here.
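
(For reference, the hunk quoted above is truncated; with the stat update folded in, the shared helper would presumably end up looking roughly like the sketch below. The kfree_hook() and mod_lruvec_page_state() calls are an assumption about the rest of the patch, not code quoted from it:)

```c
static inline void free_nonslab_page(struct page *page)
{
	unsigned int order = compound_order(page);

	VM_BUG_ON_PAGE(!PageCompound(page), page);
	kfree_hook(page_address(page));
	/* the fix: decrement the stat on every free path, not just kfree() */
	mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
			      -(PAGE_SIZE << order));
	__free_pages(page, order);
}
```

With both kfree() and the bulk free path calling one helper, the stat update cannot be missed in either path.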

Ok, we hit the BUG_ON(!PageCompound(page)) in kfree() twice on LTS 4.4, and we are still debugging it.

It's difficult to analyze since there is no vmcore, and it can't be reproduced.

A WARN_ON() here could help us notice the issue.

Also, is there any experience or known fix/way to debug this kind of issue? Memory corruption?
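
Concretely, the variant being proposed might look something like this (hypothetical, not part of the posted patch):

```c
	/* still reported when CONFIG_DEBUG_VM is off; bail out instead of crashing */
	if (WARN_ON_ONCE(!PageCompound(page)))
		return;
```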

Any suggestion will be appreciated, thanks.


