Re: [PATCH v3] mm, slub: change run-time assertion in kmalloc_index() to compile-time

From: Marco Elver
Date: Thu May 13 2021 - 06:31:49 EST


On Thu, May 13, 2021 at 10:51AM +0200, Vlastimil Babka wrote:
> On 5/13/21 8:28 AM, Hyeonggon Yoo wrote:
> > On Wed, May 12, 2021 at 08:40:24PM -0700, Andrew Morton wrote:
> >> On Thu, 13 May 2021 12:12:20 +0900 Hyeonggon Yoo <42.hyeyoo@xxxxxxxxx> wrote:
> >> > On Wed, May 12, 2021 at 07:52:27PM -0700, Andrew Morton wrote:
> >> > > This explodes in mysterious ways. The patch as I have it is appended,
> >> > > for reference.
> >> > >
> >> > > gcc-10.3.0 allmodconfig.
> >> > >
> >> > > This patch suppresses the error:
> >>
> >> Ah, yes, of course, your patch changes kmalloc_index() to require that
> >> it always is called with a constant `size'. kfence_test doesn't do
> >> that.
> >>
> >> kfence is being a bit naughty here - the other kmalloc_index() callers
> >> only compile in the call after verifying that `size' is a compile-time
> >> constant.
>
> Agreed.

It's just a test, and performance doesn't matter for it.

The thing is, this function lives in <linux/slab.h> and isn't prefixed
with __ or anything like that, so it really does look like a public
function.

> >> Would something like this work?
>
> I'd prefer if we kept kmalloc_index() for constant sizes only. The broken build
> then warns anyone using it the wrong way that they shouldn't.

Agreed. Andrew's size_is_constant would do that. Also see my suggestion
below to keep the same interface.

> Besides, it really
> shouldn't be used outside of slab.

It's an allocator test. If we want to facilitate testing, the test must
be allowed to verify internal state, and to set up cases that exercise
boundary conditions derived from that state.

In the case of kfence_test it wants: the cache's alignment, to create
accesses that fall on alignment boundaries; and to verify that
obj_to_index() and objs_per_slab_page() are set up correctly.

I think the requirements are:

1. Make the interface hard to abuse. Adding the BUILD_BUG_ON does that.
2. Facilitate testing.

> But if kfence test really needs this, we could perhaps extract the index
> determining part out of kmalloc_slab().

That would duplicate kmalloc_index()? I don't see the need, let's keep
things simple.

> Hmm or I guess the kfence tests could just use kmalloc_slab() directly?

kmalloc_slab() is internal to slab and should not be exported. It'd
require exporting because the tests can be built as modules.
kmalloc_index() works perfectly fine, and the test really doesn't care
about performance of kmalloc_index(). :-)

See my suggestion below that builds on Andrew's size_is_constant but
would retain the old interface and support testing.

Thanks,
-- Marco

------ >8 ------

From: Marco Elver <elver@xxxxxxxxxx>
Subject: [PATCH] kfence: test: fix for "mm, slub: change run-time assertion in
kmalloc_index() to compile-time"

Signed-off-by: Marco Elver <elver@xxxxxxxxxx>
---
 include/linux/slab.h    | 9 +++++++--
 mm/kfence/kfence_test.c | 5 +++--
 2 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 27d142564557..7a10bdc4b7a9 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -350,7 +350,8 @@ static __always_inline enum kmalloc_cache_type kmalloc_type(gfp_t flags)
  * Note: there's no need to optimize kmalloc_index because it's evaluated
  * in compile-time.
  */
-static __always_inline unsigned int kmalloc_index(size_t size)
+static __always_inline unsigned int __kmalloc_index(size_t size,
+                                                    bool size_is_constant)
 {
         if (!size)
                 return 0;
@@ -386,11 +387,15 @@ static __always_inline unsigned int kmalloc_index(size_t size)
         if (size <= 16 * 1024 * 1024) return 24;
         if (size <= 32 * 1024 * 1024) return 25;
 
-        BUILD_BUG_ON_MSG(1, "unexpected size in kmalloc_index()");
+        if (size_is_constant)
+                BUILD_BUG_ON_MSG(1, "unexpected size in kmalloc_index()");
+        else
+                BUG();
 
         /* Will never be reached. Needed because the compiler may complain */
         return -1;
 }
+#define kmalloc_index(s) __kmalloc_index(s, true)
 #endif /* !CONFIG_SLOB */
 
 void *__kmalloc(size_t size, gfp_t flags) __assume_kmalloc_alignment __malloc;
diff --git a/mm/kfence/kfence_test.c b/mm/kfence/kfence_test.c
index 4acf4251ee04..7f24b9bcb2ec 100644
--- a/mm/kfence/kfence_test.c
+++ b/mm/kfence/kfence_test.c
@@ -197,7 +197,7 @@ static void test_cache_destroy(void)
 
 static inline size_t kmalloc_cache_alignment(size_t size)
 {
-        return kmalloc_caches[kmalloc_type(GFP_KERNEL)][kmalloc_index(size)]->align;
+        return kmalloc_caches[kmalloc_type(GFP_KERNEL)][__kmalloc_index(size, false)]->align;
 }
 
 /* Must always inline to match stack trace against caller. */
@@ -267,7 +267,8 @@ static void *test_alloc(struct kunit *test, size_t size, gfp_t gfp, enum allocat
 
         if (is_kfence_address(alloc)) {
                 struct page *page = virt_to_head_page(alloc);
-                struct kmem_cache *s = test_cache ?: kmalloc_caches[kmalloc_type(GFP_KERNEL)][kmalloc_index(size)];
+                struct kmem_cache *s = test_cache ?:
+                                kmalloc_caches[kmalloc_type(GFP_KERNEL)][__kmalloc_index(size, false)];
 
                 /*
                  * Verify that various helpers return the right values
--
2.31.1.607.g51e8a6a459-goog