Re: [PATCH] mm: extend max struct page size for kmsan

From: Alexander Duyck
Date: Mon Jan 30 2023 - 11:30:01 EST


On Mon, Jan 30, 2023 at 5:07 AM Arnd Bergmann <arnd@xxxxxxxxxx> wrote:
>
> From: Arnd Bergmann <arnd@xxxxxxxx>
>
> After x86 has enabled support for KMSAN, it has become possible
> to have larger 'struct page' than was expected when commit
> 5470dea49f53 ("mm: use mm_zero_struct_page from SPARC on all 64b
> architectures") was merged:
>
> include/linux/mm.h:156:10: warning: no case matching constant switch condition '96'
> switch (sizeof(struct page)) {
>
> Extend the maximum accordingly.
>
> Fixes: 5470dea49f53 ("mm: use mm_zero_struct_page from SPARC on all 64b architectures")
> Fixes: 4ca8cc8d1bbe ("x86: kmsan: enable KMSAN builds for x86")

Rather than saying this fixes the code that enables the config flags, I
might be more comfortable with listing the commit that added the two
pointers to the struct:
Fixes: f80be4571b19 ("kmsan: add KMSAN runtime core")

It will make it easier to identify where the lines that actually
increased the size of the page struct were added.
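
If I remember right, those two fields are the KMSAN metadata pointers in
struct page, something along these lines (paraphrased from memory, not a
verbatim quote of that commit):

#ifdef CONFIG_KMSAN
	/*
	 * KMSAN metadata: a shadow page tracking which bits of the
	 * original page are initialized, and an origin page recording
	 * where any uninitialized values came from.
	 */
	struct page *kmsan_shadow;
	struct page *kmsan_origin;
#endif

Two extra pointers is 16 bytes on 64-bit, which is what takes the struct
from 80 to 96 bytes once KMSAN is enabled.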

> Signed-off-by: Arnd Bergmann <arnd@xxxxxxxx>
> ---
> This seems to show up extremely rarely in randconfig builds, but often
> enough to trip up my build machine.
>
> I saw a related discussion at [1] about raising MAX_STRUCT_PAGE_SIZE,
> but as I understand it, that needs to be addressed separately.
>
> [1] https://lore.kernel.org/lkml/20220701142310.2188015-11-glider@xxxxxxxxxx/
> ---
> include/linux/mm.h | 12 +++++++++---
> 1 file changed, 9 insertions(+), 3 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index b73ba2e5cfd2..aa39d5ddace1 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -137,7 +137,7 @@ extern int mmap_rnd_compat_bits __read_mostly;
> * define their own version of this macro in <asm/pgtable.h>
> */
> #if BITS_PER_LONG == 64
> -/* This function must be updated when the size of struct page grows above 80
> +/* This function must be updated when the size of struct page grows above 96
> * or reduces below 56. The idea that compiler optimizes out switch()
> * statement, and only leaves move/store instructions. Also the compiler can
> * combine write statements if they are both assignments and can be reordered,
> @@ -148,12 +148,18 @@ static inline void __mm_zero_struct_page(struct page *page)
> {
> unsigned long *_pp = (void *)page;
>
> - /* Check that struct page is either 56, 64, 72, or 80 bytes */
> + /* Check that struct page is either 56, 64, 72, 80, 88 or 96 bytes */
> BUILD_BUG_ON(sizeof(struct page) & 7);
> BUILD_BUG_ON(sizeof(struct page) < 56);
> - BUILD_BUG_ON(sizeof(struct page) > 80);
> + BUILD_BUG_ON(sizeof(struct page) > 96);
>
> switch (sizeof(struct page)) {
> + case 96:
> + _pp[11] = 0;
> + fallthrough;
> + case 88:
> + _pp[10] = 0;
> + fallthrough;
> case 80:
> _pp[9] = 0;
> fallthrough;
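
As an aside for anyone reading along: the switch on sizeof() above is
resolved at compile time, so only the stores at and below the matching
case survive. A stand-alone userspace sketch of the same pattern (using a
dummy 96-byte struct rather than the real struct page, and a comment in
place of the kernel's fallthrough macro) would look like this:

#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Dummy stand-in for struct page: 12 unsigned longs = 96 bytes on 64-bit. */
struct dummy_page {
	unsigned long words[12];
};

static inline void zero_dummy_page(struct dummy_page *page)
{
	unsigned long *_pp = (void *)page;

	/*
	 * sizeof() is a compile-time constant, so the compiler drops the
	 * switch and keeps only the stores reachable from the matching case.
	 */
	switch (sizeof(struct dummy_page)) {
	case 96:
		_pp[11] = 0;
		/* fall through */
	case 88:
		_pp[10] = 0;
		/* fall through */
	case 80:
		_pp[9] = 0;
		/* fall through */
	case 72:
		_pp[8] = 0;
		/* fall through */
	case 64:
		_pp[7] = 0;
		/* fall through */
	case 56:
		_pp[6] = 0;
		_pp[5] = 0;
		_pp[4] = 0;
		_pp[3] = 0;
		_pp[2] = 0;
		_pp[1] = 0;
		_pp[0] = 0;
	}
}

int main(void)
{
	struct dummy_page p;

	memset(&p, 0xff, sizeof(p));
	zero_dummy_page(&p);
	for (size_t i = 0; i < 12; i++)
		assert(p.words[i] == 0);
	printf("all %zu bytes zeroed\n", sizeof(p));
	return 0;
}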

Otherwise the code itself looks good to me.

Reviewed-by: Alexander Duyck <alexanderduyck@xxxxxx>