Re: [PATCH v3 00/12] kasan: unify kasan_arch_is_ready() and remove arch-specific implementations

From: Sabyrzhan Tasbolatov
Date: Tue Jul 22 2025 - 14:21:43 EST


On Tue, Jul 22, 2025 at 3:59 AM Andrey Ryabinin <ryabinin.a.a@xxxxxxxxx> wrote:
>
>
>
> On 7/17/25 4:27 PM, Sabyrzhan Tasbolatov wrote:
>
> > === Testing with patches
> >
> > Testing in v3:
> >
> > - Compiled every affected arch with no errors:
> >
> > $ make CC=clang LD=ld.lld AR=llvm-ar NM=llvm-nm STRIP=llvm-strip \
> > OBJCOPY=llvm-objcopy OBJDUMP=llvm-objdump READELF=llvm-readelf \
> > HOSTCC=clang HOSTCXX=clang++ HOSTAR=llvm-ar HOSTLD=ld.lld \
> > ARCH=$ARCH
> >
> > $ clang --version
> > ClangBuiltLinux clang version 19.1.4
> > Target: x86_64-unknown-linux-gnu
> > Thread model: posix
> >
> > - make ARCH=um produces the following warning during compilation:
> > MODPOST Module.symvers
> > WARNING: modpost: vmlinux: section mismatch in reference: \
> > kasan_init+0x43 (section: .ltext) -> \
> > kasan_init_generic (section: .init.text)
> >
> > AFAIU, it's due to the code in arch/um/kernel/mem.c, where kasan_init()
> > is placed in its own section ".kasan_init" and calls kasan_init_generic(),
> > which is marked with "__init".
> >
> > - Booting via qemu-system- and running KUnit tests:
> >
> > * arm64 (GENERIC, HW_TAGS, SW_TAGS): no regression, same results as above.
> > * x86_64 (GENERIC): no regression, no errors
> >
>
> It would be interesting to see whether the ARCH_DEFER_KASAN=y arches work.
> This series adds a static key check into __asan_load*()/__asan_store*(),
> which are called from everywhere, including the code that patches static
> branches during the switch.
>
> I suspect that the code which patches static branches during a static key
> switch might not be prepared for the current CPU trying to execute that
> very branch in the middle of the switch.

AFAIU, you're referring to this function in mm/kasan/generic.c:

static __always_inline bool check_region_inline(const void *addr,
						size_t size, bool write,
						unsigned long ret_ip)
{
	if (!kasan_shadow_initialized())
		return true;
	...
}

and particularly to the architectures that select ARCH_DEFER_KASAN=y:
loongarch, powerpc and um. So when these arches try to enable the static key:

1. static_branch_enable(&kasan_flag_enabled) is called
2. The kernel patches code - changes the jump instructions at branch sites
3. Code patching involves memory writes
4. These memory writes can trigger any KASAN wrapper function
5. The wrapper calls kasan_shadow_initialized()
6. kasan_shadow_initialized() calls static_branch_likely(&kasan_flag_enabled)
7. This reads the static key that is currently being patched. Is this
   the potential issue?
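
To make the chain concrete, here is a rough sketch of the enable path as
I understand it (kasan_init_generic() is the function from this series;
jump_label_update() and arch_jump_label_transform() are the generic
jump-label internals, while the actual patching primitive is per-arch):

void __init kasan_init_generic(void)
{
	/* Shadow memory is assumed ready at this point; flip the key. */
	static_branch_enable(&kasan_flag_enabled);
	/*
	 * static_branch_enable() -> jump_label_update() ->
	 * arch_jump_label_transform(). The instruction rewrites are
	 * ordinary kernel memory writes, so in GENERIC mode they can
	 * re-enter __asan_store*() -> check_region_inline() ->
	 * kasan_shadow_initialized() while the branch sites are
	 * half-patched.
	 */
}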

The current runtime check in this v3 patch series is the following:

#ifdef CONFIG_ARCH_DEFER_KASAN
...
static __always_inline bool kasan_shadow_initialized(void)
{
	return static_branch_likely(&kasan_flag_enabled);
}
...
#endif
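
For completeness, the non-deferred case is just a constant (paraphrasing,
the exact code in the series may differ slightly):

#else /* !CONFIG_ARCH_DEFER_KASAN */
static __always_inline bool kasan_shadow_initialized(void)
{
	/* Shadow is set up before any instrumented code runs. */
	return true;
}
#endif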

I wonder if I should add some protection only for KASAN_GENERIC, where
check_region_inline() is called, or for all KASAN modes:

#ifdef CONFIG_ARCH_DEFER_KASAN
...
static __always_inline bool kasan_shadow_initialized(void)
{
	/* Avoid recursion (?) during static key patching */
	if (static_key_count(&kasan_flag_enabled.key) < 0)
		return false;
	return static_branch_likely(&kasan_flag_enabled);
}
...
#endif
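
Alternatively (just a sketch from my side, not tested), the check could
read the key's count via static_key_enabled(), which inspects the atomic
count instead of executing the patched jump, so it cannot land on a
half-patched branch site:

static __always_inline bool kasan_shadow_initialized(void)
{
	/*
	 * static_key_enabled() does an atomic_read() of the key count
	 * rather than relying on the patched branch instruction, so it
	 * stays correct while jump labels are being rewritten. The
	 * cost is an extra load compared to static_branch_likely(),
	 * but only ARCH_DEFER_KASAN=y arches take this path.
	 */
	return static_key_enabled(&kasan_flag_enabled);
}

Though I'm not sure the extra load is acceptable on the KASAN fast path,
so please advise.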

Please point out where exactly the issue is and whether I've understood
the problem correctly. I might try running QEMU on powerpc with KUnit
tests to see if any errors show up in the logs.