Re: [GIT PULL] Introduce try_alloc_pages for 6.15

From: Alexei Starovoitov
Date: Sun Mar 30 2025 - 17:30:39 EST


On Sun, Mar 30, 2025 at 1:42 PM Linus Torvalds
<torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:
>
> On Thu, 27 Mar 2025 at 07:52, Alexei Starovoitov
> <alexei.starovoitov@xxxxxxxxx> wrote:
> >
> > The pull includes work from Sebastian, Vlastimil and myself
> > with a lot of help from Michal and Shakeel.
> > This is a first step towards making kmalloc reentrant to get rid
> > of slab wrappers: bpf_mem_alloc, kretprobe's objpool, etc.
> > These patches make the page allocator safe from any context.
>
> So I've pulled this too, since it looked generally fine.

Thanks!

> The one reaction I had is that when you basically change
>
> spin_lock_irqsave(&zone->lock, flags);
>
> into
>
> if (!spin_trylock_irqsave(&zone->lock, flags)) {
>         if (unlikely(alloc_flags & ALLOC_TRYLOCK))
>                 return NULL;
>         spin_lock_irqsave(&zone->lock, flags);
> }
>
> we've seen bad cache behavior for this kind of pattern in other
> situations: if the "try" fails, the subsequent "do the lock for real"
> case now does the wrong thing, in that it will immediately try again
> even if it's almost certainly just going to fail - causing extra write
> cache accesses.
>
> So typically, in places that can see contention, it's better to either do
>
> (a) trylock followed by a slowpath that takes the fact that it was
> locked into account and does a read-only loop until it sees otherwise
>
> This is, for example, what the mutex code does with that
> __mutex_trylock() -> mutex_optimistic_spin() pattern, but our
> spinlocks end up doing similar things (ie "trylock" followed by
> "release irq and do the 'relax loop' thing").

Right,
the __mutex_trylock(lock) -> mutex_optimistic_spin() pattern is
equivalent to the 'pending' bit spinning in qspinlock.
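
If we wanted pattern (a) for zone->lock, I imagine it would look
roughly like this (untested sketch, not what mm/page_alloc.c does
today; same ALLOC_TRYLOCK semantics as the current code):

        if (!spin_trylock_irqsave(&zone->lock, flags)) {
                if (unlikely(alloc_flags & ALLOC_TRYLOCK))
                        return NULL;
                do {
                        /* read-only poll keeps the cacheline shared */
                        while (spin_is_locked(&zone->lock))
                                cpu_relax();
                } while (!spin_trylock_irqsave(&zone->lock, flags));
        }

but since qspinlock's slowpath already spins read-mostly, open-coding
it here probably doesn't buy much.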

> or
>
> (b) do the trylock and lock separately, ie
>
> if (unlikely(alloc_flags & ALLOC_TRYLOCK)) {
>         if (!spin_trylock_irqsave(&zone->lock, flags))
>                 return NULL;
> } else
>         spin_lock_irqsave(&zone->lock, flags);
>
> so that you don't end up doing two cache accesses for ownership that
> can cause extra bouncing.

Ok, I will switch to the above.
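i.e. something like this (untested sketch of the planned change):

        if (unlikely(alloc_flags & ALLOC_TRYLOCK)) {
                if (!spin_trylock_irqsave(&zone->lock, flags))
                        return NULL;
        } else {
                spin_lock_irqsave(&zone->lock, flags);
        }

so the common !ALLOC_TRYLOCK path does a single atomic op on the lock
cacheline and the only extra cost is a well-predicted branch on
alloc_flags.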

> I'm not sure this matters at all in the allocation path - contention
> may simply not be enough of an issue, and the trylock is purely about
> "unlikely NMI worries", but I do worry that you might have made the
> normal case slower.

We actually did see zone->lock contended in production.
Last time the culprit was inadequate per-cpu caching, and this
series, merged in 6.11, fixed it:
https://lwn.net/Articles/947900/
I don't think we've seen it contended in newer kernels.

Johannes, pls correct me if I'm wrong.

But to avoid being finger-pointed at, I'll switch to checking
alloc_flags first. It does seem like the better trade-off, since it
avoids the cache line bouncing caused by a second cmpxchg. Though
when I wrote it this way, I convinced myself and others that doing
the trylock first is faster because it avoids a branch misprediction.
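
To spell out the trade-off, with the current trylock-first ordering
the contended case touches the lock cacheline twice back to back
(annotations mine):

        if (!spin_trylock_irqsave(&zone->lock, flags)) {
                /* cmpxchg failed: another CPU owns the cacheline */
                if (unlikely(alloc_flags & ALLOC_TRYLOCK))
                        return NULL;
                /* immediate second cmpxchg on the same hot line */
                spin_lock_irqsave(&zone->lock, flags);
        }

With alloc_flags checked first, the uncontended path is a single
atomic op and any retry policy under contention is left to the
qspinlock slowpath.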