Re: [PATCHv14 08/17] x86/mm: Reduce untagged_addr() overhead until the first LAM user

From: Linus Torvalds
Date: Tue Jan 17 2023 - 12:31:20 EST


On Tue, Jan 17, 2023 at 7:02 AM Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>
> On Tue, Jan 17, 2023 at 04:57:03PM +0300, Kirill A. Shutemov wrote:
> > On Tue, Jan 17, 2023 at 02:05:22PM +0100, Peter Zijlstra wrote:
> > > On Wed, Jan 11, 2023 at 03:37:27PM +0300, Kirill A. Shutemov wrote:
> > >
> > > > #define __untagged_addr(untag_mask, addr) ({ \
> > > > u64 __addr = (__force u64)(addr); \
> > > > - s64 sign = (s64)__addr >> 63; \
> > > > - __addr &= untag_mask | sign; \
> > > > + if (static_branch_likely(&tagged_addr_key)) { \
> > > > + s64 sign = (s64)__addr >> 63; \
> > > > + __addr &= untag_mask | sign; \
> > > > + } \
> > > > (__force __typeof__(addr))__addr; \
> > > > })
> > > >
> > > > #define untagged_addr(addr) __untagged_addr(current_untag_mask(), addr)
> > >
> > > Is the compiler clever enough to put the memop inside the branch?
> >
> > Hm. You mean current_untag_mask() inside static_branch_likely()?
> >
> > But it is the preprocessor that does this, not the compiler. So, yes,
> > the memop is inside the branch.
> >
> > Or I didn't understand your question.
>
> Nah, call it a pre-lunch dip, I overlooked the whole CPP angle -- d'0h.
>
> That said, I did just put it through a compiler to see wth it did and it
> is pretty gross:

Yeah, I think the static branch likely just makes things worse.

And if we really want to make the "no untag mask exists" case better,
I think the code should probably use static_branch_unlikely() rather
than *_likely(). That should make it jump to the masking code, and
leave the unmasked code as a fallthrough, no?

The reason clang seems to generate saner code is that clang largely
ignores the whole "__builtin_expect()" hint, or at least doesn't go so
far as to push the unlikely case out-of-line.

But on the whole, I think we'd be better off without this whole static branch.

The cost of "untagged_addr()" generally shouldn't be worth this. There
are few performance-critical users - the most common case is, I think,
just mmap() and friends, and the single load is going to be a
non-issue there.

Looking around, I think the only situation where we may care is
strnlen_user() and strncpy_from_user(). Those *can* be
performance-critical. They're used for paths and for execve() strings,
and can be a bit hot.
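The shape of those users is roughly this (a hedged userspace sketch, not
the actual lib/strnlen_user.c code; FAKE_TASK_SIZE and the helper names
are invented): the untagging feeds a one-time bound computation, and the
hot per-byte loop never touches the mask again:

```c
#include <stddef.h>
#include <stdint.h>

#define FAKE_TASK_SIZE 0x1000ULL	/* hypothetical user-space limit */

/* Same sign-extension untag trick as in the patch, modeled in userspace */
static uint64_t untag(uint64_t untag_mask, uint64_t addr)
{
	int64_t sign = (int64_t)addr >> 63;

	return addr & (untag_mask | (uint64_t)sign);
}

/* Sketch of the strnlen_user()-style pattern: untag once, clamp the
 * scan so it cannot cross the user/kernel boundary, then run a plain
 * byte loop with that precomputed bound. 's' stands in for the user
 * buffer that 'uaddr' nominally points at. */
static size_t strnlen_bounded(const char *s, uint64_t untag_mask,
			      uint64_t uaddr, size_t count)
{
	uint64_t max = FAKE_TASK_SIZE - untag(untag_mask, uaddr);
	size_t n = 0;

	if (count < max)
		max = count;
	while (n < max && s[n])
		n++;
	return n;
}
```

The point being that the untag cost is paid once per call, outside the
loop, which is why even the hot string cases may not justify the static
branch.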

And both of those cases actually just use it because of the whole
"maximum address" calculation to avoid traversing into kernel
addresses, so I wonder if we could use alternatives there, kind of
like the get_user/put_user cases did. Except it's generic code, so ..

But maybe even those aren't worth worrying about. At least they do the
unmasking outside the loop - although then in the case of execve(),
the string copies themselves are obviously done in a loop anyway.

Kirill, do you have clear numbers for that static key being a noticeable win?

Linus