Re: [PATCH v6 1/2] x86: fix bitops.h warning with a moved cast

From: Nick Desaulniers
Date: Tue May 05 2020 - 13:47:42 EST


On Tue, May 5, 2020 at 10:29 AM Nick Desaulniers
<ndesaulniers@xxxxxxxxxx> wrote:
>
> On Tue, May 5, 2020 at 8:14 AM Andy Shevchenko
> <andriy.shevchenko@xxxxxxxxx> wrote:
> >
> > On Mon, May 04, 2020 at 06:14:43PM -0700, Jesse Brandeburg wrote:
> > > On Mon, 4 May 2020 12:51:12 -0700
> > > Nick Desaulniers <ndesaulniers@xxxxxxxxxx> wrote:
> > >
> > > > Sorry for the very late report. It turns out that if your config
> > > > tickles __builtin_constant_p just right, this now produces invalid
> > > > assembly:
> > > >
> > > > $ cat foo.c
> > > > long a(long b, long c) {
> > > >         asm("orb\t%1, %0" : "+q"(c) : "r"(b));
> > > >         return c;
> > > > }
> > > > $ gcc foo.c
> > > > foo.c: Assembler messages:
> > > > foo.c:2: Error: `%rax' not allowed with `orb'
> > > >
> > > > The "q" constraint only has meaning with -m32; otherwise it is
> > > > treated as "r".
> > > >
> > > > Since we have the mask (& 0xff), can we drop the `b` suffix from the
> > > > instruction? Or is a revert more appropriate? Or maybe another way to
> > > > skin this cat?
> > >
> > > Figures that such a small change can create problems :-( Sorry for the
> > > thrash!
> > >
> > > The patches in the link below basically add back the cast, but I'm
> > > interested to see if any others can come up with a better fix that
> > > a) passes the above code generation test
> > > b) still keeps sparse happy
> > > c) passes the test module and the code inspection
> > >
> > > If need be I'm OK with a revert of the original patch to fix the issue
> > > in the short term, but it seems to me there must be a way to satisfy
> > > both tools. We went through several iterations on the way to the final
> > > patch that we might be able to pluck something useful from.
> >
> > For me the below seems to work:
>
> Yep:
> https://github.com/ClangBuiltLinux/linux/issues/961#issuecomment-623785987
> https://github.com/ClangBuiltLinux/linux/issues/961#issuecomment-624162497
> Sedat wrote the same patch 22 days ago; I didn't notice it before
> starting this thread. I will sign off on his patch, add your
> Suggested-by tag, and send shortly.

Started a new thread:
https://lore.kernel.org/lkml/20200505174423.199985-1-ndesaulniers@xxxxxxxxxx/T/#u

>
> >
> >
> > diff --git a/arch/x86/include/asm/bitops.h b/arch/x86/include/asm/bitops.h
> > index b392571c1f1d1..139122e5b25b1 100644
> > --- a/arch/x86/include/asm/bitops.h
> > +++ b/arch/x86/include/asm/bitops.h
> > @@ -54,7 +54,7 @@ arch_set_bit(long nr, volatile unsigned long *addr)
> >  	if (__builtin_constant_p(nr)) {
> >  		asm volatile(LOCK_PREFIX "orb %1,%0"
> >  			: CONST_MASK_ADDR(nr, addr)
> > -			: "iq" (CONST_MASK(nr) & 0xff)
> > +			: "iq" ((u8)(CONST_MASK(nr) & 0xff))
> >  			: "memory");
> >  	} else {
> >  		asm volatile(LOCK_PREFIX __ASM_SIZE(bts) " %1,%0"
> > @@ -74,7 +74,7 @@ arch_clear_bit(long nr, volatile unsigned long *addr)
> >  	if (__builtin_constant_p(nr)) {
> >  		asm volatile(LOCK_PREFIX "andb %1,%0"
> >  			: CONST_MASK_ADDR(nr, addr)
> > -			: "iq" (CONST_MASK(nr) ^ 0xff));
> > +			: "iq" ((u8)(CONST_MASK(nr) ^ 0xff)));
> >  	} else {
> >  		asm volatile(LOCK_PREFIX __ASM_SIZE(btr) " %1,%0"
> >  			: : RLONG_ADDR(addr), "Ir" (nr) : "memory");
> >
> >
> > --
> > With Best Regards,
> > Andy Shevchenko
> >
> >



--
Thanks,
~Nick Desaulniers