Re: [PATCH] lib: Add shared copy of __lshrti3 from libgcc

From: hpa
Date: Mon Mar 18 2019 - 19:54:52 EST


On March 18, 2019 4:52:19 PM PDT, Matthias Kaehlcke <mka@xxxxxxxxxxxx> wrote:
>On Mon, Mar 18, 2019 at 04:44:03PM -0700, hpa@xxxxxxxxx wrote:
>> >On March 18, 2019 3:16:39 PM PDT, Matthias Kaehlcke <mka@xxxxxxxxxxxx> wrote:
>> >On Mon, Mar 18, 2019 at 02:50:44PM -0700, hpa@xxxxxxxxx wrote:
>> >> >On March 18, 2019 2:31:13 PM PDT, Matthias Kaehlcke <mka@xxxxxxxxxxxx> wrote:
>> >> >On Fri, Mar 15, 2019 at 01:54:50PM -0700, Matthias Kaehlcke wrote:
>> >> >> The compiler may emit calls to __lshrti3 from the compiler runtime
>> >> >> library, which results in undefined references:
>> >> >>
>> >> >> arch/x86/kvm/x86.o: In function `mul_u64_u64_shr':
>> >> >> include/linux/math64.h:186: undefined reference to `__lshrti3'
>> >> >>
>> >> >> Add a copy of the __lshrti3 libgcc routine (from gcc v4.9.2).
>> >> >>
>> >> >> Include the function for x86 builds with clang, which is the
>> >> >> environment where the above error was observed.
>> >> >>
>> >> >> Signed-off-by: Matthias Kaehlcke <mka@xxxxxxxxxxxx>
>> >> >
>> >> >With "Revert "kbuild: use -Oz instead of -Os when using clang"
>> >> >(https://lore.kernel.org/patchwork/patch/1051932/) the above
>> >> >error is fixed; a few comments inline in case the patch is
>> >> >resurrected in the future because __lshrti3 is emitted in a
>> >> >different context.
>> >> >
>> >> >> diff --git a/include/linux/libgcc.h b/include/linux/libgcc.h
>> >> >> index 32e1e0f4b2d0..a71036471838 100644
>> >> >> --- a/include/linux/libgcc.h
>> >> >> +++ b/include/linux/libgcc.h
>> >> >> @@ -22,15 +22,26 @@
>> >> >> #include <asm/byteorder.h>
>> >> >>
>> >> >> typedef int word_type __attribute__ ((mode (__word__)));
>> >> >> +typedef int TItype __attribute__ ((mode (TI)));
>> >> >
>> >> >Consider using __int128 instead. Definition and use need a
>> >> >'defined(__SIZEOF_INT128__)' guard (similar for mode (TI)), since
>> >> >these 128-bit types aren't supported on all platforms.
>> >> >
>> >> >> #ifdef __BIG_ENDIAN
>> >> >> struct DWstruct {
>> >> >> int high, low;
>> >> >> };
>> >> >> +
>> >> >> +struct DWstruct128 {
>> >> >> + long long high, low;
>> >> >> +};
>> >> >
>> >> >This struct isn't needed, struct DWstruct can be used.
>> >> >
>> >> >> diff --git a/lib/lshrti3.c b/lib/lshrti3.c
>> >> >> new file mode 100644
>> >> >> index 000000000000..2d2123bb3030
>> >> >> --- /dev/null
>> >> >> +++ b/lib/lshrti3.c
>> >> >> @@ -0,0 +1,31 @@
>> >> >> +// SPDX-License-Identifier: GPL-2.0
>> >> >> +
>> >> >> +#include <linux/export.h>
>> >> >> +#include <linux/libgcc.h>
>> >> >> +
>> >> >> +long long __lshrti3(long long u, word_type b)
>> >> >
>> >> >use TItype for input/output, which is what gcc does, though the
>> >> >above matches the interface in the documentation.
>> >> >
>> >> >> +{
>> >> >> + DWunion128 uu, w;
>> >> >> + word_type bm;
>> >> >> +
>> >> >> + if (b == 0)
>> >> >> + return u;
>> >> >> +
>> >> >> + uu.ll = u;
>> >> >> + bm = 64 - b;
>> >> >> +
>> >> >> + if (bm <= 0) {
>> >> >> + w.s.high = 0;
>> >> >> + w.s.low = (unsigned long long) uu.s.high >> -bm;
>> >> >
>> >> >include <linux/types.h> and use u64 instead of unsigned long long.
>> >>
>> >> Ok, now I'm really puzzled.
>> >>
>> >> How could we need a 128-bit shift when the prototype only has 64
>> >> bits of input?!
>> >
>> >Good question, this is the code from libgcc:
>> >
>> >TItype
>> >__lshrti3 (TItype u, shift_count_type b)
>> >{
>> > if (b == 0)
>> > return u;
>> >
>> > const DWunion uu = {.ll = u};
>> > const shift_count_type bm = (8 * (8)) - b;
>> > DWunion w;
>> >
>> > if (bm <= 0)
>> > {
>> > w.s.high = 0;
>> > w.s.low = (UDItype) uu.s.high >> -bm;
>> > }
>> > else
>> > {
>> > const UDItype carries = (UDItype) uu.s.high << bm;
>> >
>> > w.s.high = (UDItype) uu.s.high >> b;
>> > w.s.low = ((UDItype) uu.s.low >> b) | carries;
>> > }
>> >
>> > return w.ll;
>> >}
>> >
>> >
>> >My compiler knowledge is limited; my guess is that the function is a
>> >generic implementation, and while a long long is 64 bits wide under
>> >Linux, it could be 128 bits on other platforms.
>>
>> Yes, long long is just plain wrong.
>>
>> How could we end up calling this function on 32 bits?!
>
>We didn't; in this case the function is called from 64-bit code
>(arch/x86/kvm/x86.o: In function `mul_u64_u64_shr'); for the 32-bit
>vDSO it was __lshrdi3.

That makes more sense.
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.