Re: Very slow clang kernel config ..

From: Fangrui Song
Date: Sat May 01 2021 - 19:55:57 EST


On 2021-05-01, Linus Torvalds wrote:
>On Sat, May 1, 2021 at 12:58 PM Serge Guelton <sguelton@xxxxxxxxxx> wrote:
>>
>> Different metrics lead to different choice, then comes the great pleasure of
>> making compromises :-)
>
>Even if that particular compromise might be the right one to do for
>clang and llvm, the point is that the Fedora rule is garbage, and it
>doesn't _allow_ for making any compromises at all.
>
>The Fedora policy is basically "you have to use shared libraries
>whether that makes any sense or not".
>
>As mentioned, I've seen a project bitten by that insane policy. It's bogus.
>
>              Linus

As a very safe optimization, distributions can consider
-fno-semantic-interposition (it only takes effect on x86 in GCC and Clang,
and is already used by some Python packages): it avoids GOT/PLT-generating
relocations when the referenced symbol is defined in the same translation
unit. See my benchmark below: it makes the clang built with -fPIC slightly
faster.
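
A minimal sketch of what the flag changes (file and function names here
are illustrative, not from LLVM):

// interpose.cpp
// Build, for example:
//   g++ -O2 -fPIC -shared interpose.cpp -o libplain.so
//   g++ -O2 -fPIC -fno-semantic-interposition -shared interpose.cpp -o libnosemi.so
int impl(int x) { return x + 1; }

// With plain -fPIC this call is emitted through the PLT, because a
// definition of impl() in another DSO (or via LD_PRELOAD) is allowed to
// interpose ours at run time.  With -fno-semantic-interposition the
// compiler may assume the local impl() is the one that runs, so it can
// call it directly or even inline it.
int wrapper(int x) { return impl(x); }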

As a slightly more aggressive optimization, consider
-DCMAKE_EXE_LINKER_FLAGS=-Wl,-Bsymbolic-functions -DCMAKE_SHARED_LINKER_FLAGS=-Wl,-Bsymbolic-functions.
The performance is comparable to a mostly statically linked PIE clang
(-shared -Bsymbolic is very similar to -pie): function calls within libLLVM.so
or libclang-cpp.so have no extra cost compared with a mostly statically linked PIE clang.
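
Here is a sketch of what -Bsymbolic-functions buys for an intra-library
call (again with illustrative names only):

// bsym.cpp
// Build: g++ -O2 -fPIC -shared bsym.cpp -o libbsym.so -Wl,-Bsymbolic-functions
// lib_add stays in the dynamic symbol table either way, so callers outside
// the library are unaffected.
int lib_add(int a, int b) { return a + b; }

// Without -Bsymbolic-functions this internal call goes through the PLT so
// that the executable or another DSO could interpose lib_add at run time.
// With -Bsymbolic-functions the linker binds it to the local definition,
// so the call costs the same as in a mostly statically linked PIE.
int lib_add3(int a, int b, int c) { return lib_add(lib_add(a, b), c); }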

Normally I don't recommend -Bsymbolic because

* it can break the C++ semantics about address uniqueness of inline functions
and type_info (exceptions) when there are multiple definitions in the
process (see the sketch after this list). I believe LLVM+Clang are not
subject to such issues: we don't throw LLVM/Clang-type exceptions.
* it is not compatible with copy relocations[1]. This is not an issue for -Bsymbolic-functions.
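
To illustrate the address-uniqueness point, a sketch with hypothetical file
names (vague-linkage data such as type_info behaves analogously):

// uniq.h -- an inline function with vague linkage, included by both sides
inline int counter() { static int n = 0; return ++n; }

// libdso.cpp -- built with: g++ -O2 -fPIC -shared libdso.cpp -o libdso.so -Wl,-Bsymbolic
#include "uniq.h"
int (*dso_counter_addr())() { return &counter; }

// main.cpp -- linked against libdso.so
#include "uniq.h"
int (*dso_counter_addr())();
int main() {
  // Without -Bsymbolic, both the executable and the DSO bind to a single
  // definition of counter(), so the addresses (and the static n) agree.
  // With -Bsymbolic, the DSO binds to its own copy: the comparison below
  // can become false and the two counters diverge.
  return &counter == dso_counter_addr() ? 0 : 1;
}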

-Bsymbolic-functions should be suitable for LLVM+Clang.



LD=ld.lld -j 40 defconfig; time 'make vmlinux'

# the compile flags may be very different from the clang builds below.
system gcc
1050.15s user 192.96s system 3015% cpu 41.219 total
1055.47s user 196.51s system 3022% cpu 41.424 total

clang (libLLVM*.a libclang*.a); LLVM=1
1588.35s user 193.02s system 3223% cpu 55.259 total
1613.59s user 193.22s system 3234% cpu 55.861 total
clang (libLLVM.so libclang-cpp.so); LLVM=1
1870.07s user 222.86s system 3256% cpu 1:04.26 total
1863.26s user 220.59s system 3219% cpu 1:04.73 total
1877.79s user 223.98s system 3233% cpu 1:05.00 total
1859.32s user 221.96s system 3241% cpu 1:04.20 total
clang (libLLVM.so libclang-cpp.so -fno-semantic-interposition); LLVM=1
1810.47s user 222.98s system 3288% cpu 1:01.83 total
1790.46s user 219.65s system 3227% cpu 1:02.27 total
1796.46s user 220.88s system 3139% cpu 1:04.25 total
1796.55s user 221.28s system 3215% cpu 1:02.75 total
clang (libLLVM.so libclang-cpp.so -fno-semantic-interposition -Wl,-Bsymbolic); LLVM=1
1608.75s user 221.39s system 3192% cpu 57.333 total
1607.85s user 220.60s system 3205% cpu 57.042 total
1598.64s user 191.21s system 3208% cpu 55.778 total
clang (libLLVM.so libclang-cpp.so -fno-semantic-interposition -Wl,-Bsymbolic-functions); LLVM=1
1617.35s user 220.54s system 3217% cpu 57.115 total



LLVM's reusable component design causes us some overhead here. Almost
every function that is callable across translation units is declared in a
public header and exported, so libLLVM.so and libclang-cpp.so have huge
dynamic symbol tables, and -Wl,--gc-sections cannot really eliminate much.
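
A small sketch of why exported functions defeat --gc-sections in a shared
object (illustrative names, not LLVM's):

// gcdemo.cpp
// Build: g++ -O2 -fPIC -ffunction-sections -shared gcdemo.cpp -o libgcdemo.so -Wl,--gc-sections
// Nothing inside the library calls unused_api(), but it has default
// visibility and therefore stays in the dynamic symbol table; an outside
// caller might still bind to it, so the linker must keep its section.
int unused_api() { return 1; }

// A hidden function with no callers is not exported, so --gc-sections is
// free to discard it.
__attribute__((visibility("hidden"))) int unused_helper() { return 2; }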


(Lastly, I guess it is a conscious decision that distributions build all
targets instead of just the host one (-DLLVM_TARGETS_TO_BUILD=host). This
makes cross compilation easy: a single clang can replace the various
*-linux-gnu-gcc binaries.)


[1]: Even though one design goal of -fPIE is to avoid copy relocations, and
normally there should be no issue on non-x86, there is an unfortunate
GCC 5 fallout for x86-64 ("x86-64: Optimize access to globals in PIE with copy reloc").
I'll omit the details here as you can find them at https://maskray.me/blog/2021-01-09-copy-relocations-canonical-plt-entries-and-protected
-Bsymbolic-functions avoids such issues.
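
To make the copy-relocation hazard concrete, a sketch with hypothetical
names (this is what full -Bsymbolic risks and -Bsymbolic-functions avoids):

// libvar.cpp -- built with: g++ -O2 -fPIC -shared libvar.cpp -o libvar.so -Wl,-Bsymbolic
int shared_counter = 0;
// -Bsymbolic binds this reference to the DSO's own shared_counter.
int bump() { return ++shared_counter; }

// main.cpp -- built without -fPIC (or with the GCC 5 x86-64 PIE copy-reloc
// optimization) and linked against libvar.so
extern int shared_counter;
int bump();
int main() {
  // The executable accesses shared_counter through a copy relocation: it
  // gets its own copy and the dynamic linker is supposed to redirect every
  // DSO to that copy.  Because -Bsymbolic already bound the DSO to its
  // internal copy, bump() and main() now update two different variables.
  bump();
  return shared_counter;  // 0 instead of the expected 1
}
// -Bsymbolic-functions only binds function symbols inside the DSO, so data
// symbols such as shared_counter still resolve normally.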