[PATCH 026/114] arch: Kconfig: pedantic formatting

From: Enrico Weigelt, metux IT consult
Date: Mon Mar 11 2019 - 09:29:38 EST


Formatting of Kconfig files doesn't look so pretty, so let the
Great White Handkerchief come around and clean it up: re-indent
these arch/Kconfig entries.
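
The convention applied (per the Kconfig section of the kernel's
coding-style notes) is one tab for the lines under a config
definition and a tab plus two spaces for help text; roughly, for a
made-up symbol EXAMPLE:

config EXAMPLE
	bool "An example option"
	help
	  Help text is indented by one tab plus two spaces.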

Signed-off-by: Enrico Weigelt, metux IT consult <info@xxxxxxxxx>
---
arch/Kconfig | 94 ++++++++++++++++++++++++++++++------------------------------
1 file changed, 47 insertions(+), 47 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index 33687dd..41f9c4b 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -69,30 +69,30 @@ config KPROBES
If in doubt, say "N".

config JUMP_LABEL
- bool "Optimize very unlikely/likely branches"
- depends on HAVE_ARCH_JUMP_LABEL
- depends on CC_HAS_ASM_GOTO
- help
- This option enables a transparent branch optimization that
- makes certain almost-always-true or almost-always-false branch
- conditions even cheaper to execute within the kernel.
-
- Certain performance-sensitive kernel code, such as trace points,
- scheduler functionality, networking code and KVM have such
- branches and include support for this optimization technique.
-
- If it is detected that the compiler has support for "asm goto",
- the kernel will compile such branches with just a nop
- instruction. When the condition flag is toggled to true, the
- nop will be converted to a jump instruction to execute the
- conditional block of instructions.
-
- This technique lowers overhead and stress on the branch prediction
- of the processor and generally makes the kernel faster. The update
- of the condition is slower, but those are always very rare.
-
- ( On 32-bit x86, the necessary options added to the compiler
- flags may increase the size of the kernel slightly. )
+ bool "Optimize very unlikely/likely branches"
+ depends on HAVE_ARCH_JUMP_LABEL
+ depends on CC_HAS_ASM_GOTO
+ help
+ This option enables a transparent branch optimization that
+ makes certain almost-always-true or almost-always-false branch
+ conditions even cheaper to execute within the kernel.
+
+ Certain performance-sensitive kernel code, such as trace points,
+ scheduler functionality, networking code and KVM have such
+ branches and include support for this optimization technique.
+
+ If it is detected that the compiler has support for "asm goto",
+ the kernel will compile such branches with just a nop
+ instruction. When the condition flag is toggled to true, the
+ nop will be converted to a jump instruction to execute the
+ conditional block of instructions.
+
+ This technique lowers overhead and stress on the branch prediction
+ of the processor and generally makes the kernel faster. The update
+ of the condition is slower, but those are always very rare.
+
+ ( On 32-bit x86, the necessary options added to the compiler
+ flags may increase the size of the kernel slightly. )

config STATIC_KEYS_SELFTEST
bool "Static key selftest"
@@ -110,9 +110,9 @@ config KPROBES_ON_FTRACE
depends on KPROBES && HAVE_KPROBES_ON_FTRACE
depends on DYNAMIC_FTRACE_WITH_REGS
help
- If function tracer is enabled and the arch supports full
- passing of pt_regs to function tracing, then kprobes can
- optimize on top of function tracing.
+	  If function tracer is enabled and the arch supports full
+	  passing of pt_regs to function tracing, then kprobes can
+	  optimize on top of function tracing.

config UPROBES
def_bool n
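
[ To illustrate the KPROBES_ON_FTRACE case: a kprobe placed at a
function entry, which this option lets the kernel service through the
ftrace mechanism rather than a software breakpoint; a rough sketch,
with the probed symbol chosen arbitrarily:

#include <linux/kprobes.h>

static int pre(struct kprobe *p, struct pt_regs *regs)
{
	pr_info("hit %s\n", p->symbol_name);
	return 0;			/* continue normally */
}

static struct kprobe kp = {
	.symbol_name	= "_do_fork",	/* arbitrary example target */
	.pre_handler	= pre,
};

static int __init kp_init(void)
{
	return register_kprobe(&kp);
}
]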
@@ -164,23 +164,23 @@ config HAVE_EFFICIENT_UNALIGNED_ACCESS
information on the topic of unaligned memory accesses.

config ARCH_USE_BUILTIN_BSWAP
- bool
- help
- Modern versions of GCC (since 4.4) have builtin functions
- for handling byte-swapping. Using these, instead of the old
- inline assembler that the architecture code provides in the
- __arch_bswapXX() macros, allows the compiler to see what's
- happening and offers more opportunity for optimisation. In
- particular, the compiler will be able to combine the byteswap
- with a nearby load or store and use load-and-swap or
- store-and-swap instructions if the architecture has them. It
- should almost *never* result in code which is worse than the
- hand-coded assembler in <asm/swab.h>. But just in case it
- does, the use of the builtins is optional.
-
- Any architecture with load-and-swap or store-and-swap
- instructions should set this. And it shouldn't hurt to set it
- on architectures that don't have such instructions.
+	bool
+	help
+	  Modern versions of GCC (since 4.4) have builtin functions
+	  for handling byte-swapping. Using these, instead of the old
+	  inline assembler that the architecture code provides in the
+	  __arch_bswapXX() macros, allows the compiler to see what's
+	  happening and offers more opportunity for optimisation. In
+	  particular, the compiler will be able to combine the byteswap
+	  with a nearby load or store and use load-and-swap or
+	  store-and-swap instructions if the architecture has them. It
+	  should almost *never* result in code which is worse than the
+	  hand-coded assembler in <asm/swab.h>. But just in case it
+	  does, the use of the builtins is optional.
+
+	  Any architecture with load-and-swap or store-and-swap
+	  instructions should set this. And it shouldn't hurt to set it
+	  on architectures that don't have such instructions.

config KRETPROBES
def_bool y
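
[ The contrast the ARCH_USE_BUILTIN_BSWAP text draws is between the
compiler builtin and an opaque arch-provided asm helper; roughly, with
the x86 asm flavour shown purely for illustration:

#include <linux/types.h>

/* with the builtin, the compiler can fuse the swap with a nearby
 * load or store (e.g. x86 movbe) */
static inline u32 bswap32_builtin(u32 x)
{
	return __builtin_bswap32(x);
}

/* old-style inline-asm helper; correct, but a black box to the
 * optimizer */
static inline u32 bswap32_asm(u32 x)
{
	asm("bswap %0" : "+r" (x));
	return x;
}
]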
@@ -234,10 +234,10 @@ config HAVE_DMA_CONTIGUOUS
bool

config GENERIC_SMP_IDLE_THREAD
- bool
+	bool

config GENERIC_IDLE_POLL_SETUP
- bool
+	bool

config ARCH_HAS_FORTIFY_SOURCE
bool
@@ -251,7 +251,7 @@ config ARCH_HAS_SET_MEMORY

# Select if arch init_task must go in the __init_task_data section
config ARCH_TASK_STRUCT_ON_STACK
- bool
+	bool

# Select if arch has its private alloc_task_struct() function
config ARCH_TASK_STRUCT_ALLOCATOR
--
1.9.1