[PATCH v2 07/10] x86/jump_label: Implement arch_static_assert()

From: Peter Zijlstra
Date: Tue Jan 16 2018 - 09:37:16 EST


Implement the static (branch) assertion. It simply emits the address
of the next instruction into a special section which objtool will read
and validate against either __jump_table or .altinstructions.
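
A rough user-space sketch of the idea behind that check follows; this is
not objtool's actual code, and the helper names and the flat address
arrays are made up purely for illustration (the asserts would come from
.discard.jump_assert, the targets from __jump_table / .altinstructions):

	#include <stdbool.h>
	#include <stddef.h>
	#include <stdio.h>

	static bool is_patch_target(unsigned long addr,
				    const unsigned long *targets, size_t nr)
	{
		for (size_t i = 0; i < nr; i++)
			if (targets[i] == addr)
				return true;
		return false;
	}

	/* warn about every asserted address that is not a patch-site target */
	static int validate_jump_asserts(const unsigned long *asserts,
					 size_t nr_asserts,
					 const unsigned long *targets,
					 size_t nr_targets)
	{
		int warnings = 0;

		for (size_t i = 0; i < nr_asserts; i++) {
			if (!is_patch_target(asserts[i], targets, nr_targets)) {
				fprintf(stderr,
					"assert at 0x%lx is not on a static branch\n",
					asserts[i]);
				warnings++;
			}
		}
		return warnings;
	}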

Use like:

	if (static_branch_likely(_key)) {
		arch_static_assert();
		/* do stuff */
	}

Or

	if (static_cpu_has(_feat)) {
		arch_static_assert();
		/* do stuff */
	}
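
For contrast, a hypothetical misuse the assertion is meant to catch
(static_key_enabled() is a plain runtime test of the key, not a patched
branch, so the address recorded by arch_static_assert() never shows up
in __jump_table and objtool should flag it):

	if (static_key_enabled(&_key)) {
		arch_static_assert();	/* not a static branch target */
		/* do stuff */
	}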

Cc: Borislav Petkov <bp@xxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: Josh Poimboeuf <jpoimboe@xxxxxxxxxx>
Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
---
arch/x86/include/asm/jump_label.h | 23 +++++++++++++++++++++++
1 file changed, 23 insertions(+)

--- a/arch/x86/include/asm/jump_label.h
+++ b/arch/x86/include/asm/jump_label.h
@@ -62,6 +62,29 @@ static __always_inline bool arch_static_
return true;
}

+/*
+ * Annotation for objtool; asserts that the previous instruction is the
+ * jump_label patch site. Or rather, that the next instruction is a static
+ * branch target.
+ *
+ * Use like:
+ *
+ *	if (static_branch_likely(key)) {
+ *		arch_static_assert();
+ *		do_code();
+ *	}
+ *
+ * Also works with static_cpu_has().
+ */
+static __always_inline void arch_static_assert(void)
+{
+	asm volatile ("1:\n\t"
+		      ".pushsection .discard.jump_assert \n\t"
+		      _ASM_ALIGN "\n\t"
+		      _ASM_PTR "1b \n\t"
+		      ".popsection \n\t");
+}
+
#ifdef CONFIG_X86_64
typedef u64 jump_label_t;
#else