[tip: x86/microcode] x86/microcode: Default-disable late loading

From: tip-bot2 for Borislav Petkov
Date: Tue May 31 2022 - 03:35:39 EST


The following commit has been merged into the x86/microcode branch of tip:

Commit-ID: a77a94f86273ce42a39cb479217dd8d68acfe0ff
Gitweb: https://git.kernel.org/tip/a77a94f86273ce42a39cb479217dd8d68acfe0ff
Author: Borislav Petkov <bp@xxxxxxx>
AuthorDate: Wed, 25 May 2022 18:12:30 +02:00
Committer: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
CommitterDate: Tue, 31 May 2022 09:31:19 +02:00

x86/microcode: Default-disable late loading

It is dangerous and should not be used anyway - there's a nice early
loading path already.

Requested-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
Signed-off-by: Borislav Petkov <bp@xxxxxxx>
Signed-off-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Link: https://lore.kernel.org/r/20220525161232.14924-3-bp@xxxxxxxxx
---
 arch/x86/Kconfig                     | 11 +++++++++++
 arch/x86/kernel/cpu/common.c         |  2 ++
 arch/x86/kernel/cpu/microcode/core.c |  7 ++++++-
 3 files changed, 19 insertions(+), 1 deletion(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index f423a2d..976309d 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1350,6 +1350,17 @@ config MICROCODE_AMD
If you select this option, microcode patch loading support for AMD
processors will be enabled.

+config MICROCODE_LATE_LOADING
+	bool "Late microcode loading (DANGEROUS)"
+	default n
+	depends on MICROCODE
+	help
+	  Loading microcode late, when the system is up and executing instructions,
+	  is a tricky business and should be avoided if possible. Just the sequence
+	  of synchronizing all cores and SMT threads is one fragile dance which does
+	  not guarantee that cores will not softlock after the loading. Therefore,
+	  use this at your own risk. Late loading taints the kernel too.
+
config X86_MSR
tristate "/dev/cpu/*/msr - Model-specific register support"
help
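
The help text above notes that late loading taints the kernel. The taint is
applied in the late-load path rather than in this hunk; the following is a
minimal sketch of the usual kernel tainting pattern, where the helper name and
the exact call site are illustrative assumptions, not part of this patch.

/*
 * Illustrative only: how a late-load path typically records the taint
 * mentioned in the Kconfig help text. Helper name and placement are
 * assumptions, not lines from this patch.
 */
#include <linux/kernel.h>	/* pr_warn() */
#include <linux/panic.h>	/* add_taint(), TAINT_CPU_OUT_OF_SPEC */

static void note_late_load_taint(void)		/* hypothetical helper */
{
	pr_warn("Microcode late loading is dangerous and taints the kernel.\n");
	add_taint(TAINT_CPU_OUT_OF_SPEC, LOCKDEP_STILL_OK);
}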
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 2e91427..c296cb1 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -2222,6 +2222,7 @@ void cpu_init_secondary(void)
}
#endif

+#ifdef CONFIG_MICROCODE_LATE_LOADING
/*
* The microcode loader calls this upon late microcode load to recheck features,
* only when microcode has been updated. Caller holds microcode_mutex and CPU
@@ -2251,6 +2252,7 @@ void microcode_check(void)
pr_warn("x86/CPU: CPU features have changed after loading microcode, but might not take effect.\n");
pr_warn("x86/CPU: Please consider either early loading through initrd/built-in or a potential BIOS update.\n");
}
+#endif

/*
* Invoked from core CPU hotplug code after hotplug operations
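
For context, the guarded function re-reads the CPUID capability leaves after a
late load and compares them against boot_cpu_data, warning if anything changed.
Below is a condensed sketch reconstructed from the warning strings visible in
the hunk and the CPUID helpers used elsewhere in common.c; it is an
approximation, not the verbatim function.

/* Condensed approximation of the feature recheck, for illustration only. */
void microcode_check(void)
{
	struct cpuinfo_x86 info;

	/* The maximum CPUID level may have changed with the new microcode. */
	info.cpuid_level = cpuid_eax(0);

	/* Start from the boot-time bits so synthetic features compare equal. */
	memcpy(&info.x86_capability, &boot_cpu_data.x86_capability,
	       sizeof(info.x86_capability));

	/* Re-read the hardware capability leaves into the scratch copy. */
	get_cpu_cap(&info);

	if (!memcmp(&info.x86_capability, &boot_cpu_data.x86_capability,
		    sizeof(info.x86_capability)))
		return;

	pr_warn("x86/CPU: CPU features have changed after loading microcode, but might not take effect.\n");
	pr_warn("x86/CPU: Please consider either early loading through initrd/built-in or a potential BIOS update.\n");
}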
diff --git a/arch/x86/kernel/cpu/microcode/core.c b/arch/x86/kernel/cpu/microcode/core.c
index b72c413..c717db6 100644
--- a/arch/x86/kernel/cpu/microcode/core.c
+++ b/arch/x86/kernel/cpu/microcode/core.c
@@ -376,6 +376,7 @@ static int apply_microcode_on_target(int cpu)
/* fake device for request_firmware */
static struct platform_device *microcode_pdev;

+#ifdef CONFIG_MICROCODE_LATE_LOADING
/*
* Late loading dance. Why the heavy-handed stomp_machine effort?
*
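
The "late loading dance" comment refers to the stop_machine()-based rendezvous
that pulls every online CPU into a single callback so nothing else executes
while the update is written. A rough sketch of that pattern follows; the
function names are illustrative and not taken from this file.

/* Illustrative rendezvous pattern; names are not from this file. */
#include <linux/stop_machine.h>
#include <linux/cpumask.h>

static int late_load_cb(void *unused)		/* hypothetical callback */
{
	/*
	 * Every online CPU arrives here with interrupts off; typically one
	 * thread per core applies the update while its siblings wait.
	 */
	return 0;
}

static int late_load_rendezvous(void)		/* hypothetical wrapper */
{
	/* Caller is expected to hold the CPU hotplug read lock. */
	return stop_machine_cpuslocked(late_load_cb, NULL, cpu_online_mask);
}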
@@ -543,6 +544,9 @@ put:
return ret;
}

+static DEVICE_ATTR_WO(reload);
+#endif
+
static ssize_t version_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
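
The DEVICE_ATTR_WO(reload) definition, moved here so it sits inside the #ifdef,
is wired by the macro to a reload_store() handler - presumably the function
whose tail ("put:", "return ret;") is visible at the top of this hunk. As a
hedged sketch of that write-only attribute pattern, with the handler body
reduced to input parsing:

/*
 * Sketch of the write-only attribute pattern behind DEVICE_ATTR_WO(reload);
 * the real handler performs the late-load rendezvous, elided here.
 */
static ssize_t reload_store(struct device *dev,
			    struct device_attribute *attr,
			    const char *buf, size_t size)
{
	unsigned long val;
	ssize_t ret;

	ret = kstrtoul(buf, 0, &val);
	if (ret)
		return ret;

	if (val != 1)			/* only "1" triggers a reload */
		return size;

	/* ... take the CPU hotplug lock, apply the update, recheck features ... */
	return size;
}
/* Expands to dev_attr_reload, a struct device_attribute with mode 0200. */
static DEVICE_ATTR_WO(reload);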
@@ -559,7 +563,6 @@ static ssize_t pf_show(struct device *dev,
return sprintf(buf, "0x%x\n", uci->cpu_sig.pf);
}

-static DEVICE_ATTR_WO(reload);
static DEVICE_ATTR(version, 0444, version_show, NULL);
static DEVICE_ATTR(processor_flags, 0444, pf_show, NULL);

@@ -712,7 +715,9 @@ static int mc_cpu_down_prep(unsigned int cpu)
}

static struct attribute *cpu_root_microcode_attrs[] = {
+#ifdef CONFIG_MICROCODE_LATE_LOADING
&dev_attr_reload.attr,
+#endif
NULL
};
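
Net effect for userspace: with CONFIG_MICROCODE_LATE_LOADING=n the reload
attribute is simply never created, so /sys/devices/system/cpu/microcode/reload
does not exist at all. A sketch of how such an attribute array is usually
exposed follows; the group name and registration call are assumptions based on
the common sysfs pattern, not lines from this patch.

/* Usual exposure pattern for the array above; details are assumptions. */
static const struct attribute_group cpu_root_microcode_group = {
	.name  = "microcode",
	.attrs = cpu_root_microcode_attrs,
};

static int expose_microcode_group(struct kobject *cpu_root_kobj)	/* hypothetical */
{
	/*
	 * Creates /sys/devices/system/cpu/microcode/, containing "reload"
	 * only when CONFIG_MICROCODE_LATE_LOADING is enabled.
	 */
	return sysfs_create_group(cpu_root_kobj, &cpu_root_microcode_group);
}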