[tip: x86/urgent] x86/pat: Fix pat_x_mtrr_type() for MTRR disabled case

From: tip-bot2 for Juergen Gross
Date: Tue Jan 10 2023 - 11:32:51 EST


The following commit has been merged into the x86/urgent branch of tip:

Commit-ID: 90b926e68f500844dff16b5bcea178dc55cf580a
Gitweb: https://git.kernel.org/tip/90b926e68f500844dff16b5bcea178dc55cf580a
Author: Juergen Gross <jgross@xxxxxxxx>
AuthorDate: Tue, 10 Jan 2023 07:54:27 +01:00
Committer: Borislav Petkov (AMD) <bp@xxxxxxxxx>
CommitterDate: Tue, 10 Jan 2023 17:21:53 +01:00

x86/pat: Fix pat_x_mtrr_type() for MTRR disabled case

Since

72cbc8f04fe2 ("x86/PAT: Have pat_enabled() properly reflect state when running on Xen")

PAT can be enabled without MTRR.

This has resulted in problems, e.g. for a SEV-SNP guest running under Hyper-V,
when trying to establish a new mapping via memremap() with WB caching mode:
pat_x_mtrr_type() calls mtrr_type_lookup(), which in turn returns
MTRR_TYPE_INVALID because MTRR is disabled in this configuration.

The result is a mapping with UC- caching, leading to severe performance
degradation.
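
To illustrate the failure mode, here is a minimal, self-contained user-space
sketch of the pre-fix decision in pat_x_mtrr_type() for a WB request. The enum
values and the stubbed lookup below are assumptions for the demo only, not the
kernel's definitions; with MTRRs disabled the lookup yields MTRR_TYPE_INVALID
and the old check demotes the mapping to UC-:

/* Standalone sketch, NOT kernel code: models the pre-fix check only. */
#include <stdio.h>

enum mtrr_type { MTRR_TYPE_WRBACK, MTRR_TYPE_INVALID };	/* illustrative values */
enum cache_mode { _PAGE_CACHE_MODE_WB, _PAGE_CACHE_MODE_UC_MINUS };

/* Models mtrr_type_lookup() on a guest where MTRRs are disabled. */
static enum mtrr_type mtrr_type_lookup_stub(void)
{
	return MTRR_TYPE_INVALID;
}

/* Pre-fix logic: anything other than WRBACK, including INVALID, becomes UC- */
static enum cache_mode pat_x_mtrr_type_old(void)
{
	enum mtrr_type mtrr_type = mtrr_type_lookup_stub();

	if (mtrr_type != MTRR_TYPE_WRBACK)
		return _PAGE_CACHE_MODE_UC_MINUS;

	return _PAGE_CACHE_MODE_WB;
}

int main(void)
{
	/* Prints "WB request maps as UC-", i.e. the degraded mapping. */
	printf("WB request maps as %s\n",
	       pat_x_mtrr_type_old() == _PAGE_CACHE_MODE_WB ? "WB" : "UC-");
	return 0;
}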

Fix that by handling MTRR_TYPE_INVALID the same way as MTRR_TYPE_WRBACK
in pat_x_mtrr_type() because MTRR_TYPE_INVALID means MTRRs are disabled.

[ bp: Massage commit message. ]

Fixes: 72cbc8f04fe2 ("x86/PAT: Have pat_enabled() properly reflect state when running on Xen")
Reported-by: Michael Kelley (LINUX) <mikelley@xxxxxxxxxxxxx>
Signed-off-by: Juergen Gross <jgross@xxxxxxxx>
Signed-off-by: Borislav Petkov (AMD) <bp@xxxxxxxxx>
Reviewed-by: Michael Kelley <mikelley@xxxxxxxxxxxxx>
Tested-by: Michael Kelley <mikelley@xxxxxxxxxxxxx>
Cc: <stable@xxxxxxxxxx>
Link: https://lore.kernel.org/r/20230110065427.20767-1-jgross@xxxxxxxx
---
arch/x86/mm/pat/memtype.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/pat/memtype.c b/arch/x86/mm/pat/memtype.c
index 46de9cf..fb4b1b5 100644
--- a/arch/x86/mm/pat/memtype.c
+++ b/arch/x86/mm/pat/memtype.c
@@ -387,7 +387,8 @@ static unsigned long pat_x_mtrr_type(u64 start, u64 end,
 	u8 mtrr_type, uniform;
 
 	mtrr_type = mtrr_type_lookup(start, end, &uniform);
-	if (mtrr_type != MTRR_TYPE_WRBACK)
+	if (mtrr_type != MTRR_TYPE_WRBACK &&
+	    mtrr_type != MTRR_TYPE_INVALID)
 		return _PAGE_CACHE_MODE_UC_MINUS;
 
 	return _PAGE_CACHE_MODE_WB;
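
For reference, the hunk above covers only part of the function. After this
change, pat_x_mtrr_type() in arch/x86/mm/pat/memtype.c reads approximately as
follows; the lines outside the hunk are a best-effort reconstruction from the
surrounding context, not a verbatim quote of the file:

static unsigned long pat_x_mtrr_type(u64 start, u64 end,
				     enum page_cache_mode req_type)
{
	/*
	 * Look for the MTRR hint to get the effective type in case where
	 * the PAT request is for WB.
	 */
	if (req_type == _PAGE_CACHE_MODE_WB) {
		u8 mtrr_type, uniform;

		mtrr_type = mtrr_type_lookup(start, end, &uniform);
		/* MTRR_TYPE_INVALID means MTRRs are disabled: keep WB. */
		if (mtrr_type != MTRR_TYPE_WRBACK &&
		    mtrr_type != MTRR_TYPE_INVALID)
			return _PAGE_CACHE_MODE_UC_MINUS;

		return _PAGE_CACHE_MODE_WB;
	}

	return req_type;
}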