[PATCH 3.10 16/80] MIPS: KVM: Fix ASID restoration logic

From: Greg Kroah-Hartman
Date: Tue Mar 01 2016 - 21:26:22 EST


3.10-stable review patch. If anyone has any objections, please let me know.

------------------

From: James Hogan <james.hogan@xxxxxxxxxx>

commit 002374f371bd02df864cce1fe85d90dc5b292837 upstream.

ASID restoration on guest resume should determine the guest execution
mode based on the guest Status register rather than bit 30 of the guest
PC.

Fix the two places in locore.S that do this, loading the guest status
from the cop0 area. Note that this assembly is specific to the trap &
emulate implementation of KVM, so it doesn't need to check the
supervisor bit, as that mode is not implemented in the guest.
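
For illustration, the check performed by the new andi/xori/bnez sequence
amounts to the following C sketch. The helper name and the standalone
#defines here are purely illustrative (the real mask values live in
<asm/mipsregs.h>); this is not code from the patch:

  /*
   * Illustration only: the guest is treated as being in kernel mode
   * unless Status.KSU selects user mode and both ERL and EXL are clear.
   * The mask values mirror the definitions in <asm/mipsregs.h>.
   */
  #define ST0_EXL   0x00000002  /* exception level */
  #define ST0_ERL   0x00000004  /* error level */
  #define KSU_USER  0x00000010  /* KSU field = user mode */

  static inline int guest_in_kernel_mode(unsigned long guest_status)
  {
          /* keep only the KSU, ERL and EXL fields of guest Status */
          unsigned long mode = guest_status & (KSU_USER | ST0_ERL | ST0_EXL);

          /*
           * XORing with KSU_USER yields zero only for "pure" user mode;
           * any other combination (kernel KSU, or ERL/EXL set) is
           * non-zero, which is what the bnez in the assembly branches on.
           */
          return (mode ^ KSU_USER) != 0;
  }

Folding the test into an andi/xori pair means no comparison instruction
is needed: a single bnez selects the kernel-ASID path, with the addiu in
the branch delay slot (BD) already loading the kernel ASID offset.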

Fixes: b680f70fc111 ("KVM/MIPS32: Entry point for trampolining to...")
Signed-off-by: James Hogan <james.hogan@xxxxxxxxxx>
Cc: Ralf Baechle <ralf@xxxxxxxxxxxxxx>
Cc: Paolo Bonzini <pbonzini@xxxxxxxxxx>
Cc: Gleb Natapov <gleb@xxxxxxxxxx>
Cc: linux-mips@xxxxxxxxxxxxxx
Cc: kvm@xxxxxxxxxxxxxxx
Signed-off-by: Paolo Bonzini <pbonzini@xxxxxxxxxx>
Signed-off-by: James Hogan <james.hogan@xxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
arch/mips/kvm/kvm_locore.S | 16 ++++++++++------
1 file changed, 10 insertions(+), 6 deletions(-)

--- a/arch/mips/kvm/kvm_locore.S
+++ b/arch/mips/kvm/kvm_locore.S
@@ -156,9 +156,11 @@ FEXPORT(__kvm_mips_vcpu_run)

FEXPORT(__kvm_mips_load_asid)
/* Set the ASID for the Guest Kernel */
- sll t0, t0, 1 /* with kseg0 @ 0x40000000, kernel */
- /* addresses shift to 0x80000000 */
- bltz t0, 1f /* If kernel */
+ PTR_L t0, VCPU_COP0(k1)
+ LONG_L t0, COP0_STATUS(t0)
+ andi t0, KSU_USER | ST0_ERL | ST0_EXL
+ xori t0, KSU_USER
+ bnez t0, 1f /* If kernel */
addiu t1, k1, VCPU_GUEST_KERNEL_ASID /* (BD) */
addiu t1, k1, VCPU_GUEST_USER_ASID /* else user */
1:
@@ -442,9 +444,11 @@ __kvm_mips_return_to_guest:
mtc0 t0, CP0_EPC

/* Set the ASID for the Guest Kernel */
- sll t0, t0, 1 /* with kseg0 @ 0x40000000, kernel */
- /* addresses shift to 0x80000000 */
- bltz t0, 1f /* If kernel */
+ PTR_L t0, VCPU_COP0(k1)
+ LONG_L t0, COP0_STATUS(t0)
+ andi t0, KSU_USER | ST0_ERL | ST0_EXL
+ xori t0, KSU_USER
+ bnez t0, 1f /* If kernel */
addiu t1, k1, VCPU_GUEST_KERNEL_ASID /* (BD) */
addiu t1, k1, VCPU_GUEST_USER_ASID /* else user */
1: