Re: [PATCH v8 39/43] arm64: RME: Provide register list for unfinalized RME RECs

From: Gavin Shan
Date: Thu May 01 2025 - 19:32:18 EST


On 4/16/25 11:42 PM, Steven Price wrote:
From: Jean-Philippe Brucker <jean-philippe@xxxxxxxxxx>

KVM_GET_REG_LIST should not be called before SVE is finalized. The ioctl
handler currently returns -EPERM in this case. But because it uses
kvm_arm_vcpu_is_finalized(), it now also rejects the call for an
unfinalized REC, even though finalizing the REC can only be done late,
after Realm descriptor creation.

Move the check to copy_sve_reg_indices(). One adverse side effect of
this change is that a KVM_GET_REG_LIST call that only probes for the
array size will now succeed even if SVE is not finalized, but that seems
harmless since the following KVM_GET_REG_LIST with the full array will
fail.
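
For reference, the "probe for the array size" case mentioned above is the
usual two-step KVM_GET_REG_LIST sequence. A minimal userspace sketch
(vcpu_fd is assumed to be an already-created vCPU fd, get_reg_list() is an
illustrative helper, error handling trimmed):

#include <errno.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static struct kvm_reg_list *get_reg_list(int vcpu_fd)
{
	struct kvm_reg_list probe = { .n = 0 };
	struct kvm_reg_list *list;

	/* Probe call: fails with E2BIG and writes the required count to probe.n. */
	if (ioctl(vcpu_fd, KVM_GET_REG_LIST, &probe) < 0 && errno != E2BIG)
		return NULL;

	list = malloc(sizeof(*list) + probe.n * sizeof(__u64));
	if (!list)
		return NULL;
	list->n = probe.n;

	/* Full call: with this patch, fails with EPERM until SVE is finalized. */
	if (ioctl(vcpu_fd, KVM_GET_REG_LIST, list) < 0) {
		free(list);
		return NULL;
	}

	return list;
}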

Signed-off-by: Jean-Philippe Brucker <jean-philippe@xxxxxxxxxx>
Signed-off-by: Steven Price <steven.price@xxxxxxx>
---
 arch/arm64/kvm/arm.c   | 4 ----
 arch/arm64/kvm/guest.c | 9 +++------
 2 files changed, 3 insertions(+), 10 deletions(-)


With the below comments addressed:

Reviewed-by: Gavin Shan <gshan@xxxxxxxxxx>

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 4780e3af1bb9..eaa60ba6d97b 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1832,10 +1832,6 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
 		if (unlikely(!kvm_vcpu_initialized(vcpu)))
 			break;
 
-		r = -EPERM;
-		if (!kvm_arm_vcpu_is_finalized(vcpu))
-			break;
-
 		r = -EFAULT;
 		if (copy_from_user(&reg_list, user_list, sizeof(reg_list)))
 			break;
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index dd379aba31bb..1288920fc73d 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -671,12 +671,9 @@ static unsigned long num_sve_regs(const struct kvm_vcpu *vcpu)
 {
 	const unsigned int slices = vcpu_sve_slices(vcpu);
 
-	if (!vcpu_has_sve(vcpu))
+	if (!vcpu_has_sve(vcpu) || !kvm_arm_vcpu_sve_finalized(vcpu))
 		return 0;
 
-	/* Policed by KVM_GET_REG_LIST: */
-	WARN_ON(!kvm_arm_vcpu_sve_finalized(vcpu));
-
 	return slices * (SVE_NUM_PREGS + SVE_NUM_ZREGS + 1 /* FFR */)
 		+ 1; /* KVM_REG_ARM64_SVE_VLS */
 }

KVM_REG_ARM64_SVE_VLS is exposed even when SVE isn't finalized. See set_sve_vls(),
which requires that SVE isn't yet finalized and returns -EPERM otherwise. So this
would be something like below:

	if (!vcpu_has_sve(vcpu))
		return 0;

	if (!kvm_arm_vcpu_sve_finalized(vcpu))
		return 1; /* KVM_REG_ARM64_SVE_VLS */

	return slices * (SVE_NUM_PREGS + SVE_NUM_ZREGS + 1 /* FFR */)
		+ 1; /* KVM_REG_ARM64_SVE_VLS */
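
That matches the expected userspace flow, where KVM_REG_ARM64_SVE_VLS is
written before KVM_ARM_VCPU_FINALIZE. A rough sketch (vcpu_fd, the helper
name and the chosen vls[] bitmap are illustrative only):

#include <sys/ioctl.h>
#include <linux/kvm.h>

static int configure_sve(int vcpu_fd, const __u64 vls[KVM_ARM64_SVE_VLS_WORDS])
{
	struct kvm_one_reg reg = {
		.id   = KVM_REG_ARM64_SVE_VLS,
		.addr = (__u64)(unsigned long)vls,
	};
	int feature = KVM_ARM_VCPU_SVE;

	/* Only allowed before finalization; set_sve_vls() returns -EPERM afterwards. */
	if (ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg))
		return -1;

	/* After this, the remaining SVE registers become accessible. */
	return ioctl(vcpu_fd, KVM_ARM_VCPU_FINALIZE, &feature);
}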

@@ -692,8 +689,8 @@ static int copy_sve_reg_indices(const struct kvm_vcpu *vcpu,
 	if (!vcpu_has_sve(vcpu))
 		return 0;
 
-	/* Policed by KVM_GET_REG_LIST: */
-	WARN_ON(!kvm_arm_vcpu_sve_finalized(vcpu));
+	if (!kvm_arm_vcpu_sve_finalized(vcpu))
+		return -EPERM;
 
 	/*
 	 * Enumerate this first, so that userspace can save/restore in

Since KVM_REG_ARM64_SVE_VLS can be exposed before the vCPU is finalized, it'd be
better to move the check after the following block, where the KVM_REG_ARM64_SVE_VLS
index is copied to user space:

	/*
	 * Enumerate this first, so that userspace can save/restore in
	 * the order reported by KVM_GET_REG_LIST:
	 */
	reg = KVM_REG_ARM64_SVE_VLS;
	if (put_user(reg, uindices++))
		return -EFAULT;
	++num_regs;

	if (!kvm_arm_vcpu_sve_finalized(vcpu))
		return num_regs;

Thanks,
Gavin