Re: [PATCH v4 05/15] x86/sev: Use kernel provided SVSM Calling Areas

From: Tom Lendacky
Date: Mon May 06 2024 - 09:14:58 EST


On 5/6/24 05:09, Borislav Petkov wrote:
> Ok,
>
> I think this is very readable and clear what's going on:

I'll test it out.


> static __always_inline void issue_svsm_call(struct svsm_call *call, u8 *pending)
> {
> 	register unsigned long rax asm("rax") = call->rax;
> 	register unsigned long rcx asm("rcx") = call->rcx;
> 	register unsigned long rdx asm("rdx") = call->rdx;
> 	register unsigned long r8 asm("r8") = call->r8;
> 	register unsigned long r9 asm("r9") = call->r9;
>
> 	call->caa->call_pending = 1;
>
> 	asm volatile("rep; vmmcall\n\t"
> 		     : "+r" (rax), "+r" (rcx), "+r" (rdx), "+r" (r8), "+r" (r9));
>
> 	xchg(pending, 1);

This isn't quite right. The xchg has to occur between pending and call->caa->call_pending.
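I.e., something like this (untested):

	/*
	 * Atomically read the SVSM's pending flag and clear it, so the
	 * caller can tell whether the request was actually processed.
	 */
	*pending = xchg(&call->caa->call_pending, 0);

so that pending reflects call->caa->call_pending at the point the guest
regains control, and the flag is reset for the next call.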

Thanks,
Tom


> 	call->rax_out = rax;
> 	call->rcx_out = rcx;
> 	call->rdx_out = rdx;
> 	call->r8_out = r8;
> 	call->r9_out = r9;
> }
>
> and the asm looks ok but the devil's in the detail.