Re: [PATCH v1] kvm: x86: implement PV send_IPI method
From: Jason Wang
Date: Fri Jul 18 2025 - 07:16:24 EST
On Fri, Jul 18, 2025 at 7:01 PM Chao Gao <chao.gao@xxxxxxxxx> wrote:
>
> On Fri, Jul 18, 2025 at 03:52:30PM +0800, Jason Wang wrote:
> >On Fri, Jul 18, 2025 at 2:25 PM Cindy Lu <lulu@xxxxxxxxxx> wrote:
> >>
> >> From: Jason Wang <jasowang@xxxxxxxxxx>
> >>
> >> We used to have PV versions of send_IPI_mask and
> >> send_IPI_mask_allbutself. This patch implements a PV send_IPI method
> >> to reduce the number of vmexits.
>
> It won't reduce the number of VM-exits; in fact, it may increase them on CPUs
> that support IPI virtualization.
Sure, but I wonder if it reduces the number of vmexits when there's no
APICv, or for L2 VMs. In xAPIC mode a single IPI takes two exits (one
for the ICR2 write and one for the ICR write), so I thought it could
reduce those 2 vmexits to 1?
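A minimal sketch of what I have in mind, assuming the guest reuses the
existing KVM_HC_SEND_IPI hypercall ABI (bitmap low/high, base APIC ID,
ICR value); the name kvm_send_ipi is illustrative, not necessarily what
the patch actually does:

static void kvm_send_ipi(int cpu, int vector)
{
        /* Single-bit destination bitmap, based at the target's APIC ID. */
        unsigned long ipi_bitmap = 1;
        u32 min = per_cpu(x86_cpu_to_apicid, cpu);
        unsigned long icr = (vector == NMI_VECTOR) ?
                            APIC_DM_NMI : (APIC_DM_FIXED | vector);

        /* One hypercall instead of the ICR2 + ICR pair of MMIO exits. */
        kvm_hypercall4(KVM_HC_SEND_IPI, ipi_bitmap, 0, min, icr);
}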
>
> With IPI virtualization enabled, *unicast* and physical-addressing IPIs won't
> cause a VM-exit.
Right.
> Instead, the microcode posts interrupts directly to the target
> vCPU. The PV version always causes a VM-exit.
Yes, but I think that applies to all of the PV IPIs, not just this one.
>
> >>
> >> Signed-off-by: Jason Wang <jasowang@xxxxxxxxxx>
> >> Tested-by: Cindy Lu <lulu@xxxxxxxxxx>
> >
> >I think a question here is: are we able to see a performance
> >improvement in any kind of setup?
>
> It may result in a negative performance impact.
Userspace can check for this and enable PV IPI only in the cases where
it suits. For example, Hyper-V does something similar:
void __init hv_apic_init(void)
{
        if (ms_hyperv.hints & HV_X64_CLUSTER_IPI_RECOMMENDED) {
                pr_info("Hyper-V: Using IPI hypercalls\n");
                /*
                 * Set the IPI entry points.
                 */
                orig_apic = *apic;

                apic_update_callback(send_IPI, hv_send_ipi);
                apic_update_callback(send_IPI_mask, hv_send_ipi_mask);
                apic_update_callback(send_IPI_mask_allbutself,
                                     hv_send_ipi_mask_allbutself);
                apic_update_callback(send_IPI_allbutself,
                                     hv_send_ipi_allbutself);
                apic_update_callback(send_IPI_all, hv_send_ipi_all);
                apic_update_callback(send_IPI_self, hv_send_ipi_self);
        }
Note the unicast send_IPI is there as well.
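On the KVM guest side we already gate the existing PV IPIs on a CPUID
feature bit in roughly the same way (arch/x86/kernel/kvm.c, behind
kvm_para_has_feature(KVM_FEATURE_PV_SEND_IPI)); I'd assume the new
unicast callback gets registered alongside:

static void kvm_setup_pv_ipi(void)
{
        apic_update_callback(send_IPI_mask, kvm_send_ipi_mask);
        apic_update_callback(send_IPI_mask_allbutself,
                             kvm_send_ipi_mask_allbutself);
        /*
         * Presumably the patch would add the unicast hook here:
         * apic_update_callback(send_IPI, kvm_send_ipi);
         */
        pr_info("setup PV IPIs\n");
}

Since userspace controls which CPUID bits it exposes, it can simply
hide the feature on hosts where IPI virtualization would make the
hypercall a loss.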
Thanks