Re: [PATCH 2/10] x86: convert to generic helpers for IPI function calls

From: Jens Axboe
Date: Wed Apr 30 2008 - 08:31:53 EST


On Wed, Apr 30 2008, Paul E. McKenney wrote:
> On Wed, Apr 30, 2008 at 01:35:42PM +0200, Jens Axboe wrote:
> > On Tue, Apr 29 2008, Jeremy Fitzhardinge wrote:
> > > Jens Axboe wrote:
> > > >-int xen_smp_call_function_mask(cpumask_t mask, void (*func)(void *),
> > > >-                               void *info, int wait)
> > > >
> > > [...]
> > > >-        /* Send a message to other CPUs and wait for them to respond */
> > > >-        xen_send_IPI_mask(mask, XEN_CALL_FUNCTION_VECTOR);
> > > >-
> > > >-        /* Make sure other vcpus get a chance to run if they need to. */
> > > >-        yield = false;
> > > >-        for_each_cpu_mask(cpu, mask)
> > > >-                if (xen_vcpu_stolen(cpu))
> > > >-                        yield = true;
> > > >-
> > > >-        if (yield)
> > > >-                HYPERVISOR_sched_op(SCHEDOP_yield, 0);
> > > >
> > >
> > > I added this to deal with the case where you're sending an IPI to
> > > another VCPU which isn't currently running on a real cpu. In this case
> > > you could end up spinning while the other VCPU is waiting for a real CPU
> > > to run on. (Basically the same problem that spinlocks have in a virtual
> > > environment.)
> > >
> > > However, this is at best a partial solution to the problem, and I never
> > > benchmarked if it really makes a difference. Since any other virtual
> > > environment would have the same problem, it's best if we can solve it
> > > generically. (Of course a synchronous single-target cross-cpu call is a
> > > simple cross-cpu rpc, which could be implemented very efficiently in the
> > > host/hypervisor by simply doing a vcpu context switch...)
> >
> > So, what would your advice be? Seems safe enough to ignore for now and
> > attack it if it becomes a real problem.
>
> How about an arch-specific function/macro invoked in the spin loop?
> The generic implementation would do nothing, but things like Xen
> could implement it as above.
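
For reference, that hook could look something on the order of the below
sketch. Purely illustrative (none of these names exist in the tree
today), but it shows the shape of it:

/*
 * Illustrative names only: arch-overridable relax hook, invoked while
 * spinning on completion of a cross-cpu call.  The default is a plain
 * cpu_relax().
 */
#ifndef arch_smp_call_function_relax
#define arch_smp_call_function_relax()  cpu_relax()
#endif

        /* ... in the generic wait loop ... */
        while (!call_done(data))
                arch_smp_call_function_relax();

and Xen would then define it to yield the vcpu:

#define arch_smp_call_function_relax()  \
        HYPERVISOR_sched_op(SCHEDOP_yield, 0)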

Alternatively, Xen could just stuff that bit into its
arch_send_call_function_ipi(); something like the below should be fine.
My question to Jeremy was more about whether it should be kept at all.
I guess it's safer to just keep it and retain the existing behaviour
(and let Jeremy/others evaluate it at will later on). Note that I got
rid of the yield bool and instead break out of the loop once we have
called the hypervisor.

Jeremy, shall I add this?

diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
index 2dfe093..064e6dc 100644
--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -352,7 +352,17 @@ static void xen_send_IPI_mask(cpumask_t mask, enum ipi_vector vector)
 
 void xen_smp_send_call_function_ipi(cpumask_t mask)
 {
+        int cpu;
+
         xen_send_IPI_mask(mask, XEN_CALL_FUNCTION_VECTOR);
+
+        /* Make sure other vcpus get a chance to run if they need to. */
+        for_each_cpu_mask(cpu, mask) {
+                if (xen_vcpu_stolen(cpu)) {
+                        HYPERVISOR_sched_op(SCHEDOP_yield, 0);
+                        break;
+                }
+        }
 }
 
 void xen_smp_send_call_function_single_ipi(int cpu)

--
Jens Axboe
