Re: [PATCH] x86, mm: only wait for flushes from online cpus

From: Srivatsa S. Bhat
Date: Wed Jul 18 2012 - 17:07:01 EST


On 06/21/2012 03:33 AM, mandeep.baines@xxxxxxxxx wrote:
> From: Mandeep Singh Baines <msb@xxxxxxxxxxxx>
>
> A cpu in the mm_cpumask could go offline before we send the
> invalidate IPI causing us to wait forever.
>
> Bug Trace:
>
> <4>[10222.234548] WARNING: at ../../arch/x86/kernel/apic/ipi.c:113 default_send_IPI_mask_logical+0x58/0x73()
> <5>[10222.234633] Pid: 23605, comm: lmt-udev Tainted: G C 3.2.7 #1
> <5>[10222.234639] Call Trace:
> <5>[10222.234651] [<8102e666>] warn_slowpath_common+0x68/0x7d
> <5>[10222.234661] [<81016c36>] ? default_send_IPI_mask_logical+0x58/0x73
> <5>[10222.234670] [<8102e68f>] warn_slowpath_null+0x14/0x18
> <5>[10222.234678] [<81016c36>] default_send_IPI_mask_logical+0x58/0x73
> <5>[10222.234687] [<8101eec2>] flush_tlb_others_ipi+0x86/0xba
> <5>[10222.234696] [<8101f0bb>] flush_tlb_mm+0x5e/0x62
> <5>[10222.234703] [<8101e36c>] pud_populate+0x2c/0x31
> <5>[10222.234711] [<8101e409>] pgd_alloc+0x98/0xc7
> <5>[10222.234719] [<8102c881>] mm_init.isra.38+0xcc/0xf3
> <5>[10222.234727] [<8102cbc2>] dup_mm+0x68/0x34e
> <5>[10222.234736] [<8139bbae>] ? _cond_resched+0xd/0x21
> <5>[10222.234745] [<810a5b7c>] ? kmem_cache_alloc+0x26/0xe2
> <5>[10222.234753] [<8102d421>] ? copy_process+0x556/0xda6
> <5>[10222.234761] [<8102d641>] copy_process+0x776/0xda6
> <5>[10222.234770] [<8102dd5e>] do_fork+0xcb/0x1d4
> <5>[10222.234778] [<810a8c96>] ? do_sync_write+0xd3/0xd3
> <5>[10222.234786] [<810a94ab>] ? vfs_read+0x95/0xa2
> <5>[10222.234795] [<81008850>] sys_clone+0x20/0x25
> <5>[10222.234804] [<8139d8c5>] ptregs_clone+0x15/0x30
> <5>[10222.234812] [<8139d7f7>] ? sysenter_do_call+0x12/0x26
> <4>[10222.234818] ---[ end trace 31e095600f50fd48 ]---
> <3>[10234.880183] BUG: soft lockup - CPU#0 stuck for 11s! [lmt-udev:23605]
>
> Addresses http://crosbug.com/31737
>
> Signed-off-by: Mandeep Singh Baines <msb@xxxxxxxxxxxx>
> Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
> Cc: Ingo Molnar <mingo@xxxxxxxxxx>
> Cc: "H. Peter Anvin" <hpa@xxxxxxxxx>
> Cc: x86@xxxxxxxxxx
> Cc: Tejun Heo <tj@xxxxxxxxxx>
> Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> Cc: Stephen Rothwell <sfr@xxxxxxxxxxxxxxxx>
> Cc: Christoph Lameter <cl@xxxxxxxxxx>
> Cc: Olof Johansson <olofj@xxxxxxxxxxxx>
> ---
> arch/x86/mm/tlb.c | 7 ++++++-
> 1 files changed, 6 insertions(+), 1 deletions(-)
>
> diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
> index 5e57e11..010090d 100644
> --- a/arch/x86/mm/tlb.c
> +++ b/arch/x86/mm/tlb.c
> @@ -194,8 +194,13 @@ static void flush_tlb_others_ipi(const struct cpumask *cpumask,
>  		apic->send_IPI_mask(to_cpumask(f->flush_cpumask),
>  				    INVALIDATE_TLB_VECTOR_START + sender);
>

This function is always called with preemption disabled, right?
In that case, _while_ this function is running, a CPU cannot go offline,
because CPU offline goes through stop_machine(). (I understand that a CPU
might go offline in between calculating that cpumask and calling
preempt_disable() - which is the race you are trying to handle.)

So, why not take the offline CPUs out of the way even before sending the IPI?
That way, we need not modify the while loop below at all.

> -		while (!cpumask_empty(to_cpumask(f->flush_cpumask)))
> +		while (!cpumask_empty(to_cpumask(f->flush_cpumask))) {
> +			/* Only wait for online cpus */
> +			cpumask_and(to_cpumask(f->flush_cpumask),
> +				    to_cpumask(f->flush_cpumask),
> +				    cpu_online_mask);
>  			cpu_relax();
> +		}
>  	}
>
>  	f->flush_mm = NULL;
>

That is, how about something like this:

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 5e57e11..9d387a9 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -186,7 +186,11 @@ static void flush_tlb_others_ipi(const struct cpumask *cpumask,

 	f->flush_mm = mm;
 	f->flush_va = va;
-	if (cpumask_andnot(to_cpumask(f->flush_cpumask), cpumask, cpumask_of(smp_processor_id()))) {
+
+	cpumask_and(to_cpumask(f->flush_cpumask), cpumask, cpu_online_mask);
+	cpumask_clear_cpu(smp_processor_id(), to_cpumask(f->flush_cpumask));
+
+	if (!cpumask_empty(to_cpumask(f->flush_cpumask))) {
 		/*
 		 * We have to send the IPI only to
 		 * CPUs affected.
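
Spelled out, flush_tlb_others_ipi() would then look roughly like this - just a
sketch, with the sender-slot selection and the per-sender lock handling at the
top and bottom of the function elided, and the surrounding declarations
recalled from the 3.2-era tlb.c rather than copied verbatim:

static void flush_tlb_others_ipi(const struct cpumask *cpumask,
				 struct mm_struct *mm, unsigned long va)
{
	unsigned int sender;
	union smp_flush_state *f;

	/* ... pick the sender slot, point f at it, take its lock as before ... */

	f->flush_mm = mm;
	f->flush_va = va;

	/*
	 * Restrict the mask to online CPUs (minus ourselves) *before*
	 * sending the IPI, so the wait loop below can never spin on a
	 * CPU that is gone and will not respond.
	 */
	cpumask_and(to_cpumask(f->flush_cpumask), cpumask, cpu_online_mask);
	cpumask_clear_cpu(smp_processor_id(), to_cpumask(f->flush_cpumask));

	if (!cpumask_empty(to_cpumask(f->flush_cpumask))) {
		/*
		 * We have to send the IPI only to
		 * CPUs affected.
		 */
		apic->send_IPI_mask(to_cpumask(f->flush_cpumask),
				    INVALIDATE_TLB_VECTOR_START + sender);

		/* The wait loop itself stays untouched. */
		while (!cpumask_empty(to_cpumask(f->flush_cpumask)))
			cpu_relax();
	}

	f->flush_mm = NULL;
	f->flush_va = 0;

	/* ... release the per-sender lock as before ... */
}

Since every CPU left in f->flush_cpumask at IPI time is online, and no CPU can
go offline while we spin here with preemption disabled, the loop is guaranteed
to terminate once each target clears its bit.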



Regards,
Srivatsa S. Bhat
IBM Linux Technology Center
