[PATCH] vfs: Avoid IPI storm due to bh LRU invalidation

From: Jan Kara
Date: Mon Feb 06 2012 - 08:55:54 EST


When discovery of lots of disks happens in parallel, we call
invalidate_bh_lrus() once for each disk from the partitioning code, resulting
in a storm of IPIs and causing softlockup detection to fire (it takes several
*minutes* for a machine to execute all the invalidate_bh_lrus() calls).

Fix the issue by allowing only a single invalidation to run, using a mutex, and
let waiters for the mutex figure out whether someone invalidated the LRUs for
them while they were waiting.

Signed-off-by: Jan Kara <jack@xxxxxxx>
---
fs/buffer.c | 23 ++++++++++++++++++++++-
1 files changed, 22 insertions(+), 1 deletions(-)

I feel this is a slightly hacky approach, but it works. If someone has a
better idea, please speak up.

diff --git a/fs/buffer.c b/fs/buffer.c
index 1a30db7..56b0d2b 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -1384,10 +1384,31 @@ static void invalidate_bh_lru(void *arg)
}
put_cpu_var(bh_lrus);
}
-
+
+/*
+ * Invalidate all buffers in LRUs. Since we have to signal all CPUs to
+ * invalidate their per-cpu local LRU lists, this is a rather expensive operation.
+ * So we optimize the case of several parallel calls to invalidate_bh_lrus()
+ * which happens from partitioning code when lots of disks appear in the
+ * system during boot.
+ */
void invalidate_bh_lrus(void)
{
+ static DEFINE_MUTEX(bh_invalidate_mutex);
+ static long bh_invalidate_sequence;
+
+ long my_bh_invalidate_sequence = bh_invalidate_sequence;
+
+ mutex_lock(&bh_invalidate_mutex);
+ /* Someone did bh invalidation while we were sleeping? */
+ if (my_bh_invalidate_sequence != bh_invalidate_sequence)
+ goto out;
+ bh_invalidate_sequence++;
+ /* Inc of bh_invalidate_sequence must happen before we invalidate bhs */
+ smp_wmb();
on_each_cpu(invalidate_bh_lru, NULL, 1);
+out:
+ mutex_unlock(&bh_invalidate_mutex);
}
EXPORT_SYMBOL_GPL(invalidate_bh_lrus);

--
1.7.1

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/