[PATCH] lib/percpu_counter.c: fix __percpu_counter_add

From: Ming Lei
Date: Sun Jan 12 2014 - 11:11:25 EST


From: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>

__percpu_counter_add() may be called from softirq/hardirq context
(for example, blk_mq_queue_exit() is typically called from a
hardirq/softirq handler), so the per-CPU counter must be updated with
this_cpu_add(), which is IRQ-safe; otherwise updates may be lost.

The patch fixes a problem where 'rmmod null_blk' hangs in
blk_cleanup_queue() because of miscounting of
request_queue->mq_usage_counter.

Cc: Paul Gortmaker <paul.gortmaker@xxxxxxxxxxxxx>
Cc: Shaohua Li <shli@xxxxxxxxxxxx>
Cc: Jens Axboe <axboe@xxxxxxxxx>
Cc: Fan Du <fan.du@xxxxxxxxxxxxx>
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Signed-off-by: Ming Lei <tom.leiming@xxxxxxxxx>
---
This patch is v1 of the previous "lib/percpu_counter.c:
disable local irq when updating percpu counter", and takes Andrew's
approach, which may be more efficient on ARCHs (x86, s390) that
have an optimized this_cpu_add().

lib/percpu_counter.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/lib/percpu_counter.c b/lib/percpu_counter.c
index 7473ee3..1da85bb 100644
--- a/lib/percpu_counter.c
+++ b/lib/percpu_counter.c
@@ -82,10 +82,10 @@ void __percpu_counter_add(struct percpu_counter *fbc, s64 amount, s32 batch)
unsigned long flags;
raw_spin_lock_irqsave(&fbc->lock, flags);
fbc->count += count;
+ __this_cpu_sub(*fbc->counters, count - amount);
raw_spin_unlock_irqrestore(&fbc->lock, flags);
- __this_cpu_write(*fbc->counters, 0);
} else {
- __this_cpu_write(*fbc->counters, count);
+ this_cpu_add(*fbc->counters, amount);
}
preempt_enable();
}
--
1.7.9.5
