Re: [PATCH] percpu data: only iterate over possible CPUs
From: Andrew Morton
Date: Fri Feb 10 2006 - 14:08:15 EST
Andi Kleen <ak@xxxxxx> wrote:
>
> On Friday 10 February 2006 11:42, Andrew Morton wrote:
> > Andi Kleen <ak@xxxxxx> wrote:
> > >
> > > On Thursday 09 February 2006 19:04, Andrew Morton wrote:
> > > > Ashok Raj <ashok.raj@xxxxxxxxx> wrote:
> > > > >
> > > > > The problem was that with ACPI, simply looking at the namespace doesn't
> > > > > give us an exact idea of how many processors are possible on this platform.
> > > >
> > > > We need to fix this asap - the performance penalty for HOTPLUG_CPU=y,
> > > > NR_CPUS=lots will be appreciable.
> > >
> > > What is this performance penalty exactly?
> >
> > All those for_each_cpu() loops will hit NR_CPUS cachelines instead of
> > hweight(cpu_possible_map) cachelines.
>
> But are there any in real fast paths? iirc they are mostly in initialization,
> where it doesn't matter too much.
>
Could be so.
I just added one to percpu_counters though. And it'd be a pain to
introduce a cpu notifier for each and every percpu_counter.
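For reference, a rough sketch of what such a per-counter notifier would involve,
using the 2.6-era register_cpu_notifier()/CPU_DEAD interface.  This is purely
illustrative and not part of the patch below; the names fold_dead_cpu_count and
example_counter are invented for the example.

/*
 * Illustrative sketch only: fold a dead CPU's local delta into the global
 * count when the CPU goes away, so a sum could afterwards skip that CPU.
 */
#include <linux/cpu.h>
#include <linux/notifier.h>
#include <linux/percpu.h>
#include <linux/percpu_counter.h>
#include <linux/spinlock.h>

static struct percpu_counter example_counter;

static int fold_dead_cpu_count(struct notifier_block *nb,
			       unsigned long action, void *hcpu)
{
	unsigned int cpu = (unsigned long)hcpu;
	long *pcount;

	if (action != CPU_DEAD)
		return NOTIFY_OK;

	/* move the dead CPU's residue into the global count, under the lock */
	pcount = per_cpu_ptr(example_counter.counters, cpu);
	spin_lock(&example_counter.lock);
	example_counter.count += *pcount;
	*pcount = 0;
	spin_unlock(&example_counter.lock);
	return NOTIFY_OK;
}

static struct notifier_block example_counter_nb = {
	.notifier_call = fold_dead_cpu_count,
};

/* ...and every percpu_counter user would then need, at init time: */
/*	register_cpu_notifier(&example_counter_nb); */

Multiply that boilerplate by every percpu_counter in the tree and the objection
is clear; the patch below instead just walks the possible CPUs at read time.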
From: Andrew Morton <akpm@xxxxxxxx>
Implement percpu_counter_sum(). This is a more accurate but slower version of
percpu_counter_read_positive().
We need this for Alex's speedup-ext3_statfs patch. Otherwise it would be too
inaccurate on large CPU counts.
Cc: Ravikiran G Thirumalai <kiran@xxxxxxxxxxxx>
Cc: Alex Tomas <alex@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxx>
---
 include/linux/percpu_counter.h |    6 ++++++
 mm/swap.c                      |   25 +++++++++++++++++++++++--
 2 files changed, 29 insertions(+), 2 deletions(-)
diff -puN include/linux/percpu_counter.h~percpu_counter_sum include/linux/percpu_counter.h
--- devel/include/linux/percpu_counter.h~percpu_counter_sum 2006-02-08 13:34:08.000000000 -0800
+++ devel-akpm/include/linux/percpu_counter.h 2006-02-08 13:34:08.000000000 -0800
@@ -39,6 +39,7 @@ static inline void percpu_counter_destro
 }
 
 void percpu_counter_mod(struct percpu_counter *fbc, long amount);
+long percpu_counter_sum(struct percpu_counter *fbc);
 
 static inline long percpu_counter_read(struct percpu_counter *fbc)
 {
@@ -92,6 +93,11 @@ static inline long percpu_counter_read_p
 	return fbc->count;
 }
 
+static inline long percpu_counter_sum(struct percpu_counter *fbc)
+{
+	return percpu_counter_read_positive(fbc);
+}
+
 #endif	/* CONFIG_SMP */
 
 static inline void percpu_counter_inc(struct percpu_counter *fbc)
diff -puN mm/swap.c~percpu_counter_sum mm/swap.c
--- devel/mm/swap.c~percpu_counter_sum 2006-02-08 13:34:08.000000000 -0800
+++ devel-akpm/mm/swap.c 2006-02-08 13:34:08.000000000 -0800
@@ -491,13 +491,34 @@ void percpu_counter_mod(struct percpu_co
 	if (count >= FBC_BATCH || count <= -FBC_BATCH) {
 		spin_lock(&fbc->lock);
 		fbc->count += count;
+		*pcount = 0;
 		spin_unlock(&fbc->lock);
-		count = 0;
+	} else {
+		*pcount = count;
 	}
-	*pcount = count;
 	put_cpu();
 }
 EXPORT_SYMBOL(percpu_counter_mod);
+
+/*
+ * Add up all the per-cpu counts, return the result.  This is a more accurate
+ * but much slower version of percpu_counter_read_positive()
+ */
+long percpu_counter_sum(struct percpu_counter *fbc)
+{
+	long ret;
+	int cpu;
+
+	spin_lock(&fbc->lock);
+	ret = fbc->count;
+	for_each_cpu(cpu) {
+		long *pcount = per_cpu_ptr(fbc->counters, cpu);
+		ret += *pcount;
+	}
+	spin_unlock(&fbc->lock);
+	return ret < 0 ? 0 : ret;
+}
+EXPORT_SYMBOL(percpu_counter_sum);
 #endif
 
 /*
_
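For completeness, roughly how a statfs-style caller (such as Alex's ext3_statfs
change) could use the new helper.  This is an illustrative sketch only, not part
of the patch; the counter name free_blocks is invented for the example and would
be kept up to date elsewhere via percpu_counter_mod().

#include <linux/percpu_counter.h>

static struct percpu_counter free_blocks;

static long report_free_blocks(void)
{
	/*
	 * percpu_counter_read_positive() is cheap but can be stale: each CPU
	 * may be holding up to FBC_BATCH-1 locally, so the error grows with
	 * the number of CPUs.  percpu_counter_sum() takes the lock and walks
	 * every possible CPU, so it is slower but exact.
	 */
	return percpu_counter_sum(&free_blocks);
}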