Re: [RFC][PATCH 3/4] (Refcount + Waitqueue) implementation for cpu_hotplug "locking"

From: Ingo Molnar
Date: Thu Aug 24 2006 - 08:29:58 EST



* Gautham R Shenoy <ego@xxxxxxxxxx> wrote:

> This was the approach I tried to make it cache friendly.
> These are the problems I faced.
>
> - The reader checks the write_active flag. If it is set, he waits in the
> global read queue; otherwise he takes the lock and increments the per-cpu
> refcount.
>
> - The writer would have to check each cpu's read refcount and ensure that
> the read refcount is 0 on all of them before he sets write_active and
> begins a write operation.
> This creates a big race window. Assume the writer checks the refcounts
> in increasing order of cpus: while the writer is checking the refcount
> on cpu(j), a reader comes in on cpu(i) with i < j. Should the reader on
> cpu(i) wait or go ahead? If we use a global lock to serialize this,
> the whole purpose of maintaining per-cpu data is lost.
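
To make the quoted race concrete: with no global flag set first, the
writer-side drain would look roughly like the loop below, and a reader can
slip in on a cpu the writer has already scanned. (Illustration only; the
name hotplug_refcount is made up, not taken from the actual patch.)

	int cpu;

	/* racy drain: no global write_active has been set beforehand */
	for_each_possible_cpu(cpu) {
		/* wait for the readers currently on this cpu: */
		while (per_cpu(hotplug_refcount, cpu))
			cpu_relax();
		/*
		 * Window: a new reader may enter on any cpu that was
		 * already scanned, so "all refcounts were zero" can be
		 * stale by the time the loop finishes.
		 */
	}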

no. The writer first sets the global write_active flag, and _then_ goes
on to wait for all readers (if any) to get out of their critical
sections. (That's the purpose of the per-cpu waitqueue that readers use
to wake up a writer waiting for the refcount to go to 0.)

can you still see problems with this scheme?

(the 'write_active' flag is probably best implemented as a mutex, where
readers check mutex_is_locked(), and writers try to take it.)
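
i.e. roughly something like the sketch below. (Illustration only, not from
the actual patch and not compile-tested; the names are made up, the per-cpu
waitqueues are assumed to be initialized at boot, and reader preemption or
migration between lock and unlock is glossed over.)

#include <linux/mutex.h>
#include <linux/percpu.h>
#include <linux/wait.h>
#include <linux/cpumask.h>

/* the "write_active" flag: readers only check it, writers take it */
static DEFINE_MUTEX(cpu_hotplug_mutex);

/* per-cpu count of readers inside their critical section */
static DEFINE_PER_CPU(int, hotplug_refcount);

/* per-cpu waitqueue a departing reader uses to wake a draining writer */
static DEFINE_PER_CPU(wait_queue_head_t, hotplug_reader_wq);

/* global queue where readers wait while a writer is active */
static DECLARE_WAIT_QUEUE_HEAD(hotplug_writer_done_wq);

void lock_cpu_hotplug(void)				/* reader */
{
	for (;;) {
		/* wait until no writer is active: */
		wait_event(hotplug_writer_done_wq,
			   !mutex_is_locked(&cpu_hotplug_mutex));

		__get_cpu_var(hotplug_refcount)++;
		smp_mb();
		/*
		 * re-check: a writer might have taken the mutex between
		 * the check above and our increment.
		 */
		if (!mutex_is_locked(&cpu_hotplug_mutex))
			return;

		/* a writer got in first: back out, wake it, retry */
		if (!--__get_cpu_var(hotplug_refcount))
			wake_up(&__get_cpu_var(hotplug_reader_wq));
	}
}

void unlock_cpu_hotplug(void)				/* reader */
{
	if (!--__get_cpu_var(hotplug_refcount))
		wake_up(&__get_cpu_var(hotplug_reader_wq));
}

void cpu_hotplug_begin(void)				/* writer */
{
	int cpu;

	/* 1) block new readers ... */
	mutex_lock(&cpu_hotplug_mutex);

	/* 2) ... then wait for the existing ones to drain, cpu by cpu: */
	for_each_possible_cpu(cpu)
		wait_event(per_cpu(hotplug_reader_wq, cpu),
			   !per_cpu(hotplug_refcount, cpu));
}

void cpu_hotplug_done(void)				/* writer */
{
	mutex_unlock(&cpu_hotplug_mutex);
	wake_up_all(&hotplug_writer_done_wq);
}

The reader-side re-check after the increment (together with the barrier) is
what keeps a reader and a writer from both concluding they got in first.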

Ingo