Re: [RFC PATCH 1/6] kernel: implement queue spinlock API

From: Paul E. McKenney
Date: Fri Feb 08 2013 - 00:04:18 EST


On Thu, Feb 07, 2013 at 08:36:43PM -0800, Paul E. McKenney wrote:
> On Thu, Feb 07, 2013 at 07:48:33PM -0800, Michel Lespinasse wrote:
> > On Thu, Feb 7, 2013 at 4:40 PM, Paul E. McKenney
> > <paulmck@xxxxxxxxxxxxxxxxxx> wrote:
> > > On Thu, Feb 07, 2013 at 04:03:54PM -0800, Eric Dumazet wrote:
> > >> It adds yet another memory write to store the node pointer in the
> > >> lock...
> > >>
> > >> I suspect it's going to increase false sharing.
> > >
> > > On the other hand, compared to straight MCS, it reduces the need to
> > > pass the node address around. Furthermore, the node pointer is likely
> > > to be in the same cache line as the lock word itself, and finally
> > > some architectures can do a double-pointer store.
> > >
> > > Of course, it might well be slower, but it seems like it is worth
> > > giving it a try.
> >
> > Right. Another nice point about this approach is that there needs to
> > be only one node per spinning CPU, so the node pointers (both tail and
> > next) might be replaced with CPU identifiers, which would bring the
> > spinlock size down to the same as with the ticket spinlock (which in
> > turn makes it that much more likely that we'll have atomic stores of
> > that size).
>
> Good point! I must admit that this is one advantage of having the
> various _irq spinlock acquisition primitives disable irqs before
> spinning. ;-)

Right... For spinlocks that -don't- disable irqs, you need to deal with
the possibility that a CPU gets interrupted while spinning, and that the
interrupt handler also tries to acquire a queued lock. One way to deal
with this is to have a node per CPU per irq nesting level (CPUxirq).
Of course, if interrupt handlers always disable irqs when acquiring a
spinlock, then you only need two nodes per CPU (CPUx2).
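
To make the combined idea concrete, here is a minimal user-space sketch
(not the API from this patch series; names like qnode, qspinlock and
encode_tail() are made up, and it uses C11 atomics rather than kernel
primitives).  The lock word holds a small encoded (cpu, context) index
instead of a node pointer, and each CPU owns a two-entry node array
indexed by context, matching the CPUx2 case above:

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

#define MAX_CPUS	64
#define CTX_TASK	0	/* task context */
#define CTX_IRQ		1	/* interrupt context */
#define NR_CTX		2	/* contexts per CPU that may spin */

struct qnode {
	_Atomic(struct qnode *) next;
	atomic_bool locked;	/* set by predecessor on hand-off */
};

/* One small node array per CPU; the entry used is implied by context. */
static struct qnode qnodes[MAX_CPUS][NR_CTX];

/*
 * The lock word stores an encoded (cpu, ctx) tail rather than a node
 * pointer, so it can be as small as a ticket lock.  0 means "unlocked".
 */
struct qspinlock {
	atomic_uint tail;
};

static inline unsigned int encode_tail(int cpu, int ctx)
{
	return (unsigned int)(cpu * NR_CTX + ctx) + 1;	/* +1: 0 is "no tail" */
}

static inline struct qnode *decode_tail(unsigned int tail)
{
	unsigned int idx = tail - 1;

	return &qnodes[idx / NR_CTX][idx % NR_CTX];
}

static void q_spin_lock(struct qspinlock *lock, int cpu, int ctx)
{
	struct qnode *node = &qnodes[cpu][ctx];
	unsigned int prev_tail;

	atomic_store_explicit(&node->next, NULL, memory_order_relaxed);
	atomic_store_explicit(&node->locked, false, memory_order_relaxed);

	/* Publish ourselves as the new tail of the queue. */
	prev_tail = atomic_exchange_explicit(&lock->tail, encode_tail(cpu, ctx),
					     memory_order_acq_rel);
	if (prev_tail) {
		/* Queue non-empty: link behind old tail, spin locally. */
		struct qnode *prev = decode_tail(prev_tail);

		atomic_store_explicit(&prev->next, node, memory_order_release);
		while (!atomic_load_explicit(&node->locked, memory_order_acquire))
			;	/* spin on our own cache line only */
	}
}

static void q_spin_unlock(struct qspinlock *lock, int cpu, int ctx)
{
	struct qnode *node = &qnodes[cpu][ctx];
	unsigned int expected = encode_tail(cpu, ctx);
	struct qnode *next;

	/* Still the tail?  Then clearing the lock word releases the lock. */
	if (atomic_compare_exchange_strong_explicit(&lock->tail, &expected, 0,
						    memory_order_release,
						    memory_order_relaxed))
		return;

	/* A successor exists (or is about to link in); hand the lock off. */
	while (!(next = atomic_load_explicit(&node->next, memory_order_acquire)))
		;
	atomic_store_explicit(&next->locked, true, memory_order_release);
}

For simplicity the sketch keeps the node tied to the lock for the whole
critical section, so it assumes a given (cpu, ctx) holds at most one
contended queued lock at a time; a real implementation would need to
relax that, e.g. by handing the lock off into the lock word itself once
the spin is over.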

Thanx, Paul
