Re: [patch 2/2] x86_64: ticket lock spinlock

From: Andi Kleen
Date: Thu Aug 09 2007 - 05:54:50 EST


On Thursday 09 August 2007 03:42:54 Nick Piggin wrote:
> On Wed, Aug 08, 2007 at 12:26:55PM +0200, Andi Kleen wrote:
> >
> > > *
> > > * (the type definitions are in asm/spinlock_types.h)
> > > */
> > >
> > > +#if (NR_CPUS > 256)
> > > +#error spinlock supports a maximum of 256 CPUs
> > > +#endif
> > > +
> > > static inline int __raw_spin_is_locked(raw_spinlock_t *lock)
> > > {
> > > - return *(volatile signed int *)(&(lock)->slock) <= 0;
> > > + int tmp = *(volatile signed int *)(&(lock)->slock);
> >
> > Why is slock not volatile signed int in the first place?
>
> Don't know really. Why does spin_is_locked need it to be volatile?

I suppose it's there in case a caller doesn't have a memory barrier
(they should in theory, but might not). Without any barrier
or volatile, gcc might optimize the load away.
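
Roughly, the failure mode looks like this (a minimal sketch with made-up
names and a plain struct standing in for raw_spinlock_t, not kernel code):

	typedef struct {
		unsigned int slock;		/* stand-in for the real lock word */
	} sketch_spinlock_t;

	static inline int sketch_spin_is_locked(sketch_spinlock_t *lock)
	{
		/* the volatile cast forces a fresh load on every call */
		return *(volatile unsigned int *)&lock->slock != 0;
	}

	static void sketch_wait_unlocked(sketch_spinlock_t *lock)
	{
		/*
		 * With the volatile read, this loop re-reads memory on each
		 * iteration.  With a plain load and no barrier in the loop,
		 * gcc would be free to hoist the read and spin forever on a
		 * stale value.
		 */
		while (sketch_spin_is_locked(lock))
			;			/* cpu_relax() in real code */
	}

In practice such polling loops should contain cpu_relax() or an explicit
barrier anyway, so the volatile is only a safety net for callers that don't.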

The other accesses in spinlocks hopefully all have barriers.
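
For context, the quoted hunk reads the whole lock word once and then compares
the two ticket bytes; the general scheme (illustrative names only, not the
patch itself) is roughly:

	/*
	 * Byte-sized ticket lock layout (sketch):
	 *   bits 0..7   - ticket currently being served ("owner")
	 *   bits 8..15  - next ticket to be handed out ("next")
	 * The lock is free exactly when owner == next.  With only 8 bits
	 * per field, at most 256 CPUs can queue before the counters wrap
	 * into each other, which is what the NR_CPUS > 256 #error guards.
	 */
	static inline int sketch_ticket_is_locked(int slock)
	{
		unsigned int owner = slock & 0xff;
		unsigned int next  = (slock >> 8) & 0xff;

		return owner != next;
	}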

Ok, anyway, the patches look good.

-Andi