Re: [PATCH v3 4/6] powerpc: atomic: Implement atomic{,64}_*_return_* variants

From: Boqun Feng
Date: Tue Oct 13 2015 - 21:00:57 EST


On Tue, Oct 13, 2015 at 09:35:54PM +0800, Boqun Feng wrote:
> On Tue, Oct 13, 2015 at 02:21:32PM +0100, Will Deacon wrote:
> > On Mon, Oct 12, 2015 at 10:14:04PM +0800, Boqun Feng wrote:
> [snip]
> > > +/*
> > > + * Since {add,sub}_return_relaxed and xchg_relaxed are implemented with
> > > + * a "bne-" instruction at the end, an isync is enough as an acquire
> > > + * barrier on platforms without lwsync.
> > > + */
> > > +#ifdef CONFIG_SMP
> > > +#define smp_acquire_barrier__after_atomic() \
> > > +	__asm__ __volatile__(PPC_ACQUIRE_BARRIER : : : "memory")
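(For reference, the tail of the generated ll/sc sequence looks roughly
like the following; the op and register numbers are made up here, only
the shape matters:

1:	lwarx	r0, 0, r3	# load-reserve the atomic variable
	add	r0, r0, r4	# the relaxed operation itself
	stwcx.	r0, 0, r3	# store-conditional back
	bne-	1b		# retry if we lost the reservation
	isync			# bne- + isync orders all subsequent
				# loads/stores => acquire

the conditional branch followed by isync is a well-known acquire idiom
on PPC, which is why isync suffices where lwsync is not available.)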
> >
> > I'm not keen on this barrier, as it sounds like it's part of the kernel
> > memory model, as opposed to an implementation detail on PowerPC (and
> > we've already got enough of that in the generic code ;).
> >
>
> Indeed, but we still have smp_lwsync() ;-)
>
> > Can you name it something different please (and maybe #undef it when
> > you're done)?
> >
>
> I've considered #undef'ing it after use, but now I think open-coding this
> into PPC's __atomic_op_acquire() is a better idea?
>
>
> #define __atomic_op_acquire(op, args...)				\
> ({									\
> 	typeof(op##_relaxed(args)) __ret = op##_relaxed(args);		\
> 	__asm__ __volatile__(PPC_ACQUIRE_BARRIER : : : "memory");	\

Should be:

__asm__ __volatile__(PPC_ACQUIRE_BARRIER "" : : : "memory");

(the extra empty string literal keeps the asm statement syntactically
valid on !SMP builds, where PPC_ACQUIRE_BARRIER expands to nothing).

> 	__ret;								\
> })
>
> PPC_ACQUIRE_BARRIER will be empty if !SMP, so in that case this becomes
> a pure compiler barrier, which is just what we need.
>
> Regards,
> Boqun
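
For completeness: if I remember the generic code correctly, the wrappers
in include/linux/atomic.h then build the acquire variants out of the
relaxed ones automatically, along the lines of:

#define atomic_add_return_acquire(...) \
	__atomic_op_acquire(atomic_add_return, __VA_ARGS__)

#define xchg_acquire(...) \
	__atomic_op_acquire(xchg, __VA_ARGS__)

so PPC only needs to provide the _relaxed implementations plus its own
__atomic_op_acquire().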
