Re: [PATCH tip/core/rcu 07/13] rcu: Add smp_mb__after_atomic() to sync_exp_work_done()

From: Paul E. McKenney
Date: Thu Apr 20 2017 - 11:05:20 EST


On Thu, Apr 20, 2017 at 01:17:43PM +0200, Peter Zijlstra wrote:
> On Wed, Apr 19, 2017 at 04:23:52PM -0700, Paul E. McKenney wrote:
> > On Thu, Apr 13, 2017 at 07:51:36PM +0200, Peter Zijlstra wrote:
> > > On Thu, Apr 13, 2017 at 10:39:51AM -0700, Paul E. McKenney wrote:
> > >
> > > > Well, if there are no objections, I will fix up the smp_mb__before_atomic()
> > > > and smp_mb__after_atomic() pieces.
> > >
> > > Feel free.
> >
> > How about if I add this in the atomic_ops.txt description of these
> > two primitives?
> >
> > Preceding a non-value-returning read-modify-write atomic
> > operation with smp_mb__before_atomic() and following it with
> > smp_mb__after_atomic() provides the same full ordering that is
> > provided by value-returning read-modify-write atomic operations.
>
> That seems correct. It also already seems a direct implication of the
> extant text though. But as you're wont to say, people need repetition
> > and pointing out the obvious etc.

Especially given that it never is obvious until you understand it.
At which point you don't need the documentation. Therefore, documentation
is mostly useful to people who are missing a few pieces of the overall
puzzle. Which we all were at some time in the past. ;-)

> The way I read that document, specifically this:
>
> "For example, smp_mb__before_atomic() can be used like so:
>
> obj->dead = 1;
> smp_mb__before_atomic();
> atomic_dec(&obj->ref_count);
>
> It makes sure that all memory operations preceding the atomic_dec()
> call are strongly ordered with respect to the atomic counter
> operation."
>
> Leaves no question that these operations must be full barriers.
>
> And therefore, your paragraph that basically states that:
>
> smp_mb__before_atomic();
> atomic_inc_return_relaxed();
> smp_mb__after_atomic();
>
> equals:
>
> atomic_inc_return();
>
> is implied, no?

That is a reasonable argument, but some very intelligent people didn't
make that leap when reading it, so more redundancy appears to be needed.

> > commit 5789953adc360b4d3685dc89513655e6bfb83980
> > Author: Paul E. McKenney <paulmck@xxxxxxxxxxxxxxxxxx>
> > Date: Wed Apr 19 16:20:07 2017 -0700
> >
> > atomics: Add header comments for spin_unlock_wait() and spin_is_locked()
> >
> > There is material describing the ordering guarantees provided by
> > spin_unlock_wait() and spin_is_locked(), but it is not necessarily
> > easy to find. This commit therefore adds a docbook header comment
> > to both functions informally describing their semantics.
> >
> > Signed-off-by: Paul E. McKenney <paulmck@xxxxxxxxxxxxxxxxxx>
> >
> > diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
> > index 59248dcc6ef3..2647dc7f3ea9 100644
> > --- a/include/linux/spinlock.h
> > +++ b/include/linux/spinlock.h
> > @@ -369,11 +369,49 @@ static __always_inline int spin_trylock_irq(spinlock_t *lock)
> > raw_spin_trylock_irqsave(spinlock_check(lock), flags); \
> > })
> >
> > +/**
> > + * spin_unlock_wait - Interpose between successive critical sections
> > + * @lock: the spinlock whose critical sections are to be interposed.
> > + *
> > + * Semantically this is equivalent to a spin_lock() immediately
> > + * followed by a spin_unlock(). However, most architectures have
> > + * more efficient implementations in which spin_unlock_wait()
> > + * does not block concurrent lock acquisition, and in some cases
> > + * does not even write to the lock variable.
> > + * Nevertheless, spin_unlock_wait() can have high overhead, so if
> > + * you feel the need to use it, please check to see if there is
> > + * a better way to get your job done.
> > + *
> > + * The ordering guarantees provided by spin_unlock_wait() are:
> > + *
> > + * 1. All accesses preceding the spin_unlock_wait() happen before
> > + * any accesses in later critical sections for this same lock.
> > + * 2. All accesses following the spin_unlock_wait() happen after
> > + * any accesses in earlier critical sections for this same lock.
> > + */
> > static __always_inline void spin_unlock_wait(spinlock_t *lock)
> > {
> > raw_spin_unlock_wait(&lock->rlock);
> > }
>
> ACK

Very good, adding your Acked-by.

> > +/**
> > + * spin_is_locked - Conditionally interpose after prior critical sections
> > + * @lock: the spinlock whose critical sections are to be interposed.
> > + *
> > + * Semantically this is equivalent to a spin_trylock(), and, if
> > + * the spin_trylock() succeeds, immediately followed by a (mythical)
> > + * spin_unlock_relaxed(). The return value from spin_trylock() is returned
> > + * by spin_is_locked(). Note that all current architectures have extremely
> > + * efficient implementations in which the spin_is_locked() does not even
> > + * write to the lock variable.
> > + *
> > + * A successful spin_is_locked() primitive in some sense "takes its place"
> > + * after some critical section for the lock in question. Any accesses
> > + * following a successful spin_is_locked() call will therefore happen
> > + * after any accesses in the preceding critical sections for that
> > + * same lock. Note, however, that spin_is_locked() provides absolutely no
> > + * ordering guarantees for code preceding the call to spin_is_locked().
> > + */
> > static __always_inline int spin_is_locked(spinlock_t *lock)
> > {
> > return raw_spin_is_locked(&lock->rlock);
>
> I'm currently confused on this one. The case listed in the qspinlock code
> doesn't appear to exist in the kernel anymore (or at least, I'm having
> trouble finding it).
>
> That said, I'm also not sure spin_is_locked() provides an acquire, as
> that comment has an explicit smp_acquire__after_ctrl_dep();

OK, I have dropped this portion of the patch for the moment.

Going forward, exactly what semantics do you believe spin_is_locked()
provides?

Do any of the current implementations need to change to provide the
semantics expected by the various use cases?

Thanx, Paul