Re: [PATCH 2/2] [PATCH] sched: Add smp_rmb() in task rq locking cycles

From: Oleg Nesterov
Date: Fri Feb 20 2015 - 15:26:11 EST


On 02/20, Peter Zijlstra wrote:
>
> I think I agree with Oleg in that we only need the smp_rmb(); of course
> that wants a somewhat elaborate comment to go along with it. How about
> something like so:
>
> spin_unlock_wait(&local);
> /*
> * The above spin_unlock_wait() forms a control dependency with
> * any following stores; because we must first observe the lock
> * unlocked and we cannot speculate stores.
> *
> * Subsequent loads however can easily pass through the loads
> * represented by spin_unlock_wait() and therefore we need the
> * read barrier.
> *
> * This together is stronger than ACQUIRE for @local and
> * therefore we will observe the complete prior critical section
> * of @local.
> */
> smp_rmb();
>
> The obvious alternative is using spin_unlock_wait() with an
> smp_load_acquire(), but that might be more expensive on some archs due
> to repeated issuing of memory barriers.

Yes, yes, thanks!
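
Just to spell out the case I had in mind, a purely illustrative sketch
(the struct/function names below are made up, not from the patch):

#include <linux/spinlock.h>

struct foo {
	spinlock_t	lock;
	int		data;
};

/* CPU 0: ordinary critical section */
static void foo_update(struct foo *f)
{
	spin_lock(&f->lock);
	f->data = 1;
	spin_unlock(&f->lock);
}

/* CPU 1: wants to observe everything the previous lock holder did */
static int foo_read(struct foo *f)
{
	spin_unlock_wait(&f->lock);
	/*
	 * The control dependency above only orders our subsequent
	 * stores; without this barrier the load below could be
	 * satisfied before we observed the lock unlocked.
	 */
	smp_rmb();
	return f->data;
}

Without the smp_rmb() CPU 1 can return 0 even though it already saw the
lock dropped by CPU 0.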

But note that we need the same comment after sem_lock()->spin_is_locked().

So perhaps we can add this comment into include/linux/spinlock.h? In this
case perhaps it makes sense to add, say,

#define smp_mb__after_unlock_wait() smp_rmb()

with this comment above? Another potential user is task_work_run(). It could
use rmb() too, but this again needs the same fat comment.
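
IOW, roughly the below in include/linux/spinlock.h; completely untested and
only meant to show the idea, the comment is basically yours:

/*
 * To be used after spin_unlock_wait() or after a spin_is_locked() check
 * which observed the lock unlocked.
 *
 * Observing the lock unlocked forms a control dependency with any
 * following stores; we must first observe the lock unlocked and we
 * cannot speculate stores.  Subsequent loads, however, can pass the
 * loads which tested the lock, and therefore need the read barrier.
 *
 * Together this is stronger than ACQUIRE for that lock, so we will
 * observe the complete critical section of its previous owner.
 */
#define smp_mb__after_unlock_wait()	smp_rmb()

Then sem_lock() and task_work_run() could simply use this helper and the
fat comment lives in a single place.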

What do you think?

Oleg.
