Re: Internal vs. external barriers (was: Re: Interesting LKMM litmus test)

From: Paul E. McKenney
Date: Tue Jan 17 2023 - 22:50:48 EST


On Tue, Jan 17, 2023 at 03:15:06PM -0500, Alan Stern wrote:
> On Tue, Jan 17, 2023 at 09:43:08AM -0800, Paul E. McKenney wrote:
> > On Tue, Jan 17, 2023 at 10:56:34AM -0500, Alan Stern wrote:
> > > Isn't it true that the current code will flag srcu-bad-nesting if a
> > > litmus test has non-nested overlapping SRCU read-side critical sections?
> >
> > Now that you mention it, it does indeed flag srcu-bad-nesting.
> >
> > Just to see if I understand, different-values yields true if the set
> > contains multiple elements with the same value mapping to different
> > values. Or, to put it another way, if the relation does not correspond
> > to a function.
>
> As I understand it, given a relation r (i.e., a set of pairs of events),
> different-values(r) returns the sub-relation consisting of those pairs
> in r for which the value associated with the first event of the pair is
> different from the value associated with the second event of the pair.

OK, so different-values(r) is different from (r \ id) because the
former operates on values and the latter on events?
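
To make sure I have that straight, restating it in something like .bell
notation, where r stands for the relation pairing each srcu_read_lock()
event with an srcu_read_unlock() event (the names below are made up on
the spot, not proposed code):

	(* Keeps every pair in r: a lock event is never the same
	   event as its unlock event. *)
	let r-distinct-events = r \ id

	(* Keeps only those pairs whose lock-side return value differs
	   from the value passed to the corresponding unlock. *)
	let r-distinct-values = different-values(r)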

> For srcu_read_lock() and loads in general, the associated value is the
> value returned by the function call. For srcu_read_unlock() and stores
> in general, the associated value is the value (i.e., the second
> argument) passed to the function call.

As you say later, it would be good to have that "data" relation include
srcu_read_lock() and srcu_read_unlock(). ;-)

> > Or am I still missing something?
> >
> > > And if it is true, is there any need to change the memory model at this
> > > point?
> > >
> > > (And if it's not true, that's most likely due to a bug in herd7.)
> >
> > Agreed, changes must wait for SRCU support in herd7.
>
> Apparently the only change necessary is to make the srcu_read_lock and
> srcu_read_unlock events act like loads and stores. In particular, they
> need to be subject to the standard rules for calculating dependencies.
>
> Right now the behavior is kind of strange. The following simple litmus
> test:
>
> C test
> {}
> P0(int *x)
> {
> 	int r1;
> 	r1 = srcu_read_lock(x);
> 	srcu_read_unlock(x, r1);
> }
> exists (~0:r1=0)
>
> produces the following output from herd7:
>
> Test test Allowed
> States 1
> 0:r1=906;
> Ok
> Witnesses
> Positive: 1 Negative: 0
> Condition exists (not (0:r1=0))
> Observation test Always 1 0
> Time test 0.01
> Hash=2f42c87ae9c1d267f4e80c66f646b9bb
>
> Don't ask me where that 906 value comes from or why it isn't 0. Also,
> herd7's graphical output shows there is no data dependency from the lock
> to the unlock, but we need to have one.

Is it still the case that any herd7 value greater than 127 is special?

> > At which point something roughly similar to this might work?
> >
> > let srcu-rscs = return_value(Srcu-lock) ; (dep | rfi)* ;
> >                 parameter(Srcu-unlock, 2)
>
> I can't tell what that's supposed to mean. In any case, I think what
> you want would be:
>
> let srcu-rscs = ([Srcu-lock] ; data ; [Srcu-unlock]) & loc

Agreed, given that (data) is supposed to propagate through locals
and globals. Within the Linux kernel, assigning the return value from
srcu_read_lock() to a global is asking for it, though.

> > Given an Srcu-down and an Srcu-up:
> >
> > let srcu-rscs = ( return_value(Srcu-lock) ; (dep | rfi)* ;
> >                   parameter(Srcu-unlock, 2) ) |
> >                 ( return_value(Srcu-down) ; (dep | rf)* ;
> >                   parameter(Srcu-up, 2) )
> >
> > Seem reasonable, or am I missing yet something else?
>
> Not at all reasonable.
>
> For one thing, consider this question: Which statements lie inside a
> read-side critical section?

Here srcu_down_read() and srcu_up_read() are to srcu_read_lock() and
srcu_read_unlock() as down_read() and up_read() are to mutex_lock()
and mutex_unlock(). Not that this should be all that much comfort
given that I have no idea how one would go about modeling down_read()
and up_read() in LKMM.

> With srcu_read_lock() and a matching srcu_read_unlock(), the answer is
> clear: All statements po-between the two. With srcu_down_read() and
> srcu_up_read(), the answer is cloudy in the extreme.

And I agree that it must be clearly specified, and that my previous try
was completely lacking. Here is a second attempt:

let srcu-rscs = (([Srcu-lock] ; data ; [Srcu-unlock]) & loc) |
                (([Srcu-down] ; (data | rf)* ; [Srcu-up]) & loc)

(And I see your proposal and will try it.)

> Also, bear in mind that the Fundamental Law of RCU is formulated in
> terms of stores propagating to a critical section's CPU. What are we to
> make of this when a single critical section can belong to more than one
> CPU?

One way of answering this question is by analogy with down() and up()
when used as a cross-task mutex. Another is by mechanically applying
some of the current LKMM. Let's start with this second option.

LKMM works mostly with critical sections, but we also discussed ordering
based on the set of events po-after an srcu_read_lock() on the one hand
and the set of events po-before an srcu_read_unlock() on the other.
Starting here, the critical section is the intersection of these two sets.

In the case of srcu_down_read() and srcu_up_read(), as you say, whatever
might be a critical section must span processes. So what if, instead of
po, we used (say) xbstar? Then given a set A such that ([Srcu-down] ;
xbstar ; A) and a set B such that (B ; xbstar ; [Srcu-up]), the critical
section is the intersection of A and B.
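
Or, trying to write that down in something resembling .cat pseudocode
(names invented on the spot, and this glosses over the question of
which Srcu-down matches which Srcu-up):

	(* Sketch only: candidate critical-section events are those
	   xbstar-after some srcu_down_read() and xbstar-before some
	   srcu_up_read(), by analogy with the po-based intersection
	   above. *)
	let after-down = range([Srcu-down] ; xbstar)
	let before-up = domain(xbstar ; [Srcu-up])
	let srcu-cs-candidate = after-down & before-up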

One objection to this approach is that a bunch of unrelated events could
end up being defined as part of the critical section. Except that this
happens already anyway in real critical sections in the Linux kernel.

So what about down() and up() when used as cross-task mutexes?
These often do have conceptual critical sections that protect some
combination of resources, but these critical sections might span tasks
and/or workqueue handlers. And any reasonable definition of these
critical sections would be just as likely to pull in unrelated accesses as
the above intersection approach for srcu_down_read() and srcu_up_read().

But I am just now making all this up, so thoughts?

> Indeed, given:
>
> P0(int *x) {
> 	srcu_down_read(x);
> }
>
> P1(int *x) {
> 	srcu_up_read(x);
> }
>
> what are we to make of executions in which P1 executes before P0?

Indeed, there had better be something else forbidding such executions, or
this is an invalid use of srcu_down_read() and srcu_up_read(). This might
become more clear if the example is expanded to include the index returned
from srcu_down_read() that is to be passed to srcu_up_read():

P0(int *x, int *i) {
	WRITE_ONCE(*i, srcu_down_read(x));
}

P1(int *x, int *i) {
	srcu_up_read(x, READ_ONCE(*i));
}

Which it looks like you in fact have in your patch, so time for me
to go try that out.
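
For concreteness, the first thing I intend to throw at it is something
like the following, which just fleshes the above fragments out into a
complete test (the test name and exists clause are mine, and this of
course assumes that your patch teaches herd7 about srcu_down_read()
and srcu_up_read()):

C srcu-down-up-handoff

(*
 * Sketch only: P0 hands the index returned by srcu_down_read() to P1
 * through *i, and P1 passes it to srcu_up_read().
 *)

{}

P0(int *x, int *i)
{
	WRITE_ONCE(*i, srcu_down_read(x));
}

P1(int *x, int *i)
{
	int r1;

	r1 = READ_ONCE(*i);
	srcu_up_read(x, r1);
}

exists (1:r1=0)

Assuming herd7 gives srcu_down_read() a nonzero return value the way it
does for srcu_read_lock(), the exists clause is satisfied exactly when
P1 runs before P0's store to *i reaches it, which is the "P1 executes
before P0" situation you call out above.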

Thanx, Paul