Re: [PATCH v5 06/13] lockdep: Implement crossrelease feature

From: Peter Zijlstra
Date: Fri Mar 03 2017 - 03:16:14 EST


On Fri, Mar 03, 2017 at 09:17:37AM +0900, Byungchul Park wrote:
> On Thu, Mar 02, 2017 at 02:40:31PM +0100, Peter Zijlstra wrote:

> > diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
> > index a95e5d1..7baea89 100644
> > --- a/kernel/locking/lockdep.c
> > +++ b/kernel/locking/lockdep.c
> > @@ -1860,6 +1860,17 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
> >  		}
> >  	}
> >
> > +	/*
> > +	 * Is the <prev> -> <next> redundant?
> > +	 */
> > +	this.class = hlock_class(prev);
> > +	this.parent = NULL;
> > +	ret = check_noncircular(&this, hlock_class(next), &target_entry);
> > +	if (!ret) /* exists, redundant */
> > +		return 2;
> > +	if (ret < 0)
> > +		return print_bfs_bug(ret);
> > +
> >  	if (!*stack_saved) {
> >  		if (!save_trace(&trace))
> >  			return 0;
>
> This would be very nice, if you are willing to add this code. However,
> the prev_gen_id thing is still useful, even though the code above can
> achieve the same. Agree?

So my goal was to avoid prev_gen_id, and yes I think the above does
that.
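
(For reference: check_noncircular() is just a forward BFS over the
dependency graph, so reusing it with <prev> as root and <next> as
target simply asks "is <next> already reachable from <prev>?". IIRC
the helper currently reads -- comments mine:

	static noinline int
	check_noncircular(struct lock_list *root, struct lock_class *target,
			  struct lock_list **target_entry)
	{
		int result;

		debug_atomic_inc(nr_cyclic_checks);

		/* BFS forwards from @root; returns 0 iff @target is reachable */
		result = __bfs_forwards(root, target, class_equal, target_entry);

		return result;
	}

so a 0 return above means the path, and thus the link, already exists.)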

Now the problem with the above condition is that it makes reports
harder to decipher, because by avoiding redundant links in our graph
we lose a possibly shorter path.

So while for correctness' sake it doesn't matter (the length of the
cycle is irrelevant; all that matters is that there is a cycle), the
humans on the receiving end tend to prefer shorter cycles.
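
Concretely, with three made-up lock classes A, B and C:

	A -> B, B -> C		established first
	A -> C			skipped as redundant; a path A -> B -> C exists
	C -> A			closes a cycle

	reported cycle: A -> B -> C -> A	(3 links)
	shortest cycle: A -> C -> A		(2 links; but the direct
						 A -> C edge was never added)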

And I think the same is true for crossrelease: avoiding redundant links
increases cycle length.

(And remember, BFS will otherwise find the shortest cycle.)

That said, I'd be fairly interested in numbers on how many links this
avoids. I'll go make a check_redundant() version of the above and put
in a proper counter, so I can see what it does for a regular boot etc.
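
Something like the below perhaps; note the nr_redundant* counters
don't exist yet, the names are invented here:

	/*
	 * Sketch only: the same forward BFS as check_noncircular(), plus
	 * a counter so the lockdep stats can show how often we check.
	 */
	static noinline int
	check_redundant(struct lock_list *root, struct lock_class *target,
			struct lock_list **target_entry)
	{
		debug_atomic_inc(nr_redundant_checks);

		return __bfs_forwards(root, target, class_equal, target_entry);
	}

with the call site in check_prev_add() becoming:

	ret = check_redundant(&this, hlock_class(next), &target_entry);
	if (!ret) {
		debug_atomic_inc(nr_redundant);	/* link avoided */
		return 2;
	}
	if (ret < 0)
		return print_bfs_bug(ret);

and the two counters wired up as DEBUG_LOCKDEP stats so they show up
in /proc/lockdep_stats.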