Re: [PATCH RFC nohz_full 0/8] Provide infrastructure for full-system idle

From: Paul E. McKenney
Date: Wed Jun 26 2013 - 11:02:34 EST


On Tue, Jun 25, 2013 at 06:11:27PM -0700, Andy Lutomirski wrote:
> On 06/25/2013 02:49 PM, Thomas Gleixner wrote:
> > On Tue, 25 Jun 2013, Paul E. McKenney wrote:
> >> Note that this version pays attention to CPUs that have taken an NMI
> >> from idle. It is not clear to me that NMI handlers can safely access
> >> the time on a system that is long-term idle. Unless someone tells me
> >> that it is somehow safe to access time from an NMI from idle, I will
> >> remove NMI support in the next version.
> >
> > NMI cannot access any time-related functions, independent of NOHZ,
> > long-term idle, or whatever you come up with:
> >
> > write_seqcount_begin(&timekeeper_seq);
> >
> > ---> NMI
> > ...
> > do {
> >         seq = read_seqcount_begin(&timekeeper_seq);
> > } while (read_seqcount_retry(&timekeeper_seq, seq));
> >
> > Guess how well that works ....
> >
> > Thanks,
> >
> > tglx
> >
>
> Is this something worth fixing? One of the things on my infinitely long
> todo list is to replace that seqcount with a wait-free data structure,
> in which case this would be okay. I don't care about NMIs, but this
> would mean that clock_gettime would never stall just because the
> timekeeping code was running somewhere -- at worst you'd get a couple
> extra cache misses.
>
> The data structure is described here:
>
> http://link.springer.com/chapter/10.1007%2F978-3-540-92221-6_40
>
> (Sorry, this was my first paper and is therefore not so well written.
> Also, it costs $30, although I think I'm allowed to email copies out and
> probably even host them on a website somewhere.)
>
> The main downside would be a possible loss of monotonicity, like this:
>
> Thread a: read the timekeeping data
> Thread b: update the timekeeping data
> Thread c: start and finish reading the time (using new data)
> Thread a: read new raw clock value but compute using old timekeeping data
>
> This would be fixable.
>
> The data structure is essentially an array of copies of the protected
> data, which can be called bin 0, 1, 2, ..., N. The data is versioned,
> just like with seqcount. Bin i contains the most recent copy of the
> data that had a version number that's a multiple of 2^i, but any bin can
> also be marked as invalid if it's being written.
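
Just to make sure I am reading this correctly, the layout would be
something like the following?  (This is a userspace toy rather than
kernel code, and all of the names below are mine, not from your paper.)

#define NBINS 8				/* bins 0 .. NBINS-1 */

struct tk_data {			/* stand-in for the timekeeping data */
	unsigned long long cycle_last;
	unsigned long long xtime_nsec;
	/* ... */
};

struct tk_bin {
	_Atomic unsigned long version;	/* 0 means "being written" */
	struct tk_data data;
};

/*
 * bin[i] holds the most recent copy of the data whose version number
 * is a multiple of 2^i; the writer zeroes ->version to mark a bin
 * invalid before touching ->data, then stores the new version.
 */
static struct tk_bin bin[NBINS];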

If I remember correctly, something like this was proposed some years
back, but was rejected because it was not possible to set a bound on
how long a given thread would be using a given array element, which
could result in all elements being both in use and out of date.

It is possible to avoid this, but the only way I can see to do so
re-introduces retries.

> To write: update all bins that need updating (that is, 1 + the number of
> trailing zeros in the new version number), starting at the highest-numbered bin.
>
> To read starting at bin i: try to read bin i (just like with a
> seqcount). If that fails, then recursively read starting at bin i+1.
> As a double-check, re-try bin i. If the retry fails but the recursive
> read succeeded, return the value from the recursive read.
>
> The only way this can fail is if you race with ~2^N writes. You can try
> to read in a loop to avoid this problem.
>
> Unlike a seqcount, you need to race with more than 1 write, which
> eliminates this deadlock -- writers have to make continuous progress for
> readers to get stuck. But it's extremely unlikely that a reader ever
> has to loop.
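
And, if I am following the protocol correctly, the update and read
paths would look roughly like the following, continuing the toy sketch
above.  (Again C11 atomics with the memory ordering hand-waved, a
single updater assumed, and the double-check step is just my reading
of your description.)

#include <stdatomic.h>
#include <stdbool.h>
#include <string.h>

static unsigned long cur_version;	/* single updater assumed */

static void tk_write(const struct tk_data *new)
{
	unsigned long v = ++cur_version;	/* v >= 1 */
	int n = 1 + __builtin_ctzl(v);		/* bins needing an update */
	int i;

	if (n > NBINS)
		n = NBINS;
	for (i = n - 1; i >= 0; i--) {		/* highest-numbered bin first */
		atomic_store(&bin[i].version, 0);	/* mark invalid */
		memcpy(&bin[i].data, new, sizeof(*new));
		atomic_store(&bin[i].version, v);	/* publish */
	}
}

/* Try bin i; on a torn read, fall back to bin i + 1, then re-check bin i. */
static bool tk_read_from(int i, struct tk_data *out)
{
	unsigned long v1, v2;
	struct tk_data tmp;

	if (i >= NBINS)
		return false;			/* raced with ~2^NBINS updates */

	v1 = atomic_load(&bin[i].version);
	memcpy(&tmp, &bin[i].data, sizeof(tmp));
	v2 = atomic_load(&bin[i].version);
	if (v1 && v1 == v2) {
		*out = tmp;			/* consistent snapshot */
		return true;
	}

	if (!tk_read_from(i + 1, out))
		return false;

	/* Double-check bin i: if it is stable now, prefer its newer copy. */
	v1 = atomic_load(&bin[i].version);
	memcpy(&tmp, &bin[i].data, sizeof(tmp));
	v2 = atomic_load(&bin[i].version);
	if (v1 && v1 == v2)
		*out = tmp;
	return true;
}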

Unless I am missing something, readers still need to be able to loop; the
scheme just reduces the probability of their doing so.  Assuming that you were
willing to reuse old entries, you could avoid the deadlock that Thomas
pointed out, but at the cost of reusing an arbitrarily old entry in the
case where the NMI happened at the end of a long full-system idle period.
Which might not be such a good thing!

Therefore, I still intend to remove the NMI detection.

Thanx, Paul
