Re: [PATCH] clock: Fix smp_processor_id() in preemptible bug

From: Paul E. McKenney
Date: Tue Mar 14 2017 - 12:24:59 EST


On Mon, Mar 13, 2017 at 08:55:21AM -0700, Paul E. McKenney wrote:
> On Mon, Mar 13, 2017 at 01:46:21PM +0100, Peter Zijlstra wrote:
> > On Thu, Mar 09, 2017 at 10:37:32AM -0800, Paul E. McKenney wrote:
> > > And it does pass light testing. I will hammer it harder this evening.
> > >
> > > So please send a formal patch!
> >
> > Changed it a bit...
> >
> > ---
> > Subject: sched/clock: Some clear_sched_clock_stable() vs hotplug wobbles
> >
> > Paul reported two independent problems with clear_sched_clock_stable().
> >
> > - if we tickle it during hotplug (even though the sched_clock was
> > already marked unstable) we'll attempt to schedule_work() and
> > this explodes because RCU isn't watching the new CPU quite yet.
> >
> > - since we run all of __clear_sched_clock_stable() from workqueue
> > context, there's a preempt problem.
> >
> > Cure both by only doing the static_branch_disable() from a workqueue,
> > and only when it's still stable.
> >
> > This leaves the problem of what to do when hotplug actually wrecks the
> > TSC though, because if it was stable and now isn't, we will want to run
> > that work, which will then prod RCU the wrong way. Bloody hotplug.
> >
> > Reported-by: "Paul E. McKenney" <paulmck@xxxxxxxxxxxxxxxxxx>
> > Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
>
> This passes initial testing. I will hammer it harder overnight, but
> in the meantime:

And it did just fine overnight, thank you!

> Tested-by: "Paul E. McKenney" <paulmck@xxxxxxxxxxxxxxxxxx>

I am guessing that you will be pushing this one for -rc3, but please
let me know.

Thanx, Paul

> > ---
> > kernel/sched/clock.c | 17 ++++++++++++-----
> > 1 file changed, 12 insertions(+), 5 deletions(-)
> >
> > diff --git a/kernel/sched/clock.c b/kernel/sched/clock.c
> > index a08795e21628..fec0f58c8dee 100644
> > --- a/kernel/sched/clock.c
> > +++ b/kernel/sched/clock.c
> > @@ -141,7 +141,14 @@ static void __set_sched_clock_stable(void)
> >  	tick_dep_clear(TICK_DEP_BIT_CLOCK_UNSTABLE);
> >  }
> > 
> > -static void __clear_sched_clock_stable(struct work_struct *work)
> > +static void __sched_clock_work(struct work_struct *work)
> > +{
> > +	static_branch_disable(&__sched_clock_stable);
> > +}
> > +
> > +static DECLARE_WORK(sched_clock_work, __sched_clock_work);
> > +
> > +static void __clear_sched_clock_stable(void)
> >  {
> >  	struct sched_clock_data *scd = this_scd();
> > 
> > @@ -160,11 +167,11 @@ static void __clear_sched_clock_stable(struct work_struct *work)
> >  		scd->tick_gtod, gtod_offset,
> >  		scd->tick_raw, raw_offset);
> > 
> > -	static_branch_disable(&__sched_clock_stable);
> >  	tick_dep_set(TICK_DEP_BIT_CLOCK_UNSTABLE);
> > -}
> > 
> > -static DECLARE_WORK(sched_clock_work, __clear_sched_clock_stable);
> > +	if (sched_clock_stable())
> > +		schedule_work(&sched_clock_work);
> > +}
> > 
> >  void clear_sched_clock_stable(void)
> >  {
> > @@ -173,7 +180,7 @@ void clear_sched_clock_stable(void)
> >  	smp_mb(); /* matches sched_clock_init_late() */
> > 
> >  	if (sched_clock_running == 2)
> > -		schedule_work(&sched_clock_work);
> > +		__clear_sched_clock_stable();
> >  }
> > 
> >  void sched_clock_init_late(void)
> >