Re: [PATCH 1/7] rcu: Fix preempt-unsafe debug check of rcu extended quiescent state

From: Frederic Weisbecker
Date: Wed Sep 28 2011 - 08:44:45 EST


On Wed, Sep 28, 2011 at 11:17:40AM +0800, Yong Zhang wrote:
> On Tue, Sep 27, 2011 at 11:44:56PM +0200, Frederic Weisbecker wrote:
> > On Tue, Sep 27, 2011 at 11:16:02AM -0400, Pavel Ivanov wrote:
> > > > What matters is that we do that check by ensuring we are really
> > > > checking the value of the cpu var on the CPU we are currently
> > > > running on, and not on some other random one that can change its
> > > > dynticks value at the same time.
> > >
> > > Define the "CPU we are currently running on" in this context. Is it
> > > the CPU executing the call to rcu_check_extended_qs() or the CPU
> > > executing the return from rcu_check_extended_qs()? These CPUs can be
> > > different both before your patch and after it. And the function can
> > > return the extended_qs state of either of these CPUs, again both
> > > before and after the patch.
> > > If the calling code wants these CPUs to be the same it has to disable
> > > preemption before making the call. And if it does so then additional
> > > preemption disabling inside the function is pointless.
> >
> > So, like Paul said, rcu_read_lock() doesn't necessarily imply
> > disabling preemption.
> >
> > Hence, by the time we call rcu_check_extended_qs(), the current task
> > can be migrated at any point: before the call, while in the function
> > (except for one small part), or after it.
> >
> > The CPU I was referring to when I talked about the "CPU we are currently
> > running on" is the CPU we are running on between the call to get_cpu_var()
> > and put_cpu_var(). This one cannot change because get_cpu_var()
> > disables preemption.
> >
> > So consider this piece of code:
> >
> > struct rcu_dynticks *rdtp =
> >         &get_cpu_var(rcu_dynticks);
> > bool ext_qs = true;
> >
> > if (atomic_read(&rdtp->dynticks) & 0x1)
> >         ext_qs = false;
> >
> > put_cpu_var(rcu_dynticks);
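> >
> > (For reference, get_cpu_var()/put_cpu_var() boil down to roughly the
> > following; this is a simplified sketch, not the exact kernel macros:)
> >
> > /* Simplified sketch of the per-cpu accessors involved: */
> > #define get_cpu_var(var) (*({ preempt_disable(); &__get_cpu_var(var); }))
> > #define put_cpu_var(var) preempt_enable()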
> >
> > What I expect from disabling preemption is that when I read the
> > per-CPU variable rdtp->dynticks, I'm sure I'm reading the value
> > belonging to the local CPU and not the rdtp->dynticks of another CPU.
> >
> > If I don't disable preemption, as was the case before my patch:
> >
> > 0 struct rcu_dynticks *rdtp =
> > 1         &__raw_get_cpu_var(rcu_dynticks);
> > 2
> > 3 if (atomic_read(&rdtp->dynticks) & 0x1)
> > 4         ext_qs = false;
> >
> > I can fetch the rdtp of CPU 1 on line 0. Then the task migrates to
> > CPU 2. So on line 3 I'm reading rdtp->dynticks of CPU 1 from CPU 2,
> > which is racy because CPU 1 can change the value of rdtp->dynticks
> > concurrently.
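> >
> > Schematically, the problematic interleaving looks like this (an
> > illustrative sketch, not actual kernel code):
> >
> > /*
> >  *  Task (preemptible)                     CPU 1
> >  *  ------------------                     -----
> >  *  on CPU 1:
> >  *    rdtp = &per_cpu(rcu_dynticks, 1);
> >  *  ... migrates to CPU 2 ...
> >  *  on CPU 2:
> >  *    atomic_read(&rdtp->dynticks);        rcu_enter_nohz()
> >  *      reads CPU 1's counter while          updates CPU 1's dynticks
> >  *      CPU 1 is updating it                 concurrently
> >  */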
> >
> > Now, indeed, it looks weird because we can still migrate at any time
> > outside that preempt-disabled section.
> >
> > So let's explore the two cases where this function can be called:
> >
> > - From the idle task. For now this is the only place where we can
> > run sections of code in RCU extended quiescent state. If any use
> > of RCU is made in such a section, it will hit our check.
> > Here there is no head-scratching about the role of disabling preemption
> > because idle tasks can't be migrated: there is one per CPU, so
> > the rcu_dynticks variable we look at is always the same inside a
> > given idle task.
>
> Yeah.
>
> >
> > - From a normal task. We assume it can be migrated anytime. But
> > normal tasks aren't supposed to be in an RCU extended quiescent state.
> > Still, the check can be useful there to spot, for example, cases where
> > we exit the idle loop without calling rcu_exit_nohz().
> >
> > Now from a normal task, when we call rcu_read_lock(), we assume
> > we can read the dynticks value of any CPU, wherever we migrate
> > to. So for example, say we are running idle on CPU 1 and then exit
> > idle without calling rcu_exit_nohz(). The next task running on this
> > CPU is about to call rcu_read_lock(), but at the last moment before
> > we do our check it migrates to CPU 2. It won't detect the issue on
> > CPU 1 then. But that doesn't matter much: sooner or later there are
> > fair chances that some call to rcu_read_lock() on CPU 1 will report
> > the issue.
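> >
> > Putting it together, the patched check is essentially the following
> > (a sketch assembled from the snippet above; the actual patch may
> > differ in details):
> >
> > static bool rcu_check_extended_qs(void)
> > {
> >         struct rcu_dynticks *rdtp = &get_cpu_var(rcu_dynticks);
> >         bool ext_qs = true;
> >
> >         /* Odd dynticks value: this CPU is not in an extended QS. */
> >         if (atomic_read(&rdtp->dynticks) & 0x1)
> >                 ext_qs = false;
> >
> >         put_cpu_var(rcu_dynticks);
> >
> >         return ext_qs;
> > }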
>
> So the main usage is to detect unbalanced rcu_enter_nohz()/rcu_exit_nohz(),
> right?

In fact the primary goal is to detect RCU uses between the calls
to rcu_enter_nohz() and rcu_exit_nohz(). We found several such bugs in idle
code.

But it might find other things like unbalanced rcu_enter_nohz()/rcu_exit_nohz().

Also, rcu_enter_nohz() is only used in idle for now, but it may find
broader uses in the future. So we want this check everywhere.
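
For illustration, here is roughly the pairing that the check guards
(a simplified sketch of the idle-loop usage; do_low_level_idle() is a
hypothetical stand-in for the arch idle code, not an actual kernel
function):

	while (1) {
		rcu_enter_nohz();	/* enter RCU extended QS */

		/*
		 * No RCU read-side use is allowed in here: the
		 * debug check would (rightly) complain about it.
		 */
		do_low_level_idle();

		rcu_exit_nohz();	/* leave the extended QS */

		schedule();		/* RCU is usable again */
	}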

> If so, I suggest this should be commented somewhere, like the commit log;
> because I was focusing on the idle task before then think it's harmless
> with/without this patch :)

You're right, I'll add a comment to explain that.

Thanks.