Re: [PATCH] rcu: Is it safe to enter an RCU read-side critical section?

From: Peter Zijlstra
Date: Mon Sep 09 2013 - 09:17:44 EST


On Mon, Sep 09, 2013 at 08:55:04AM -0400, Steven Rostedt wrote:
> On Mon, 9 Sep 2013 14:45:49 +0200
> Frederic Weisbecker <fweisbec@xxxxxxxxx> wrote:
>
> > > This just proves that the caller of rcu_is_cpu_idle() must disable
> > > preemption itself for the entire time that it needs to use the result
> > > of rcu_is_cpu_idle().
> >
> > Sorry, I don't understand your point here. What's wrong with checking the
> > ret from another CPU?
>
> Hmm, OK, this is why that code is in desperate need of a comment.
>
> From reading the context a bit more, it seems that the per-CPU value is
> really a "per-task" value that happens to be using per-CPU variables, and
> changes on context switches. Is that correct?
>
> Anyway, it requires a comment to explain that we are not checking the
> CPU state, but really the current task state, otherwise that 'ret'
> value wouldn't travel with the task, but would stick with the CPU.
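
To make that concrete, a minimal sketch of the two readings (illustrative
only, not the in-tree code; function names invented):

	/*
	 * Reading 1: if the result were genuinely CPU state, the
	 * caller would have to keep preemption off for as long as
	 * it uses it:
	 */
	static int check_as_cpu_state(void)
	{
		int ret;

		preempt_disable();
		ret = rcu_is_cpu_idle();
		/* ... act on ret while still on the same CPU ... */
		preempt_enable();

		return ret;
	}

	/*
	 * Reading 2: if the dynticks counter is really per-task
	 * state that the context-switch path keeps consistent on
	 * whichever CPU the task lands on, a bare snapshot travels
	 * with the task:
	 */
	static int check_as_task_state(void)
	{
		int ret = rcu_is_cpu_idle();	/* may migrate after this */

		return ret;	/* still describes this task's RCU state */
	}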

Egads.. and the only reason we couldn't do the immediate load is because
of that atomic mess.

Also, if it's per-task, why don't we have this in the task struct? The
current scheme makes the context switch more expensive -- is this the
right trade-off?
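
For illustration, the per-task variant might look roughly like this
(field name invented; purely a sketch):

	struct task_struct {
		/* ... */
		int	rcu_dynticks;	/* per-task RCU dyntick state */
	};

	static inline int rcu_is_cpu_idle(void)
	{
		/* No per-CPU access, so no preemption worries: */
		return !(current->rcu_dynticks & 0x01);
	}

That would take the bookkeeping out of the context-switch path, which is
what the trade-off question above is about.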

So maybe something like:

int rcu_is_cpu_idle(void)
{
	/*
	 * Comment explaining that rcu_dynticks.dynticks really is a
	 * per-task something and we need preemption-safe loading.
	 */
	atomic_t dynticks = this_cpu_read(rcu_dynticks.dynticks);

	return !(__atomic_read(&dynticks) & 0x01);
}

Where __atomic_read() would be like atomic_read() but without the
volatile crap since that's entirely redundant here I think.
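
For concreteness, such a helper could be as simple as (sketch only, not
an existing interface):

	/*
	 * Like atomic_read(), but a plain load: the volatile cast in
	 * atomic_read() forces a fresh read from memory, which buys
	 * nothing here because 'dynticks' is already a private
	 * on-stack copy.
	 */
	static inline int __atomic_read(const atomic_t *v)
	{
		return v->counter;
	}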

The this_cpu_read() should ensure we get a preemption-safe copy of the
value.

Once that this_cpu stuff grows preemption checks, we'd need something
like __raw_this_cpu_read() or whatever the variant without preemption
checks ends up being called.