Re: [PATCH tip/core/rcu 04/13] rcu: Don't disable preemption for Tiny and Tree RCU readers

From: Paul E. McKenney
Date: Tue Oct 06 2015 - 13:42:17 EST


On Tue, Oct 06, 2015 at 10:16:30AM -0700, Josh Triplett wrote:
> On Tue, Oct 06, 2015 at 10:01:01AM -0700, Paul E. McKenney wrote:
> > On Tue, Oct 06, 2015 at 09:44:45AM -0700, Josh Triplett wrote:
> > > On Tue, Oct 06, 2015 at 09:13:39AM -0700, Paul E. McKenney wrote:
> > > > From: Boqun Feng <boqun.feng@xxxxxxxxx>
> > > >
> > > > Because preempt_disable() maps to barrier() for non-debug builds,
> > > > it forces the compiler to spill and reload registers. Because Tree
> > > > RCU and Tiny RCU now only appear in CONFIG_PREEMPT=n builds, these
> > > > barrier() instances generate needless extra code for each instance of
> > > > rcu_read_lock() and rcu_read_unlock(). This extra code slows down Tree
> > > > RCU and bloats Tiny RCU.
> > > >
> > > > This commit therefore removes the preempt_disable() and preempt_enable()
> > > > from the non-preemptible implementations of __rcu_read_lock() and
> > > > __rcu_read_unlock(), respectively. However, for debug purposes,
> > > > preempt_disable() and preempt_enable() are still invoked if
> > > > CONFIG_PREEMPT_COUNT=y, because this allows detection of sleeping inside
> > > > atomic sections in non-preemptible kernels.
> > > >
> > > > This is based on an earlier patch by Paul E. McKenney, fixing
> > > > a bug encountered in kernels built with CONFIG_PREEMPT=n and
> > > > CONFIG_PREEMPT_COUNT=y.
> > >
> > > This also adds explicit barrier() calls to several internal RCU
> > > functions, but the commit message doesn't explain those at all.
> >
> > To compensate for them being removed from rcu_read_lock() and
> > rcu_read_unlock(), but yes, I will update.
>
> That much seemed clear from the comments, but that doesn't explain *why*
> those functions need barriers of their own even though rcu_read_lock()
> and rcu_read_unlock() don't.

Ah. The reason is that Tiny RCU and Tree RCU (the !PREEMPT ones) act
by implicitly extending (and, if need be, merging) the RCU read-side
critical sections to include all the code between successive quiescent
states, for example, all the code between a pair of calls to schedule().

Therefore, there need to be barrier() calls in the quiescent-state
functions. Some of them could be argued to be implicitly present due to
translation-unit boundaries (the compiler cannot reorder memory accesses
across a call to an out-of-line function whose definition it cannot see),
but paranoia and all that.

Would adding that sort of explanation help?

Thanx, Paul

> > > > Signed-off-by: Boqun Feng <boqun.feng@xxxxxxxxx>
> > > > Signed-off-by: Paul E. McKenney <paulmck@xxxxxxxxxxxxxxxxxx>
> > > > ---
> > > >  include/linux/rcupdate.h | 6 ++++--
> > > >  include/linux/rcutiny.h  | 1 +
> > > >  kernel/rcu/tree.c        | 9 +++++++++
> > > >  3 files changed, 14 insertions(+), 2 deletions(-)
> > > >
> > > > diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> > > > index d63bb77dab35..6c3ceceb6148 100644
> > > > --- a/include/linux/rcupdate.h
> > > > +++ b/include/linux/rcupdate.h
> > > > @@ -297,12 +297,14 @@ void synchronize_rcu(void);
> > > >
> > > >  static inline void __rcu_read_lock(void)
> > > >  {
> > > > -	preempt_disable();
> > > > +	if (IS_ENABLED(CONFIG_PREEMPT_COUNT))
> > > > +		preempt_disable();
> > > >  }
> > > >
> > > >  static inline void __rcu_read_unlock(void)
> > > >  {
> > > > -	preempt_enable();
> > > > +	if (IS_ENABLED(CONFIG_PREEMPT_COUNT))
> > > > +		preempt_enable();
> > > >  }
> > > >
> > > >  static inline void synchronize_rcu(void)
> > > > diff --git a/include/linux/rcutiny.h b/include/linux/rcutiny.h
> > > > index c8a0722f77ea..4c1aaf9cce7b 100644
> > > > --- a/include/linux/rcutiny.h
> > > > +++ b/include/linux/rcutiny.h
> > > > @@ -216,6 +216,7 @@ static inline bool rcu_is_watching(void)
> > > >
> > > >  static inline void rcu_all_qs(void)
> > > >  {
> > > > +	barrier(); /* Avoid RCU read-side critical sections leaking across. */
> > > >  }
> > > >
> > > >  #endif /* __LINUX_RCUTINY_H */
> > > > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> > > > index b9d9e0249e2f..93c0f23c3e45 100644
> > > > --- a/kernel/rcu/tree.c
> > > > +++ b/kernel/rcu/tree.c
> > > > @@ -337,12 +337,14 @@ static void rcu_momentary_dyntick_idle(void)
> > > >   */
> > > >  void rcu_note_context_switch(void)
> > > >  {
> > > > +	barrier(); /* Avoid RCU read-side critical sections leaking down. */
> > > >  	trace_rcu_utilization(TPS("Start context switch"));
> > > >  	rcu_sched_qs();
> > > >  	rcu_preempt_note_context_switch();
> > > >  	if (unlikely(raw_cpu_read(rcu_sched_qs_mask)))
> > > >  		rcu_momentary_dyntick_idle();
> > > >  	trace_rcu_utilization(TPS("End context switch"));
> > > > +	barrier(); /* Avoid RCU read-side critical sections leaking up. */
> > > >  }
> > > >  EXPORT_SYMBOL_GPL(rcu_note_context_switch);
> > > >
> > > > @@ -353,12 +355,19 @@ EXPORT_SYMBOL_GPL(rcu_note_context_switch);
> > > >   * RCU flavors in desperate need of a quiescent state, which will normally
> > > >   * be none of them). Either way, do a lightweight quiescent state for
> > > >   * all RCU flavors.
> > > > + *
> > > > + * The barrier() calls are redundant in the common case when this is
> > > > + * called externally, but they are needed in case this function is
> > > > + * called from elsewhere within this file.
> > > > + *
> > > >   */
> > > >  void rcu_all_qs(void)
> > > >  {
> > > > +	barrier(); /* Avoid RCU read-side critical sections leaking down. */
> > > >  	if (unlikely(raw_cpu_read(rcu_sched_qs_mask)))
> > > >  		rcu_momentary_dyntick_idle();
> > > >  	this_cpu_inc(rcu_qs_ctr);
> > > > +	barrier(); /* Avoid RCU read-side critical sections leaking up. */
> > > >  }
> > > >  EXPORT_SYMBOL_GPL(rcu_all_qs);
> > > >
> > > > --
> > > > 2.5.2
> > > >
> > >
> >
>
