Re: [PATCH 2/2] rcu: Fix lockup when RCU reader used while IRQ exiting
From: Paul E. McKenney
Date: Wed Jun 11 2025 - 12:17:27 EST
On Wed, Jun 11, 2025 at 09:05:06AM -0700, Boqun Feng wrote:
> On Mon, Jun 09, 2025 at 02:01:24PM -0400, Joel Fernandes wrote:
> > If rcu_read_unlock_special() happens during irq_exit(), we can lock up
> > if an IPI is issued. This is because the IPI itself triggers the
> > irq_exit() path again, causing a recursive lockup.
> >
> > This is precisely what Xiongfeng found when invoking a BPF program on
> > the trace_tick_stop() tracepoint, as shown in the trace below. Fix by
> > using context tracking to tell us whether we are still in an IRQ:
> > context tracking keeps track of the IRQ until after the tracepoint, so
> > it cures the issue.
> >
> > irq_exit()
> >   __irq_exit_rcu()
> >     /* in_hardirq() returns false after this */
> >     preempt_count_sub(HARDIRQ_OFFSET)
> >     tick_irq_exit()
>
> @Frederic, while we are at it, what's the purpose of in_hardirq() in
> tick_irq_exit()? For nested interrupt detection?
If you are talking about the comment, these sorts of comments help
people reading the code: the point is that any common-code function
invoking in_hardirq() after that point will get the wrong answer from
it. The context-tracking code does the same for whether or not RCU is
watching.
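
To make that concrete, here is a minimal user-space model (not kernel
code) of why the answer flips: in_hardirq() is derived from the hardirq
bits of the preempt count, so once __irq_exit_rcu() has done
preempt_count_sub(HARDIRQ_OFFSET), everything that runs later in
irq_exit() sees it as false. The shift and mask values below mirror the
usual kernel layout, but all of the helpers are local stand-ins.

#include <stdbool.h>
#include <stdio.h>

/* Local stand-ins for the kernel's preempt-count layout (common config). */
#define HARDIRQ_SHIFT   16
#define HARDIRQ_BITS    4
#define HARDIRQ_MASK    (((1UL << HARDIRQ_BITS) - 1) << HARDIRQ_SHIFT)
#define HARDIRQ_OFFSET  (1UL << HARDIRQ_SHIFT)

static unsigned long preempt_count;     /* per-CPU in the real kernel */

static bool model_in_hardirq(void)
{
        /* in_hardirq() is just "any hardirq bits set in the preempt count" */
        return preempt_count & HARDIRQ_MASK;
}

int main(void)
{
        preempt_count += HARDIRQ_OFFSET;  /* irq_enter(): account the hard IRQ */
        printf("in the handler:      in_hardirq() = %d\n", model_in_hardirq());

        preempt_count -= HARDIRQ_OFFSET;  /* __irq_exit_rcu(): preempt_count_sub(HARDIRQ_OFFSET) */
        /* tick_irq_exit() and everything it calls run here... */
        printf("late in irq_exit():  in_hardirq() = %d\n", model_in_hardirq());
        return 0;
}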
Thanx, Paul
> Regards,
> Boqun
>
> >       tick_nohz_irq_exit()
> >         tick_nohz_stop_sched_tick()
> >           trace_tick_stop() /* a bpf prog is hooked on this trace point */
> >             __bpf_trace_tick_stop()
> >               bpf_trace_run2()
> >                 rcu_read_unlock_special()
> >                   /* will send an IPI to itself */
> >                   irq_work_queue_on(&rdp->defer_qs_iw, rdp->cpu);
> >
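
For readers following the trace, the recursion can be modeled entirely
in user space. All names below are local stand-ins rather than kernel
APIs; the point is only that queueing a self-IPI from the tail of
irq_exit() re-enters irq_exit(), whereas asking context tracking
"are we still in an IRQ?" lets the unlock path defer instead:

#include <stdbool.h>
#include <stdio.h>

/* Everything here is a stand-in model, not kernel code or kernel APIs. */

static bool ct_still_in_irq;    /* what context tracking would report */
static bool use_fix;            /* toggle the behaviour described above */
static int depth;

static void model_irq_exit(void);

/* Models rcu_read_unlock_special() deciding how to report the QS. */
static void model_unlock_special(void)
{
        if (use_fix && ct_still_in_irq) {
                puts("  still in IRQ per context tracking -> defer, no self-IPI");
                return;
        }
        puts("  queueing self-IPI (irq_work)...");
        model_irq_exit();       /* the IPI lands, then exits through irq_exit() again */
}

/* Models irq_exit(): hardirq bits already dropped, tick hooks still to run. */
static void model_irq_exit(void)
{
        if (++depth > 3) {      /* the real kernel recurses until it locks up */
                puts("  ...and so on without bound: lockup");
                depth--;
                return;
        }
        /* tick_irq_exit() -> trace_tick_stop() -> BPF prog -> rcu_read_unlock_special() */
        model_unlock_special();
        ct_still_in_irq = false;  /* IRQ state is cleared only after the tracepoint */
        depth--;
}

int main(void)
{
        puts("without the fix:");
        ct_still_in_irq = true;
        use_fix = false;
        depth = 0;
        model_irq_exit();

        puts("with the fix:");
        ct_still_in_irq = true;
        use_fix = true;
        depth = 0;
        model_irq_exit();
        return 0;
}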
> > A simple reproducer can also be obtained by doing the following in
> > tick_irq_exit(). It will hang on boot without the patch:
> >
> > static inline void tick_irq_exit(void)
> > {
> > +        rcu_read_lock();
> > +        WRITE_ONCE(current->rcu_read_unlock_special.b.need_qs, true);
> > +        rcu_read_unlock();
> > +
> >
> > While at it, add some comments to this code.
> >
> > Reported-by: Xiongfeng Wang <wangxiongfeng2@xxxxxxxxxx>
> > Closes: https://lore.kernel.org/all/9acd5f9f-6732-7701-6880-4b51190aa070@xxxxxxxxxx/
> > Tested-by: Xiongfeng Wang <wangxiongfeng2@xxxxxxxxxx>
> > Signed-off-by: Joel Fernandes <joelagnelf@xxxxxxxxxx>
> [...]