[PATCH RT 12/13] irq_work: Also rcuwait for !IRQ_WORK_HARD_IRQ on PREEMPT_RT

From: Steven Rostedt
Date: Wed Nov 24 2021 - 13:04:13 EST


5.10.78-rt56-rc3 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>

On PREEMPT_RT most irq_work items are processed as LAZY from softirq context.
Avoid spin-waiting for them: the task calling irq_work_sync() may run at a
higher priority than the softirq thread processing the item and would then
prevent the irq-work from ever completing.

Additionally, on PREEMPT_RT use rcuwait in irq_work_sync() to wait for
!IRQ_WORK_HARD_IRQ irq_work items.
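For illustration only (this sketch is not part of the patch, and the names
below are made up): a !IRQ_WORK_HARD_IRQ item is processed from softirq
context on PREEMPT_RT, and with this change irq_work_sync() sleeps on the
item's rcuwait instead of spin-waiting on IRQ_WORK_BUSY:

#include <linux/irq_work.h>

/* Hypothetical lazy (!IRQ_WORK_HARD_IRQ) item, for illustration only. */
static struct irq_work example_work;

static void example_work_func(struct irq_work *work)
{
	/* On PREEMPT_RT this runs from the irq_work softirq, not hard IRQ. */
}

static void example_queue_and_sync(void)
{
	init_irq_work(&example_work, example_work_func);
	irq_work_queue(&example_work);

	/*
	 * Previously this spun until IRQ_WORK_BUSY cleared; with this
	 * change it sleeps on example_work.irqwait (rcuwait) on PREEMPT_RT.
	 */
	irq_work_sync(&example_work);
}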

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
Link: https://lkml.kernel.org/r/20211006111852.1514359-5-bigeasy@xxxxxxxxxxxxx
Signed-off-by: Steven Rostedt (VMware) <rostedt@xxxxxxxxxxx>
---
 include/linux/irq_work.h | 5 +++++
 kernel/irq_work.c        | 6 ++++--
 2 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/include/linux/irq_work.h b/include/linux/irq_work.h
index f551ba9c99d4..2c0059340871 100644
--- a/include/linux/irq_work.h
+++ b/include/linux/irq_work.h
@@ -55,6 +55,11 @@ static inline bool irq_work_is_busy(struct irq_work *work)
 	return atomic_read(&work->flags) & IRQ_WORK_BUSY;
 }
 
+static inline bool irq_work_is_hard(struct irq_work *work)
+{
+	return atomic_read(&work->flags) & IRQ_WORK_HARD_IRQ;
+}
+
 bool irq_work_queue(struct irq_work *work);
 bool irq_work_queue_on(struct irq_work *work, int cpu);
 
diff --git a/kernel/irq_work.c b/kernel/irq_work.c
index 03d09d779ee1..cbec10c32ead 100644
--- a/kernel/irq_work.c
+++ b/kernel/irq_work.c
@@ -211,7 +211,8 @@ void irq_work_single(void *arg)
 	flags &= ~IRQ_WORK_PENDING;
 	(void)atomic_cmpxchg(&work->flags, flags, flags & ~IRQ_WORK_BUSY);
 
-	if (!arch_irq_work_has_interrupt())
+	if ((IS_ENABLED(CONFIG_PREEMPT_RT) && !irq_work_is_hard(work)) ||
+	    !arch_irq_work_has_interrupt())
 		rcuwait_wake_up(&work->irqwait);
 }
 
@@ -271,7 +272,8 @@ void irq_work_sync(struct irq_work *work)
 	lockdep_assert_irqs_enabled();
 	might_sleep();
 
-	if (!arch_irq_work_has_interrupt()) {
+	if ((IS_ENABLED(CONFIG_PREEMPT_RT) && !irq_work_is_hard(work)) ||
+	    !arch_irq_work_has_interrupt()) {
 		rcuwait_wait_event(&work->irqwait, !irq_work_is_busy(work),
 				   TASK_UNINTERRUPTIBLE);
 		return;
--
2.33.0