Re: IRQ affinity notifiers vs RT

From: Sebastian Andrzej Siewior
Date: Mon Sep 23 2013 - 08:59:03 EST


On 08/30/2013 11:29 PM, Ben Hutchings wrote:
> Sebastian, I saw you came up with a fix for this but apparently without
> seeing my earlier message:

Yes Ben, I hadn't seen it. If I was on Cc, I am very sorry for
overlooking it.

> On Thu, 2013-07-25 at 00:31 +0100, Ben Hutchings wrote:

>> Workqueue code uses spin_lock_irq() on the workqueue lock, which with
>> PREEMPT_RT enabled doesn't actually block IRQs.
>>
>> In 3.6, the irq_cpu_rmap functions rely on a workqueue flush to finish
>> any outstanding notifications before freeing the cpu_rmap that they use.
>> This won't be reliable if the notification is scheduled after releasing
>> the irq_desc lock.
>>
>> However, following commit 896f97ea95c1 ('lib: cpu_rmap: avoid flushing
>> all workqueues') in 3.8, I think that it is sufficient to do only
>> kref_get(&desc->affinity_notify->kref) in __irq_set_affinity_locked()
>> and then call schedule_work() in irq_set_affinity() after releasing the
>> lock. Something like this (untested):
>
> Does the following make sense to you?
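
For reference, the approach described above comes down to something
like the following sketch. This is a reconstruction from the
description against roughly 3.8-era kernel/irq/manage.c -- the function
names match mainline, but the bodies are simplified and untested, and
this is not the actual (elided) patch:

#include <linux/interrupt.h>
#include <linux/irq.h>

/*
 * desc->lock is a raw spinlock, so IRQs stay off here even on RT.
 * Only take a reference on the notifier under the lock; the
 * schedule_work() is deferred until after the lock is dropped, where
 * the sleeping locks inside the workqueue code are harmless.
 * (Error handling and the IRQ-move machinery are omitted.)
 */
int __irq_set_affinity_locked(struct irq_data *data,
			      const struct cpumask *mask)
{
	struct irq_desc *desc = irq_data_to_desc(data);
	int ret;

	ret = irq_do_set_affinity(data, mask, false);

	/* Was: schedule_work(&desc->affinity_notify->work); */
	if (desc->affinity_notify)
		kref_get(&desc->affinity_notify->kref);

	return ret;
}

int irq_set_affinity(unsigned int irq, const struct cpumask *mask)
{
	struct irq_desc *desc = irq_to_desc(irq);
	struct irq_affinity_notify *notify = NULL;
	unsigned long flags;
	int ret;

	if (!desc)
		return -EINVAL;

	raw_spin_lock_irqsave(&desc->lock, flags);
	ret = __irq_set_affinity_locked(irq_desc_get_irq_data(desc), mask);
	notify = desc->affinity_notify;	/* reference taken above */
	raw_spin_unlock_irqrestore(&desc->lock, flags);

	/* Safe now: the raw lock is no longer held. */
	if (notify)
		schedule_work(&notify->work);

	return ret;
}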

This was suggested by the original submitter on rt-users@xxxxx (Joe
Korty), which is where I was made aware of this problem for the first
time. It looks okay except for the part where the workqueue is not
scheduled if the change comes in via the __ function (i.e. the mips
case). If I read the code correctly, when the CPU goes offline the
affinity should still be updated and the users notified, and that is
not the case with this patch.

It is a fair question why only one mips SoC needs the function. If you
look at commit 0c3263870f ("MIPS: Octeon: Rewrite interrupt handling
code.") you can see that tglx himself made this adjustment, so it is
probably legitimate :) I therefore assume that we may get more callers
of this function, that the notification work should still run in that
path, and so I made something simple that works on RT; the sketch
after this paragraph shows the direction.
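
To illustrate the direction only (a hypothetical sketch, not the
actual -rt change; the *_sketch names are made up): the locked helper
could take the reference and hand the notifier back to its caller, so
that every caller -- including the octeon CPU-offline path -- kicks
the work once the raw lock has been dropped:

/*
 * Called with desc->lock (raw spinlock) held; returns the notifier
 * with a reference held, or NULL. The caller must schedule the work
 * after unlocking.
 */
struct irq_affinity_notify *
irq_set_affinity_locked_sketch(struct irq_data *data,
			       const struct cpumask *mask)
{
	struct irq_desc *desc = irq_data_to_desc(data);

	irq_do_set_affinity(data, mask, false);

	if (!desc->affinity_notify)
		return NULL;

	kref_get(&desc->affinity_notify->kref);
	return desc->affinity_notify;
}

/* A caller like the octeon CPU-offline fixup would then do: */
void fixup_irq_affinity_sketch(struct irq_desc *desc,
			       const struct cpumask *new_mask)
{
	struct irq_affinity_notify *notify;
	unsigned long flags;

	raw_spin_lock_irqsave(&desc->lock, flags);
	notify = irq_set_affinity_locked_sketch(irq_desc_get_irq_data(desc),
						new_mask);
	raw_spin_unlock_irqrestore(&desc->lock, flags);

	if (notify)
		schedule_work(&notify->work);
}

That way the notification is not lost when the affinity is changed via
the locked variant, and nothing sleeps under the raw lock.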

>
> Ben.
>

Sebastian