Re: [PATCH 2/2] chipidea: Use devm_request_irq()

From: Uwe Kleine-König
Date: Wed Jul 31 2013 - 05:45:07 EST


[Expanded Cc: a bit]

Hello,

On Wed, Jul 31, 2013 at 10:05:12AM +0100, Mark Brown wrote:
> On Wed, Jul 31, 2013 at 10:46:45AM +0200, Uwe Kleine-König wrote:
We're discussing devm_request_irq and wondering what happens at
remove time when the irq is still active.

> > OK, so the possible problem is that remove is called while the irq is
> > still active. That means you have to assert that all resources the irq
> > handler is using (e.g. ioremap, clk_prepare_enable) are only freed
> > *after* the irq is done. For ioremap that means it must be done using
> > devm_ioremap_resource. For a clock it's not that easy because the irq
> > handler has to assert that a used clk is kept prepared which can only be
> > done using clk_prepare which in turn is not allowed in an irq handler.
>
> > Hmm. So the only possible fixes are
> > - devm* can be told to also care about clk_disable_unprepare
> > - after disabling irqs in the remove callback wait for all
> > active irqs to be done. (i.e. call synchronize_irq(irq))
> > - don't use devm_request_irq
>
> I'm not sure that devm_ guarantees any ordering in the cleanups it does
> so I'd not like to rely on the first option either, if there were some
> guarantee of that it'd help. The nice thing about explicitly freeing
> the IRQ is that you can tell that all this stuff is safe by inspection.
devm_* at least releases resources using list_for_each_entry_reverse
(see release_nodes() in drivers/base/devres.c), so cleanup happens in
the reverse order of allocation. Without that guarantee devm_ would not
make much sense IMHO.

To also manage clks, we'd need something like:

devm_clk_prepare(&dev, some_clk);

which makes the corresponding devm release callback call clk_unprepare
the right number of times at device teardown. Maybe also an analogous
devm_clk_enable? Does this make sense?

Best regards
Uwe

--
Pengutronix e.K. | Uwe Kleine-König
Industrial Linux Solutions | http://www.pengutronix.de/
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/