Re: [RFC 1/1] drivers/dma/*: replace tasklets with workqueue

From: Vinod Koul
Date: Tue May 31 2022 - 02:56:50 EST


On 27-05-22, 12:59, Arnd Bergmann wrote:
> On Fri, May 27, 2022 at 10:06 AM Vinod Koul <vkoul@xxxxxxxxxx> wrote:
> > On 25-05-22, 13:03, Arnd Bergmann wrote:
> > > What might work better in the case of the dmaengine API would
> > > be an approach like:
> > >
> > > 1. add helper functions to call the callback functions from a
> > > tasklet locally defined in drivers/dma/dmaengine.c to allow
> > > deferring it from hardirq context
> > >
> > > 2. Change all tasklets that are not part of the callback
> > > mechanism to work queue functions, I only see
> > > xilinx_dpdma_chan_err_task in the patch, but there
> > > may be more
> > >
> > > 3. change all drivers to move their custom tasklets back into
> > > hardirq context and instead call the new helper for deferring
> > > the callback.
> > >
> > > 4. Extend the dmaengine callback API to let slave drivers
> > > pick hardirq, tasklet or task context for the callback.
> > > task context can mean either a workqueue, or a threaded
> > > IRQ here, with the default remaining the tasklet version.
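
To make steps 1 and 4 a bit more concrete, here is a rough sketch of what
the core-side helper could look like (all of the names below are made up,
none of this is existing dmaengine API):

#include <linux/dmaengine.h>
#include <linux/interrupt.h>
#include <linux/workqueue.h>

enum dmaengine_cb_ctx {
	DMA_CTX_TASKLET,	/* default, matches today's behaviour */
	DMA_CTX_HARDIRQ,	/* invoke directly from the IRQ handler */
	DMA_CTX_TASK,		/* defer to a workqueue */
};

/* hypothetical per-descriptor bookkeeping owned by dmaengine.c */
struct dmaengine_cb {
	struct dma_async_tx_descriptor *desc;
	enum dmaengine_cb_ctx ctx;
	struct work_struct work;	/* INIT_WORK() when the desc is prepared */
	struct tasklet_struct tasklet;	/* tasklet_setup() when the desc is prepared */
};

static void dmaengine_cb_run(struct dmaengine_cb *cb)
{
	struct dma_async_tx_descriptor *desc = cb->desc;

	if (desc->callback)
		desc->callback(desc->callback_param);
}

static void dmaengine_cb_work_fn(struct work_struct *work)
{
	dmaengine_cb_run(container_of(work, struct dmaengine_cb, work));
}

static void dmaengine_cb_tasklet_fn(struct tasklet_struct *t)
{
	dmaengine_cb_run(container_of(t, struct dmaengine_cb, tasklet));
}

/* step 3: controller drivers call this from their hardirq handler */
void dmaengine_desc_complete(struct dmaengine_cb *cb)
{
	switch (cb->ctx) {
	case DMA_CTX_HARDIRQ:
		dmaengine_cb_run(cb);
		break;
	case DMA_CTX_TASK:
		schedule_work(&cb->work);
		break;
	case DMA_CTX_TASKLET:
	default:
		tasklet_schedule(&cb->tasklet);
		break;
	}
}

The default keeps today's tasklet behaviour, so existing clients would not
notice a difference until they opt in to one of the other contexts.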
> >
> > That does sound like a good idea, but I don't know who will use the
> > workqueue or a threaded context here; it might be that most would
> > default to hardirq or tasklet context for obvious reasons...
>
> If the idea is to remove tasklets from the kernel for good, then the
> choice is only between workqueue and hardirq at this point. The
> workqueue version is the one that would make sense for any driver
> that just defers execution from the callback down into task context.
> If that gets called in task context already, the driver can be simpler.
>
> I took a brief look at the roughly 150 slave drivers, and it does
> seem like very few of them actually want task context:
>
> * Over half the drivers just do a complete(), which could
> probably be pulled into the dmaengine layer and done from
> hardirq, avoiding the callback entirely
>
> * A lot of the remaining drivers have interrupts disabled for
> the entire callback, which means they might as well use
> hardirqs, regardless of what they want
>
> * drivers/crypto/* and drivers/mmc/* tend to call another tasklet
> to do the real work.
>
> * drivers/ata/sata_dwc_460ex.c and drivers/ntb/ntb_transport.c
> probably want task context
>
> * Some drivers like sound/soc/sh/siu_pcm.c start a new DMA
> from the callback. Is that allowed from hardirq?
>
> If we do the first three steps above, and then add a 'struct
> completion' pointer to dma_async_tx_descriptor as an alternative
> to the callback, that would already reduce the number of drivers
> that end up in a tasklet significantly and should be completely
> safe.

That is a good idea; a lot of drivers are just waiting for a completion,
which can be signalled from hardirq. This would also reduce the hops we
have and help improve latency a bit. On the downside, some controllers
provide error information, which would still need to be dealt with.

I will prototype this on the Qcom boards I have...
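
To sketch how both halves could fit together (the 'done' and 'result'
members are hypothetical; struct dmaengine_result, DMA_TRANS_NOERROR and
the completion API are what we have today), the core could record the
result before signalling, so a client that cares about errors can still
check them after wait_for_completion():

#include <linux/completion.h>
#include <linux/dmaengine.h>

/*
 * Hypothetical new members of struct dma_async_tx_descriptor:
 *	struct completion *done;	set by the client, may be NULL
 *	struct dmaengine_result result;	filled in before signalling
 */

/* core side: safe to call straight from the controller's hardirq handler */
static void dmaengine_signal_done(struct dma_async_tx_descriptor *desc,
				  const struct dmaengine_result *res)
{
	desc->result = *res;		/* record error/residue first */
	if (desc->done)
		complete(desc->done);	/* complete() is hardirq-safe */
}

/* client side: no callback and no tasklet hop any more */
static int xfer_and_wait(struct dma_chan *chan,
			 struct dma_async_tx_descriptor *desc)
{
	DECLARE_COMPLETION_ONSTACK(dma_done);
	dma_cookie_t cookie;

	desc->done = &dma_done;
	cookie = dmaengine_submit(desc);
	if (dma_submit_error(cookie))
		return -EINVAL;

	dma_async_issue_pending(chan);
	wait_for_completion(&dma_done);

	return desc->result.result == DMA_TRANS_NOERROR ? 0 : -EIO;
}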

>
> Unfortunately we can't just move the rest into hardirq
> context because that breaks anything using spin_lock_bh
> to protect against concurrent execution of the tasklet.
>
> A possible alternative might be to then replace the global
> dmaengine tasklet with a custom softirq. Obviously those
> are not so hot either, but dmaengine could be considered
> special enough to fit in the same category as net_rx/tx
> and block with their global softirqs.

Yes, that would be a very reasonable mechanism, thanks for the
suggestions.
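
For reference, a rough sketch of what a dedicated softirq could look like,
assuming DMA_SOFTIRQ gets a new entry next to NET_RX_SOFTIRQ/BLOCK_SOFTIRQ
in include/linux/interrupt.h and the dmaengine_cb from the sketch above
grows a struct llist_node node member (again, nothing here exists today):

#include <linux/interrupt.h>
#include <linux/llist.h>
#include <linux/percpu.h>

/* per-cpu list of callbacks queued from hardirq, drained by the softirq */
static DEFINE_PER_CPU(struct llist_head, dma_cb_list);

static void dma_softirq_action(struct softirq_action *h)
{
	struct llist_node *list = llist_del_all(this_cpu_ptr(&dma_cb_list));
	struct dmaengine_cb *cb, *tmp;

	/* LIFO order here; a real implementation would likely reverse it */
	llist_for_each_entry_safe(cb, tmp, list, node)
		dmaengine_cb_run(cb);	/* invoke the client callback */
}

/* controller drivers call this from their hardirq handler */
void dmaengine_raise_callback(struct dmaengine_cb *cb)
{
	llist_add(&cb->node, this_cpu_ptr(&dma_cb_list));
	raise_softirq(DMA_SOFTIRQ);
}

/* hooked into dmaengine's existing module init */
void dmaengine_softirq_init(void)
{
	open_softirq(DMA_SOFTIRQ, dma_softirq_action);
}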

--
~Vinod