Re: [PATCH] dmapool: push new blocks in ascending order

From: Bryan O'Donoghue
Date: Fri Feb 24 2023 - 17:28:38 EST


On 24/02/2023 18:24, Keith Busch wrote:
> On Thu, Feb 23, 2023 at 12:41:37PM -0800, Andrew Morton wrote:
>> On Tue, 21 Feb 2023 11:07:32 -0700 Keith Busch <kbusch@xxxxxxxxxx> wrote:
>>
>>> On Tue, Feb 21, 2023 at 10:02:34AM -0800, Christoph Hellwig wrote:
>>>> On Tue, Feb 21, 2023 at 08:54:00AM -0800, Keith Busch wrote:
>>>>> From: Keith Busch <kbusch@xxxxxxxxxx>
>>>>>
>>>>> Some users of the dmapool need their allocations to happen in ascending
>>>>> order. The recent optimizations pushed the blocks in reverse order, so
>>>>> restore the previous behavior by linking the next available block from
>>>>> low-to-high.
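
(For reference, "linking the next available block from low-to-high" amounts
to something like the sketch below. Illustrative only: the names here are
assumed, and this is not the actual patch.)

#include <linux/types.h>

struct block {
	struct block *next_block;
};

/*
 * Chain each fixed-size block in a page to the block after it, so the
 * free-list head is the lowest address and allocations pop low-to-high.
 */
static void pool_init_page_ascending(void *vaddr, size_t page_size,
				     size_t block_size)
{
	struct block *block = vaddr;
	size_t offset;

	for (offset = block_size; offset + block_size <= page_size;
	     offset += block_size) {
		block->next_block = vaddr + offset;
		block = block->next_block;
	}
	block->next_block = NULL;	/* the highest block ends the list */
}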

>>>> Who are those users?
>>>>
>>>> Also should we document this behavior somewhere so that it isn't
>>>> accidentally changed again some time in the future?
>>>
>>> usb/chipidea/udc.c qh_pool called "ci_hw_qh".
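
(Rough shape of that usage, for context. This is a sketch with assumed size
and alignment parameters, not code copied from the driver.)

#include <linux/dmapool.h>
#include <linux/gfp.h>

static int create_qh_pool_sketch(struct device *dev)
{
	struct dma_pool *qh_pool;
	dma_addr_t dma;
	void *qh;

	qh_pool = dma_pool_create("ci_hw_qh", dev, 64, 64, 0);
	if (!qh_pool)
		return -ENOMEM;

	/* The driver effectively assumes successive allocations come
	 * back at ascending addresses. */
	qh = dma_pool_zalloc(qh_pool, GFP_KERNEL, &dma);
	if (!qh) {
		dma_pool_destroy(qh_pool);
		return -ENOMEM;
	}
	/* In the real driver, pool and blocks live for the device's lifetime. */
	return 0;
}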

>> It would be helpful to know why these users need this side-effect. Did
>> the drivers break? Or just get slower?
>>
>>> The affected driver was reported to be unusable without this behavior.
>>
>> Are those drivers misbehaving by assuming this behavior? Should we

> I do think they're using the wrong API. You shouldn't use the dmapool if
> your blocks need to be arranged in a contiguous address order. They should
> just directly use dma_alloc_coherent() instead.
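
(Say, something like this sketch; the names and parameters are assumed:)

#include <linux/dma-mapping.h>

/*
 * One coherent allocation yields a contiguous region the driver can
 * carve into blocks itself; block i then sits at cpu + i * block_size
 * and dma + i * block_size, so ascending order holds by construction
 * rather than by dmapool behavior.
 */
static void *alloc_block_array(struct device *dev, size_t block_size,
			       unsigned int count, dma_addr_t *dma)
{
	return dma_alloc_coherent(dev, block_size * count, dma, GFP_KERNEL);
}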

>> require that they be altered instead of forever constraining the dmapool
>> implementation in this fashion?

> This change isn't really constraining dmapool where it matters. It's just an
> unexpected one-time initialization thing.

> As far as altering those drivers, I'll reach out to someone on that side for
> comment (I'm currently not familiar with the affected subsystem).

We can always change this driver; I'm fine to do that in parallel or instead.

The symptom we have absent this change is a silent failure, so I just wonder: are we really the _only_ code path that would be affected without this patch?

---
bod