RE: [PATCH 5/9] PCI: host: brcmstb: add dma-ranges for inbound traffic

From: David Laight
Date: Mon Oct 23 2017 - 05:05:57 EST


From: Jim Quinlan
> Sent: 20 October 2017 16:28
> On Fri, Oct 20, 2017 at 10:57 AM, Christoph Hellwig <hch@xxxxxx> wrote:
> > On Fri, Oct 20, 2017 at 10:41:56AM -0400, Jim Quinlan wrote:
> >> I am not sure I understand your comment -- the size of the request
> >> shouldn't be a factor. Let's look at your example of the DMA request
> >> of 0x3fffff00 to 0x4000000f (physical memory). Let's say it is for 15
> >> pages. If we block out the last page [0x3ffff000..0x3fffffff] from
> >> what is available, no 15-page span can cross the
> >> 0x40000000 boundary. For SG, there can be no merge that connects a
> >> page from one region to another. Can you give an example of
> >> the scenario you are thinking of?
> >
> > What prevents a merge from say the regions of
> > 0....3fffffff and 40000000....7fffffff?
>
> Huh? [0x3ffff000...0x3fffffff] is not available to be used. Drawing from
> the original example, we now have to tell Linux that these are now our
> effective memory regions:
>
> memc0-a@[ 0....3fffefff] <=> pci@[ 0....3fffefff]
> memc0-b@[100000000...13fffefff] <=> pci@[ 40000000....7fffefff]
> memc1-a@[ 40000000....7fffefff] <=> pci@[ 80000000....bfffefff]
> memc1-b@[300000000...33fffefff] <=> pci@[ c0000000....ffffefff]
> memc2-a@[ 80000000....bfffefff] <=> pci@[100000000...13fffefff]
> memc2-b@[c00000000...c3fffffff] <=> pci@[140000000...17fffffff]
>
> This leaves a one-page gap between physical memory regions that would
> normally be contiguous. One cannot have a DMA allocation that spans any two
> regions. This is a drastic step, but I don't see an alternative.
> Perhaps I may be missing what you are saying...
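
To restate the scheme above in code -- a sketch only: the struct and
function names are invented here, the window table just transcribes your
list (truncated), and none of this is taken from the patch itself:

#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/types.h>

struct mem_window {			/* hypothetical */
        u64 phys_base;			/* CPU physical base */
        u64 pci_base;			/* PCI bus address base */
        u64 size;			/* usable size, last page trimmed */
};

static const struct mem_window windows[] = {
        { 0x000000000ULL, 0x00000000ULL, 0x3ffff000ULL },	/* memc0-a */
        { 0x100000000ULL, 0x40000000ULL, 0x3ffff000ULL },	/* memc0-b */
        /* ... memc1-a/b and memc2-a/b as listed above ... */
};

/*
 * A translation succeeds only if the whole range fits inside a single
 * window, so nothing can be mapped across the trimmed page at a region
 * boundary.
 */
static int phys_to_pci(u64 phys, u64 len, u64 *pci)
{
        int i;

        for (i = 0; i < ARRAY_SIZE(windows); i++) {
                const struct mem_window *w = &windows[i];

                if (phys < w->phys_base ||
                    phys + len > w->phys_base + w->size)
                        continue;	/* not wholly inside this window */

                *pci = w->pci_base + (phys - w->phys_base);
                return 0;
        }
        return -EINVAL;	/* range straddles a gap between windows */
}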

Isn't this all unnecessary?
Both kmalloc() and dma_alloc_coherent() return naturally aligned,
power-of-two-sized blocks, so an allocation never crosses an address
boundary larger than its own size: a 16k buffer can't straddle a 16k
physical address boundary.
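
A quick sketch of the property I'm relying on -- illustrative only, it
assumes the usual page-allocator-backed dma_alloc_coherent(), and
'alignment_demo' is a made-up helper, not code from the patch:

#include <linux/dma-mapping.h>
#include <linux/gfp.h>
#include <linux/kernel.h>
#include <linux/sizes.h>

/*
 * Allocate 16k and check that the bus address is 16k-aligned: the
 * coherent allocator hands back an order-2 block, so the buffer
 * cannot straddle a 16k (or larger power-of-two) boundary.
 */
static void alignment_demo(struct device *dev)
{
        dma_addr_t handle;
        void *buf;

        buf = dma_alloc_coherent(dev, SZ_16K, &handle, GFP_KERNEL);
        if (!buf)
                return;

        WARN_ON(handle & (SZ_16K - 1));	/* expect 16k alignment */
        dma_free_coherent(dev, SZ_16K, buf, handle);
}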

David