Re: [PATCH 5/9] PCI: host: brcmstb: add dma-ranges for inbound traffic
From: Christoph Hellwig
Date: Fri Oct 20 2017 - 12:17:24 EST
On Fri, Oct 20, 2017 at 11:27:41AM -0400, Jim Quinlan wrote:
> memc0-a@[ 0....3fffefff] <=> pci@[ 0....3fffefff]
> memc0-b@[100000000...13fffefff] <=> pci@[ 40000000....7fffefff]
> memc1-a@[ 40000000....7fffefff] <=> pci@[ 80000000....bfffefff]
> memc1-b@[300000000...33fffefff] <=> pci@[ c0000000....ffffefff]
> memc2-a@[ 80000000....bfffefff] <=> pci@[100000000...13fffefff]
> memc2-b@[c00000000...c3fffffff] <=> pci@[140000000...17fffffff]
>
> This leaves a one-page gap between physical memory regions which would
> normally be contiguous. One cannot have a DMA alloc that spans any two
> regions. This is a drastic step, but I don't see an alternative.
> Perhaps I may be missing what you are saying...
Ok, IFF we are guaranteed to always have a gap between physical memory
regions, then we are fine indeed.
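For illustration only, the phys-to-PCI translation implied by the table above
could be sketched as below. This is a hypothetical standalone model of the
mapping, not the brcmstb driver's actual code; the struct, table, and function
names are invented, and the region sizes are read off the quoted ranges (each
0x3ffff000 bytes, i.e. 1 GiB minus the one-page gap, except memc2-b which is a
full 1 GiB):

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical model of the dma-ranges table quoted above.
 * Each entry: CPU physical base, PCI base, and region size. */
struct dma_range {
	uint64_t phys_base;
	uint64_t pci_base;
	uint64_t size;
};

static const struct dma_range ranges[] = {
	{ 0x000000000ULL, 0x000000000ULL, 0x3ffff000ULL }, /* memc0-a */
	{ 0x100000000ULL, 0x040000000ULL, 0x3ffff000ULL }, /* memc0-b */
	{ 0x040000000ULL, 0x080000000ULL, 0x3ffff000ULL }, /* memc1-a */
	{ 0x300000000ULL, 0x0c0000000ULL, 0x3ffff000ULL }, /* memc1-b */
	{ 0x080000000ULL, 0x100000000ULL, 0x3ffff000ULL }, /* memc2-a */
	{ 0xc00000000ULL, 0x140000000ULL, 0x40000000ULL }, /* memc2-b */
};

/* Return the PCI address for a CPU physical address, or -1 if the
 * address falls outside every region (e.g. inside a one-page gap).
 * A buffer crossing a gap would straddle two discontiguous PCI
 * windows, which is why allocations must not span regions. */
static int64_t phys_to_pci(uint64_t phys)
{
	size_t i;

	for (i = 0; i < sizeof(ranges) / sizeof(ranges[0]); i++) {
		if (phys >= ranges[i].phys_base &&
		    phys < ranges[i].phys_base + ranges[i].size)
			return (int64_t)(ranges[i].pci_base +
					 (phys - ranges[i].phys_base));
	}
	return -1;
}
```

Note how the page at 0x3ffff000 (the gap between memc0-a and memc1-a) has no
PCI translation at all, which is what makes the scheme safe only if such gaps
are guaranteed.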