Re: [PATCH] scatterlist: Validate page before calling PageSlab()

From: Alan Mikhak
Date: Tue Oct 15 2019 - 15:03:12 EST


On Tue, Oct 15, 2019 at 9:12 AM Logan Gunthorpe <logang@xxxxxxxxxxxx> wrote:
>
>
>
> On 2019-10-15 3:55 a.m., Christoph Hellwig wrote:
> > On Mon, Oct 07, 2019 at 02:13:51PM -0700, Alan Mikhak wrote:
> >>> My goal is to not modify the Linux NVMe target code at all. The NVMe
> >>> endpoint function driver currently does the work that is required.
> >
> > You will have to do some modifications, as for example in PCIe you can
> > have an n:1 relationship between SQs and CQs. And you need to handle
> > the Create/Delete SQ/CQ commands, but not the fabrics commands. And
> > modifying subsystems in Linux is perfectly acceptable, that is how they
> > improve.
> >
> > Do you have a pointer to your code?
> >
> >>> In my current platform, there is no page struct backing for the PCIe
> >>> memory address space.
> >
> > In Linux there aren't struct pages for physical memory remapped using
> > ioremap(). But if you want to feed them to the I/O subsystem you have
> > to use devm_memremap_pages to create a page backing. Assuming you are
> > on a RISC-V platform given your affiliation you'll need to ensure your
> > kernel allows for ZONE_DEVICE pages, which Logan (added to Cc) has been
> > working on. I don't remember what the current status is.
>
> The last patchset submission was here [1]. It had some issues but the
> main one was that they wanted the page tables to be created and removed
> dynamically. I took a crack at it and ran into some issues and haven't
> had time to touch it since. I was waiting to see how arm64[2] solves
> similar problems and then maybe it can be made common.

Thanks, Logan, for the links to your patchset.

>
> It's also a bit of a PITA because the RISC-V hardware we have with hacky
> PCI support is fragile and stopped working on recent kernels, last I
> tried. So this hasn't been a priority for us.

I would be interested in hearing more about the PCI issues on RISC-V
hardware. Please include me if you like.

>
> >> Please consider the following information and cost estimate in
> >> bytes for requiring page structs for PCI memory if used with
> >> scatterlists. For example, a 128GB PCI memory address space
> >> could require as much as 256MB of system memory just for
> >> page struct backing. In a 1GB 64-bit system with flat memory
> >> model, that consumes 25% of available memory. However,
> >> not all of the 128GB PCI memory may be mapped for use at
> >> a given time depending on the application. The cost of PCI
> >> page structs is an upfront cost to be paid at system start.
> >
> > I know the pages are costly. But once you want to feed them through
> > subsystems that do expect pages you'll have to do that. And anything
> > using scatterlists currently does. A little hack here and there isn't
> > going to solve that.
> >
>
> Agreed. I tried expanding the SG-list to allow for page-less entries and
> it was a much bigger mess than what you describe.
>
> Also, I think your analysis is a bit unfair: we don't need to create
> pages for the entire 128GB of PCI memory space; we typically only need
> the BARs of a subset of devices, which is far less. If a system has only
> 1GB of memory, it probably doesn't actually have 128GB of PCI BAR space
> that is sensibly usable.

I appreciate your comments and your experience with attempting to expand
the scatterlist to allow for page-less entries. The feedback is clear:
such an expansion is a bigger job than it looks at first glance.

An NVMe endpoint has to map Physical Region Pages (PRPs) that sit
across the PCIe bus, anywhere in host memory. PRPs are typically 4KB
pages and may be scattered throughout host memory. The PCIe address
space of an NVMe endpoint is larger than its local physical memory
space and larger than that of the host system. 128GB is not a typical
amount of physical memory populated on a PCIe host or endpoint;
however, the PCIe address space of an NVMe endpoint needs to address
regions much larger than the physical memory on either side of the
PCIe bus.
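
For illustration, here is a minimal sketch of how one PRP entry decodes
on the endpoint side, assuming the controller's memory page size
(CC.MPS) is the usual 4KB. The function and variable names are mine,
not from any existing driver:

  /*
   * Sketch: decode one PRP entry read from a host-resident PRP list.
   * Each entry is a little-endian 64-bit host address. Only PRP1 may
   * carry a non-zero offset; PRP list entries are page-aligned.
   */
  #include <linux/kernel.h>
  #include <linux/sizes.h>
  #include <linux/types.h>

  static void epf_decode_prp(__le64 entry, u64 *host_page, u32 *offset)
  {
          u64 prp = le64_to_cpu(entry);

          *host_page = prp & ~(u64)(SZ_4K - 1);   /* host page base   */
          *offset    = prp &  (SZ_4K - 1);        /* PRP1 offset only */
  }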

The NVMe endpoint function driver's access to host PRPs is not
BAR-based: the endpoint accesses host memory as a PCIe bus master,
whereas PCIe hosts typically access endpoint memory through BARs.

Finding an economical solution for page struct backing of a large PCIe
address space that is not itself backed by physical memory is desirable.
Clearly, page structs are a requirement for using scatterlists. Since
scatterlists are the mechanism the Linux NVMe target driver uses, the
NVMe endpoint function driver needs to convert PRPs to scatterlist
entries using mappings that have proper page struct backing.
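
As a rough sketch of that conversion (hypothetical names; it assumes
the host PRP has already been translated to window_phys, a local
physical address inside the endpoint's outbound PCIe window, and that
the window has page struct backing as discussed below):

  #include <linux/pfn.h>
  #include <linux/scatterlist.h>
  #include <linux/sizes.h>

  /*
   * Sketch: fill one scatterlist entry from a translated host PRP.
   * pfn_to_page() is only legal here because struct pages exist for
   * the outbound window. The caller owns sg_init_table() and marking
   * the final entry.
   */
  static void epf_prp_to_sg(struct scatterlist *sg,
                            phys_addr_t window_phys, unsigned int offset)
  {
          struct page *page = pfn_to_page(PHYS_PFN(window_phys));

          sg_set_page(sg, page, SZ_4K - offset, offset);
  }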

As Christoph Hellwig suggested, devm_memremap_pages() may be a solution
if the kernel supports ZONE_DEVICE.
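
A minimal sketch of that direction, assuming CONFIG_ZONE_DEVICE on a
kernel from around the v5.4 timeframe, with error unwinding left out;
the function name, the window base, and the memory type choice are
illustrative only:

  #include <linux/device.h>
  #include <linux/err.h>
  #include <linux/ioport.h>
  #include <linux/memremap.h>

  /*
   * Sketch: create struct pages for an outbound PCIe window so the
   * region can be handed to scatterlist users.
   */
  static void *epf_memremap_window(struct device *dev,
                                   phys_addr_t base, size_t size)
  {
          struct dev_pagemap *pgmap;

          pgmap = devm_kzalloc(dev, sizeof(*pgmap), GFP_KERNEL);
          if (!pgmap)
                  return ERR_PTR(-ENOMEM);

          pgmap->res.start = base;
          pgmap->res.end   = base + size - 1;
          pgmap->res.flags = IORESOURCE_MEM;
          pgmap->type      = MEMORY_DEVICE_DEVDAX; /* illustrative */

          /* On success, pfn_to_page() becomes valid for the window. */
          return devm_memremap_pages(dev, pgmap);
  }

The page struct cost discussed above would then be paid per mapped
window rather than up front for the whole PCIe address space.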

Regards,
Alan

>
> Yes, it would be nice to get rid of this overhead, but that's a much
> bigger long-term project.
>
> Logan
>
>
> [1]
> https://lore.kernel.org/linux-riscv/20190327213643.23789-1-logang@xxxxxxxxxxxx/
> [2]
> https://lore.kernel.org/linux-arm-kernel/1570788852-12402-1-git-send-email-anshuman.khandual@xxxxxxx/