Re: [PATCH 4/5] mm: Provide page mask in struct follow_pfnmap_args
From: Jason Gunthorpe
Date: Fri Feb 14 2025 - 14:14:38 EST
On Wed, Feb 05, 2025 at 04:17:20PM -0700, Alex Williamson wrote:
> follow_pfnmap_start() walks the page table for a given address and
> fills out the struct follow_pfnmap_args in pfnmap_args_setup().
> The page mask of the page table level is already provided to this
> latter function for calculating the pfn. This page mask can also be
> useful for the caller to determine the extent of the contiguous
> mapping.
>
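
To make the new field concrete, here is a minimal sketch of a caller
reading it after a single walk. The field name "pgmask" is a placeholder
assumption on my part, since the patch body isn't quoted above:

#include <linux/mm.h>

/*
 * Minimal sketch, not from the patch: after follow_pfnmap_start() the
 * caller sees both the pfn and the size of the leaf that maps it.  The
 * "pgmask" field name is assumed.
 */
static void show_leaf(struct vm_area_struct *vma, unsigned long vaddr)
{
	struct follow_pfnmap_args args = { .vma = vma, .address = vaddr };

	if (follow_pfnmap_start(&args))
		return;

	/* pgmask would be PAGE_MASK, PMD_MASK or PUD_MASK for the leaf */
	pr_info("pfn 0x%lx, leaf size 0x%lx\n", args.pfn, ~args.pgmask + 1);

	follow_pfnmap_end(&args);
}
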
> For example, vfio-pci now supports huge_fault for pfnmaps and is able
> to insert pud and pmd mappings. When we DMA map these pfnmaps, e.g.
> PCI MMIO BARs, we iterate follow_pfnmap_start() to get each pfn to test
> for a contiguous pfn range. Providing the mapping page mask allows us
> to skip ahead by the extent of the mapping level. Assuming a 1GB pud
> level and a 4KB page size, iterations are reduced by a factor of 256K.
> In wall clock time, mapping a 32GB PCI BAR is reduced from ~1s to <1ms.
>
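
As an illustration of the skip (not the actual vfio code), a loop like
the one below advances by the extent of each leaf rather than one page
per walk, again assuming the new field is called "pgmask":

/*
 * Illustrative sketch only: count how many pfns starting at @vaddr are
 * physically contiguous, stepping by the extent of each mapping level.
 */
static long contiguous_pfns(struct vm_area_struct *vma,
			    unsigned long vaddr, unsigned long npages)
{
	struct follow_pfnmap_args args = { .vma = vma, .address = vaddr };
	unsigned long first_pfn = 0, done = 0;

	while (done < npages) {
		unsigned long step;
		int ret = follow_pfnmap_start(&args);

		if (ret) {
			if (!done)
				return ret;
			break;
		}

		if (!done)
			first_pfn = args.pfn;
		else if (args.pfn != first_pfn + done) {
			follow_pfnmap_end(&args);
			break;
		}

		/*
		 * Jump to the end of the current leaf: a 4KB step for a
		 * pte, but a 2MB/1GB step for a pmd/pud, which is where
		 * the factor-of-256K reduction in iterations comes from.
		 */
		step = (((args.address | ~args.pgmask) + 1) - args.address)
			>> PAGE_SHIFT;
		follow_pfnmap_end(&args);

		done += step;
		args.address = vaddr + (done << PAGE_SHIFT);
	}

	return done < npages ? done : npages;
}

With a 1GB pud leaf that is one walk per 256K pages, which matches the
numbers quoted above.
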
> Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> Cc: linux-mm@xxxxxxxxx
> Signed-off-by: Alex Williamson <alex.williamson@xxxxxxxxxx>
> ---
> include/linux/mm.h | 2 ++
> mm/memory.c | 1 +
> 2 files changed, 3 insertions(+)
Reviewed-by: Jason Gunthorpe <jgg@xxxxxxxxxx>
Jason