Re: [PATCH 5/5] vfio/type1: Use mapping page mask for pfnmaps
From: Alex Williamson
Date: Mon Feb 17 2025 - 16:52:40 EST
On Fri, 14 Feb 2025 15:27:04 -0400
Jason Gunthorpe <jgg@xxxxxxxx> wrote:
> On Wed, Feb 05, 2025 at 04:17:21PM -0700, Alex Williamson wrote:
> > @@ -590,15 +592,23 @@ static int vaddr_get_pfns(struct mm_struct *mm, unsigned long vaddr,
> > vma = vma_lookup(mm, vaddr);
> >
> > if (vma && vma->vm_flags & VM_PFNMAP) {
> > - ret = follow_fault_pfn(vma, mm, vaddr, pfn, prot & IOMMU_WRITE);
> > + unsigned long pgmask;
> > +
> > + ret = follow_fault_pfn(vma, mm, vaddr, pfn, &pgmask,
> > + prot & IOMMU_WRITE);
> > if (ret == -EAGAIN)
> > goto retry;
> >
> > if (!ret) {
> > - if (is_invalid_reserved_pfn(*pfn))
> > - ret = 1;
> > - else
> > + if (is_invalid_reserved_pfn(*pfn)) {
> > + unsigned long epfn;
> > +
> > + epfn = (((*pfn << PAGE_SHIFT) + ~pgmask + 1)
> > + & pgmask) >> PAGE_SHIFT;
>
> That seems a bit indirect
>
> epfn = ((*pfn) | (~pgmask >> PAGE_SHIFT)) + 1;
>
> ?
That is simpler, for sure. Thanks!
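
For anyone following along, here's a quick stand-alone sanity check that
the two forms agree (a userspace sketch; 4KiB base pages and a 2MiB
PMD-sized mapping assumed, pfn value chosen arbitrarily):

#include <stdio.h>

#define PAGE_SHIFT 12	/* assumes 4KiB base pages */

int main(void)
{
	/* mask for a 2MiB PMD-sized mapping: low 21 bits clear */
	unsigned long pgmask = ~((1UL << 21) - 1);
	unsigned long pfn = 0x80010;	/* arbitrary pfn inside the mapping */

	/* original form: round the address up to the next mapping boundary */
	unsigned long epfn_a = (((pfn << PAGE_SHIFT) + ~pgmask + 1)
				& pgmask) >> PAGE_SHIFT;

	/* suggested form: fill the low pfn bits, then step past the end */
	unsigned long epfn_b = (pfn | (~pgmask >> PAGE_SHIFT)) + 1;

	printf("%#lx %#lx\n", epfn_a, epfn_b);	/* both print 0x80200 */
	return 0;
}

Both compute the first pfn past the current mapping, which is what the
clamp below relies on.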
> > + ret = min_t(int, npages, epfn - *pfn);
>
> It is nitpicky, but the ints here should be long, and npages should be
> unsigned long.
I've added a new patch to the series that uses unsigned long consistently
for passed page counts and long for returns, roughly along the lines
sketched below. Now we just need a system with a 16TiB huge page size.
Thanks,
Alex
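
P.S. For the archives, the type cleanup would look roughly like the sketch
below. This is my reading of the direction only, not the actual follow-up
patch; the vaddr_get_pfns() parameter list is assumed from the
vfio_iommu_type1.c of this era and may differ in detail:

/* before: int return, long npages, clamp truncated via min_t(int, ...) */
static int vaddr_get_pfns(struct mm_struct *mm, unsigned long vaddr,
			  long npages, int prot, unsigned long *pfn,
			  struct page **pages);

/* after: long return (page count or -errno), unsigned long npages */
static long vaddr_get_pfns(struct mm_struct *mm, unsigned long vaddr,
			   unsigned long npages, int prot,
			   unsigned long *pfn, struct page **pages);

with the clamp in the hunk above becoming:

	ret = min_t(long, npages, epfn - *pfn);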