Re: [PATCH v9 07/10] mm: Device exclusive memory access
From: Peter Xu
Date: Fri May 28 2021 - 09:11:34 EST
On Fri, May 28, 2021 at 11:48:40AM +1000, Alistair Popple wrote:
[...]
> > > > > +	while (page_vma_mapped_walk(&pvmw)) {
> > > > > +		/* Unexpected PMD-mapped THP? */
> > > > > +		VM_BUG_ON_PAGE(!pvmw.pte, page);
> > > > > +
> > > > > +		if (!pte_present(*pvmw.pte)) {
> > > > > +			ret = false;
> > > > > +			page_vma_mapped_walk_done(&pvmw);
> > > > > +			break;
> > > > > +		}
> > > > > +
> > > > > +		subpage = page - page_to_pfn(page) + pte_pfn(*pvmw.pte);
> > > >
> > > > I see that all pages passed in should be done after FOLL_SPLIT_PMD,
> > > > so is this needed? Or say, should subpage==page always be true?
> > >
> > > Not always, in the case of a thp there are small ptes which will get
> > > device exclusive entries.
> >
> > FOLL_SPLIT_PMD will first split the huge thp into smaller pages, then do
> > follow_page_pte() on them (in follow_pmd_mask):
> >
> > 	if (flags & FOLL_SPLIT_PMD) {
> > 		int ret;
> > 		page = pmd_page(*pmd);
> > 		if (is_huge_zero_page(page)) {
> > 			spin_unlock(ptl);
> > 			ret = 0;
> > 			split_huge_pmd(vma, pmd, address);
> > 			if (pmd_trans_unstable(pmd))
> > 				ret = -EBUSY;
> > 		} else {
> > 			spin_unlock(ptl);
> > 			split_huge_pmd(vma, pmd, address);
> > 			ret = pte_alloc(mm, pmd) ? -ENOMEM : 0;
> > 		}
> >
> > 		return ret ? ERR_PTR(ret) :
> > 			follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
> > 	}
> >
> > So I thought all pages are small pages?
>
> The page will remain a transparent huge page though (at least as I
> understand things). FOLL_SPLIT_PMD turns it into a pte-mapped thp by
> splitting the pmd and creating ptes mapping the subpages, but it doesn't
> split the page itself. For comparison, FOLL_SPLIT (which has been removed
> in v5.13 due to lack of use) is what would be used to split the page in
> the above GUP code, by calling split_huge_page() rather than
> split_huge_pmd().
But shouldn't FOLL_SPLIT_PMD have filled in small pfns for each pte? See
__split_huge_pmd_locked():
	for (i = 0, addr = haddr; i < HPAGE_PMD_NR; i++, addr += PAGE_SIZE) {
		...
		} else {
			entry = mk_pte(page + i, READ_ONCE(vma->vm_page_prot));
			...
		}
		...
		set_pte_at(mm, addr, pte, entry);
	}
Then IIUC the subsequent follow_page_pte() will fetch the small pages directly?
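
Just to make sure we are looking at the same arithmetic, below is a quick
userspace sketch (not kernel code; head_pfn, HPAGE_PMD_NR and the sample
indexes are made-up values for illustration) of the subpage computation in
the patch, with the struct page pointers replaced by pfns and the ptes
filled as page + i like __split_huge_pmd_locked() does. If the page passed
in is the thp head page, subpage only matches it for i == 0; whenever the
pte's pfn equals page_to_pfn(page), the two are the same.

/*
 * Toy model of:
 *
 *	subpage = page - page_to_pfn(page) + pte_pfn(*pvmw.pte);
 *
 * using pfns instead of struct page pointers.
 */
#include <stdio.h>

#define HPAGE_PMD_NR	512UL	/* 2M thp with 4K pages, just for the example */

int main(void)
{
	unsigned long head_pfn = 0x1000;	/* arbitrary pfn of the thp head page */
	unsigned long samples[] = { 0, 1, HPAGE_PMD_NR - 1 };
	unsigned long n;

	for (n = 0; n < sizeof(samples) / sizeof(samples[0]); n++) {
		unsigned long i = samples[n];
		/* after the pmd split, the pte for subpage i maps pfn head_pfn + i */
		unsigned long pte_pfn = head_pfn + i;
		/* subpage = page - page_to_pfn(page) + pte_pfn(*pvmw.pte) */
		unsigned long subpage_pfn = head_pfn - head_pfn + pte_pfn;

		printf("i=%3lu: subpage pfn %#lx %s pfn of the page passed in\n",
		       i, subpage_pfn, subpage_pfn == head_pfn ? "==" : "!=");
	}
	return 0;
}

(This compiles with plain cc and only demonstrates the pointer arithmetic,
not the actual page_vma_mapped_walk(); it is just to spell out why I would
expect subpage == page if the small page is what gets passed in.)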
--
Peter Xu