Re: [PATCH v3 3/4] mm: Support batched unmap for lazyfree large folios during reclamation
From: Barry Song
Date: Tue Feb 04 2025 - 22:36:15 EST
On Wed, Feb 5, 2025 at 12:38 AM David Hildenbrand <david@xxxxxxxxxx> wrote:
>
> Hi,
>
> > unsigned long hsz = 0;
> >
> > @@ -1780,6 +1800,16 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
> > hugetlb_vma_unlock_write(vma);
> > }
> > pteval = huge_ptep_clear_flush(vma, address, pvmw.pte);
> > + } else if (folio_test_large(folio) && !(flags & TTU_HWPOISON) &&
> > + can_batch_unmap_folio_ptes(address, folio, pvmw.pte)) {
> > + nr_pages = folio_nr_pages(folio);
> > + flush_cache_range(vma, range.start, range.end);
> > + pteval = get_and_clear_full_ptes(mm, address, pvmw.pte, nr_pages, 0);
> > + if (should_defer_flush(mm, flags))
> > + set_tlb_ubc_flush_pending(mm, pteval, address,
> > + address + folio_size(folio));
> > + else
> > + flush_tlb_range(vma, range.start, range.end);
> > } else {
>
> I have some fixes [1] that will collide with this series. I'm currently
> preparing a v2, and am not 100% sure when the fixes will get queued+merged.
>
> I'll base them on mm-stable for now, and send them out based on
> that, to avoid the conflicts here (they should all be fairly easy to
> resolve at a quick glance).
>
> So we might have to refresh this series here if the fixes go in first.
I assume you're referring to "[PATCH v1 08/12] mm/rmap: handle
device-exclusive entries correctly in try_to_unmap_one()". The
conflict looks straightforward to resolve. If your patches land
first, I'll rebase this series and resend.
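
For anyone following along: the batching check in the hunk above
boils down to something like the sketch below (simplified, and the
exact conditions in the series may differ - treat it as an
illustration, not the committed code). The idea is that the batched
path is only taken when the folio is lazyfree anon and every PTE of
the large folio is present and contiguous:

static bool can_batch_unmap_folio_ptes(unsigned long addr,
		struct folio *folio, pte_t *ptep)
{
	const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
	int max_nr = folio_nr_pages(folio);
	pte_t pte = ptep_get(ptep);

	/* Lazyfree: an anon folio that is no longer swap-backed. */
	if (!folio_test_anon(folio) || folio_test_swapbacked(folio))
		return false;
	/* @ptep must map the first page of the folio. */
	if (pte_pfn(pte) != folio_pfn(folio))
		return false;
	/* All remaining PTEs must map the folio contiguously. */
	return folio_pte_batch(folio, addr, ptep, pte, max_nr, fpb_flags,
			       NULL, NULL, NULL) == max_nr;
}

With that guarantee, get_and_clear_full_ptes() can clear all nr_pages
entries in one go, and we either queue a deferred batched TLB flush
via set_tlb_ubc_flush_pending() or do an immediate flush_tlb_range()
over the folio's range.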
>
> [1] https://lkml.kernel.org/r/20250129115411.2077152-1-david@xxxxxxxxxx
>
> --
> Cheers,
>
> David / dhildenb
>
Thanks
Barry