Re: [PATCH v3 3/4] mm: Support batched unmap for lazyfree large folios during reclamation

From: Andrew Morton
Date: Tue Feb 04 2025 - 21:55:55 EST


On Tue, 4 Feb 2025 12:38:31 +0100 David Hildenbrand <david@xxxxxxxxxx> wrote:

> Hi,
>
> > unsigned long hsz = 0;
> >
> > @@ -1780,6 +1800,16 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
> > hugetlb_vma_unlock_write(vma);
> > }
> > pteval = huge_ptep_clear_flush(vma, address, pvmw.pte);
> > + } else if (folio_test_large(folio) && !(flags & TTU_HWPOISON) &&
> > + can_batch_unmap_folio_ptes(address, folio, pvmw.pte)) {
> > + nr_pages = folio_nr_pages(folio);
> > + flush_cache_range(vma, range.start, range.end);
> > + pteval = get_and_clear_full_ptes(mm, address, pvmw.pte, nr_pages, 0);
> > + if (should_defer_flush(mm, flags))
> > + set_tlb_ubc_flush_pending(mm, pteval, address,
> > + address + folio_size(folio));
> > + else
> > + flush_tlb_range(vma, range.start, range.end);
> > } else {
>
> I have some fixes [1] that will collide with this series. I'm currently
> preparing a v2, and am not 100% sure when the fixes will get queued+merged.
>
> I'll base them against mm-stable for now, and send them out based on
> that, to avoid the conflicts here (they should all be fairly easy to
> resolve at a quick glance).
>
> So we might have to refresh this series here if the fixes go in first.
>
> [1] https://lkml.kernel.org/r/20250129115411.2077152-1-david@xxxxxxxxxx

It doesn't look like "mm: fixes for device-exclusive entries (hmm)"
will be backportable(?), but yes, we should aim to stage your fixes
against mainline and ahead of other changes, to at least make life
easier for anyone who chooses to backport your fixes into an earlier
kernel.
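For readers following along: the hunk quoted above makes try_to_unmap_one()
clear all PTEs covering a large folio in one call and issue a single
(possibly deferred) TLB flush, instead of looping one PTE at a time. The
following is a minimal userspace model of that idea only; every name here
(toy_mm, toy_get_and_clear_ptes, toy_unmap_folio, FOLIO_PAGES) is invented
for illustration and is not the kernel API.

```c
/* Toy userspace model of batched PTE clearing with a single flush.
 * Not kernel code: names and structures are invented for illustration. */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define FOLIO_PAGES 16 /* pretend the large folio spans 16 PTEs */

struct toy_mm {
    unsigned long ptes[FOLIO_PAGES]; /* nonzero = page mapped */
    int tlb_flushes;                 /* immediate flushes issued */
    bool flush_pending;              /* deferred-flush flag (models ubc) */
};

/* Clear 'nr' consecutive PTEs starting at 'idx' in one pass and return
 * the first old value, loosely mirroring how get_and_clear_full_ptes()
 * returns a representative pteval for the batch. */
static unsigned long toy_get_and_clear_ptes(struct toy_mm *mm,
                                            size_t idx, size_t nr)
{
    unsigned long first = mm->ptes[idx];

    for (size_t i = 0; i < nr; i++)
        mm->ptes[idx + i] = 0;
    return first;
}

/* Unmap the whole folio: one batched clear, then either defer the TLB
 * flush or do exactly one range flush -- never one flush per page. */
static void toy_unmap_folio(struct toy_mm *mm, size_t idx, size_t nr,
                            bool defer_flush)
{
    (void)toy_get_and_clear_ptes(mm, idx, nr);
    if (defer_flush)
        mm->flush_pending = true; /* flushed later, in batch */
    else
        mm->tlb_flushes++;        /* single flush for the range */
}
```

The point of the sketch is the flush accounting: unmapping a 16-page folio
costs one clear pass and at most one flush, which is what the batched path
in the patch buys over the per-PTE fallback.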