Re: [PATCH 1/1] mm/memory: Fix boundary check for next PFN in folio_pte_batch()

From: David Hildenbrand
Date: Tue Feb 27 2024 - 03:46:55 EST


On 27.02.24 09:45, Lance Yang wrote:
> On Tue, Feb 27, 2024 at 4:33 PM David Hildenbrand <david@xxxxxxxxxx> wrote:

>> On 27.02.24 09:23, Lance Yang wrote:
>>> Hey David,

>>> Thanks for taking the time to review!

>>> On Tue, Feb 27, 2024 at 3:30 PM David Hildenbrand <david@xxxxxxxxxx> wrote:

>>>> On 27.02.24 08:04, Lance Yang wrote:
>>>>> Previously, in folio_pte_batch(), only the upper boundary of the
>>>>> folio was checked, using '>=' for the comparison. This led to
>>>>> incorrect behavior when the next PFN fell below the lower boundary
>>>>> of the folio, especially in corner cases where the next PFN might
>>>>> fall into a different folio.
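
>>>>> For reference, the check in question is roughly the following
>>>>> (simplified excerpt; the full function also compares the PTE bits):

>>>>> 	unsigned long folio_end_pfn = folio_pfn(folio) + folio_nr_pages(folio);
>>>>> 	...
>>>>> 	/* only the upper boundary is checked */
>>>>> 	if (pte_pfn(pte) >= folio_end_pfn)
>>>>> 		break;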

>>>> Which commit does this fix?

>>>> The introducing commit (f8d937761d65c87e9987b88ea7beb7bddc333a0e) is
>>>> already in mm-stable, so we would need a Fixes: tag. Unless Ryan's
>>>> changes introduced a problem.

>>>> BUT

>>>> I don't see what is broken. :)

>>>> Can you please give an example/reproducer?

>>> Example 1:

>>> PTE0 is present for large folio1.
>>> PTE1 is present for large folio1.
>>> PTE2 is present for large folio1.
>>> PTE3 is present for large folio1.

>>> folio_nr_pages(folio1) is 4.
>>> folio_nr_pages(folio2) is 4.

>>> pte = *start_ptep = PTE0;
>>> max_nr = folio_nr_pages(folio2);

>>> If folio_pfn(folio1) < folio_pfn(folio2),
>>> the return value of folio_pte_batch(folio2, start_ptep, pte, max_nr)
>>> will be 4 (actually, it should be 0).

>>> Example 2:

>>> PTE0 is present for large folio2.
>>> PTE1 is present for large folio1.
>>> PTE2 is present for large folio1.
>>> PTE3 is present for large folio1.

>>> folio_nr_pages(folio1) is 4.
>>> folio_nr_pages(folio2) is 4.

>>> pte = *start_ptep = PTE0;
>>> max_nr = folio_nr_pages(folio1);
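
>>> For example 1, the call I have in mind looks roughly like this
>>> (a sketch only; arguments abbreviated as above):

>>> 	/* start_ptep points at PTE0, which maps folio1, not folio2 */
>>> 	pte = ptep_get(start_ptep);
>>> 	nr = folio_pte_batch(folio2, start_ptep, pte,
>>> 			     folio_nr_pages(folio2));
>>> 	/* nr ends up as 4, although start_ptep does not map folio2 */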


>> In both cases, start_ptep does not map the folio that is passed in.

>> It's a BUG in your caller unless I am missing something important.

> Sorry, I misunderstood.

> Thanks for your clarification!

I'll post some kernel doc as a reply to Barry's export patch to clarify that.
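
Roughly, the expected calling convention is the following (a sketch only;
vm_normal_folio() and folio_test_large() are real helpers, but the
surrounding variables are made up and arguments are abbreviated as above):

	pte = ptep_get(start_ptep);
	/* derive the folio from the PTE we start batching at */
	folio = vm_normal_folio(vma, addr, pte);
	if (folio && folio_test_large(folio))
		nr = folio_pte_batch(folio, start_ptep, pte, max_nr);

Because the first PTE must map the folio, and each expected PTE in the
batch advances the PFN, the batched PFNs can never fall below
folio_pfn(folio); only the upper boundary has to be checked.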

--
Cheers,

David / dhildenb