Gup needs to read ptes locklessly, so it currently uses ptep_get_lockless().
However, the returned access and dirty bits are unimportant, so let's
switch over to ptep_get_lockless_norecency().
The wrinkle is that gup needs to check that the pte hasn't changed once
it has pinned the folio, following this model:

    pte = ptep_get_lockless_norecency(ptep)
    ...
    if (!pte_same(pte, ptep_get_lockless_norecency(ptep)))
        // RACE!
    ...
And now that pte may not contain the correct access and dirty bits, the
pte_same() comparison could spuriously fail. So let's introduce a new
pte_same_norecency() helper, which ignores the access and dirty bits
when doing the comparison.
Note that previously, ptep_get() was being used for the comparison; this
is technically incorrect because the PTL is not held. I've also
converted the comparison to use the preferred pmd_same() helper instead
of doing a raw value comparison.
As a side-effect, this new approach removes the possibility of
concurrent read/write to the page causing a spurious fast gup failure,
because the access and dirty bits are no longer used in the comparison.
Signed-off-by: Ryan Roberts <ryan.roberts@xxxxxxx>
---
#ifndef __HAVE_ARCH_PTE_UNUSED
/*
* Some architectures provide facilities to virtualization guests
diff --git a/mm/gup.c b/mm/gup.c
index df83182ec72d..0f96d0a5ec09 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2576,7 +2576,7 @@ static int gup_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
 	if (!ptep)
 		return 0;
 	do {
-		pte_t pte = ptep_get_lockless(ptep);
+		pte_t pte = ptep_get_lockless_norecency(ptep);
 		struct page *page;
 		struct folio *folio;
@@ -2617,8 +2617,9 @@ static int gup_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
 			goto pte_unmap;
 		}
 
-		if (unlikely(pmd_val(pmd) != pmd_val(*pmdp)) ||
-		    unlikely(pte_val(pte) != pte_val(ptep_get(ptep)))) {
+		if (unlikely(!pmd_same(pmd, *pmdp)) ||
+		    unlikely(!pte_same_norecency(pte,
+					ptep_get_lockless_norecency(ptep)))) {
 			gup_put_folio(folio, 1, flags);
 			goto pte_unmap;