[PATCH] mm,unmap: avoid flushing TLB in batch if PTE is inaccessible
From: Huang Ying
Date: Mon Apr 10 2023 - 03:52:49 EST
0Day/LKP reported a performance regression for commit
7e12beb8ca2a ("migrate_pages: batch flushing TLB"). That commit
batches the TLB flushing during page migration, so in
try_to_migrate_one(), ptep_clear_flush() is replaced with
set_tlb_ubc_flush_pending(). Further investigation found that
ptep_clear_flush() avoids the TLB flush entirely when the PTE is
inaccessible, while the batched flushing path does not. The same
optimization can be applied to the batched path to recover the lost
performance.
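For reference, the generic ptep_clear_flush() in mm/pgtable-generic.c
already short-circuits the flush this way; a slightly simplified
sketch of its logic:

pte_t ptep_clear_flush(struct vm_area_struct *vma, unsigned long address,
		       pte_t *ptep)
{
	struct mm_struct *mm = (vma)->vm_mm;
	pte_t pte;

	pte = ptep_get_and_clear(mm, address, ptep);
	/* Skip the TLB flush if the old PTE cannot be cached in a TLB. */
	if (pte_accessible(mm, pte))
		flush_tlb_page(vma, address);
	return pte;
}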
So in this patch, we check pte_accessible() before
set_tlb_ubc_flush_pending() in try_to_unmap_one() and
try_to_migrate_one(). With the patch, the benchmark score of the
anon-cow-rand-mt test case of the vm-scalability test suite improves
by up to 2.1% on an Intel server machine, and the number of TLB
flushing IPIs is reduced by up to 44.3%.
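Whether a non-present PTE can still be cached in the TLB is
architecture specific. For illustration only (not part of this
patch), x86's pte_accessible() is roughly:

static inline bool pte_accessible(struct mm_struct *mm, pte_t a)
{
	if (pte_flags(a) & _PAGE_PRESENT)
		return true;

	/*
	 * A PROT_NONE PTE may still be cached in the TLB while a
	 * deferred flush is pending, so treat it as accessible.
	 */
	if ((pte_flags(a) & _PAGE_PROTNONE) &&
	    mm_tlb_flush_pending(mm))
		return true;

	return false;
}

A cleared or otherwise non-present, non-PROT_NONE PTE returns false
here, which is exactly the case where the flush can be skipped.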
Link: https://lore.kernel.org/oe-lkp/202303192325.ecbaf968-yujie.liu@xxxxxxxxx
Link: https://lore.kernel.org/oe-lkp/ab92aaddf1b52ede15e2c608696c36765a2602c1.camel@xxxxxxxxx/
Fixes: 7e12beb8ca2a ("migrate_pages: batch flushing TLB")
Reported-by: kernel test robot <yujie.liu@xxxxxxxxx>
Signed-off-by: "Huang, Ying" <ying.huang@xxxxxxxxx>
Cc: Nadav Amit <namit@xxxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
---
mm/rmap.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/mm/rmap.c b/mm/rmap.c
index 8632e02661ac..3c7c43642d7c 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1582,7 +1582,8 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 				 */
 				pteval = ptep_get_and_clear(mm, address, pvmw.pte);
 
-				set_tlb_ubc_flush_pending(mm, pte_dirty(pteval));
+				if (pte_accessible(mm, pteval))
+					set_tlb_ubc_flush_pending(mm, pte_dirty(pteval));
 			} else {
 				pteval = ptep_clear_flush(vma, address, pvmw.pte);
 			}
@@ -1963,7 +1964,8 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 				 */
 				pteval = ptep_get_and_clear(mm, address, pvmw.pte);
 
-				set_tlb_ubc_flush_pending(mm, pte_dirty(pteval));
+				if (pte_accessible(mm, pteval))
+					set_tlb_ubc_flush_pending(mm, pte_dirty(pteval));
 			} else {
 				pteval = ptep_clear_flush(vma, address, pvmw.pte);
 			}
--
2.39.2