[PATCH v3 3/3] arm64/mm: Elide tlbi in contpte_convert() under BBML2
From: Mikołaj Lenczewski
Date: Thu Mar 13 2025 - 06:44:28 EST
When converting a region via contpte_convert() to use mTHP, we have two
different goals: we have to mark each entry as contiguous, and we would
like to smear the dirty and young (access) bits across all entries in
the contiguous block. Currently, we do this by first accumulating the
dirty and young bits across the block, using an atomic
__ptep_get_and_clear() together with the relevant pte_{dirty,young}()
calls, then performing a tlbi, and finally smearing the accumulated
bits across the block using __set_ptes().
This approach works fine for BBM level 0, but with support for BBM level
2 we are allowed to reorder the tlbi to after setting the page table
entries. This reordering reduces the likelihood of a concurrent page
walk finding an invalid (not present) PTE, which in turn reduces the
likelihood of a fault in other threads and improves performance
marginally (more so when there are more threads).
If we support BBML2 without conflict aborts, however, we can avoid the
final flush altogether and have the hardware manage the TLB entries for
us. Avoiding flushes is a win.
Signed-off-by: Mikołaj Lenczewski <miko.lenczewski@xxxxxxx>
---
arch/arm64/mm/contpte.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
index 55107d27d3f8..77ed03b30b72 100644
--- a/arch/arm64/mm/contpte.c
+++ b/arch/arm64/mm/contpte.c
@@ -68,7 +68,8 @@ static void contpte_convert(struct mm_struct *mm, unsigned long addr,
pte = pte_mkyoung(pte);
}
- __flush_tlb_range(&vma, start_addr, addr, PAGE_SIZE, true, 3);
+ if (!system_supports_bbml2_noabort())
+ __flush_tlb_range(&vma, start_addr, addr, PAGE_SIZE, true, 3);
__set_ptes(mm, start_addr, start_ptep, pte, CONT_PTES);
}
--
2.48.1