[RFC][PATCH v1] arm64: tlb: call kvm_call_hyp once during kvm_tlb_flush_vmid_range

From: eillon

Date: Mon Feb 09 2026 - 06:48:46 EST


The kvm_tlb_flush_vmid_range() function is performance-critical
during live migration, but on systems that support TLB flush by
range it contains a while loop that issues one hypercall per
MAX_TLBI_RANGE_PAGES chunk when the size exceeds MAX_TLBI_RANGE_PAGES.

This results in frequent entries into kvm_call_hyp(), so a large
amount of time (more than 50%) is spent in kvm_clear_dirty_log_protect()
during migration. So, when the address range is larger than
MAX_TLBI_RANGE_PAGES, call __kvm_tlb_flush_vmid directly to
optimize performance.

---
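
For illustration, here is a minimal user-space sketch (not kernel code;
it assumes a 4K page size and that MAX_TLBI_RANGE_PAGES is 2^21 pages,
i.e. 8GB per ranged hypercall) of how many kvm_call_hyp() invocations
the old loop issues for a given flush size:

#include <stdio.h>

#define PAGE_SHIFT		12		/* assumed: 4K pages */
#define MAX_TLBI_RANGE_PAGES	(1UL << 21)	/* assumed: 8GB at 4K */

/* Hypercalls the old while loop issues for a flush of 'size' bytes. */
static unsigned long hyp_calls_for(size_t size)
{
	unsigned long pages = size >> PAGE_SHIFT;

	return (pages + MAX_TLBI_RANGE_PAGES - 1) / MAX_TLBI_RANGE_PAGES;
}

int main(void)
{
	/* e.g. a 512GB range: the loop issues 64 separate hypercalls */
	printf("%lu\n", hyp_calls_for(512UL << 30));
	return 0;
}

With this patch, any range larger than MAX_TLBI_RANGE_PAGES is instead
handled by a single __kvm_tlb_flush_vmid call.
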
arch/arm64/kvm/hyp/pgtable.c | 18 ++++++++----------
1 file changed, 8 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 874244df7..9da22b882 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -675,21 +675,19 @@ static bool stage2_has_fwb(struct kvm_pgtable *pgt)
void kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu,
phys_addr_t addr, size_t size)
{
- unsigned long pages, inval_pages;
+ unsigned long pages = size >> PAGE_SHIFT;

- if (!system_supports_tlb_range()) {
+ /*
+ * This function is performance-critical during live migration;
+ * thus, when the address range is larger than MAX_TLBI_RANGE_PAGES,
+ * directly call __kvm_tlb_flush_vmid to optimize performance.
+ */
+ if (!system_supports_tlb_range() || pages > MAX_TLBI_RANGE_PAGES) {
kvm_call_hyp(__kvm_tlb_flush_vmid, mmu);
return;
}

- pages = size >> PAGE_SHIFT;
- while (pages > 0) {
- inval_pages = min(pages, MAX_TLBI_RANGE_PAGES);
- kvm_call_hyp(__kvm_tlb_flush_vmid_range, mmu, addr, inval_pages);
-
- addr += inval_pages << PAGE_SHIFT;
- pages -= inval_pages;
- }
+ kvm_call_hyp(__kvm_tlb_flush_vmid_range, mmu, addr, pages);
}

#define KVM_S2_MEMATTR(pgt, attr) PAGE_S2_MEMATTR(attr, stage2_has_fwb(pgt))
--
2.43.0