Re: [RFC PATCH v2 12/23] KVM: x86/mmu: Introduce kvm_split_cross_boundary_leafs()
From: Huang, Kai
Date: Tue Nov 11 2025 - 05:49:41 EST
On Thu, 2025-08-07 at 17:43 +0800, Yan Zhao wrote:
> static int tdp_mmu_split_huge_pages_root(struct kvm *kvm,
> struct kvm_mmu_page *root,
> gfn_t start, gfn_t end,
> - int target_level, bool shared)
> + int target_level, bool shared,
> + bool only_cross_bounday, bool *flush)
> {
> struct kvm_mmu_page *sp = NULL;
> struct tdp_iter iter;
> @@ -1589,6 +1596,13 @@ static int tdp_mmu_split_huge_pages_root(struct kvm *kvm,
> * level into one lower level. For example, if we encounter a 1GB page
> * we split it into 512 2MB pages.
> *
> + * When only_cross_bounday is true, just split huge pages above the
> + * target level into one lower level if the huge pages cross the start
> + * or end boundary.
> + *
> + * No need to update @flush for !only_cross_bounday cases, which rely
> + * on the callers to do the TLB flush in the end.
> + *
s/only_cross_bounday/only_cross_boundary
From tdp_mmu_split_huge_pages_root()'s perspective, it's quite odd to only
update 'flush' when 'only_cross_bounday' is true, because
'only_cross_bounday' can only result in less splitting.
I understand this is because splitting an S-EPT mapping needs a flush (at
least before non-block DEMOTE is implemented?). Would it be better to also
let the caller decide whether a TLB flush is needed? E.g., we could make
tdp_mmu_split_huge_pages_root() return whether any split has been done or
not. I think this should also work?