Re: [PATCH] x86/hyper-v: Validate entire GVA range for non-canonical addresses during PV TLB flush
From: Vitaly Kuznetsov
Date: Mon Feb 23 2026 - 03:55:57 EST
Sean Christopherson <seanjc@xxxxxxxxxx> writes:
> +Vitaly and Paolo
>
> Please use scripts/get_maintainer.pl, otherwise your emails might not reach the
> right eyeballs.
>
> On Thu, Feb 19, 2026, Manuel Andreas wrote:
>> In KVM guests with Hyper-V hypercall emulation enabled, the
>> HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST and HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX
>> hypercalls allow a guest to request invalidation of portions of a
>> virtual TLB. To that end, the hypercall input includes a list of GVAs
>> to be invalidated.
>>
>> Currently, only the base GVA is checked for canonicality, but the
>> check needs to cover every GVA in the range. Because it doesn't,
>> guests running on Intel hardware can still trigger a WARN_ONCE in the
>> host (see the commit referenced in the Fixes: tag below).
>>
>> Move the non-canonical address check so that it is performed on every
>> GVA in the supplied range. This is also more in line with the Hyper-V
>> specification: although unlikely, a range starting with an invalid GVA
>> may still contain valid GVAs.
>>
>> Fixes: fa787ac07b3c ("KVM: x86/hyper-v: Skip non-canonical addresses during PV TLB flush")
>> Signed-off-by: Manuel Andreas <manuel.andreas@xxxxxx>
>> ---
>> arch/x86/kvm/hyperv.c | 9 +++++----
>> 1 file changed, 5 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
>> index de92292eb1f5..f4f6accf1a33 100644
>> --- a/arch/x86/kvm/hyperv.c
>> +++ b/arch/x86/kvm/hyperv.c
>> @@ -1981,16 +1981,17 @@ int kvm_hv_vcpu_flush_tlb(struct kvm_vcpu *vcpu)
>> if (entries[i] == KVM_HV_TLB_FLUSHALL_ENTRY)
>> goto out_flush_all;
>>
>> - if (is_noncanonical_invlpg_address(entries[i], vcpu))
>> - continue;
>> -
>> /*
>> * Lower 12 bits of 'address' encode the number of additional
>> * pages to flush.
>> */
>> gva = entries[i] & PAGE_MASK;
>> - for (j = 0; j < (entries[i] & ~PAGE_MASK) + 1; j++)
>> + for (j = 0; j < (entries[i] & ~PAGE_MASK) + 1; j++) {
>> + if (is_noncanonical_invlpg_address(gva + j * PAGE_SIZE, vcpu))
>> + continue;
>> +
>> kvm_x86_call(flush_tlb_gva)(vcpu, gva + j * PAGE_SIZE);
>> + }
>
> Vitaly, can we treat the entire request as garbage and throw it away if any part
> isn't valid? Or do you think we should go with the more conservative approach
> as above?
I don't remember ever seeing real Windows try to flush anything
non-canonical, but my gut feeling is that we should play it safe and go
with Manuel's 'conservative' approach. It is also consistent with the
TLFS, which says:

"Invalid GVAs (those that specify addresses beyond the end of the
partition’s GVA space) are ignored."

i.e. it doesn't say 'Invalid GVA RANGES are ignored'.
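A contrived example of such a range, assuming 48-bit canonical
addresses and no LAM (the constants below are purely illustrative):

	u64 entry = 0xffff7ffffffff000ULL | 1;	/* base GVA + 1 extra page */
	/*
	 * page 0: 0xffff7ffffffff000 -> non-canonical, skipped
	 * page 1: 0xffff800000000000 -> canonical, still flushed
	 */

With per-GVA filtering the canonical page still gets flushed; dropping
the whole entry would skip it, which doesn't match the TLFS wording
above.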
>
> diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
> index de92292eb1f5..f568f3d4f6e5 100644
> --- a/arch/x86/kvm/hyperv.c
> +++ b/arch/x86/kvm/hyperv.c
> @@ -1967,8 +1967,8 @@ int kvm_hv_vcpu_flush_tlb(struct kvm_vcpu *vcpu)
> struct kvm_vcpu_hv_tlb_flush_fifo *tlb_flush_fifo;
> struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
> u64 entries[KVM_HV_TLB_FLUSH_FIFO_SIZE];
> + gva_t gva, extra_pages;
> int i, j, count;
> - gva_t gva;
>
> if (!tdp_enabled || !hv_vcpu)
> return -EINVAL;
> @@ -1978,18 +1978,22 @@ int kvm_hv_vcpu_flush_tlb(struct kvm_vcpu *vcpu)
> count = kfifo_out(&tlb_flush_fifo->entries, entries, KVM_HV_TLB_FLUSH_FIFO_SIZE);
>
> for (i = 0; i < count; i++) {
> +
> if (entries[i] == KVM_HV_TLB_FLUSHALL_ENTRY)
> goto out_flush_all;
>
> - if (is_noncanonical_invlpg_address(entries[i], vcpu))
> - continue;
> -
> /*
> * Lower 12 bits of 'address' encode the number of additional
> * pages to flush.
> */
> gva = entries[i] & PAGE_MASK;
> - for (j = 0; j < (entries[i] & ~PAGE_MASK) + 1; j++)
> + extra_pages = (entries[i] & ~PAGE_MASK);
> +
> + if (is_noncanonical_invlpg_address(gva, vcpu) ||
> + is_noncanonical_invlpg_address(gva + extra_pages * PAGE_SIZE, vcpu))
> + continue;
> +
> + for (j = 0; j < extra_pages + 1; j++)
> kvm_x86_call(flush_tlb_gva)(vcpu, gva + j * PAGE_SIZE);
>
> ++vcpu->stat.tlb_flush;
>
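FWIW, checking just the two endpoints as above should be sufficient to
catch any non-canonical GVA in the range: the low 12 bits encode at most
4095 extra pages, so a single entry spans at most 16MiB, which is far
smaller than the non-canonical hole between the two halves of the
address space. If both the first and the last GVA are canonical, every
GVA in between is canonical too; the difference from Manuel's version is
only that the whole entry gets dropped instead of just the bad pages.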
--
Vitaly