Re: [PATCH 4/7] KVM: x86: Add wrapper APIs to reset dirty/available register masks
From: Yosry Ahmed
Date: Tue Mar 10 2026 - 22:04:15 EST
On Tue, Mar 10, 2026 at 5:34 PM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
>
> Add wrappers for setting regs_{avail,dirty} in anticipation of turning the
> fields into proper bitmaps, at which point direct writes won't work so
> well.
>
> Deliberately leave the initialization in kvm_arch_vcpu_create() as-is,
> because the regs_avail logic in particular is special in that it's the one
> and only place where KVM marks eagerly synchronized registers as available.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <seanjc@xxxxxxxxxx>
> ---
> arch/x86/kvm/kvm_cache_regs.h | 19 +++++++++++++++++++
> arch/x86/kvm/svm/svm.c | 4 ++--
> arch/x86/kvm/vmx/nested.c | 4 ++--
> arch/x86/kvm/vmx/tdx.c | 2 +-
> arch/x86/kvm/vmx/vmx.c | 4 ++--
> 5 files changed, 26 insertions(+), 7 deletions(-)
>
> diff --git a/arch/x86/kvm/kvm_cache_regs.h b/arch/x86/kvm/kvm_cache_regs.h
> index ac1f9867a234..94e31cf38cb8 100644
> --- a/arch/x86/kvm/kvm_cache_regs.h
> +++ b/arch/x86/kvm/kvm_cache_regs.h
> @@ -105,6 +105,25 @@ static __always_inline bool kvm_register_test_and_mark_available(struct kvm_vcpu
> return arch___test_and_set_bit(reg, (unsigned long *)&vcpu->arch.regs_avail);
> }
>
> +static __always_inline void kvm_reset_available_registers(struct kvm_vcpu *vcpu,
> + u32 available_mask)
I'm not closely following this series and don't know this code well,
but this API is confusing to me tbh, especially in comparison with
kvm_reset_dirty_registers().

Maybe rename this to kvm_clear_available_registers(), pass in a
"clear_mask", and reverse the polarity:

	vcpu->arch.regs_avail &= ~clear_mask;

Most callers already pass in the inverse of a mask, so we might as
well pass the mask as-is and invert it here. That also makes the name
self-explanatory: we're passing a bitmask of registers to clear from
regs_avail.
> +{
> + /*
> + * Note the bitwise-AND! In practice, a straight write would also work
> + * as KVM initializes the mask to all ones and never clears registers
> + * that are eagerly synchronized. Using a bitwise-AND adds a bit of
> + * sanity checking as incorrectly marking an eagerly sync'd register
> + * unavailable will generate a WARN due to an unexpected cache request.
> + */
> + vcpu->arch.regs_avail &= available_mask;
> +}
> +
> +static __always_inline void kvm_reset_dirty_registers(struct kvm_vcpu *vcpu,
> + u32 dirty_mask)
> +{
> + vcpu->arch.regs_dirty = dirty_mask;
> +}
> +
> /*
> * The "raw" register helpers are only for cases where the full 64 bits of a
> * register are read/written irrespective of current vCPU mode. In other words,