Re: [PATCH v2 2/7] KVM: x86: Extract VMXON and EFER.SVME enablement to kernel

From: Sean Christopherson

Date: Wed Dec 17 2025 - 14:02:17 EST


On Wed, Dec 17, 2025, Xu Yilun wrote:
> > >+#define x86_virt_call(fn) \
> > >+({ \
> > >+	int __r; \
> > >+ \
> > >+	if (IS_ENABLED(CONFIG_KVM_INTEL) && \
> > >+	    cpu_feature_enabled(X86_FEATURE_VMX)) \
> > >+		__r = x86_vmx_##fn(); \
> > >+	else if (IS_ENABLED(CONFIG_KVM_AMD) && \
> > >+		 cpu_feature_enabled(X86_FEATURE_SVM)) \
> > >+		__r = x86_svm_##fn(); \
> > >+	else \
> > >+		__r = -EOPNOTSUPP; \
> > >+ \
> > >+	__r; \
> > >+})
> > >+
> > >+int x86_virt_get_cpu(int feat)
> > >+{
> > >+	int r;
> > >+
> > >+	if (!x86_virt_feature || x86_virt_feature != feat)
> > >+		return -EOPNOTSUPP;
> > >+
> > >+	if (this_cpu_inc_return(virtualization_nr_users) > 1)
> > >+		return 0;
> >
> > Should we assert that preemption is disabled? Calling this API when preemption
> > is enabled is wrong.
> >
> > Maybe use __this_cpu_inc_return(), which already verifies preemption status.

I always forget that the double-underscores have the checks.

> Is it better to explicitly assert preemption in x86_virt_get_cpu()
> rather than embed the check in __this_cpu_inc_return()? We are not just
> protecting against races on the reference counter; we also need to ensure
> the "counter increase + x86_virt_call(get_cpu)" sequence can't be preempted.

I don't have a strong preference. Using __this_cpu_inc_return() without any
nearby preempt_{disable,enable}() calls makes it quite clear that preemption
is expected to be disabled by the caller. But I'm also ok with being explicit.