Re: [RFC PATCH 24/41] KVM: x86/pmu: Zero out unexposed Counters/Selectors to avoid information leakage

From: Sean Christopherson
Date: Thu Apr 11 2024 - 17:36:21 EST


On Fri, Jan 26, 2024, Xiong Zhang wrote:
> From: Mingwei Zhang <mizhang@xxxxxxxxxx>
>
> Zero out unexposed counters/selectors because even though KVM intercepts
> all accesses to unexposed PMU MSRs, it does pass through the RDPMC
> instruction, which allows the guest to read all GP and fixed counters.
> So, zero out unexposed counter values, which might contain critical host
> information.

This belongs in the previous patch; it's effectively a bug fix. I appreciate
the push for finer granularity, but introducing a blatant bug and then immediately
fixing it goes too far.

> Signed-off-by: Mingwei Zhang <mizhang@xxxxxxxxxx>
> ---
> arch/x86/kvm/vmx/pmu_intel.c | 16 ++++++++++++++++
> 1 file changed, 16 insertions(+)
>
> diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
> index f79bebe7093d..4b4da7f17895 100644
> --- a/arch/x86/kvm/vmx/pmu_intel.c
> +++ b/arch/x86/kvm/vmx/pmu_intel.c
> @@ -895,11 +895,27 @@ static void intel_restore_pmu_context(struct kvm_vcpu *vcpu)
> wrmsrl(MSR_ARCH_PERFMON_EVENTSEL0 + i, pmc->eventsel);
> }
>
> + /*
> + * Zero out unexposed GP counters/selectors to avoid information leakage
> + * since passthrough PMU does not intercept RDPMC.

Zeroing the selectors is unnecessary. KVM still intercepts MSR_CORE_PERF_GLOBAL_CTRL,
so just ensure the PMCs that aren't exposed to the guest are never globally enabled.
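
Something like the below (untested, reusing the names from this patch) should
be all that's needed; the stale selectors are harmless as long as the unexposed
PMCs never get enabled in the guest's GLOBAL_CTRL:

	/*
	 * Only the counter values can leak via RDPMC.  A stale selector can't
	 * count anything unless its PMC is set in GLOBAL_CTRL, and KVM
	 * intercepts GLOBAL_CTRL writes, so zeroing the selectors is overkill.
	 */
	for (i = pmu->nr_arch_gp_counters; i < kvm_pmu_cap.num_counters_gp; i++)
		wrmsrl(MSR_IA32_PMC0 + i, 0);
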

> + */
> + for (i = pmu->nr_arch_gp_counters; i < kvm_pmu_cap.num_counters_gp; i++) {
> + wrmsrl(MSR_IA32_PMC0 + i, 0);
> + wrmsrl(MSR_ARCH_PERFMON_EVENTSEL0 + i, 0);
> + }
> +
> wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, pmu->fixed_ctr_ctrl);
> for (i = 0; i < pmu->nr_arch_fixed_counters; i++) {
> pmc = &pmu->fixed_counters[i];
> wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, pmc->counter);
> }
> +
> + /*
> + * Zero out unexposed fixed counters to avoid information leakage
> + * since passthrough PMU does not intercept RDPMC.

I would call out that RDPMC interception is all or nothing, i.e. KVM can't
selectively intercept _some_ PMCs, and the MSR bitmaps don't apply to RDPMC.
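
For reference, a rough sketch of why (enable_passthrough_pmu is the knob this
series introduces, so treat the name as approximate):

	/*
	 * RDPMC interception is a single VM-execution control, so it's
	 * all-or-nothing, unlike the per-MSR granularity of the MSR bitmaps.
	 */
	if (enable_passthrough_pmu)
		exec_controls_clearbit(vmx, CPU_BASED_RDPMC_EXITING);
	else
		exec_controls_setbit(vmx, CPU_BASED_RDPMC_EXITING);
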

> + */
> + for (i = pmu->nr_arch_fixed_counters; i < kvm_pmu_cap.num_counters_fixed; i++)
> + wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, 0);
> }
>
> struct kvm_pmu_ops intel_pmu_ops __initdata = {
> --
> 2.34.1
>