[PATCH v3 0/7] x86/pmu: Corner cases fixes and optimization

From: Like Xu
Date: Wed Aug 31 2022 - 04:53:59 EST


Good, well-designed tests help us find more bugs, especially when
the test steps differ from the Linux kernel's behaviour in the timing
of accesses to virtualized hardware resources.

Please feel free to run the tests, add more, or share comments.

Previous:
https://lore.kernel.org/kvm/20220823093221.38075-1-likexu@xxxxxxxxxxx/

V2 RESEND -> V3 Changelog:
- Post perf change as a separate patch to the perf folks; (Sean)
- Rewrite the deferred logic using imperative mood; (Sean)
- Drop some useless comments; (Sean)
- Rename __reprogram_counter() to kvm_pmu_request_counter_reprogam(); (Sean)
- Replace a play-by-play of the code changes with a high-level description; (Sean)
- Rename pmc->stale_counter to pmc->prev_counter; (Sean)
- Drop an unnecessary check about pmc->prev_counter; (Sean)
- Simplify the code about "CTLn is even, CTRn is odd" (see the sketch after this list); (Sean)
- Refine commit message to avoid pronouns; (Sean)
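
The even/odd simplification above refers to AMD's paired PMU MSRs:
counter n's event select (CTLn) sits at MSR_F15H_PERF_CTL0 + 2n and its
value (CTRn) at the odd address directly above it, so address parity
identifies the MSR kind and half the offset gives the counter index.
A minimal standalone sketch of that mapping follows; perf_msr_to_idx()
is a hypothetical stand-in for illustration, not the KVM helper:

/*
 * Illustrative userspace C only, NOT the KVM code. Models the CTL/CTR
 * pairing: counter n's event select lives at the even address
 * MSR_F15H_PERF_CTL0 + 2n, its count at the odd address one above it.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MSR_F15H_PERF_CTL0 0xc0010200u /* CTLn: even addresses */
#define MSR_F15H_PERF_CTR0 0xc0010201u /* CTRn: odd addresses */

/* Map a CTLn/CTRn MSR to its counter index, or -1 on a kind mismatch. */
static int perf_msr_to_idx(uint32_t msr, bool want_eventsel)
{
	uint32_t offset = msr - MSR_F15H_PERF_CTL0;
	bool is_ctl = !(offset & 1); /* even offset => event select */

	if (is_ctl != want_eventsel)
		return -1;

	return offset / 2; /* each counter owns one CTL/CTR pair */
}

int main(void)
{
	/* 0xc0010203 is CTR1: odd, offset 3, index 3 / 2 = 1 */
	printf("CTR1 -> %d\n", perf_msr_to_idx(MSR_F15H_PERF_CTR0 + 2, false));
	/* asking for an event select at a CTR address must fail */
	printf("mismatch -> %d\n", perf_msr_to_idx(MSR_F15H_PERF_CTR0, true));
	return 0;
}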

Like Xu (7):
KVM: x86/pmu: Avoid setting BIT_ULL(-1) to pmu->host_cross_mapped_mask
KVM: x86/pmu: Don't generate PEBS records for emulated instructions
KVM: x86/pmu: Avoid using PEBS perf_events for normal counters
KVM: x86/pmu: Defer reprogram_counter() to kvm_pmu_handle_event()
KVM: x86/pmu: Defer counter emulated overflow via pmc->prev_counter
KVM: x86/svm/pmu: Direct access pmu->gp_counter[] to implement
  amd_*_to_pmc()
KVM: x86/svm/pmu: Rewrite get_gp_pmc_amd() for more counters
  scalability
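
Patches 4 and 5 move counter reprogramming off the MSR-write path: a
write merely marks the counter pending in a bitmap (pmu->reprogram_pmi)
and raises KVM_REQ_PMU, and kvm_pmu_handle_event() drains the bitmap
before the next VM-entry, so back-to-back writes to the same counter
collapse into a single reprogram. A minimal standalone sketch of that
defer-and-batch shape; the function names below are stand-ins for
illustration, not the KVM implementation:

/*
 * Illustrative userspace C only, NOT the KVM code: writers mark a
 * counter dirty in a bitmap; one handler reprograms every dirty
 * counter right before re-entering the guest.
 */
#include <stdint.h>
#include <stdio.h>

#define NR_COUNTERS 6

static uint64_t reprogram_bitmap; /* stands in for pmu->reprogram_pmi */

/* Stand-in for kvm_pmu_request_counter_reprogam(): mark pending. */
static void request_counter_reprogram(unsigned int idx)
{
	reprogram_bitmap |= 1ull << idx;
	/* the real code also raises KVM_REQ_PMU on the vCPU here */
}

/* Stand-in for kvm_pmu_handle_event(): drain all pending reprograms. */
static void handle_pmu_event(void)
{
	for (unsigned int idx = 0; idx < NR_COUNTERS; idx++) {
		if (!(reprogram_bitmap & (1ull << idx)))
			continue;
		reprogram_bitmap &= ~(1ull << idx);
		printf("reprogram counter %u\n", idx);
	}
}

int main(void)
{
	/* two writes to counter 2 collapse into a single reprogram */
	request_counter_reprogram(2);
	request_counter_reprogram(2);
	request_counter_reprogram(0);
	handle_pmu_event();
	return 0;
}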

arch/x86/include/asm/kvm_host.h |   6 +-
arch/x86/kvm/pmu.c              |  44 +++++++-----
arch/x86/kvm/pmu.h              |   6 +-
arch/x86/kvm/svm/pmu.c          | 121 ++++++--------------------------
arch/x86/kvm/vmx/pmu_intel.c    |  36 +++++-----
5 files changed, 75 insertions(+), 138 deletions(-)

--
2.37.3