Re: [PATCH v2 2/2] KVM: x86/pmu: Add PEBS support for SPR and future non-hybrid models

From: Like Xu
Date: Fri Oct 14 2022 - 04:45:03 EST


On 12/10/2022 3:09 am, Sean Christopherson wrote:
Kind of a nit, but I would prefer a shortlog that talks about the pdist/pdir
stuff and not an "enable PEBS" bucket.

On Thu, Sep 22, 2022, Like Xu wrote:
@@ -140,6 +149,16 @@ static void kvm_perf_overflow(struct perf_event *perf_event,
        __kvm_perf_overflow(pmc, true);
}
+static bool need_max_precise(struct kvm_pmc *pmc)
+{
+        if (pmc->idx == 0 && x86_match_cpu(vmx_pebs_pdist_cpu))
+                return true;
+        if (pmc->idx == 32 && x86_match_cpu(vmx_pebs_pdir_cpu))
+                return true;
+
+        return false;
+}
+
static void pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type,
                                   u64 config, bool exclude_user,
                                   bool exclude_kernel, bool intr)
@@ -181,11 +200,11 @@ static void pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type,
* the accuracy of the PEBS profiling result, because the "event IP"
* in the PEBS record is calibrated on the guest side.
*
- * On Icelake everything is fine. Other hardware (GLC+, TNT+) that
+ * On Icelake everything is fine. Other hardware (TNT+) that
* could possibly care here is unsupported and needs changes.

This part of the comment is still somewhat stale, and for me at least it's not at
all helpful.

SPR has a GLC core; this patch adds that support and drops GLC+ from the "unsupported"
statement. I'm not sure how else I can help here.


         */
        attr.precise_ip = 1;
-        if (x86_match_cpu(vmx_icl_pebs_cpu) && pmc->idx == 32)
+        if (need_max_precise(pmc))
                attr.precise_ip = 3;

What about writing this as:

attr.precise_ip = pmc_get_pebs_precision(pmc);

(or whatever name is appropriate for "pebs_precision").

The comment says, "the difference in the software precision levels of guest and
host PEBS events will not affect the accuracy of the PEBS profiling result".


Then in the helper, add comments to explain the magic numbers and the interaction
with PDIST and PDIR. Bonus points if #defines for the magic numbers can be
added somewhere.

KVM just uses "precise_ip = 3" to request the hardware PDIST/PDIR assist from host perf;
it only works at level 3. For 1 or 2, there is no difference.


* 0 - SAMPLE_IP can have arbitrary skid
* 1 - SAMPLE_IP must have constant skid
* 2 - SAMPLE_IP requested to have 0 skid
* 3 - SAMPLE_IP must have 0 skid

static u64 pmc_get_pebs_precision(struct kvm_pmc *pmc)
{
        /* Comment that explains why PDIST/PDIR require 0 skid? */
        if ((pmc->idx == 0 && x86_match_cpu(vmx_pebs_pdist_cpu)) ||
            (pmc->idx == 32 && x86_match_cpu(vmx_pebs_pdir_cpu)))
                return 3;

        /* Comment about constant skid? */
        return 1;
}
Therefore, zero skid vs. constant skid makes no difference for the guest.
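
If it helps, filling in the comments and the #defines could look something like the
sketch below (the PEBS_PRECISE_* names are only illustrative and are not part of the
posted patch):

/*
 * precise_ip values, per the perf_event_attr documentation:
 * 1 - SAMPLE_IP must have constant skid, 3 - SAMPLE_IP must have 0 skid.
 */
#define PEBS_PRECISE_CONSTANT_SKID        1
#define PEBS_PRECISE_ZERO_SKID                3

static u64 pmc_get_pebs_precision(struct kvm_pmc *pmc)
{
        /*
         * PDIST (GP counter 0) and PDIR (fixed counter 0, i.e.
         * pmc->idx == 32) produce zero-skid PEBS records, so request the
         * maximum precision from host perf to get the hardware assist.
         */
        if ((pmc->idx == 0 && x86_match_cpu(vmx_pebs_pdist_cpu)) ||
            (pmc->idx == 32 && x86_match_cpu(vmx_pebs_pdir_cpu)))
                return PEBS_PRECISE_ZERO_SKID;

        /*
         * Any other counter gets constant skid; the guest calibrates the
         * event IP from the PEBS record itself, so host-side skid does
         * not affect the guest-visible result.
         */
        return PEBS_PRECISE_CONSTANT_SKID;
}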



}
diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h
index c5e5dfef69c7..4dc4bbe18821 100644
--- a/arch/x86/kvm/vmx/capabilities.h
+++ b/arch/x86/kvm/vmx/capabilities.h
@@ -398,7 +398,9 @@ static inline bool vmx_pt_mode_is_host_guest(void)
static inline bool vmx_pebs_supported(void)
{
-        return boot_cpu_has(X86_FEATURE_PEBS) && kvm_pmu_cap.pebs_ept;
+        return boot_cpu_has(X86_FEATURE_PEBS) &&
+                !boot_cpu_has(X86_FEATURE_HYBRID_CPU) &&

This belongs in a separate patch, and it should be ordered before patch 1 so that
there's no window where KVM can see pebs_ept==1 on a hybrid CPU.

Actually, shouldn't everything in this patch land before core enabling?

Perf core enabling for SPR is already upstream.
KVM support is not too late either, thanks to SPR's relentless delays.


+                kvm_pmu_cap.pebs_ept;

Please align indentation:

return boot_cpu_has(X86_FEATURE_PEBS) &&
       !boot_cpu_has(X86_FEATURE_HYBRID_CPU) &&
       kvm_pmu_cap.pebs_ept;
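
For reference, with the hybrid check split into its own prerequisite patch and the
indentation aligned as above, the helper would end up roughly like this (the comment
wording is only a suggestion):

static inline bool vmx_pebs_supported(void)
{
        /*
         * Don't advertise PEBS on hybrid parts; the PMU capabilities and
         * the PEBS record format can differ between core types, which
         * the current PEBS virtualization does not handle.
         */
        return boot_cpu_has(X86_FEATURE_PEBS) &&
               !boot_cpu_has(X86_FEATURE_HYBRID_CPU) &&
               kvm_pmu_cap.pebs_ept;
}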