On Mon, Apr 10, 2023 at 11:17 PM Like Xu <like.xu.linux@xxxxxxxxx> wrote:
On 11/4/2023 1:36 pm, Jim Mattson wrote:
On Mon, Apr 10, 2023 at 3:51 AM Like Xu <like.xu.linux@xxxxxxxxx> wrote:
From: Like Xu <likexu@xxxxxxxxxxx>
Disable PMU support when running on AMD and perf reports fewer than four
general purpose counters. All AMD PMUs must define at least four counters
due to AMD's legacy architecture hardcoding the number of counters
without providing a way to enumerate the number of counters to software,
e.g. from AMD's APM:
The legacy architecture defines four performance counters (PerfCtrn)
and corresponding event-select registers (PerfEvtSeln).
Virtualizing fewer than four counters can lead to guest instability as
software expects four counters to be available.
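(For context on "software expects four counters": the legacy AMD PMU has no CPUID leaf to enumerate counters, so guest drivers simply assume the four architectural PerfEvtSel/PerfCtr MSR pairs exist. A rough sketch of that assumption, using the Linux wrmsrl() helper and the architectural MSR bases; guest_pmu_init() is a made-up name for illustration, not code from any specific driver:

#define MSR_K7_EVNTSEL0		0xc0010000	/* PerfEvtSel0..3 */
#define MSR_K7_PERFCTR0		0xc0010004	/* PerfCtr0..3 */
#define AMD64_NUM_COUNTERS	4

static void guest_pmu_init(void)
{
	int i;

	/* Nothing to enumerate: four counters are simply assumed to exist. */
	for (i = 0; i < AMD64_NUM_COUNTERS; i++) {
		wrmsrl(MSR_K7_EVNTSEL0 + i, 0);
		wrmsrl(MSR_K7_PERFCTR0 + i, 0);
	}
}

If KVM virtualizes fewer than four counters, such blind accesses to the "missing" MSRs can fault or silently do nothing, which is the instability referred to above.)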
I'm confused. Isn't zero less than four?
As I understand it, you are saying that virtualizing zero counters is also reasonable.
If so, the above statement could be refined as:
Virtualizing fewer than four counters when the vPMU is enabled may lead to guest instability, as software expects at least four counters to be available; thus the vPMU is disabled during initialization if the minimum number of counters supported by KVM is not met.
Jim, does this help, or could you explain more about your confusion?
You say that "fewer than four counters can lead to guest instability
as software expects four counters to be available." Your solution is
to disable the PMU, which leaves zero counters available. Zero is less
than four. Hence, by your claim, disabling the PMU can lead to guest
instability. I don't see how this is an improvement over one, two, or
three counters.
Suggested-by: Sean Christopherson <seanjc@xxxxxxxxxx>
Signed-off-by: Like Xu <likexu@xxxxxxxxxxx>
---
arch/x86/kvm/pmu.h | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index dd7c7d4ffe3b..002b527360f4 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -182,6 +182,9 @@ static inline void kvm_init_pmu_capability(const struct kvm_pmu_ops *pmu_ops)
 			enable_pmu = false;
 	}
 
+	if (!is_intel && kvm_pmu_cap.num_counters_gp < AMD64_NUM_COUNTERS)
+		enable_pmu = false;
+
 	if (!enable_pmu) {
 		memset(&kvm_pmu_cap, 0, sizeof(kvm_pmu_cap));
 		return;
--
2.40.0
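For reference, AMD64_NUM_COUNTERS used in the check above is perf's existing constant for the four legacy counters (arch/x86/include/asm/perf_event.h); parts with core extensions use the larger _CORE value, so the legacy value is the minimum being enforced here:

#define AMD64_NUM_COUNTERS		4
#define AMD64_NUM_COUNTERS_CORE		6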