Re: [PATCH v3] perf/x86/amd: Warn only on new bits set

From: Paul E. McKenney
Date: Tue Aug 06 2024 - 13:47:16 EST


On Tue, Aug 06, 2024 at 09:58:48AM -0700, Breno Leitao wrote:
> Warning on every leaked bit can cause a flood of messages, triggering
> various stall-warning mechanisms to fire, including CSD-lock warnings,
> which can render the machine unusable.
>
> Track the bits that are being leaked, and only warn when a new bit is
> set.
>
> That said, this patch helps with the following issues:
>
> 1) It tells us which bits are being set, so it is easy to
> communicate them back to the vendor and to do a root-cause analysis.
>
> 2) It keeps the machine usable, because, in the worst case,
> the user gets fewer than 60 WARNs (one per unhandled bit).
>
> Suggested-by: Paul E. McKenney <paulmck@xxxxxxxxxx>
> Reviewed-by: Sandipan Das <sandipan.das@xxxxxxx>
> Signed-off-by: Breno Leitao <leitao@xxxxxxxxxx>

Reviewed-by: Paul E. McKenney <paulmck@xxxxxxxxxx>
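
For readers following along, the warn-once-per-new-bit idea can be sketched
in plain C with C11 atomics (a userspace illustration of the pattern, not
the kernel code; the names warn_new_bits and status_warned here are
illustrative). The key point is that the fetch-or both records the bits and
returns the prior mask in a single atomic step, so two CPUs racing on the
same bit cannot both report it:

```c
/* Sketch of the warn-once-per-new-bit pattern, using C11 atomics as a
 * stand-in for the kernel's atomic64_fetch_or(). Illustrative only. */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

static _Atomic uint64_t status_warned;

/* Return the bits in @status that have never been reported before.
 * atomic_fetch_or() records the new bits and returns the previous
 * mask atomically, so concurrent callers cannot both "win" a bit. */
static uint64_t warn_new_bits(uint64_t status)
{
	uint64_t prev_bits = atomic_fetch_or(&status_warned, status);
	uint64_t new_bits = status & ~prev_bits;

	if (new_bits)
		fprintf(stderr, "New overflows for inactive PMCs: %llx\n",
			(unsigned long long)new_bits);
	return new_bits;
}
```

Calling warn_new_bits(0x5) first reports 0x5; a later warn_new_bits(0x7)
reports only the fresh bit 0x2; repeating 0x7 reports nothing, which is
exactly the bounded-WARN behavior the patch is after.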

> ---
> Changelog:
> v3:
> * Avoid potential false reporting when concurrent execution occurs on
> different CPUs (Paul E. McKenney)
>
> v2:
> * Improved the patch description, getting the benefits in words.
> * https://lore.kernel.org/all/20240731154651.1555511-1-leitao@xxxxxxxxxx/
>
> v1:
> * https://lore.kernel.org/all/20240524141021.3889002-1-leitao@xxxxxxxxxx/
>
>
> arch/x86/events/amd/core.c | 10 ++++++++--
> 1 file changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/events/amd/core.c b/arch/x86/events/amd/core.c
> index 920e3a640cad..b4a1a2576510 100644
> --- a/arch/x86/events/amd/core.c
> +++ b/arch/x86/events/amd/core.c
> @@ -943,11 +943,12 @@ static int amd_pmu_v2_snapshot_branch_stack(struct perf_branch_entry *entries, u
> static int amd_pmu_v2_handle_irq(struct pt_regs *regs)
> {
> struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
> + static atomic64_t status_warned = ATOMIC64_INIT(0);
> + u64 reserved, status, mask, new_bits, prev_bits;
> struct perf_sample_data data;
> struct hw_perf_event *hwc;
> struct perf_event *event;
> int handled = 0, idx;
> - u64 reserved, status, mask;
> bool pmu_enabled;
>
> /*
> @@ -1012,7 +1013,12 @@ static int amd_pmu_v2_handle_irq(struct pt_regs *regs)
> * the corresponding PMCs are expected to be inactive according to the
> * active_mask
> */
> - WARN_ON(status > 0);
> + if (status > 0) {
> + prev_bits = atomic64_fetch_or(status, &status_warned);
> + // A new bit was set for the very first time.
> + new_bits = status & ~prev_bits;
> + WARN(new_bits, "New overflows for inactive PMCs: %llx\n", new_bits);
> + }
>
> /* Clear overflow and freeze bits */
> amd_pmu_ack_global_status(~status);
> --
> 2.43.5
>