Re: [PATCH v3] perf/x86/rapl: Fix the energy-pkg event for AMD CPUs
From: Dhananjay Ugwekar
Date: Mon Sep 09 2024 - 02:44:45 EST
Hello Ingo,
On 9/5/2024 3:37 PM, Ingo Molnar wrote:
>
> * Liang, Kan <kan.liang@xxxxxxxxxxxxxxx> wrote:
>
>>
>>
>> On 2024-09-04 6:09 a.m., Dhananjay Ugwekar wrote:
>>> After commit ("x86/cpu/topology: Add support for the AMD 0x80000026 leaf"),
>>> on AMD processors that support the extended CPUID leaf 0x80000026, the
>>> topology_die_cpumask() and topology_logical_die_id() macros no longer
>>> return the package cpumask and package id; instead they return the CCD
>>> (Core Complex Die) mask and id respectively. This changes the scope of
>>> the energy-pkg event from package to CCD.
>>>
>>> So, change the PMU scope for AMD and Hygon back to package.
>>>
>>> On a 12 CCD 1 Package AMD Zen4 Genoa machine:
>>>
>>> Before:
>>> $ cat /sys/devices/power/cpumask
>>> 0,8,16,24,32,40,48,56,64,72,80,88
>>>
>>> The expected cpumask here is just "0": since this is a package-scope
>>> event, a single CPU collects the event on behalf of all the CPUs in
>>> the package.
>>>
>>> After:
>>> $ cat /sys/devices/power/cpumask
>>> 0
>>>
>>> Signed-off-by: Dhananjay Ugwekar <Dhananjay.Ugwekar@xxxxxxx>
>>
>> Reviewed-by: Kan Liang <kan.liang@xxxxxxxxxxxxxxx>
>
> Note that I just merged the older v2 version into perf/urgent, because it's
> a regression fix, and to not create a backporting complication. I added
> your Reviewed-by to it if that's fine with you.

I'm not able to find the v2 version in perf/urgent right now; tip-bot had
given the link below earlier:
https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git/commit/?id=8d72eba1cf8cecd76a2b4c1dd7673c2dc775f514
Have we changed our decision about keeping the v2 for backporting and doing
"revert v2 + apply v4" after the PMU scope patchset?

> Your hotplug series can essentially revert this:
>
> 8d72eba1cf8c ("perf/x86/rapl: Fix the energy-pkg event for AMD CPUs")
>
> and apply the v3 patch to gain the simpler version.
>
> Thanks,
>
> Ingo

Thanks,
Dhananjay