Re: [RESEND PATCH 2/2] perf/x86: improve the event scheduling to avoid unnecessary pmu_stop/start

From: Wen Yang
Date: Wed Apr 20 2022 - 10:44:59 EST




On 2022/4/20 5:18 AM, Stephane Eranian wrote:
Hi,

Going back to the original description of this patch 2/2, it seems the
problem was that you expected PINNED events to always remain in
the same counters. This is NOT what the interface guarantees. A pinned
event is guaranteed to either be on a counter or in error state if active.
But while active the event can change counters because of event scheduling
and this is fine. The kernel only computes deltas of the raw counter. If you
are using the read() syscall to extract a value, then this is totally
transparent
and you will see no jumps. If you are instead using RDPMC, then you cannot
assume the counter index of a pinned event remains the same. If you do, then
yes, you will see discrepancies in the count returned by RDPMC. You cannot
just use RDPMC to read a counter from user space. You need kernel help.
The info you need is in the page you must mmap on the fd of the event. It
shows the current counter index of the event along with sequence number and
timing to help scale the count if necessary. The proper loop for RDPMC
is documented in include/uapi/linux/perf_event.h inside the
perf_event_mmap_page definition.

As for TFA, it is not clear to me why this is a problem unless you
have the RDPMC problem
I described above.


Thank you for your comments.

Our scenario is: all four general-purpose (GP) counters are used up, and abnormal PMC3 counter behavior was observed on several machines. The kernel version there is 4.9/4.19.

After we encountered the problem of abnormal CPI data a few months ago, we checked all kinds of applications according to your suggestions here and finally confirmed that they all comply with the documented read loop in include/uapi/linux/perf_event.h.

After lengthy experiments, we found that this problem was caused by TFA:

When Restricted Transactional Memory (RTM) is supported (CPUID.07H.EBX.RTM [bit 11] = 1) and CPUID.07H.EDX[bit 13]=1 and TSX_FORCE_ABORT[RTM_FORCE_ABORT]=0 (described later in this document), then Performance Monitor Unit (PMU) general purpose counter 3 (IA32_PMC3, MSR C4H and IA32_A_PMC3, MSR 4C4H) may contain unexpected values. Specifically, IA32_PMC3 (MSR C4H), IA32_PERF_GLOBAL_CTRL[3] (MSR 38FH) and IA32_PERFEVTSEL3 (MSR 189H) may contain unexpected values, which also affects IA32_A_PMC3 (MSR 4C4H) and IA32_PERF_GLOBAL_INUSE[3] (MSR 392H).
--> from https://www.intel.com/content/dam/support/us/en/documents/processors/Performance-Monitoring-Impact-of-TSX-Memory-Ordering-Issue-604224.pdf

We also submitted a case to Intel Premier Support (IPS):
https://premiersupport.intel.com/IPS/5003b00001fqdhaAAA

For the latest kernel, this issue could be handled by the following commit:
400816f60c54 ("perf/x86/intel: Implement support for TSX Force Abort")

However, many production environments run 4.9, 4.19, or even 3.10 kernels, which do not contain the above commit, and backporting it as a hotfix is difficult, so these kernels remain affected by this problem.

This patch 2/2 attempts to avoid switching PMU counters between perf_events, so that the special behavior of a single PMU counter (e.g., PMC3 here) is not propagated to other events. We also built a hotfix from it and verified it on several machines.

Please have another look.
Thanks

--
Best wishes,
Wen



On Tue, Apr 19, 2022 at 1:57 PM Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:

On Tue, Apr 19, 2022 at 10:16:12PM +0800, Wen Yang wrote:
We finally found that TFA (TSX Force Abort) may affect PMC3's behavior,
refer to the following patch:

400816f60c54 ("perf/x86/intel: Implement support for TSX Force Abort")

When the MSR gets set; the microcode will no longer use PMC3 but will
Force Abort every TSX transaction (upon executing COMMIT).

When TSX Force Abort (TFA) is allowed (default); the MSR gets set when
PMC3 gets scheduled and cleared when, after scheduling, PMC3 is
unused.

When TFA is not allowed; clear PMC3 from all constraints such that it
will not get used.



However, this patch attempts to avoid the switching of the pmu counters
in various perf_events, so the special behavior of a single pmu counter
will not be propagated to other events.


Since PMC3 may have special behaviors, the continuous switching of PMU
counters may not only affect performance but also lead to abnormal
data. Please consider this patch again.

I'm not following. How do you get abnormal data?

Are you using RDPMC from userspace? If so, are you following the
prescribed logic using the self-monitoring interface?