Re: [RESEND PATCH 2/2] perf/x86: improve the event scheduling to avoid unnecessary pmu_stop/start

From: Peter Zijlstra
Date: Tue Mar 08 2022 - 07:54:24 EST


On Tue, Mar 08, 2022 at 02:42:10PM +0800, Wen Yang wrote:

> Perhaps the following code could ensure that the pmu counter value is
> stable:
>
>
> 	/*
> 	 * Careful: an NMI might modify the previous event value.
> 	 *
> 	 * Our tactic to handle this is to first atomically read and
> 	 * exchange a new raw count - then add that new-prev delta
> 	 * count to the generic event atomically:
> 	 */
> again:
> 	prev_raw_count = local64_read(&hwc->prev_count);
> 	rdpmcl(hwc->event_base_rdpmc, new_raw_count);
>
> 	if (local64_cmpxchg(&hwc->prev_count, prev_raw_count,
> 			    new_raw_count) != prev_raw_count)
> 		goto again;
>
>
> It might be better if we could reduce the calls to goto again.

This is completely unrelated. And that goto is rather unlikely, unless
you're doing some truly weird things.

That case happens when the PMI for a counter lands in the middle of a
read() for that counter. In that case both will try and fold the
hardware delta into the software counter. This conflict is fundamentally
unavoidable and needs to be dealt with. The above guarantees correctness
in this case.

It is however extremely unlikely and has *NOTHING* whatsoever to do
with counter scheduling.