Re: [PATCH 2/2] perf/x86/amd: Add support for Large Increment per Cycle Events
From: Kim Phillips
Date: Wed Jan 08 2020 - 17:26:59 EST
Hi Peter,
Happy New Year, and thanks for your review...
On 12/20/19 6:09 AM, Peter Zijlstra wrote:
> On Thu, Nov 14, 2019 at 12:37:20PM -0600, Kim Phillips wrote:
>> @@ -621,6 +622,8 @@ void x86_pmu_disable_all(void)
>>  			continue;
>>  		val &= ~ARCH_PERFMON_EVENTSEL_ENABLE;
>>  		wrmsrl(x86_pmu_config_addr(idx), val);
>> +		if (is_large_inc(hwc))
>> +			wrmsrl(x86_pmu_config_addr(idx + 1), 0);
>>  	}
>>  }
>>
>
> See, the above makes sense, it doesn't assume anything about where
> config for idx and config for idx+1 are, and then here:
>
>> @@ -855,6 +871,9 @@ static inline void x86_pmu_disable_event(struct perf_event *event)
>>  	struct hw_perf_event *hwc = &event->hw;
>>
>>  	wrmsrl(hwc->config_base, hwc->config);
>> +
>> +	if (is_large_inc(hwc))
>> +		wrmsrl(hwc->config_base + 2, 0);
>>  }
>
> You hard-code the offset as being 2. I fixed that for you.
Sorry I missed that! Thanks for catching and fixing it.
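To make sure we're on the same page: I'd expect the fixed version to
address the Merge event's MSR by counter index, mirroring the
x86_pmu_disable_all() hunk above. A sketch of what I have in mind, not
necessarily your exact change:

	static inline void x86_pmu_disable_event(struct perf_event *event)
	{
		struct hw_perf_event *hwc = &event->hw;

		wrmsrl(hwc->config_base, hwc->config);

		/* also clear the paired Merge event's config, by index */
		if (is_large_inc(hwc))
			wrmsrl(x86_pmu_config_addr(hwc->idx + 1), 0);
	}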
>> @@ -849,14 +862,19 @@ int perf_assign_events(struct event_constraint **constraints, int n,
>>  			int wmin, int wmax, int gpmax, int *assign)
>>  {
>>  	struct perf_sched sched;
>> +	struct event_constraint *c;
>>
>>  	perf_sched_init(&sched, constraints, n, wmin, wmax, gpmax);
>>
>>  	do {
>>  		if (!perf_sched_find_counter(&sched))
>>  			break;	/* failed */
>> -		if (assign)
>> +		if (assign) {
>>  			assign[sched.state.event] = sched.state.counter;
>> +			c = constraints[sched.state.event];
>> +			if (c->flags & PERF_X86_EVENT_LARGE_INC)
>> +				sched.state.counter++;
>> +		}
>>  	} while (perf_sched_next_event(&sched));
>>
>>  	return sched.state.unassigned;
>
> I'm still confused by this bit. AFAICT it serves no purpose.
> perf_sched_next_event() will reset sched->state.counter to 0 on every
> pass anyway.
>
> I've deleted it for you.
OK. I had my doubts because perf_sched_next_event() doesn't reset the counter on its "next weight" exit path, which may well be a moot case, but then perf_sched_init(), called at the beginning of perf_assign_events(), doesn't clear the counter variable either...
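For reference, this is the flow I was looking at, paraphrased from
memory, so treat it as a sketch rather than the exact source:

	static bool perf_sched_next_event(struct perf_sched *sched)
	{
		struct event_constraint *c;

		if (!sched->state.unassigned || !--sched->state.unassigned)
			return false;

		do {
			/* next event */
			sched->state.event++;
			if (sched->state.event >= sched->max_events) {
				/* next weight: no counter reset on this exit path */
				sched->state.event = 0;
				sched->state.weight++;
				if (sched->state.weight > sched->max_weight)
					return false;
			}
			c = sched->constraints[sched->state.event];
		} while (c->weight != sched->state.weight);

		sched->state.counter = 0;	/* reset only on the success path */

		return true;
	}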
>> @@ -926,10 +944,14 @@ int x86_schedule_events(struct cpu_hw_events *cpuc, int n, int *assign)
>>  			break;
>>
>>  		/* not already used */
>> -		if (test_bit(hwc->idx, used_mask))
>> +		if (test_bit(hwc->idx, used_mask) || (is_large_inc(hwc) &&
>> +		    test_bit(hwc->idx + 1, used_mask)))
>>  			break;
>>
>>  		__set_bit(hwc->idx, used_mask);
>> +		if (is_large_inc(hwc))
>> +			__set_bit(hwc->idx + 1, used_mask);
>> +
>>  		if (assign)
>>  			assign[i] = hwc->idx;
>>  	}
>
> This is just really sad.. fixed that too.
[*]
> Can you verify the patches in:
>
> git://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git perf/amd
>
> work?
I found a test that fails: running perf stat on six instructions events plus one FLOPs event, multiplexing over the six h/w counters alongside the watchdog, often returns exceptionally high, incorrect counts for a couple of the instructions events. The FLOPs event counts OK.
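For the record, the test was something along these lines (illustrative
only -- fp_ret_sse_avx_ops.all is the FLOPs event name I'd use on this
family 17h hardware; substitute as appropriate):

	perf stat -e instructions,instructions,instructions,instructions,instructions,instructions,fp_ret_sse_avx_ops.all -a sleep 1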
My original patch, rebased onto the same code your perf/amd branch is based on, does not fail this test.
If I re-add the perf_assign_events() changes you removed, the problem does not go away.
If I undo re-adding my perf_assign_events() code, and instead re-add the "not already used" code you removed - see [*] above - the problem DOES go away, and all the counts are accurate.
One problem I see with your change in the "not already used" fastpath area is that the new mask variable gets updated with position 'i' regardless of any previous Large Increment event assignments. I.e., a successfully scheduled Large Increment event assignment may have already consumed that 'i' slot for its Merge event in a previous iteration of the loop. So if the fastpath scheduler fails to assign that following event, the slow path is wrongly entered, due to a wrong comparison of 'i' with 'n', for example.
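To illustrate, the occupancy test has to consider the Merge counter's
bit, too. A minimal mask-based sketch, assuming a u64 used_mask and the
is_large_inc() helper from my patch:

	u64 mask = BIT_ULL(hwc->idx);

	if (is_large_inc(hwc))
		mask |= mask << 1;	/* the pair also occupies idx + 1 */

	/* not already used */
	if (used_mask & mask)
		break;

	used_mask |= mask;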
Btw, thanks for adding the comment pointing out perf_sched_restore_state is incompatible with Large Increment events.
I'd just as soon put my original "not already used" code back in, but if it's still unwanted, let me know how you'd like to proceed...
Kim