Re: [PATCH v2 0/3] arm64: perf_event: Fix time offset prior to epoch
From: Peter Zijlstra
Date: Tue May 12 2020 - 05:19:41 EST
On Tue, May 12, 2020 at 02:38:12PM +0800, Leo Yan wrote:
> @@ -1165,28 +1166,26 @@ device_initcall(armv8_pmu_driver_init)
> void arch_perf_update_userpage(struct perf_event *event,
> struct perf_event_mmap_page *userpg, u64 now)
> {
> + struct clock_read_data *rd;
> + unsigned int seq;
>
> /*
> * Internal timekeeping for enabled/running/stopped times
> * is always computed with the sched_clock.
> */
> userpg->cap_user_time = 1;
> + userpg->cap_user_time_zero = 1;
>
> + do {
> + rd = sched_clock_read_begin(&seq);
> +
> + userpg->time_mult = rd->mult;
> + userpg->time_shift = rd->shift;
> + userpg->time_zero = rd->epoch_ns;
> +
> + userpg->time_zero -= (rd->epoch_cyc * rd->mult) >> rd->shift;
Damn, I think this is broken vs the counter wrapping.
So what the sched_clock code does is:
cyc_to_ns((cyc - rd->epoch_cyc) & rd->sched_clock_mask, rd->mult, rd->shift)
But because the perf interface assumes a simple linear relation, we
can't express that properly.
Now, your arm64 counter is 56 bits, so wrapping is rare, but still, we
should probably fix that. And that probably needs an ABI extension
*sigh*.
> +
> + } while (sched_clock_read_retry(seq));
> +
> + userpg->time_offset = userpg->time_zero - now;
> }