Re: [PATCH] x86/vdso: implement clock_gettime(CLOCK_MONOTONIC_RAW, ...)
From: Thomas Gleixner
Date: Tue May 28 2019 - 14:05:16 EST
Alexander,
On Tue, 28 May 2019, Sverdlin, Alexander (Nokia - DE/Ulm) wrote:
> Add CLOCK_MONOTONIC_RAW to the existing clock_gettime() vDSO
> implementation. This is based on the ideas of Jason Vas Dias and comments
> of Thomas Gleixner.
Well to some extent, but
> The results from the above test program:
>
> Clock                   Before        After     Diff
> -----                   ------        -----     ----
> CLOCK_MONOTONIC      355531720    338039373      -5%
> CLOCK_MONOTONIC_RAW   44831738    336064912    +650%
> CLOCK_REALTIME       347665133    338102907      -3%
the performance regressions for CLOCK_MONOTONIC and CLOCK_REALTIME are just
not acceptable.
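(For reference: the numbers above come from a call-rate test. The actual test
program is not part of this mail, so the shape below is only a guess at what
such a measurement looks like; the iteration count and output format are
assumptions, not the original code.)

#include <stdio.h>
#include <time.h>

/* Hypothetical reconstruction of a clock_gettime() throughput test:
 * time a fixed number of calls and report calls per second. */
#define LOOPS 100000000ULL	/* assumed iteration count */

static double calls_per_sec(clockid_t clk)
{
	struct timespec ts, start, end;
	unsigned long long i;

	clock_gettime(CLOCK_MONOTONIC, &start);
	for (i = 0; i < LOOPS; i++)
		clock_gettime(clk, &ts);
	clock_gettime(CLOCK_MONOTONIC, &end);

	return LOOPS / ((end.tv_sec - start.tv_sec) +
			(end.tv_nsec - start.tv_nsec) / 1e9);
}

int main(void)
{
	printf("CLOCK_MONOTONIC      %.0f\n", calls_per_sec(CLOCK_MONOTONIC));
	printf("CLOCK_MONOTONIC_RAW  %.0f\n", calls_per_sec(CLOCK_MONOTONIC_RAW));
	printf("CLOCK_REALTIME       %.0f\n", calls_per_sec(CLOCK_REALTIME));
	return 0;
}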
> diff --git a/arch/x86/entry/vdso/vclock_gettime.c b/arch/x86/entry/vdso/vclock_gettime.c
> index 98c7d12..7c9db26 100644
> --- a/arch/x86/entry/vdso/vclock_gettime.c
> +++ b/arch/x86/entry/vdso/vclock_gettime.c
> @@ -144,6 +144,13 @@ notrace static int do_hres(clockid_t clk, struct timespec *ts)
>  	struct vgtod_ts *base = &gtod->basetime[clk];
>  	u64 cycles, last, sec, ns;
>  	unsigned int seq;
> +	u32 mult = gtod->mult;
> +	u32 shift = gtod->shift;
> +
> +	if (clk == CLOCK_MONOTONIC_RAW) {
> +		mult = gtod->raw_mult;
> +		shift = gtod->raw_shift;
> +	}
They stem from this extra conditional.
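Just to illustrate the direction (a sketch only; the layout change and the
field placement below are my assumption, not a worked-out patch): one way to
keep that branch out of the hot path is to store mult/shift per clock right
next to the base time, so do_hres() merely indexes them:

/*
 * Hypothetical extension of the per-clock readout data: mult/shift live
 * next to the base time, so the clock id alone selects the right pair
 * and the hot path needs no CLOCK_MONOTONIC_RAW conditional.
 */
struct vgtod_ts {
	u64	sec;
	u64	nsec;
	u32	mult;	/* raw_mult for CLOCK_MONOTONIC_RAW, mult otherwise */
	u32	shift;	/* raw_shift for CLOCK_MONOTONIC_RAW, shift otherwise */
};

The update side would then have to fill these per-clock fields on every
timekeeping update; that part is not sketched here.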
>
>  	do {
>  		seq = gtod_read_begin(gtod);
> @@ -153,8 +160,8 @@ notrace static int do_hres(clockid_t clk, struct timespec *ts)
>  		if (unlikely((s64)cycles < 0))
>  			return vdso_fallback_gettime(clk, ts);
>  		if (cycles > last)
> -			ns += (cycles - last) * gtod->mult;
> -		ns >>= gtod->shift;
> +			ns += (cycles - last) * mult;
> +		ns >>= shift;
And this is completely broken because you read the mult/shift values
outside of the sequence count protected region, so a concurrent update of
the timekeeping values will lead to inconsistent state.
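Whatever values end up being used, they have to be read between
gtod_read_begin() and gtod_read_retry(). Roughly (a sketch only, building on
the hypothetical per-clock mult/shift above; the surrounding lines follow the
existing do_hres() loop):

	do {
		seq = gtod_read_begin(gtod);
		cycles = vgetcyc(gtod->vclock_mode);
		ns = base->nsec;
		/*
		 * Read inside the seqcount protected region: a concurrent
		 * timekeeping update forces a retry instead of a torn
		 * mult/shift pair.
		 */
		mult = base->mult;
		shift = base->shift;
		last = gtod->cycle_last;
		if (unlikely((s64)cycles < 0))
			return vdso_fallback_gettime(clk, ts);
		if (cycles > last)
			ns += (cycles - last) * mult;
		ns >>= shift;
		sec = base->sec;
	} while (unlikely(gtod_read_retry(gtod, seq)));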
Thanks,
tglx