[tip:sched/urgent] sched/clock: Fix broken stable to unstable transfer

From: tip-bot for Pavel Tatashin
Date: Mon Mar 27 2017 - 06:31:07 EST


Commit-ID: 7b09cc5a9debc86c903c2eff8f8a1fdef773c649
Gitweb: http://git.kernel.org/tip/7b09cc5a9debc86c903c2eff8f8a1fdef773c649
Author: Pavel Tatashin <pasha.tatashin@xxxxxxxxxx>
AuthorDate: Wed, 22 Mar 2017 16:24:17 -0400
Committer: Ingo Molnar <mingo@xxxxxxxxxx>
CommitDate: Mon, 27 Mar 2017 10:23:48 +0200

sched/clock: Fix broken stable to unstable transfer

When the clock is determined to actually be unstable and we switch from
stable to unstable, the __clear_sched_clock_stable() function is
eventually called.

In this function we set gtod_offset so the following holds true:

sched_clock() + raw_offset == ktime_get_ns() + gtod_offset

But instead of getting the latest timestamps, we use the last values
from scd, so instead of sched_clock() we use scd->tick_raw, and
instead of ktime_get_ns() we use scd->tick_gtod.
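
In terms of those scd values, the offset computation amounts to the
following (a sketch using the changelog's names; in the tree gtod_offset
is the __gtod_offset seen in the hunk below):

  /* at the last tick: tick_raw + raw_offset == tick_gtod + gtod_offset */
  gtod_offset = (scd->tick_raw + raw_offset) - scd->tick_gtod;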

However, later, in sched_clock_local(), we do not add gtod_offset to
scd->tick_gtod when we determine the min/max clock boundaries, so the
computed clock value gets clamped against incorrect bounds.

This can result in tick-granularity sched_clock() values, so fix it.
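
To illustrate with made-up numbers (HZ=250, so TICK_NSEC is roughly
4,000,000 ns), take scd->tick_gtod = 1,000,000,000, __gtod_offset =
500,000,000 and delta = 2,500,000, and assume old_clock is no larger
than these bounds:

  clock     = tick_gtod + gtod_offset + delta       = 1,502,500,000
  max_clock = tick_gtod + TICK_NSEC                 = 1,004,000,000   (before)
  max_clock = tick_gtod + gtod_offset + TICK_NSEC   = 1,504,000,000   (after)

With the old bound every reading gets clamped down to tick_gtod +
TICK_NSEC, which only advances once per tick; with gtod_offset included
the computed clock passes through the clamp unchanged.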

Signed-off-by: Pavel Tatashin <pasha.tatashin@xxxxxxxxxx>
Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
Cc: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: hpa@xxxxxxxxx
Fixes: 5680d8094ffa ("sched/clock: Provide better clock continuity")
Link: http://lkml.kernel.org/r/1490214265-899964-2-git-send-email-pasha.tatashin@xxxxxxxxxx
Signed-off-by: Ingo Molnar <mingo@xxxxxxxxxx>
---
kernel/sched/clock.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/clock.c b/kernel/sched/clock.c
index 24a3e01..00a45c4 100644
--- a/kernel/sched/clock.c
+++ b/kernel/sched/clock.c
@@ -221,7 +221,7 @@ static inline u64 wrap_max(u64 x, u64 y)
  */
 static u64 sched_clock_local(struct sched_clock_data *scd)
 {
-	u64 now, clock, old_clock, min_clock, max_clock;
+	u64 now, clock, old_clock, min_clock, max_clock, gtod;
 	s64 delta;
 
 again:
@@ -238,9 +238,10 @@ again:
 	 *		      scd->tick_gtod + TICK_NSEC);
 	 */
 
-	clock = scd->tick_gtod + __gtod_offset + delta;
-	min_clock = wrap_max(scd->tick_gtod, old_clock);
-	max_clock = wrap_max(old_clock, scd->tick_gtod + TICK_NSEC);
+	gtod = scd->tick_gtod + __gtod_offset;
+	clock = gtod + delta;
+	min_clock = wrap_max(gtod, old_clock);
+	max_clock = wrap_max(old_clock, gtod + TICK_NSEC);
 
 	clock = wrap_max(clock, min_clock);
 	clock = wrap_min(clock, max_clock);