Re: locking/csd-lock: Switch from sched_clock() to ktime_get_mono_fast_ns()
From: Paul E. McKenney
Date: Fri Oct 11 2024 - 13:43:46 EST
On Thu, Oct 10, 2024 at 11:31:08AM -0700, Paul E. McKenney wrote:
> On Thu, Oct 10, 2024 at 11:13:35AM -0400, Rik van Riel wrote:
> > On Wed, 2024-10-09 at 10:57 -0700, Paul E. McKenney wrote:
> > > Currently, the CONFIG_CSD_LOCK_WAIT_DEBUG code uses sched_clock()
> > > to check for excessive CSD-lock wait times. This works, but does not
> > > guarantee monotonic timestamps. Therefore, switch from sched_clock()
> > > to ktime_get_mono_fast_ns(), which does guarantee monotonic
> > > timestamps, at least in the absence of calls from NMI handlers,
> > > which are not involved in this code path.
> > >
> > > Signed-off-by: Paul E. McKenney <paulmck@xxxxxxxxxx>
> > > Cc: Neeraj Upadhyay <neeraj.upadhyay@xxxxxxxxxx>
> > > Cc: Rik van Riel <riel@xxxxxxxxxxx>
> > > Cc: Leonardo Bras <leobras@xxxxxxxxxx>
> > > Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
> > > Cc: "Peter Zijlstra (Intel)" <peterz@xxxxxxxxxxxxx>
> >
> > The functional change here is that sched_clock() uses rdtsc(),
> > while ktime_get_mono_fast_ns() uses rdtsc_ordered().
> >
> > I don't know if sched_clock() should also be using rdtsc_ordered().
> >
> > Reviewed-by: Rik van Riel <riel@xxxxxxxxxxx>
>
> Thank you! I will apply this on my next rebase.
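For anyone wanting to see the functional change concretely, here is a
minimal user-space sketch of the ordering difference (an illustration
only, not the kernel's sched_clock() or rdtsc_ordered() implementation,
which uses alternatives to pick the right barrier for the CPU at hand):

#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>

/* Plain RDTSC: the CPU may execute this read out of order. */
static inline uint64_t tsc_unordered(void)
{
	return __rdtsc();
}

/* LFENCE before RDTSC: the read cannot be hoisted past prior work. */
static inline uint64_t tsc_ordered(void)
{
	_mm_lfence();
	return __rdtsc();
}

int main(void)
{
	uint64_t a0 = tsc_unordered();
	uint64_t a1 = tsc_unordered();
	uint64_t b0 = tsc_ordered();
	uint64_t b1 = tsc_ordered();

	printf("unordered delta: %llu cycles\n", (unsigned long long)(a1 - a0));
	printf("ordered delta:   %llu cycles\n", (unsigned long long)(b1 - b0));
	return 0;
}

Whether sched_clock() should pay for that fence on its fast path is,
as Rik says, a separate question.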
And please see below for the updated version.
Thanx, Paul
------------------------------------------------------------------------
locking/csd-lock: Switch from sched_clock() to ktime_get_mono_fast_ns()
Currently, the CONFIG_CSD_LOCK_WAIT_DEBUG code uses sched_clock() to check
for excessive CSD-lock wait times. This works, but does not guarantee
monotonic timestamps on x86 due to the sched_clock() function's use of
the rdtsc instruction, which does not guarantee ordering. This means
that, given successive calls to sched_clock(), the second might return
an earlier time than the first, that is, time might seem to go backwards.
This can (and does!) result in false-positive CSD-lock wait complaints
claiming almost 2^64 nanoseconds of delay.
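The near-2^64 number is simply u64 subtraction wrapping when the second
timestamp is the smaller one, as in this user-space sketch (made-up
timestamp values, for illustration only):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Hypothetical timestamps: the second read came back 100ns "earlier". */
	uint64_t ts1 = 1000000100;
	uint64_t ts2 = 1000000000;

	/* u64 subtraction wraps to just under 2^64 ns (roughly 584 years). */
	printf("apparent delay: %llu ns\n",
	       (unsigned long long)(ts2 - ts1));
	return 0;
}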
Therefore, switch from sched_clock() to ktime_get_mono_fast_ns(), which
does guarantee monotonic timestamps via the rdtsc_ordered() function,
which, as the name implies, guarantees ordered timestamps, at least
in the absence of calls from NMI handlers, which are not involved in
this code path.
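For reference, the resulting pattern looks like the following
user-space sketch, with clock_gettime(CLOCK_MONOTONIC) standing in for
ktime_get_mono_fast_ns() and an assumed 5-second timeout (an
illustration of the technique, not the patch itself):

#include <stdint.h>
#include <stdio.h>
#include <time.h>

/* User-space stand-in for ktime_get_mono_fast_ns(). */
static uint64_t mono_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec;
}

int main(void)
{
	const uint64_t timeout_ns = 5ULL * 1000 * 1000 * 1000;
	uint64_t ts0 = mono_ns();

	for (;;) {
		/* Monotonic source: this delta cannot wrap to ~2^64. */
		uint64_t ts_delta = mono_ns() - ts0;

		if (ts_delta > timeout_ns) {
			printf("waited too long: %llu ns\n",
			       (unsigned long long)ts_delta);
			break;
		}
	}
	return 0;
}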
Signed-off-by: Paul E. McKenney <paulmck@xxxxxxxxxx>
Reviewed-by: Rik van Riel <riel@xxxxxxxxxxx>
Cc: Neeraj Upadhyay <neeraj.upadhyay@xxxxxxxxxx>
Cc: Leonardo Bras <leobras@xxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: "Peter Zijlstra (Intel)" <peterz@xxxxxxxxxxxxx>
diff --git a/kernel/smp.c b/kernel/smp.c
index f25e20617b7eb..27dc31a146a35 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -246,7 +246,7 @@ static bool csd_lock_wait_toolong(call_single_data_t *csd, u64 ts0, u64 *ts1, in
 		return true;
 	}
 
-	ts2 = sched_clock();
+	ts2 = ktime_get_mono_fast_ns();
 	/* How long since we last checked for a stuck CSD lock.*/
 	ts_delta = ts2 - *ts1;
 	if (likely(ts_delta <= csd_lock_timeout_ns * (*nmessages + 1) *
@@ -321,7 +321,7 @@ static void __csd_lock_wait(call_single_data_t *csd)
 	int bug_id = 0;
 	u64 ts0, ts1;
 
-	ts1 = ts0 = sched_clock();
+	ts1 = ts0 = ktime_get_mono_fast_ns();
 	for (;;) {
 		if (csd_lock_wait_toolong(csd, ts0, &ts1, &bug_id, &nmessages))
 			break;