Re: [PATCH] x86/xen: Add "xen_timer_slop" command line option

From: Dario Faggioli
Date: Tue Mar 26 2019 - 05:13:47 EST

On Mon, 2019-03-25 at 09:43 -0400, Boris Ostrovsky wrote:
> On 3/25/19 8:05 AM, luca abeni wrote:
> >
> > The picture shows the latencies measured with an unpatched guest
> > kernel and with a guest kernel having TIMER_SLOP set to 1000
> > (arbitrary small value :).
> > All the experiments have been performed booting the hypervisor
> > with a small timer_slop (the hypervisor's one) value. So, they
> > show that decreasing the hypervisor's timer_slop is not enough to
> > measure low latencies with cyclictest.
> I have a couple of questions:
> * Does it make sense to make this a tunable for other clockevent
>   devices as well?
So, AFAIUI, the thing is as follows. In clockevents_program_event(), we
keep the delta between now and the next timer event within
dev->max_delta_ns and dev->min_delta_ns:

delta = min(delta, (int64_t) dev->max_delta_ns);
delta = max(delta, (int64_t) dev->min_delta_ns);

For Xen (well, for the Xen clock) we have:

.max_delta_ns = 0xffffffff,
.min_delta_ns = TIMER_SLOP,

which means a guest can't ask for a timer to fire earlier than 100us
ahead, which is a bit too coarse, especially on contemporary hardware.

For "lapic_deadline" (which was what was in use in KVM guests, in our
experiments) we have:

lapic_clockevent.max_delta_ns = clockevent_delta2ns(0x7FFFFF, &lapic_clockevent);
lapic_clockevent.min_delta_ns = clockevent_delta2ns(0xF, &lapic_clockevent);

Which means max is 0x7FFFFF device ticks, and min is 0xF.
clockevent_delta2ns() does the conversion from ticks to ns, based on
the results of the APIC calibration process. It calls cev_delta2ns(),
which does some scaling, shifting, division, etc., and, at the very end:

/* Deltas less than 1usec are pointless noise */
return clc > 1000 ? clc : 1000;

So, as Ryan is also saying, the actual minimum, in this case, depends
on hardware, with a sanity check of "never below 1us" (which is quite
a bit smaller than 100us!)

Of course, the actual granularity depends on hardware in the Xen case
as well, but that is handled in Xen itself. And we have mechanisms in
place there to avoid timer interrupt storms (like, ahem, Xen's own
'timer_slop' boot parameter... :-P)

And this is basically why I was also thinking we can/should lower the
default value of TIMER_SLOP here, in the Xen clock implementation in
Linux.
> * This patch adjusts min value. Could max value (ever) need a similar
> adjustment?
Well, for Xen, it's already 0xffffffff. I don't see use cases where one
would want a smaller max. Wanting a higher max *might* be of some
interest, e.g., for power management: if the first timer event is 1min
ahead, you don't want to be woken up every (if my math is right) ~4
seconds.

But we'd have to see if that actually works, not to mention that 4 secs
is already large enough, IMHO, that it's unlikely we'll be really
sleeping for that much time without having to wake up for one reason or
another. :-)

<<This happens because I choose it to happen!>> (Raistlin Majere)
Dario Faggioli, Ph.D,
Software Engineer @ SUSE
