Re: [PATCH v2 0/3] newidle_balance() PREEMPT_RT latency mitigations
From: Scott Wood
Date: Mon May 03 2021 - 12:33:58 EST
On Sun, 2021-05-02 at 05:25 +0200, Mike Galbraith wrote:
> On Sat, 2021-05-01 at 17:03 -0500, Scott Wood wrote:
> > On Thu, 2021-04-29 at 09:12 +0200, Vincent Guittot wrote:
> > > Hi Scott,
> > >
> > > On Thu, 29 Apr 2021 at 01:28, Scott Wood <swood@xxxxxxxxxx> wrote:
> > > > These patches mitigate latency caused by newidle_balance() on large
> > > > systems when PREEMPT_RT is enabled, by enabling interrupts when the
> > > > lock
> > > > is dropped, and exiting early at various points if an RT task is
> > > > runnable
> > > > on the current CPU.
> > > >
> > > > On a system with 128 CPUs, these patches dropped latency (as
> > > > measured by
> > > > a 12 hour rteval run) from 1045us to 317us (when applied to
> > > > 5.12.0-rc3-rt3).
> > >
> > > The patch below has been queued for v5.13; it removes the update of
> > > blocked load, which seemed to be the major reason for long preempt/irq
> > > off periods during newly idle balance:
> > > https://lore.kernel.org/lkml/20210224133007.28644-1-vincent.guittot@xxxxxxxxxx/
> > >
> > > I would be curious to see how it impacts your cases
> >
> > I still get 1000+ us latencies with those patches applied.
>
> If NEWIDLE balancing migrates one task, how does that manage to consume
> a full *millisecond*, and why would that only be a problem for RT?
>
> -Mike
>
> (rt tasks don't play !rt balancer here, if CPU goes idle, tough titty)
Determining which task to pull is apparently taking that long (again, this
is on a 128-CPU system). RT is singled out because that is the config that
makes significant tradeoffs to keep latencies down (I expect this would be
far from the only possible 1ms+ latency on a non-RT kernel), and there was
concern about the overhead of a double context switch when pulling a task to
a newidle CPU.
-Scott