[patch] Re: scheduler oddity [bug?]

From: Mike Galbraith
Date: Mon Mar 09 2009 - 04:02:43 EST


On Sun, 2009-03-08 at 18:52 +0100, Ingo Molnar wrote:
> * Mike Galbraith <efault@xxxxxx> wrote:
>
> > On Sun, 2009-03-08 at 16:39 +0100, Ingo Molnar wrote:
> > > * Mike Galbraith <efault@xxxxxx> wrote:
> > >
> > > > The problem with your particular testcase is that while one
> > > > half has an avg_overlap (what we use as affinity hint for
> > > > synchronous wakeups) which triggers the affinity hint, the
> > > > other half has avg_overlap of zero, what it was born with, so
> > > > despite significant execution overlap, the scheduler treats
> > > > them as if they were truly synchronous tasks.
> > >
> > > hm, why does it stay on zero?
> >
> > Wakeup preemption. Presuming here: heavy task wakes light
> > task, is preempted, light task stuffs data into pipe, heavy
> > task doesn't block, so no avg_overlap is ever computed. The
> > heavy task uses 100% CPU.
> >
> > Running as SCHED_BATCH (virgin source), it becomes sane.
>
> ah.
>
> I'd argue then that time spent on the rq preempted _should_
> count in avg_overlap statistics. I.e. couldnt we do something
> like ... your patch? :)
>
> > >  	if (sleep && p->se.last_wakeup) {
> > >  		update_avg(&p->se.avg_overlap,
> > >  			   p->se.sum_exec_runtime - p->se.last_wakeup);
> > >  		p->se.last_wakeup = 0;
> > > -	}
> > > +	} else if (p->se.avg_overlap < limit && runtime >= limit)
> > > +		update_avg(&p->se.avg_overlap, runtime);
>
> Just done unconditionally, i.e. something like:
>
> 	if (sleep) {
> 		runtime = p->se.sum_exec_runtime - p->se.last_wakeup;
> 		p->se.last_wakeup = 0;
> 	} else {
> 		runtime = p->se.sum_exec_runtime - p->se.prev_sum_exec_runtime;
> 	}
>
> 	update_avg(&p->se.avg_overlap, runtime);
>
> ?

OK, I've not seen any problem indications yet, so find patchlet below.
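For the record, update_avg() above and below is nothing fancy, just an
integer running average; from memory (so treat this as a sketch, not a
verbatim quote), the helper in kernel/sched.c looks like so:

/*
 * Move the average 1/8th of the way toward each new sample.
 */
static void update_avg(u64 *avg, u64 sample)
{
        s64 diff = sample - *avg;

        *avg += diff >> 3;
}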

However! Balazs has stated that this problem is _not_ present in .git,
and that..

commit 38736f475071b80b66be28af7b44c854073699cc
Author: Gautham R Shenoy <ego@xxxxxxxxxx>
Date: Sat Sep 6 14:50:23 2008 +0530

..is what fixed it. Willy Tarreau verified this as being the case on
his HW as well. It is present in .git with my HW.

I see it as a problem, but it's your call. Dunno if I'd apply it or
hold back, given these conflicting reports.

Anyway...

Given a task pair communicating via pipe, if one partner fills/drains such
that the other does not block for extended periods, avg_overlap can become
badly stale, and trigger affine wakeups despite heavy CPU demand. This can,
and does, lead to throughput loss in the testcase posted by the reporter.
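For illustration, a bare-bones sketch of such a pair is below (my own
mockup, not Balazs' actual testcase -- see the lkml link further down for
that): the heavy half burns CPU between one-byte writes, the light half
drains fast enough that the heavy half rarely blocks.

/*
 * Illustrative only: heavy/light pipe pair.  The heavy half burns
 * CPU between one-byte writes; the light half drains quickly, so
 * the heavy half rarely blocks and never refreshes avg_overlap.
 */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        int fds[2];
        char buf[1] = { 0 };

        if (pipe(fds) < 0) {
                perror("pipe");
                return 1;
        }

        if (fork() == 0) {
                /* light half: just drain the pipe */
                while (read(fds[0], buf, 1) == 1)
                        ;
                return 0;
        }

        for (;;) {
                volatile unsigned long i;

                /* heavy half: ~100% CPU between tiny writes */
                for (i = 0; i < 100000000UL; i++)
                        ;
                if (write(fds[1], buf, 1) != 1)
                        break;
        }
        return 0;
}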

Fix this by updating avg_overlap unconditionally at dequeue time, instead
of only when a task sleeps.
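For reference, the consumer of the hint is the sync wakeup logic in
kernel/sched_fair.c; paraphrased from memory (the shape of it, not a
verbatim quote), the check is along these lines:

        /*
         * Sketch: treat the wakeup as synchronous only while both
         * tasks' avg_overlap stays below sysctl_sched_migration_cost;
         * a stale (near zero) avg_overlap keeps this test passing.
         */
        if (sync && (curr->se.avg_overlap > sysctl_sched_migration_cost ||
                     p->se.avg_overlap > sysctl_sched_migration_cost))
                sync = 0;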

See http://lkml.org/lkml/2009/3/7/79 for details/testcase.

Reported-by: Balazs Scheidler <bazsi@xxxxxxxxxx>
Signed-off-by: Mike Galbraith <efault@xxxxxx>

 kernel/sched.c |    9 +++++++--
 1 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/kernel/sched.c b/kernel/sched.c
index 8e2558c..c670050 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -1712,12 +1712,17 @@ static void enqueue_task(struct rq *rq, struct task_struct *p, int wakeup)
 
 static void dequeue_task(struct rq *rq, struct task_struct *p, int sleep)
 {
+	u64 runtime;
+
 	if (sleep && p->se.last_wakeup) {
-		update_avg(&p->se.avg_overlap,
-			   p->se.sum_exec_runtime - p->se.last_wakeup);
+		runtime = p->se.sum_exec_runtime - p->se.last_wakeup;
 		p->se.last_wakeup = 0;
+	} else {
+		runtime = p->se.sum_exec_runtime - p->se.prev_sum_exec_runtime;
 	}
 
+	update_avg(&p->se.avg_overlap, runtime);
+
 	sched_info_dequeued(p);
 	p->sched_class->dequeue_task(rq, p, sleep);
 	p->se.on_rq = 0;


