Re: single aio thread is migrated crazily by scheduler
From: Phil Auld
Date: Thu Nov 21 2019 - 09:12:27 EST
On Thu, Nov 21, 2019 at 12:12:18PM +0800 Ming Lei wrote:
> On Wed, Nov 20, 2019 at 05:03:13PM -0500, Phil Auld wrote:
> > Hi Peter,
> >
> > On Wed, Nov 20, 2019 at 08:16:36PM +0100 Peter Zijlstra wrote:
> > > On Tue, Nov 19, 2019 at 07:40:54AM +1100, Dave Chinner wrote:
> > > > On Mon, Nov 18, 2019 at 10:21:21AM +0100, Peter Zijlstra wrote:
> > >
> > > > > We typically only fall back to the active balancer when there is
> > > > > (persistent) imbalance and we fail to migrate anything else (of
> > > > > substance).
> > > > >
> > > > > The tuning mentioned has the effect of less frequent scheduling, IOW,
> > > > > leaving (short) tasks on the runqueue longer. This obviously means the
> > > > > load-balancer will have a bigger chance of seeing them.
> > > > >
> > > > > Now; it's been a while since I looked at the workqueue code but one
> > > > > possible explanation would be if the kworker that picks up the work item
> > > > > is pinned. That would make it runnable but not migratable, the exact
> > > > > situation in which we'll end up shooting the current task with active
> > > > > balance.
> > > >
> > > > Yes, that's precisely the problem - work is queued, by default, on a
> > > > specific CPU and it will wait for a kworker that is pinned to that
> > > > specific CPU to dispatch it.
> > >
> > > I'm thinking the problem is that it doesn't wait. If it went and waited
> > > for it, active balance wouldn't be needed; that only works on active
> > > tasks.
> >
> > Since this is AIO, I wonder if it should queue_work on a nearby CPU by
> > default instead of unbound.
>
> When the current CPU isn't busy enough, there is still a cost to
> completing the request remotely.
>
> Or could we change queue_work() in the following way?
>
> * We try to queue the work to the CPU on which it was submitted, but if the
> * CPU dies or is saturated enough it can be processed by another CPU.
>
> Can we decide in a simple and efficient way whether the current CPU is
> saturated enough?
>
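A crude version of that check might look something like the sketch below
(untested). idle_cpu() and queue_work_on() are existing helpers, but
treating "CPU not idle" as "saturated" is a big assumption, and
pick_nearby_cpu() is a made-up placeholder for whatever selection policy
we'd actually want:

	/*
	 * Sketch only: queue locally unless the submitting CPU already
	 * has other runnable work, otherwise punt to a nearby CPU.
	 */
	static bool queue_work_local_or_nearby(struct workqueue_struct *wq,
					       struct work_struct *work)
	{
		int cpu = raw_smp_processor_id();

		if (!idle_cpu(cpu))
			cpu = pick_nearby_cpu(cpu);	/* hypothetical */

		return queue_work_on(cpu, wq, work);
	}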
The deeper problem is that the scheduler doesn't know whether the
queue_work() submitter is going to go to sleep. That's why I was singling
out AIO: my understanding is that you submit the IO and then keep going.
So in that case it might be better to pick a nearby, node-local CPU
instead. But this is a workqueue-user issue, not a scheduler issue.
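Something along these lines, say (a rough sketch only; cpumask_of_node(),
cpumask_any_and() and queue_work_on() all exist today, but picking "any
online CPU in the submitter's node" is just one possible policy):

	static bool queue_work_node_local(struct workqueue_struct *wq,
					  struct work_struct *work)
	{
		/* Any online CPU in the submitting task's NUMA node. */
		int cpu = cpumask_any_and(cpumask_of_node(numa_node_id()),
					  cpu_online_mask);

		if (cpu >= nr_cpu_ids)
			cpu = WORK_CPU_UNBOUND;	/* fall back to default */

		return queue_work_on(cpu, wq, work);
	}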
Interestingly, in our fio testing the 4k case does not sleep, so we hit
the active-balance path, which moves the actually running thread. The
512-byte case appears to sleep, since the migrations all happen at wakeup
time, I believe.
Cheers,
Phil
> Thanks,
> Ming