Re: [PATCH/RFC] workqueue: allow rescuer thread to do more work.
From: Dongsu Park
Date: Thu Nov 06 2014 - 11:58:24 EST
Hi Tejun & Neil,
On 04.11.2014 09:22, Tejun Heo wrote:
> On Thu, Oct 30, 2014 at 10:19:32AM +1100, NeilBrown wrote:
> > > Given that worker depletion is pool-wide
> > > event, maybe it'd make sense to trigger rescuers immediately while
> > > workers are in short supply? e.g. while there's a manager stuck in
> > > maybe_create_worker() with the mayday timer already triggered?
> >
> > So what if I change "need_more_worker" to "need_to_create_worker" ?
> > Then it will stop as soon as there is an idle worker thread.
> > That is the condition that keeps maybe_create_worker() looping.
> > ??
>
> Yeah, that'd be a better condition and can work out. Can you please
> write up a patch to do that and do some synthetic tests exercising
> the code path? Also please cc Lai Jiangshan <laijs@xxxxxxxxxxxxxx>
> when posting the patch.
This issue looks exactly like what I've encountered occasionally in our test
setup. (with a custom kernel based on 3.12, MD/raid1, dm-multipath, etc.)
When a system suffers from high memory pressure, and at the same time
underlying devices of RAID arrays are repeatedly removed and re-added,
then sometimes the whole system gets locked up on a worker pool's lock.
So I had to fix our custom MD code to allocate a separate ordered workqueue
with WQ_MEM_RECLAIM, apart from md_wq or md_misc_wq.
Then the lockup seemed to have disappeared.
Now that I've read Neil's patch, it looks like the ultimate solution
to the problem I've been seeing. I'm really looking forward to seeing
this change in mainline.
How about the attached patch? Based on Neil's patch, I replaced
need_more_worker() with need_to_create_worker() as Tejun suggested.
A test is running with this patch, and it seems to be working so far,
but I'm going to observe the results carefully for a few more days.
Regards,
Dongsu
----