Re: [PATCH] workqueue: warn if memory reclaim tries to flush !WQ_MEM_RECLAIM workqueue
From: Tejun Heo
Date: Thu Dec 03 2015 - 17:04:14 EST
Hello, Peter.
On Thu, Dec 03, 2015 at 10:09:11PM +0100, Peter Zijlstra wrote:
> On Thu, Dec 03, 2015 at 03:56:32PM -0500, Tejun Heo wrote:
> > So, if I'm not mistaken, those are all marking tasks which can be
> > depended upon during memory reclaim and we do want to catch them all.
>
> Up to a point, yes: these are things that want to be reliable during
> reclaim, but lacking memory reserves and usage bounds (which we last
> discussed at LSF/MM) they are just wanna-bes.
Hmmm... even if they're buggy in that they can't guarantee forward
progress even with access to the emergency pool, I think it still makes
sense to warn when they create an extra dependency which doesn't have
access to that pool.
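
For reference, the check being proposed is roughly the following (a
minimal sketch, not the exact patch; the helper name and warning text
are illustrative):

    static void check_flush_dependency(struct workqueue_struct *target_wq)
    {
            /* Flushing a WQ_MEM_RECLAIM wq adds no new dependency: it
             * has a rescuer and can make forward progress without
             * allocating memory. */
            if (target_wq->flags & WQ_MEM_RECLAIM)
                    return;

            /* A PF_MEMALLOC task is on the reclaim path; waiting on a
             * !WQ_MEM_RECLAIM wq makes reclaim depend on worker
             * creation, which has no access to the emergency pool. */
            WARN_ONCE(current->flags & PF_MEMALLOC,
                      "workqueue: PF_MEMALLOC task %d(%s) is flushing !WQ_MEM_RECLAIM %s\n",
                      task_pid_nr(current), current->comm, target_wq->name);
    }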
> > PF_MEMALLOC shouldn't depend on something which requires memory to be
> > reclaimed to guarantee forward progress.
>
> PF_MEMALLOC basically avoids reclaim for any memory allocation while
> it's set.
So, the assumption is that they're already on the reclaim path and
thus shouldn't recurse into it again.
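
For context, this is the shape of it, simplified from mm/page_alloc.c
(surrounding logic elided):

    /* In the allocation slowpath: a PF_MEMALLOC task may dip into the
     * reserves but must never recurse into direct reclaim. */
    if (current->flags & PF_MEMALLOC)
            goto nopage;

    /* ... while direct reclaim itself brackets its work with: */
    current->flags |= PF_MEMALLOC;
    progress = try_to_free_pages(zonelist, order, gfp_mask, nodemask);
    current->flags &= ~PF_MEMALLOC;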
> The thing is, even if your workqueue has WQ_MEM_RECLAIM set, it will not
> hit the mayday button until you're completely flat out of memory.
It's more trigger-happy than that. It's timer-based. If a new worker
can't be created within a certain amount of time, for whatever reason,
it'll summon the rescuer.
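
Roughly, simplified from kernel/workqueue.c (locking elided):

    /* maybe_create_worker() arms the mayday timer before trying to
     * fork a new worker; MAYDAY_INITIAL_TIMEOUT is 10ms. */
    mod_timer(&pool->mayday_timer, jiffies + MAYDAY_INITIAL_TIMEOUT);

    /* If creation still hasn't succeeded when the timer fires, every
     * pending work item belonging to a WQ_MEM_RECLAIM wq gets a
     * mayday, which wakes that wq's rescuer. */
    static void pool_mayday_timeout(unsigned long __pool)
    {
            struct worker_pool *pool = (void *)__pool;
            struct work_struct *work;

            if (need_to_create_worker(pool))
                    list_for_each_entry(work, &pool->worklist, entry)
                            send_mayday(work);

            mod_timer(&pool->mayday_timer, jiffies + MAYDAY_INTERVAL);
    }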
> At which point you're probably boned anyway, because, as per the above,
> all that code assumes there's _some_ memory to be had.
Not really. PF_MEMALLOC tasks have access to the emergency pool;
creating new workers doesn't, so this really is creating a dependency
which is qualitatively different.
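
To illustrate why, a sketch loosely following create_worker() (naming
and error handling simplified):

    static struct worker *create_worker(struct worker_pool *pool)
    {
            struct worker *worker;

            /* Runs in a kworker (manager) context, not in the
             * PF_MEMALLOC flusher's context, so this is an ordinary
             * GFP_KERNEL allocation with no claim on the reserves. */
            worker = kzalloc(sizeof(*worker), GFP_KERNEL);
            if (!worker)
                    return NULL;

            /* The thread itself is forked by kthreadd, yet another
             * task without PF_MEMALLOC; under memory pressure this
             * can block for a long time. */
            worker->task = kthread_create_on_node(worker_thread, worker,
                                                  pool->node, "kworker/sketch");
            /* attach-to-pool and wakeup details elided */
            return worker;
    }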
> One solution is to always fail maybe_create_worker() when PF_MEMALLOC is
> set, thus always hitting the mayday button.
I'm not following. When PF_MEMALLOC is set where?
Thanks.
--
tejun