Re: [PATCH] mm: allow exiting processes to exceed the memory.max limit
From: Johannes Weiner
Date: Thu Dec 12 2024 - 15:45:29 EST
On Mon, Dec 09, 2024 at 07:08:19PM +0100, Michal Hocko wrote:
> On Mon 09-12-24 12:42:33, Rik van Riel wrote:
> > It is possible for programs to get stuck in exit, when their
> > memcg is at or above the memory.max limit, and things like
> > the do_futex() call from mm_release() need to page memory in.
> >
> > This can hang forever, but it really doesn't have to.
>
> Are you sure this is really happening?
>
> >
> > The amount of memory that the exit path will page into memory
> > should be relatively small, and letting exit proceed faster
> > will free up memory faster.
> >
> > Allow PF_EXITING tasks to bypass the cgroup memory.max limit
> > the same way PF_MEMALLOC already does.
> >
> > Signed-off-by: Rik van Riel <riel@xxxxxxxxxxx>
> > ---
> > mm/memcontrol.c | 9 +++++----
> > 1 file changed, 5 insertions(+), 4 deletions(-)
> >
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index 7b3503d12aaf..d1abef1138ff 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -2218,11 +2218,12 @@ int try_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp_mask,
> >
> > /*
> > * Prevent unbounded recursion when reclaim operations need to
> > - * allocate memory. This might exceed the limits temporarily,
> > - * but we prefer facilitating memory reclaim and getting back
> > - * under the limit over triggering OOM kills in these cases.
> > + * allocate memory, or the process is exiting. This might exceed
> > + * the limits temporarily, but we prefer facilitating memory reclaim
> > + * and getting back under the limit over triggering OOM kills in
> > + * these cases.
> > */
> > - if (unlikely(current->flags & PF_MEMALLOC))
> > + if (unlikely(current->flags & (PF_MEMALLOC | PF_EXITING)))
> > goto force;
>
> We already have task_is_dying() bail out. Why is that insufficient?
Note that the current task_is_dying() check goes to nomem, which
fails the charge and causes the fault to simply retry. It doesn't
actually make forward progress.
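For context, the relevant flow in try_charge_memcg() is roughly the
following (paraphrased from memory, details vary between kernel
versions):

	if (unlikely(task_is_dying()))
		goto nomem;
	...
nomem:
	if (!(gfp_mask & (__GFP_NOFAIL | __GFP_HIGH)))
		return -ENOMEM;
force:
	/* bypass the limit: charge the counters and report success */
	page_counter_charge(&memcg->memory, nr_pages);
	if (do_memsw_account())
		page_counter_charge(&memcg->memsw, nr_pages);
	return 0;

with task_is_dying() being essentially tsk_is_oom_victim(current) ||
fatal_signal_pending(current) || (current->flags & PF_EXITING). Only
the force label lets the caller proceed over the limit; nomem just
hands -ENOMEM back to the fault path.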
> It is currently only hitting when an OOM situation has already been
> triggered, while your patch triggers the bypass much earlier. We used
> to do that in the past, but this got changed by a4ebf1b6ca1e ("memcg:
> prohibit unconditional exceeding the limit of dying tasks"). I believe
> the situation in vmalloc has changed since then, but I suspect the
> fundamental problem that dying tasks could allocate a lot of memory
> remains.
Before that patch, *every* exiting task was allowed to bypass. That
doesn't seem right, either. But IMO that commit then threw the baby
out with the bathwater; at least the OOM victim needs to make
progress.
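To illustrate the shape of what I mean (this is not the actual patch
from the other thread), an ordering along the lines of

	if (unlikely(tsk_is_oom_victim(current)))
		goto force;
	if (unlikely(task_is_dying()))
		goto nomem;

would let the OOM victim force the charge and get on with exiting,
while other dying tasks keep going through regular limit enforcement.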
> There is still this
> : It has been observed that it is not really hard to trigger these
> : bypasses and cause global OOM situation.
> that really needs to be re-evaluated.
This is quite vague, yeah. And it's not clear whether a single task
was doing this, or whether a large number of concurrently exiting
tasks were all allowed to bypass without even trying reclaim. I'm
guessing the latter, simply because OOM victims *are* allowed to tap
into the page_alloc reserves; we'd have seen deadlocks already if a
single task's exit-path vmalloc activity could blow the lid on those.
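That exception lives in the allocator's __gfp_pfmemalloc_flags() and
oom_reserves_allowed(), roughly (again from memory):

	if (!in_interrupt()) {
		if (current->flags & PF_MEMALLOC)
			return ALLOC_NO_WATERMARKS;
		else if (oom_reserves_allowed(current))	/* tsk_is_oom_victim() */
			return ALLOC_OOM;
	}

i.e. only actual OOM victims get the partial reserve access, and not
every PF_EXITING task.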
I sent a patch in the other thread; we should discuss it over there.
I just wanted to address those two points made here.