Re: [RFC]: userspace memory reaping
From: Suren Baghdasaryan
Date: Wed Oct 14 2020 - 12:57:36 EST
On Wed, Oct 14, 2020 at 5:09 AM Michal Hocko <mhocko@xxxxxxxx> wrote:
>
> [Sorry for a late reply]
>
> On Mon 14-09-20 17:45:44, Suren Baghdasaryan wrote:
> > + linux-kernel@xxxxxxxxxxxxxxx
> >
> > On Mon, Sep 14, 2020 at 5:43 PM Suren Baghdasaryan <surenb@xxxxxxxxxx> wrote:
> > >
> > > Last year I sent an RFC about using oom-reaper while killing a
> > > process: https://patchwork.kernel.org/cover/10894999. During the
> > > LSFMM2019 discussion https://lwn.net/Articles/787217, a couple of
> > > alternative options were discussed, the most promising one (outlined
> > > in the last paragraph of https://lwn.net/Articles/787217) being to
> > > use a remote version of the madvise(MADV_DONTNEED) operation to
> > > force memory reclaim of a killed process. With process_madvise() making its way
> > > through reviews (https://patchwork.kernel.org/patch/11747133/), I
> > > would like to revive this discussion and get feedback on several
> > > possible options, their pros and cons.
>
> Thanks for reviving this!
Thanks for your feedback!
>
> > > The need is similar to why oom-reaper was introduced - when a process
> > > is being killed to free memory we want to make sure memory is freed
> > > even if the victim is in uninterruptible sleep or is busy and its
> > > reaction to SIGKILL is delayed by an unpredictable amount of time. I
> > > experimented with enabling process_madvise(MADV_DONTNEED) operation
> > > and using it to force memory reclaim of the target process after
> > > sending SIGKILL. Unfortunately this approach requires the caller to
> > > read /proc/pid/maps to extract the list of VMAs to pass as an input to
> > > process_madvise().
>
> Well I would argue that this is not really necessary. You can simply
> call process_madvise with the full address range and let the kernel
> operate only on ranges which are safe to tear down asynchronously.
> Sure that would require some changes to the existing code to not fail
> on those ranges if they contain incompatible vmas but that should be
> possible. If we are worried about backward compatibility then a
> dedicated flag could override that behavior.
>
IIUC this is very similar to the last option I proposed. I think it
is doable if we treat it as a special case. The process_madvise()
return value not being able to represent such a large range would
still be a problem. Maybe we can cap it at INT_MAX in those cases?
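For illustration, a caller-side sketch of that full-address-range call
(assuming process_madvise() gains MADV_DONTNEED support and skips
incompatible VMAs instead of failing; capping the return value is the
open question above):

#include <sys/mman.h>		/* MADV_DONTNEED */
#include <sys/syscall.h>	/* __NR_process_madvise */
#include <sys/uio.h>		/* struct iovec */
#include <unistd.h>		/* syscall() */

/* Reap the entire address space of an already-SIGKILLed process.
 * Assumes process_madvise() accepts MADV_DONTNEED for dying tasks
 * and silently skips VMAs that cannot be torn down asynchronously. */
static long reap_mm(int pidfd)
{
	struct iovec whole = {
		.iov_base = NULL,	/* start of the address space */
		.iov_len = ~0UL,	/* "everything" */
	};

	/* the return value would be capped (e.g. at INT_MAX) here */
	return syscall(__NR_process_madvise, pidfd, &whole, 1,
		       MADV_DONTNEED, 0);
}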
> [...]
>
> > > While the objective is to guarantee forward progress even when the
> > > victim cannot terminate, we still want this mechanism to be efficient
> > > because we perform these operations to relieve memory pressure before
> > > it affects user experience.
> > >
> > > Alternative options I would like your feedback on are:
> > > 1. Introduce a dedicated process_madvise(MADV_DONTNEED_MM)
> > > specifically for this case to indicate that the whole mm can be freed.
>
> This shouldn't be any different from madvise on the full address range,
> right?
>
Yep, just a matter of choosing the most appropriate API.
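E.g. something like this, with MADV_DONTNEED_MM being a hypothetical
advice value and the vector being ignored entirely:

	/* hypothetical whole-mm variant: no vector needed */
	ret = syscall(__NR_process_madvise, pidfd, NULL, 0,
		      MADV_DONTNEED_MM, 0);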
> > > 2. A new syscall to efficiently obtain a vector of VMAs (start,
> > > length, flags) of the process instead of reading /proc/pid/maps. The
> > > size of the vector is still limited by UIO_MAXIOV (1024), so several
> > > calls might be needed to query a larger number of VMAs; however, it
> > > will still be an order of magnitude more efficient than reading the
> > > /proc/pid/maps file in 4K or smaller chunks.
>
> While this might be interesting for other usecases - userspace memory
> management in general - I do not think it is directly related to this
> particular feature.
>
True, but such a syscall would be useful for other use cases, like the
MADV_COLD/MADV_PAGEOUT work that Minchan was doing. Maybe we can kill
more than one bird here? Minchan, any thoughts?
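Roughly, such a syscall could look like the sketch below (all names
are hypothetical, just to make the idea concrete):

	struct vma_desc {
		unsigned long start;	/* VMA start address */
		unsigned long len;	/* VMA length in bytes */
		unsigned long flags;	/* VM_READ/VM_WRITE/... */
	};

	/*
	 * Fills 'vec' with up to 'vlen' (<= UIO_MAXIOV) descriptors for
	 * the VMAs of the process behind 'pidfd', starting at 'addr'.
	 * Returns the number of entries filled; call again with the end
	 * of the last returned VMA to continue the iteration.
	 */
	long process_get_vmas(int pidfd, unsigned long addr,
			      struct vma_desc *vec, unsigned int vlen);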
> > > 3. Use process_madvise() flags parameter to indicate a bulk operation
> > > which ignores input vectors. Sample usage: process_madvise(pidfd,
> > > MADV_DONTNEED, vector=NULL, vlen=0, flags=PMADV_FLAG_FILE |
> > > PMADV_FLAG_ANON);
>
> Similar to above.
>
Similar to option 1, I suppose. If so, I agree: just a matter of
choosing the API.
> > > 4. madvise()/process_madvise() handle gaps between VMAs, so we could
> > > provide one vector element spanning the entire address space. There
> > > are technical issues with this approach (the process_madvise() return
> > > value can't represent such a large number of bytes, and there is a
> > > MAX_RW_COUNT limit on the number of bytes one call can handle), but
> > > I would still like to hear opinions about it. If this option is
> > > preferable maybe we can deal with these limitations.
>
> To be really honest, the more I am thinking about remote MADV_DONTNEED
> the less I like it. Sure, we can limit this functionality to killed
> tasks, but there is still a need for MMF_UNSTABLE, which the current
> oom reaper sets to prevent memory corruption while the victim is still
> running in kernel space. A userspace memory reaper would need something
> similar.
>
> I do have a vague recollection that we have discussed a kill(2)-based
> approach in the past as well. Essentially SIG_KILL_SYNC, which would
> not only send the signal but also start a teardown of the resources
> owned by the task - at least those we can remove safely. The interface
> would be much simpler and less tricky to use. You just make your
> userspace oom killer, or potentially other users, call SIG_KILL_SYNC,
> which will be more expensive, but you would at least know that as many
> resources have been freed as the kernel can afford at the moment.
Correct, my early RFC here
https://patchwork.kernel.org/project/linux-mm/patch/20190411014353.113252-3-surenb@xxxxxxxxxx
was using a new flag for pidfd_send_signal() to request mm reaping by
the oom-reaper kthread. IIUC you propose to have a new SIG_KILL_SYNC
signal instead of a new pidfd_send_signal() flag, and otherwise a very
similar solution. Is my understanding correct?
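For comparison, userspace usage of the pidfd_send_signal() variant from
that RFC would look roughly like this (the flag name here is
illustrative, not what the RFC or upstream uses):

	/* hypothetical flag: kill the task and queue its mm for
	 * reaping by the oom-reaper kthread in a single call */
	ret = syscall(__NR_pidfd_send_signal, pidfd, SIGKILL, NULL,
		      PIDFD_SIGNAL_MM_REAP);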
I remember that Mel Gorman (whom I forgot to CC on my original email
and have added now) and Matthew Wilcox were actively participating in
that discussion during LSFMM. I would love to hear their opinions
before jumping into development.
Thanks,
Suren.
> --
> Michal Hocko
> SUSE Labs