Re: [PATCH RESEND v2 1/2] mm: drop oom code from exit_mmap
From: Andrew Morton
Date: Wed Jun 01 2022 - 17:36:46 EST
On Tue, 31 May 2022 15:30:59 -0700 Suren Baghdasaryan <surenb@xxxxxxxxxx> wrote:
> The primary reason to invoke the oom reaper from the exit_mmap path used
> to be to prevent excessive oom killing if the oom victim's exit races
> with the oom reaper (see [1] for more details). The invocation has moved
> around since then because of the interaction with the munlock logic, but
> the underlying reason has remained the same (see [2]).
>
> The munlock code is no longer a problem since [3], and there shouldn't
> be any blocking operation before the memory is unmapped by exit_mmap, so
> the oom reaper invocation can be dropped. The unmapping part can be done
> with the non-exclusive mmap_sem; the exclusive one is only required when
> page tables are freed.
>
> Remove the oom_reaper from exit_mmap, which makes the code easier to
> read. This is unlikely to make any observable difference, although some
> microbenchmarks could benefit from one less branch that needs to be
> evaluated, even though it is almost never taken.
>
> [1] 212925802454 ("mm: oom: let oom_reap_task and exit_mmap run concurrently")
> [2] 27ae357fa82b ("mm, oom: fix concurrent munlock and oom reaper unmap, v3")
> [3] a213e5cf71cb ("mm/munlock: delete munlock_vma_pages_all(), allow oomreap")
>
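For reference, the locking split described above boils down to the
pattern below. This is only a condensed sketch of the mmap.c side of the
patch (the tlb/vma setup and VMA teardown are elided), not the verbatim
hunk:

	/* Unmapping can proceed under the shared lock... */
	mmap_read_lock(mm);
	unmap_vmas(&tlb, vma, 0, -1);
	mmap_read_unlock(mm);

	/* Hide this mm from the oom killer/reaper; its memory is gone. */
	set_bit(MMF_OOM_SKIP, &mm->flags);

	/* ...but freeing the page tables still needs the exclusive lock. */
	mmap_write_lock(mm);
	free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, USER_PGTABLES_CEILING);
	tlb_finish_mmu(&tlb);
	/* ... VMA teardown ... */
	mmap_write_unlock(mm);
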
I've just reinstated the mapletree patchset, so there are some
conflicting changes.
> --- a/include/linux/oom.h
> +++ b/include/linux/oom.h
> @@ -106,8 +106,6 @@ static inline vm_fault_t check_stable_address_space(struct mm_struct *mm)
> return 0;
> }
>
> -bool __oom_reap_task_mm(struct mm_struct *mm);
> -
> long oom_badness(struct task_struct *p,
> unsigned long totalpages);
>
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 2b9305ed0dda..b7918e6bb0db 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -3110,30 +3110,13 @@ void exit_mmap(struct mm_struct *mm)
> /* mm's last user has gone, and its about to be pulled down */
> mmu_notifier_release(mm);
>
> - if (unlikely(mm_is_oom_victim(mm))) {
> - /*
> - * Manually reap the mm to free as much memory as possible.
> - * Then, as the oom reaper does, set MMF_OOM_SKIP to disregard
> - * this mm from further consideration. Taking mm->mmap_lock for
> - * write after setting MMF_OOM_SKIP will guarantee that the oom
> - * reaper will not run on this mm again after mmap_lock is
> - * dropped.
> - *
> - * Nothing can be holding mm->mmap_lock here and the above call
> - * to mmu_notifier_release(mm) ensures mmu notifier callbacks in
> - * __oom_reap_task_mm() will not block.
> - */
> - (void)__oom_reap_task_mm(mm);
> - set_bit(MMF_OOM_SKIP, &mm->flags);
> - }
> -
> - mmap_write_lock(mm);
> + mmap_read_lock(mm);
Unclear why this patch fiddles with the mm_struct locking in this
fashion; changelogging that would have been helpful. But iirc mapletree
wants to retain a write_lock here, so I ended up with:
void exit_mmap(struct mm_struct *mm)
{
	struct mmu_gather tlb;
	struct vm_area_struct *vma;
	unsigned long nr_accounted = 0;
	MA_STATE(mas, &mm->mm_mt, 0, 0);
	int count = 0;

	/* mm's last user has gone, and its about to be pulled down */
	mmu_notifier_release(mm);

	mmap_write_lock(mm);
	arch_exit_mmap(mm);

	vma = mas_find(&mas, ULONG_MAX);
	if (!vma) {
		/* Can happen if dup_mmap() received an OOM */
		mmap_write_unlock(mm);
		return;
	}

	lru_add_drain();
	flush_cache_mm(mm);
	tlb_gather_mmu_fullmm(&tlb, mm);
	/* update_hiwater_rss(mm) here? but nobody should be looking */
	/* Use ULONG_MAX here to ensure all VMAs in the mm are unmapped */
	unmap_vmas(&tlb, &mm->mm_mt, vma, 0, ULONG_MAX);

	/*
	 * Set MMF_OOM_SKIP to hide this task from the oom killer/reaper
	 * because the memory has been already freed. Do not bother checking
	 * mm_is_oom_victim because setting a bit unconditionally is cheaper.
	 */
	set_bit(MMF_OOM_SKIP, &mm->flags);

	free_pgtables(&tlb, &mm->mm_mt, vma, FIRST_USER_ADDRESS,
		      USER_PGTABLES_CEILING);
	tlb_finish_mmu(&tlb);

	/*
	 * Walk the list again, actually closing and freeing it, with preemption
	 * enabled, without holding any MM locks besides the unreachable
	 * mmap_write_lock.
	 */
	do {
		if (vma->vm_flags & VM_ACCOUNT)
			nr_accounted += vma_pages(vma);
		remove_vma(vma);
		count++;
		cond_resched();
	} while ((vma = mas_find(&mas, ULONG_MAX)) != NULL);

	BUG_ON(count != mm->map_count);

	trace_exit_mmap(mm);
	__mt_destroy(&mm->mm_mt);
	mm->mmap = NULL;
	mmap_write_unlock(mm);
	vm_unacct_memory(nr_accounted);
}