Re: [PATCH v4 18/33] mm: write-lock VMAs before removing them from VMA tree

From: Hyeonggon Yoo
Date: Wed Mar 01 2023 - 02:43:55 EST


On Mon, Feb 27, 2023 at 09:36:17AM -0800, Suren Baghdasaryan wrote:
> Write-locking VMAs before isolating them ensures that page fault
> handlers don't operate on isolated VMAs.
>
> Signed-off-by: Suren Baghdasaryan <surenb@xxxxxxxxxx>
> ---
> mm/mmap.c | 1 +
> mm/nommu.c | 5 +++++
> 2 files changed, 6 insertions(+)
>
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 1f42b9a52b9b..f7ed357056c4 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -2255,6 +2255,7 @@ int split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
> static inline int munmap_sidetree(struct vm_area_struct *vma,
> struct ma_state *mas_detach)
> {
> + vma_start_write(vma);
> mas_set_range(mas_detach, vma->vm_start, vma->vm_end - 1);

I may be missing something, but I have a few questions:

1) Why does a writer need to both write-lock a VMA and mark it detached
when unmapping it? Isn't write-locking the VMA alone enough?

2) As the VMAs that are going to be removed are already write-locked in
vma_prepare(), couldn't this hunk be dropped?

> if (mas_store_gfp(mas_detach, vma, GFP_KERNEL))
> return -ENOMEM;
> diff --git a/mm/nommu.c b/mm/nommu.c
> index 57ba243c6a37..2ab162d773e2 100644
> --- a/mm/nommu.c
> +++ b/mm/nommu.c
> @@ -588,6 +588,7 @@ static int delete_vma_from_mm(struct vm_area_struct *vma)
> current->pid);
> return -ENOMEM;
> }
> + vma_start_write(vma);
> cleanup_vma_from_mm(vma);

3) I think this hunk could be dropped, as the per-VMA lock depends on MMU
anyway.

Thanks,
Hyeonggon

>
> /* remove from the MM's tree and list */
> @@ -1519,6 +1520,10 @@ void exit_mmap(struct mm_struct *mm)
> */
> mmap_write_lock(mm);
> for_each_vma(vmi, vma) {
> + /*
> + * No need to lock VMAs because this is the only mm user and no
> + * page fault handler can race with it.
> + */
> cleanup_vma_from_mm(vma);
> delete_vma(mm, vma);
> cond_resched();
> --
> 2.39.2.722.g9855ee24e9-goog
>
>