Re: [PATCH v5 14/21] mm/mmap: Avoid zeroing vma tree in mmap_region()

From: Liam R. Howlett
Date: Tue Jul 23 2024 - 10:18:41 EST


* Lorenzo Stoakes <lorenzo.stoakes@xxxxxxxxxx> [240722 14:43]:
> On Wed, Jul 17, 2024 at 04:07:02PM GMT, Liam R. Howlett wrote:
> > From: "Liam R. Howlett" <Liam.Howlett@xxxxxxxxxx>
> >
> > Instead of zeroing the vma tree and then overwriting the area, let the
> > area be overwritten and then clean up the gathered vmas using
> > vms_complete_munmap_vmas().
> >
> > If a driver is mapping over an existing vma, then clear the ptes before
> > the call_mmap() invocation. If the vma has a vm_ops->close(), then call
> > the close() function. This is done using the vms_clear_ptes() and
> > vms_close_vmas() helpers. This has the side effect of needing to call
> > open() on the vmas if mmap_region() fails later on.
> >
> > Temporarily keep track of the number of pages that will be removed and
> > reduce the charged amount.
> >
> > This commit drops the validate_mm() call in the vma_expand() function.
> > The call must be dropped because validation would fail: the mm
> > map_count is temporarily incorrect during a vma expansion, prior to
> > the cleanup from vms_complete_munmap_vmas().
> >
> > Clean up the error handling of vms_gather_munmap_vmas() by calling
> > the verification within the function.
> >
> > Note that before this change, a MAP_FIXED could fail and leave a gap
> > in the vma tree. With this change, a MAP_FIXED failure will fail
> > without creating a gap, leaving the vma(s) in the area (they may have
> > been split) and attempting to restore them to an operational state
> > (re-attached and vm_ops->open()'ed if they were vm_ops->close()'d).
> >
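(For reference, the net reordering is roughly the following - an
illustrative condensation of the diff below, not the literal code:

	vms_gather_munmap_vmas(...);	/* collect overlapping vmas */
	vms_clear_ptes(...);		/* clear ptes before call_mmap() */
	vms_close_vmas(...);		/* vm_ops->close() the old vmas */
	/* ... create and store the new vma over the same range ... */
	vms_complete_munmap_vmas(...);	/* free the gathered vmas last */

If anything fails after the close() calls, the gathered vmas must be
re-attached and vm_ops->open()'ed, which is what the abort path below
does.)
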
> > Signed-off-by: Liam R. Howlett <Liam.Howlett@xxxxxxxxxx>
> > ---
> > mm/internal.h | 2 +
> > mm/mmap.c | 119 +++++++++++++++++++++++++++++++-------------------
> > 2 files changed, 76 insertions(+), 45 deletions(-)
> >
> > diff --git a/mm/internal.h b/mm/internal.h
> > index ec8441362c28..5bd60cb9fcbb 100644
> > --- a/mm/internal.h
> > +++ b/mm/internal.h
> > @@ -1503,6 +1503,8 @@ struct vma_munmap_struct {
> > unsigned long stack_vm;
> > unsigned long data_vm;
> > bool unlock; /* Unlock after the munmap */
> > + bool clear_ptes; /* If there are outstanding PTEs to be cleared */
> > + bool closed; /* vma->vm_ops->close() called already */
> > };
> >
> > void __meminit __init_single_page(struct page *page, unsigned long pfn,
> > diff --git a/mm/mmap.c b/mm/mmap.c
> > index 20da0d039c95..0b7aa2c46cec 100644
> > --- a/mm/mmap.c
> > +++ b/mm/mmap.c
> > @@ -170,10 +170,11 @@ void unlink_file_vma_batch_final(struct unlink_vma_file_batch *vb)
> > /*
> > * Close a vm structure and free it.
> > */
> > -static void remove_vma(struct vm_area_struct *vma, bool unreachable)
> > +static
> > +void remove_vma(struct vm_area_struct *vma, bool unreachable, bool closed)
> > {
> > might_sleep();
> > - if (vma->vm_ops && vma->vm_ops->close)
> > + if (!closed && vma->vm_ops && vma->vm_ops->close)
> > vma->vm_ops->close(vma);
> > if (vma->vm_file)
> > fput(vma->vm_file);
> > @@ -401,17 +402,21 @@ anon_vma_interval_tree_post_update_vma(struct vm_area_struct *vma)
> > }
> >
> > static unsigned long count_vma_pages_range(struct mm_struct *mm,
> > - unsigned long addr, unsigned long end)
> > + unsigned long addr, unsigned long end,
> > + unsigned long *nr_accounted)
> > {
> > VMA_ITERATOR(vmi, mm, addr);
> > struct vm_area_struct *vma;
> > unsigned long nr_pages = 0;
> >
> > + *nr_accounted = 0;
> > for_each_vma_range(vmi, vma, end) {
> > unsigned long vm_start = max(addr, vma->vm_start);
> > unsigned long vm_end = min(end, vma->vm_end);
> >
> > nr_pages += PHYS_PFN(vm_end - vm_start);
> > + if (vma->vm_flags & VM_ACCOUNT)
> > + *nr_accounted += PHYS_PFN(vm_end - vm_start);
> > }
> >
> > return nr_pages;
> > @@ -527,6 +532,8 @@ static inline void init_vma_munmap(struct vma_munmap_struct *vms,
> > vms->exec_vm = vms->stack_vm = vms->data_vm = 0;
> > vms->unmap_start = FIRST_USER_ADDRESS;
> > vms->unmap_end = USER_PGTABLES_CEILING;
> > + vms->clear_ptes = false; /* No PTEs to clear yet */
> > + vms->closed = false;
> > }
> >
> > /*
> > @@ -735,7 +742,6 @@ int vma_expand(struct vma_iterator *vmi, struct vm_area_struct *vma,
> > vma_iter_store(vmi, vma);
> >
> > vma_complete(&vp, vmi, vma->vm_mm);
> > - validate_mm(vma->vm_mm);
> > return 0;
> >
> > nomem:
> > @@ -2597,23 +2603,31 @@ struct vm_area_struct *vma_merge_extend(struct vma_iterator *vmi,
> > *
> > * Reattach any detached vmas and free up the maple tree used to track the vmas.
> > */
> > -static inline void abort_munmap_vmas(struct ma_state *mas_detach)
> > +static inline void abort_munmap_vmas(struct ma_state *mas_detach, bool closed)
> > {
> > struct vm_area_struct *vma;
> >
> > mas_set(mas_detach, 0);
> > - mas_for_each(mas_detach, vma, ULONG_MAX)
> > + mas_for_each(mas_detach, vma, ULONG_MAX) {
> > + if (closed && vma->vm_ops && vma->vm_ops->close &&
> > + vma->vm_ops->open)
> > + vma->vm_ops->open(vma);
> > +
>
> Unfortunately I think this is broken. While in theory this should probably
> be semantically correct, I think drivers get this wrong.
>
> For instance, in the devio driver, usbdev_vm_ops assigns custom VMA
> open/close functionality in usbdev_vm_open() and usbdev_vm_close().
>
> usbdev_vm_open() simply increments a 'vma_use_count' counter, whereas
> usbdev_vm_close() calls dec_usb_memory_use_count(), which, if the count
> reaches zero, frees a bunch of objects.
>
> I've not tested it directly, but it's conceivable that we could end up
> with an entirely broken mapping that might result in a kernel NULL
> pointer deref or some other hideous, possibly exploitable (at least for
> DoS) scenario.
>
> Also since this is up to drivers, we can't really control whether people do
> stupid things here or otherwise assume this close/reopen scenario cannot
> happen.
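
Yes - to make the failure mode concrete, the pattern boils down to
something like this (a simplified model of the devio logic with made-up
names, not the actual driver code):

	/* Hypothetical driver state, loosely modeled on devio. */
	struct dev_mem {
		int vma_use_count;
		void *buffer;
	};

	static void dev_vm_open(struct vm_area_struct *vma)
	{
		struct dev_mem *m = vma->vm_private_data;

		m->vma_use_count++;	/* assumes buffer is still alive */
	}

	static void dev_vm_close(struct vm_area_struct *vma)
	{
		struct dev_mem *m = vma->vm_private_data;

		if (--m->vma_use_count == 0) {
			kfree(m->buffer);	/* backing objects freed */
			m->buffer = NULL;
		}
	}

If the munmap path runs ->close() (count drops to zero, buffer freed)
and the error path then runs ->open() (count back to one, buffer already
gone), the vma looks live but its backing state has been torn down.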
>
> I think the fact we _might_ cause inconsistent kernel state here rules
> this approach out, unfortunately.
>
> We can't simply do what this code did before (that is, leaving a hole) as
> this might require allocations to clear a range in the maple tree (as you
> pointed out in the open VMA scalability call earlier today).
>
> However, as I suggested in the call, it seems that the case of us
> performing a MAP_FIXED mapping _and_ removing underlying VMAs _and_
> those VMAs having custom close() operators is very niche, so in this
> instance it seems sensible to simply preallocate memory to give
> ourselves an out, clearing the range and returning an error only if a
> failure occurs in this scenario.
>
> Since this is such an edge case it'll mean we almost never preallocate,
> only doing so in this rare instance.
>
> It's ugly, but it seems there is no really 'pretty' solution to this
> problem, and we don't want this to block this series!
>

I will add a patch to this series that will preallocate to leave a gap
(as we do today in the impossible-to-undo failure cases).
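
Roughly, the idea is to detect the problematic case up front and only
preallocate then - an untested sketch, with a made-up helper name but
the iteration idiom taken from the existing code:

	/*
	 * Only a MAP_FIXED mapping that removes vmas with a ->close()
	 * needs the gap fallback, so only then reserve maple tree
	 * nodes up front.
	 */
	static bool vms_needs_gap_prealloc(struct ma_state *mas_detach)
	{
		struct vm_area_struct *vma;

		mas_set(mas_detach, 0);
		mas_for_each(mas_detach, vma, ULONG_MAX) {
			if (vma->vm_ops && vma->vm_ops->close)
				return true;
		}
		return false;
	}

The caller would preallocate the nodes (eg, with mas_preallocate())
before any teardown starts, so the failure path can write the gap
without allocating.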

Suren suggested I keep the change in its own patch so people see why we
cannot do the Right Thing(tm).

Thanks,
Liam

...