Re: [PATCH v2 8/8] binder: use per-vma lock in page installation

From: Suren Baghdasaryan
Date: Thu Nov 07 2024 - 13:52:53 EST


On Thu, Nov 7, 2024 at 10:27 AM Suren Baghdasaryan <surenb@xxxxxxxxxx> wrote:
>
> On Thu, Nov 7, 2024 at 10:19 AM Carlos Llamas <cmllamas@xxxxxxxxxx> wrote:
> >
> > On Thu, Nov 07, 2024 at 10:04:23AM -0800, Suren Baghdasaryan wrote:
> > > On Thu, Nov 7, 2024 at 9:55 AM Carlos Llamas <cmllamas@xxxxxxxxxx> wrote:
> > > > On Thu, Nov 07, 2024 at 08:16:39AM -0800, Suren Baghdasaryan wrote:
> > > > > On Wed, Nov 6, 2024 at 8:03 PM Carlos Llamas <cmllamas@xxxxxxxxxx> wrote:
> > > > > > +static int binder_page_insert(struct binder_alloc *alloc,
> > > > > > +                              unsigned long addr,
> > > > > > +                              struct page *page)
> > > > > > +{
> > > > > > +        struct mm_struct *mm = alloc->mm;
> > > > > > +        struct vm_area_struct *vma;
> > > > > > +        int ret = -ESRCH;
> > > > > > +
> > > > > > +        if (!mmget_not_zero(mm))
> > > > > > +                return -ESRCH;
> > > > > > +
> > > > > > +        /* attempt per-vma lock first */
> > > > > > +        vma = lock_vma_under_rcu(mm, addr);
> > > > > > +        if (!vma)
> > > > > > +                goto lock_mmap;
> > > > > > +
> > > > > > +        if (binder_alloc_is_mapped(alloc))
> > > > >
> > > > > I don't think you need this check here. lock_vma_under_rcu() ensures
> > > > > that the VMA was not detached from the tree after locking the VMA, so
> > > > > if you got a VMA it's in the tree and it can't be removed (because
> > > > > it's locked). remove_vma()->vma_close()->vma->vm_ops->close() is
> > > > > called after VMA gets detached from the tree and that won't happen
> > > > > while VMA is locked. So, if lock_vma_under_rcu() returns a VMA,
> > > > > binder_alloc_is_mapped() has to always return true. A WARN_ON() check
> > > > > here to ensure that might be a better option.
> > > >
> > > > Yes, we are guaranteed to have _a_ non-isolated vma. However, the check
> > > > validates that it's the _expected_ vma. IIUC, our vma could have been
> > > > unmapped (clearing alloc->mapped) and a _new_ unrelated vma could have
> > > > gotten the same address range assigned?
> > >
> > > No, this should never happen. lock_vma_under_rcu() specifically checks
> > > the address range *after* it locks the VMA:
> > > https://elixir.bootlin.com/linux/v6.11.6/source/mm/memory.c#L6026
> >
> > The scenario I'm describing is the following:
> >
> >   Proc A                          Proc B
> >                                   mmap(addr, binder_fd)
> >   binder_page_insert()
> >   mmget_not_zero()
> >                                   munmap(addr)
> >                                     alloc->mapped = false;
> >   [...]
> >                                   // mmap other vma but same addr
> >                                   mmap(addr, other_fd)
> >
> >   vma = lock_vma_under_rcu()
> >
> > Isn't there a chance that the vma Proc A receives is an unrelated
> > vma that was placed in the same address range?
>
> Ah, I see now. The VMA is a valid one and at the address we specified
> but it does not belong to the binder. Yes, then you do need this
> check.

Is this scenario possible?:

  Proc A                          Proc B
                                  mmap(addr, binder_fd)
  binder_page_insert()
  mmget_not_zero()
                                  munmap(addr)
                                    alloc->mapped = false;
  [...]
                                  // mmap other vma but same addr
                                  mmap(addr, other_fd)
                                  mmap(other_addr, binder_fd)
  vma = lock_vma_under_rcu(addr)

If so, I think your binder_alloc_is_mapped() check will return true,
but the binder area is now mapped at a different address (other_addr).
To avoid that, I think you can check that "addr" still falls within
[alloc->vm_start, alloc->vm_start + alloc->buffer_size) after you have
obtained and locked the VMA.