Re: [PATCH v6 2/9] binder: concurrent page installation
From: Carlos Llamas
Date: Wed Dec 04 2024 - 08:39:48 EST
On Wed, Dec 04, 2024 at 10:59:19AM +0100, Alice Ryhl wrote:
> On Tue, Dec 3, 2024 at 10:55 PM Carlos Llamas <cmllamas@xxxxxxxxxx> wrote:
> >
> > Allow multiple callers to install pages simultaneously by switching the
> > mmap_sem from write-mode to read-mode. Races to the same PTE are handled
> > using get_user_pages_remote() to retrieve the already installed page.
> > This method significantly reduces contention in the mmap semaphore.
> >
> > To ensure safety, vma_lookup() is used (instead of alloc->vma) to avoid
> > operating on an isolated VMA. In addition, zap_page_range_single() is
> > called under the alloc->mutex to avoid racing with the shrinker.
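
For context, the pte race handling described above boils down to roughly
this (a simplified sketch of binder_install_single_page(), not the exact
code, error handling omitted):

	mmap_read_lock(alloc->mm);
	vma = vma_lookup(alloc->mm, addr);
	...
	ret = vm_insert_page(vma, addr, page);
	if (ret == -EBUSY) {
		/*
		 * Someone else won the race for this pte; drop our
		 * page and look up the one already installed.
		 */
		__free_page(page);
		ret = get_user_pages_remote(alloc->mm, addr, 1,
					    FOLL_NOFAULT, &page, NULL);
		...
	}
	mmap_read_unlock(alloc->mm);
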
>
> How do you avoid racing with the shrinker? You don't hold the mutex
> when binder_install_single_page is called.
>
> E.g. consider this execution:
>
> 1. binder_alloc_new_buf finishes allocating the struct binder_buffer
> and unlocks the mutex.

By the time the mutex is released in binder_alloc_new_buf(), all the
pages that will be used have already been removed from the freelist, so
the shrinker has no access to them.
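
IOW, the ordering in binder_alloc_new_buf() is roughly the following
(simplified sketch, not the exact code):

	mutex_lock(&alloc->mutex);
	buffer = binder_alloc_new_buf_locked(alloc, ...);
	/* binder_lru_freelist_del() was called in there, taking the
	 * buffer's page range off the freelist */
	mutex_unlock(&alloc->mutex);

	/* the shrinker can run from here on, but it only walks the
	 * freelist and these pages are no longer on it */
	binder_install_buffer_pages(alloc, buffer, size);
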
> 2. Shrinker starts running, locks the mutex, sets the page pointer to
> NULL and unlocks the lru spinlock. The mutex is still held.
> 3. binder_install_buffer_pages is called and since the page pointer is
> NULL, binder_install_single_page is called.
> 4. binder_install_single_page allocates a page and tries to
> vm_insert_page it. It gets an EBUSY error because the shrinker has not
> yet called zap_page_range_single.
> 5. binder_install_single_page looks up the page with
> get_user_pages_remote. The page is written back to the pages array.
> 6. The shrinker calls zap_page_range_single followed by
> binder_free_page(page_to_free).
> 7. The page has now been freed and zapped, but it's in the page array. UAF.
>
> Is there something I'm missing?

I think what you are missing is the call to binder_lru_freelist_del()
during the allocation in step 1.
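
The shrinker only frees pages that are still on the freelist. Roughly
(simplified, from memory), its path is:

	binder_shrink_scan()
	  list_lru_walk(&binder_freelist, binder_alloc_free_page, ...)

and since binder_lru_freelist_del() removed this buffer's page range
from the freelist under the alloc->mutex before step 1 finished,
binder_alloc_free_page() is never called for those pages while the
buffer is in use.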