Re: [PATCH] mm/migrate: fix hugetlbfs deadlock by respecting lock ordering

From: Jinchao Wang
Date: Fri Jan 09 2026 - 09:16:10 EST


On Fri, Jan 09, 2026 at 02:39:08PM +0100, David Hildenbrand (Red Hat) wrote:
> On 1/9/26 04:47, Jinchao Wang wrote:
> > Fix an AB-BA deadlock between hugetlbfs_punch_hole() and page migration.
> >
> > The deadlock occurs because migration violates the lock ordering defined
> > in mm/rmap.c for hugetlbfs:
> >
> > * hugetlbfs PageHuge() take locks in this order:
> > * hugetlb_fault_mutex
> > * vma_lock
> > * mapping->i_mmap_rwsem
> > * folio_lock
> >
> > The following trace illustrates the inversion:
> >
> > Task A (punch_hole): Task B (migration):
> > -------------------- -------------------
> > 1. i_mmap_lock_write(mapping) 1. folio_lock(folio)
> > 2. folio_lock(folio) 2. i_mmap_lock_read(mapping)
> > (blocks waiting for B) (blocks waiting for A)
> >
> > Task A is blocked in the punch-hole path:
> > hugetlbfs_fallocate
> > hugetlbfs_punch_hole
> > hugetlbfs_zero_partial_page
> > folio_lock
> >
> > Task B is blocked in the migration path:
> > migrate_pages
> > unmap_and_move_huge_page
> > remove_migration_ptes
> > __rmap_walk_file
> > i_mmap_lock_read
> >
> > To fix this, adjust unmap_and_move_huge_page() to respect the established
> > hierarchy. If i_mmap_rwsem is acquired during try_to_migrate(), hold it
>
>
> I'm confused. Isn't it unmap_and_move_huge_page() that grabs the
> i_mmap_rwsem during hugetlb_page_mapping_lock_write() (where we do a
> try-lock)?
Yes, but that lock is dropped again right after try_to_migrate(), before
remove_migration_ptes() runs.

Task A can grab i_mmap_rwsem in the window between
    i_mmap_unlock_write(mapping)
and
    remove_migration_ptes() -> i_mmap_lock_read(mapping),
while Task B is still holding the folio lock, which is exactly the
AB-BA inversion shown above.

This window was introduced by the change below:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/diff/mm/migrate.c?id=336bf30eb765
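
Roughly, the flow in unmap_and_move_huge_page() after that commit looks
like this (paraphrased and simplified from my reading of mm/migrate.c,
arguments elided; this is not the diff of this patch):

	if (!folio_test_anon(src)) {
		/* try-locks mapping->i_mmap_rwsem for write */
		mapping = hugetlb_page_mapping_lock_write(...);
		ttu = TTU_RMAP_LOCKED;
	}

	try_to_migrate(src, ttu);

	if (ttu & TTU_RMAP_LOCKED)
		i_mmap_unlock_write(mapping);	/* rmap lock dropped here */

	/*
	 * <-- race window: hugetlbfs_punch_hole() can take i_mmap_rwsem
	 *     for write here and then block on folio_lock, which we still
	 *     hold, while we go on to block on i_mmap_rwsem below.
	 */

	remove_migration_ptes(...);		/* __rmap_walk_file() takes
						 * i_mmap_lock_read(mapping) */
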

>
>
> We now handle file-backed folios correctly I think. Could we somehow also be
> in trouble for anon folios? Because there, we'd still take the rmap lock
> after grabbing the folio lock.
>
>
> --
> Cheers
>
> David