Re: [PATCH] mm/migrate: fix deadlock in migrate_pages_batch() on large folios

From: Gao Xiang
Date: Fri Aug 02 2024 - 05:01:56 EST


Hi Matthew,

On 2024/7/29 06:11, Gao Xiang wrote:
> Hi,

> On 2024/7/29 05:46, Matthew Wilcox wrote:
>> On Sun, Jul 28, 2024 at 11:49:13PM +0800, Gao Xiang wrote:
>>> It was found by a compaction stress test when I explicitly enabled
>>> EROFS compressed files to use large folios, a case I cannot reproduce
>>> with the same workload if large folio support is off (current mainline).
>>> Typically, filesystem reads (with locked file-backed folios) could use
>>> another bdev/meta inode to load some other I/Os (e.g. inode extent
>>> metadata or cached compressed data), so the locking order will be:

>> Umm.  That is a new constraint to me.  We have two other places which
>> take the folio lock in a particular order.  Writeback takes locks on
>> folios belonging to the same inode in ascending ->index order.  It
>> submits all the folios for write before moving on to lock other inodes,
>> so it does not conflict with this new constraint you're proposing.

> BTW, I don't believe this ordering is new or specific to EROFS: if you
> consider ext4 or ext2, for example, they also use sb_bread() (buffer
> heads on the bdev inode) to trigger some metadata I/Os,

> e.g. take ext2 for simplicity:
>   ext2_readahead
>     mpage_readahead
>       ext2_get_block
>         ext2_get_blocks
>           ext2_get_branch
>             sb_bread     <-- get some metadata used for this data I/O

I guess I need to write more words about this:

Although sb_bread() currently takes buffer locks to do metadata I/Os,
the following path introduces a similar lock dependency:

...
sb_bread
  __bread_gfp
    bdev_getblk
      __getblk_slow
        grow_dev_folio   // on bdev->bd_mapping
          __filemap_get_folio(FGP_LOCK | .. | FGP_CREAT)

So this ordering has already been there for decades.  Although EROFS has
never used buffer heads since its initial version, it needs a different
address_space to cache metadata in the page cache for best performance.
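
To make the nesting concrete, here is a minimal sketch of what such a
.read_folio() implementation ends up doing (the demo_* names are
hypothetical, not actual EROFS code):

#include <linux/pagemap.h>

/* hypothetical helper: returns the fs-private metadata inode */
static struct inode *demo_meta_inode(struct inode *inode);

static int demo_read_folio(struct file *file, struct folio *folio)
{
	/* "folio" is file-backed and already locked by the caller */
	struct inode *meta_inode = demo_meta_inode(folio->mapping->host);
	struct folio *meta;

	/* nested: lock a meta folio while the file folio lock is held */
	meta = __filemap_get_folio(meta_inode->i_mapping, 0,
			FGP_LOCK | FGP_ACCESSED | FGP_CREAT,
			mapping_gfp_mask(meta_inode->i_mapping));
	if (IS_ERR(meta)) {
		folio_unlock(folio);
		return PTR_ERR(meta);
	}

	/* ... read metadata, then fill and unlock the file folio ... */
	folio_unlock(meta);
	folio_put(meta);
	folio_unlock(folio);
	return 0;
}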

In .read_folio() and .readahead() contexts, the order has to be

file-backed folios
bdev/meta folios

since it's hard to use any other order: the file-backed folios cannot
be filled until the bdev/meta folios are uptodate.
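
So with batched migration, an ABBA case like the following can happen
(a simplified interleaving; my understanding is that in sync mode
migrate_pages_batch() can hold one folio lock while blocking on another
folio lock in the same batch):

.read_folio() side                    migrate_pages_batch() side
------------------                    --------------------------
folio_lock(file folio A)
                                      folio_lock(meta folio B)
                                      folio_lock(file folio A) <-- blocks
__filemap_get_folio(FGP_LOCK)
  on the bdev/meta mapping
  blocks on meta folio B              <-- ABBA deadlock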



>> The other place is remap_file_range().  Both inodes in that case must be
>> regular files,
>>          if (!S_ISREG(inode_in->i_mode) || !S_ISREG(inode_out->i_mode))
>>                  return -EINVAL;
>> so this new rule is fine.

> Referring to vfs_dedupe_file_range_compare() and vfs_lock_two_folios(),
> it seems they only consider folio->index regardless of address_spaces
> too.
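
For reference, vfs_lock_two_folios() in fs/remap_range.c does roughly
the following (paraphrased from my reading, not a verbatim copy):

static void vfs_lock_two_folios(struct folio *folio1, struct folio *folio2)
{
	/* lock in order of increasing index, ignoring address_spaces */
	if (folio1->index > folio2->index)
		swap(folio1, folio2);

	folio_lock(folio1);
	if (folio1 != folio2)
		folio_lock(folio2);
}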


>> Does anybody know of any _other_ ordering constraints on folio locks?  I'm
>> willing to write them down ...

> Personally, I cannot think of any particular order between two folio
> locks across different inodes, so I think batched folio locking always
> needs to be taken care of.


I think the folio_lock() comment about different address_spaces, added
in commit cd125eeab2de ("filemap: Update the folio_lock documentation"),
would be better refined:

...
* in the same address_space. If they are in different address_spaces,
* acquire the lock of the folio which belongs to the address_space which
* has the lowest address in memory first.
*/
static inline void folio_lock(struct folio *folio)
{
...


There are several cases where we cannot follow the comment above, due
to .read_folio(), .readahead() and other contexts.

I'm not sure how to document the order between different address_spaces,
so I think it's just "no particular order between different
address_spaces".

Thanks,
Gao Xiang