[PATCH v2] erofs: add GFP_NOIO in the bio completion if needed
From: Jiucheng Xu via B4 Relay
Date: Wed Mar 11 2026 - 05:12:22 EST
From: Jiucheng Xu <jiucheng.xu@xxxxxxxxxxx>
The bio completion path in process context (e.g. dm-verity) calls directly
into decompression instead of kicking off another workqueue context, in
order to minimize scheduling latency. That decompression path can then
call vm_map_ram() with GFP_KERNEL.

Under memory pressure, vm_map_ram() may trigger swap I/O, which can cause
submit_bio_wait() to deadlock in some scenarios.
A trimmed-down call stack follows:

f2fs_submit_read_io
  submit_bio                        // current->bio_list is initialized
    mmc_blk_mq_recovery
      z_erofs_endio
        vm_map_ram
          __pte_alloc_kernel
            __alloc_pages_direct_reclaim
              shrink_folio_list
                __swap_writepage
                  submit_bio_wait   // current->bio_list is non-NULL, hang!
Wrap this path with memalloc_noio_{save,restore}() so that allocations in
the wrapped region cannot recurse into I/O.
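For context, a minimal sketch of the scoped-NOIO pattern (not part of this
patch; any_caller() and may_allocate() are purely illustrative names):

#include <linux/sched/mm.h>

/* Illustrative only: any_caller() and may_allocate() are hypothetical. */
static void any_caller(void)
{
	unsigned int flags;

	/* Mark this task PF_MEMALLOC_NOIO: allocations below behave as GFP_NOIO. */
	flags = memalloc_noio_save();
	may_allocate();		/* e.g. vm_map_ram() with GFP_KERNEL */
	/* Restore the previous task state; such sections nest safely. */
	memalloc_noio_restore(flags);
}

With the task flagged this way, direct reclaim triggered inside the wrapped
region cannot issue swap or filesystem I/O, so the nested submit_bio_wait()
shown above can no longer occur.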
Reviewed-by: Gao Xiang <hsiangkao@xxxxxxxxxxxxxxxxx>
Signed-off-by: Jiucheng Xu <jiucheng.xu@xxxxxxxxxxx>
---
Changes in v2:
- Reword the commit message.
- Link to v1: https://lore.kernel.org/r/20260311-origin-dev-v1-1-40524ef07ff0@xxxxxxxxxxx
---
fs/erofs/zdata.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
index 3977e42b9516861bf3d59c072b6b8aaa6898dd8a..fe8121df9ef2f2404fc6e3f0fbbd6367f9ec2c67 100644
--- a/fs/erofs/zdata.c
+++ b/fs/erofs/zdata.c
@@ -1445,6 +1445,7 @@ static void z_erofs_decompress_kickoff(struct z_erofs_decompressqueue *io,
 					       int bios)
 {
 	struct erofs_sb_info *const sbi = EROFS_SB(io->sb);
+	unsigned int gfp_flag;
 
 	/* wake up the caller thread for sync decompression */
 	if (io->sync) {
@@ -1477,7 +1478,9 @@ static void z_erofs_decompress_kickoff(struct z_erofs_decompressqueue *io,
 			sbi->sync_decompress = EROFS_SYNC_DECOMPRESS_FORCE_ON;
 		return;
 	}
+	gfp_flag = memalloc_noio_save();
 	z_erofs_decompressqueue_work(&io->u.work);
+	memalloc_noio_restore(gfp_flag);
 }
 
 static void z_erofs_fill_bio_vec(struct bio_vec *bvec,
---
base-commit: 11439c4635edd669ae435eec308f4ab8a0804808
change-id: 20260311-origin-dev-1c9665798204
Best regards,
--
Jiucheng Xu <jiucheng.xu@xxxxxxxxxxx>