Forwarded: [PATCH] ext4: fix NULL page dereference in ext4_bio_write_folio() with large folios
From: syzbot
Date: Sat Mar 21 2026 - 04:36:59 EST
For archival purposes, forwarding an incoming command email to
linux-kernel@xxxxxxxxxxxxxxx, syzkaller-bugs@xxxxxxxxxxxxxxxx.
***
Subject: [PATCH] ext4: fix NULL page dereference in ext4_bio_write_folio() with large folios
Author: kartikey406@xxxxxxxxx
#syz test: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
When blocksize < PAGE_SIZE, a folio can span multiple pages with
multiple buffer heads. ext4_bio_write_folio() encrypts the entire
folio once with offset=0 via fscrypt_encrypt_pagecache_blocks(),
which returns a single bounce page covering only the first page of
the folio.
When the write loop then iterates over buffer heads beyond the first
page, bio_add_folio() calculates nr = bh_offset(bh) / PAGE_SIZE,
which is non-zero for bh's on subsequent pages. folio_page(io_folio,
nr) then indexes past the end of the single-page bounce folio,
returning a NULL or garbage page pointer and causing a NULL pointer
dereference in bvec_set_page().
Fix this by moving the encryption inside the write loop and
encrypting each buffer head individually, at its correct offset
within the folio (offset_in_folio(folio, bh->b_data)). Each buffer
head now gets its own bounce page at index 0, so folio_page(io_folio,
0) is always valid. The existing retry logic for -ENOMEM is
preserved.
Reported-by: syzbot+ed8bc247f231c1a48e21@xxxxxxxxxxxxxxxxxxxxxxxxx
Signed-off-by: Deepanshu kartikey <Kartikey406@xxxxxxxxx>
---
fs/ext4/page-io.c | 87 +++++++++++++++++++++++------------------------
1 file changed, 43 insertions(+), 44 deletions(-)
diff --git a/fs/ext4/page-io.c b/fs/ext4/page-io.c
index a8c95eee91b7..d7114171cd52 100644
--- a/fs/ext4/page-io.c
+++ b/fs/ext4/page-io.c
@@ -537,56 +537,55 @@ int ext4_bio_write_folio(struct ext4_io_submit *io, struct folio *folio,
* (e.g. holes) to be unnecessarily encrypted, but this is rare and
* can't happen in the common case of blocksize == PAGE_SIZE.
*/
- if (fscrypt_inode_uses_fs_layer_crypto(inode)) {
- gfp_t gfp_flags = GFP_NOFS;
- unsigned int enc_bytes = round_up(len, i_blocksize(inode));
- struct page *bounce_page;
-
- /*
- * Since bounce page allocation uses a mempool, we can only use
- * a waiting mask (i.e. request guaranteed allocation) on the
- * first page of the bio. Otherwise it can deadlock.
- */
- if (io->io_bio)
- gfp_flags = GFP_NOWAIT;
- retry_encrypt:
- bounce_page = fscrypt_encrypt_pagecache_blocks(folio,
- enc_bytes, 0, gfp_flags);
- if (IS_ERR(bounce_page)) {
- ret = PTR_ERR(bounce_page);
- if (ret == -ENOMEM &&
- (io->io_bio || wbc->sync_mode == WB_SYNC_ALL)) {
- gfp_t new_gfp_flags = GFP_NOFS;
- if (io->io_bio)
- ext4_io_submit(io);
- else
- new_gfp_flags |= __GFP_NOFAIL;
- memalloc_retry_wait(gfp_flags);
- gfp_flags = new_gfp_flags;
- goto retry_encrypt;
- }
-
- printk_ratelimited(KERN_ERR "%s: ret = %d\n", __func__, ret);
- folio_redirty_for_writepage(wbc, folio);
- do {
- if (buffer_async_write(bh)) {
- clear_buffer_async_write(bh);
- set_buffer_dirty(bh);
- }
- bh = bh->b_this_page;
- } while (bh != head);
-
- return ret;
- }
- io_folio = page_folio(bounce_page);
- }
-
__folio_start_writeback(folio, keep_towrite);
/* Now submit buffers to write */
do {
if (!buffer_async_write(bh))
continue;
+ if (fscrypt_inode_uses_fs_layer_crypto(inode)) {
+ gfp_t gfp_flags = GFP_NOFS;
+ struct page *bounce_page;
+ /*
+ * Since bounce page allocation uses a mempool, we can
+ * only use a waiting mask (i.e. request guaranteed
+ * allocation) on the first page of the bio.
+ * Otherwise it can deadlock.
+ */
+ if (io->io_bio)
+ gfp_flags = GFP_NOWAIT;
+ retry_encrypt:
+ bounce_page = fscrypt_encrypt_pagecache_blocks(folio,
+ bh->b_size,
+ offset_in_folio(folio, bh->b_data),
+ gfp_flags);
+ if (IS_ERR(bounce_page)) {
+ ret = PTR_ERR(bounce_page);
+ if (ret == -ENOMEM &&
+ (io->io_bio || wbc->sync_mode == WB_SYNC_ALL)) {
+ gfp_t new_gfp_flags = GFP_NOFS;
+ if (io->io_bio)
+ ext4_io_submit(io);
+ else
+ new_gfp_flags |= __GFP_NOFAIL;
+ memalloc_retry_wait(gfp_flags);
+ gfp_flags = new_gfp_flags;
+ goto retry_encrypt;
+ }
+ printk_ratelimited(KERN_ERR "%s: ret = %d\n",
+ __func__, ret);
+ folio_redirty_for_writepage(wbc, folio);
+ do {
+ if (buffer_async_write(bh)) {
+ clear_buffer_async_write(bh);
+ set_buffer_dirty(bh);
+ }
+ bh = bh->b_this_page;
+ } while (bh != head);
+ return ret;
+ }
+ io_folio = page_folio(bounce_page);
+ }
io_submit_add_bh(io, inode, folio, io_folio, bh);
} while ((bh = bh->b_this_page) != head);
--
2.43.0