Re: [syzbot] [mm?] KMSAN: uninit-value in swap_writeout

From: Baolin Wang

Date: Tue Dec 23 2025 - 20:43:15 EST




On 2025/12/24 08:16, Barry Song wrote:
On Wed, Dec 24, 2025 at 12:43 PM Pedro Falcato <pfalcato@xxxxxxx> wrote:

On Wed, Dec 24, 2025 at 11:46:44AM +1300, Barry Song wrote:

Uninit was created at:
 __alloc_frozen_pages_noprof+0x421/0xab0 mm/page_alloc.c:5233
 alloc_pages_mpol+0x328/0x860 mm/mempolicy.c:2486
 folio_alloc_mpol_noprof+0x56/0x1d0 mm/mempolicy.c:2505
 shmem_alloc_folio mm/shmem.c:1890 [inline]
 shmem_alloc_and_add_folio+0xc56/0x1bd0 mm/shmem.c:1932
 shmem_get_folio_gfp+0xad3/0x1fc0 mm/shmem.c:2556
 shmem_get_folio mm/shmem.c:2662 [inline]
 shmem_symlink+0x562/0xad0 mm/shmem.c:4129
 vfs_symlink+0x42f/0x4c0 fs/namei.c:5514
 do_symlinkat+0x2ae/0xbb0 fs/namei.c:5541

+Hugh and Baolin.

Thanks for CCing me.


This happens in the shmem symlink path, where newly allocated
folios are not cleared for some reason. As a result,
is_folio_zero_filled() ends up reading uninitialized data.


I'm neither Hugh nor Baolin, but I would guess that letting
is_folio_zero_filled() skip/disable KMSAN checks would also work. Since all we
want is to skip writeout when the folio is zero, it does not really matter,
I think, whether it is only incidentally zero.

Hi Pedro, thanks! You’re always welcome to chime in.

You are probably right. However, I still prefer the remaining
data to be zeroed, as it may be more compression-friendly.

Random data could potentially lead to larger compressed output,
whereas a large area of zeros would likely result in much smaller
compressed data.

Thanks, Pedro and Barry. I remember Hugh raised a similar issue before (see [1]; I did not investigate it further at the time :(). I agree with Hugh's point that the uninitialized parts should be zeroed before going out to the outside world.

[1] https://lore.kernel.org/all/02a21a55-8fe3-a9eb-f54b-051d75ae8335@xxxxxxxxxx/

Not quite sure if the below can fix the issue:

diff --git a/mm/shmem.c b/mm/shmem.c
index ec6c01378e9d..0ca2d4bffdb4 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -4131,6 +4131,7 @@ static int shmem_symlink(struct mnt_idmap *idmap, struct inode *dir,
 		goto out_remove_offset;
 	inode->i_op = &shmem_symlink_inode_operations;
 	memcpy(folio_address(folio), symname, len);
+	memset(folio_address(folio) + len, 0, folio_size(folio) - len);
 	folio_mark_uptodate(folio);
 	folio_mark_dirty(folio);
 	folio_unlock(folio);

That looks reasonable to me, though I would prefer the more readable helper folio_zero_range(). Barry, could you send out a formal patch? Thanks.
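For reference, the folio_zero_range() variant might look like the following
(untested sketch; folio_zero_range(folio, start, length) zeroes the given byte
range through the kmap helpers, so it also handles highmem folios correctly):

```diff
 	inode->i_op = &shmem_symlink_inode_operations;
 	memcpy(folio_address(folio), symname, len);
+	folio_zero_range(folio, len, folio_size(folio) - len);
 	folio_mark_uptodate(folio);
```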