Re: [PATCH v2] fs/hfs: fix s_fs_info leak on setup_bdev_super() failure

From: Mehdi Ben Hadj Khelifa

Date: Sat Nov 29 2025 - 06:48:13 EST


On 11/27/25 9:19 PM, Viacheslav Dubeyko wrote:
On Thu, 2025-11-27 at 09:59 +0100, Christian Brauner wrote:
On Wed, Nov 26, 2025 at 10:30:30PM +0000, Viacheslav Dubeyko wrote:
On Wed, 2025-11-26 at 17:06 +0100, Mehdi Ben Hadj Khelifa wrote:
On 11/26/25 2:48 PM, Christian Brauner wrote:
On Wed, Nov 19, 2025 at 07:58:21PM +0000, Viacheslav Dubeyko wrote:
On Wed, 2025-11-19 at 08:38 +0100, Mehdi Ben Hadj Khelifa wrote:
The regression introduced by commit aca740cecbe5 ("fs: open block device
after superblock creation") allows setup_bdev_super() to fail after a new
superblock has been allocated by sget_fc(), but before hfs_fill_super()
takes ownership of the filesystem-specific s_fs_info data.

In that case, hfs_put_super() and the failure paths of hfs_fill_super()
are never reached, leaving the HFS mdb structures attached to s->s_fs_info
unreleased. The default kill_block_super() teardown also does not free
HFS-specific resources, resulting in a memory leak on early mount failure.

Fix this by moving all HFS-specific teardown (hfs_mdb_put()) from
hfs_put_super() and the hfs_fill_super() failure path into a dedicated
hfs_kill_sb() implementation. This ensures that both normal unmount and
early teardown paths (including setup_bdev_super() failure) correctly
release HFS metadata.

This also preserves the intended layering: generic_shutdown_super()
handles VFS-side cleanup, while HFS filesystem state is fully destroyed
afterwards.

Fixes: aca740cecbe5 ("fs: open block device after superblock creation")
Reported-by: syzbot+ad45f827c88778ff7df6@xxxxxxxxxxxxxxxxxxxxxxxxx
Closes: https://syzkaller.appspot.com/bug?extid=ad45f827c88778ff7df6
Tested-by: syzbot+ad45f827c88778ff7df6@xxxxxxxxxxxxxxxxxxxxxxxxx
Suggested-by: Al Viro <viro@xxxxxxxxxxxxxxxxxx>
Signed-off-by: Mehdi Ben Hadj Khelifa <mehdi.benhadjkhelifa@xxxxxxxxx>
---
ChangeLog:

Changes from v1:

- Changed the patch direction to focus on HFS changes specifically, as
suggested by Al Viro

Link: https://lore.kernel.org/all/20251114165255.101361-1-mehdi.benhadjkhelifa@xxxxxxxxx/

Note: This patch may need more testing. So far I have only run the
selftests (no regressions), checked the dmesg output (no regressions), run
the reproducer (no bug reproduced), and tested it with syzbot.

Have you run xfstests for the patch? Unfortunately, we have multiple xfstests
failures for HFS now, and you can check the list of known issues here [1]. The
main point of such an xfstests run is to check whether some issue(s) might be
fixed by the patch and, more importantly, that you don't introduce new issues. ;)
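For reference, the `./check -g auto` runs quoted below are driven by an xfstests local.config; a minimal sketch for a loop-device HFS run might look like this (the device paths and mount points are placeholders, not taken from the thread):

```shell
# local.config sketch for an xfstests HFS run (paths/devices are assumptions)
export FSTYP=hfs
export TEST_DEV=/dev/loop50
export TEST_DIR=/mnt/test
export SCRATCH_DEV=/dev/loop51
export SCRATCH_MNT=/mnt/scratch
```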


fs/hfs/super.c | 16 ++++++++++++----
1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/fs/hfs/super.c b/fs/hfs/super.c
index 47f50fa555a4..06e1c25e47dc 100644
--- a/fs/hfs/super.c
+++ b/fs/hfs/super.c
@@ -49,8 +49,6 @@ static void hfs_put_super(struct super_block *sb)
{
cancel_delayed_work_sync(&HFS_SB(sb)->mdb_work);
hfs_mdb_close(sb);
- /* release the MDB's resources */
- hfs_mdb_put(sb);
}
static void flush_mdb(struct work_struct *work)
@@ -383,7 +381,6 @@ static int hfs_fill_super(struct super_block *sb, struct fs_context *fc)
bail_no_root:
pr_err("get root inode failed\n");
bail:
- hfs_mdb_put(sb);
return res;
}
@@ -431,10 +428,21 @@ static int hfs_init_fs_context(struct fs_context *fc)
return 0;
}
+static void hfs_kill_sb(struct super_block *sb)
+{
+ generic_shutdown_super(sb);
+ hfs_mdb_put(sb);
+ if (sb->s_bdev) {
+ sync_blockdev(sb->s_bdev);
+ bdev_fput(sb->s_bdev_file);
+ }
+}
+
static struct file_system_type hfs_fs_type = {
.owner = THIS_MODULE,
.name = "hfs",
- .kill_sb = kill_block_super,
+ .kill_sb = hfs_kill_sb,

It looks like we have the same issue for the case of HFS+ [2]. Could you please
double check that HFS+ should be fixed too?

There's no need to open-code this unless I'm missing something. All you
need is the following two patches - untested. Both issues were
introduced by the conversion to the new mount api.
Yes, I don't think open-coding is needed here, IIUC. Also, as I mentioned
before, I followed Al Viro's suggestion from previous replies; that is the
main reason I did it that way in the first place.

Also, Slava and I are working on testing the mentioned patches. Should
I send them to the maintainers and mailing lists once
testing is done?



I have run the xfstests on the latest kernel. Everything works as expected:

sudo ./check -g auto
FSTYP -- hfsplus
PLATFORM -- Linux/x86_64 hfsplus-testing-0001 6.18.0-rc7 #97 SMP
PREEMPT_DYNAMIC Tue Nov 25 15:12:42 PST 2025
MKFS_OPTIONS -- /dev/loop51
MOUNT_OPTIONS -- /dev/loop51 /mnt/scratch

generic/001 22s ... 53s
generic/002 17s ... 43s

<skipped>

Failures: generic/003 generic/013 generic/020 generic/034 generic/037
generic/039 generic/040 generic/041 generic/056 generic/057 generic/062
generic/065 generic/066 generic/069 generic/070 generic/073 generic/074
generic/079 generic/091 generic/097 generic/101 generic/104 generic/106
generic/107 generic/113 generic/127 generic/241 generic/258 generic/263
generic/285 generic/321 generic/322 generic/335 generic/336 generic/337
generic/339 generic/341 generic/342 generic/343 generic/348 generic/363
generic/376 generic/377 generic/405 generic/412 generic/418 generic/464
generic/471 generic/475 generic/479 generic/480 generic/481 generic/489
generic/490 generic/498 generic/502 generic/510 generic/523 generic/525
generic/526 generic/527 generic/533 generic/534 generic/535 generic/547
generic/551 generic/552 generic/557 generic/563 generic/564 generic/617
generic/631 generic/637 generic/640 generic/642 generic/647 generic/650
generic/690 generic/728 generic/729 generic/760 generic/764 generic/771
generic/776
Failed 84 of 767 tests

Currently, these failures are expected. But I don't see any serious crash,
and especially not on every single test.

So, I can apply two patches that Christian shared and test it on my side.

I had the impression that Christian had already taken the HFS patch into his
tree. Am I wrong here? I can take both patches into the HFS/HFS+ tree. Let me
run xfstests with the patches applied first.

Feel free to take them.

Sounds good!

So, I have the xfstests run results:

HFS without patch:

sudo ./check -g auto
FSTYP -- hfs
PLATFORM -- Linux/x86_64 hfsplus-testing-0001 6.18.0-rc7+ #98 SMP
PREEMPT_DYNAMIC Wed Nov 26 14:37:19 PST 2025
MKFS_OPTIONS -- /dev/loop51
MOUNT_OPTIONS -- /dev/loop51 /mnt/scratch

<skipped>

Failed 140 of 766 tests

HFS with patch:

sudo ./check -g auto
FSTYP -- hfs
PLATFORM -- Linux/x86_64 hfsplus-testing-0001 6.18.0-rc7+ #98 SMP
PREEMPT_DYNAMIC Wed Nov 26 14:37:19 PST 2025
MKFS_OPTIONS -- /dev/loop51
MOUNT_OPTIONS -- /dev/loop51 /mnt/scratch

<skipped>

Failed 139 of 766 tests

HFS+ without patch:

sudo ./check -g auto
FSTYP -- hfsplus
PLATFORM -- Linux/x86_64 hfsplus-testing-0001 6.18.0-rc7 #97 SMP
PREEMPT_DYNAMIC Tue Nov 25 15:12:42 PST 2025
MKFS_OPTIONS -- /dev/loop51
MOUNT_OPTIONS -- /dev/loop51 /mnt/scratch

<skipped>

Failed 84 of 767 tests

HFS+ with patch:

sudo ./check -g
FSTYP -- hfsplus
PLATFORM -- Linux/x86_64 hfsplus-testing-0001 6.18.0-rc7+ #98 SMP
PREEMPT_DYNAMIC Wed Nov 26 14:37:19 PST 2025
MKFS_OPTIONS -- /dev/loop51
MOUNT_OPTIONS -- /dev/loop51 /mnt/scratch

<skipped>

Failed 81 of 767 tests

As far as I can see, the situation is improving with the patches. I can say
that the patches have been tested and I am ready to pick them up into the
HFS/HFS+ tree.

Mehdi, should I expect the formal patches from you? Or should I take the
patches as they are?


I can send them myself. Should I append a Signed-off-by tag to them?


Also, I want to apologize for my delayed/missing reply about the xfstests crashes. I went back to testing them three days ago and they started showing different results again, and then I broke my finger, which has slowed my progress considerably. I'm still working on reproducing the same crashes I saw before, where any test would trigger them; the quick tests did not crash, and with the auto group the crash appears around test 631 on my desktop and around test 642 on my laptop, for both unpatched and patched kernels. I will update you when I can get predictable behavior and a cause/call stack for the crash, but expect slow progress from my side for the reason mentioned above.


Thanks,
Slava.
Best Regards,
Mehdi Ben Hadj Khelifa