Re: [PATCH v5] hfsplus: fix uninit-value by validating catalog record size

From: Viacheslav Dubeyko

Date: Mon Feb 23 2026 - 13:59:34 EST


On Sat, 2026-02-21 at 11:46 +0530, Deepanshu Kartikey wrote:
> Syzbot reported a KMSAN uninit-value issue in hfsplus_strcasecmp(). The
> root cause is that hfs_brec_read() doesn't validate that the on-disk
> record size matches the expected size for the record type being read.
>
> When mounting a corrupted filesystem, hfs_brec_read() may read less data
> than expected. For example, when reading a catalog thread record, the
> debug output showed:
>
> HFSPLUS_BREC_READ: rec_len=520, fd->entrylength=26
> HFSPLUS_BREC_READ: WARNING - entrylength (26) < rec_len (520) - PARTIAL READ!
>
> hfs_brec_read() only validates that entrylength is not greater than the
> buffer size, but doesn't check whether it is less than expected. It reads
> 26 bytes into a 520-byte structure and returns success, leaving the
> remaining 494 bytes uninitialized.
>
> This uninitialized data in tmp.thread.nodeName then gets copied by
> hfsplus_cat_build_key_uni() and used by hfsplus_strcasecmp(), triggering
> the KMSAN warning when the uninitialized bytes are used as array indices
> in case_fold().
>
> Fix by introducing hfsplus_brec_read_cat() wrapper that:
> 1. Calls hfs_brec_read() to read the data
> 2. Validates the record size based on the type field:
> - Fixed size for folder and file records
> - Variable size for thread records (depends on string length)
> 3. Returns -EIO if size doesn't match expected
>
> For thread records, check minimum size before reading nodeName.length to
> avoid reading uninitialized data at call sites that don't zero-initialize
> the entry structure.
>
> Also initialize the tmp variable in hfsplus_find_cat() as defensive
> programming to ensure no uninitialized data even if validation is
> bypassed.
>
> Reported-by: syzbot+d80abb5b890d39261e72@xxxxxxxxxxxxxxxxxxxxxxxxx
> Closes: https://syzkaller.appspot.com/bug?extid=d80abb5b890d39261e72
> Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
> Tested-by: syzbot+d80abb5b890d39261e72@xxxxxxxxxxxxxxxxxxxxxxxxx
> Reviewed-by: Viacheslav Dubeyko <slava@xxxxxxxxxxx>
> Tested-by: Viacheslav Dubeyko <slava@xxxxxxxxxxx>
> Link: https://lore.kernel.org/all/20260120051114.1281285-1-kartikey406@gmail.com/ [v1]
> Link: https://lore.kernel.org/all/20260121063109.1830263-1-kartikey406@gmail.com/ [v2]
> Link: https://lore.kernel.org/all/20260212014233.2422046-1-kartikey406@gmail.com/ [v3]
> Link: https://lore.kernel.org/all/20260214002100.436125-1-kartikey406@gmail.com/T/ [v4]
> Signed-off-by: Deepanshu Kartikey <kartikey406@xxxxxxxxx>
> ---
> Changes in v5:
> - Add minimum size check for thread records before reading nodeName.length
> to avoid reading uninitialized data, as suggested by Charalampos Mitrodimas

Maybe I am missing something, but where is the suggested check? I don't
see this check at all. :)

Thanks,
Slava.

>
> Changes in v4:
> - Move hfsplus_cat_thread_size() as static inline to header file
>
> Changes in v3:
> - Introduced hfsplus_brec_read_cat() wrapper function for catalog-specific
> validation instead of modifying generic hfs_brec_read()
> - Added hfsplus_cat_thread_size() helper to calculate variable-size thread
> record sizes
> - Use exact size match (!=) instead of minimum size check (<)
> - Use sizeof(hfsplus_unichr) instead of hardcoded value 2
> - Updated all catalog record read sites to use new wrapper function
>
> Changes in v2:
> - Use structure initialization (= {0}) instead of memset()
> - Improved commit message to clarify how uninitialized data is used
> ---
> fs/hfsplus/bfind.c | 52 +++++++++++++++++++++++++++++++++++++++++
> fs/hfsplus/catalog.c | 4 ++--
> fs/hfsplus/dir.c | 2 +-
> fs/hfsplus/hfsplus_fs.h | 9 +++++++
> fs/hfsplus/super.c | 2 +-
> 5 files changed, 65 insertions(+), 4 deletions(-)
>
> diff --git a/fs/hfsplus/bfind.c b/fs/hfsplus/bfind.c
> index 336d654861c5..2b9152c3107b 100644
> --- a/fs/hfsplus/bfind.c
> +++ b/fs/hfsplus/bfind.c
> @@ -287,3 +287,55 @@ int hfs_brec_goto(struct hfs_find_data *fd, int cnt)
> fd->bnode = bnode;
> return res;
> }
> +
> +/**
> + * hfsplus_brec_read_cat - read and validate a catalog record
> + * @fd: find data structure
> + * @entry: pointer to catalog entry to read into
> + *
> + * Reads a catalog record and validates its size matches the expected
> + * size based on the record type.
> + *
> + * Returns 0 on success, or negative error code on failure.
> + */
> +int hfsplus_brec_read_cat(struct hfs_find_data *fd, hfsplus_cat_entry *entry)
> +{
> + int res;
> + u32 expected_size;
> +
> + res = hfs_brec_read(fd, entry, sizeof(hfsplus_cat_entry));
> + if (res)
> + return res;
> +
> + /* Validate catalog record size based on type */
> + switch (be16_to_cpu(entry->type)) {
> + case HFSPLUS_FOLDER:
> + expected_size = sizeof(struct hfsplus_cat_folder);
> + break;
> + case HFSPLUS_FILE:
> + expected_size = sizeof(struct hfsplus_cat_file);
> + break;
> + case HFSPLUS_FOLDER_THREAD:
> + case HFSPLUS_FILE_THREAD:
> + /* Ensure we have at least the fixed fields before reading nodeName.length */
> + if (fd->entrylength < offsetof(struct hfsplus_cat_thread, nodeName) +
> + offsetof(struct hfsplus_unistr, unicode)) {
> + pr_err("thread record too short (got %u)\n", fd->entrylength);
> + return -EIO;
> + }
> + expected_size = hfsplus_cat_thread_size(&entry->thread);
> + break;
> + default:
> + pr_err("unknown catalog record type %d\n",
> + be16_to_cpu(entry->type));
> + return -EIO;
> + }
> +
> + if (fd->entrylength != expected_size) {
> + pr_err("catalog record size mismatch (type %d, got %u, expected %u)\n",
> + be16_to_cpu(entry->type), fd->entrylength, expected_size);
> + return -EIO;
> + }
> +
> + return 0;
> +}
> diff --git a/fs/hfsplus/catalog.c b/fs/hfsplus/catalog.c
> index 02c1eee4a4b8..6c8380f7208d 100644
> --- a/fs/hfsplus/catalog.c
> +++ b/fs/hfsplus/catalog.c
> @@ -194,12 +194,12 @@ static int hfsplus_fill_cat_thread(struct super_block *sb,
> int hfsplus_find_cat(struct super_block *sb, u32 cnid,
> struct hfs_find_data *fd)
> {
> - hfsplus_cat_entry tmp;
> + hfsplus_cat_entry tmp = {0};
> int err;
> u16 type;
>
> hfsplus_cat_build_key_with_cnid(sb, fd->search_key, cnid);
> - err = hfs_brec_read(fd, &tmp, sizeof(hfsplus_cat_entry));
> + err = hfsplus_brec_read_cat(fd, &tmp);
> if (err)
> return err;
>
> diff --git a/fs/hfsplus/dir.c b/fs/hfsplus/dir.c
> index ca5f74a140ec..8aeb861969d3 100644
> --- a/fs/hfsplus/dir.c
> +++ b/fs/hfsplus/dir.c
> @@ -49,7 +49,7 @@ static struct dentry *hfsplus_lookup(struct inode *dir, struct dentry *dentry,
> if (unlikely(err < 0))
> goto fail;
> again:
> - err = hfs_brec_read(&fd, &entry, sizeof(entry));
> + err = hfsplus_brec_read_cat(&fd, &entry);
> if (err) {
> if (err == -ENOENT) {
> hfs_find_exit(&fd);
> diff --git a/fs/hfsplus/hfsplus_fs.h b/fs/hfsplus/hfsplus_fs.h
> index 5f891b73a646..61d52091dd28 100644
> --- a/fs/hfsplus/hfsplus_fs.h
> +++ b/fs/hfsplus/hfsplus_fs.h
> @@ -509,6 +509,15 @@ int hfsplus_submit_bio(struct super_block *sb, sector_t sector, void *buf,
> void **data, blk_opf_t opf);
> int hfsplus_read_wrapper(struct super_block *sb);
>
> +static inline u32 hfsplus_cat_thread_size(const struct hfsplus_cat_thread *thread)
> +{
> + return offsetof(struct hfsplus_cat_thread, nodeName) +
> + offsetof(struct hfsplus_unistr, unicode) +
> + be16_to_cpu(thread->nodeName.length) * sizeof(hfsplus_unichr);
> +}
> +
> +int hfsplus_brec_read_cat(struct hfs_find_data *fd, hfsplus_cat_entry *entry);
> +
> /*
> * time helpers: convert between 1904-base and 1970-base timestamps
> *
> diff --git a/fs/hfsplus/super.c b/fs/hfsplus/super.c
> index 592d8fbb748c..dcb4357aae3e 100644
> --- a/fs/hfsplus/super.c
> +++ b/fs/hfsplus/super.c
> @@ -571,7 +571,7 @@ static int hfsplus_fill_super(struct super_block *sb, struct fs_context *fc)
> err = hfsplus_cat_build_key(sb, fd.search_key, HFSPLUS_ROOT_CNID, &str);
> if (unlikely(err < 0))
> goto out_put_root;
> - if (!hfs_brec_read(&fd, &entry, sizeof(entry))) {
> + if (!hfsplus_brec_read_cat(&fd, &entry)) {
> hfs_find_exit(&fd);
> if (entry.type != cpu_to_be16(HFSPLUS_FOLDER)) {
> err = -EIO;