RE: [PATCH v4] hfsplus: fix uninit-value by validating catalog record size

From: Viacheslav Dubeyko

Date: Mon Feb 16 2026 - 19:13:58 EST


On Mon, 2026-02-16 at 02:15 +0000, Charalampos Mitrodimas wrote:
> Deepanshu Kartikey <kartikey406@xxxxxxxxx> writes:
>
> > Syzbot reported a KMSAN uninit-value issue in hfsplus_strcasecmp(). The
> > root cause is that hfs_brec_read() doesn't validate that the on-disk
> > record size matches the expected size for the record type being read.
> >
> > When mounting a corrupted filesystem, hfs_brec_read() may read less data
> > than expected. For example, when reading a catalog thread record, the
> > debug output showed:
> >
> > HFSPLUS_BREC_READ: rec_len=520, fd->entrylength=26
> > HFSPLUS_BREC_READ: WARNING - entrylength (26) < rec_len (520) - PARTIAL READ!
> >
> > hfs_brec_read() only validates that entrylength is not greater than the
> > buffer size, but doesn't check if it's less than expected. It successfully
> > reads 26 bytes into a 520-byte structure and returns success, leaving 494
> > bytes uninitialized.
> >
> > This uninitialized data in tmp.thread.nodeName then gets copied by
> > hfsplus_cat_build_key_uni() and used by hfsplus_strcasecmp(), triggering
> > the KMSAN warning when the uninitialized bytes are used as array indices
> > in case_fold().
> >
> > Fix by introducing hfsplus_brec_read_cat() wrapper that:
> > 1. Calls hfs_brec_read() to read the data
> > 2. Validates the record size based on the type field:
> > - Fixed size for folder and file records
> > - Variable size for thread records (depends on string length)
> > 3. Returns -EIO if size doesn't match expected
> >
> > Also initialize the tmp variable in hfsplus_find_cat() as defensive
> > programming to ensure no uninitialized data even if validation is
> > bypassed.
> >
> > Reported-by: syzbot+d80abb5b890d39261e72@xxxxxxxxxxxxxxxxxxxxxxxxx
> > Closes: https://syzkaller.appspot.com/bug?extid=d80abb5b890d39261e72
> > Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
> > Tested-by: syzbot+d80abb5b890d39261e72@xxxxxxxxxxxxxxxxxxxxxxxxx
> > Link: https://lore.kernel.org/all/20260120051114.1281285-1-kartikey406@gmail.com/ [v1]
> > Link: https://lore.kernel.org/all/20260121063109.1830263-1-kartikey406@gmail.com/ [v2]
> > Link: https://lore.kernel.org/all/20260212014233.2422046-1-kartikey406@gmail.com/ [v3]
> > Signed-off-by: Deepanshu Kartikey <kartikey406@xxxxxxxxx>
> > ---
> > Changes in v4:
> > - Moved hfsplus_cat_thread_size() into the header file as a static inline,
> >   as suggested by Viacheslav Dubeyko
> >
> > Changes in v3:
> > - Introduced hfsplus_brec_read_cat() wrapper function for catalog-specific
> > validation instead of modifying generic hfs_brec_read()
> > - Added hfsplus_cat_thread_size() helper to calculate variable-size thread
> > record sizes
> > - Use exact size match (!=) instead of minimum size check (<)
> > - Use sizeof(hfsplus_unichr) instead of hardcoded value 2
> > - Updated all catalog record read sites to use new wrapper function
> > - Addressed review feedback from Viacheslav Dubeyko
> >
> > Changes in v2:
> > - Use structure initialization (= {0}) instead of memset()
> > - Improved commit message to clarify how uninitialized data is used
> > ---
> >  fs/hfsplus/bfind.c      | 46 +++++++++++++++++++++++++++++++++++++++++
> >  fs/hfsplus/catalog.c    |  4 ++--
> >  fs/hfsplus/dir.c        |  2 +-
> >  fs/hfsplus/hfsplus_fs.h |  9 ++++++++
> >  fs/hfsplus/super.c      |  2 +-
> >  5 files changed, 59 insertions(+), 4 deletions(-)
> >
> > diff --git a/fs/hfsplus/bfind.c b/fs/hfsplus/bfind.c
> > index 9b89dce00ee9..4c5fd21585ef 100644
> > --- a/fs/hfsplus/bfind.c
> > +++ b/fs/hfsplus/bfind.c
> > @@ -297,3 +297,49 @@ int hfs_brec_goto(struct hfs_find_data *fd, int cnt)
> >  	fd->bnode = bnode;
> >  	return res;
> >  }
> > +
> > +/**
> > + * hfsplus_brec_read_cat - read and validate a catalog record
> > + * @fd: find data structure
> > + * @entry: pointer to catalog entry to read into
> > + *
> > + * Reads a catalog record and validates its size matches the expected
> > + * size based on the record type.
> > + *
> > + * Returns 0 on success, or negative error code on failure.
> > + */
> > +int hfsplus_brec_read_cat(struct hfs_find_data *fd, hfsplus_cat_entry *entry)
> > +{
> > +	int res;
> > +	u32 expected_size;
> > +
> > +	res = hfs_brec_read(fd, entry, sizeof(hfsplus_cat_entry));
> > +	if (res)
> > +		return res;
> > +
> > +	/* Validate catalog record size based on type */
> > +	switch (be16_to_cpu(entry->type)) {
> > +	case HFSPLUS_FOLDER:
> > +		expected_size = sizeof(struct hfsplus_cat_folder);
> > +		break;
> > +	case HFSPLUS_FILE:
> > +		expected_size = sizeof(struct hfsplus_cat_file);
> > +		break;
> > +	case HFSPLUS_FOLDER_THREAD:
> > +	case HFSPLUS_FILE_THREAD:
> > +		expected_size = hfsplus_cat_thread_size(&entry->thread);
>
> Should we check fd->entrylength < HFSPLUS_MIN_THREAD_SZ here before
> calling hfsplus_cat_thread_size(), so we don't read uninitialized
> nodeName.length at call sites that don't zero-initialize entry?
> hfsplus_readdir() already does this check for thread records.
>

I think it makes sense. It sounds like a good suggestion.

Thanks,
Slava.