[PATCH] fs/ntfs3: validate split-point offset in indx_insert_into_buffer

From: Michael Bommarito

Date: Fri Apr 17 2026 - 18:57:43 EST


indx_insert_into_buffer() computes

  used = used1 - to_copy - sp_size;
  memmove(de_t, Add2Ptr(sp, sp_size), used - le32_to_cpu(hdr1->de_off));

where sp and sp_size come from hdr_find_split(). hdr_find_split()
walks entries by le16_to_cpu(e->size) without validating that each
step stays within hdr->used or that the size field is at least
sizeof(struct NTFS_DE). index_hdr_check(), the on-load gatekeeper,
only validates header-level fields (used, total, de_off) and does
not walk per-entry sizes.

A crafted NTFS image whose leaf INDEX_HDR reports used == total but
contains one interior NTFS_DE with size = 0xFFF0 therefore passes
validation, descends to indx_insert_into_buffer() through the
ntfs_create() -> indx_insert_entry() path, and makes hdr_find_split()
return an sp whose sp_size (0xFFF0) greatly exceeds the remaining
bytes in the buffer. The u32 subtraction underflows and the memmove
count becomes a near-4-GiB value, producing an out-of-bounds kernel
write that corrupts adjacent allocations and panics the kernel.

Reproduced on 7.0.0-rc7 with UML + KASAN via a crafted image and a
single 'touch' inside the mounted directory; crash site resolves to
fs/ntfs3/index.c at the memmove. Trigger requires only local mount
of an attacker-supplied filesystem image (USB, loopback, or removable
media auto-mount).

Reject the split whenever the chosen sp plus its declared size
already extends past hdr1->used. This is the minimal fix; it
preserves the existing hdr_find_split() contract and relies on the
same out: cleanup path as the pre-existing error returns.

A prior OOB read in the very same indx_insert_into_buffer() memmove
was fixed in commit b8c44949044e ("fs/ntfs3: Fix OOB read in
indx_insert_into_buffer") by tightening hdr_find_e(), but that fix
does not cover the split-point size field path addressed here: sp is
returned by hdr_find_split(), not hdr_find_e(), and the underflow is
driven by sp->size rather than hdr->used exceeding hdr->total.

Fixes: 82cae269cfa9 ("fs/ntfs3: Add initialization of super block")
Cc: stable@xxxxxxxxxxxxxxx
Reported-by: Michael Bommarito <michael.bommarito@xxxxxxxxx>
Signed-off-by: Michael Bommarito <michael.bommarito@xxxxxxxxx>
Assisted-by: Claude:claude-opus-4-7
---

- FYI, I have a larger refactor variant that migrates
hdr_find_split() to the validated hdr_next_de() helper and closes
the whole per-entry size-read class for that walker. Happy to
send it as v2 if you prefer the wider change; otherwise this
minimal guard is scoped to the actual memmove underflow site and
is easier to backport.

fs/ntfs3/index.c | 14 ++++++++++++++
1 file changed, 14 insertions(+)

diff --git a/fs/ntfs3/index.c b/fs/ntfs3/index.c
index 2c43e7c27861..24add048b4b5 100644
--- a/fs/ntfs3/index.c
+++ b/fs/ntfs3/index.c
@@ -1844,6 +1844,20 @@ indx_insert_into_buffer(struct ntfs_index *indx, struct ntfs_inode *ni,
memcpy(up_e, sp, sp_size);

used1 = le32_to_cpu(hdr1->used);
+
+ /*
+ * hdr_find_split does not validate per-entry sizes, so a crafted
+ * NTFS_DE whose le16 size field is out of range can place sp such
+ * that (PtrOffset(hdr1, sp) + sp_size) exceeds used1. Without this
+ * guard the u32 'used = used1 - to_copy - sp_size' underflows and
+ * the subsequent memmove count becomes a near-4-GiB value,
+ * triggering an out-of-bounds kernel write.
+ */
+ if (PtrOffset(hdr1, sp) + sp_size > used1) {
+ err = -EINVAL;
+ goto out;
+ }
+
hdr1_saved = kmemdup(hdr1, used1, GFP_NOFS);
if (!hdr1_saved) {
err = -ENOMEM;
--
2.53.0