Re: [Bug] Memory allocation errors and system crashing due to buggy disk cache/inode allocations by ntfs3 kernel module.
From: Konstantin Komarov
Date: Wed Nov 19 2025 - 10:11:19 EST
On 11/19/25 11:57, craftfever@xxxxxxxxxxxx wrote:
Nov 14, 2025, 10:39 by almaz.alexandrovich@xxxxxxxxxxxxxxxxxxxx:
On 10/4/25 13:26, craftfever@xxxxxxxxxxxx wrote:
Thanks for the response. After that situation I switched to the NTFS-3G driver, which was stable but with degraded performance, at least without crashes. Today, after your response, I reverted to ntfs3 to retest the cases I mentioned, and I can't reproduce the issue either. Since there was no response for so long, I had changed many Linux settings in the meantime: I disabled USB autosuspend, disabled the rtkit-daemon canary thread (so there are no longer any highest-priority RT threads), and changed some scheduling and memory-management options, and the system is now stable. I don't expect any crashes or lockups even with a large number of files. Unfortunately, while the bug was occurring I was unable to capture any dmesg output, because the system crashed. My only guess is that it may have been an MFT allocation bug: when the disk is practically full, the driver has to allocate additional space for the MFT, where new records are stored. I suspect this because when I tested the new ntfsplus driver, I hit what looked like the same bug while downloading multiple file chunks, but without a system crash; the download manager simply aborted the download with a "memory allocation error" and corresponding dmesg errors. Right now I don't expect any issues with ntfs3, so I'll follow up if any appear. Thank you.
I'm posting here for the first time, so I treated this as a generic bug mailing list. I can say that, for example, version 6.12.50-lts is a little less prone to the bug, but it occurs there as well. I'm using Linux 6.16.10 now. So the bug has been present for a while, but I can hardly tell in which kernel version it first appeared, because earlier I never handled that large a number of files. Again, everything is fine with ntfs-3g.

Hello,
Oct 4, 2025, 14:12 by regressions@xxxxxxxxxxxxx:
On 10/4/25 13:03, craftfever@xxxxxxxxxxxx wrote:
Oct 4, 2025, 11:55 by craftfever@xxxxxxxxxxxx:
I'm hitting a serious bug when writing a large number of files to an NTFS
hard drive: shortly afterwards, memory allocation errors occur and the
system crashes. At first I thought this was a bug in the Linux kernel
itself, some kind of disk cache allocation error, but when I test the same
operations on an ext4 drive, or using the NTFS-3G module, the bug is not
present.

To reproduce the bug, try cloning two big Git repositories to an external
NTFS drive mounted with the ntfs3 module.

Thx for the report.
What kernel version are you using?
You CCed the regression list, so I assume this used to work, which leads
to two more questions: What was the last version where this worked? Could
you bisect?
Ciao, Thorsten
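For anyone else trying to reproduce this: the workload can also be mimicked without Git by writing many files in quick succession. A minimal, filesystem-agnostic sketch (the function name and parameters are my own, not from the report):

```python
import os


def write_many_files(target_dir, count=10_000, size=4096):
    """Create `count` files of `size` bytes each under `target_dir`.

    Mimics the reported workload: many file writes in quick succession.
    On a nearly full ntfs3 volume this should also force the driver to
    grow the MFT, the area the reporter suspects.
    """
    os.makedirs(target_dir, exist_ok=True)
    payload = os.urandom(size)
    written = 0
    for i in range(count):
        path = os.path.join(target_dir, f"f{i:06d}.bin")
        try:
            with open(path, "wb") as fh:
                fh.write(payload)
            written += 1
        except OSError as exc:  # e.g. ENOSPC or an allocation failure
            print(f"stopped at file {i}: {exc}")
            break
    return written
```

Pointing `target_dir` at a directory on the ntfs3 mount (e.g. `/mnt/ntfs/stress`, a placeholder for your setup) while watching dmesg should approximate the reported scenario.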
I tried to reproduce the problem by cloning multiple large Git repositories
onto an ntfs3-mounted NTFS volume, but the issue did not trigger on my side
and no system crash occurred.
Could you provide a bit more detail about your case?
- What appears in the kernel logs before the crash or before the process
enters the unkillable state? Any warnings, memory allocation errors,
stack traces, or lockdep messages from dmesg would be very useful.
- What mount options are you using for ntfs3?
- Roughly how much data or how many files are needed to trigger the
behavior?
- Does the problem happen immediately, or only after sustained I/O or high
memory pressure?
If you can capture the relevant portion of dmesg or the last messages
shown before the freeze/hang, that would help a lot in diagnosing this.
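For completeness, the mount options and a live kernel log can be captured with standard util-linux tools; the log filename is just an example:

```shell
# Show how the ntfs3 volume is currently mounted, options included.
findmnt -t ntfs3 -o TARGET,SOURCE,OPTIONS

# Stream kernel messages to a file on a *different* filesystem while
# reproducing, so the log survives even if the NTFS volume hangs.
sudo dmesg --follow | tee ~/ntfs3-dmesg.log
```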
Regards,
Konstantin
Hello,
Thanks for the update and for taking the time to retest. Even though the
issue is not currently reproducible, I’ll keep your report and the
possible MFT allocation cause in mind. If it happens again and you’re
able to capture any logs, please let me know - any extra information would
be very helpful.
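If the machine crashes before the log can be read, one standard option (a generic systemd feature, nothing specific to ntfs3) is to make the journal persistent so the previous boot's kernel messages survive a reboot; the drop-in path below is the conventional location:

```
# /etc/systemd/journald.conf.d/10-persistent.conf
[Journal]
Storage=persistent
```

After the next crash, `journalctl -k -b -1` then shows the kernel log of the boot that crashed.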
Regards,
Konstantin