Re: [RFC PATCH] vfs: limit directory child dentry retention
From: Linus Torvalds
Date: Tue Mar 31 2026 - 22:53:47 EST
On Tue, 31 Mar 2026 at 19:21, Ian Kent <raven@xxxxxxxxxx> wrote:
>
> On 1/4/26 09:47, Gao Xiang wrote:
> >>>
> >>> `d_lockref.count` can exceed `int` on very very large
> >>> directories in reality (also combined with cached
> >>> negative dentries).
>
> I hadn't considered such a large number of dentries, so I wasn't
> trying to resolve this case, and I guess the change here would
> only postpone the need to re-think the dcache design, which I
> suspect is what would be needed.
I think it should be trivial to limit the lockref count. We did that
for the page count, and it wasn't all that hard: see try_get_page().
It doesn't even require complicated atomic sequences, because you
don't have to be very precise. If things get close to being too large,
you just fail it. And you don't fail every kind of operation, you only
fail the ones that are accessible to users as a way to artificially
inflate the numbers.
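To illustrate the idea (this is a hypothetical userspace sketch, not the kernel's actual try_get_page() or lockref code): the increment stays a plain atomic add, and the "limit" is just a cheap relaxed read against a threshold so far below overflow that racy slop past it is harmless.

```c
#include <limits.h>
#include <stdatomic.h>
#include <stdbool.h>

/*
 * Sketch of a saturating reference take.  The threshold sits well
 * below INT_MAX, so the check does not need to be precise: a few
 * racing increments slipping past it cannot get near overflow.
 */
#define REF_SATURATE_LIMIT (INT_MAX / 2)

struct ref {
	atomic_int count;
};

/* Fail the user-triggerable "take a reference" path near the limit. */
static bool ref_try_get(struct ref *r)
{
	/* Imprecise check first: no compare-and-swap loop needed. */
	if (atomic_load_explicit(&r->count, memory_order_relaxed) >
	    REF_SATURATE_LIMIT)
		return false;

	atomic_fetch_add_explicit(&r->count, 1, memory_order_relaxed);
	return true;
}
```

Only the paths a user can drive repeatedly would call ref_try_get(); internal, bounded reference takes keep using the unconditional increment.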
In the case of page counts, it was things like splicing the same page
over and over again, so the only operation that actually needed that
"stop at big numbers" was generic_pipe_buf_get().
I'm not sure how you make up a large number of dentries in
directories if we just have that limit on negative dentries (which
seems reasonable).
So I think this is very analogous to that page count thing.
Linus