Re: [PATCH] fs: use __fput_sync in close(2)
From: Christian Brauner
Date: Tue Aug 08 2023 - 14:56:28 EST
On Tue, Aug 08, 2023 at 09:57:04AM -0700, Linus Torvalds wrote:
> On Mon, 7 Aug 2023 at 22:57, Eric W. Biederman <ebiederm@xxxxxxxxxxxx> wrote:
> >
> > Taking a quick look at the history it appears that fput was always
> > synchronous [..]
>
> Indeed. Synchronous used to be the only case.
>
> The reason it's async now is because several drivers etc do the final
> close from nasty contexts, so 'fput()' needed to be async for the
> general case.
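
For anyone following along, this is roughly why: the final __fput() gets
deferred so it never runs in atomic or interrupt context. A simplified
sketch of fs/file_table.c from memory (the struct file member names and
the exact fallback details are approximations, not the literal code):

void fput(struct file *file)
{
	if (atomic_long_dec_and_test(&file->f_count)) {
		struct task_struct *task = current;

		if (likely(!in_interrupt() && !(task->flags & PF_KTHREAD))) {
			/* Normal task context: defer __fput() to the
			 * return-to-userspace path via task_work. */
			init_task_work(&file->f_rcuhead, ____fput);
			if (!task_work_add(task, &file->f_rcuhead, TWA_RESUME))
				return;
			/* task_work_add() can fail if the task is already
			 * exiting; fall through to the workqueue path. */
		}

		/* Interrupt context or kernel threads: punt to a workqueue. */
		if (llist_add(&file->f_llist, &delayed_fput_list))
			schedule_delayed_work(&delayed_fput_work, 1);
	}
}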
>
> > All 3 issues taken together says that a synchronous fput is a
> > loaded foot gun that must be used very carefully. That said
> > close(2) does seem to be a reliably safe place to be synchronous.
>
> Yes.
>
> That said, I detest Mateusz' patch. I hate these kinds of "do
> different things based on flags" interfaces. Particularly when it
> spreads out like this.
>
> So I do like having close() be synchronous, because we actually do
> have correctness issues wrt the close having completed properly by the
> time we return to user space, so we have that "task_work_add()" there
> that will force the synchronization anyway before we return.
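
And the reason that queued work is guaranteed to run before the syscall
returns, again only as a rough sketch (simplified from the generic entry
code; exact signature and flag handling differ between versions):

/* TWA_RESUME sets TIF_NOTIFY_RESUME, and the exit-to-usermode loop
 * drains pending task_work before we ever get back to user space. */
static unsigned long exit_to_user_mode_loop(struct pt_regs *regs,
					    unsigned long ti_work)
{
	while (ti_work & EXIT_TO_USER_MODE_WORK) {
		/* ... other exit work (signals, rescheduling, ...) elided ... */
		if (ti_work & _TIF_NOTIFY_RESUME)
			resume_user_mode_work(regs);	/* -> task_work_run() */

		ti_work = read_thread_flags();
	}
	return ti_work;
}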
>
> So the system call case is indeed a special case. Arguably
> close_range() could be too, but honestly, once you start doing ranges
> of file descriptors, you are (a) doing something fairly unusual, and
> (b) the "queue them up on the task work" might actually be a *good*
> thing.
>
> It's definitely not a good thing for the single-fd-close case, though.
>
> But even if we want to do this - and I have absolutely no objections
> to it conceptually as per above - we need to be a lot more surgical
> about it, and not pass stupid flags around.
>
> Here's a TOTALLY UNTESTED(!) patch that I think effectively does what
> Mateusz wants done, but does it all within just fs/open.c and only for
> the obvious context of the close() system call itself.
>
> All it needs is to just split out the "flush" part from filp_close(),
> and we already had all the other infrastructure for this operation.
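
Just to make the shape of that split concrete (this is not the actual
patch, only an untested sketch, and the filp_flush() helper name is made
up):

/* Flush-only half of filp_close(); sanity checks elided. */
static int filp_flush(struct file *filp, fl_owner_t id)
{
	int retval = 0;

	if (filp->f_op->flush)
		retval = filp->f_op->flush(filp, id);

	if (likely(!(filp->f_mode & FMODE_PATH))) {
		dnotify_flush(filp, id);
		locks_remove_posix(filp, id);
	}
	return retval;
}

int filp_close(struct file *filp, fl_owner_t id)
{
	int retval = filp_flush(filp, id);

	fput(filp);		/* stays async for the general case */
	return retval;
}

/* close(2) itself would then do filp_flush() + __fput_sync() instead of
 * calling filp_close(), since the syscall context is known to be safe
 * for a synchronous final fput. */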
>
> Mateusz, two questions:
>
> (a) does this patch work for you?
>
> (b) do you have numbers for this all?

I really would like to have good ways of measuring the impact of changes
like this, because I'm a little scared of an endless stream of
optimization patches that overall either complicate or uglify our code.
Maybe I'm paranoid, maybe that's dumb, but it worries me.
>
> and if it all looks good I have no problems with this kind of much
> more targeted and obvious patch.
>
> Again: TOTALLY UNTESTED. It looks completely obvious, but mistakes happen.
I think you're at least missing the removal of the PF_KTHREAD check in
void __fput_sync(struct file *file)
{
if (atomic_long_dec_and_test(&file->f_count)) {
- struct task_struct *task = current;
- BUG_ON(!(task->flags & PF_KTHREAD));
__fput(file);
}
}
so right now we'd hit that BUG_ON(). It'd be neat to leave the check in
so __fput_sync() doesn't proliferate to non-PF_KTHREAD callers without
us noticing. So maybe we just need a tiny primitive.
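
Something like the below, maybe (purely illustrative, the name is made
up; it would live in fs/file_table.c next to __fput() so the PF_KTHREAD
assertion can stay in __fput_sync() itself):

/* Synchronous final fput for the close(2) path only: no PF_KTHREAD
 * assertion, but also no task_work deferral, so it must only be called
 * from contexts we have explicitly audited as safe. */
static void fput_close_sync(struct file *file)
{
	if (atomic_long_dec_and_test(&file->f_count))
		__fput(file);
}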