Re: [PATCH v2 1/3] mm: kick writeback flusher instead of inline flush for IOCB_DONTCACHE
From: IBM
Date: Thu Apr 16 2026 - 23:49:54 EST
Jeff Layton <jlayton@xxxxxxxxxx> writes:
> On Thu, 2026-04-09 at 07:10 +0530, Ritesh Harjani wrote:
>> Jeff Layton <jlayton@xxxxxxxxxx> writes:
>>
>> > The IOCB_DONTCACHE writeback path in generic_write_sync() calls
>> > filemap_flush_range() on every write, submitting writeback inline in
>> > the writer's context. Perf lock contention profiling shows the
>> > performance problem is not lock contention but the writeback submission
>> > work itself — walking the page tree and submitting I/O blocks the
>> > writer for milliseconds, inflating p99.9 latency from 23ms (buffered)
>> > to 93ms (dontcache).
>> >
>> > Replace the inline filemap_flush_range() call with a
>> > wakeup_flusher_threads_bdi() call that kicks the BDI's flusher thread
>> > to drain dirty pages in the background. This moves writeback
>> > submission completely off the writer's hot path. The flusher thread
>> > handles writeback asynchronously, naturally coalescing and rate-limiting
>> > I/O without any explicit skip-if-busy or dirty pressure checks.
>> >
>>
>> Thanks, Jeff, for explaining this. It makes sense now.
>>
>>
>> > Add WB_REASON_DONTCACHE as a new writeback reason for tracing
>> > visibility.
>> >
>> > Signed-off-by: Jeff Layton <jlayton@xxxxxxxxxx>
>> > ---
>> > fs/fs-writeback.c | 14 ++++++++++++++
>> > include/linux/backing-dev-defs.h | 1 +
>> > include/linux/fs.h | 6 ++----
>> > include/trace/events/writeback.h | 3 ++-
>> > 4 files changed, 19 insertions(+), 5 deletions(-)
>> >
>> > diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
>> > index 3c75ee025bda..88dc31388a31 100644
>> > --- a/fs/fs-writeback.c
>> > +++ b/fs/fs-writeback.c
>> > @@ -2466,6 +2466,20 @@ void wakeup_flusher_threads_bdi(struct backing_dev_info *bdi,
>> > rcu_read_unlock();
>> > }
>> >
>> > +/**
>> > + * filemap_dontcache_kick_writeback - kick flusher for IOCB_DONTCACHE writes
>> > + * @mapping: address_space that was just written to
>> > + *
>> > + * Wake the BDI flusher thread to start writeback of dirty pages in the
>> > + * background.
>> > + */
>> > +void filemap_dontcache_kick_writeback(struct address_space *mapping)
>>
>> This API gives the wrong impression that we are kicking writeback
>> only for dirty pages belonging to this inode's address_space mapping,
>> when in fact we are starting writeback for everything on the
>> respective bdi.
>>
>> So instead why not just export symbol for wakeup_flusher_threads_bdi()
>> and use it instead?
>>
>> If not, then IMO at least making it...
>> filemap_kick_writeback_all(mapping, enum wb_reason)
>>
>> ... might be better.
>
> I did draft up a version of this -- adding a way to tell the flusher
> thread to only flush a single inode. The performance is better than
> today's DONTCACHE, but was worse than just kicking the flusher thread.
>
> I think we're probably better off not doing this because we lose some
> batching opportunities by trying to force out a single inode's pages
> rather than allowing the thread to do its thing.
>
So, if I understood correctly, Christoph might be suggesting a
different approach here.

Instead of kicking the flusher thread to write back pages for a single
inode, if we can track the number of dontcache pages
(get_nr_dontcache_pages()), then we can kick the flusher with that many
pages as the writeback target. That way we still reduce dirty page
cache pressure, which is the problem RWF_DONTCACHE is supposed to
solve. It doesn't necessarily mean that only dontcache-marked folios
will get written back, though.

If we implement that, it should still help with the batching problem
you mentioned, and hopefully should not cause a major regression for
the workload Jan mentioned.
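
If it helps, here is a rough sketch of what I mean. Treat it as
pseudocode: get_nr_dontcache_pages() does not exist today, and I am
assuming a hypothetical variant of wakeup_flusher_threads_bdi() that
accepts a page-count target, which the current kernel API does not.

```c
/*
 * Pseudocode only: get_nr_dontcache_pages() and the nr_pages argument
 * to wakeup_flusher_threads_bdi() are hypothetical.
 */
void filemap_dontcache_kick_writeback(struct address_space *mapping)
{
	struct backing_dev_info *bdi = inode_to_bdi(mapping->host);
	long nr_pages = get_nr_dontcache_pages(bdi);	/* hypothetical */

	if (nr_pages > 0)
		/* kick bdi-wide writeback, targeting ~nr_pages pages */
		wakeup_flusher_threads_bdi(bdi, nr_pages,
					   WB_REASON_DONTCACHE);
}
```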
Please feel free to correct my understanding here.
-ritesh