Re: [PATCH v4 2/4] mm: kick writeback flusher for IOCB_DONTCACHE with targeted dirty tracking
From: Jan Kara
Date: Sun May 03 2026 - 14:18:15 EST
On Fri 01-05-26 10:49:36, Jeff Layton wrote:
> The IOCB_DONTCACHE writeback path in generic_write_sync() calls
> filemap_flush_range() on every write, submitting writeback inline in
> the writer's context. Profiling with perf lock contention shows that the
> bottleneck is not lock contention but the writeback submission work
> itself: walking the page tree and submitting I/O blocks the writer
> for milliseconds, inflating p99.9 latency from 23ms (buffered) to 93ms
> (dontcache).
>
> Replace the inline filemap_flush_range() call with a flusher kick that
> drains dirty pages in the background. This moves writeback submission
> completely off the writer's hot path.
>
> To avoid flushing unrelated buffered dirty data, add a dedicated
> WB_start_dontcache bit and wb_check_start_dontcache() handler that uses
> the per-wb WB_DONTCACHE_DIRTY counter to determine how many pages to
> write back. The flusher writes back that many pages from the oldest dirty
> inodes (not restricted to dontcache-specific inodes). This helps
> preserve I/O batching while limiting the scope of expedited writeback.
>
> Like WB_start_all, the WB_start_dontcache bit coalesces multiple
> DONTCACHE writes into a single flusher wakeup without per-write
> allocations.
>
> Also add WB_REASON_DONTCACHE as a new writeback reason for tracing
> visibility, and target the correct cgroup writeback domain via
> unlocked_inode_to_wb_begin().
>
> dontcache-bench results (same host, T6F_SKL_1920GBF, 251 GiB RAM,
> xfs on NVMe, fio io_uring):
>
> Buffered and direct I/O paths are unaffected by this patchset. All
> improvements are confined to the dontcache path:
>
> Single-stream throughput (MB/s):
> Before After Change
> seq-write/dontcache 298 897 +201%
> rand-write/dontcache 131 236 +80%
>
> Tail latency improvements (seq-write/dontcache):
> p99: 135,266 us -> 23,986 us (-82%)
> p99.9: 8,925,479 us -> 28,443 us (-99.7%)
>
> Multi-writer (4 jobs, sequential write):
> Before After Change
> dontcache aggregate (MB/s) 2,529 4,532 +79%
> dontcache p99 (us) 8,553 1,002 -88%
> dontcache p99.9 (us) 109,314 1,057 -99%
>
> Dontcache multi-writer throughput now matches buffered (4,532 vs
> 4,616 MB/s).
>
> 32-file write (Axboe test):
> Before After Change
> dontcache aggregate (MB/s) 1,548 3,499 +126%
> dontcache p99 (us) 10,170 602 -94%
> Peak dirty pages (MB) 1,837 213 -88%
>
> Dontcache now reaches 81% of buffered throughput (was 35%).
>
> Competing writers (dontcache vs buffered, separate files):
> Before After
> buffered writer 868 433 MB/s
> dontcache writer 415 433 MB/s
> Aggregate 1,284 866 MB/s
>
> Previously the buffered writer starved the dontcache writer 2:1.
> With per-bdi_writeback tracking, both writers now receive equal
> bandwidth. The aggregate matches the buffered-vs-buffered baseline
> (863 MB/s), indicating fair sharing regardless of I/O mode.
>
> The dontcache writer's p99.9 latency collapsed from 119 ms to
> 33 ms (-73%), eliminating the severe periodic stalls seen in the
> baseline. Both writers now share identical latency profiles,
> matching the buffered-vs-buffered pattern.
>
> The per-bdi_writeback dirty tracking dramatically reduces peak dirty
> pages in dontcache workloads, with the 32-file test dropping from
> 1.8 GB to 213 MB. Dontcache sequential write throughput triples and
> multi-writer throughput reaches parity with buffered I/O, with tail
> latencies collapsing by 1-2 orders of magnitude.
>
> Assisted-by: Claude:claude-opus-4-6
> Signed-off-by: Jeff Layton <jlayton@xxxxxxxxxx>
Nice and looks good to me now. Feel free to add:
Reviewed-by: Jan Kara <jack@xxxxxxx>
One nit below:
> +/**
> + * filemap_dontcache_kick_writeback - kick flusher for IOCB_DONTCACHE writes
> + * @mapping: address_space that was just written to
> + *
> + * Kick the writeback flusher thread to expedite writeback of dontcache
> + * dirty pages. Uses a dedicated WB_start_dontcache bit so that only
> + * pages tracked by WB_DONTCACHE_DIRTY are written back, rather than
> + * flushing the entire BDI's dirty pages.
This comment is a bit confusing since in fact we write back arbitrary dirty
pages. It is only the number of pages that is influenced by
WB_DONTCACHE_DIRTY. So I'd rephrase the last sentence like: We queue
writeback for the inode's wb for as many pages as there are dontcache
pages, but we don't restrict writeback to dontcache pages only. This
significantly improves performance over either writing all wb's pages or
writing only dontcache pages. Although it doesn't guarantee quick writeback
and reclaim of dontcache pages, it keeps the amount of dirty pages in
check, and over the longer term dontcache pages get written and reclaimed
by background writeback even with this rough heuristic.
Honza
> + */
> +void filemap_dontcache_kick_writeback(struct address_space *mapping)
> +{
> + struct inode *inode = mapping->host;
> + struct bdi_writeback *wb;
> + struct wb_lock_cookie cookie = {};
> +
> + wb = unlocked_inode_to_wb_begin(inode, &cookie);
> + wb_start_dontcache_writeback(wb);
> + unlocked_inode_to_wb_end(inode, &cookie);
> +}
> +EXPORT_SYMBOL_GPL(filemap_dontcache_kick_writeback);
> +
> /*
> * Wakeup the flusher threads to start writeback of all currently dirty pages
> */
> diff --git a/include/linux/backing-dev-defs.h b/include/linux/backing-dev-defs.h
> index cb660dd37286..4f1084937315 100644
> --- a/include/linux/backing-dev-defs.h
> +++ b/include/linux/backing-dev-defs.h
> @@ -26,6 +26,7 @@ enum wb_state {
> WB_writeback_running, /* Writeback is in progress */
> WB_has_dirty_io, /* Dirty inodes on ->b_{dirty|io|more_io} */
> WB_start_all, /* nr_pages == 0 (all) work pending */
> + WB_start_dontcache, /* dontcache writeback pending */
> };
>
> enum wb_stat_item {
> @@ -56,6 +57,7 @@ enum wb_reason {
> */
> WB_REASON_FORKER_THREAD,
> WB_REASON_FOREIGN_FLUSH,
> + WB_REASON_DONTCACHE,
>
> WB_REASON_MAX,
> };
> diff --git a/include/linux/fs.h b/include/linux/fs.h
> index 11559c513dfb..df72b42a9e9b 100644
> --- a/include/linux/fs.h
> +++ b/include/linux/fs.h
> @@ -2624,6 +2624,7 @@ extern int __must_check file_write_and_wait_range(struct file *file,
> loff_t start, loff_t end);
> int filemap_flush_range(struct address_space *mapping, loff_t start,
> loff_t end);
> +void filemap_dontcache_kick_writeback(struct address_space *mapping);
>
> static inline int file_write_and_wait(struct file *file)
> {
> @@ -2657,10 +2658,7 @@ static inline ssize_t generic_write_sync(struct kiocb *iocb, ssize_t count)
> if (ret)
> return ret;
> } else if (iocb->ki_flags & IOCB_DONTCACHE) {
> - struct address_space *mapping = iocb->ki_filp->f_mapping;
> -
> - filemap_flush_range(mapping, iocb->ki_pos - count,
> - iocb->ki_pos - 1);
> + filemap_dontcache_kick_writeback(iocb->ki_filp->f_mapping);
> }
>
> return count;
> diff --git a/include/trace/events/writeback.h b/include/trace/events/writeback.h
> index bdac0d685a98..13ee076ccd16 100644
> --- a/include/trace/events/writeback.h
> +++ b/include/trace/events/writeback.h
> @@ -44,7 +44,8 @@
> EM( WB_REASON_PERIODIC, "periodic") \
> EM( WB_REASON_FS_FREE_SPACE, "fs_free_space") \
> EM( WB_REASON_FORKER_THREAD, "forker_thread") \
> - EMe(WB_REASON_FOREIGN_FLUSH, "foreign_flush")
> + EM( WB_REASON_FOREIGN_FLUSH, "foreign_flush") \
> + EMe(WB_REASON_DONTCACHE, "dontcache")
>
> WB_WORK_REASON
>
>
> --
> 2.54.0
>
--
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR