[PATCH 09/14] writeback: Simplify the loops in write_cache_pages()
From: Christoph Hellwig
Date: Mon Feb 12 2024 - 02:17:04 EST
From: "Matthew Wilcox (Oracle)" <willy@xxxxxxxxxxxxx>
Collapse the two nested loops into one. This is needed as a step
towards turning this into an iterator.
Note that this drops the "index <= end" check in the previous outer loop
and just relies on filemap_get_folios_tag() to return 0 entries when
index > end. This actually has a subtle implication when end == -1
because then the returned index will be -1 as well and thus if there is
a page present at index -1, we could loop indefinitely. But as the
comment in filemap_get_folios_tag() documents, this case is already
broken anyway, so we should not worry about it here either. The fix for
that would probably be a change to the filemap_get_folios_tag() calling
convention.
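For illustration only (not part of this patch), here is a minimal
user-space sketch of the wraparound in question; the variable names are
made up and only mimic the pgoff_t arithmetic, assuming pgoff_t is an
unsigned long as in the kernel:

	#include <stdio.h>

	int main(void)
	{
		/* end == -1 stored in an unsigned type is ULONG_MAX */
		unsigned long end = (unsigned long)-1;
		/* suppose the last tagged folio sits at index ULONG_MAX */
		unsigned long index = end;

		/* advancing past that folio wraps the index back to 0 */
		index = index + 1;

		printf("index = %lu\n", index);               /* prints 0 */
		printf("index <= end = %d\n", index <= end);  /* still 1  */
		return 0;
	}

So a caller that keeps asking filemap_get_folios_tag() for more folios
starting from the updated index never sees the scan terminate.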
Signed-off-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
[hch: updated the commit log based on feedback from Jan Kara]
Signed-off-by: Christoph Hellwig <hch@xxxxxx>
Reviewed-by: Brian Foster <bfoster@xxxxxxxxxx>
Reviewed-by: Jan Kara <jack@xxxxxxx>
Acked-by: Dave Chinner <dchinner@xxxxxxxxxx>
---
mm/page-writeback.c | 75 ++++++++++++++++++++++-----------------------
1 file changed, 36 insertions(+), 39 deletions(-)
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 045ca252c0423d..a94a77b1805969 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2454,6 +2454,7 @@ int write_cache_pages(struct address_space *mapping,
int error;
struct folio *folio;
pgoff_t end; /* Inclusive */
+ int i = 0;
if (wbc->range_cyclic) {
wbc->index = mapping->writeback_index; /* prev offset */
@@ -2467,53 +2468,49 @@ int write_cache_pages(struct address_space *mapping,
folio_batch_init(&wbc->fbatch);
- while (wbc->index <= end) {
- int i;
-
- writeback_get_batch(mapping, wbc);
-
+ for (;;) {
+ if (i == wbc->fbatch.nr) {
+ writeback_get_batch(mapping, wbc);
+ i = 0;
+ }
if (wbc->fbatch.nr == 0)
break;
- for (i = 0; i < wbc->fbatch.nr; i++) {
- folio = wbc->fbatch.folios[i];
+ folio = wbc->fbatch.folios[i++];
- folio_lock(folio);
- if (!folio_prepare_writeback(mapping, wbc, folio)) {
- folio_unlock(folio);
- continue;
- }
+ folio_lock(folio);
+ if (!folio_prepare_writeback(mapping, wbc, folio)) {
+ folio_unlock(folio);
+ continue;
+ }
- trace_wbc_writepage(wbc, inode_to_bdi(mapping->host));
+ trace_wbc_writepage(wbc, inode_to_bdi(mapping->host));
- error = writepage(folio, wbc, data);
- wbc->nr_to_write -= folio_nr_pages(folio);
+ error = writepage(folio, wbc, data);
+ wbc->nr_to_write -= folio_nr_pages(folio);
- if (error == AOP_WRITEPAGE_ACTIVATE) {
- folio_unlock(folio);
- error = 0;
- }
+ if (error == AOP_WRITEPAGE_ACTIVATE) {
+ folio_unlock(folio);
+ error = 0;
+ }
- /*
- * For integrity writeback we have to keep going until
- * we have written all the folios we tagged for
- * writeback above, even if we run past wbc->nr_to_write
- * or encounter errors.
- * We stash away the first error we encounter in
- * wbc->saved_err so that it can be retrieved when we're
- * done. This is because the file system may still have
- * state to clear for each folio.
- *
- * For background writeback we exit as soon as we run
- * past wbc->nr_to_write or encounter the first error.
- */
- if (wbc->sync_mode == WB_SYNC_ALL) {
- if (error && !ret)
- ret = error;
- } else {
- if (error || wbc->nr_to_write <= 0)
- goto done;
- }
+ /*
+ * For integrity writeback we have to keep going until we have
+ * written all the folios we tagged for writeback above, even if
+ * we run past wbc->nr_to_write or encounter errors.
+ * We stash away the first error we encounter in wbc->saved_err
+ * so that it can be retrieved when we're done. This is because
+ * the file system may still have state to clear for each folio.
+ *
+ * For background writeback we exit as soon as we run past
+ * wbc->nr_to_write or encounter the first error.
+ */
+ if (wbc->sync_mode == WB_SYNC_ALL) {
+ if (error && !ret)
+ ret = error;
+ } else {
+ if (error || wbc->nr_to_write <= 0)
+ goto done;
}
}
--
2.39.2