Re: linux-next: manual merge of the mm-stable tree with the cifs tree
From: David Howells
Date: Tue Feb 21 2023 - 09:40:37 EST
Stephen Rothwell <sfr@xxxxxxxxxxxxxxxx> wrote:
> Andrew has already asked for it to be merged, so it's up to Linus.
>
> You could fetch it yourself and do a trial merge and send me your
> resolution ..
>
> git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm tags/mm-stable-2023-02-20-13-37
Okay, did that. See attached. Also here:
https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/log/?h=iov-cifs-mm
David
---
commit 71ad4f67439e60fe04bbf7aed8870e6f83a5d15e
Author: David Howells <dhowells@xxxxxxxxxx>
Date: Tue Feb 21 13:23:05 2023 +0000
cifs: Handle transition to filemap_get_folios_tag()
filemap_get_folios_tag() is being added and find_get_pages_range_tag() is
being removed in effectively a single event. This causes a problem for
the:
cifs: Change the I/O paths to use an iterator rather than a page list
patch[1] on the cifs/for-next branch, as it adds a new user of the latter
(which is going away) but can't yet be converted to use the former (which
doesn't yet exist upstream).
Here's a conversion patch that could be applied at merge time to deal with
this. The new cifs_writepages_region() is based directly on
afs_writepages_region(), and the AFS changes in the mm tree[2]:
commit acc8d8588cb7e3e64b0d2fa611dad06574cd67b1
Author: Vishal Moola (Oracle) <vishal.moola@xxxxxxxxx>
afs: convert afs_writepages_region() to use filemap_get_folios_tag()
can be replicated in cifs almost exactly.
Signed-off-by: David Howells <dhowells@xxxxxxxxxx>
cc: Stephen Rothwell <sfr@xxxxxxxxxxxxxxxx>
cc: Steve French <sfrench@xxxxxxxxx>
cc: Shyam Prasad N <nspmangalore@xxxxxxxxx>
cc: Rohith Surabattula <rohiths.msft@xxxxxxxxx>
cc: Tom Talpey <tom@xxxxxxxxxx>
cc: Paulo Alcantara <pc@xxxxxx>
cc: Jeff Layton <jlayton@xxxxxxxxxx>
cc: linux-cifs@xxxxxxxxxxxxxxx
cc: Vishal Moola (Oracle) <vishal.moola@xxxxxxxxx>
Link: https://lore.kernel.org/r/20230216214745.3985496-15-dhowells@xxxxxxxxxx/ [1]
Link: https://lore.kernel.org/r/20230104211448.4804-6-vishal.moola@xxxxxxxxx/ [2]
diff --git a/fs/cifs/file.c b/fs/cifs/file.c
index 58801d39213a..52af9cf93c65 100644
--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -2856,78 +2856,85 @@ static int cifs_writepages_region(struct address_space *mapping,
struct writeback_control *wbc,
loff_t start, loff_t end, loff_t *_next)
{
+ struct folio_batch fbatch;
struct folio *folio;
- struct page *head_page;
+ unsigned int i;
ssize_t ret;
int n, skips = 0;
+ folio_batch_init(&fbatch);
+
do {
pgoff_t index = start / PAGE_SIZE;
- n = find_get_pages_range_tag(mapping, &index, end / PAGE_SIZE,
- PAGECACHE_TAG_DIRTY, 1, &head_page);
+ n = filemap_get_folios_tag(mapping, &index, end / PAGE_SIZE,
+ PAGECACHE_TAG_DIRTY, &fbatch);
if (!n)
break;
- folio = page_folio(head_page);
- start = folio_pos(folio); /* May regress with THPs */
+ for (i = 0; i < n; i++) {
+ folio = fbatch.folios[i];
+ start = folio_pos(folio); /* May regress with THPs */
- /* At this point we hold neither the i_pages lock nor the
- * page lock: the page may be truncated or invalidated
- * (changing page->mapping to NULL), or even swizzled
- * back from swapper_space to tmpfs file mapping
- */
- if (wbc->sync_mode != WB_SYNC_NONE) {
- ret = folio_lock_killable(folio);
- if (ret < 0) {
- folio_put(folio);
- return ret;
- }
- } else {
- if (!folio_trylock(folio)) {
- folio_put(folio);
- return 0;
+ /* At this point we hold neither the i_pages lock nor the
+ * page lock: the page may be truncated or invalidated
+ * (changing page->mapping to NULL), or even swizzled
+ * back from swapper_space to tmpfs file mapping
+ */
+ if (wbc->sync_mode != WB_SYNC_NONE) {
+ ret = folio_lock_killable(folio);
+ if (ret < 0) {
+ folio_batch_release(&fbatch);
+ return ret;
+ }
+ } else {
+ if (!folio_trylock(folio))
+ continue;
}
- }
- if (folio_mapping(folio) != mapping ||
- !folio_test_dirty(folio)) {
- start += folio_size(folio);
- folio_unlock(folio);
- folio_put(folio);
- continue;
- }
+ if (folio->mapping != mapping ||
+ !folio_test_dirty(folio)) {
+ start += folio_size(folio);
+ folio_unlock(folio);
+ continue;
+ }
- if (folio_test_writeback(folio) ||
- folio_test_fscache(folio)) {
- folio_unlock(folio);
- if (wbc->sync_mode != WB_SYNC_NONE) {
- folio_wait_writeback(folio);
+ if (folio_test_writeback(folio) ||
+ folio_test_fscache(folio)) {
+ folio_unlock(folio);
+ if (wbc->sync_mode != WB_SYNC_NONE) {
+ folio_wait_writeback(folio);
#ifdef CONFIG_CIFS_FSCACHE
- folio_wait_fscache(folio);
+ folio_wait_fscache(folio);
#endif
- } else {
- start += folio_size(folio);
- }
- folio_put(folio);
- if (wbc->sync_mode == WB_SYNC_NONE) {
- if (skips >= 5 || need_resched())
- break;
- skips++;
+ } else {
+ start += folio_size(folio);
+ }
+ if (wbc->sync_mode == WB_SYNC_NONE) {
+ if (skips >= 5 || need_resched()) {
+ *_next = start;
+ return 0;
+ }
+ skips++;
+ }
+ continue;
}
- continue;
- }
- if (!folio_clear_dirty_for_io(folio))
- /* We hold the page lock - it should've been dirty. */
- WARN_ON(1);
+ if (!folio_clear_dirty_for_io(folio))
+ /* We hold the page lock - it should've been dirty. */
+ WARN_ON(1);
- ret = cifs_write_back_from_locked_folio(mapping, wbc, folio, start, end);
- folio_put(folio);
- if (ret < 0)
- return ret;
+ ret = cifs_write_back_from_locked_folio(mapping, wbc,
+ folio, start, end);
+ if (ret < 0) {
+ folio_batch_release(&fbatch);
+ return ret;
+ }
+
+ start += ret;
+ }
- start += ret;
+ folio_batch_release(&fbatch);
cond_resched();
} while (wbc->nr_to_write > 0);