To set up zswap_store_pages() for a clean batching implementation in
[1], this patch makes the following changes:
1) Add zswap_alloc_entries(), which allocates zswap entries upfront for
   all pages in the specified range of the folio. If this fails, an
   error status is returned to zswap_store(). (See the illustrative
   sketch below.)
2) Add zswap_compress_pages(), which calls zswap_compress() for each
   page and returns false if any zswap_compress() fails, so that
   zswap_store_pages() can clean up the allocated resources and return
   an error status to zswap_store().
3) A "store_pages_failed" label that is a catch-all for all failure points
in zswap_store_pages(). This facilitates cleaner error handling within
zswap_store_pages(), which will become important for IAA compress
batching in [1].
[1]: https://patchwork.kernel.org/project/linux-mm/list/?series=911935
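For illustration, a minimal sketch of the upfront allocation in 1).
This is not the exact patch code: the signature and the handle-sentinel
detail are assumptions, though zswap_entry_cache_alloc() and
zswap_entry_cache_free() are the existing zswap helpers.

static bool zswap_alloc_entries(struct zswap_entry *entries[],
				u8 nr_pages, int nid)
{
	u8 i;

	for (i = 0; i < nr_pages; ++i) {
		entries[i] = zswap_entry_cache_alloc(GFP_KERNEL, nid);
		if (!entries[i]) {
			/* Unwind the entries allocated so far. */
			while (i-- > 0)
				zswap_entry_cache_free(entries[i]);
			return false;
		}
		/*
		 * Mark the handle invalid so the error path can tell
		 * compressed entries from uncompressed ones.
		 */
		entries[i]->handle = (unsigned long)ERR_PTR(-EINVAL);
	}

	return true;
}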
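Similarly, a simplified sketch of how the "store_pages_failed" label in
3) funnels all failure points into a single unwind path. The signature,
the illustrative array bound, and the cleanup details are assumptions;
the xarray/LRU bookkeeping on the success path is omitted.

static bool zswap_store_pages(struct folio *folio, long start, long end,
			      struct zswap_pool *pool)
{
	struct page *pages[16];		/* illustrative bound */
	struct zswap_entry *entries[16];
	u8 nr_pages = end - start, i;

	for (i = 0; i < nr_pages; ++i)
		pages[i] = folio_page(folio, start + i);

	/* 1) Allocate all zswap entries upfront. */
	if (!zswap_alloc_entries(entries, nr_pages, folio_nid(folio)))
		return false;

	/* 2) Compress every page in the batch. */
	if (!zswap_compress_pages(pages, entries, nr_pages, pool))
		goto store_pages_failed;

	/* ... xarray stores, LRU additions, stats (omitted) ... */

	return true;

store_pages_failed:
	/* 3) Catch-all: free any zpool handles, then the entries. */
	for (i = 0; i < nr_pages; ++i) {
		if (!IS_ERR_VALUE(entries[i]->handle))
			zpool_free(pool->zpool, entries[i]->handle);
		zswap_entry_cache_free(entries[i]);
	}
	return false;
}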
Signed-off-by: Kanchana P Sridhar <kanchana.p.sridhar@xxxxxxxxx>
---
mm/zswap.c | 93 +++++++++++++++++++++++++++++++++++++++++-------------
1 file changed, 71 insertions(+), 22 deletions(-)
diff --git a/mm/zswap.c b/mm/zswap.c
index b09d1023e775..db80c66e2205 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1409,9 +1409,56 @@ static void shrink_worker(struct work_struct *w)
 * main API
 **********************************/
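+/*
+ * Compress each page in @pages into its corresponding @entries slot
+ * using @pool. Returns true if all @nr_pages pages compressed
+ * successfully; returns false on the first failure so that the caller
+ * can unwind.
+ */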
+static bool zswap_compress_pages(struct page *pages[],
+				 struct zswap_entry *entries[],
+				 u8 nr_pages,
+				 struct zswap_pool *pool)
+{
+	u8 i;
+
+	for (i = 0; i < nr_pages; ++i) {
+		if (!zswap_compress(pages[i], entries[i], pool))
+			return false;
+	}
+
+	return true;
+}