Re: [PATCH v7 3/8] mm: zswap: Refactor code to store an entry in zswap xarray.

From: Yosry Ahmed
Date: Tue Sep 24 2024 - 15:15:47 EST


On Mon, Sep 23, 2024 at 6:17 PM Kanchana P Sridhar
<kanchana.p.sridhar@xxxxxxxxx> wrote:
>
> Added a new procedure zswap_store_entry() that refactors the code
> currently in zswap_store() to store an entry in the zswap xarray.
> This will allow us to call this procedure to store the swap offset
> of each page of an mTHP in the xarray, as part of zswap_store()
> supporting mTHP.
>
> Also, made a minor edit to the comment for 'struct zswap_entry' to remove
> the description of the 'value' member, which was deleted in commit
> 20a5532ffa53d6ecf41ded920a7b0ff9c65a7dcf ("mm: remove code to handle
> same filled pages").
>
> Signed-off-by: Kanchana P Sridhar <kanchana.p.sridhar@xxxxxxxxx>
> ---
> mm/zswap.c | 51 ++++++++++++++++++++++++++++++++++-----------------
> 1 file changed, 34 insertions(+), 17 deletions(-)
>
> diff --git a/mm/zswap.c b/mm/zswap.c
> index 59b7733a62d3..fd35a81b6e36 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -190,7 +190,6 @@ static struct shrinker *zswap_shrinker;
> * section for context.
> * pool - the zswap_pool the entry's data is in
> * handle - zpool allocation handle that stores the compressed page data
> - * value - value of the same-value filled pages which have same content
> * objcg - the obj_cgroup that the compressed memory is charged to
> * lru - handle to the pool's lru used to evict pages.
> */
> @@ -1404,12 +1403,44 @@ static void shrink_worker(struct work_struct *w)
> /*********************************
> * main API
> **********************************/
> +
> +/*
> + * Returns true if the entry was successfully
> + * stored in the xarray, and false otherwise.
> + */
> +static bool zswap_store_entry(struct xarray *tree,
> + struct zswap_entry *entry)

I think zswap_tree_store() is a more descriptive name.
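
Something like this, i.e. just the rename, with the body staying as in the
patch:

static bool zswap_tree_store(struct xarray *tree,
                             struct zswap_entry *entry)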

>
> +{
> + struct zswap_entry *old;
> + pgoff_t offset = swp_offset(entry->swpentry);

Reverse xmas tree where possible please (longest to shortest declarations).
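
For example, something like this (same declarations as in the patch, just
reordered with the longest line first):

	pgoff_t offset = swp_offset(entry->swpentry);
	struct zswap_entry *old;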

>
> +
> + old = xa_store(tree, offset, entry, GFP_KERNEL);
> +

No need for the blank line here.

> + if (xa_is_err(old)) {
> + int err = xa_err(old);
> +
> + WARN_ONCE(err != -ENOMEM, "unexpected xarray error: %d\n", err);
> + zswap_reject_alloc_fail++;
> + return false;
> + }
> +
> + /*
> + * We may have had an existing entry that became stale when
> + * the folio was redirtied and now the new version is being
> + * swapped out. Get rid of the old.
> + */
> + if (old)
> + zswap_entry_free(old);
> +
> + return true;
> +}
> +
> bool zswap_store(struct folio *folio)
> {
> swp_entry_t swp = folio->swap;
> pgoff_t offset = swp_offset(swp);
> struct xarray *tree = swap_zswap_tree(swp);
> - struct zswap_entry *entry, *old;
> + struct zswap_entry *entry;
> struct obj_cgroup *objcg = NULL;
> struct mem_cgroup *memcg = NULL;
>
> @@ -1465,22 +1496,8 @@ bool zswap_store(struct folio *folio)
> entry->objcg = objcg;
> entry->referenced = true;
>
> - old = xa_store(tree, offset, entry, GFP_KERNEL);
> - if (xa_is_err(old)) {
> - int err = xa_err(old);
> -
> - WARN_ONCE(err != -ENOMEM, "unexpected xarray error: %d\n", err);
> - zswap_reject_alloc_fail++;
> + if (!zswap_store_entry(tree, entry))
> goto store_failed;
> - }
> -
> - /*
> - * We may have had an existing entry that became stale when
> - * the folio was redirtied and now the new version is being
> - * swapped out. Get rid of the old.
> - */
> - if (old)
> - zswap_entry_free(old);
>
> if (objcg) {
> obj_cgroup_charge_zswap(objcg, entry->length);
> --
> 2.27.0
>