RE: [PATCH v7 6/8] mm: zswap: Support mTHP swapout in zswap_store().

From: Sridhar, Kanchana P
Date: Wed Sep 25 2024 - 14:49:33 EST


> -----Original Message-----
> From: Johannes Weiner <hannes@xxxxxxxxxxx>
> Sent: Wednesday, September 25, 2024 7:28 AM
> To: Sridhar, Kanchana P <kanchana.p.sridhar@xxxxxxxxx>
> Cc: linux-kernel@xxxxxxxxxxxxxxx; linux-mm@xxxxxxxxx;
> yosryahmed@xxxxxxxxxx; nphamcs@xxxxxxxxx;
> chengming.zhou@xxxxxxxxx; usamaarif642@xxxxxxxxx;
> shakeel.butt@xxxxxxxxx; ryan.roberts@xxxxxxx; Huang, Ying
> <ying.huang@xxxxxxxxx>; 21cnbao@xxxxxxxxx; akpm@xxxxxxxxxxxxxxxxxxxx;
> Zou, Nanhai <nanhai.zou@xxxxxxxxx>; Feghali, Wajdi K
> <wajdi.k.feghali@xxxxxxxxx>; Gopal, Vinodh <vinodh.gopal@xxxxxxxxx>
> Subject: Re: [PATCH v7 6/8] mm: zswap: Support mTHP swapout in
> zswap_store().
>
> On Mon, Sep 23, 2024 at 06:17:07PM -0700, Kanchana P Sridhar wrote:
> > zswap_store() will now store mTHP and PMD-size THP folios by compressing
>
> The hugepage terminology throughout the patches is a bit convoluted.
>
> There is no real distinction in this code between PMD-size THPs and
> sub-PMD-sized mTHPs, for example. I think "mTHP" made sense when they
> were added, to distinguish them from conventional THPs, but using this
> term going forward just causes confusion, IMO.
>
> We're going through a big effort in the codebase to call all of these
> things simply "folios" - which stands for "one or more pages". If you
> want to emphasize the "more than one page", the convention is to call
> it a "large folio". (If you need to emphasize that it's PMD size -
> which doesn't apply to these patches, but just for the record - the
> convention is "pmd-mappable folio".)
>
> So what this patch set does is "support large folios in zswap".

Sure. I will modify this to say "support large folios in zswap stores",
per Yosry's follow-up clarification.
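
For what it's worth, the existing folio helpers already map cleanly onto
that terminology. A throwaway illustration (folio_size_class() is a
hypothetical helper, not part of the patch):

	/* Illustration only: classify a folio using existing helpers. */
	static const char *folio_size_class(struct folio *folio)
	{
		if (folio_test_pmd_mappable(folio))
			return "pmd-mappable folio";
		if (folio_test_large(folio))
			return "large folio";
		return "order-0 folio";
	}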

>
> > @@ -1551,51 +1559,63 @@ static bool __maybe_unused zswap_store_page(struct folio *folio, long index,
> >  	return false;
> >  }
> >
> > +/*
> > + * Modified to store mTHP folios. Each page in the mTHP will be compressed
> > + * and stored sequentially.
> > + */
>
> This is a changelog, not a code comment ;) Please delete it.

Ok, sure.

>
> >  bool zswap_store(struct folio *folio)
> >  {
> >  	long nr_pages = folio_nr_pages(folio);
> >  	swp_entry_t swp = folio->swap;
> >  	pgoff_t offset = swp_offset(swp);
> >  	struct xarray *tree = swap_zswap_tree(swp);
> > -	struct zswap_entry *entry;
> >  	struct obj_cgroup *objcg = NULL;
> >  	struct mem_cgroup *memcg = NULL;
> > +	struct zswap_pool *pool;
> > +	bool ret = false;
> > +	long index;
> >
> >  	VM_WARN_ON_ONCE(!folio_test_locked(folio));
> >  	VM_WARN_ON_ONCE(!folio_test_swapcache(folio));
> >
> > -	/* Large folios aren't supported */
> > -	if (folio_test_large(folio))
> > +	/* Storing large folios isn't enabled */
> > +	if (!zswap_mthp_enabled && folio_test_large(folio))
> >  		return false;
> >
> >  	if (!zswap_enabled)
> > -		goto check_old;
> > +		goto reject;
> >
> > -	/* Check cgroup limits */
> > +	/*
> > +	 * Check cgroup limits:
> > +	 *
> > +	 * The cgroup zswap limit check is done once at the beginning of an
> > +	 * mTHP store, and not within zswap_store_page() for each page
> > +	 * in the mTHP. We do however check the zswap pool limits at the
>
> Use "folio" and "large folio" as appropriate here and throughout.

Sounds good.
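
To make that split concrete, the shape I have in mind is roughly the
following. This is only a sketch: the objcg/pool setup and error paths are
elided, and I'm assuming zswap_store_page() also takes the objcg and pool,
so none of this is the literal patch:

	bool zswap_store(struct folio *folio)
	{
		long nr_pages = folio_nr_pages(folio);
		long index;

		/* Cgroup zswap limit: checked once, up front, for the whole folio. */
		if (objcg && !obj_cgroup_may_zswap(objcg))
			goto reject;

		for (index = 0; index < nr_pages; ++index) {
			/* Zswap pool limits are (re)checked per page inside the helper. */
			if (!zswap_store_page(folio, index, objcg, pool))
				goto reject;
		}
		return true;

	reject:
		/* Clean up any pages of this folio that were already stored. */
		return false;
	}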

Thanks,
Kanchana