Re: [PATCH v9 6/7] mm: zswap: Support large folios in zswap_store().
From: Yosry Ahmed
Date: Tue Oct 01 2024 - 19:39:36 EST
[..]
> > > > > > > > store_failed:
> > > > > > > > zpool_free(entry->pool->zpool, entry->handle);
> > > > > > > > -put_pool:
> > > > > > > > - zswap_pool_put(entry->pool);
> > > > > > > > -freepage:
> > > > > > > > +put_pool_objcg:
> > > > > > > > + zswap_pool_put(pool);
> > > > > > > > + obj_cgroup_put(objcg);
> > > > > > >
> > > > > > > I think if we reorder the function we can drop these calls, make the
> > > > > > > comments positioned a bit better, and centralize the entry
> > > > > > > initializations. I am also not a fan of passing a semi-initialized
> > > > > > > entry to zswap_compress() to get the pool pointer.
> > > > > > >
> > > > > > > Does the following diff improve things or did I miss something?
> > > > > >
> > > > > > We shouldn’t be adding the entry to the xarray before initializing its
> > > > > > pool and objcg, right? Please let me know if I am misunderstanding what
> > > > > > you're proposing in the diff.
> > > > >
> > > > > It should be safe. We already initialize entry->lru after we insert
> > > > > the entry in the tree. See the comment above the call to
> > > > > zswap_lru_add(). Basically we are protected against concurrent
> > > > > stores/loads through the folio lock, and are protected against
> > > > > writeback because the entry is not on the LRU yet.
> > > >
> > > > Thanks for the clarification, Yosry. Since this is a change in the entry
> > > > initialization wrt the mainline, is it OK if this is done in a follow-up patch?
> > >
> > > Sure. We can discuss it separately. Do you want me to send a patch or
> > > do you intend to?
> >
> > Thanks Yosry! I will send the patch separately.
>
> Hi Yosry,
>
> I am preparing the follow-up patch so I can submit it once this patch series is
> merged to mm-unstable (since these changes have dependencies on my
> existing patch).
>
> Is my understanding correct that the folio lock also protects against swapoff
> happening in between the addition of the entry to the xarray and the
> initialization of its members, which will need to be valid for
> swapoff --> ... --> free_swap_slot() --> zswap_invalidate()? Would appreciate
> it if you could confirm.
Yes, the folio lock should protect against swapoff, as the folio must
be in the swapcache.

For shmem, try_to_unuse() (called by swapoff()) will end up calling
shmem_swapin_folio(), which will look up the folio in the swapcache,
find it, then lock it before proceeding to delete it from the swap
cache and ultimately free the swap entry.

For anonymous memory, try_to_unuse() will call unuse_mm() -> .. ->
unuse_pte_range(), which will also look up the folio and lock it
before deleting it from the swap cache and freeing the entry.

try_to_unuse() will also loop over any remaining swapcache entries,
lock their folios, and then try to free the swap entries.