Re: [PATCH 2/2] mm: zswap: remove unnecessary tree cleanups in zswap_swapoff()
From: Yosry Ahmed
Date: Thu Jan 25 2024 - 19:06:30 EST
On Thu, Jan 25, 2024 at 4:03 PM Chengming Zhou
<zhouchengming@xxxxxxxxxxxxx> wrote:
>
> On 2024/1/25 15:53, Yosry Ahmed wrote:
> >> Hello,
> >>
> >> I have also been thinking about this problem for some time; maybe something
> >> like the below could be changed to fix it? It's likely I missed something,
> >> these are just some thoughts.
> >>
> >> IMHO, the problem is caused by the different way in which we use the zswap
> >> entry in the writeback path, which should be much like zswap_load().
> >>
> >> zswap_load() comes in with the folio locked in the swap cache, so it has a
> >> stable zswap tree to search and lock... But in the writeback case we don't:
> >> shrink_memcg_cb() comes in with only a zswap entry and the lru list lock held,
> >> then releases the lru lock to take the tree lock, which may have been freed
> >> already.
> >>
> >> So we should change this: read swpentry from the entry with the lru list lock
> >> held, then release the lru lock and try to lock the corresponding folio in the
> >> swap cache. If we succeed, what follows is much the same as zswap_load():
> >> we can take the tree lock to recheck the invalidate race, and if no race
> >> happened, we know the entry is still valid, can take a refcount on it, and
> >> then release the tree lock.
> >
> > Hmm, I think you may be onto something here. Moving the swap cache
> > allocation ahead of referencing the tree should indeed give us the same
> > guarantees as zswap_load(). We can also consolidate the
> > invalidate race checks (right now we have one in shrink_memcg_cb() and
> > another one inside zswap_writeback_entry()).
> >
> > We will have to be careful about the error handling path to make sure
> > we delete the folio from the swap cache only after we know the tree
> > won't be referenced anymore. Anyway, I think this can work.
> >
> > On a separate note, I think there is a bug in zswap_writeback_entry()
> > when we delete a folio from the swap cache. I think we are missing a
> > folio_unlock() there.
> >
>
> Hi, I want to know if you are preparing the fix patch; I would just wait to
> review it if you are. Or I can work on it if you are busy with other things.
If you're talking about implementing your solution, I was assuming you
were going to send a patch out (and hoping others would chime in in
case I missed something).
I can take a stab at implementing it if you prefer that, just let me know.
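
Just to make sure we are describing the same ordering, below is a rough,
untested sketch of a helper along the lines of what I understood from your
proposal; I'm writing it from memory, so the exact signatures may be off.
The idea is that shrink_memcg_cb() copies entry->swpentry while the LRU lock
is still held, drops the LRU lock, and then calls something like this.
get_locked_swapcache_folio() is a made-up name standing in for the
__read_swap_cache_async() dance we currently do in zswap_writeback_entry(),
and error handling (e.g. removing a newly allocated folio from the swap
cache when we lose the race) is elided:

static struct folio *zswap_writeback_begin(struct zswap_entry *entry,
					   swp_entry_t swpentry)
{
	struct zswap_tree *tree = zswap_trees[swp_type(swpentry)];
	struct folio *folio;

	/*
	 * Lock the folio in the swap cache first, like zswap_load()
	 * callers do (hypothetical helper wrapping the
	 * __read_swap_cache_async() steps).
	 */
	folio = get_locked_swapcache_folio(swpentry);
	if (!folio)
		return NULL;

	/*
	 * With the folio locked in the swap cache, the tree is stable
	 * (same reasoning as zswap_load()), so recheck the invalidate
	 * race and grab a reference on the entry under the tree lock,
	 * then drop the tree lock again.
	 */
	spin_lock(&tree->lock);
	if (zswap_rb_search(&tree->rbroot, swp_offset(swpentry)) != entry) {
		spin_unlock(&tree->lock);
		/*
		 * A newly allocated folio would also need to be deleted
		 * from the swap cache here.
		 */
		folio_unlock(folio);
		folio_put(folio);
		return NULL;
	}
	zswap_entry_get(entry);
	spin_unlock(&tree->lock);

	/* Writeback proceeds with the locked folio and a stable entry. */
	return folio;
}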
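
On the folio_unlock() issue I mentioned above: what I have in mind is the
error path in zswap_writeback_entry() where we back out the folio we just
added to the swap cache. Untested and from memory, but I think it needs to
look roughly like this, otherwise we return with the folio still locked:

	delete_from_swap_cache(folio);
	folio_unlock(folio);	/* I believe this unlock is currently missing */
	folio_put(folio);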