Re: [RFC 1/4] mm/zswap: skip swapcache for swapping in zswap pages

From: Yosry Ahmed
Date: Tue Oct 22 2024 - 20:47:19 EST


[..]
> >> @@ -1576,6 +1576,52 @@ bool zswap_store(struct folio *folio)
> >> return ret;
> >> }
> >>
> >> +static bool swp_offset_in_zswap(unsigned int type, pgoff_t offset)
> >> +{
> >> + return (offset >> SWAP_ADDRESS_SPACE_SHIFT) < nr_zswap_trees[type];
> >
> > I am not sure I understand what we are looking for here. When does
> > this return false? Aren't the zswap trees always allocated during
> > swapon?
> >
>
> Hi Yosry,
>
> Thanks for the review!
>
> It becomes useful in patch 3 when trying to determine whether a large folio can be allocated.
>
> For example, if the swap entry is the last entry of the last tree and 1M folios are enabled
> (nr_pages = 256), then the while loop in zswap_present_test would try to access a tree
> that doesn't exist from the 2nd 4K page onwards if we didn't have this check in
> zswap_present_test.
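>
> Concretely (hypothetical numbers, just for illustration): SWAP_ADDRESS_SPACE_PAGES
> is 16384, so a swap device of exactly 16384 slots gets nr_zswap_trees[type] = 1.
> If offset = 16383 (the last slot of tree 0) and nr_pages = 256, the loop computes
> tree_offset = 16384 for the 2nd page, i.e. tree index 1, which was never allocated.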

Doesn't swap_pte_batch() make sure that the swap entries in the range
passed here all correspond to existing swap entries, and that those
entries always have a corresponding zswap tree? How can the passed-in
range contain an entry that is not in any zswap tree?

I feel like I am missing something.

>
> >> +}
> >> +
> >> +/* Returns true if the entire folio is in zswap */
> >
> > There isn't really a folio at this point, maybe "Returns true if the
> > entire range is in zswap"?
>
> Will change, Thanks!
>
> >
> > Also, this is racy because an exclusive load, invalidation, or
> > writeback can cause an entry to be removed from zswap. Under what
> > conditions is this safe? The caller can probably guarantee we don't
> > race against invalidation, but can we guarantee that concurrent
> > exclusive loads or writebacks don't happen?
> >
> > If the answer is yes, this needs to be properly documented.
>
> swapcache_prepare should stop things from becoming racy.
>
> Let's say we are trying to swap in an mTHP of 32 pages:
> - T1 is doing do_swap_page, T2 is doing zswap writeback.
> - T1 - checks whether all 32 pages are in zswap; swapcache_prepare(entry, nr_pages) in do_swap_page has not been called yet.
> - T2 - zswap_writeback_entry starts and, let's say, writes page 2 to swap: it calls __read_swap_cache_async -> swapcache_prepare, which increments the swap_map count, then writes the page to swap.

Can the folio be dropped from the swapcache at this point (e.g. by
reclaim)? If yes, it seems like T1's swapcache_prepare() could then
succeed, and T1 would try to read the page from zswap.

> - T1 - swapcache_prepare is then called and fails, so the fault returns and will be retried.
>
> I will try to document this better.
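>
> Roughly, the ordering relied on in do_swap_page() (a sketch only, based
> on the calls described above, not the exact code in the series):
>
>	if (zswap_present_test(entry, nr_pages)) {
>		/*
>		 * swapcache_prepare() either pins all nr_pages entries or
>		 * fails if anyone else (e.g. writeback) already holds
>		 * SWAP_HAS_CACHE on one of them.
>		 */
>		if (swapcache_prepare(entry, nr_pages))
>			/* raced; return and let the fault be retried */
>			goto out;
>		/* safe to zswap_load() the entries into the new folio */
>	}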

We need to establish the rules under which zswap_present_test() is not
racy and document them at its definition, then establish the safety of
the racy callers (i.e. can_swapin_thp()) and document that at the call
sites.

>
> >
> >> +bool zswap_present_test(swp_entry_t swp, int nr_pages)
> >> +{
> >> + pgoff_t offset = swp_offset(swp), tree_max_idx;
> >> + int max_idx = 0, i = 0, tree_offset = 0;
> >> + unsigned int type = swp_type(swp);
> >> + struct zswap_entry *entry = NULL;
> >> + struct xarray *tree;
> >> +
> >> + while (i < nr_pages) {
> >> + tree_offset = offset + i;
> >> + /* Check if the tree exists. */
> >> + if (!swp_offset_in_zswap(type, tree_offset))
> >> + return false;
> >> +
> >> + tree = swap_zswap_tree(swp_entry(type, tree_offset));
> >> + XA_STATE(xas, tree, tree_offset);
> >
> > Please do not mix declarations with code.
> >
> >> +
> >> + tree_max_idx = tree_offset % SWAP_ADDRESS_SPACE_PAGES ?
> >> + ALIGN(tree_offset, SWAP_ADDRESS_SPACE_PAGES) :
> >> + ALIGN(tree_offset + 1, SWAP_ADDRESS_SPACE_PAGES);
> >
> > Does this work if we always use ALIGN(tree_offset + 1,
> > SWAP_ADDRESS_SPACE_PAGES)?
>
> Yes, I think max_idx = min(offset + nr_pages, ALIGN(tree_offset + 1, SWAP_ADDRESS_SPACE_PAGES)) - 1;
> will work. I will test it out, thanks!

Might need to split it over two lines.
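Perhaps something like this (untested; min_t() since offset is pgoff_t
while nr_pages and tree_offset are int):

	max_idx = min_t(pgoff_t, offset + nr_pages,
			ALIGN(tree_offset + 1, SWAP_ADDRESS_SPACE_PAGES)) - 1;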