Re: [PATCH] mm: move idle swap cache pages to the tail of LRU after COW

From: Johannes Weiner
Date: Wed May 19 2021 - 21:47:28 EST


On Thu, May 20, 2021 at 09:22:45AM +0800, Huang, Ying wrote:
> Johannes Weiner <hannes@xxxxxxxxxxx> writes:
>
> > On Wed, May 19, 2021 at 09:33:13AM +0800, Huang Ying wrote:
> >> diff --git a/mm/memory.c b/mm/memory.c
> >> index b83f734c4e1d..2b6847f4c03e 100644
> >> --- a/mm/memory.c
> >> +++ b/mm/memory.c
> >> @@ -3012,6 +3012,11 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
> >> 			munlock_vma_page(old_page);
> >> 			unlock_page(old_page);
> >> 		}
> >> +	if (page_copied && PageSwapCache(old_page) &&
> >> +	    !page_mapped(old_page) && trylock_page(old_page)) {
> >> +		try_to_free_idle_swapcache(old_page);
> >> +		unlock_page(old_page);
> >
> > If there are no more swap or pte references, can we just attempt to
> > free the page right away, like we do during regular unmap?
> >
> > 	if (page_copied)
> > 		free_swap_cache(old_page);
> > 	put_page(old_page);
>
> A previous version of the patch does roughly this.
>
> https://lore.kernel.org/lkml/20210113024241.179113-1-ying.huang@xxxxxxxxx/
>
> But Linus has concerns with the overhead introduced in the hot COW path.

Sorry, I had missed that thread.

It sounds like there were the same concerns about the LRU shuffling
overhead in the COW path. Now we have numbers for that, but not for
the free_swap_cache version. Would you be able to run the numbers for
that as well? It would be interesting to see how much the additional
code complexity buys us.

> Another possibility is to move the idle swap cache page to the tail of
> the file LRU list. But the question is how to identify the page.

The LRU type is identified by PG_swapbacked, and we do clear that for
anon pages to implement MADV_FREE. It may work here too. But I'm
honestly a bit skeptical about the ROI on this...