Re: [linus:master] [swap_state] 5649d113ff: vm-scalability.throughput -33.1% regression

From: Yin, Fengwei
Date: Mon Apr 03 2023 - 22:37:29 EST


Hi Yang,

On Tue, 2023-03-21 at 07:56 +0000, Yang Yang wrote:
> > commit:
> >  04bac040bc ("mm/hugetlb: convert get_hwpoison_huge_page() to
> > folios")
> >  5649d113ff ("swap_state: update shadow_nodes for anonymous page")
> > 04bac040bc71b4b3 5649d113ffce9f532a9ecc5ab96
> > ---------------- ---------------------------
> >          %stddev     %change         %stddev
> >              \          |                \ 
> >  10026093 ±  3%     -33.1%    6702748 ±  2%  vm-scalability.throughput
>
> > 04bac040bc71b4b3 5649d113ffce9f532a9ecc5ab96
> > ---------------- ---------------------------
> >          %stddev     %change         %stddev
> >              \          |                \ 
> >    553378           -11.1%     492012 ±  2%  vm-scalability.median
>
> I see the two results are quite different: one is -33.1%, the other
> is -11.1%. So I ran the test several more times to reproduce it on my
> machine, and saw an 8% regression in vm-scalability.throughput.
>
> Since this test adds, deletes, and clears swap cache entries
> frequently, perhaps the impact of commit 5649d113ff is magnified?
>
> Commit 5649d113ff tried to fix the problem that, when swap space is
> huge and apps use many shadow entries, shadow nodes can waste a lot
> of memory. So the shadow nodes should be reclaimed when their number
> grows large while memory is tight.
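>
> For context, the reclaim side lives in mm/workingset.c: shadow nodes
> sit on a list_lru and are freed by a shrinker under memory pressure.
> Roughly (a simplified sketch of the existing code, not a verbatim
> excerpt):
>
> 	/* mm/workingset.c, simplified: xa_nodes that hold only
> 	 * shadow entries are kept on a list_lru that a shrinker
> 	 * walks when memory is tight.
> 	 */
> 	static struct list_lru shadow_nodes;
>
> 	static struct shrinker workingset_shadow_shrinker = {
> 		.count_objects = count_shadow_nodes,
> 		.scan_objects = scan_shadow_nodes,
> 		.seeks = 0,
> 		.flags = SHRINKER_NUMA_AWARE | SHRINKER_MEMCG_AWARE,
> 	};
>
> The shrinker can only free what is on that list, which is why the
> swap cache paths discussed below must keep the list up to date.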
>
> I reviewed commit 5649d113ff carefully and didn't find any obvious
> problem. If we want to correctly update shadow_nodes for anonymous
> pages, we have to update them when adding to, deleting from, and
> clearing the swap cache.
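>
> Concretely, each of those paths registers the XArray node-update
> callback so that nodes left holding only shadow entries land on the
> shadow_nodes list. A minimal sketch of the idea (paraphrasing the
> change, not the literal diff):
>
> 	/* mm/swap_state.c, sketched */
> 	int add_to_swap_cache(struct folio *folio, swp_entry_t entry,
> 			      gfp_t gfp, void **shadowp)
> 	{
> 		struct address_space *address_space = swap_address_space(entry);
> 		XA_STATE_ORDER(xas, &address_space->i_pages,
> 			       swp_offset(entry), folio_order(folio));
>
> 		/* the key addition: let the workingset code track
> 		 * this node on the shadow_nodes list_lru
> 		 */
> 		xas_set_update(&xas, workingset_update_node);
>
> 		/* ... store the folio, collecting any shadow entries ... */
> 	}
>
> __delete_from_swap_cache() and clear_shadow_from_swap_cache() gain
> the same xas_set_update() call, so the accounting stays balanced
> across all three operations.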
Thanks for the info, and sorry for the delayed response. We didn't get
your replies in our company inbox (not sure why); I only noticed them
on lore.kernel.org while revising the ticket. We will mark this
regression as "won't fix" since the commit is a functional fix.


Regards
Yin, Fengwei

>
> Thanks.