Re: [PATCH] mm/zswap: avoid touching XArray for unnecessary invalidation
From: Kairui Song
Date: Sat Oct 12 2024 - 00:48:47 EST
On Sat, Oct 12, 2024 at 11:33 AM Chengming Zhou
<chengming.zhou@xxxxxxxxx> wrote:
>
> On 2024/10/12 11:04, Kairui Song wrote:
> > Johannes Weiner <hannes@xxxxxxxxxxx> wrote on Sat, Oct 12, 2024 at 02:28:
> >>
> >> On Fri, Oct 11, 2024 at 10:53:31AM -0700, Yosry Ahmed wrote:
> >>> On Fri, Oct 11, 2024 at 10:20 AM Kairui Song <ryncsn@xxxxxxxxx> wrote:
> >>>>
> >>>> From: Kairui Song <kasong@xxxxxxxxxxx>
> >>>>
> >>>> zswap_invalidate() simply calls xa_erase(), which acquires the
> >>>> XArray lock first, then does a lookup. This has higher overhead
> >>>> even if zswap is not used or the tree is empty.
> >>>>
> >>>> So instead, do a very lightweight xa_empty() check first; if
> >>>> there is nothing to erase, don't touch the lock or the tree.
> >>
> >> Great idea!
> >>
> >>> XA_STATE(xas, ..);
> >>>
> >>> rcu_read_lock();
> >>> entry = xas_load(&xas);
> >>> if (entry) {
> >>>         xas_lock(&xas);
> >>>         WARN_ON_ONCE(xas_reload(&xas) != entry);
> >>>         xas_store(&xas, NULL);
> >>>         xas_unlock(&xas);
> >>> }
> >>> rcu_read_unlock();
> >>
> >> This does the optimization more reliably, and I think we should go
> >> with this version.
> >
> > Hi Yosry and Johannes,
> >
> > This is a good idea. But xa_empty() is much more lightweight; it's
> > just an inlined (== NULL) check, so it's not surprising that it
> > performs better than xas_load().
> >
> > And surprisingly, it's faster than zswap_never_enabled(). So I think it
>
> Do you have CONFIG_ZSWAP_DEFAULT_ON enabled? In your case, the CPU
> will go to the unlikely branch of the static_key every time, which
> may be the cause.
No, it's off by default. Maybe it's just noise; the performance
difference is very tiny.