Re: [PATCH] mm: avoid unconditional one-tick sleep when swapcache_prepare fails

From: Barry Song
Date: Tue Oct 08 2024 - 09:08:39 EST


On Thu, Oct 3, 2024 at 8:35 AM Huang, Ying <ying.huang@xxxxxxxxx> wrote:
>
> Barry Song <21cnbao@xxxxxxxxx> writes:
>
> > On Wed, Oct 2, 2024 at 8:43 AM Huang, Ying <ying.huang@xxxxxxxxx> wrote:
> >>
> >> Barry Song <21cnbao@xxxxxxxxx> writes:
> >>
> >> > On Tue, Oct 1, 2024 at 7:43 AM Huang, Ying <ying.huang@xxxxxxxxx> wrote:
> >> >>
> >> >> Barry Song <21cnbao@xxxxxxxxx> writes:
> >> >>
> >> >> > On Sun, Sep 29, 2024 at 3:43 PM Huang, Ying <ying.huang@xxxxxxxxx> wrote:
> >> >> >>
> >> >> >> Hi, Barry,
> >> >> >>
> >> >> >> Barry Song <21cnbao@xxxxxxxxx> writes:
> >> >> >>
> >> >> >> > From: Barry Song <v-songbaohua@xxxxxxxx>
> >> >> >> >
> >> >> >> > Commit 13ddaf26be32 ("mm/swap: fix race when skipping swapcache")
> >> >> >> > introduced an unconditional one-tick sleep when `swapcache_prepare()`
> >> >> >> > fails, which has led to reports of UI stuttering on latency-sensitive
> >> >> >> > Android devices. To address this, we can use a waitqueue to wake up
> >> >> >> > tasks that fail `swapcache_prepare()` sooner, instead of always
> >> >> >> > sleeping for a full tick. While tasks may occasionally be woken by an
> >> >> >> > unrelated `do_swap_page()`, this is still preferable to either of the
> >> >> >> > two alternatives: rapid re-entry into page faults, which can cause
> >> >> >> > livelocks, or multi-millisecond sleeps, which visibly degrade the user
> >> >> >> > experience.
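To make the idea concrete for anyone skimming the thread, here is a minimal
sketch (helper names and call sites are illustrative, not necessarily what the
final patch uses): the task that loses the swapcache_prepare() race parks on a
waitqueue, keeping the one-tick timeout only as an upper bound, while the task
that owns the entry issues a wakeup as soon as SWAP_HAS_CACHE is cleared.

#include <linux/sched.h>
#include <linux/wait.h>

/* Sketch: one global waitqueue for faults that lose the race. */
static DECLARE_WAIT_QUEUE_HEAD(swapcache_wq);

/* Raced path: sleep until woken, or for at most one tick as before. */
static void swapcache_prepare_wait(void)
{
        DEFINE_WAIT(wait);

        prepare_to_wait(&swapcache_wq, &wait, TASK_UNINTERRUPTIBLE);
        schedule_timeout(1);
        finish_wait(&swapcache_wq, &wait);
}

/* Owner path: called right after SWAP_HAS_CACHE is cleared. */
static void swapcache_prepare_done(void)
{
        wake_up(&swapcache_wq);
}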
> >> >> >>
> >> >> >> In general, I think that this works.  Why not extend the solution to
> >> >> >> cover schedule_timeout_uninterruptible() in __read_swap_cache_async()
> >> >> >> too?  We can call wake_up() when we clear SWAP_HAS_CACHE.  To avoid
> >> >> >
> >> >> > Hi Ying,
> >> >> > Thanks for your comments.
> >> >> > I feel that extending the solution to __read_swap_cache_async() should be
> >> >> > done in a separate patch. On phones, I've never seen any issues reported on
> >> >> > that path, so it seems better suited to an optimization than to a hotfix?
> >> >>
> >> >> Yes.  It's fine to do that in another patch as an optimization.
> >> >
> >> > Ok. I'll prepare a separate patch for optimizing that path.
> >>
> >> Thanks!
> >>
> >> >>
> >> >> >> overhead to call wake_up() when there's no task waiting, we can use an
> >> >> >> atomic to count waiting tasks.
> >> >> >
> >> >> > I'm not sure it's worth adding the complexity, as wake_up() on an empty
> >> >> > waitqueue should have a very low cost on its own?
> >> >>
> >> >> wake_up() needs to call spin_lock_irqsave() unconditionally on a global
> >> >> shared lock.  On systems with many CPUs (such as servers), this may cause
> >> >> severe lock contention.  Even the cache ping-pong alone may hurt
> >> >> performance noticeably.
> >> >
> >> > I understand that cache synchronization was a significant issue before
> >> > qspinlock, but it seems to be less of a concern after its implementation.
> >>
> >> Unfortunately, qspinlock cannot eliminate the cache ping-pong issue, as
> >> discussed in the following thread.
> >>
> >> https://lore.kernel.org/lkml/20220510192708.GQ76023@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/
> >>
> >> > However, using a global atomic variable would still trigger cache broadcasts,
> >> > correct?
> >>
> >> We can change the atomic variable to non-zero only when
> >> swapcache_prepare() returns non-zero, and call wake_up() only when the
> >> atomic variable is non-zero.  Because swapcache_prepare() returns 0 most
> >> of the time, the atomic variable stays 0 most of the time.  If we don't
> >> change the value of the atomic variable, cache ping-pong will not be
> >> triggered.
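To make sure we are talking about the same thing, a rough sketch of that
suggestion (names again illustrative): the atomic is written only on the rare
path where swapcache_prepare() fails, so in the common case the wake-up side
just reads a cache line that never gets dirtied.

#include <linux/atomic.h>
#include <linux/sched.h>
#include <linux/wait.h>

static atomic_t swapcache_waiters = ATOMIC_INIT(0);
static DECLARE_WAIT_QUEUE_HEAD(swapcache_wq);

/* Rare path: swapcache_prepare() failed, announce ourselves and sleep. */
static void swapcache_prepare_wait(void)
{
        DEFINE_WAIT(wait);

        atomic_inc(&swapcache_waiters);
        prepare_to_wait(&swapcache_wq, &wait, TASK_UNINTERRUPTIBLE);
        schedule_timeout(1);
        finish_wait(&swapcache_wq, &wait);
        atomic_dec(&swapcache_waiters);
}

/* Common path: counter is 0, so we skip wake_up() and never take the
 * waitqueue lock; a waiter racing with this check still falls back to
 * the one-tick timeout. */
static void swapcache_prepare_done(void)
{
        if (atomic_read(&swapcache_waiters))
                wake_up(&swapcache_wq);
}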
> >
> > Yes, this can be implemented by adding another atomic variable.
>
> Just realized that we don't need another atomic variable for this; calling
> waitqueue_active() before wake_up() should be enough.
>
> >>
> >> Hi, Kairui,
> >>
> >> Do you have any test cases that exercise parallel zram swap-in?  If so,
> >> they can be used to verify whether cache ping-pong is an issue and whether
> >> it can be fixed via a global atomic variable.
> >>
> >
> > Yes. Kairui, please run a test on your machine with lots of cores before
> > and after adding a global atomic variable, as suggested by Ying. I'm
> > sorry, but I don't have a server machine.
> >
> > If it turns out that cache ping-pong is an issue, another approach
> > would be a waitqueue hash:
>
> Yes.  A waitqueue hash may help reduce lock contention.  And we can have
> both waitqueue_active() and a waitqueue hash if necessary.  As a first
> step, waitqueue_active() appears simpler.
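For reference, a rough sketch of the waitqueue-hash idea (bucket count and
hash choice are illustrative; a real version would key off the swap entry,
much like the hashed folio wait table in mm/filemap.c):

#include <linux/hash.h>
#include <linux/init.h>
#include <linux/swap.h>
#include <linux/wait.h>

#define SWAPCACHE_WQ_BITS       5
#define SWAPCACHE_WQ_NR         (1 << SWAPCACHE_WQ_BITS)

static wait_queue_head_t swapcache_wqs[SWAPCACHE_WQ_NR];

static void __init swapcache_wqs_init(void)
{
        int i;

        for (i = 0; i < SWAPCACHE_WQ_NR; i++)
                init_waitqueue_head(&swapcache_wqs[i]);
}

/* Hash the swap entry so unrelated faults sleep on (and wake) different
 * waitqueues, spreading the lock traffic across buckets. */
static wait_queue_head_t *swapcache_wq(swp_entry_t entry)
{
        return &swapcache_wqs[hash_long(entry.val, SWAPCACHE_WQ_BITS)];
}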

Hi Andrew,
If there are no objections, could you please squash the change below? Oven
has already tested it, and the original issue is still fixed with the change
applied. If you would rather I send a v2, please let me know.