Re: [PATCH 0/2] mm/page_alloc: Remote per-cpu lists drain support

From: Mel Gorman
Date: Thu Mar 10 2022 - 11:32:03 EST


On Mon, Mar 07, 2022 at 02:57:47PM +0100, Nicolas Saenz Julienne wrote:
> > > Note that this is not the first attempt at fixing this issue with the
> > > per-cpu page lists:
> > > - The first attempt[1] tried to conditionally change the pagesets locking
> > > scheme based on the NOHZ_FULL config. It was deemed hard to maintain as the
> > > NOHZ_FULL code path would rarely be tested. Also, this only solves the issue
> > > for NOHZ_FULL setups, which isn't ideal.
> > > - The second[2] unconditionally switched the local_locks to per-cpu
> > > spinlocks. The performance degradation was too big.
> > >
> >
> > For unrelated reasons I looked at using llist to avoid locks entirely. It
> > turns out that's not possible and a lock is still needed. We know
> > "local_locks to per-cpu spinlocks" took a large penalty, so I considered
> > alternatives for how a lock could be used. I found it's possible to both
> > remotely drain the lists and avoid the disable/enable of IRQs entirely,
> > as long as a preempting IRQ is willing to take the zone lock instead
> > (which should be very rare). The IRQ part is a bit hairy though: softirqs
> > are also a problem, preempt-rt needs different rules, and the llist has
> > to sort PCP refills, which might be a net loss overall. However, the
> > remote draining may still be interesting. The full series is at
> > https://git.kernel.org/pub/scm/linux/kernel/git/mel/linux.git/ mm-pcpllist-v1r2
>
> I'll have a proper look at it soon.
>
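
To make the "preempting IRQ takes the zone lock instead" rule above a
bit more concrete before you dig in, it amounts to roughly the sketch
below. This is illustrative only -- pcp_alloc_page() and
pcp_list_alloc() are hypothetical names, not the code in the branch:

static struct page *pcp_alloc_page(struct zone *zone,
				   struct per_cpu_pages *pcp,
				   unsigned int order, int migratetype)
{
	struct page *page;
	unsigned long flags;

	if (!in_interrupt()) {
		/* Process context may spin on pcp->lock; IRQs stay enabled. */
		spin_lock(&pcp->lock);
	} else if (!spin_trylock(&pcp->lock)) {
		/*
		 * A (soft)IRQ may have interrupted a pcp->lock holder on
		 * this CPU, so IRQ context must never spin on it. Fall
		 * back to the buddy lists instead (should be very rare).
		 */
		spin_lock_irqsave(&zone->lock, flags);
		page = __rmqueue(zone, order, migratetype, 0);
		spin_unlock_irqrestore(&zone->lock, flags);
		return page;
	}

	page = pcp_list_alloc(pcp, order);	/* hypothetical helper */
	spin_unlock(&pcp->lock);
	return page;
}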

Thanks. I'm still delayed in actually finishing the series, as most of
my time is dedicated to a separate issue. However, there is at least one
bug in there, in the patch "mm/page_alloc: Remotely drain per-cpu
lists", that causes a lockup under severe memory pressure. The fix is

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c9a6f2b5548e..11b54f383d04 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3065,10 +3065,8 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
*/
void drain_zone_pages(struct zone *zone, struct per_cpu_pages *pcp)
{
- unsigned long flags;
int to_drain, batch;

- pcp_local_lock(&pagesets.lock, flags);
batch = READ_ONCE(pcp->batch);
to_drain = min(pcp->count, batch);
if (to_drain > 0) {
@@ -3076,7 +3074,6 @@ void drain_zone_pages(struct zone *zone, struct per_cpu_pages *pcp)
free_pcppages_bulk(zone, to_drain, pcp, 0);
spin_unlock(&pcp->lock);
}
- pcp_local_unlock(&pagesets.lock, flags);
}
#endif

@@ -3088,16 +3085,12 @@ static void drain_pages_zone(unsigned int cpu, struct zone *zone)
unsigned long flags;
struct per_cpu_pages *pcp;

- pcp_local_lock(&pagesets.lock, flags);
-
pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
if (pcp->count) {
spin_lock(&pcp->lock);
free_pcppages_bulk(zone, pcp->count, pcp, 0);
spin_unlock(&pcp->lock);
}
-
- pcp_local_unlock(&pagesets.lock, flags);
}

/*
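
For completeness, the point is that once pcp->lock is taken
consistently like this, the drain paths need none of the pagesets
local_lock machinery (which is what the fix above removes): any CPU can
walk another CPU's pagesets directly. A simplified, illustrative sketch
of the idea, not the exact series code (drain_all_pages_remote() is a
hypothetical name):

static void drain_all_pages_remote(struct zone *zone)
{
	int cpu;

	for_each_online_cpu(cpu) {
		struct per_cpu_pages *pcp;

		pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
		if (pcp->count) {
			/* No IPI needed; pcp->lock serialises against cpu. */
			spin_lock(&pcp->lock);
			free_pcppages_bulk(zone, pcp->count, pcp, 0);
			spin_unlock(&pcp->lock);
		}
	}
}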