Re: [RFC 2/2] mm: page_alloc: per-cpu pageblock buddy allocator

From: Frank van der Linden

Date: Mon Apr 06 2026 - 13:31:25 EST


On Fri, Apr 3, 2026 at 12:45 PM Johannes Weiner <hannes@xxxxxxxxxxx> wrote:
>
> On large machines, zone->lock is a scaling bottleneck for page
> allocation. Two common patterns drive contention:
>
> 1. Affinity violations: pages are allocated on one CPU but freed on
> another (jemalloc, exit, reclaim). The freeing CPU's PCP drains to
> zone buddy, and the allocating CPU refills from zone buddy -- both
> under zone->lock, defeating PCP batching entirely.
>
> 2. Concurrent exits: processes tearing down large address spaces
> simultaneously overwhelm per-CPU PCP capacity, serializing on
> zone->lock for overflow.
>
> Solution
>
> Extend the PCP to operate on whole pageblocks with ownership tracking.
>
> Each CPU claims pageblocks from the zone buddy and splits them
> locally. Pages are tagged with their owning CPU, so frees route back
> to the owner's PCP regardless of which CPU frees. This eliminates
> affinity violations: the owner CPU's PCP absorbs both allocations and
> frees for its blocks without touching zone->lock.
>
> It also shortens zone->lock hold time during drain and refill
> cycles. Whole blocks are acquired under zone->lock and then split
> outside of it. Affinity routing to the owning PCP on free enables
> buddy merging outside the zone->lock as well; a bottom-up merge pass
> runs under pcp->lock on drain, freeing larger chunks under zone->lock.
>
> PCP refill uses a four-phase approach:
>
> Phase 0: recover owned fragments previously drained to zone buddy.
> Phase 1: claim whole pageblocks from zone buddy.
> Phase 2: grab sub-pageblock chunks without migratetype stealing.
> Phase 3: traditional __rmqueue() with migratetype fallback.
>

Since the migratetype passed to rmqueue_bulk, where these changes
live, is the PCP migratetype, this will prefer MIGRATE_MOVABLE more
than before in the presence of MIGRATE_CMA pageblocks, right?

Currently, the CMA fallback is done when > 50% of free zone memory is
MIGRATE_CMA. For a PCP list, this isn't strictly true of course, since
grabbing a page off the PCP list doesn't do this check, and MIGRATE_CMA
doesn't have its own PCP list. But since rmqueue_bulk does do it, I'm
guessing the fallback still mostly adheres to that 50%.

With this change to rmqueue_bulk, it feels like it would prefer
MIGRATE_MOVABLE more, since that is the migratetype passed to it
(never MIGRATE_CMA), and the fallback is only done if the final phase
is needed.

Have you tested this with a zone that has a large amount of CMA in it
and checked the percentages?

- Frank