Re: [PATCH v2] mm/percpu, memcontrol: Per-memcg-lruvec percpu accounting

From: Joshua Hahn

Date: Wed Apr 15 2026 - 10:40:02 EST


On Wed, 15 Apr 2026 11:32:47 +0900 "Harry Yoo (Oracle)" <harry@xxxxxxxxxx> wrote:

> On Tue, Apr 14, 2026 at 01:26:31PM -0700, Joshua Hahn wrote:
> > On Fri, 3 Apr 2026 20:38:43 -0700 Joshua Hahn <joshua.hahnjy@xxxxxxxxx> wrote:
> >
> > > enum memcg_stat_item includes memory that is tracked on a per-memcg
> > > level, but not at a per-node (and per-lruvec) level. Diagnosing
> > > memory pressure for memcgs in multi-NUMA systems can be difficult,
> > > since not all of the memory accounted in memcg can be traced back
> > > to a node. In scenarios where NUMA nodes in a memcg are asymmetrically
> > > stressed, this difference can be invisible to the user.
> > >
> > > Convert MEMCG_PERCPU_B from a memcg_stat_item to a memcg_node_stat_item
> > > to give visibility into per-node breakdowns for percpu allocations.
> > >
> > > This will get us closer to being able to know the memcg and physical
> > > association of all memory on the system. Specifically for percpu, this
> > > granularity will help demonstrate footprint differences on systems with
> > > asymmetric NUMA nodes.
> > >
> > > Because percpu memory is accounted at a sub-PAGE_SIZE level, we must
> > > account node level statistics (accounted in PAGE_SIZE units) and
> > > memcg-lruvec statistics separately. Account node statistics when the pcpu
> > > pages are allocated, and account memcg-lruvec statistics when pcpu
> > > objects are handed out.
> >
> > [...snip...]
> >
> > > @@ -55,7 +55,8 @@ static void pcpu_free_pages(struct pcpu_chunk *chunk,
> > > struct page **pages, int page_start, int page_end)
> > > {
> > > unsigned int cpu;
> > > - int i;
> > > + int nr_pages = page_end - page_start;
> > > + int i, nid;
> > >
> > > for_each_possible_cpu(cpu) {
> > > for (i = page_start; i < page_end; i++) {
> > > @@ -65,6 +66,10 @@ static void pcpu_free_pages(struct pcpu_chunk *chunk,
> > > __free_page(page);
> > > }
> > > }
> > > +
> > > + for_each_node(nid)
> > > + mod_node_page_state(NODE_DATA(nid), NR_PERCPU_B,
> > > + -1L * nr_pages * nr_cpus_node(nid) * PAGE_SIZE);
> > > }
> > >
> > > /**
> > > @@ -84,7 +89,8 @@ static int pcpu_alloc_pages(struct pcpu_chunk *chunk,
> > > gfp_t gfp)
> > > {
> > > unsigned int cpu, tcpu;
> > > - int i;
> > > + int nr_pages = page_end - page_start;
> > > + int i, nid;
> > >
> > > gfp |= __GFP_HIGHMEM;
> > >
> > > @@ -97,6 +103,10 @@ static int pcpu_alloc_pages(struct pcpu_chunk *chunk,
> > > goto err;
> > > }
> > > }
> > > +
> > > + for_each_node(nid)
> > > + mod_node_page_state(NODE_DATA(nid), NR_PERCPU_B,
> > > + nr_pages * nr_cpus_node(nid) * PAGE_SIZE);
> > > return 0;
> >
> > Hello reviewers,
> >
> > Since I submitted this, I have been thinking about the feedback that Sashiko
> > gave on this patch [1]. Harry has already addressed the points about drift
> > due to CPU hotplug, but there is one particular concern that I have been
> > trying to tackle, to no avail.
> >
> > The issue is that pcpu allocations for CPUs on node A may actually fall back
> > to node B if node A is out of space and under pressure. This fallback seems
> > to be intentional, to prevent memory pressure from failing these allocations.
> >
> > However, this means that we cannot charge percpu memory based on the number
> > of CPUs present on a node: although the memory logically "belongs" to the
> > node (the CPU it is allocated for lives there), the backing pages can be
> > serviced from elsewhere.
>
> Ouch.
>
> > To handle this, I've tried several approaches. All of them were either too
> > expensive (iterating through all pages at allocation / free time)
>
> How expensive was it compared to the baseline?

I haven't done any performance analysis yet, but the change required every pcpu
allocation to iterate over all of its pages in a loop and account each page to
the node it actually came from, whereas previously there was no iteration at
all, just a charge or uncharge based on the allocation size. But maybe it's not
so bad after all, since these allocations should usually be pretty small.

Let me try running some tests to see what the absolute worst-case regression
would look like.

> > or introduces
> > new drift (I thought of managing per-chunk statistics as well).
>
> How does it introduce a new drift?

The other approach I tried, to avoid iterating over pages, was to stash
per-node counters in each chunk. But of course that doesn't work well if we
need per-allocation statistics, or if the ordering of the charges changes with
the ordering of the chunk's pcpu allocations.

In any case, thanks for taking the time to check on the patch.
I'll try to spin up something soon!

Joshua