[PATCH 3/8 RFC] mm/page_counter: use page_counter_stock in page_counter_uncharge

From: Joshua Hahn

Date: Fri Apr 10 2026 - 17:08:14 EST


Make page_counter_uncharge() stock-aware. Preserve the same semantics
as the existing stock handling logic in try_charge_memcg():

1. Instead of immediately walking the page_counter hierarchy, see if
depositing the charge to the stock puts it over the batch limit.
If not, deposit the charge and return immediately.
2. If we put the stock over the batch limit, walk up the page_counter
hierarchy and uncharge the excess.

Also extract the repeated work of hierarchically cancelling
page_counter charges into a helper, page_counter_cancel_hierarchy().

As of this patch, the page_counter_stock remains unused, since it has
not been enabled for any memcg yet. No functional change intended.

Suggested-by: Johannes Weiner <hannes@xxxxxxxxxxx>
Signed-off-by: Joshua Hahn <joshua.hahnjy@xxxxxxxxx>
---
mm/page_counter.c | 36 +++++++++++++++++++++++++++---------
1 file changed, 27 insertions(+), 9 deletions(-)

diff --git a/mm/page_counter.c b/mm/page_counter.c
index 7a921872079b8..7be214034bfad 100644
--- a/mm/page_counter.c
+++ b/mm/page_counter.c
@@ -207,6 +207,15 @@ bool page_counter_try_charge(struct page_counter *counter,
return false;
}

+static void page_counter_cancel_hierarchy(struct page_counter *counter,
+ unsigned long nr_pages)
+{
+ struct page_counter *c;
+
+ for (c = counter; c; c = c->parent)
+ page_counter_cancel(c, nr_pages);
+}
+
/**
* page_counter_uncharge - hierarchically uncharge pages
* @counter: counter
@@ -214,10 +223,23 @@ bool page_counter_try_charge(struct page_counter *counter,
*/
void page_counter_uncharge(struct page_counter *counter, unsigned long nr_pages)
{
- struct page_counter *c;
+ unsigned long charge = nr_pages;

- for (c = counter; c; c = c->parent)
- page_counter_cancel(c, nr_pages);
+ if (counter->stock && local_trylock(&counter->stock->lock)) {
+ struct page_counter_stock *stock = this_cpu_ptr(counter->stock);
+
+ stock->nr_pages += nr_pages;
+ if (stock->nr_pages > counter->batch) {
+ charge = stock->nr_pages - counter->batch;
+ stock->nr_pages = counter->batch;
+ local_unlock(&counter->stock->lock);
+ } else {
+ local_unlock(&counter->stock->lock);
+ return;
+ }
+ }
+
+ page_counter_cancel_hierarchy(counter, charge);
}

/**
@@ -364,12 +386,8 @@ void page_counter_disable_stock(struct page_counter *counter)
stock_to_drain += stock->nr_pages;
}

- if (stock_to_drain) {
- struct page_counter *c;
-
- for (c = counter; c; c = c->parent)
- page_counter_cancel(c, stock_to_drain);
- }
+ if (stock_to_drain)
+ page_counter_cancel_hierarchy(counter, stock_to_drain);

/* This prevents future charges from trying to deposit pages */
counter->batch = 0;
--
2.52.0