[RFC net-next] net: page_pool: cap alloc cache size and refill by pool ring size

From: Nimrod Oren

Date: Mon Feb 23 2026 - 04:28:29 EST


Hi all,

The current page_pool alloc cache constants were chosen to match the NAPI
budget and to leave headroom for XDP_DROP recycling, hence the current
defaults PP_ALLOC_CACHE_REFILL (64) and PP_ALLOC_CACHE_SIZE (128).

This logic implicitly assumes a reasonably large backing ring. However, on
systems with 64K page size, these values may exceed the number of pages
actually managed by a pool instance. In practice this means we can
bulk-allocate or cache significantly more pages than a given pool can ever
meaningfully use. This becomes particularly problematic when scaling to
many interfaces/channels, where the total amount of memory tied up in
per-pool alloc caches becomes significant.

I'm proposing to cap the alloc cache size and refill values by the pool
ring size, while preserving the existing behavior as much as possible.

The implementation I have right now (Option A) is:

pool->alloc.refill = min_t(unsigned int, PP_ALLOC_CACHE_REFILL, ring_qsize);
pool->alloc.size = pool->alloc.refill * 2;

This keeps the existing relationship "cache size = 2 x refill" and ensures
that refill never exceeds ring_qsize.

I am also considering a couple of alternatives and would like feedback on
which shape makes the most sense:

Option B:

pool->alloc.size = min_t(unsigned int, PP_ALLOC_CACHE_SIZE, ring_qsize);
pool->alloc.refill = pool->alloc.size / 2;

Option C:

pool->alloc.size = min_t(unsigned int, PP_ALLOC_CACHE_SIZE, ring_qsize);
pool->alloc.refill = min_t(unsigned int, PP_ALLOC_CACHE_REFILL, ring_qsize);

Option A keeps refill as the primary parameter and derives size from it,
preserving the current "refill == NAPI budget" motivation as long as the
ring is large enough. Options B and C instead cap size directly by
ring_qsize and then either derive refill from size (B) or cap both
independently (C).

Looking forward, it might be useful to allow drivers to configure these
values explicitly, so they can tune the cache and refill based on their
specific use case and hardware characteristics. Even if such an option is
added later, the logic above would still define the default behavior.

I'd appreciate feedback on:
* Whether this per-pool cache capping approach makes sense
* If so, which option is preferable
* Any alternative suggestions to better cap/scale the page_pool cache
parameters for large pages

Thanks,
Nimrod Oren

Reviewed-by: Dragos Tatulea <dtatulea@xxxxxxxxxx>
Reviewed-by: Tariq Toukan <tariqt@xxxxxxxxxx>
Signed-off-by: Nimrod Oren <noren@xxxxxxxxxx>
---
 include/net/page_pool/types.h |  2 ++
 net/core/page_pool.c          | 10 +++++++---
 2 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
index 0d453484a585..521d0ca587dd 100644
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -55,6 +55,8 @@
 #define PP_ALLOC_CACHE_REFILL	64
 struct pp_alloc_cache {
 	u32 count;
+	u8 refill;
+	u8 size;
 	netmem_ref cache[PP_ALLOC_CACHE_SIZE];
 };

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 265a729431bb..07474ff201d5 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -213,6 +213,10 @@ static int page_pool_init(struct page_pool *pool,
 	if (pool->p.pool_size)
 		ring_qsize = min(pool->p.pool_size, 16384);
 
+	pool->alloc.refill = min_t(unsigned int, PP_ALLOC_CACHE_REFILL,
+				   ring_qsize);
+	pool->alloc.size = pool->alloc.refill * 2;
+
 	/* DMA direction is either DMA_FROM_DEVICE or DMA_BIDIRECTIONAL.
 	 * DMA_BIDIRECTIONAL is for allowing page used for DMA sending,
 	 * which is the XDP_TX use-case.
@@ -416,7 +420,7 @@ static noinline netmem_ref page_pool_refill_alloc_cache(struct page_pool *pool)
 			netmem = 0;
 			break;
 		}
-	} while (pool->alloc.count < PP_ALLOC_CACHE_REFILL);
+	} while (pool->alloc.count < pool->alloc.refill);
 
 	/* Return last page */
 	if (likely(pool->alloc.count > 0)) {
@@ -590,7 +594,7 @@ static struct page *__page_pool_alloc_page_order(struct page_pool *pool,
 static noinline netmem_ref __page_pool_alloc_netmems_slow(struct page_pool *pool,
 							  gfp_t gfp)
 {
-	const int bulk = PP_ALLOC_CACHE_REFILL;
+	const int bulk = pool->alloc.refill;
 	unsigned int pp_order = pool->p.order;
 	bool dma_map = pool->dma_map;
 	netmem_ref netmem;
@@ -799,7 +803,7 @@ static bool page_pool_recycle_in_ring(struct page_pool *pool, netmem_ref netmem)
 static bool page_pool_recycle_in_cache(netmem_ref netmem,
 				       struct page_pool *pool)
 {
-	if (unlikely(pool->alloc.count == PP_ALLOC_CACHE_SIZE)) {
+	if (unlikely(pool->alloc.count == pool->alloc.size)) {
 		recycle_stat_inc(pool, cache_full);
 		return false;
 	}
--
2.45.0