Re: [PATCH] slab: Update stale comment for sheaf_capacity.

From: Vlastimil Babka (SUSE)

Date: Wed Mar 04 2026 - 07:31:02 EST


On 2/28/26 9:15 PM, Kuniyuki Iwashima wrote:
> The comment for sheaf_capacity says it does not enforce NUMA
> placement, but it's not true since commit 4ec1a08d2031 ("slab:
> allow NUMA restricted allocations to use percpu sheaves").
>
> Let's update the comment.
>
> Signed-off-by: Kuniyuki Iwashima <kuniyu@xxxxxxxxxx>

Hm, the comment is now stale in more than just the NUMA aspect. Since
7.0-rc1, sheaves exist for all (non-debug) caches. We probably don't
need to explain the implementation details there anymore, including the
NUMA aspect. The sheaf_capacity argument can partially override (make
it larger, but not smaller) the automatic sheaf size calculation.
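As a userspace sketch of the sizing rule described above (the names and
the function are illustrative, not the actual kernel implementation):

```c
#include <assert.h>

/*
 * Hypothetical model of the rule "sheaf_capacity can make the
 * capacity larger, but not smaller": the effective capacity is
 * the automatically calculated one, unless the user-requested
 * value exceeds it.
 */
static unsigned int effective_sheaf_capacity(unsigned int auto_capacity,
					     unsigned int requested)
{
	return requested > auto_capacity ? requested : auto_capacity;
}
```

So a requested capacity below the automatic calculation would simply be
ignored rather than honored.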

Would you like to rewrite the comment as per above then?

Thanks,
Vlastimil

> ---
> include/linux/slab.h | 5 ++---
> 1 file changed, 2 insertions(+), 3 deletions(-)
>
> diff --git a/include/linux/slab.h b/include/linux/slab.h
> index 15a60b501b95..7477109eb315 100644
> --- a/include/linux/slab.h
> +++ b/include/linux/slab.h
> @@ -359,9 +359,8 @@ struct kmem_cache_args {
> * may replace it with an empty sheaf, unless it's over capacity. In
> * that case a sheaf is bulk freed to slab pages.
> *
> - * The sheaves do not enforce NUMA placement of objects, so allocations
> - * via kmem_cache_alloc_node() with a node specified other than
> - * NUMA_NO_NODE will bypass them.
> + * The sheaves try to enforce NUMA placement of objects, but the
> + * allocation may fall back to the normal operation.
> *
> * Bulk allocation and free operations also try to use the cpu sheaves
> * and barn, but fallback to using slab pages directly.