Re: [RFC Patch 3/3] mm/slub: setup maximum per-node partial according to cpu numbers

From: Hyeonggon Yoo
Date: Tue Sep 12 2023 - 09:48:40 EST


On Tue, Sep 5, 2023 at 11:07 PM Feng Tang <feng.tang@xxxxxxxxx> wrote:
>
> Currently most slabs' min_partial is set to 5 (as MIN_PARTIAL
> is 5). This is fine for older or small systems, but can be too
> small for a large system with hundreds of CPUs, where the per-node
> 'list_lock' is contended when allocating from and freeing to the
> per-node partial list.
>
> So enlarge it based on the number of CPUs per node.
>
> Signed-off-by: Feng Tang <feng.tang@xxxxxxxxx>
> ---
> include/linux/nodemask.h | 1 +
> mm/slub.c | 9 +++++++--
> 2 files changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/nodemask.h b/include/linux/nodemask.h
> index 8d07116caaf1..6e22caab186d 100644
> --- a/include/linux/nodemask.h
> +++ b/include/linux/nodemask.h
> @@ -530,6 +530,7 @@ static inline int node_random(const nodemask_t *maskp)
>
> #define num_online_nodes() num_node_state(N_ONLINE)
> #define num_possible_nodes() num_node_state(N_POSSIBLE)
> +#define num_cpu_nodes() num_node_state(N_CPU)
> #define node_online(node) node_state((node), N_ONLINE)
> #define node_possible(node) node_state((node), N_POSSIBLE)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 09ae1ed642b7..984e012d7bbc 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -4533,6 +4533,7 @@ static int calculate_sizes(struct kmem_cache *s)
>
> static int kmem_cache_open(struct kmem_cache *s, slab_flags_t flags)
> {
> + unsigned long min_partial;
> s->flags = kmem_cache_flags(s->size, flags, s->name);
> #ifdef CONFIG_SLAB_FREELIST_HARDENED
> s->random = get_random_long();
> @@ -4564,8 +4565,12 @@ static int kmem_cache_open(struct kmem_cache *s, slab_flags_t flags)
> * The larger the object size is, the more slabs we want on the partial
> * list to avoid pounding the page allocator excessively.
> */
> - s->min_partial = min_t(unsigned long, MAX_PARTIAL, ilog2(s->size) / 2);
> - s->min_partial = max_t(unsigned long, MIN_PARTIAL, s->min_partial);
> +
> + min_partial = rounddown_pow_of_two(num_cpus() / num_cpu_nodes());
> + min_partial = max_t(unsigned long, MIN_PARTIAL, min_partial);
> +
> + s->min_partial = min_t(unsigned long, min_partial * 2, ilog2(s->size) / 2);
> + s->min_partial = max_t(unsigned long, min_partial, s->min_partial);

Hello Feng,

How much memory is consumed by this change on your machine?

I won't argue that it would be huge for large machines, but it
increases the minimum value for every cache (even for those that are
not contended), and there is no way to reclaim this memory.

Maybe a way to reclaim a full slab under memory pressure (on the buddy
side) wouldn't hurt?

> set_cpu_partial(s);
>
> --
> 2.27.0
>