Re: [PATCH] Reenable NUMA policy support in the slab allocator

From: Vlastimil Babka
Date: Mon Aug 26 2024 - 15:44:55 EST


On 8/19/24 20:54, Christoph Lameter via B4 Relay wrote:
> From: Christoph Lameter <cl@xxxxxxxxxx>
>
> Revert commit 8014c46ad991 ("slub: use alloc_pages_node() in alloc_slab_page()").
>
> The patch disabled NUMA policy support in the slab allocator. It
> did not consider that alloc_pages() honors memory policies but
> alloc_pages_node() does not.
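
For readers following along, the distinction the commit message relies on
is roughly the following (a simplified sketch of the call-site semantics,
mirroring the two calls touched by the patch, not the page allocator
internals):

	/* Policy-aware: with no explicit node, the allocation consults
	 * the calling task's mempolicy, e.g. MPOL_INTERLEAVE. */
	folio = (struct folio *)alloc_pages(flags, order);

	/* Policy-blind: NUMA_NO_NODE is resolved to the local node and
	 * the task's mempolicy is bypassed entirely. */
	folio = (struct folio *)alloc_pages_node(node, flags, order);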
>
> As a result of this patch, slab memory allocations are no longer spread
> via the interleave policy across all available NUMA nodes on bootup.
> Instead, all slab memory is allocated close to the boot processor. This
> leads to an imbalance of memory accesses on NUMA systems.
>
> Also, applications using MPOL_INTERLEAVE as a memory policy will no
> longer spread slab allocations over all nodes in the interleave set but
> will allocate memory locally. This may also result in unbalanced
> allocations on a single NUMA node.
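
To make the userspace-visible effect concrete: a task that sets an
interleave policy via set_mempolicy(2) would no longer see the slab pages
backing its allocations spread over the interleave set. A minimal example
(the two-node mask is just illustrative; link with -lnuma):

	#include <stdio.h>
	#include <numaif.h>

	int main(void)
	{
		/* interleave across nodes 0 and 1 */
		unsigned long nodemask = (1UL << 0) | (1UL << 1);

		if (set_mempolicy(MPOL_INTERLEAVE, &nodemask,
				  sizeof(nodemask) * 8) < 0)
			perror("set_mempolicy");

		/* subsequent allocations, including kernel-side slab
		 * pages serving this task, should interleave */
		return 0;
	}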
>
> SLUB does not apply memory policies to individual object allocations.
> However, it relies on the page allocator's support for memory policies
> through alloc_pages() to do NUMA memory allocations at the per-folio
> or per-page level. SLUB also applies memory policies when retrieving
> partially allocated slab pages from the partial list.
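
The partial list path mentioned here is get_any_partial(); paraphrased
from mm/slub.c (a sketch with details elided, not the verbatim code), it
starts the zonelist walk from the node chosen by the task's policy:

	zonelist = node_zonelist(mempolicy_slab_node(), pc->flags);
	for_each_zone_zonelist(zone, z, zonelist, highest_zoneidx) {
		struct kmem_cache_node *n = get_node(s, zone_to_nid(zone));

		if (n && n->nr_partial > s->min_partial) {
			/* try to take a partial slab from this node */
		}
	}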
>
> Fixes: 8014c46ad991 ("slub: use alloc_pages_node() in alloc_slab_page()")
> Cc: stable@xxxxxxxxxx

I'm removing this tag as (unlike the stable tree maintainers) I try to
follow the stable tree rules, and this wouldn't qualify under them. Also,
it's a revert of a 6.8 commit, so the 6.6 LTS kernel doesn't care anyway.

> Signed-off-by: Christoph Lameter <cl@xxxxxxxxxx>

Thanks, added to slab/for-next

> ---
>  mm/slub.c | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index c9d8a2497fd6..4dea3c7df5ad 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2318,7 +2318,11 @@ static inline struct slab *alloc_slab_page(gfp_t flags, int node,
>  	struct slab *slab;
>  	unsigned int order = oo_order(oo);
>
> -	folio = (struct folio *)alloc_pages_node(node, flags, order);
> +	if (node == NUMA_NO_NODE)
> +		folio = (struct folio *)alloc_pages(flags, order);
> +	else
> +		folio = (struct folio *)__alloc_pages_node(node, flags, order);
> +
>  	if (!folio)
>  		return NULL;
>
>
> ---
> base-commit: b0da640826ba3b6506b4996a6b23a429235e6923
> change-id: 20240806-numa_policy-5188f44ba0d8
>
> Best regards,