Re: [PATCH v2 6/7] mm: list_lru: introduce memcg_list_lru_alloc_folio()

From: David Hildenbrand (Arm)

Date: Tue Mar 17 2026 - 06:10:12 EST


On 3/12/26 21:51, Johannes Weiner wrote:
> memcg_list_lru_alloc() is called every time an object that may end up
> on the list_lru is created. It needs to quickly check if the list_lru
> heads for the memcg already exist, and allocate them when they don't.
>
> Doing this with folio objects is tricky: folio_memcg() is not stable
> and requires either RCU protection or pinning the cgroup. But it's
> desirable to make the existence check lightweight under RCU, and only
> pin the memcg when we need to allocate list_lru heads and may block.
>
> In preparation for switching the THP shrinker to list_lru, add a
> helper function for allocating list_lru heads coming from a folio.
>
> Signed-off-by: Johannes Weiner <hannes@xxxxxxxxxxx>
> ---
>  include/linux/list_lru.h | 12 ++++++++++++
>  mm/list_lru.c            | 39 ++++++++++++++++++++++++++++++++++-----
>  2 files changed, 46 insertions(+), 5 deletions(-)
>
> diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h
> index 4afc02deb44d..df6bd3c64b06 100644
> --- a/include/linux/list_lru.h
> +++ b/include/linux/list_lru.h
> @@ -81,6 +81,18 @@ static inline int list_lru_init_memcg_key(struct list_lru *lru, struct shrinker
>
>  int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru,
>  			 gfp_t gfp);
> +
> +#ifdef CONFIG_MEMCG
> +int memcg_list_lru_alloc_folio(struct folio *folio, struct list_lru *lru,
> +			       gfp_t gfp);
> +#else
> +static inline int memcg_list_lru_alloc_folio(struct folio *folio,
> +					     struct list_lru *lru, gfp_t gfp)
> +{
> +	return 0;
> +}
> +#endif
> +
>  void memcg_reparent_list_lrus(struct mem_cgroup *memcg, struct mem_cgroup *parent);
>
> /**
> diff --git a/mm/list_lru.c b/mm/list_lru.c
> index 779cb26cec84..562b2b1f8c41 100644
> --- a/mm/list_lru.c
> +++ b/mm/list_lru.c
> @@ -534,17 +534,14 @@ static inline bool memcg_list_lru_allocated(struct mem_cgroup *memcg,
>  	return idx < 0 || xa_load(&lru->xa, idx);
>  }
>
> -int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru,
> -			 gfp_t gfp)
> +static int __memcg_list_lru_alloc(struct mem_cgroup *memcg,
> +				  struct list_lru *lru, gfp_t gfp)
>  {
>  	unsigned long flags;
>  	struct list_lru_memcg *mlru = NULL;
>  	struct mem_cgroup *pos, *parent;
>  	XA_STATE(xas, &lru->xa, 0);
>
> -	if (!list_lru_memcg_aware(lru) || memcg_list_lru_allocated(memcg, lru))
> -		return 0;
> -
>  	gfp &= GFP_RECLAIM_MASK;
>  	/*
>  	 * Because the list_lru can be reparented to the parent cgroup's
> @@ -585,6 +582,38 @@ int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru,
>
>  	return xas_error(&xas);
>  }
> +
> +int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru,
> +			 gfp_t gfp)
> +{
> +	if (!list_lru_memcg_aware(lru) || memcg_list_lru_allocated(memcg, lru))
> +		return 0;
> +	return __memcg_list_lru_alloc(memcg, lru, gfp);
> +}
> +
> +int memcg_list_lru_alloc_folio(struct folio *folio, struct list_lru *lru,
> +			       gfp_t gfp)

The function name reads as if we were allocating a folio ...

folio_memcg_list_lru_alloc() ?

Or memcg_list_lru_alloc_for_folio() ?

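Independent of the name: for anyone following along, the check-then-pin flow
the changelog describes would look roughly like the sketch below. This is
reconstructed from the changelog, not from the patch (the function body is
trimmed in the quote above); the css_tryget()/css_put() pinning and the exact
fast-path checks are my assumptions, not necessarily what the patch does.

```c
/*
 * Sketch only -- pieced together from the changelog description.
 * The real memcg_list_lru_alloc_folio() may differ in detail.
 */
int memcg_list_lru_alloc_folio(struct folio *folio, struct list_lru *lru,
			       gfp_t gfp)
{
	struct mem_cgroup *memcg;
	int ret;

	if (!list_lru_memcg_aware(lru))
		return 0;

	/*
	 * folio_memcg() is not stable without RCU protection or a
	 * reference; do the cheap existence check under RCU.
	 */
	rcu_read_lock();
	memcg = folio_memcg(folio);
	if (!memcg || memcg_list_lru_allocated(memcg, lru)) {
		rcu_read_unlock();
		return 0;
	}
	/* Slow path: pin the memcg, since the allocation may block. */
	if (!css_tryget(&memcg->css)) {
		rcu_read_unlock();
		return 0;
	}
	rcu_read_unlock();

	ret = __memcg_list_lru_alloc(memcg, lru, gfp);
	css_put(&memcg->css);
	return ret;
}
```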

LGTM, with my limited understanding of memcg lifetimes :)

Reviewed-by: David Hildenbrand (Arm) <david@xxxxxxxxxx>

--
Cheers,

David