Re: [PATCH 1/2] mm/zsmalloc: fix class per-fullness zspage counts

From: Andrew Morton
Date: Thu Jun 27 2024 - 16:33:49 EST


On Thu, 27 Jun 2024 15:59:58 +0800 Chengming Zhou <chengming.zhou@xxxxxxxxx> wrote:

> We always use insert_zspage() and remove_zspage() to update a zspage's
> fullness location, and both keep the per-fullness accounting correct.
>
> But this special async free path uses "splice" instead of remove_zspage(),
> so the per-fullness zspage count for ZS_INUSE_RATIO_0 is never decreased.
>
> Fix it by decreasing the count while iterating over the zspage free list.
>
> ...
>
> Signed-off-by: Chengming Zhou <chengming.zhou@xxxxxxxxx>
> +++ b/mm/zsmalloc.c
> @@ -1883,6 +1883,7 @@ static void async_free_zspage(struct work_struct *work)
>
> class = zspage_class(pool, zspage);
> spin_lock(&class->lock);
> + class_stat_dec(class, ZS_INUSE_RATIO_0, 1);
> __free_zspage(pool, class, zspage);
> spin_unlock(&class->lock);
> }

What are the runtime effects of this bug? Should we backport the fix
into earlier kernels? And are we able to identify the appropriate
Fixes: target?

Thanks.