Re: [PATCH] mm/page_alloc: Add a bulk page allocator -fix -fix

From: Colin Ian King
Date: Tue Mar 30 2021 - 07:52:13 EST


On 30/03/2021 12:48, Mel Gorman wrote:
> Colin Ian King reported the following problem (slightly edited)
>
> Author: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
> Date: Mon Mar 29 11:12:24 2021 +1100
>
> mm/page_alloc: add a bulk page allocator
>
> ...
>
> Static analysis on linux-next with Coverity has found a potential
> uninitialized variable issue in the function __alloc_pages_bulk
> introduced by the following commit:
>
> ...
>
> Uninitialized scalar variable (UNINIT)
> 15. uninit_use_in_call: Using uninitialized value alloc_flags when
> calling prepare_alloc_pages.
>
> 5056 if (!prepare_alloc_pages(gfp, 0, preferred_nid, nodemask,
> &ac, &alloc_gfp, &alloc_flags))
>
> The problem is that prepare_alloc_pages only updates alloc_flags,
> which must therefore have a valid initial value. The appropriate
> initial value is ALLOC_WMARK_LOW so that the bulk allocator does not
> push a zone below the low watermark without waking kswapd, assuming
> the GFP mask allows kswapd to be woken.
>
> This is a second fix to the mmotm patch
> mm-page_alloc-add-a-bulk-page-allocator.patch. It will cause a mild
> conflict with a later patch due to the renaming of an adjacent variable,
> but the conflict is trivially resolved. I can post a full series with
> the fixes merged if that is preferred.
>
> Signed-off-by: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
> ---
> mm/page_alloc.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 92d55f80c289..dabef0b910c9 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -4990,7 +4990,7 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
> struct list_head *pcp_list;
> struct alloc_context ac;
> gfp_t alloc_gfp;
> - unsigned int alloc_flags;
> + unsigned int alloc_flags = ALLOC_WMARK_LOW;
> int allocated = 0;
>
> if (WARN_ON_ONCE(nr_pages <= 0))
>

Thanks Mel, that definitely fixes the issue.
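
For anyone skimming the thread, here is a minimal user-space sketch of
the pattern Coverity flagged. The helper name prepare_flags and the flag
values below are made up for illustration; only the "OR into
caller-supplied storage" behaviour mirrors what prepare_alloc_pages does.
Because the helper merges bits into alloc_flags rather than assigning it,
the caller has to provide a defined starting value, which is what the
ALLOC_WMARK_LOW initialiser supplies:

#include <stdio.h>

/* Stand-ins for the kernel's allocation flags; values are arbitrary here. */
#define ALLOC_WMARK_LOW		0x02u
#define ALLOC_NOFRAGMENT	0x100u

/*
 * Models the relevant behaviour of prepare_alloc_pages(): it never
 * assigns alloc_flags from scratch, it only ORs extra bits into
 * whatever value the caller passed in.
 */
static int prepare_flags(unsigned int *alloc_flags)
{
	*alloc_flags |= ALLOC_NOFRAGMENT;	/* merge, not assign */
	return 1;
}

int main(void)
{
	/*
	 * Without this initialiser the OR above would operate on an
	 * indeterminate value, which is exactly what the UNINIT report
	 * is about.
	 */
	unsigned int alloc_flags = ALLOC_WMARK_LOW;

	if (prepare_flags(&alloc_flags))
		printf("alloc_flags = %#x\n", alloc_flags);
	return 0;
}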

Reviewed-by: Colin Ian King <colin.king@xxxxxxxxxxxxx>