Re: [PATCH] mm,page_alloc,cma: conditionally prefer cma pageblocks for movable allocations

From: Vlastimil Babka
Date: Wed Mar 11 2020 - 19:04:07 EST


On 3/11/20 11:58 PM, Roman Gushchin wrote:
>>
>> I agree it should be in the noise. But please do put it behind a
>> CONFIG_CMA #ifdef. My x86_64 desktop distro kernel doesn't have
>> CONFIG_CMA. Even if this is effectively a no-op, with
>> __rmqueue_cma_fallback() returning NULL immediately, I think the compiler
>> cannot eliminate the two zone_page_state() calls: they are
>> atomic_long_read()s, which here ultimately boil down to READ_ONCE(), i.e.
>> a volatile cast, and AFAIK that makes elimination impossible. Other
>> architectures might be even more involved.
>
> I agree.
>
> Andrew,
> can you, please, squash the following diff into the patch?

Thanks,

then please add to the result

Acked-by: Vlastimil Babka <vbabka@xxxxxxx>
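
To spell out the elimination argument above: zone_page_state() does an
atomic_long_read(), which on most architectures is ultimately a READ_ONCE(),
i.e. a load through a volatile-qualified pointer. A volatile access is a side
effect the compiler must preserve, so it cannot delete the two reads even when
the branch consuming them is dead. A minimal userspace sketch of the same
effect (the names are simplified stand-ins, not the kernel's actual
definitions):

#include <stdio.h>

/* simplified stand-in for the kernel's READ_ONCE(): a volatile load */
#define READ_ONCE(x)	(*(const volatile typeof(x) *)&(x))

static long nr_free_cma_pages;
static long nr_free_pages;

/* stand-in for zone_page_state(); hypothetical, for illustration only */
static long page_state(long *item)
{
	return READ_ONCE(*item);
}

int main(void)
{
	/*
	 * Both loads below are volatile accesses, so the compiler must
	 * emit them even if it could prove the branch is never taken;
	 * that's why compiling the check out requires an #ifdef.
	 */
	if (page_state(&nr_free_cma_pages) > page_state(&nr_free_pages) / 2)
		puts("prefer CMA");
	return 0;
}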

> Thank you!
>
> --
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 7d9067b75dcb..bc65931b3901 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2767,6 +2767,7 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype,
>  {
>  	struct page *page;
> 
> +#ifdef CONFIG_CMA
>  	/*
>  	 * Balance movable allocations between regular and CMA areas by
>  	 * allocating from CMA when over half of the zone's free memory
> @@ -2779,6 +2780,7 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype,
>  		if (page)
>  			return page;
>  	}
> +#endif
>  retry:
>  	page = __rmqueue_smallest(zone, order, migratetype);
>  	if (unlikely(!page)) {
>
>
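
For reference, with the diff squashed the top of __rmqueue() should read
roughly as below. The function signature and the MIGRATE_MOVABLE condition
fall in the elided context between the two hunks, so they are reconstructed
from the patch under discussion; treat this as a sketch, not the
authoritative result:

/*
 * Sketch of __rmqueue() with the squash applied; the signature and the
 * MIGRATE_MOVABLE condition are reconstructed, not taken verbatim from
 * the quoted hunks.
 */
static __always_inline struct page *
__rmqueue(struct zone *zone, unsigned int order, int migratetype,
	  unsigned int alloc_flags)
{
	struct page *page;

#ifdef CONFIG_CMA
	/*
	 * Balance movable allocations between regular and CMA areas by
	 * allocating from CMA when over half of the zone's free memory
	 * is in the CMA area.
	 */
	if (migratetype == MIGRATE_MOVABLE &&
	    zone_page_state(zone, NR_FREE_CMA_PAGES) >
	    zone_page_state(zone, NR_FREE_PAGES) / 2) {
		page = __rmqueue_cma_fallback(zone, order);
		if (page)
			return page;
	}
#endif
retry:
	page = __rmqueue_smallest(zone, order, migratetype);
	if (unlikely(!page)) {
		...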