Re: [PATCH 2/2][v2] powerpc: Make the CMM memory hotplug aware

From: Robert Jennings
Date: Thu Oct 08 2009 - 09:15:26 EST


* Gerald Schaefer (geralds@xxxxxxxxxxxxxxxxxx) wrote:
> Hi,
>
> I am currently working on the s390 port for the cmm + hotplug
> patch, and I'm a little confused about the memory allocation
> policy, see below. Is it correct that the balloon cannot grow
> into ZONE_MOVABLE, while the pages for the balloon page list
> can?
>
> Robert Jennings wrote:
>> @@ -110,6 +125,9 @@ static long cmm_alloc_pages(long nr)
>> cmm_dbg("Begin request for %ld pages\n", nr);
>>
>> while (nr) {
>> + if (atomic_read(&hotplug_active))
>> + break;
>> +
>> addr = __get_free_page(GFP_NOIO | __GFP_NOWARN |
>> __GFP_NORETRY | __GFP_NOMEMALLOC);
>> if (!addr)
>> @@ -119,8 +137,10 @@ static long cmm_alloc_pages(long nr)
>> if (!pa || pa->index >= CMM_NR_PAGES) {
>> /* Need a new page for the page list. */
>> spin_unlock(&cmm_lock);
>> - npa = (struct cmm_page_array *)__get_free_page(GFP_NOIO | __GFP_NOWARN |
>> - __GFP_NORETRY | __GFP_NOMEMALLOC);
>> + npa = (struct cmm_page_array *)__get_free_page(
>> + GFP_NOIO | __GFP_NOWARN |
>> + __GFP_NORETRY | __GFP_NOMEMALLOC |
>> + __GFP_MOVABLE);
>> if (!npa) {
>> pr_info("%s: Can not allocate new page list\n", __func__);
>> free_page(addr);
>
> Why is the __GFP_MOVABLE added here, for the page list alloc, and not
> above for the balloon page alloc?

The pages allocated with __GFP_MOVABLE are used to store the list of
pages loaned by the balloon. The list is referenced through virtual
addresses, so the kernel is free to migrate the underlying physical
pages; the balloon would not notice.

The pages loaned by the balloon are not allocated with __GFP_MOVABLE
because we tell the hypervisor the physical address of each page Linux
loans. Migrating those physical pages would invalidate the loan, so we
do not mark them as movable.

Regards,
Robert Jennings
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/