Re: [PATCH v8 4/4] hugetlb: allow to free gigantic pages regardless of the configuration

From: Alexandre Ghiti
Date: Wed Mar 27 2019 - 08:54:45 EST


On 03/27/2019 11:05 AM, Aneesh Kumar K.V wrote:
Alexandre Ghiti <alex@xxxxxxxx> writes:

On 03/27/2019 09:55 AM, Aneesh Kumar K.V wrote:
On 3/27/19 2:14 PM, Alexandre Ghiti wrote:

On 03/27/2019 08:01 AM, Aneesh Kumar K.V wrote:
On 3/27/19 12:06 PM, Alexandre Ghiti wrote:
.....

This is now
#define __HAVE_ARCH_GIGANTIC_PAGE_RUNTIME_SUPPORTED
static inline bool gigantic_page_runtime_supported(void)
{
	if (firmware_has_feature(FW_FEATURE_LPAR) && !radix_enabled())
		return false;

	return true;
}


I am wondering whether it should be

#define __HAVE_ARCH_GIGANTIC_PAGE_RUNTIME_SUPPORTED
static inline bool gigantic_page_runtime_supported(void)
{

	if (!IS_ENABLED(CONFIG_CONTIG_ALLOC))
		return false;
I don't think this test belongs here: CONFIG_CONTIG_ALLOC only gates the runtime
allocation of gigantic pages, so doing that check here would prevent powerpc from
freeing boottime gigantic pages when it is not a guest. Note that this check
is actually done in set_max_huge_pages.
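
For reference, the check in set_max_huge_pages reads roughly like this (a sketch
from memory, not the exact hunk; persistent_huge_pages() is the existing helper
counting non-surplus pages):

	/*
	 * Without CONFIG_CONTIG_ALLOC we cannot allocate gigantic pages
	 * at runtime, so only allow the pool to shrink: boottime gigantic
	 * pages can still be handed back to buddy.
	 */
	if (hstate_is_gigantic(h) && !IS_ENABLED(CONFIG_CONTIG_ALLOC)) {
		if (count > persistent_huge_pages(h)) {
			spin_unlock(&hugetlb_lock);
			return -EINVAL;
		}
	}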


	if (firmware_has_feature(FW_FEATURE_LPAR) && !radix_enabled())
		return false;
Maybe I did not understand this check: my understanding is that, when the system
is virtualized, we do not want it to hand back gigantic pages. Does this check
test whether the system is currently being virtualized?
If yes, I think the patch is correct: it prevents freeing gigantic pages when
the system is virtualized, but allows a 'normal' system to free them.


Ok, I double-checked the patch after applying it to the tree. I got confused by the
removal of that #ifdef. So we now disallow the runtime free by checking
gigantic_page_runtime_supported() in __nr_hugepages_store_common.
And when runtime support is allowed but CONFIG_CONTIG_ALLOC is disabled, we should
still allow freeing the boot-time allocated pages back to buddy.
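
In code form, the behaviour above amounts to roughly this in
__nr_hugepages_store_common (a sketch, details may differ from the actual patch):

	/*
	 * Refuse any change only when the architecture cannot free
	 * gigantic pages at runtime at all (the hash-MMU LPAR case
	 * above); otherwise fall through, and set_max_huge_pages will
	 * reject an increase when CONFIG_CONTIG_ALLOC is disabled while
	 * still letting boottime gigantic pages be freed back to buddy.
	 */
	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported()) {
		err = -EINVAL;
		goto out;
	}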

The patch looks good. You can add for the series

Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@xxxxxxxxxxxxx>

-aneesh


Thanks for your time Aneesh,

Alex