[PATCH v6 14/16] mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap

From: Muchun Song
Date: Tue Nov 24 2020 - 04:59:17 EST


Add a kernel parameter hugetlb_free_vmemmap to control the freeing of unused
vmemmap pages associated with each HugeTLB page. The feature is off by default
and must be explicitly enabled on the kernel command line at boot.
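For example, to enable the feature, append the parameter to the kernel
command line; omitting it, or passing "off", leaves the feature disabled:

	hugetlb_free_vmemmap=on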

Signed-off-by: Muchun Song <songmuchun@xxxxxxxxxxxxx>
---
 Documentation/admin-guide/kernel-parameters.txt | 9 +++++++++
 Documentation/admin-guide/mm/hugetlbpage.rst    | 3 +++
 mm/hugetlb_vmemmap.c                            | 19 ++++++++++++++++++-
 3 files changed, 30 insertions(+), 1 deletion(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 5debfe238027..d28c3acde965 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1551,6 +1551,15 @@
Documentation/admin-guide/mm/hugetlbpage.rst.
Format: size[KMG]

+ hugetlb_free_vmemmap=
+ [KNL] When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set,
+ this controls freeing unused vmemmap pages associated
+ with each HugeTLB page.
+ Format: { on | off (default) }
+
+ on: enable the feature
+ off: disable the feature
+
hung_task_panic=
[KNL] Should the hung task detector generate panics.
Format: 0 | 1
diff --git a/Documentation/admin-guide/mm/hugetlbpage.rst b/Documentation/admin-guide/mm/hugetlbpage.rst
index f7b1c7462991..6a8b57f6d3b7 100644
--- a/Documentation/admin-guide/mm/hugetlbpage.rst
+++ b/Documentation/admin-guide/mm/hugetlbpage.rst
@@ -145,6 +145,9 @@ default_hugepagesz

will all result in 256 2M huge pages being allocated. Valid default
huge page size is architecture dependent.
+hugetlb_free_vmemmap
+ When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set, this enables freeing
+ unused vmemmap pages associated with each HugeTLB page.

When multiple huge page sizes are supported, ``/proc/sys/vm/nr_hugepages``
indicates the current number of pre-allocated huge pages of the default size.
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 509ca451e232..b2222f8d1245 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -131,6 +131,22 @@ typedef void (*vmemmap_pte_remap_func_t)(struct page *reuse, pte_t *ptep,
unsigned long start, unsigned long end,
void *priv);

+static bool hugetlb_free_vmemmap_enabled __initdata;
+
+static int __init early_hugetlb_free_vmemmap_param(char *buf)
+{
+ if (!buf)
+ return -EINVAL;
+
+ if (!strcmp(buf, "on"))
+ hugetlb_free_vmemmap_enabled = true;
+ else if (strcmp(buf, "off"))
+ return -EINVAL;
+
+ return 0;
+}
+early_param("hugetlb_free_vmemmap", early_hugetlb_free_vmemmap_param);
+
static inline unsigned int vmemmap_pages_per_hpage(struct hstate *h)
{
return free_vmemmap_pages_per_hpage(h) + RESERVE_VMEMMAP_NR;
@@ -322,7 +338,8 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
unsigned int order = huge_page_order(h);
unsigned int vmemmap_pages;

- if (!is_power_of_2(sizeof(struct page))) {
+ if (!is_power_of_2(sizeof(struct page)) ||
+ !hugetlb_free_vmemmap_enabled) {
pr_info("disable freeing vmemmap pages for %s\n", h->name);
return;
}
--
2.11.0