Re: [PATCH] memory-hotplug: Fix bad area access on dissolve_free_huge_pages()

From: Rui Teng
Date: Wed Sep 14 2016 - 12:34:12 EST

On 9/14/16 1:32 AM, Dave Hansen wrote:
On 09/13/2016 01:39 AM, Rui Teng wrote:
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 87e11d8..64b5f81 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1442,7 +1442,7 @@ static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed,
static void dissolve_free_huge_page(struct page *page)
- if (PageHuge(page) && !page_count(page)) {
+ if (PageHuge(page) && !page_count(page) && PageHead(page)) {
struct hstate *h = page_hstate(page);
int nid = page_to_nid(page);

This is goofy. What is calling dissolve_free_huge_page() on a tail page?


for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << minimum_order)

So, skip through the area being offlined at the smallest huge page size,
and try to dissolve a huge page in each place one might appear. But,
after we dissolve a 16GB huge page, we continue looking through the
remaining 15.98GB tail area for huge pages in the area we just
dissolved. The tail pages are still PageHuge() (how??), and we call
page_hstate() on the tail page whose head was just dissolved.

Note, even with the fix, this takes a (global) spinlock 1023 more times
than it has to.

This seems inefficient, and it fails to fully explain what is going on:
how do the tail pages still _look_ like PageHuge()? That seems pretty wrong.

I guess the patch _works_. But, sheesh, it leaves a lot of room for
improvement.

Thanks for your suggestion!
How about returning the size of the page freed from dissolve_free_huge_page(), and advancing pfn by that amount?