Re: [PATCH] mm/madvise: Don't poison entire HugeTLB page for single page errors
From: Naoya Horiguchi
Date: Fri May 12 2017 - 04:11:22 EST
On Thu, Apr 20, 2017 at 04:36:27PM +0530, Anshuman Khandual wrote:
> Currently soft_offline_page() migrates the entire HugeTLB page, then
> dequeues it from the active list by making it a dangling HugeTLB page
> which of course cannot be used further, and marks the entire HugeTLB
> page as poisoned. This can be a costly waste of memory if the error
> affects only a small section of the entire page.
>
> This changes the behaviour so that only the affected page is marked
> poisoned and then the HugeTLB page is released back to buddy system.
Hi Anshuman,
This is a good catch, and we can solve this issue now because freeing
a hwpoisoned hugepage, which was previously forbidden, is now possible.
I'm also thinking that the same issue for hard/soft-offline on free
hugepages can be solved, so I'll submit a patchset which includes an
updated version of your patch.
Thanks,
Naoya Horiguchi
>
> Signed-off-by: Anshuman Khandual <khandual@xxxxxxxxxxxxxxxxxx>
> ---
> The number of poisoned pages on the system has reduced, as seen in
> dmesg output triggered with 'echo m > /proc/sysrq-trigger' on powerpc.
>
> include/linux/hugetlb.h | 1 +
> mm/hugetlb.c | 2 +-
> mm/memory-failure.c | 9 ++++-----
> 3 files changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index 7a5917d..f6b80a4 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -470,6 +470,7 @@ static inline pgoff_t basepage_index(struct page *page)
> return __basepage_index(page);
> }
>
> +extern int dissolve_free_huge_page(struct page *page);
> extern int dissolve_free_huge_pages(unsigned long start_pfn,
> unsigned long end_pfn);
> static inline bool hugepage_migration_supported(struct hstate *h)
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 1edfdb8..2fb9ba3 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1444,7 +1444,7 @@ static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed,
> * number of free hugepages would be reduced below the number of reserved
> * hugepages.
> */
> -static int dissolve_free_huge_page(struct page *page)
> +int dissolve_free_huge_page(struct page *page)
> {
> int rc = 0;
>
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index 27f7210..1e377fd 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -1597,13 +1597,12 @@ static int soft_offline_huge_page(struct page *page, int flags)
> ret = -EIO;
> } else {
> /* overcommit hugetlb page will be freed to buddy */
> + SetPageHWPoison(page);
> + num_poisoned_pages_inc();
> +
> if (PageHuge(page)) {
> - set_page_hwpoison_huge_page(hpage);
> dequeue_hwpoisoned_huge_page(hpage);
> - num_poisoned_pages_add(1 << compound_order(hpage));
> - } else {
> - SetPageHWPoison(page);
> - num_poisoned_pages_inc();
> + dissolve_free_huge_page(hpage);
> }
> }
> return ret;
> --
> 1.8.5.2
>
>