Re: [PATCH v2 4/9] mm, hwpoison, hugetlb: support saving mechanism of raw error pages
From: HORIGUCHI NAOYA(堀口 直也)
Date: Tue Jun 28 2022 - 04:21:06 EST
On Tue, Jun 28, 2022 at 02:26:47PM +0800, Muchun Song wrote:
> On Tue, Jun 28, 2022 at 02:41:22AM +0000, HORIGUCHI NAOYA(堀口 直也) wrote:
> > On Mon, Jun 27, 2022 at 05:26:01PM +0800, Muchun Song wrote:
> > > On Fri, Jun 24, 2022 at 08:51:48AM +0900, Naoya Horiguchi wrote:
> > > > From: Naoya Horiguchi <naoya.horiguchi@xxxxxxx>
...
> > > > +	} else {
> > > > +		/*
> > > > +		 * Failed to save raw error info. We no longer trace all
> > > > +		 * hwpoisoned subpages, and we must refuse to free/dissolve
> > > > +		 * this hwpoisoned hugepage.
> > > > +		 */
> > > > +		set_raw_hwp_unreliable(hpage);
> > > > +		return ret;
> > > > +	}
> > > > +	return ret;
> > > > +}
> > > > +
> > > > +inline int hugetlb_clear_page_hwpoison(struct page *hpage)
> > > > +{
> > > > +	struct llist_head *head;
> > > > +	struct llist_node *t, *tnode;
> > > > +
> > > > +	if (raw_hwp_unreliable(hpage))
> > > > +		return -EBUSY;
> > >
> > > IIUC, we use the head page's PageHWPoison to synchronize hugetlb_clear_page_hwpoison()
> > > and hugetlb_set_page_hwpoison(), right? If so, who can set hwp_unreliable here?
> >
> > Sorry if I'm missing your point, but raw_hwp_unreliable is set when
> > allocating a raw_hwp_page fails. hugetlb_set_page_hwpoison() can be called
>
> Sorry, I missed this. Thanks for your clarification.
>
> > multiple times on a hugepage, and if one of the calls fails, the hwpoisoned
> > hugepage becomes unreliable.
> >
> > BTW, as you pointed out above, if we switch to passing GFP_ATOMIC to kmalloc(),
> > kmalloc() never fails, so we no longer have to implement this unreliable
>
> No. kmalloc() with GFP_ATOMIC can fail unless I miss something important.
OK, I interpreted the comment about GFP_ATOMIC wrongly:

  * %GFP_ATOMIC users can not sleep and need the allocation to succeed. A lower
  * watermark is applied to allow access to "atomic reserves".
> > flag, so things get simpler.
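
So the failure path has to stay even with GFP_ATOMIC. To make the discussion
concrete, here is a rough sketch of the allocation site as I think of it
(paraphrased and simplified, not the exact code in this patch):

	struct raw_hwp_page {
		struct llist_node node;
		struct page *page;	/* the raw hwpoisoned subpage */
	};

	static int hugetlb_set_page_hwpoison(struct page *hpage, struct page *page)
	{
		struct raw_hwp_page *raw_hwp;
		int ret = TestSetPageHWPoison(hpage) ? -EHWPOISON : 0;

		/*
		 * GFP_ATOMIC dips into the atomic reserves, but it can
		 * still return NULL, so the fallback that marks the
		 * hugepage unreliable is still necessary.
		 */
		raw_hwp = kmalloc(sizeof(*raw_hwp), GFP_ATOMIC);
		if (!raw_hwp) {
			set_raw_hwp_unreliable(hpage);
			return ret;
		}
		raw_hwp->page = page;
		llist_add(&raw_hwp->node, raw_hwp_list_head(hpage));
		return ret;
	}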
> >
> > >
> > > > +	ClearPageHWPoison(hpage);
> > > > +	head = raw_hwp_list_head(hpage);
> > > > +	llist_for_each_safe(tnode, t, head->first) {
> > >
> > > Is it possible that a new item is added by hugetlb_set_page_hwpoison() and we do
> > > not traverse it (we have already cleared the page's PageHWPoison)? Then we would
> > > ignore a real hwpoisoned page, right?
> >
> > Maybe you mean a race like the one below. Yes, that's possible.
> >
>
> Sorry, ignore my previous comment, I was thinking about it wrongly.
>
> >   CPU 0                            CPU 1
> >
> >   free_huge_page
> >     lock hugetlb_lock
> >     ClearHPageMigratable
> remove_hugetlb_page()
> // the page is non-HugeTLB now
Oh, I missed that.
> >     unlock hugetlb_lock
> >                                    get_huge_page_for_hwpoison
> >                                      lock hugetlb_lock
> >                                      __get_huge_page_for_hwpoison
>
> // cannot reach here since it is not a HugeTLB page now.
> // So this race is impossible. Then we fall back to normal
> // page handling. Seems there is a new issue here.
> //
> // memory_failure()
> //   try_memory_failure_hugetlb()
> //     if (hugetlb)
> //       goto unlock_mutex;
> //   if (TestSetPageHWPoison(p)) {
> //     // This non-HugeTLB page's vmemmap is still optimized.
>
> Setting COMPOUND_PAGE_DTOR after hugetlb_vmemmap_restore() might fix this
> issue, but we will still encounter the race you mentioned below.
I don't have a clear idea about this now (I haven't tested the
vmemmap-optimized case yet), so I will think more about it. Maybe
memory_failure() needs to detect it, because memory_failure() heavily
depends on the status of struct page.
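
(A completely untested idea of what "detect" could look like, assuming
HPageVmemmapOptimized() is still meaningful for a page that has been removed
from hugetlb but whose vmemmap is not restored yet:

	/*
	 * Hypothetical check in memory_failure(): if the struct pages of
	 * this (former) hugepage are still backed by an optimized vmemmap,
	 * it is not safe to write page flags of the subpages yet, so ask
	 * the caller to retry later.
	 */
	if (HPageVmemmapOptimized(compound_head(p)))
		return -EBUSY;

but I'm not sure the flag can be read safely outside hugetlb_lock at that
point.)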
Thanks,
Naoya Horiguchi
>
> Thanks.
>
> >                                        hugetlb_set_page_hwpoison
> >                                          allocate raw_hwp_page
> >                                          TestSetPageHWPoison
> >     update_and_free_page
> >       __update_and_free_page
> >         if (PageHWPoison)
> >           hugetlb_clear_page_hwpoison
> >             TestClearPageHWPoison
> >             // remove all list items
> >                                          llist_add
> >                                      unlock hugetlb_lock
> >
> >
> > The end result seems not critical (leaking the raced raw_hwp_page?), but
> > we need a fix.
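
Coming back to this race: since __get_huge_page_for_hwpoison() runs under
hugetlb_lock, one possible direction (an untested sketch; adjust_surplus is
just a placeholder for whatever the call site computes) is to tear the list
down before the lock is dropped:

	spin_lock_irq(&hugetlb_lock);
	remove_hugetlb_page(h, page, adjust_surplus);
	/*
	 * Empty the raw_hwp list while still holding hugetlb_lock, so a
	 * concurrent hugetlb_set_page_hwpoison() (which also runs under
	 * the lock) cannot llist_add() a new entry after the traversal
	 * has started.
	 */
	if (PageHWPoison(page))
		hugetlb_clear_page_hwpoison(page);
	spin_unlock_irq(&hugetlb_lock);

I need to check how this interacts with the -EBUSY (raw_hwp_unreliable) case,
though.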