Re: [PATCH v6 1/2] mm: migration: fix the FOLL_GET failure on following huge page
From: Mike Kravetz
Date: Fri Aug 19 2022 - 13:38:47 EST
On 08/19/22 21:22, Michael Ellerman wrote:
> Mike Kravetz <mike.kravetz@xxxxxxxxxx> writes:
> > On 08/16/22 22:43, Andrew Morton wrote:
> >> On Wed, 17 Aug 2022 03:31:37 +0000 "Wang, Haiyue" <haiyue.wang@xxxxxxxxx> wrote:
> >>
> >> > > > }
> >> > >
> >> > > It would be better to fix this for real at those three client code sites?
> >> >
> >> > Then 5.19 will stay broken for a while, waiting for the final BIG patch?
> >>
> >> If that's the proposal then your [1/2] should have had a cc:stable and
> >> changelog words describing the plan for 6.0.
> >>
> >> But before we do that I'd like to see at least a prototype of the final
> >> fixes to s390 and hugetlb, so we can assess those as preferable for
> >> backporting. I don't think they'll be terribly intrusive or risky?
> >
> > I will start on adding follow_huge_pgd() support, although I may need
> > some help with verification from the powerpc folks, as that is the only
> > architecture which supports hugetlb pages at that level.
> >
> > mpe, any suggestions?
>
> I'm happy to test.
>
> I have a system where I can allocate 1GB huge pages.
>
> I'm not sure how to actually test this path though. I hacked up the
> vm/migration.c test to allocate 1GB hugepages, but I can't see it going
> through follow_huge_pgd() (using ftrace).
I think you need to use 16GB pages to trigger this code path. Anshuman introduced
support for page offline (and migration) at this level in commit 94310cbcaa3c
("mm/madvise: enable (soft|hard) offline of HugeTLB pages at PGD level").
When asked about the use case, he mentioned:
"Yes, its in the context of 16GB pages on POWER8 system where all the
gigantic pages are pre allocated from the platform and passed on to
the kernel through the device tree. We dont allocate these gigantic
pages on runtime."
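
As a rough, untested sketch of the migration.c hack with 16GB pages (not
from the original selftest; the MAP_HUGE_16GB fallback define, the node-1
target, and the single-page mapping are all my assumptions, and it needs a
box with at least two online NUMA nodes and the pre-allocated 16GB pages):

/*
 * Untested sketch: map one PGD-level 16GB hugetlb page and ask
 * move_pages(2) to migrate it, which should exercise the
 * follow_page(FOLL_GET) path under discussion.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef MAP_HUGE_16GB
#define MAP_HUGE_16GB	(34U << 26)	/* log2(16G) << MAP_HUGE_SHIFT */
#endif
#ifndef MPOL_MF_MOVE
#define MPOL_MF_MOVE	(1 << 1)	/* from linux/mempolicy.h */
#endif

int main(void)
{
	unsigned long len = 16UL << 30;
	int node = 1, status = -1;	/* target node 1 is arbitrary */
	void *pages[1];

	pages[0] = mmap(NULL, len, PROT_READ | PROT_WRITE,
			MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB |
			MAP_HUGE_16GB, -1, 0);
	if (pages[0] == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Touch one byte; that faults in the whole 16GB page. */
	*(volatile char *)pages[0] = 1;

	/* Migrate the page to the target node (pid 0 == self). */
	if (syscall(SYS_move_pages, 0, 1UL, pages, &node, &status,
		    MPOL_MF_MOVE) == -1)
		perror("move_pages");
	else
		printf("migration status: %d\n", status);

	munmap(pages[0], len);
	return 0;
}

move_pages() should go through add_page_for_migration() ->
follow_page(FOLL_GET), so an ftrace probe on follow_huge_pgd() ought to
fire if the page really sits at PGD level.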
>
> Maybe I hacked it up badly, I'll have a closer look on Monday. But if
> you have any tips on how to trigger that path let me know :)
>
> cheers
--
Mike Kravetz