Re: [PATCH] mm: hugetlb: yield when prepping struct pages

From: Cannon Matthews
Date: Thu Jun 28 2018 - 18:17:04 EST


Thanks for the quick turnaround.

Good to know how the 2M code path differs; I have been trying to trace
through some of this, and it's easy to get lost in which code applies to
which size.

Thanks!
On Thu, Jun 28, 2018 at 12:03 PM Michal Hocko <mhocko@xxxxxxxxxx> wrote:
>
> On Wed 27-06-18 14:44:47, Cannon Matthews wrote:
> > When booting with very large numbers of gigantic (i.e. 1G) pages, the
> > operations in the loop of gather_bootmem_prealloc, and specifically
> > prep_compound_gigantic_page, take a very long time and can cause a
> > softlockup if enough pages are requested at boot.
> >
> > For example, booting with 3844 1G pages requires prepping
> > (set_compound_head, init the count) over 1 billion 4K tail pages, which
> > takes considerable time. This should also apply to reserving the same
> > amount of memory as 2M pages, as the same number of struct pages
> > are affected in either case.
> >
> > Add a cond_resched() to the outer loop in gather_bootmem_prealloc() to
> > prevent this lockup.
> >
> > Tested: Booted with softlockup_panic=1 hugepagesz=1G hugepages=3844 and
> > no softlockup is reported, and the hugepages are reported as
> > successfully set up.
> >
> > Signed-off-by: Cannon Matthews <cannonmatthews@xxxxxxxxxx>
>
> Acked-by: Michal Hocko <mhocko@xxxxxxxx>
>
> Thanks!
>
> > ---
> > mm/hugetlb.c | 1 +
> > 1 file changed, 1 insertion(+)
> >
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index a963f2034dfc..d38273c32d3b 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -2169,6 +2169,7 @@ static void __init gather_bootmem_prealloc(void)
> >  		 */
> >  		if (hstate_is_gigantic(h))
> >  			adjust_managed_page_count(page, 1 << h->order);
> > +		cond_resched();
> >  	}
> >  }
> >
> > --
> > 2.18.0.rc2.346.g013aa6912e-goog
>
> --
> Michal Hocko
> SUSE Labs
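
For context, the loop being patched looks roughly like the sketch below
(a simplified rendering of gather_bootmem_prealloc() from mm/hugetlb.c
around this kernel version, with the WARN_ONs and the gigantic-page
comment trimmed; the quoted diff above is the authoritative change).
With hugepagesz=1G hugepages=3844, prep_compound_huge_page() walks
3844 * 262144, roughly a billion, 4K struct pages over the course of the
loop with no scheduling point, which is what the added cond_resched()
addresses.

static void __init gather_bootmem_prealloc(void)
{
	struct huge_bootmem_page *m;

	list_for_each_entry(m, &huge_boot_pages, list) {
		struct page *page = virt_to_page(m);
		struct hstate *h = m->hstate;

		/*
		 * For a 1G hstate this touches all 262144 constituent 4K
		 * struct pages (set_compound_head, init the count).
		 */
		prep_compound_huge_page(page, h->order);
		prep_new_huge_page(h, page, page_to_nid(page));
		put_page(page);		/* free it into the hugepage allocator */

		if (hstate_is_gigantic(h))
			adjust_managed_page_count(page, 1 << h->order);

		cond_resched();		/* the line added by this patch */
	}
}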