Re: [PATCH v2] mm, page_alloc: Fix has_unmovable_pages for HugePages
From: Michal Hocko
Date: Thu Dec 20 2018 - 09:39:44 EST
On Thu 20-12-18 15:21:27, Oscar Salvador wrote:
> On Thu, Dec 20, 2018 at 02:41:32PM +0100, Oscar Salvador wrote:
> > On Thu, Dec 20, 2018 at 02:06:06PM +0100, Michal Hocko wrote:
> > > You did want iter += skip_pages - 1 here, right?
> >
> > Bleh, yeah.
> > I am taking vacation today, so my brain left me hours ago, sorry.
> > Should be:
> >
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 4812287e56a0..0634fbdef078 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -8094,7 +8094,7 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
> >  				goto unmovable;
> >
> >  			skip_pages = (1 << compound_order(head)) - (page - head);
> > -			iter = round_up(iter + 1, skip_pages) - 1;
> > +			iter += skip_pages - 1;
> >  			continue;
> >  		}
>
> On second thought, I think it should not really matter.
>
> AFAICS, we can have these scenarios:
>
> 1) the head page is the first page in the pageblock
> 2) first page in the pageblock is not a head but part of a hugepage
> 3) the head is somewhere within the pageblock
>
> For cases 1) and 3), iter will just get the right value and we will
> break out of the loop afterwards.
>
> In case 2), iter will be set to a value large enough to skip over the
> remaining pages of the hugepage.
>
> I am assuming that hugepages are allocated and packed together.
>
> Note that I am not against the change, but I just wanted to see if there is
> something I am missing.
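Just to spell out the arithmetic for your case 1), assuming 2MB hugepages
on x86-64 (order 9, i.e. 512 base pages, which matches the pageblock size)
and the head at offset 0 of the scanned pageblock (the numbers below are
only illustrative, they are not taken from the patch):

	skip_pages = (1 << 9) - 0 = 512
	old: iter = round_up(0 + 1, 512) - 1 = 511
	new: iter = 0 + 512 - 1              = 511

so the subsequent iter++ ends the pageblock scan either way. Cases 2) and
3) come out the same because iter happens to sit at a multiple of
skip_pages whenever the branch is taken (naturally aligned hugepages,
pageblock aligned scan start), which is exactly the condition under which
round_up(iter + 1, skip_pages) - 1 equals iter + skip_pages - 1.
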
Yes, you are missing that this code should be as sane as possible ;) You
are right that we are only processing one page order's worth of pfns and
that the page order is bound to HUGETLB_PAGE_ORDER _right_now_. But
there is absolutely zero reason to hardcode that assumption into a
simple loop, right?
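
From memory (so a rough sketch, not a verbatim copy of mm/page_alloc.c),
the branch ends up looking roughly like this with your change applied, and
it no longer cares about the compound order or about how the scanned range
is aligned:

	pfn = page_to_pfn(page);	/* first pfn of the range being checked */
	for (iter = 0; iter < pageblock_nr_pages; iter++) {
		page = pfn_to_page(pfn + iter);
		...
		if (PageHuge(page)) {
			struct page *head = compound_head(page);
			unsigned int skip_pages;

			if (!hugepage_migration_supported(page_hstate(head)))
				goto unmovable;

			/*
			 * Jump to the last tail page of this compound page;
			 * the loop's iter++ then lands on the first page
			 * after it.  Nothing here assumes that the compound
			 * order matches the pageblock order.
			 */
			skip_pages = (1 << compound_order(head)) - (page - head);
			iter += skip_pages - 1;
			continue;
		}
		...
	}

It also copes with a gigantic page spanning several pageblocks: skip_pages
may then exceed pageblock_nr_pages and the loop condition simply terminates
the scan.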
--
Michal Hocko
SUSE Labs