Re: [Question] Should direct reclaim time be bounded?

From: Mel Gorman
Date: Mon Jul 01 2019 - 04:59:25 EST


On Fri, Jun 28, 2019 at 11:20:42AM -0700, Mike Kravetz wrote:
> On 4/24/19 7:35 AM, Vlastimil Babka wrote:
> > On 4/23/19 6:39 PM, Mike Kravetz wrote:
> >>> That being said, I do not think __GFP_RETRY_MAYFAIL is wrong here. It
> >>> looks like there is something wrong in the reclaim going on.
> >>
> >> Ok, I will start digging into that. Just wanted to make sure before I got
> >> into it too deep.
> >>
> >> BTW - This is very easy to reproduce. Just try to allocate more huge pages
> >> than will fit into memory. I see this 'reclaim taking forever' behavior on
> >> v5.1-rc5-mmotm-2019-04-19-14-53. Looks like it was there in v5.0 as well.
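
(For anyone wanting to reproduce this: the simplest trigger is
presumably an oversized write to the nr_hugepages sysctl, e.g.

	echo 999999 > /proc/sys/vm/nr_hugepages

on a machine with far less memory than that would require. The value is
only an example; any request for more huge pages than can be backed
should hit the same __GFP_RETRY_MAYFAIL reclaim loop Mike describes.)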
> >
> > I'd suspect this in should_continue_reclaim():
> >
> > 	/* Consider stopping depending on scan and reclaim activity */
> > 	if (sc->gfp_mask & __GFP_RETRY_MAYFAIL) {
> > 		/*
> > 		 * For __GFP_RETRY_MAYFAIL allocations, stop reclaiming if the
> > 		 * full LRU list has been scanned and we are still failing
> > 		 * to reclaim pages. This full LRU scan is potentially
> > 		 * expensive but a __GFP_RETRY_MAYFAIL caller really wants to succeed
> > 		 */
> > 		if (!nr_reclaimed && !nr_scanned)
> > 			return false;
> >
> > And that for some reason, nr_scanned never becomes zero. But it's hard
> > to figure out through all the layers of functions :/
>
> I got back to looking into the direct reclaim/compaction stalls when
> trying to allocate huge pages. As previously mentioned, the code is
> looping for a long time in shrink_node(). The routine
> should_continue_reclaim() returns true perhaps more often than it should.
>
> As Vlastimil guessed, my debug code output below shows nr_scanned remains
> non-zero for quite a while. This was on v5.2-rc6.
>

I think it would be reasonable to have should_continue_reclaim() allow an
exit if we are scanning at a higher priority than DEF_PRIORITY - 2,
nr_scanned is less than SWAP_CLUSTER_MAX, and no pages are being
reclaimed.
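
Roughly what I have in mind, as an untested sketch against the v5.2
version of should_continue_reclaim() (the placement and exact cutoffs
are illustrative only, not a proper patch):

	/* Consider stopping depending on scan and reclaim activity */
	if (sc->gfp_mask & __GFP_RETRY_MAYFAIL) {
		/*
		 * sc->priority counts down from DEF_PRIORITY, so a
		 * numerically smaller value means reclaim has already
		 * escalated. Once past DEF_PRIORITY - 2, if a scan
		 * covered fewer than SWAP_CLUSTER_MAX pages and
		 * reclaimed nothing, further looping is unlikely to
		 * make progress, so allow the caller to bail out.
		 */
		if (sc->priority < DEF_PRIORITY - 2 &&
		    nr_scanned < SWAP_CLUSTER_MAX &&
		    !nr_reclaimed)
			return false;

		/*
		 * Otherwise keep the existing behaviour and stop only
		 * once a full LRU scan has reclaimed nothing.
		 */
		if (!nr_reclaimed && !nr_scanned)
			return false;
	}

That should bound the time spent looping in shrink_node() while still
letting a __GFP_RETRY_MAYFAIL caller try hard at the earlier
priorities.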

--
Mel Gorman
SUSE Labs