Re: mm, vmscan: commit makes PAE kernel crash nightly (bisected)
From: Michal Hocko
Date: Tue Jan 17 2017 - 09:55:10 EST
On Tue 17-01-17 14:21:14, Mel Gorman wrote:
> On Tue, Jan 17, 2017 at 02:52:28PM +0100, Michal Hocko wrote:
> > On Mon 16-01-17 11:09:34, Mel Gorman wrote:
> > [...]
> > > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > > index 532a2a750952..46aac487b89a 100644
> > > --- a/mm/vmscan.c
> > > +++ b/mm/vmscan.c
> > > @@ -2684,6 +2684,7 @@ static void shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
> > > continue;
> > >
> > > if (sc->priority != DEF_PRIORITY &&
> > > + !buffer_heads_over_limit &&
> > > !pgdat_reclaimable(zone->zone_pgdat))
> > > continue; /* Let kswapd poll it */
> >
> > I think we should rather remove pgdat_reclaimable here. This sounds like
> > the wrong layer to decide whether we want to reclaim and how much.
> >
>
> I had considered that, but it'd also be important to add the other 32-bit
> patches you have posted to see the impact. Because of the ratio of LRU pages
> to slab pages, it may not have an impact, but that possibility would need to
> be ruled out.
OK, Trevor, you can pull from the
git://git.kernel.org/pub/scm/linux/kernel/git/mhocko/mm.git tree,
fixes/highmem-node-fixes branch. It contains the current mmotm tree +
the latest highmem fixes. I do not expect this to help much in your case
either, but as Mel said, we should rule that out at least.
> Right now, I don't have one either, other than a heavy-handed approach of
> checking if a) it's a pgdat with a highmem node
I do not think this is the right approach, because we have a similar
problem even without highmem. I have already seen cases where slab memory
has eaten the whole DMA32 zone.
> b) the ratio of LRU pages to slab
> pages on the lower zones is out of whack and, if so, ignoring nr_scanned for
> the slab shrinker.
This sounds much more promising.
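Something along these lines is what I would imagine - a very rough sketch
only, the helper name and the 2:1 threshold are completely made up:

/*
 * Rough sketch: does reclaimable slab dominate the LRU on the lower
 * (non-highmem) zones of this node? The 2:1 threshold is arbitrary.
 */
static bool lower_zones_slab_heavy(struct pglist_data *pgdat)
{
	unsigned long lru = 0, slab = 0;
	int zid;

	for (zid = 0; zid <= ZONE_NORMAL; zid++) {
		struct zone *zone = &pgdat->node_zones[zid];

		if (!populated_zone(zone))
			continue;

		lru += zone_page_state(zone, NR_ZONE_INACTIVE_FILE) +
		       zone_page_state(zone, NR_ZONE_ACTIVE_FILE) +
		       zone_page_state(zone, NR_ZONE_INACTIVE_ANON) +
		       zone_page_state(zone, NR_ZONE_ACTIVE_ANON);
		slab += zone_page_state(zone, NR_SLAB_RECLAIMABLE);
	}

	return slab > lru * 2;
}

shrink_node() could then feed a larger pressure value into shrink_slab()
when this returns true, instead of the plain sc->nr_scanned delta, or
whatever form "ignore nr_scanned" ends up taking.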
> Before prototyping such a thing, I'd like to hear the outcome of this
> heavy hack and then add your 32-bit patches onto the list. If the problem
> is still there then I'd next look at taking slab pages into account in
> pgdat_reclaimable() instead of an outright removal that has a much wider
> impact. If that doesn't work then I'll prototype a heavy-handed forced
> slab reclaim when lower zones are almost all slab pages.
I would be really curious to hear whether removing pgdat_reclaimable
causes any bad side effects. It just smells wrong from a high-level point
of view. Besides that, I really _hate_ pgdat_reclaimable for any decision
making. It just behaves very randomly... I do not expect it to help much in
this case, though, as highmem can easily bias the decision.
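For reference, the heuristic is currently (quoting from memory, so please
double check the actual tree):

/* roughly: the node counts as reclaimable until we have scanned
 * 6 times its reclaimable pages without making progress */
bool pgdat_reclaimable(struct pglist_data *pgdat)
{
	return node_page_state_snapshot(pgdat, NR_PAGES_SCANNED) <
		pgdat_reclaimable_pages(pgdat) * 6;
}

and pgdat_reclaimable_pages() only counts LRU (and isolated) pages, so a
node where most of the lowmem is slab can trip that threshold rather
quickly.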
--
Michal Hocko
SUSE Labs