Re: [PATCH v6 06/11] mm, compaction: more reliably increase direct compaction priority
From: Michal Hocko
Date: Thu Aug 18 2016 - 05:50:25 EST
On Thu 18-08-16 11:44:00, Vlastimil Babka wrote:
> On 08/18/2016 11:10 AM, Michal Hocko wrote:
> > On Wed 10-08-16 11:12:21, Vlastimil Babka wrote:
> > > During the reclaim/compaction loop, compaction priority can be increased by the
> > > should_compact_retry() function, but the current code is not optimal. Priority
> > > is only increased when compaction_failed() is true, which means that compaction
> > > has scanned the whole zone. This may not happen even after multiple attempts
> > > with a lower priority due to parallel activity, so we might needlessly
> > > struggle on the lower priorities and possibly run out of compaction retry
> > > attempts in the process.
> > >
> > > After this patch we are guaranteed at least one attempt at the highest
> > > compaction priority even if we exhaust all retries at the lower priorities.
> >
> > I expect we will tend to do some special handling at the highest
> > priority, so guaranteeing at least one run with that prio seems
> > sensible to me. The only question is whether we really want to
> > enforce the highest priority for costly orders as well. I think we
> > want to reserve the highest (maybe add one more) prio for !costly
> > orders, as those invoke the OOM killer and the failures are quite
> > disruptive.
>
> Costly orders are already ruled out of reaching the highest priority
> unless they are __GFP_REPEAT, so I assumed that __GFP_REPEAT
> allocations really want to succeed and can be allowed to use the
> highest priority.
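
[A rough standalone sketch of the retry/priority escalation described in the
changelog above, together with the costly-order/__GFP_REPEAT gating Vlastimil
mentions. Names, constants and flags here are simplified and hypothetical;
this is not the actual mm/compaction or page allocator code.]

#include <stdbool.h>
#include <stdio.h>

enum compact_priority {			/* lower value == higher priority */
	PRIO_SYNC_FULL = 0,		/* highest priority */
	PRIO_SYNC_LIGHT,
	PRIO_ASYNC,			/* lowest, starting priority */
};

#define MAX_RETRIES	16
#define COSTLY_ORDER	3
#define REPEAT_FLAG	0x1		/* stand-in for __GFP_REPEAT */

/*
 * Retry compaction while retries remain at the current priority; once
 * they are exhausted, escalate the priority and reset the counter, so
 * at least one attempt at the strongest reachable priority happens
 * before giving up - instead of escalating only when the whole zone
 * was scanned (compaction_failed()).
 */
static bool should_retry(int order, unsigned int gfp_flags,
			 enum compact_priority *prio, int *retries)
{
	/* costly orders without __GFP_REPEAT never reach the top priority */
	enum compact_priority min_prio =
		(order > COSTLY_ORDER && !(gfp_flags & REPEAT_FLAG)) ?
			PRIO_SYNC_LIGHT : PRIO_SYNC_FULL;

	if (++*retries <= MAX_RETRIES)
		return true;		/* keep trying at the current prio */

	if (*prio > min_prio) {		/* exhausted: escalate and retry */
		(*prio)--;
		*retries = 0;
		return true;
	}
	return false;			/* already at min_prio: give up */
}

int main(void)
{
	enum compact_priority prio = PRIO_ASYNC;
	int retries = 0, attempts = 0;

	/* order-9 allocation with the repeat flag: may escalate to the top */
	while (should_retry(9, REPEAT_FLAG, &prio, &retries))
		attempts++;

	printf("gave up after %d attempts at priority %d\n", attempts, prio);
	return 0;
}
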
But even when __GFP_REPEAT is set we do not want to be too
aggressive. E.g. it is better for hugetlb page allocations to fail
than to cause excessive reclaim or long term fragmentation issues
which might result from the skipped heuristics. Costly orders are
IMHO simply second class citizens even when they ask to try harder
with __GFP_REPEAT.
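
[In terms of the sketch above, that second-class treatment could be
expressed - again purely illustrative, not a proposed patch - by capping the
reachable priority for costly orders regardless of __GFP_REPEAT:

	/* costly orders never get the top priority, __GFP_REPEAT or not */
	enum compact_priority min_prio =
		(order > COSTLY_ORDER) ? PRIO_SYNC_LIGHT : PRIO_SYNC_FULL;
]
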
--
Michal Hocko
SUSE Labs