[patch for-4.7] mm, compaction: prevent VM_BUG_ON when terminating freeing scanner fix

From: David Rientjes
Date: Mon Jul 11 2016 - 15:50:54 EST


On Wed, 6 Jul 2016, Joonsoo Kim wrote:

> > diff --git a/mm/compaction.c b/mm/compaction.c
> > --- a/mm/compaction.c
> > +++ b/mm/compaction.c
> > @@ -1009,8 +1009,6 @@ static void isolate_freepages(struct compact_control *cc)
> >  				block_end_pfn = block_start_pfn,
> >  				block_start_pfn -= pageblock_nr_pages,
> >  				isolate_start_pfn = block_start_pfn) {
> > -		unsigned long isolated;
> > -
> >  		/*
> >  		 * This can iterate a massively long zone without finding any
> >  		 * suitable migration targets, so periodically check if we need
> > @@ -1034,36 +1032,31 @@ static void isolate_freepages(struct compact_control *cc)
> >  			continue;
> >
> >  		/* Found a block suitable for isolating free pages from. */
> > -		isolated = isolate_freepages_block(cc, &isolate_start_pfn,
> > -					block_end_pfn, freelist, false);
> > -		/* If isolation failed early, do not continue needlessly */
> > -		if (!isolated && isolate_start_pfn < block_end_pfn &&
> > -		    cc->nr_migratepages > cc->nr_freepages)
> > -			break;
> > +		isolate_freepages_block(cc, &isolate_start_pfn, block_end_pfn,
> > +					freelist, false);
> >
> >  		/*
> > -		 * If we isolated enough freepages, or aborted due to async
> > -		 * compaction being contended, terminate the loop.
> > -		 * Remember where the free scanner should restart next time,
> > -		 * which is where isolate_freepages_block() left off.
> > -		 * But if it scanned the whole pageblock, isolate_start_pfn
> > -		 * now points at block_end_pfn, which is the start of the next
> > -		 * pageblock.
> > -		 * In that case we will however want to restart at the start
> > -		 * of the previous pageblock.
> > +		 * If we isolated enough freepages, or aborted due to lock
> > +		 * contention, terminate.
> >  		 */
> >  		if ((cc->nr_freepages >= cc->nr_migratepages)
> >  							|| cc->contended) {
> > -			if (isolate_start_pfn >= block_end_pfn)
> > +			if (isolate_start_pfn >= block_end_pfn) {
> > +				/*
> > +				 * Restart at previous pageblock if more
> > +				 * freepages can be isolated next time.
> > +				 */
> >  				isolate_start_pfn =
> > 					block_start_pfn - pageblock_nr_pages;
> > +			}
> >  			break;
> > -		} else {
> > +		} else if (isolate_start_pfn < block_end_pfn) {
> >  			/*
> > -			 * isolate_freepages_block() should not terminate
> > -			 * prematurely unless contended, or isolated enough
> > +			 * If isolation failed early, do not continue
> > +			 * needlessly.
> >  			 */
> > -			VM_BUG_ON(isolate_start_pfn < block_end_pfn);
> > +			isolate_start_pfn = block_start_pfn;
> > +			break;
>
> I don't think this line is correct. It would make cc->free_pfn go
> backward, though that would not be a big problem. Just leaving
> isolate_start_pfn as isolate_freepages_block() returns it would be the
> proper solution here.
>

I guess, but I don't see what value there is in restarting free page
isolation in the middle of a pageblock.
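
For reference, the reason the reset moves cc->free_pfn backward is the
bookkeeping at the tail of isolate_freepages(): whatever value
isolate_start_pfn holds when the loop exits becomes the free scanner's
restart point. A rough sketch of that tail (from memory, not a verbatim
quote of the 4.7 source):

	/*
	 * Record where the free scanner will restart next time; resetting
	 * isolate_start_pfn to block_start_pfn above would rewind this
	 * over pages the scanner already visited in the pageblock.
	 */
	cc->free_pfn = isolate_start_pfn;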

----->o-----

mm, compaction: prevent VM_BUG_ON when terminating freeing scanner fix

Per Joonsoo.

Isolation can fail partway through a pageblock, seemingly an artifact of
__isolate_free_page() doing per-zone watermark checks (?). When that
happens, leave isolate_start_pfn where isolate_freepages_block() finished
rather than resetting it to the start of the pageblock, so we don't
rescan pages that could not be successfully isolated last time.
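
With this fixup folded in, the terminating branch ends up as below
(reconstructed from the hunks in this thread, so read it as a sketch
rather than a verbatim quote of the resulting mm/compaction.c):

		} else if (isolate_start_pfn < block_end_pfn) {
			/*
			 * If isolation failed early, do not continue
			 * needlessly.
			 */
			break;
		}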

Signed-off-by: David Rientjes <rientjes@xxxxxxxxxx>
---
 mm/compaction.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index e4f89da..45eaa2a 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1114,7 +1114,6 @@ static void isolate_freepages(struct compact_control *cc)
 			 * If isolation failed early, do not continue
 			 * needlessly.
 			 */
-			isolate_start_pfn = block_start_pfn;
 			break;
 		}
 	}