On 2015-08-05 08:31, Minchan Kim wrote:
Hello,
On Tue, Aug 04, 2015 at 03:09:37PM -0700, Andrew Morton wrote:
On Tue, 04 Aug 2015 19:40:08 +0900 Jaewon Kim <jaewon31.kim@xxxxxxxxxxx> wrote:
reclaim_clean_pages_from_list() assumes that shrink_page_list() returns
the number of pages removed from the candidate list. But shrink_page_list()
puts mlocked pages back on the LRU itself, without passing them to the
caller and without counting them in nr_reclaimed. This leaves nr_isolated
permanently elevated.
To fix this, this patch changes shrink_page_list() to pass unevictable
pages back to the caller. The caller will take care of those pages.
..
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1157,7 +1157,7 @@ cull_mlocked:
if (PageSwapCache(page))
try_to_free_swap(page);
unlock_page(page);
- putback_lru_page(page);
+ list_add(&page->lru, &ret_pages);
continue;
activate_locked:
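For context, the caller affected here is reclaim_clean_pages_from_list(). Below is a simplified sketch of its accounting, paraphrased from mm/vmscan.c of roughly that vintage; argument lists and helper names are trimmed and may differ slightly between kernel versions, so treat it as an illustration rather than the exact upstream code:

/*
 * Simplified sketch (not exact upstream code): the caller isolates clean
 * file pages, lets shrink_page_list() reclaim them, and then decrements
 * NR_ISOLATED_FILE only by the returned count.  Any page that
 * shrink_page_list() put back on the LRU by itself (the cull_mlocked case
 * above) is neither on the returned list nor counted, so NR_ISOLATED_FILE
 * stays elevated forever.
 */
unsigned long reclaim_clean_pages_from_list(struct zone *zone,
					    struct list_head *page_list)
{
	struct scan_control sc = {
		.gfp_mask = GFP_KERNEL,
		.priority = DEF_PRIORITY,
		.may_unmap = 1,
	};
	unsigned long ret, dummy1, dummy2, dummy3, dummy4, dummy5;
	struct page *page, *next;
	LIST_HEAD(clean_pages);

	list_for_each_entry_safe(page, next, page_list, lru) {
		if (page_is_file_cache(page) && !PageDirty(page)) {
			ClearPageActive(page);
			list_move(&page->lru, &clean_pages);
		}
	}

	ret = shrink_page_list(&clean_pages, zone, &sc,
			TTU_UNMAP|TTU_IGNORE_ACCESS,
			&dummy1, &dummy2, &dummy3, &dummy4, &dummy5, true);

	/* survivors (with this patch, including mlocked pages) go back */
	list_splice(&clean_pages, page_list);

	/* only ret pages have really left the isolated state */
	mod_zone_page_state(zone, NR_ISOLATED_FILE, -ret);
	return ret;
}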
Is this going to cause a whole bunch of mlocked pages to be migrated
whereas in current kernels they stay where they are?
It fixes two issues.
1. cma_alloc can succeed even when unevictable pages are in the range.
   Strictly speaking, cma_alloc in the current kernel can fail because of unevictable pages.
2. It fixes the leak of the NR_ISOLATED vmstat counter.
   With this fix, too_many_isolated() works as intended. Otherwise, the leaked
   count can cause a hang until the process gets SIGKILL.
So, I think it's stable material.
Acked-by: Minchan Kim <minchan@xxxxxxxxxx>
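To make the hang scenario concrete: direct reclaim throttles itself on the isolated-page counters, so a counter that can only grow eventually blocks every direct reclaimer. A minimal sketch of that throttle, paraphrasing too_many_isolated() and its caller shrink_inactive_list() from mm/vmscan.c (simplified; the GFP-dependent adjustments are omitted and details may differ by version):

/*
 * Simplified sketch: direct reclaimers wait while "too many" pages are
 * isolated.  If NR_ISOLATED_FILE leaks (pages put back on the LRU but
 * never subtracted from the counter), isolated > inactive can become
 * permanently true and every direct reclaimer spins here, escaping only
 * on a fatal signal (SIGKILL).
 */
static int too_many_isolated(struct zone *zone, int file,
			     struct scan_control *sc)
{
	unsigned long inactive, isolated;

	if (current_is_kswapd())
		return 0;

	if (file) {
		inactive = zone_page_state(zone, NR_INACTIVE_FILE);
		isolated = zone_page_state(zone, NR_ISOLATED_FILE);
	} else {
		inactive = zone_page_state(zone, NR_INACTIVE_ANON);
		isolated = zone_page_state(zone, NR_ISOLATED_ANON);
	}

	return isolated > inactive;
}

/* in shrink_inactive_list(): */
	while (unlikely(too_many_isolated(zone, file, sc))) {
		congestion_wait(BLK_RW_ASYNC, HZ/10);

		/* We are about to die and free our memory. Return now. */
		if (fatal_signal_pending(current))
			return SWAP_CLUSTER_MAX;
	}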
Hello
The traditional shrink_inactive_list() path still puts unevictable pages back, as it does through putback_inactive_pages().
However, as Minchan Kim said, cma_alloc will be more successful when unevictable pages are migrated.
In the current kernel, I think cma_alloc already tries to migrate unevictable pages, except for clean page cache.
This patch allows clean page cache to be migrated by cma_alloc as well.
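For reference, the cma_alloc path in question goes through alloc_contig_range() -> __alloc_contig_migrate_range() in mm/page_alloc.c. A rough, heavily abbreviated sketch of the relevant step (names are from that era's code, but retries and error handling are elided and argument details may differ between versions):

/*
 * Rough sketch: clean file pages on cc->migratepages are dropped via
 * reclaim_clean_pages_from_list(); everything left on the list
 * (including unevictable/mlocked pages) is handed to migrate_pages().
 * With this patch, mlocked clean pages returned by shrink_page_list()
 * stay on the list and therefore get migrated too.
 */
static int __alloc_contig_migrate_range(struct compact_control *cc,
					unsigned long start, unsigned long end)
{
	unsigned long nr_reclaimed;
	int ret;

	/* ... isolate_migratepages_range() fills cc->migratepages ... */

	nr_reclaimed = reclaim_clean_pages_from_list(cc->zone,
						     &cc->migratepages);
	cc->nr_migratepages -= nr_reclaimed;

	ret = migrate_pages(&cc->migratepages, alloc_migrate_target,
			    NULL, 0, cc->mode, MR_CMA);

	/* ... retry loop and putback on failure elided ... */
	return ret;
}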
Thank you