[PATCH v1 06/10] mm: remove SWAP_AGAIN in ttu
From: Minchan Kim
Date: Sun Mar 12 2017 - 20:38:10 EST
In 2002, [1] introduced SWAP_AGAIN.
At that time, try_to_unmap_one() used spin_trylock(&mm->page_table_lock),
so it was easy to contend on the lock and fail to take it, and returning
SWAP_AGAIN to preserve the page's LRU status made sense.
However, we have since moved to a mutex-based lock and can block instead
of skipping the pte, so only a tiny window remains in which SWAP_AGAIN
could be returned. Remove SWAP_AGAIN and just return SWAP_FAIL.
[1] c48c43e, minimal rmap
Signed-off-by: Minchan Kim <minchan@xxxxxxxxxx>
---
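Note for reviewers (not part of the commit message): a rough, hypothetical
sketch of the 2002-era shape described above. The trylock failure path is
what SWAP_AGAIN existed to report; with today's blocking lock there is no
comparable bail-out, so the return value can shrink to SWAP_SUCCESS/SWAP_FAIL.

	/* hypothetical sketch of the old trylock-based unmap, not current code */
	static int old_try_to_unmap_one(struct page *page, struct vm_area_struct *vma)
	{
		struct mm_struct *mm = vma->vm_mm;

		if (!spin_trylock(&mm->page_table_lock))
			return SWAP_AGAIN;	/* contended: keep page on LRU, retry later */

		/* ... clear the pte and tear down the rmap entry ... */

		spin_unlock(&mm->page_table_lock);
		return SWAP_SUCCESS;
	}
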
 mm/rmap.c   | 11 +++--------
 mm/vmscan.c |  2 --
 2 files changed, 3 insertions(+), 10 deletions(-)
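Also not part of the patch: a condensed, hypothetical view of the only
caller-visible contract that remains after this change (the real hunk in
shrink_page_list() is further below):

	switch (try_to_unmap(page, flags)) {
	case SWAP_FAIL:
		/* some mapping survived: activate the page and move on */
		goto activate_locked;
	case SWAP_SUCCESS:
		/* every pte is gone: try to free the page below */
		break;
	}
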
diff --git a/mm/rmap.c b/mm/rmap.c
index 38e8ab1..2a5d854 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1499,13 +1499,10 @@ static int page_mapcount_is_zero(struct page *page)
  * Return values are:
  *
  * SWAP_SUCCESS - we succeeded in removing all mappings
- * SWAP_AGAIN - we missed a mapping, try again later
  * SWAP_FAIL - the page is unswappable
  */
 int try_to_unmap(struct page *page, enum ttu_flags flags)
 {
-	int ret;
-
 	struct rmap_walk_control rwc = {
 		.rmap_one = try_to_unmap_one,
 		.arg = (void *)flags,
@@ -1525,13 +1522,11 @@ int try_to_unmap(struct page *page, enum ttu_flags flags)
 		rwc.invalid_vma = invalid_migration_vma;
 
 	if (flags & TTU_RMAP_LOCKED)
-		ret = rmap_walk_locked(page, &rwc);
+		rmap_walk_locked(page, &rwc);
 	else
-		ret = rmap_walk(page, &rwc);
+		rmap_walk(page, &rwc);
 
-	if (!page_mapcount(page))
-		ret = SWAP_SUCCESS;
-	return ret;
+	return !page_mapcount(page) ? SWAP_SUCCESS : SWAP_FAIL;
 }
 
 static int page_not_mapped(struct page *page)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 2a208f0..7727fbe 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1145,8 +1145,6 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 			case SWAP_FAIL:
 				nr_unmap_fail++;
 				goto activate_locked;
-			case SWAP_AGAIN:
-				goto keep_locked;
 			case SWAP_SUCCESS:
 				; /* try to free the page below */
 			}
--
2.7.4